\section{Introduction}
The single-wall carbon nanotube (SWNT) is a quasi-one-dimensional (1D)
material made by rolling up a graphene sheet, whose band structure
possesses two Dirac cones at the $K$ and $K'$ points.
The circumference of the nanotube is represented by the chiral vector,
$
\bm{C}_{\rm h} = n \bm{a}_1 + m \bm{a}_2
$,
on the graphene, where $\bm{a}_1$ and $\bm{a}_2$ are
the primitive lattice vectors and
the pair of integers, $(n, m)$, is called the chirality
\cite{Saito1998}.
The SWNT is metallic (semiconducting) for
$
{\rm mod} (2n + m, 3) = 0
$
$(\neq 0)$,
because the wavevectors, which are discretized in the circumferential
direction, pass (do not pass) through the $K$ or $K'$ points when
they are expressed in the two-dimensional (2D)
Brillouin zone (BZ) of graphene.
Even for metallic SWNTs, a narrow band gap opens due to
the finite curvature of the tube surface
\cite{Hamada1992, Saito1992, Kane1997}.
The curvature enhances the spin-orbit (SO) interaction
through the mixing between $\pi$ and $\sigma$ orbitals, which also
contributes to the band gap
\cite{Ando2000}.
Recently, SWNTs have attracted attention from the viewpoint of topology
\cite{Klinovaja2012, Egger2012, Sau2013, Hsu2015,
Izumida2016, Lin2016, Okuyama2017, Izumida2017,
Marganska2018, Zang2018}.
The neutral SWNT can be regarded as a 1D insulator in the
presence of a band gap and rotational symmetry (see below).
Due to the sublattice (or chiral) symmetry between $A$ and $B$
lattice sites,
the topology of a SWNT is characterized by a $\mathbb{Z}$
topological invariant, the winding number
\cite{Wen1989}.
SWNTs can be 1D topological insulators
in both the absence and presence of a magnetic field,
which belong to classes BDI and AIII
in the periodic table in Ref.\
\cite{Schnyder2009},
respectively.
Izumida {\it et al.} introduced the winding number
for semiconducting SWNTs for the first time
\cite{Izumida2016}.
They also examined the edge states localized around the tube ends
with energy $E = E_{\rm F} = 0$, the number of which
is related to the winding number by the bulk-edge correspondence.
This makes it possible to determine the winding number by observing the
local density of states at the tube ends with
scanning tunneling microscopy, as already done for graphene
\cite{Kobayashi2005}.
The present authors generalized the theory to metallic SWNTs
\cite{Okuyama2017}.
The narrow band gap in metallic SWNTs can be closed by applying
a magnetic field of a few Tesla along the tube axis.
This results in a topological phase transition,
where the winding number
changes discontinuously as a function of the magnetic field.
Independently, Lin {\it et al.} examined the topological nature in
a zigzag SWNT ($n > 0$ and $m = 0$)
by using the Su-Schrieffer-Heeger
model and topological invariant called Zak phase
\cite{Lin2016}.
They theoretically proposed
a possible manipulation of the edge states via
the topological phase transition, although it requires an
unrealistically huge magnetic field
in the case of a semiconducting SWNT.
There also exist theoretical studies on topological phases in
a SWNT proximity coupled to a superconductor
\cite{Klinovaja2012, Egger2012, Sau2013,
Hsu2015, Izumida2017, Marganska2018}.
In the present study,
we topologically classify all possible SWNTs.
The winding number is analytically derived
for all possible chiralities.
We also generalize the bulk-edge correspondence to the cases of
both semiconducting and metallic SWNTs in a magnetic field
along the tube axis,
which determines the number of edge states by the winding number.
Our main results are depicted in Fig.\ \ref{fig:class}:
(a) In the absence of a magnetic field,
the majority of SWNTs are topological insulators
with nonzero winding number.
The exceptions are metallic SWNTs of armchair type ($n = m$) and
semiconducting SWNTs with $n = m + 1$.
(b) In the presence of a magnetic field,
the topological phase transition takes place when the band gap is
closed by applying a magnetic field, for all SWNTs other than
the armchairs. In other words,
the SWNT can be topologically nontrivial
even for $n = m + 1$ when the magnetic field is tuned
appropriately.
Only armchair nanotubes are topologically trivial
regardless of the magnetic field, which is
due to the mirror symmetry with respect to
a plane including the tube axis
\cite{Izumida2016}.
Previously, some groups theoretically predicted a change in
the number of edge states in a SWNT as a function of magnetic field
\cite{Sasaki2005, Sasaki2008, Marganska2011}.
Our theory clearly explains its physical origin in terms of topology.
We noticed a theoretical study by Zang {\it et al.}
\cite{Zang2018}
during the preparation of this paper.
They utilized a similar technique to ours to analyze
the winding number in SWNTs.
They showed that some SWNTs can have the edge states and that
the topological phase transition takes place by applying a
magnetic field.
Their study, however, was applicable only to semiconducting SWNTs,
and they did not derive an analytic expression for the winding number.
This paper is organized as follows.
In Sec.\ \ref{sec:semicond}, we introduce
a 1D lattice model for semiconducting SWNTs in the absence of
a magnetic field,
utilizing the rotational symmetry.
We include a magnetic field along the tube axis in Sec.\ \ref{sec:mag}.
In Sec.\ \ref{sec:analytical}, we analytically evaluate
the winding numbers in the case of semiconducting SWNTs in
both the absence and presence of a magnetic field.
The winding number determines
the number of edge states via the bulk-edge
correspondence, whose proof is given in Appendix \ref{app:EOM}.
In Sec.\ \ref{sec:metal},
we examine the topology in metallic SWNTs with small band gap
induced by the curvature effects.
After a discussion of our theoretical study in Sec.\ \ref{sec:discussion},
our conclusions are given in Sec.\ \ref{sec:conclusions}.
\begin{figure}[t] \begin{center}
\includegraphics{lattice-1st.pdf}
\caption{
(Color online) (a) The mapping of the $(n, m)$-SWNT to
a graphene sheet.
The chiral vector, $\bm{C}_{\rm h} = n \bm{a}_1 + m\bm{a}_2$,
indicates the circumference of the tube with
$\bm{a}_1$ and $\bm{a}_2$ being the primitive lattice vectors of
graphene.
The three vectors
$\bm{\Delta}_j$
$(j = 1, 2, 3)$
connect the nearest-neighbor atoms.
The $d$-fold rotational symmetry around the tube axis corresponds to
the translation by $\bm{C}_{\rm h} / d$ on the graphene sheet, and
the helical symmetry to the translation by
$\bm{R} = p \bm{a}_1 + q \bm{a}_2$, where
$d = \gcd(n, m)$ and the integers $p$ and $q$ are given by
Eq.\ \eqref{eq:pq}.
This figure shows the case of $(n, m) = (6, 3)$ with
$d = 3$, $p = 1$, and $q = 0$.
(b) A 1D lattice model in which $A$ and $B$
lattice sites are aligned in the axial direction.
}
\label{fig:lattice}
\end{center} \end{figure}
\section{1D lattice model for semiconducting nanotube
\label{sec:semicond}}
In this section, we derive a 1D lattice model for
semiconducting SWNTs in the absence of a magnetic field.
Neither the Aharonov-Bohm (AB) effect in a magnetic field
nor the curvature-induced narrow gap in metallic SWNTs is considered here.
Throughout the paper, we consider the $(n, m)$-SWNT,
whose circumference is specified by
the chiral vector
$
\bm{C}_{\rm h} = n \bm{a}_1 + m \bm{a}_2
$
on a graphene sheet,
where $\bm{a}_{1/2} = (\sqrt3/2, \pm 1/2) a$ with the lattice constant
$a = 0.246~{\rm nm}$ [see Fig.\ \ref{fig:lattice}(a)].
Its diameter is given by
$
d_{\rm t} = |\bm{C}_{\rm h}|/ \pi =
a \sqrt{n^2 + nm + m^2} / \pi
$.
The chiral angle $\theta$ is defined as
the angle between $\bm{C}_{\rm h}$ and $\bm{a}_1$:
$
\theta = \tan^{-1} [\sqrt3 m / (2n + m)]
$.
We restrict ourselves to the case of
$
0 \leq m \leq n
$
without loss of generality, which corresponds to
$
0 \leq \theta \leq \pi / 6
$
with $\theta = 0$ and $\pi / 6$ for
zigzag ($m = 0$) and armchair ($m = n$)
nanotubes, respectively.
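As an illustration of these geometric relations, the following minimal
Python sketch (our own helper, not part of the formalism) evaluates the
diameter, chiral angle, and metallicity criterion for a given chirality:
\begin{verbatim}
import math

A = 0.246  # lattice constant a in nm

def tube_geometry(n, m):
    # diameter d_t = a sqrt(n^2 + n m + m^2) / pi
    d_t = A * math.sqrt(n*n + n*m + m*m) / math.pi
    # chiral angle theta = atan[sqrt(3) m / (2n + m)]
    theta = math.atan2(math.sqrt(3) * m, 2*n + m)
    # metallic iff mod(2n + m, 3) = 0
    metallic = (2*n + m) % 3 == 0
    return d_t, theta, metallic

# example: the (6, 3)-SWNT is metallic with d_t ~ 0.62 nm
print(tube_geometry(6, 3))
\end{verbatim}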
\subsection{Derivation}
We start from the tight-binding model for graphene
\cite{Saito1998},
which consists of
$A$ and $B$ sublattices,
as depicted by filled and empty circles,
respectively, in Fig.\ \ref{fig:lattice}(a).
This model involves an isotropic hopping integral $\gamma$ between
the nearest-neighbor atoms.
An $A$ atom is connected to three $B$ atoms by vectors
$\bm{\Delta}_j$
$(j = 1, 2, 3)$ in Fig.\ \ref{fig:lattice}(a).
The Hamiltonian reads
\begin{align}
H = \sum_{\bm{r}_A} \sum_{j = 1}^3 \left(
\gamma c^{\dagger}_{\bm{r}_A}
c_{\bm{r}_A + \bm{\Delta}_j}
+ {\rm H.c.}
\right),
\end{align}
where
$\bm{r}_\sigma$ is the position of a $\sigma$ ($= A$ or $B$)
atom on the graphene sheet, and
$
c_{\bm{r}}
$
is the field operator for a $\pi$ electron at the atom at
position $\bm{r}$.
$
\gamma = - 2 \hbar v_{\rm F} / (\sqrt3 a)
$
with
$
v_{\rm F} = 8.32 \times 10^5 ~ {\rm m/s}
$
being the Fermi velocity in graphene.
The spin index $s$ is omitted since it is irrelevant
in Secs.\ \ref{sec:semicond} and \ref{sec:mag}.
We derive a 1D lattice model of
the SWNT along the lines of Ref.\
\cite{Izumida2016},
where the helical-angular construction
\cite{White1993, Jishi1993}
is utilized.
The $(n, m)$-SWNT has a $d$-fold rotational
symmetry around the tube axis, where
\begin{align}
d = \gcd(n, m)
\label{eq:def_d}
\end{align}
is the greatest common divisor of $n$ and $m$.
The rotation by $2\pi/d$ corresponds
to the translation by
$\bm{C}_{\rm h} / d$ on the graphene sheet.
The SWNT
also has the helical symmetry represented by
the translation by
$
\bm{R} = p \bm{a}_1 + q \bm{a}_2
$
on the graphene sheet,
with integers $p$ and $q$ satisfying
\begin{align}
mp - nq = d.
\label{eq:pq}
\end{align}
This means that the SWNT is invariant under the
translation by
$
a_z = \sqrt3 d a^2 / (2 \pi d_{\rm t})
$
along the tube axis together with the rotation by
$
\theta_z = 2 \pi [(2n + m) p + (n + 2m) q]
/ [2(n^2 + nm + m^2)]
$
around it [see Fig.\ \ref{fig:lattice}(a)].\footnote{
Note that there is an arbitrariness in the choice of
$p$ and $q$ in Eq.\ \eqref{eq:pq}:
an integer multiple of $\bm{C}_{\rm h} / d$ can be added to $\bm{R}$.
$a_z$ is invariant whereas
$\theta_z \rightarrow \theta_z \pm 2 \pi / d$
when $\bm{R} \rightarrow \bm{R} \pm \bm{C}_{\rm h} / d$.
}
Here, $\bm{R}$ and $\bm{C}_{\rm h} / d$ are a new set of
primitive lattice vectors of graphene;
the position of $A$ and $B$ atoms can be expressed as
\begin{align}
\bm{r}_A &= \ell \bm{R} + \nu (\bm{C}_{\rm h} / d),
\label{eq:r_A} \\
\bm{r}_B &= \ell \bm{R} + \nu (\bm{C}_{\rm h} / d)
+ \bm{\Delta}_1,
\label{eq:r_B}
\end{align}
on the graphene sheet with site indices $\ell$ and
$
\nu = 0, 1, 2, \ldots, d - 1
$.
By performing the Fourier transformation for the $\nu$ coordinate,
we obtain the Hamiltonian block diagonalized in
the subspace of orbital angular momentum
$
\mu = 0, 1, 2, \ldots, d - 1
$
as
$
H = \sum_{\mu = 0}^{d - 1} H_{\mu}
$,
\begin{align}
H_{\mu} = \sum_{\ell} \sum_{j = 1}^3 \left( \gamma
{\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
c^{\, \mu \, \dagger}_{A, \ell}
c^{\, \mu}_{B, \ell + \Delta'_j}
+ {\rm H.c.} \right).
\label{eq:H0}
\end{align}
This is a 1D lattice model in which $A$ and $B$
lattice sites are aligned in the axial direction
with the lattice constant $a_z$,
as shown in Fig.\ \ref{fig:lattice}(b).
Here, $c^{\, \mu}_{\sigma, \ell}$ is the field operator of
an electron with angular momentum $\mu$ and
at sublattice $\sigma$ of site index $\ell$.
The hopping to the $j$th nearest-neighbor atom
[vector $\bm{\Delta}_j$ in Fig.\ \ref{fig:lattice}(a)]
gives rise to the hopping to
the sites separated by
$\Delta'_j$
in Fig.\ \ref{fig:lattice}(b) with phase factor
$\Delta''_j$,
where
\begin{align}
\bm{\Delta}_j - \bm{\Delta}_1
= \Delta'_j \bm{R} + \Delta''_j (\bm{C}_{\rm h} / d),
\label{eq:Delta}
\end{align}
explicitly, $\Delta'_1 = \Delta''_1 = 0$,
$\Delta'_2 = n / d$, $\Delta''_2 = -p$,
$\Delta'_3 = - m/d$, and $\Delta''_3 = q$.
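For concreteness, the integers $p$ and $q$ of Eq.\ \eqref{eq:pq} and the
hopping data $(\Delta'_j, \Delta''_j)$ above can be generated by the
following Python sketch (a minimal illustration; the returned $(p, q)$ is
one valid choice, fixed only up to the arbitrariness noted in the
footnote):
\begin{verbatim}
import math

def ext_gcd(a, b):
    # return (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def helical_data(n, m):
    d = math.gcd(n, m)
    _, x, y = ext_gcd(m, n)     # m*x + n*y = d
    p, q = x, -y                # hence m*p - n*q = d
    dp  = (0, n // d, -m // d)  # Delta'_j  for j = 1, 2, 3
    dpp = (0, -p, q)            # Delta''_j for j = 1, 2, 3
    return d, p, q, dp, dpp

# (6, 3): d = 3 with (p, q) = (1, 0)
print(helical_data(6, 3))
\end{verbatim}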
\subsection{Bulk properties}
For the bulk system,
the Fourier transformation of $H_{\mu}$ along
$\ell$ direction yields the two-by-two Hamiltonian,
\begin{align}
H_{\mu} (k) &= \gamma \begin{bmatrix}
0 & f_\mu (k) \\
f^*_\mu (k) & 0
\end{bmatrix},
\label{eq:H0_k}
\end{align}
in the sublattice space for given wave number $k$.
$k$ runs through the 1D BZ,
$
-\pi \leq k a_z < \pi
$,
and
\begin{align}
f_\mu (k) &= \sum_{j = 1}^3
{\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
{\rm e}^{{\rm i} k a_z \Delta'_j}.
\label{eq:f0}
\end{align}
The dispersion relation for subband
$\mu$ is readily obtained as
\begin{align}
E_{\mu} (k) &= \pm |\gamma f_\mu (k)|.
\label{eq:epsilon0}
\end{align}
The system is an insulator for semiconducting SWNTs with
${\rm mod}(2n + m, 3) \neq 0$, that is,
$
f_\mu (k) \neq 0
$
in the whole BZ.
Then, the positive and negative $E_{\mu} (k)$'s form
the conduction and valence bands, respectively.
On the other hand, for metallic SWNTs with
${\rm mod}(2n + m, 3) = 0$, $f_\mu (k)$ becomes
zero at $\mu_+$ and $k_+$ ($\mu_-$ and $k_-$)
that correspond to the $K$ ($K'$) point on the graphene sheet,
as discussed in Sec.\ \ref{sec:metal}.
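These bulk quantities are straightforward to evaluate numerically. A
minimal Python sketch of $f_\mu(k)$, reusing the helper
\texttt{helical\_data} from the previous sketch, reads:
\begin{verbatim}
import cmath, math

def f_mu(n, m, mu, k_az):
    # f_mu(k) of Eq. (f0); k_az = k a_z runs over [-pi, pi)
    d, p, q, dp, dpp = helical_data(n, m)
    return sum(cmath.exp(2j * math.pi * mu * dpp[j] / d
                         + 1j * k_az * dp[j]) for j in range(3))

# dispersion E = +/- |gamma f_mu(k)|, here in units of |gamma|
print(abs(f_mu(6, 4, 1, 0.3)))
\end{verbatim}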
\subsection{Winding number and bulk-edge correspondence}
The bulk Hamiltonian in Eq.\ \eqref{eq:H0_k} anticommutes with
$\sigma_z$ in the sublattice space, which is called sublattice
or chiral symmetry.
Thanks to this symmetry as well as the finite band gap,
we can define the winding number
\cite{Wen1989},
\begin{align}
w_\mu = \int_{-\pi/a_z}^{\pi/a_z} \frac{{\rm d} k}{2 \pi}
\frac{\partial}{\partial k} \arg f_\mu (k)
\equiv \frac{1}{2 \pi} \oint_{\rm BZ} {\rm d} \arg f_\mu (k),
\label{eq:w}
\end{align}
for subband with angular momentum $\mu$ in semiconducting SWNTs
\cite{Izumida2016}.
The winding number is the number of times that
$f_\mu (k)$ in Eq.\ \eqref{eq:f0} winds around
the origin on the complex plane when $k$ runs through the 1D BZ.
Note that $w_\mu$ in Eq.\ \eqref{eq:w} is ill-defined for
metallic SWNTs, where $f_\mu (k)$ vanishes at $\mu_\tau$ and $k_\tau$
($\tau = \pm 1$) and therefore $\arg f_\mu (k)$ cannot be defined there.
We will overcome this problem in Sec.\ \ref{sec:metal}.
The bulk-edge correspondence holds between
the winding number and the number of edge states, $N_{\rm edge}$,
\begin{align}
N_{\rm edge}
= 4 \sum_{\mu = 0}^{d - 1}
|w_\mu|
\label{eq:bulk_edge}
\end{align}
in a long but finite SWNT.
The prefactor of 4 is
ascribable to the spin degeneracy and the two edges at the tube ends.
This relation was analytically shown for semiconducting SWNTs in
the absence of a magnetic field in Ref.\ \cite{Izumida2016}.
We generalize Eq.\ \eqref{eq:bulk_edge}
[and Eq.\ \eqref{eq:bulk_edge_metal}]
for both semiconducting
and metallic SWNTs in
a magnetic field in Appendix \ref{app:EOM}.
Here, we assume that the tube is cut along the broken line in
Fig.\ \ref{fig:lattice}(a),
which results in so-called minimal boundary edges
\cite{Akhmerov2008}.
The case of the other boundaries is discussed in
Sec.\ \ref{sec:discussion}.
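As a numerical cross-check of Eq.\ \eqref{eq:w}, the winding number can
be obtained by accumulating the unwrapped change of $\arg f_\mu(k)$ over
the BZ; a simple sketch building on \texttt{f\_mu} above:
\begin{verbatim}
def winding_number(n, m, mu, steps=20000):
    # total change of arg f_mu(k) over the 1D BZ, divided by 2 pi
    total = 0.0
    prev = cmath.phase(f_mu(n, m, mu, -math.pi))
    for i in range(1, steps + 1):
        cur = cmath.phase(f_mu(n, m, mu, -math.pi + 2*math.pi*i/steps))
        dphi = cur - prev
        if dphi > math.pi:        # unwrap the branch cut of phase()
            dphi -= 2*math.pi
        elif dphi < -math.pi:
            dphi += 2*math.pi
        total += dphi
        prev = cur
    return round(total / (2*math.pi))

# (6, 4)-SWNT: w_0 = 0 and w_1 = 1, as discussed below
print(winding_number(6, 4, 0), winding_number(6, 4, 1))
\end{verbatim}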
\section{1D lattice model with finite magnetic field
\label{sec:mag}}
We extend our theory to include a magnetic field $B$
in the axial direction of the SWNT.
We neglect the spin-Zeeman effect throughout the paper,
which is justified unless the band gap is closed
in a huge magnetic field.\footnote{A large Zeeman effect
could make the conduction band for one spin overlap the valence
band for the other spin, rendering the system metallic.}
Only the AB effect is taken into account as
the Peierls phase in the hopping integral.
We replace
\begin{align}
\gamma {\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
\rightarrow
\gamma {\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
\exp \left( {\rm i} 2 \pi \phi \frac{a_{\rm CC} \cos \Theta_j}
{\pi d_{\rm t}} \right)
\end{align}
in $H_\mu$ in Eq.\ \eqref{eq:H0} and $H_{\mu} (k)$ in Eq.\ \eqref{eq:H0_k}.
Here,
\begin{align}
\phi = \frac{B \, \pi (d_{\rm t}/2)^2}{h/e}
\label{eq:phi}
\end{align}
is the AB phase, i.e., the number of flux quanta penetrating the tube,
$a_{\rm CC} = a / \sqrt3$ is the bond length $|\bm{\Delta}_j|$,
and $\Theta_j$ is the angle between $\bm{\Delta}_j$
and $\bm{C}_{\rm h}$ on the graphene sheet:
$\Theta_j = \theta - (5 \pi / 6) + (2 \pi / 3) j$.
As a result, $f_\mu (k)$ in Eq.\ \eqref{eq:f0} changes to
\begin{align}
f_\mu (k; \phi) &= \sum_{j = 1}^3
{\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
\exp \left( {\rm i} 2 \pi \phi
\frac{a_{\rm CC} \cos \Theta_j} {\pi d_{\rm t}} \right)
{\rm e}^{{\rm i} k a_z \Delta'_j}
\label{eq:f0_phi}
\end{align}
in a magnetic field.
$f_\mu (k; \phi)$ can be zero even for semiconducting SWNTs,
that is, the band gap is closed
at $|\phi| = \phi^*=1/3$ \cite{Ajiki1993}.
When $|\phi| \ne \phi^*$, $w_\mu$ in Eq.\ \eqref{eq:w} can be
defined in terms of $f_\mu (k; \phi)$.
As we will show later, a sudden change in $w_\mu$
takes place at $|\phi| = \phi^*=1/3$, which corresponds to the
topological phase transition.
Note that only the fractional part of $\phi$ is
physically significant:
the shift $\phi \rightarrow \phi + 1$
is compensated by $\mu \rightarrow \mu - 1$ in the
definition of the angular momentum.
Therefore, we can restrict ourselves to $0 \leq \phi < 1$ or
$-1/2 \leq \phi < 1/2$, depending on the situation.
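Numerically, the AB phase enters as a $j$-dependent Peierls factor
multiplying each hopping term. A minimal extension of the previous
sketches (reusing \texttt{tube\_geometry}, \texttt{helical\_data}, and
the constant \texttt{A}):
\begin{verbatim}
def f_mu_phi(n, m, mu, k_az, phi):
    # f_mu(k; phi) of Eq. (f0_phi)
    d, p, q, dp, dpp = helical_data(n, m)
    d_t, theta, _ = tube_geometry(n, m)
    a_cc = A / math.sqrt(3)  # bond length a_CC
    out = 0j
    for j in (1, 2, 3):
        Theta_j = theta - 5*math.pi/6 + 2*math.pi/3 * j
        out += cmath.exp(2j * math.pi * mu * dpp[j-1] / d
                         + 2j * math.pi * phi * a_cc
                              * math.cos(Theta_j) / (math.pi * d_t)
                         + 1j * k_az * dp[j-1])
    return out
\end{verbatim}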
\section{Topological classification of semiconducting nanotube
\label{sec:analytical}}
Now we topologically classify semiconducting SWNTs.
The winding number $w_\mu$
is analytically evaluated as a function of
chirality $(n, m)$
and magnetic field $B$ in the axial direction.
The winding number $w_\mu$ in Eq.\ \eqref{eq:w} can be
interpreted as the number of times that
$f_\mu (k)$ or $f_\mu (k; \phi)$ circulates around
the origin on the complex plane when $k$ runs through the 1D BZ,
$
-\pi \leq k a_z < \pi
$.
\begin{figure}[t] \begin{center}
\includegraphics[width=8cm]{flower-semicond.pdf}
\caption{
(Color online)
$f_\mu (k)$ on the complex plane for
the (6, 4)-SWNT with (a) $\mu = 0$ and (b) $\mu = 1$.
$d = 2$ and we choose $p = q = -1$.
It draws a flower-shaped trajectory centered at $z = 1$,
with petals $j = 1$ to $5$.
The distance from $z = 1$ takes its maximum, $2$, when
the argument measured from $z = 1$ is $\Phi_j$ in
Eq.\ \eqref{eq:Phi_j}, whereas it is $1$ for $\Phi_j \pm \Delta / 2$.
}
\label{fig:flower-semicond}
\end{center} \end{figure}
\subsection{Analysis without magnetic field}
We begin with the case in the absence of a
magnetic field (AB phase $\phi = 0$).
From Eq.\ \eqref{eq:f0}, we obtain
\begin{align}
f_\mu (k) = 1 + 2 &\cos \left[
\frac{n + m}{2d} k a_z - \frac{\pi \mu}{d} (p + q) \right]
\nonumber \\
\times &\exp \left\{ {\rm i} \left[
\frac{n - m}{2d} k a_z + \frac{\pi \mu}{d} (- p + q) \right]
\right\}.
\label{eq:f_over_gamma}
\end{align}
For armchair SWNTs of $n = m$
[$d = n$ in Eq.\ \eqref{eq:def_d} and
$p = 1$ and $q = 0$
in Eq.\ \eqref{eq:pq}],
Eq.\ \eqref{eq:f_over_gamma} describes
a line segment on the complex plane.
For SWNTs other than armchair type,
$
f_\mu (k)
$
draws a ``flower-shaped'' closed loop,
as depicted for the $(6, 4)$-SWNT with $\mu = 0$ and $1$
in Figs.\ \ref{fig:flower-semicond}(a) and \ref{fig:flower-semicond}(b),
respectively.
We can see that the former does not encircle the origin, whereas
the latter does.
This results in $w_{\mu = 0} = 0$ and $w_{\mu = 1} = 1$,
respectively.
In general, the trajectory is centered at $z = 1$, and
$|f_\mu (k) - 1|$ takes the maximum value, $2$, when
\begin{align}
\arg \left[ f_\mu (k) - 1 \right]
= \frac{2 \pi d}{n + m} \left( j - \frac{\mu}{d} \right)
\equiv \Phi_j,
\label{eq:Phi_j}
\end{align}
with $j = 1, 2, \dots, \frac{n + m}{d}$.\footnote{
$|f_\mu (k) - 1| = 2$ when
$
k a_z = \frac{2\pi d}{n + m} \left[
j' + \frac{\mu}{d} (p + q)
\right]
$
with $j' = 1, 2, \ldots, \frac{n + m}{d}$.
Then,
$
\arg \left[ f_\mu (k) - 1 \right]
= j' \pi + \frac{n - m}{2d} k a_z
+ \frac{\pi \mu}{d} (-p + q)
= \frac{2 \pi d}{n + m} \left(
\frac{n}{d} j' - \frac{\mu}{d} \right)
$.
Since $\frac{n + m}{d}$ and $\frac{n}{d}$ are mutually prime,
$j = \frac{n}{d} j'$ can take any integer between
$1$ and $\frac{n + m}{d}$ with
$j' = 1, 2, \ldots, \frac{n + m}{d}$.
This justifies $\Phi_j$ in Eq.\ \eqref{eq:Phi_j}.
In a similar manner, we can show that
the argument is $\Phi_j \pm \Delta / 2$
when $|f_\mu (k) - 1| = 1$.
}
Note that $0 < \Phi_j \le 2 \pi$ for $0 \leq \mu < d$.
$|f_\mu (k) - 1| = 1$ when
$\arg [ f_\mu (k) - 1 ] = \Phi_j \pm \Delta / 2$ with
$
\Delta = \frac{2 \pi}{3} (n - m) / (n + m)
$.
Therefore, the $j$th ``petal'' surrounds the origin when
$
\Phi_j - \Delta / 2 < \pi < \Phi_j + \Delta / 2
$,
that is,
\begin{align}
\frac{n + 2m}{3d} + \frac{\mu}{d} < j
< \frac{2n + m}{3d} + \frac{\mu}{d}.
\label{eq:cond_j}
\end{align}
$w_\mu$ is equal to the number of integers $j$ that satisfy
Eq.\ \eqref{eq:cond_j} for given $(n, m)$ and $\mu$.
We evaluate $w_\mu$ for semiconducting SWNTs
[${\rm mod} (2n + m, 3) \ne 0$] in Table \ref{tab:w},
which are categorized according to
${\rm mod} (\frac{2n + m}{d}, 3) = 1$ or $2$.
The table also includes $w_\mu$ for metallic SWNTs with
${\rm mod} (2n + m, 3) = 0$ when the number of
times that $f_\mu (k)$ passes the origin is neglected.
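Counting the integers $j$ in Eq.\ \eqref{eq:cond_j} is a one-line
computation; the following sketch reproduces the entries of Table
\ref{tab:w}, the strict inequalities automatically discarding petals
that pass through the origin in the metallic case:
\begin{verbatim}
def w_from_petals(n, m, mu):
    # number of integers j satisfying Eq. (cond_j)
    d = math.gcd(n, m)
    lo = (n + 2*m) / (3*d) + mu / d
    hi = (2*n + m) / (3*d) + mu / d
    return sum(1 for j in range(1, (n + m)//d + 1) if lo < j < hi)

# (6, 4): w_0 = 0 and w_1 = 1, as obtained above
print(w_from_petals(6, 4, 0), w_from_petals(6, 4, 1))
\end{verbatim}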
\begin{table}[t]
\caption{
The winding number $w_\mu$ determined by
the number of times that $f_\mu (k)$ winds around the
origin on the complex plane when $k$ runs through the 1D BZ.
We assume $0 \leq m \leq n$, and $d = \gcd(n, m)$.
$\mu$ is an integer (angular momentum) in the absence of
a magnetic field while it is a real number in the presence of
an axial magnetic field (see text in Sec.\ \ref{sec:analytical} B).
For metallic SWNTs, we disregard the number of times that
$f_\mu (k)$ passes the origin.
}
\label{tab:w}
\begin{tabular}{ll}
\hline \hline
Type $\quad$ & $w_\mu$ \\
\hline
\multicolumn{2}{l}{
${\rm mod}\left( \frac{2n + m}{d}, 3 \right) = 1$
(semiconductor or metal-1)} \\
& $ \begin{cases}
\frac{(n - m)/d - 2}{3} &
\left( \frac{d}{3} \leq \mu \leq \frac{2d}{3} \right) \\
\frac{(n - m)/d + 1}{3} &
\left( 0 \leq \mu < \frac{d}{3} ~{\rm or}~
\frac{2d}{3} < \mu < d \right)
\end{cases}$ \\
\multicolumn{2}{l}{
${\rm mod}\left( \frac{2n + m}{d}, 3 \right) = 2$
(semiconductor or metal-1)} \\
& $ \begin{cases}
\frac{(n - m)/d + 2}{3} &
\left( \frac{d}{3} < \mu < \frac{2d}{3} \right) \\
\frac{(n - m)/d - 1}{3} &
\left( 0 \leq \mu \leq \frac{d}{3} ~{\rm or}~
\frac{2d}{3} \leq \mu < d \right)
\end{cases}$ \\
\multicolumn{2}{l}{
${\rm mod}\left( \frac{2n + m}{d}, 3 \right) = 0$ and $n \neq m$
(metal-2 other than armchair)} \\
& $ \begin{cases}
\frac{n - m}{3d} - 1 & (\mu = 0) \\
\frac{n - m}{3d} & (0 < \mu < d)
\end{cases}$ \\
\multicolumn{2}{l}{$n = m$ (metal-2 of armchair type)} \\
& $0 \qquad (0 \leq \mu < d)$ \\
\hline \hline
\end{tabular}
\end{table}
\subsection{Analysis with finite magnetic field}
When the axial magnetic field is present,
the trajectory of
$
f_\mu (k; \phi)
$
is examined to evaluate $w_\mu$.
We obtain $\Phi_j$ in Eq.\ \eqref{eq:Phi_j}
with $\mu$ replaced by $\mu + \phi$,
which means that the trajectory for each $\mu$ is
rotated around $z = 1$ on the complex plane
\cite{Zang2018}.
As we mentioned earlier in Sec.\ \ref{sec:mag},
only the fractional part of $\phi$ is physically
meaningful because
$
\phi \rightarrow \phi' = \phi - \lfloor \phi \rfloor
$
is equivalent to
$
\mu \rightarrow \mu' = \mu + \lfloor \phi \rfloor
$
with $\lfloor x \rfloor$ being the maximum integer not
exceeding $x$.
Thus we can carry out the same analysis as in the previous subsection
with $\mu' = 0, 1, 2, \ldots, d - 1$, $0 \leq \phi' < 1$,
and $0 < \Phi_j \leq 2 \pi$. Then the replacement of
$\mu$ by $\mu' + \phi'$ yields the same result as in Table \ref{tab:w}.
\subsection{Edge states and topological order}
By summing up $w_\mu$ in Eq.\ \eqref{eq:bulk_edge} carefully,
we obtain the number of edge states $N_{\rm edge}$,
as shown in Table \ref{tab:N_edge}.
Here, we assume $-1/2 \leq \phi < 1/2$
($\mu$ should be shifted accordingly).
The semiconducting SWNTs are categorized into type-1 and
type-2 for ${\rm mod}(2n + m, 3) = 1$ or $2$.\footnote{A comment
is given for the classifications in Tables \ref{tab:w} and \ref{tab:N_edge}.
Semiconducting SWNTs belong to type-1 in Table \ref{tab:N_edge} when
${\rm mod}(\frac{2n + m}{d}, 3) = 1$ and ${\rm mod}(d, 3) = 1$,
or ${\rm mod}(\frac{2n + m}{d}, 3) = 2$ and ${\rm mod}(d, 3) = 2$
in Table \ref{tab:w}. They belong to type-2 otherwise.}
The results for $N_{\rm edge}$ indicate that
(i) the semiconducting SWNTs other than those with $n=m+1$
are topologically nontrivial in the absence of a magnetic field
(AB phase $\phi=0$) and (ii) all the semiconducting SWNTs
show the topological phase transition at $|\phi|=\phi^*=1/3$
when the energy gap is closed \cite{Ajiki1993}.
Note that $|\phi|=1/3$ corresponds to a magnetic field
of more than $100~{\rm T}$ for a tube diameter
$d_{\rm t} \sim 1~{\rm nm}$.
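For illustration, inverting Eq.\ \eqref{eq:phi} with $\phi^* = 1/3$
yields
\begin{align}
B^* = \frac{\phi^* (h/e)}{\pi (d_{\rm t}/2)^2}
\approx \frac{(1/3) \times 4.14 \times 10^{-15}~{\rm Wb}}
{\pi \times (0.5~{\rm nm})^2}
\approx 1.8 \times 10^{3}~{\rm T}
\nonumber
\end{align}
for $d_{\rm t} = 1~{\rm nm}$.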
Table \ref{tab:N_edge} also includes the results for metallic SWNTs,
which are topological insulators, except for the armchair
nanotubes, irrespective of the magnetic field, as discussed
in the next section.
Figure \ref{fig:class}(a) illustrates the number of
edge states at $B = 0$, where the displacement of each hexagon
from the leftmost one indicates the chiral vector
$\bm{C}_{\rm h} = n \bm{a}_1 + m \bm{a}_2$.
Almost all the SWNTs have edge states except for the
semiconducting SWNTs with $n = m + 1$ and
metallic ones of armchair type.
Figure \ref{fig:class}(b) shows the critical magnetic
field for the topological phase transition, where the number of
edge states changes discontinuously.
The critical magnetic field should be experimentally accessible
for metallic SWNTs with $d_{\rm t} \gtrsim 1~{\rm nm}$
(see Sec.\ \ref{sec:metal}).
\begin{table}[t]
\caption{
Number of edge states $N_{\rm edge}$ in the $(n, m)$-SWNT.
We assume $0 \leq m \leq n$.
The number of flux quanta $\phi$ is restricted to
$-1/2 \leq \phi < 1/2$.
$\Theta(x) = 1$ ($0$) for $x > 0$ ($x < 0$).
}
\label{tab:N_edge}
\begin{tabular}{ll}
\hline \hline
Type $\quad$ & Number of edge states $N_{\rm edge}$ \\
\hline
\multicolumn{2}{l}{Semiconductor type-1
[${\rm mod}(2n + m, 3) = 1$]} \\
& $\begin{cases}
4 \frac{n - m + 1}{3} &
(0 \leq |\phi| < \frac13) \\
4 \frac{n - m - 2}{3} &
(\frac13 < |\phi| \leq \frac12) \\
\end{cases}$ \\
\multicolumn{2}{l}{Semiconductor type-2
[${\rm mod}(2n + m, 3) = 2$]} \\
& $ \begin{cases}
4 \frac{n - m - 1}{3} &
(0 \leq |\phi| < \frac13) \\
4 \frac{n - m + 2}{3} &
(\frac13 < |\phi| \leq \frac12) \\
\end{cases}$ \\
\multicolumn{2}{l}{Metal other than armchair
[${\rm mod}(2n + m, 3) = 0$ and $n \neq m$]} \\
& $ 4 \left( \frac{n - m}{3} - 1 \right) + 2 \sum_{\tau, s} \Theta(
\Delta k_c + \tau s \Delta k_{\rm so} + \tau \Delta k_\phi)$ \\
\multicolumn{2}{l}{Metal of armchair type ($n = m$)} \\
& 0 \\
\hline \hline
\end{tabular}
\end{table}
\begin{figure}[t] \begin{center}
\includegraphics{edge.pdf}
\includegraphics{mag.pdf}
\caption{
(Color online)
(a) Number of edge states in the absence of a
magnetic field ($B = 0$) and
(b) critical magnetic field $B^*$
of topological phase transition at which
the number of edge states changes discontinuously.
The displacement of each hexagon from the leftmost one indicates the
chiral vector
$
\bm{C}_{\rm h} = n \bm{a}_1 + m \bm{a}_2
$, where
$\bm{a}_1$ and $\bm{a}_2$ are the primitive
lattice vectors of graphene, shown in Fig.\ \ref{fig:lattice}.
}
\label{fig:class}
\end{center} \end{figure}
The number of edge states per diameter approaches
$N_{\rm edge} / d_{\rm t} \rightarrow
4 (n - m) / (3 d_{\rm t})$
as $d_{\rm t}$ increases.
This agrees with the result in Ref.\
\cite{Akhmerov2008}
for the edge states of graphene.
We thus obtain an asymptotic form of $N_{\rm edge}$ for
large $d_{\rm t}$,
\begin{align}
N_{\rm edge} \simeq \frac{8 \pi d_{\rm t}}{3a}
\cos \left( \theta + \frac{\pi}{3} \right).
\end{align}
This should be useful when a nanotube of large diameter is
examined in a continuum approximation.
\section{Analysis for metallic nanotube
\label{sec:metal}}
In this section, we discuss the topology of metallic SWNTs
with
$
{\rm mod} (2n + m, 3) = 0.
$
Without the curvature-induced effects, a band of
angular momentum $\mu_{+}$ ($\mu_{-}$) passes the Dirac
point $K$ ($K'$) with wave number $k_{+}$ ($k_{-}$) on the
graphene sheet:
\begin{align}
\mu_\pm &= \pm \frac{2n + m}{3}
= \pm \frac{d}{3} {\rm mod} \left( \frac{2n + m}{d}, 3 \right)
\quad ({\rm mod}~d),
\label{eq:mu_pm} \\
k_\pm &= \pm \frac{2 \pi}{3a_z} (2p + q),
\label{eq:k_pm}
\end{align}
\cite{Izumida2016}.
Metallic SWNTs are classified into metal-1
for ${\rm mod}(\frac{2n + m}{d}, 3) \ne 0$ and metal-2
for ${\rm mod}(\frac{2n + m}{d}, 3) = 0$.
$\mu_+ \neq \mu_-~({\rm mod}~d)$ in the former, whereas
$\mu_+ = \mu_- = 0$ in the latter.
In order to describe the narrow energy gap in metallic SWNTs,
we further extend our 1D lattice model to include
the curvature-induced effects besides the AB effect in
a magnetic field.
As seen in Appendix \ref{app:derivation},
our model is constructed so as to reproduce
the effective Hamiltonian for $\bm{k} \cdot \bm{p}$ theory, which
describes the curvature-induced effects and SO interaction
\cite{Izumida2009},
in the vicinity of $k_\pm$ with angular momentum $\mu_\pm$.
\subsection{1D lattice model with curvature effects}
The lattice model that reproduces the effective Hamiltonian of the
$\bm{k} \cdot \bm{p}$ theory is given by
\begin{align}
H = &\sum_{\bm{r}_A, s} \sum_{j = 1}^3
\left( \gamma^{(1)}_{s, j} \,
c^{\, s \, \dagger}_{\bm{r}_A}
c^{\, s}_{\bm{r}_A + \bm{\Delta}_j} + {\rm H.c.}
\right)
\nonumber \\
+ &\sum_{\sigma=A,B} \sum_{\bm{r}_\sigma, s} \sum_{j = 1}^6
\gamma^{(2)}_{s, j} \,
c^{\, s \, \dagger}_{\bm{r}_\sigma}
c^{\, s}_{\bm{r}_\sigma + \bm{\Delta}^{\!\!(2)}_j}
\label{eq:H2d}
\end{align}
with $c^{\, s}_{\bm{r}}$ being the field operator for a $\pi$
electron with spin $s$ at the atom at position $\bm{r}$
\cite{Okuyama2017}.
The quantization axis for the spin $s = \pm 1$ is chosen along the tube axis
\cite{Izumida2009}.
This model consists of anisotropic and spin-dependent
hopping integrals to
the nearest-neighbor atoms and those to the second nearest neighbors.
As mentioned in Sec.\ \ref{sec:semicond} B,
the former connects $A$ and $B$ atoms
that are depicted by three vectors
$
\bm{\Delta}_j
$
$(j = 1, 2, 3)$,
whereas the latter connects atoms of the same species
indicated by six vectors
$
\bm{\Delta}^{\!\!(2)}_j
$
$(j = 1, 2, \ldots, 6)$ in Fig.\ \ref{fig:lattice_2nd}(a).
The explicit forms of hopping integrals,
$
\gamma^{(i)}_{s,j}
$
$(i = 1, 2)$,
are provided in Appendix \ref{app:derivation}.
As described in Sec.\ \ref{sec:semicond} A, we use a
set of primitive lattice vectors, $\bm{R}$ and $\bm{C}_{\rm h} / d$.
By performing the Fourier transformation for
the $\nu$ coordinate in Eqs.\ \eqref{eq:r_A} and \eqref{eq:r_B}, we obtain
$
H = \sum_{\mu = 0}^{d - 1} \sum_{s = \pm} H_{\mu, s}
$
with
\begin{align}
H_{\mu, s} = &\sum_{\ell} \sum_{j = 1}^3 \left( \gamma^{(1)}_{s, j}
{\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
c^{\, \mu, s \, \dagger}_{A, \ell}
c^{\, \mu, s}_{B, \ell + \Delta'_j}
+ {\rm H.c.} \right)
\nonumber \\
+ &\sum_{\sigma=A,B} \sum_{\ell} \sum_{j = 1}^6
\gamma^{(2)}_{s, j}
{\rm e}^{{\rm i} 2 \pi \mu \Delta^{\!(2)\prime\prime}_j / d}
c^{\, \mu, s \, \dagger}_{\sigma, \ell}
c^{\, \mu, s}_{\sigma, \ell + \Delta^{\!(2)\prime}_j},
\label{eq:H1d}
\end{align}
where $c^{\, \mu, s}_{\sigma, \ell}$ is the field operator
of an electron with angular momentum $\mu$, spin $s$, and
at sublattice $\sigma$ of
site index $\ell$ in Fig.\ \ref{fig:lattice_2nd}(b).
This is an extended 1D lattice model
(see Appendix \ref{app:derivation}
for $\Delta^{\!(2)\prime}_j$ and $\Delta^{\!(2)\prime\prime}_j$).
\begin{figure}[t] \begin{center}
\includegraphics{lattice-2nd.pdf}
\caption{
(Color online) (a)
An extension of Fig.\ \ref{fig:lattice}(a) to include the hopping to
the second-nearest-neighbor atoms.
The three vectors
$\bm{\Delta}_j$
$(j = 1, 2, 3)$
connect the nearest-neighbor atoms,
whereas the six vectors
$\bm{\Delta}^{\!\!(2)}_j$ $(j = 1, 2, \ldots, 6)$ connect
the second-nearest-neighbor ones.
(b) An extended 1D lattice model to describe the metallic SWNTs.
}
\label{fig:lattice_2nd}
\end{center} \end{figure}
\subsection{Bulk properties}
For the bulk system,
the Fourier transformation of $H_{\mu, s}$ along the $\ell$ direction
yields the two-by-two Hamiltonian,
\begin{align}
H_{\mu, s} (k) &= \epsilon_{\mu, s} (k; \phi)+\gamma
\begin{bmatrix}
0 & f_{\mu, s} (k; \phi) \\
f_{\mu, s}^* (k; \phi) & 0
\end{bmatrix},
\label{eq:H_k}
\end{align}
in the sublattice space for the 1D BZ,
$
-\pi \leq k a_z < \pi
$,
where
\begin{align}
f_{\mu, s} (k; \phi) &= \frac{1}{\gamma}\sum_{j = 1}^3
\gamma^{(1)}_{s, j}
{\rm e}^{{\rm i} 2 \pi \mu \Delta''_j / d}
{\rm e}^{{\rm i} k a_z \Delta'_j},
\label{eq:f} \\
\epsilon_{\mu, s} (k; \phi) &=
\sum_{j = 1}^6 \gamma^{(2)}_{s, j}
{\rm e}^{{\rm i} 2 \pi \mu \Delta^{\!(2)\prime\prime}_j / d}
{\rm e}^{{\rm i} k a_z \Delta^{\!(2)\prime}_j}.
\label{eq:epsilon}
\end{align}
The dispersion relation for subband $(\mu, s)$ is given by
\begin{align}
E_{\mu, s} (k; \phi) &=
\epsilon_{\mu, s} (k; \phi)
\pm |\gamma f_{\mu, s} (k; \phi)|.
\label{eq:energy_metal}
\end{align}
The system is an insulator when
$
|\epsilon_{\mu, s}(k; \phi)| < |\gamma f_{\mu, s}(k; \phi)|
$
in the whole BZ.
Thanks to the curvature-induced fine structure,
this condition is satisfied
even for metallic SWNTs except in the vicinity of
$|\phi| = \phi^*$,
where the band gap is closed by a magnetic field.
Then positive and negative $E_{\mu, s} (k)$'s form
the conduction and valence bands, respectively.
It should be mentioned that $\phi^* \ll 1/3$ in metallic SWNTs,
which corresponds
to a magnetic field of a few Tesla
\cite{Okuyama2017}.
\subsection{Winding number and bulk-edge correspondence}
For any SWNT with finite band gap,
we can define the winding number as
\begin{align}
w_{\mu, s} = \frac{1}{2 \pi} \oint_{\rm BZ}
{\rm d} \arg f_{\mu, s} (k; \phi),
\label{eq:w_metal}
\end{align}
for subband $(\mu, s)$.
Strictly speaking, it is a topological invariant
only if the sublattice symmetry holds
\cite{Wen1989, Asboth2015}:
$
\epsilon_{\mu, s} (k; \phi) = 0
$.
However, as far as the system is an insulator, i.e.,
$
|\epsilon_{\mu, s}(k; \phi)| < |\gamma f_{\mu, s}(k; \phi)|
$
in the whole BZ, it is well defined.
We discuss the topology of metallic SWNTs using $w_{\mu, s}$
in Eq.\ \eqref{eq:w_metal} except for the vicinity of $|\phi|=\phi^*$.
The bulk-edge correspondence in Eq.\ \eqref{eq:bulk_edge} is
generalized to
\begin{align}
N_{\rm edge} = 2 \sum_{\mu = 0}^{d - 1} \sum_{s = \pm}
|w_{\mu, s}|
\label{eq:bulk_edge_metal}
\end{align}
in terms of $w_{\mu, s}$.
The proof of this relation is given in Appendix \ref{app:EOM}.
Although the energy levels of the edge states deviate slightly from
$E_{\rm F} = 0$ in the presence of $\epsilon_{\mu, s} (k; \phi)$,
they remain within the band gap as long as the gap is finite.
\subsection{Classification with curvature effects}
Now we classify the metallic SWNTs.
$f_\mu (k; \phi)$ defined in
Secs.\ \ref{sec:semicond} and \ref{sec:mag}
passes the origin on the complex plane
for $\mu = \mu_\pm$ (at $k=k_\pm$) corresponding to the
Dirac points in the absence of curvature-induced effects.
We evaluate $w_{\mu, s}$ using Eq.\ \eqref{eq:w_metal}
around the origin, while we can use the results in Table \ref{tab:w}
otherwise since the topological nature is not changed by a
small perturbation.
As an example, we show $f_\mu (k; \phi)$ on the complex
plane for $\mu=0$ in the (7,1)-SWNT (metal-2 with
$\mu_+=\mu_-=0$) in Fig.\ \ref{fig:flower}(a).
Petals $j=3$ and $5$ go through the origin in the absence of
curvature effects, whereas petal $j=4$ winds around the origin.
The latter yields $w_\mu = 1$ in Table \ref{tab:w}.
The contribution from the
former is discussed in the following.
In the vicinity of origin on the complex plane,
\begin{align}
&f_{\mu_{\pm}, s} (k; \phi) \simeq \frac{\sqrt3}{2} a \,
{\rm e}^{\pm {\rm i} (\theta - \frac{2\pi}{3})}
\bigl[ (\Delta k_c \pm s \Delta k_{\rm so}
\pm \Delta k_\phi)
\nonumber \\
& \quad + {\rm i} (k - k_\pm \mp \Delta k_z) \bigr],
\label{eq:f_metal}
\end{align}
which follows from Eq.\ \eqref{eq:f_near_Dirac} in Appendix \ref{app:derivation}.
Here, $\Delta k_\phi$ represents the AB effect in a magnetic field,
$\Delta k_c$ and $\Delta k_z$ stem from the mixing between $\pi$ and
$\sigma$ orbitals, and $\Delta k_{\rm so}$ is due to the SO interaction
(see Appendix \ref{app:derivation}).
Equation \eqref{eq:f_metal} describes a straight line obtained by
rotating, by the angle $\pm (\theta - \frac{2\pi}{3})$ around the origin,
a line that orthogonally intersects the real axis at
$
r = (\Delta k_c \pm s \Delta k_{\rm so} \pm \Delta k_\phi)
(\sqrt3 a/2)
$.
This contributes to the winding number
when the rotated line crosses the negative part of the real axis.
For armchair SWNTs of $\theta = \pi/6$,
this condition is never satisfied since
the line is parallel to the real axis.
For the other metallic SWNTs,
the condition holds if $r > 0$,
as shown in Fig.\ \ref{fig:flower}(b).
In consequence,
we obtain the complete expression for $w_{\mu, s}$ for
metallic SWNTs.
For $\mu \neq \mu_\pm$,
\begin{align}
w_{\mu, s} = w_\mu
\label{eq:w_mu_s0}
\end{align}
in Table \ref{tab:w}.
For $\mu = \mu_\pm$,
\begin{align}
w_{\mu_\pm, s} = w_{\mu_\pm}
+ \Theta(\Delta k_c \pm s \Delta k_{\rm so} \pm \Delta k_\phi),
\label{eq:w_mu_s}
\end{align}
where $w_{\mu_\pm}$ is given by Table \ref{tab:w} and
$\Theta(x) = 1$ ($0$) for $x > 0$ ($x < 0$).
This explains the topological phase transition
at $|\phi| \ll 1/3$,
which was demonstrated in Ref.\ \cite{Okuyama2017},
for the following reason.
$\Delta k_\phi$ is proportional to $B$ along the tube axis,
$\Delta k_\phi = - eBd_{\rm t}/(4 \hbar)$,
in Eq.\ \eqref{eq:k_phi} in Appendix \ref{app:derivation}.
For metallic SWNTs other than the armchair,
$
\Delta k_c \pm s \Delta k_{\rm so} \simeq \Delta k_c > 0
$
and thus Eq.\ \eqref{eq:w_mu_s} yields $w_{\mu_\pm, s} =
w_{\mu_\pm}+1$ at $B=0$.
When $B$ is increased beyond $B^*$, which satisfies
$
\Delta k_c + s \Delta k_{\rm so} + \Delta k_\phi = 0
$, $w_{\mu_+, s}$ becomes $w_{\mu_+}$.
We obtain the number of edge states $N_{\rm edge}$
through Eq.\ \eqref{eq:bulk_edge_metal}
by the summation of $w_{\mu_\pm, s}$ in Eqs.\ \eqref{eq:w_mu_s0}
and \eqref{eq:w_mu_s}. The expression for $N_{\rm edge}$
is common for metal-1 and -2, as shown in Table \ref{tab:N_edge}.
All the metallic SWNTs but the armchair type are topological insulators
in the absence of a magnetic field [Fig.\ \ref{fig:class}(a)] and show the
topological phase transition at $B=B^*$ [Fig.\ \ref{fig:class}(b)].
The armchair SWNTs are always topologically trivial:
They are forbidden to have finite winding
numbers regardless of the strength of the magnetic field,
which is attributable to
the mirror symmetry with respect to a
plane including the tube axis
\cite{Izumida2016}.
\begin{figure}[t] \begin{center}
\includegraphics{flower-metal.pdf}
\caption{
(Color online)
(a) $f_{\mu,s} (k)$ on the complex plane for
$\mu = 0$ in a metallic SWNT with chirality $(7, 1)$.
$d = 1$, $p = 1$, and $q = 0$. Petals $j=3$ and $5$ pass through the
origin in the absence of curvature effects, whereas
petal $j=4$ winds around the origin.
(b) $f_{\mu, s} (k)$ around the origin on the complex plane
for $\mu =\mu_{+}$ which passes the $K$ point, in metallic SWNTs
with chiral angle $\theta$.
$
r = (\Delta k_c + s \Delta k_{\rm so} + \Delta k_\phi)
(\sqrt3 a / 2) > 0
$.
The dotted line intersects the real axis orthogonally at $r$.
The trajectory for a zigzag SWNT ($\theta = 0$) is obtained by
rotating it by $- 2 \pi / 3$
around the origin
(blue line),
whereas that for an armchair SWNT ($\theta = \pi/6$) is obtained by
rotating it by $- \pi / 2$
(orange line).
Therefore the intercept of the real axis is always negative
(pink segment) for SWNTs of $0 \leq \theta < \pi/6$.
}
\label{fig:flower}
\end{center} \end{figure}
\section{Discussion
\label{sec:discussion}}
We comment on the previous studies which predicted
an increase in the number of edge states in metallic SWNTs as
the magnetic field increases
\cite{Sasaki2005, Sasaki2008, Marganska2011}.
At first sight, this seems to contradict our results.
However, the discrepancy arises because they used parameters
corresponding to $\Delta k_c < 0$ in our model.
We obtain positive $\Delta k_c$ by fitting the dispersion relation with
that from the {\it ab initio} calculation known as
the extended tight-binding model
\cite{Izumida2009}.
However, its sign is quite sensitive to
the details of the model,
and therefore which sign is realized should be confirmed experimentally.
Also, others theoretically predicted no topological phase transition for
metallic SWNTs
\cite{Lin2016}.
This is due to the oversimplification with
$\Delta k_c = \Delta k_{\rm so} = 0$.
A comment should be made on the boundary condition,
which is important for the edge states in 1D topological insulators.
Our calculations have been performed for finite systems
in which a SWNT is cut along the broken line in Fig.\ \ref{fig:lattice}(a).
The angular momentum $\mu$ is
a good quantum number in this case.
This is a minimal boundary edge,
where every atom at the ends has just one dangling bond
\cite{Akhmerov2008}.
The bulk-edge correspondence in
Eqs.\ (\ref{eq:bulk_edge}) and (\ref{eq:bulk_edge_metal})
holds only for such edges.
Some other boundary conditions result in different numbers of edge states,
as discussed in Ref.\ \cite{Izumida2016}.
Then the winding number $w_{\mu}$ is shifted from that in the case of
minimal boundary. Since the shift of $w_{\mu}$ is independent of
magnetic field \cite{Izumida2016}, the topological phase transition and
the critical magnetic field should not be influenced by the boundary
conditions. The number of edge states is changed at the transition.
For armchair SWNTs, the topological phase transition does not take
place with any boundary condition, whereas the number of edge states
may be finite.
Although the examined boundaries are limited,
we speculate that, in general, the topological phase transition is
determined by the topological nature of the bulk irrespective of
the boundaries.
\section{Conclusions
\label{sec:conclusions}}
We have classified the topology for all possible chiralities $(n,m)$
of SWNTs in the absence and presence of a magnetic field along the tube axis.
First, we have studied semiconducting SWNTs using a 1D lattice model
in Eq.\ \eqref{eq:H0} and depicted in Fig.\ \ref{fig:lattice}(b).
We have found that
(i) the semiconducting SWNTs other than those with $n=m+1$
are topologically nontrivial in the absence of a magnetic field
and (ii) all the semiconducting SWNTs show the topological phase
transition at AB phase $|\phi|=\phi^*=1/3$. The phase transition,
however, should be hard to observe since a magnetic field of more
than $100~{\rm T}$ is required
when the tube diameter $d_{\rm t} \sim 1~{\rm nm}$.
Next, we have examined metallic SWNTs with a small band gap
using an extended 1D lattice model in Eq.\ \eqref{eq:H1d} and depicted in
Fig.\ \ref{fig:lattice_2nd}(b). Although the winding number $w_{\mu,s}$ is not
a topological invariant in the presence of $\gamma^{(2)}_{s,j}$
in Eq.\ \eqref{eq:H1d}, it is well defined except for the vicinity of
topological phase transition. Indeed we have proved the bulk-edge
correspondence for $w_{\mu,s}$ in Eq.\ \eqref{eq:bulk_edge_metal}.
We have observed that
(i) all the metallic SWNTs but the armchair type ($n=m$) are
topological insulators in the absence of a magnetic field and show the
topological phase transition at a critical magnetic field $B^*$.
Since $B^*$ can be a few Tesla
\cite{Okuyama2017},
the topological phase
transition could be observed for metallic SWNTs.
(ii) The armchair SWNTs are always topologically trivial.
In conclusion, the majority of SWNTs are topological insulators
in the absence of a magnetic field and show a topological
phase transition by applying a magnetic field along the tube.
Only metallic SWNTs of armchair type are topologically trivial
regardless of the magnetic field.
\acknowledgements
The authors acknowledge fruitful discussion with
K.\ Sasaki, A.\ Yamakage, M.\ Grifoni, and R.\ Saito.
This work was partially supported by JSPS KAKENHI Grants No.\ 26220711,
No.\ 15K05118, No.\ 15H05870, No.\ 15KK0147, No.\ 16H01046, and No.\ 18H04282.
\section{Introduction}
\label{sec:1}
Heavy quarkonium consists of two heavy quarks. Since its discovery, it has been a well-suited probe of the strong interaction and a test of quantum chromodynamics (QCD), owing to the characteristic scales of the system. The non-relativistic QCD (NRQCD) factorization framework~\cite{Bodwin:1994jh} was proposed to describe the production and decay of heavy quarkonium. This effective theory introduces the color-octet (CO) mechanism and surpasses the traditional color-singlet (CS) model~\cite{Berger:1980ni, Baier:1981uk, Humpert:1986cy, Gastmans:1986qv, Gastmans:1987be}. NRQCD factorization has achieved great successes in describing the production~\cite{Campbell:2007ws, Gong:2008ft, Butenschoen:2010rq, Ma:2010yw} and polarization~\cite{Gong:2010bk, Butenschoen:2012px, Chao:2012iv, Gong:2012ug, Feng:2018ukp} of heavy quarkonium at hadron colliders. High-energy, high-luminosity electron-positron colliders, such as the CEPC~\cite{CEPCStudyGroup:2018rmc, CEPCStudyGroup:2018ghi}, FCC-ee~\cite{FCC:2018evy}, and ILC~\cite{ILC:2007bjz, Erler:2000jg}, are considered among the main next-generation colliders. According to their designs, these colliders can run at various collision energies with high luminosity, so future high-energy $e^+e^-$ colliders are expected to provide excellent experimental platforms for the study of heavy quarkonium. Compared with hadron colliders, the $e^+e^-$ collider has less background for the production of heavy quarkonium, and the theoretical calculation is simpler. At the $e^+e^-$ collider, the best way to produce heavy quarkonium is via electron-positron annihilation, especially when the collision energy is around the $Z$-boson mass, where a high yield of heavy quarkonium is expected due to the resonance effect~\cite{Sun:2013liv}.
In addition to the annihilation mode, photoproduction at the $e^+e^-$ collider is also an important production channel of heavy quarkonium. There are two main sources of photons. At the CEPC, for example, photons come from bremsstrahlung of the initial electron and positron, which is described by the Weizs\"acker-Williams approximation (WWA)~\cite{Frixione:1993yw}. At the ILC, photons can be backscattered off the electron beam by an external laser. From the viewpoint of QCD, the photon can be in a hadronic state besides the bare photon state~\cite{Chu:2017mnm}: owing to quantum fluctuations, the photon can transform for a short period of time into a light quark pair or gluons. As a result, the photon can either participate as a whole in the hard interaction to produce heavy quarkonium, which is called direct photoproduction, or be resolved, so that the light quarks or gluons inside it enter the interaction, which is called resolved photoproduction, as shown in figure~\ref{fig:11}.
\begin{figure}[t]
\centering
\includegraphics[width=.6\textwidth]{resolvedp}
\caption{\label{fig:11} Example diagrams of direct (left) and resolved (right) photoproduction processes at the $e^+e^-$ collider. The diagrams are drawn with JaxoDraw~\cite{Binosi:2003yf}.}
\end{figure}
The $J/\psi$ photoproduction has been studied in a large body of literature~\cite{Ma:1997bi, Japaridze:1998ss, Godbole:2001pj, Qiao:2001wv, Kniehl:2002wd, Klasen:2003zn, Artoisenet:2007qm, Klasen:2001mi, Klasen:2008mh, Li:2009zzu, Chen:2014xka, Sun:2015hhv, Chen:2016hju, Yang:2020xkl, Yang:2022yxb}. The inclusive $J/\psi$ photoproduction was measured in 2001 by the DELPHI Collaboration at CERN LEP II~\cite{TodorovaNova:2001pt,Abdallah:2003du}. The theoretical prediction of the $p_t$ distribution based on the CS model is smaller by one order of magnitude than the experimental measurements. After the CO mechanism is included, NRQCD gives a nice description of the measurements~\cite{Klasen:2001cu}, and this has been viewed as one of the earliest evidences for the existence of color-octet processes in nature. With the CO LDMEs extracted by a global fit of worldwide data~\cite{Butenschoen:2011yh}, however, the next-to-leading-order NRQCD prediction of $J/\psi$ photoproduction systematically underestimates the DELPHI data. It is noteworthy that only a few $J/\psi$ events were reconstructed at LEP II, and the measurements have large uncertainties~\cite{He:2019tig1}. At present there are no other experiments to verify the results of LEP II. As for hadron colliders, NRQCD factorization has achieved great success in explaining the experimental measurements of heavy quarkonium, but it still does not give a unified description of all observables, such as the total yield, kinematic distributions, and polarization~\cite{Chen:2021tmf}. Consequently, it is necessary to measure the production of heavy quarkonium at other platforms, such as high-energy $e^+e^-$ colliders, so as to further test the NRQCD factorization framework.
In this work, we study the inclusive $J/\psi$ photoproduction at the ILC, including both the color-octet channels and the resolved photoproduction processes. In section~\ref{sec:2}, we give the basic theoretical framework of the calculation. The numerical results and discussions are presented in section~\ref{sec:3} and a brief summary is in section~\ref{sec:4}.
\section{Formulation and calculation}
\label{sec:2}
At the International Linear Collider, the initial photons can achieve high energy and high luminosity, and their spectrum is described as~\cite{Ginzburg:1981vm},
\begin{equation}
f_{\gamma/e}(x)=\frac{1}{N}\left[1-x+\frac{1}{1-x}-4 r(1-r)\right],
\end{equation}
where $x=E_{\gamma} / E_{e}$, $r=x /\left[x_{m}(1-x)\right]$, and the normalization factor,
\begin{equation}
N=\left(1-\frac{4}{x_{m}}-\frac{8}{x_{m}^{2}}\right) \log(1+x_m)+\frac{1}{2}+\frac{8}{x_{m}}-\frac{1}{2 (1+x_m)^{2}}.
\end{equation}
Here $x_{m}=4 E_{e} E_{l} \cos ^{2} \frac{\theta}{2} / m_e^{2}$, where $E_e$ and $E_l$ are the energies of the incident electron and laser beams, respectively, $m_e$ is the electron mass, and $\theta$ is the angle between the two beams. The energy of the laser backscattering (LBS) photon is restricted by
\begin{equation}
0 \leq x \leq \frac{x_{m}}{1+x_{m}},
\end{equation}
with optimal value of $x_m$ being $4.83$~\cite{Telnov:1989sd}.
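For reference, the spectrum and its end point can be coded directly
from the expressions above; the following Python sketch is a minimal
illustration (the function names are ours):
\begin{verbatim}
import math

X_M = 4.83  # optimal value of x_m

def lbs_norm(x_m=X_M):
    # normalization factor N
    return ((1 - 4/x_m - 8/x_m**2) * math.log(1 + x_m)
            + 0.5 + 8/x_m - 0.5/(1 + x_m)**2)

def f_gamma_e(x, x_m=X_M):
    # LBS photon spectrum; vanishes above x_m/(1 + x_m) ~ 0.83
    if not 0.0 <= x <= x_m / (1 + x_m):
        return 0.0
    r = x / (x_m * (1 - x))
    return (1 - x + 1/(1 - x) - 4*r*(1 - r)) / lbs_norm(x_m)

print(f_gamma_e(0.5))
\end{verbatim}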
Within the NRQCD factorization framework, the production cross section of heavy quarkonium is separated into a product of the long-distance matrix elements (LDMEs) and the short-distance coefficients (SDCs). The SDC describes the production of an intermediate $Q\bar{Q}(n)$ pair, where the quantum number $n={}^{2S+1}\!L_J^{[c]}$ with $c$ being the color multiplicity. The LDME describes the hadronization of the $Q\bar{Q}(n)$ pair into heavy quarkonium. The SDCs can be calculated perturbatively, while the LDMEs are assumed to be process-independent and can be extracted from global fits of experimental data. The differential cross section for heavy quarkonium ($H$) photoproduction is then formulated as a double convolution of the cross sections of the parton-parton (or photon) processes and the corresponding parton distribution functions,
\begin{equation}
\begin{aligned}
\mathrm{d} \sigma & \left(e^{+} e^{-} \rightarrow e^{+} e^{-} H+X\right) \\
= &\int \mathrm{d} x_{1} f_{\gamma / e}\left(x_{1}\right) \int \mathrm{d} x_{2} f_{\gamma / e}\left(x_{2}\right) \sum_{i, j, k} \int \mathrm{d} x_{i} f_{i / \gamma}\left(x_{i}\right) \int \mathrm{d} x_{j} f_{j / \gamma}\left(x_{j}\right) \\&
\times \sum_{n} \mathrm{~d} \sigma(i j \rightarrow c \bar{c}[n]+k)\left\langle \mathcal{O}^{H}[n]\right\rangle,
\end{aligned}
\end{equation}
Here, $f_{i/\gamma}$ are the Gl\"uck-Reya-Schienbein (GRS) parton distribution functions of the light quarks and gluon in the photon~\cite{Gluck:1999ub}.
$d\sigma(ij\to c\overline{c}[n]+k)$ represents the differential partonic cross section, $i,j=\gamma,g,q,\bar{q}$ and $k=g,q,\bar{q}$ with $q=u,d,s$.
$c\overline{c}[n]$ are the intermediate $c\overline{c}$ pair with states $n={}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}$ for $H=J/\psi,\psi(2S)$ and $n={}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}$ for $H=\chi_{cJ}$($J=0,1,2$), respectively. $\langle{\cal O}^H[n]\rangle$ are the LDMEs of $H$.
Heavier charmonia, such as $\psi(2S)$ and $\chi_{cJ}(J=0,1,2)$, can decay into $J/\psi$. These feed-down contributions are taken into account by multiplying their direct-production cross sections with corresponding decay branching ratios to $J/\psi$,
\begin{equation}
\begin{aligned}
d \sigma^{\text {prompt} J / \psi} = & d \sigma^{J / \psi}+ d \sigma^{\psi(2 S)} B r(\psi(2 S) \rightarrow J / \psi+X)\\
&+\sum_{J} d \sigma^{\chi_{c J}} B r\left(\chi_{c J} \rightarrow J/\psi+\gamma\right).
\end{aligned}
\end{equation}
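As a bookkeeping illustration, the prompt cross section is assembled
from the direct pieces as in the following Python sketch (the branching
ratios are those quoted in Sec.~\ref{sec:3}; the input cross sections
are placeholders to be supplied):
\begin{verbatim}
BR_PSI2S = 0.61                         # Br(psi(2S) -> J/psi + X)
BR_CHI = {0: 0.014, 1: 0.343, 2: 0.19}  # Br(chi_cJ -> J/psi + gamma)

def prompt_jpsi(sig_direct, sig_psi2s, sig_chi):
    # sig_chi: dict {J: direct chi_cJ cross section}
    return (sig_direct
            + BR_PSI2S * sig_psi2s
            + sum(BR_CHI[J] * sig_chi[J] for J in (0, 1, 2)))
\end{verbatim}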
\begin{figure}[t]
\centering
\includegraphics[width=0.24\textwidth]{rrcc8g}
\includegraphics[width=0.24\textwidth]{rqcc1q} \includegraphics[width=0.24\textwidth]{rqcc8q}
\includegraphics[width=0.24\textwidth]{rgcc18g}\\
\includegraphics[width=0.24\textwidth]{qqbcc18g}
\includegraphics[width=0.24\textwidth]{ggcc18g} \includegraphics[width=0.24\textwidth]{gqcc1q}
\includegraphics[width=0.24\textwidth]{gqcc8q}
\caption{Some Feynman diagrams of the photoproduction processes. The diagrams are drawn with JaxoDraw~\cite{Binosi:2003yf}.}
\label{fig:21}
\end{figure}
The sub-processes to be calculated for the three production mechanisms are listed below. As for direct photoproduction, we have
\begin{equation}
\begin{aligned}
&\gamma + \gamma \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + c+\bar{c} \rightarrow J/\psi(\psi(2S)) +X,\\
&\gamma + \gamma \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + c+\bar{c}\rightarrow \chi_{cJ} +X,\\
&\gamma + \gamma \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{8}]}] + g\rightarrow J/\psi(\psi(2S),\chi_{cJ}) +X.
\end{aligned}
\end{equation}
As for single resolved photoproduction, we have
\begin{equation}
\begin{aligned}
&\gamma + q(\bar{q},q=u,d,s) \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + q(\bar{q})\rightarrow J/\psi(\psi(2S)) +X,\\
&\gamma + g \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + c+\bar{c}\rightarrow J/\psi(\psi(2S)) +X,\\
&\gamma + g \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + g\rightarrow J/\psi(\psi(2S)) +X,\\
&\gamma + q(\bar{q},q=u,d,s) \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + q(\bar{q})\rightarrow \chi_{cJ} +X,\\
&\gamma + g \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + c+\bar{c}\rightarrow \chi_{cJ} +X,\\
&\gamma + g \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{8}]}] + g\rightarrow \chi_{cJ} +X.
\end{aligned}
\end{equation}
As for double resolved photoproduction, we have
\begin{equation}
\begin{aligned}
&q(q=u,d,s) + \bar{q} \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + g \rightarrow J/\psi(\psi(2S)) +X,\\
&q(q=u,d,s) + \bar{q} \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + c+\bar{c} \rightarrow J/\psi(\psi(2S)) +X,\\
&g + g \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + g \rightarrow J/\psi(\psi(2S)) +X,\\
&g + g \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + c+\bar{c} \rightarrow J/\psi(\psi(2S)) +X,\\
&g + q(\bar{q},q=u,d,s) \rightarrow c\bar{c}[{}^3\!S_1^{[\textbf{1}]},{}^1\!S_0^{[\textbf{8}]},{}^3\!S_1^{[\textbf{8}]},{}^3\!P_J^{[\textbf{8}]}] + q(\bar{q})\rightarrow J/\psi(\psi(2S)) +X,\\
&q(q=u,d,s) + \bar{q} \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + g \rightarrow \chi_{cJ} +X,\\
&q(q=u,d,s) + \bar{q} \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + c+\bar{c} \rightarrow \chi_{cJ} +X,\\
&g + g \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + g \rightarrow \chi_{cJ} +X,\\
&g + g \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + c+\bar{c} \rightarrow \chi_{cJ} +X,\\
&g + q(\bar{q},q=u,d,s) \rightarrow c\bar{c}[{}^3\!P_J^{[\textbf{1}]},{}^3\!S_1^{[\textbf{8}]}] + q(\bar{q})\rightarrow \chi_{cJ} +X.\\
\end{aligned}
\end{equation}
Some Feynman diagrams of these photoproduction processes are presented in figure~\ref{fig:21}. The well-established package Feynman Diagram Calculation (FDC)~\cite{Wang:2004du} is used to perform the analytical and numerical calculations. In FDC, the standard projection method~\cite{Bodwin:2002cfe} is employed to deal with the hard process. After treating the squared amplitudes analytically, FDC generates FORTRAN codes for the numerical integration over the phase space.
\section{Numerical results and discussions}
\label{sec:3}
To do the numerical calculation, we choose the electromagnetic fine structure constant $\alpha=1/137$ and the one-loop running strong coupling constant $\alpha_s(\mu_r)$.
To ensure the gauge invariance of the hard-scattering amplitude, the charm quark mass, $m_c$, is set approximately as $m_c=m_H/2$, where the charmonium masses are $m_H= 3.097, 3.415, 3.511, 3.556, 3.686 \mathrm{~GeV}$~\cite{Tanabashi:2018oca} for $H=J / \psi, \chi_{c J}(J=0,1,2)$ and $\psi(2 S)$, respectively. The branching ratios are taken as $Br(\psi(2 S) \rightarrow J / \psi)=0.61$ and $Br\left(\chi_{c J} \rightarrow J / \psi\right)=0.014,0.343,0.19$ for $J=0,1,2$~\cite{Tanabashi:2018oca}.
In dealing with the feed-down contributions, a shift of the transverse momentum of the charmonium, $p_{t}^{H} \approx p_{t}^{H^{\prime}} \times\left(m_{H} / m_{H^{\prime}}\right)$, is used. The renormalization scale is set to $\mu_r=m_T=\sqrt{m_{H}^{2}+(p_{t}^H)^{2}}$. Taking $\mu_r=m_T/2$, the cross section ($\sqrt{S}=500\mathrm{~GeV}$) would increase by about $80\%$, while taking $\mu_r=2m_T$, it would decrease by about $40\%$.
Such a large scale dependence could be tamed by higher-order calculations or a proper scale-setting procedure, cf. Ref.~\cite{Wu:2019mky}. As for the non-perturbative LDMEs, we take~\cite{Feng:2018ukp},
\begin{equation}
\begin{aligned}
\langle O^{\psi}({ }^{3} S_{1}^{[\textbf{1}]})\rangle &=\frac{3 N_{c}}{2 \pi}|R_{\psi}(0)|^{2}, \\
\langle O^{\chi_{cJ}}({ }^{3} P_{J}^{[\textbf{1}]})\rangle &=\frac{3}{4 \pi}(2 J+1)|R_{\chi_{c}}^{\prime}(0)|^{2},\\
\langle O^{J / \psi}({ }^{1} S_{0}^{[\textbf{8}]})\rangle &=5.66\times 10^{-2} \mathrm{~GeV}^{3}, \\
\langle O^{J / \psi}({ }^{3} S_{1}^{[\textbf{8}]})\rangle &=1.77 \times 10^{-2} \mathrm{~GeV}^{3}, \\
{\langle O^{J / \psi}({ }^{3} P_{0}^{[\textbf{8}]})\rangle}/{m_{c}^{2}} &=3.42 \times 10^{-3} \mathrm{~GeV}^{3},\\
\langle O^{\psi(2S)}({ }^{1} S_{0}^{[\textbf{8}]})\rangle &=-1.20 \times 10^{-4} \mathrm{~GeV}^{3}, \\
\langle O^{\psi(2S)}({ }^{3} S_{1}^{[\textbf{8}]})\rangle &=3.40 \times 10^{-3} \mathrm{~GeV}^{3}, \\
{\langle O^{\psi(2S)}({ }^{3} P_{0}^{[\textbf{8}]})\rangle}/{m_{c}^{2}} &=4.20 \times 10^{-3} \mathrm{~GeV}^{3},\\
\langle O^{\chi_{c0}}({ }^{3} S_{1}^{[\textbf{8}]})\rangle &=2.21 \times 10^{-3} \mathrm{~GeV}^{3},
\end{aligned}
\end{equation}
where the wave functions at the origin are given as $|R_{J / \psi}(0)|^{2}=0.81 \mathrm{~GeV}^{3}$, $|R_{\psi(2 S)}(0)|^{2}=0.53 \mathrm{~GeV}^{3}$ and $|R_{\chi_{c}}^{\prime}(0)|^{2}=0.075 \mathrm{~GeV}^{5}$~\cite{Eichten:1995ch}.
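For the reader's convenience, the color-singlet LDMEs and the running coupling entering the calculation can be reproduced with a minimal numerical sketch in Python (the values of $\Lambda_{\rm QCD}$ and $n_f$ below are illustrative assumptions, not inputs quoted above):
\begin{verbatim}
import math

# One-loop running coupling; Lambda_QCD and n_f are illustrative
# assumptions (their values are not quoted in the text above).
def alpha_s(mu, lam=0.2, nf=4):
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(mu**2 / lam**2))

# Color-singlet LDMEs from the wave functions at the origin.
Nc = 3
R2_jpsi, R2_psi2s, Rp2_chic = 0.81, 0.53, 0.075   # GeV^3, GeV^3, GeV^5
O_jpsi  = 3.0 * Nc / (2.0 * math.pi) * R2_jpsi    # ~1.16 GeV^3
O_psi2s = 3.0 * Nc / (2.0 * math.pi) * R2_psi2s   # ~0.76 GeV^3
O_chic  = [3.0 / (4.0 * math.pi) * (2 * J + 1) * Rp2_chic
           for J in (0, 1, 2)]                    # GeV^5

# Renormalization scale mu_r = m_T for a J/psi at pt = 5 GeV.
m_jpsi, pt = 3.097, 5.0
mu_r = math.sqrt(m_jpsi**2 + pt**2)
print(O_jpsi, O_chic, alpha_s(mu_r))
\end{verbatim}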
\begin{table}[tp]
\centering
\begin{tabular}{|c|cccc|}
\hline
$\sqrt{S}(\mathrm{GeV})$ & $\sigma_{\mathrm{direct}\,J/\psi}$ & $\sigma_{\psi(2S)\rightarrow J/\psi}$ & $\sigma_{\chi_{cJ}\rightarrow J/\psi}$ & $\sigma_{\mathrm{prompt}\,J/\psi}$\\
\hline
250 & $420.13$ & $28.27$ & $2.21$ & $450.61$ \\
500 & $667.84$ & $45.18$ & $4.44$ & $717.46$ \\
1000 & $1036.89$ & $70.66$ & $8.44$ & $1115.99$ \\
\hline
\end{tabular}
\caption{\label{tab:cross-section1}The integrated cross sections (in units of pb) for prompt $J/\psi$ photoproduction at different collision energies at the ILC. The cut $p_t>1\mathrm{~GeV}$ is imposed on the $J/\psi$. Both the CS and the CO channels have been summed up.}
\end{table}
\begin{table}[tp]
\centering
\begin{tabular}{|c|cccccc|}
\hline
$\sqrt{S}(\mathrm{GeV})$ & $\sigma^{\mathrm{direct}\,J/\psi}_{{}^{3}S_{1}^{[\textbf{1}]}}$ & $\sigma^{\psi(2S)\rightarrow J/\psi}_{{}^{3}S_{1}^{[\textbf{1}]}}$
& $\sigma^{\chi_{c J}\rightarrow J/\psi}_{{}^{3}P_{J}^{[\textbf{1}]}}$
& $\sigma^{\mathrm{direct}\,J/\psi}_{{}^{1}S_{0}^{[\textbf{8}]}}$ & $\sigma^{\mathrm{direct}\,J/\psi}_{{}^{3}S_{1}^{[\textbf{8}]}}$ & $\sigma^{\mathrm{direct}\,J/\psi}_{{}^{3}P_{J}^{[\textbf{8}]}}$\\
\hline
250 & $21.61$ & $6.89$ & $2.05$ & $348.22$ & $0.34$ & $49.95$\\
500 & $32.12$ & $10.44$ & $4.15$ & $555.48$ & $0.61$ & $79.63$ \\
1000 & $47.34$ & $15.60$ & $7.92$ & $864.47$ & $1.10$ & $123.99$ \\
\hline
\end{tabular}
\caption{\label{tab:cross-section2}The integrated cross sections (in units of pb) from various channels of $J/\psi$ photoproduction at different collision energies at the ILC. The cut $p_t>1\mathrm{~GeV}$ is imposed on the $J/\psi$.}
\end{table}
In table \ref{tab:cross-section1}, the integrated cross sections of prompt $J/\psi$ photoproduction at the ILC at different energies are listed. It can be seen that the integrated cross section grows with the collision energy, while the feed-down fraction is insensitive to the energy, remaining at about $7\%$.
The contribution of direct $J/\psi$ photoproduction dominates over those from the feed-down channels. Due to the large cross section, a huge number of $J/\psi$ events is expected to be generated via photoproduction at the ILC.
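The composition quoted above can be verified directly from table~\ref{tab:cross-section1}; a minimal sketch at $\sqrt{S}=500\mathrm{~GeV}$ (the luminosity figure is an illustrative assumption):
\begin{verbatim}
# Cross sections in pb at sqrt(S) = 500 GeV, copied from table 1.
direct, psi2s_fd, chicJ_fd = 667.84, 45.18, 4.44
prompt = direct + psi2s_fd + chicJ_fd              # 717.46 pb
print((psi2s_fd + chicJ_fd) / prompt)              # ~0.07 feed-down

# Event yield for an assumed integrated luminosity of
# 1e4 fb^-1 = 1e7 pb^-1 (illustrative number only).
print(prompt * 1.0e7)                              # ~7e9 prompt J/psi
\end{verbatim}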
Table \ref{tab:cross-section2} presents the integrated cross sections for different channels of $J/\psi$ photoproduction.
The color-octet channels are dominant, and two of them, ${}^{1}S_{0}^{[\textbf{8}]}$ and ${}^{3}P_{J}^{[\textbf{8}]}$, provide about $95\%$ of the direct $J/\psi$ production.
For the $J/\psi$ photoproduction at the ILC, the NRQCD factorization framework and the CS model therefore give predictions that differ by one order of magnitude.
In the color-singlet channel, the feed-down contributions from $\psi(2S)$ and $\chi_{c J}$ are significant, a situation very different from that in the color-octet channels.
Although the integrated cross section for the CS channel is very small compared with that of the CO channels, it is still sizable in its own right.
For example, taking the integrated luminosity of the ILC to be $\mathcal{O}(10^4)\mathrm{~fb^{-1}}$, $\mathcal{O}(10^8)$ $J/\psi$ mesons would be produced via the CS channels alone.
Consequently, the measurement of $J/\psi$ photoproduction at the ILC could be performed precisely, which would be very helpful for testing NRQCD factorization and for further studies of heavy-quarkonium physics.
In the following we take $\sqrt{S}=500\mathrm{~GeV}$ for more discussions.
Figure~\ref{fig:31} shows the prompt $J/\psi$ photoproduction in terms of transverse momentum distributions, where the direct and feed-down channels are displayed separately, for both the NRQCD and the CSM predictions. We can see that the direct production dominates over the whole $p_t$ range.
\begin{figure}[t]
\centering
\includegraphics[width=.5\textwidth]{pt_distr_prompt_feeddown}
\caption{\label{fig:31} The $p_t$ distributions for prompt $J/\psi$ photoproduction at the ILC ($\sqrt{S}=500\mathrm{~GeV}$).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.32\textwidth]{pt_distr_direct_jpsi_diff_ccn}
\includegraphics[width=.32\textwidth]{y_distr_direct_jpsi_diff_ccn}
\includegraphics[width=.32\textwidth]{cos_distr_direct_jpsi_diff_ccn}
\caption{\label{fig:32} Kinematic distributions for different intermediate $c\bar{c}$ states of $J/\psi$ photoproduction at the ILC ($\sqrt{S}=500\mathrm{~GeV}$). The $y$ and $\cos\theta$ distributions are plotted under the cut $p_t>1\mathrm{~GeV}$. The $y$ curves use the same legends as the $p_t$ ones.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.32\textwidth]{pt_distr_direct_jpsi_sr_dr}
\includegraphics[width=.32\textwidth]{y_distr_direct_jpsi_sr_dr}
\includegraphics[width=.32\textwidth]{cos_distr_direct_jpsi_sr_dr}
\caption{\label{fig:33} Kinematic distributions for the resolved photoproduction of $J/\psi$ at the ILC ($\sqrt{S}=500\mathrm{~GeV}$). The $y$ and $\cos\theta$ distributions are plotted under the cut $p_t>1\mathrm{~GeV}$. All the distributions are NRQCD predictions, and the $y$ curves use the same legends as the $p_t$ ones.}
\end{figure}
Figure~\ref{fig:32} shows the kinematic distributions for different intermediate $c\bar{c}$ Fock states.
Here, three kinds of kinematic distributions are presented: transverse momentum ($p_t$), rapidity ($y$), and angular ($\cos\theta$) distributions, where $\theta$ is the angle between the $J/\psi$ momentum and the collision beams.
It can be seen that these channels have different $p_t$ behaviors.
The CO channels dominate in the small-$p_t$ region, while the contribution from the CS channel becomes relatively important in the large-$p_t$ region.
In the rapidity and angular distributions, the curves of the different channels do not intersect each other anywhere in the plotted range, and the ${}^{1}S_{0}^{[\textbf{8}]}$ channel is always dominant.
The curves of these two kinematic distributions vary gently over wide middle regions,
which means there would be enough events across the whole range for a good measurement.
Taking the integrated luminosity of the ILC to be $\mathcal{O}(10^4)\mathrm{~fb^{-1}}$ as reference,
in the lowest regions of the $y$ and $\cos\theta$ distributions, e.g., $-0.5<y<0.5$ and $-0.05<\cos\theta<0.05$, about $10^5$ and $10^4$ $J/\psi$ mesons, respectively, would be produced according to the NRQCD prediction,
and about $10^4$ and $10^3$ according to the CS model.
Due to the high luminosity and the large cross section of $J/\psi$ photoproduction, precise measurements of these kinematic distributions at the ILC are very promising.
These kinematic distributions can be used to extract and constrain the LDMEs accurately.
In figure~\ref{fig:33}, kinematic distributions of the direct photoproduction, the single resolved photoproduction and the double resolved photoproduction are presented.
In the region $1\mathrm{~GeV}<p_t<50\mathrm{~GeV}$, the single resolved photoproduction contributes $95\%$ of the integrated cross section. Although the direct photoproduction contributes more than the single resolved one in the large-$p_t$ region, there may not be enough events there for a precise measurement.
In the region $-0.9<\cos\theta<0.9$, the single resolved and double resolved channels provide contributions of the same magnitude, while the direct production is always negligible.
\begin{figure}[t]
\centering
\includegraphics[width=.36\textwidth]{fd2}
\caption{\label{fig:34} Feynman diagram of the single resolved photoproduction $\gamma+g\rightarrow c\bar{c}(n={}^{1}S_{0}^{[\textbf{8}]},$ ${}^{3}P_{J=0,1,2}^{[\textbf{8}]})+g \rightarrow J/\psi+X$.}
\end{figure}
The single resolved sub-processes $\gamma+g\rightarrow c\bar{c}({}^{1}S_{0}^{[\textbf{8}]},{}^{3}P_{J=0,1,2}^{[\textbf{8}]})+g \rightarrow J/\psi+X$ provide most of the contributions to $J/\psi$ photoproduction at the ILC. This can be explained by inspecting their Feynman diagrams.
The diagrams shown in figure~\ref{fig:34} are absent for the other sub-processes due to parity and color conservation, and they provide the main contributions of the ${}^{1}S_{0}^{[\textbf{8}]}$ and ${}^{3}P_{J}^{[\textbf{8}]}$ channels.
The squared invariant mass of the gluon propagator in figure~\ref{fig:34} can be expressed as,
\begin{equation}
\begin{aligned}
k^{2} &=4 m_{c}^{2}-x \sqrt{S} M_{t} e^{-y} \\
&=4 m_{c}^{2}-2\left(E_{J/\psi}+E_{g}\right) M_{t} e^{-y} \\
&=-4 m_{c}^{2} e^{-2 y}-\left(1+e^{-2 y}\right)\left(p_{t}^{J/\psi}\right)^{2}-2 E_{g} M_{t} e^{-y},
\end{aligned}
\end{equation}
from which it can be seen that $1/k^2$, and hence the cross section, can become very large in the small-$p_t$ and large-$y$ region.
This feature is also illustrated in the $p_t$ and $y$ distributions of figure~\ref{fig:32}.
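As a cross-check, the last line of the expression above can be evaluated numerically; a minimal sketch, where $M_t$ is taken to be the transverse mass of the $J/\psi$ (an assumption, since it is not defined explicitly above) and the gluon energy $E_g$ is a free illustrative input:
\begin{verbatim}
import math

m_jpsi = 3.097
mc = m_jpsi / 2.0          # m_c = m_H / 2, as set above

def k2(pt, y, Eg):
    # Last line of the expression above; M_t is assumed to be
    # the transverse mass of the J/psi.
    Mt = math.sqrt(m_jpsi**2 + pt**2)
    return (-4.0 * mc**2 * math.exp(-2.0 * y)
            - (1.0 + math.exp(-2.0 * y)) * pt**2
            - 2.0 * Eg * Mt * math.exp(-y))

# |k^2| shrinks at small pt and large y, so 1/k^2 is enhanced there.
for pt, y in [(1.0, 0.0), (1.0, 3.0), (10.0, 0.0)]:
    print(pt, y, k2(pt, y, Eg=1.0))
\end{verbatim}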
To make our predictions more useful as a reference, we also calculate the integrated cross sections under various kinematic cuts, listed in table~\ref{tab:cross-section3}, and show the corresponding kinematic distributions in figure~\ref{fig:35}.
It can be seen that these cuts have a large effect on the cross sections and the kinematic distributions. Even after imposing these cuts, however, the cross section, and thus the number of $J/\psi$ mesons produced, remains sizable.
\begin{table}[t]
\centering
\begin{tabular}{|c|ccc|}
\hline
cuts & $p_t>1\mathrm{~GeV}$ & $p_t>2\mathrm{~GeV}$
& $p_t>3\mathrm{~GeV}$\\
\hline
$|y|<2$ & $38.85$ & $13.94$ & $5.84$ \\
$|y|<3$ & $97.47$ & $35.83$ & $15.29$ \\
$|y|<4$ & $278.09$ & $103.62$ & $44.61$ \\
\hline
\end{tabular}
\caption{\label{tab:cross-section3} The integrated cross sections (in units of pb) under various cuts for $J/\psi$ photoproduction at the ILC ($\sqrt{S}=500\mathrm{~GeV}$). All the predictions are based on NRQCD.}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=.32\textwidth]{pt_distr_direct_jpsi_diff_cut_y}
\includegraphics[width=.32\textwidth]{y_distr_direct_jpsi_diff_cut_pt}
\includegraphics[width=.32\textwidth]{cos_distr_direct_jpsi_diff_cut_pt}
\caption{\label{fig:35} Kinematic distributions under various cuts for $J/\psi$ photoproduction at the ILC ($\sqrt{S}=500\mathrm{~GeV}$). All the distributions are plotted for the NRQCD predictions.}
\end{figure}
\section{Summary}
\label{sec:4}
In this work, inclusive $J/\psi$ photoproduction at the future ILC is preliminarily studied at the leading order of $\alpha_s$ within the NRQCD factorization framework.
Both the color-octet channels and the resolved photoproduction processes are considered.
The numerical results show that the color-octet channels and the single resolved processes dominate the production.
Due to the large cross section of the photoproduction and the high luminosity of the ILC, numerous $J/\psi$ mesons are expected to be produced, and various kinematic distributions could be measured precisely.
Consequently, the photoproduction of $J/\psi$ at the ILC provides a good laboratory to test the NRQCD factorization framework and to deepen our knowledge of the production mechanism of heavy quarkonium.
\acknowledgments
This work was supported in part by the Natural Science Foundation of China under Grants No. 12147116, No. 12175025, No. 12005028 and No. 12147102, by the China Postdoctoral Science Foundation under Grant No. 2021M693743 and by the Fundamental Research Funds for the Central Universities under Grant No. 2020CQJQY-Z003.
\section*{Introduction}
The pursuit of methods to robustly and accurately measure animal behavior is at least as old as the scientific study of behavior itself~\cite{klette2008understanding}. Trails of hominid footprints, ``motion'' captured by Pliocene deposits at Laetoli that date to 3.66 million years ago, firmly established that early hominoids achieved an upright, bipedal and free-striding gait~\cite{leakey1979pliocene}.
Beyond fossilized locomotion, behavior can now be measured in a myriad of ways: from GPS trackers, videography, to microphones, to tailored electronic sensors~\cite{kays2015terrestrial, brown2013observing, camomilla2018trends}.
Videography is perhaps the most general and widely-used method as it allows noninvasive, high-resolution observations of behavior~\cite{johansson1973visual, o2010camera, weinstein2018computer}. Extracting behavioral measures from video poses a challenging computational problem. Recent advances in deep learning have tremendously simplified this process~\cite{wu2020recent, mathis2020deep}, which quickly impacted neuroscience~\cite{mathis2020deep, datta2019computational}.
\medskip
In this primer we review markerless (animal) motion capture with deep learning. In particular, we review principles of algorithms, highlight their potential, as well as discuss pitfalls for experimentalists, and compare them to alternative methods (inertial sensors, markers, etc.). Throughout, we also provide glossaries of relevant terms from deep learning and hardware. Furthermore, we will discuss how to use them, what pitfalls to avoid, and provide perspectives on what we believe will and should happen next.
\medskip
\begin{figure*}[htp]
\centering
\includegraphics[width=\textwidth]{fig/figure1.jpg}
\caption{%
{\bf Schematic overview of markerless motion capture or pose estimation.} The pixel representation of an image (left) or sequence of images (video) is processed and converted into a list of keypoints (right).
Semantic information about object identity and keypoint type is associated with the predictions. For instance, each keypoint is a structure with a name (e.g., ear), the x and y coordinates, as well as a confidence readout of the network (this is included in many, but not all, pose estimation packages); keypoints are then grouped according to individuals (subjects).}
\label{fig:overview}
\end{figure*}
What do we mean by ``markerless motion capture?'' While biological movement can also be captured by dense or surface models~\cite{mathis2020deep, guler2018densepose, Zuffi20163dMenagerie}, here we will almost exclusively focus on ``keypoint-based pose estimation.'' Human and many other animals' motion is determined by the geometric structures formed by several pendulum-like motions of the extremities relative to a joint~\cite{johansson1973visual}. Seminal psychophysics studies by Johansson showed that just a few coherently moving keypoints are sufficient to be perceived as human motion~\cite{johansson1973visual}. This empirically highlights why pose estimation is a great summary of such video data. Which keypoints should be extracted, of course, dramatically depends on the model organism and the goal of the study; e.g., many are required for dense, 3D models~\cite{guler2018densepose, sanakoyeu2020transferring, Zuffi20163dMenagerie}, while a single point can suffice for analyzing some behaviors~\cite{mathis2020deep}. One of the great advantages of deep learning based methods is that they are very flexible, and the user can define what should be tracked.
\section*{Principles of deep learning methods for markerless motion capture}
In raw video we acquire a collection of pixels that are static in their location and have varying value over time.
For analyzing behavior, this representation is sub-optimal:
Instead, we are interested in properties of objects in the images, such as location, scale and orientation.
Objects are collections of pixels in the video moving or being changed in conjunction.
By decomposing objects into \emph{keypoints} with semantic meaning ---such as body parts in videos of human or animal subjects---a high dimensional video signal can be converted into a collection of time series describing the movement of each keypoint (Figure~\ref{fig:overview}).
Compared to raw video, this representation is easy to analyze, and semantically meaningful for investigating behavior and addressing the original research question for which the data has been recorded.
\medskip
\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/figure2.jpg}
\caption{{\bf Comparison of marker-based (traditional) and markerless tracking approaches.}
{\bf (A)} In marker-based tracking, \emph{prior} to performing an experiment, special measures have to be taken regarding hardware and preparation of the subject (images adapted from~\citealp{inayat2020matlab, maceira2019wearable}; IMU stands for
inertial measurement unit).
{\bf (B)} For markerless pose estimation, raw video is acquired and processed post-hoc: Using labels from human annotators, machine learning models are trained to infer keypoint representations directly from video (on-line inference without markers is also possible~\cite{Kane2020dlclive}).
Typically, the architectures underlying pose estimation can be divided into a feature extractor and a decoder: The former maps the image representation into a feature space, the latter infers keypoint locations given this feature representation.
In modern deep learning systems, both parts of the systems are trained end-to-end. \label{fig:comparisonmarkerbasedvsmarkerless}
}
\end{figure*}
Motion capture systems aim to infer keypoints from videos:
In marker-based systems, this can be achieved by manually enhancing parts of interest (by colors, LEDs, reflective markers), which greatly simplifies the computer vision challenge, and then using classical computer vision tools to extract these keypoints. Markerless pose estimation algorithms directly map raw video input to these coordinates. The conceptual difference between marker-based and marker-less approaches is that the former requires special preparation or equipment, while the latter can even be applied \emph{post-hoc}, but typically requires ground truth annotations of example images (i.e., a training set). Notably, markerless methods allow for extracting additional keypoints at a later stage, something that is not possible with markers (Figure~\ref{fig:comparisonmarkerbasedvsmarkerless}).
\medskip
Fundamentally, a pose estimation algorithm can be viewed as a function that maps frames from a video into the coordinates of body parts. The algorithms are highly flexible with regard to what body parts are tracked. Typically the identity of the body parts (or objects) have semantically defined meaning (e.g., different finger knuckles, the head), and the algorithms can group them accordingly (namely, to assemble an individual) so that the posture of multiple individuals can be extracted simultaneously (Figure~\ref{fig:overview}).
For instance, for an image of one human the algorithm would return a list of pixel coordinates (these can have subpixel resolution) per body part and frame (and sometimes an uncertainty prediction;~\citealp{insafutdinov2016deepercut, kreiss2019pifpaf, mathis2018deeplabcut}).
Which body parts the algorithm returns depends on both the application and the training data provided---this is an important aspect with respect to how the algorithms can be customized for applications.
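For illustration, the per-frame output of such an algorithm can be pictured as the following data structure (a minimal sketch in Python; the field names are hypothetical and do not correspond to the API of any particular package):
\begin{verbatim}
# One frame of output for two individuals; coordinates are in
# pixels (possibly with subpixel resolution) plus a confidence.
frame_prediction = {
    "frame_index": 42,
    "individuals": [
        {"ear":  (512.3, 211.7, 0.97),   # (x, y, confidence)
         "nose": (498.1, 230.4, 0.99)},
        {"ear":  (120.5, 305.2, 0.91),
         "nose": (101.9, 322.8, 0.95)},
    ],
}

# Stacking such records over frames yields one time series per
# keypoint and individual, ready for behavioral analysis.
x_nose = frame_prediction["individuals"][0]["nose"][0]
\end{verbatim}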
\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/figure3.jpg}
\caption{Example augmentation images with labeled body parts in red. {\bf (A)} Two example frames of Alpine choughs (Pyrrhocorax graculus) near Mont Blanc with human-applied labels in red (original). The images to the right illustrate three augmentations (as labeled). {\bf (B)} Two example frames of a trail-tracking mouse (Mus musculus) from~\cite{mathis2018deeplabcut} with four labeled bodyparts as well as augmented variants. \href{https://colab.research.google.com/github/DeepLabCut/Primer-MotionCapture/blob/master/COLAB_Primer_MotionCapture_Fig3.ipynb}{Open in Google Colaboratory}
}
\label{fig:AUG}
\end{figure*}
\subsection*{Overview of algorithms}
\justify While many pose estimation algorithms~\cite{moeslund2006survey, POPPE20074} have been proposed, algorithms based on deep learning~\citep{lecun2015dl} are the most powerful as measured by performance on human pose estimation benchmarks~\cite{ToshevDEEPPOSE,JainMODEEP,insafutdinov2016deepercut, newell2016stacked, cao2018openpose, Xiao2018, cheng2020higherhrnet}. More generally, pose estimation algorithms fall under ``object detection'', a field that has seen tremendous advances with deep learning (aptly reviewed in Wu et al.,~\citealp{wu2020recent}).
In brief, pose estimation can often intuitively be understood as a system of an encoder that extracts important (visual) features from the frame, which are then used by the decoder to predict the body parts of interests along with their location in the image frame.
\medskip
In classical algorithms (see~\citealp{moeslund2006survey, POPPE20074, wu2020recent}), handcrafted feature representations are used that extract invariant statistical descriptions from images. These features were then used together with a classifier (decoder) for detecting complex objects like humans~\cite{dalal2005histograms, moeslund2006survey}. Handcrafted feature representations are (loosely) inspired by neurons in the visual pathway and are designed to be robust to changes in illumination, and translations; typical feature representations are Scale Invariant Feature Transform (SIFT; \citealp{lowe2004distinctive}), Histogram of Oriented Gradients (HOG; \citealp{dalal2005histograms}) or Speeded Up Robust Features (SURF; ~\citealp{bay2008speeded}).
\medskip
In more recent approaches, both the encoder and decoders (alternatively called the backbone and output heads, respectively) are deep neural networks (DNN) that are directly optimized on the pose estimation task. An optimal strategy for pose estimation is jointly learning representations of the raw image or video data (encoder) and a predictive model for posture (decoder).
In practice, this is achieved by concatenating multiple layers of differentiable, non-linear transformations and by training such a model as a whole using the back propagation algorithm~\cite{lecun2015dl, goodfellow2016deep, wu2020recent}.
In contrast to classical approaches, DNN based approaches directly optimize the feature representation in a way most suitable for the task at hand (For a glossary of deep learning terms see Box~\ref{box1}).
\medskip
Machine learning systems are composed of a dataset, model, loss function (criterion) and optimization algorithm~\cite{goodfellow2016deep}. The dataset defines the input-output relationships that the model should learn: In pose estimation, a particular pose (output) should be predicted for a particular image (input), see Figures~\ref{fig:overview} \& \ref{fig:comparisonmarkerbasedvsmarkerless}B. The model's parameters (weights) are iteratively updated by the optimizer to minimize the loss function. Thereby the loss function measures the quality of a predicted pose (in comparison to the ground truth data). Choices about these four parts influence the final performance and behavior of the pose-estimation system and we discuss possible design choices in the next sections.
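These four components map directly onto a canonical training loop; the following is a minimal PyTorch-style sketch (model, data loader and hyperparameters are placeholders):
\begin{verbatim}
import torch

def train(model, loader, epochs=10, lr=1e-4):
    # Optimizer: updates the weights to minimize the loss.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    # Criterion: compares predicted and target heatmaps.
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for images, target_heatmaps in loader:   # the dataset
            pred = model(images)                 # the model
            loss = loss_fn(pred, target_heatmaps)
            opt.zero_grad()
            loss.backward()                      # backpropagation
            opt.step()                           # one update step
    return model
\end{verbatim}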
\subsection*{Datasets \& Data Augmentation}
\justify Two kinds of datasets are relevant for training pose estimation systems:
First, one or multiple datasets used for related tasks---such as image recognition---can be used for \emph{pre-training} computer vision models on this task (also known as transfer learning; see Box~\ref{box1}). This dataset is typically considerably larger than the one used for pose estimation. For example, ImageNet~\cite{deng2009imagenet}, sometimes denoted as ImageNet-21K, is a highly influential dataset and a subset was used for the ImageNet Large Scale Visual Recognition Challenge in 2012 (ILSVRC-2012;~\citealp{russakovsky2015imagenet}) for object recognition. Full ImageNet contains 14.2 million images from 21K classes, the ILSVRC-2012 subset contains 1.2 million images of 1,000 different classes (such as car, chair, etc;~\citealp{russakovsky2015imagenet}). Groups working towards state-of-the-art performance on this benchmark also helped push the field to build better DNNs and openly share code. This dataset has been extensively used for pre-training networks, which we will discuss in the model and optimization section below.
\medskip
The second highly relevant dataset is the one curated for the task of interest: Mathis et al.~\cite{mathis2018deeplabcut} empirically demonstrated that the size of this dataset can be comparably small for typical pose estimation cases in the laboratory. Typically, this dataset contains 10--500 images, vs. the standard human pose estimation benchmark datasets, such as MS COCO~\cite{lin2014microsoft} or MPII pose~\cite{andriluka20142d}, the latter of which has around 40K annotated individuals (in about 25K images). This implies that the curated dataset is highly influential on the final performance, and great care should be taken to select diverse postures, individuals, and background statistics and to label the data accurately (discussed below in ``pitfalls'').
\medskip
In practice, several factors matter: the performance of a fine-tuned model on the task of interest, the amount of images that need to be annotated for fine-tuning the network, and the convergence rate of optimization algorithm---i.e., how many steps of gradient descent are needed to obtain a certain performance. Using a pre-trained network can help with this in several regards:
\citet{he2018rethinking} show that in the case of large training datasets, pre-training typically aids with convergence rates, but not necessarily the final performance.
There is evidence that, under the right circumstances (i.e., given enough task-relevant data) and with longer training, randomly initialized models can match the performance of fine-tuned ones for keypoint detection on COCO~\cite{he2018rethinking} and horses~\cite{mathis2019TRANSFER}; however, the resulting networks are less robust~\cite{mathis2019TRANSFER}. Beyond robustness, using a pre-trained model is generally advisable when the amount of labeled data for the target task is small, which is true for many applications in neuroscience, as it leads to shorter training times and better performance with less data~\cite{he2018rethinking, mathis2018deeplabcut, mathis2019TRANSFER, arac2019deepbehavior}. Thus, pre-trained pose estimation algorithms save training time, increase robustness, and require substantially less training data. Indeed, most packages in Neuroscience now use pre-trained models~\cite{mathis2018deeplabcut,graving2019fast,arac2019deepbehavior,Bala2020, Liu2020optiflex, mathisimagenet2020}, although some do not~\cite{pereira2019fast,Gnel2019DeepFly3D, Zimmermann2020}, which can give acceptable performance for simplified situations with aligned individuals.
\medskip
More recently, larger datasets like the 3.5 billion Instagram dataset~\cite{mahajan2018exploring}, JFT which has 300M images~\cite{hinton2015distilling,xie2020noisy} and OpenImages~\cite{kuznetsova2018open} became popular, further improving performance and robustness of the considered models~\cite{xie2020noisy}. What task is used for pre-training also matters. Corroborating this insight, Li et al. showed that pre-training on large-scale object detection task can improve performance for tasks that require fine, spatial information like segmentation~\cite{li2019analysis}.
\medskip
Besides large datasets for pre-training, a curated dataset with pose annotations is needed for optimizing the algorithm on the pose estimation task. The process is discussed in more detail below; it typically suffices to label a few (diverse) frames. Data augmentation is the process of expanding the training set by applying specified manipulations (like rotations or changes in image scale).
Based on the chosen corruptions, models become more invariant to rotations, scale changes or translations and thus more accurate (with less training data).
Augmentation can also help with improving robustness to noise, like jpeg compression artefacts and motion blur (Figure~\ref{fig:AUG}).
To note, data augmentation schemes should not affect the semantic information in the image: for instance, if color conveys important information about the identity of an animal, augmentations involving changes in color are not advisable.
Likewise, augmentations which change the spatial position of objects or subjects should always be applied to both the input image and the labels (Box~\ref{box2}).
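As a concrete example of the last point, a horizontal flip must transform the keypoint labels together with the image; a minimal sketch, assuming an image array of shape (H, W, 3) and keypoints given as (x, y) pixel coordinates:
\begin{verbatim}
import numpy as np

def hflip_with_keypoints(image, keypoints):
    """Flip the image left-right and mirror the keypoint x's."""
    h, w = image.shape[:2]
    flipped = image[:, ::-1]              # mirror the pixels
    kpts = keypoints.astype(float).copy()
    kpts[:, 0] = (w - 1) - kpts[:, 0]     # mirror x, keep y
    return flipped, kpts

# Caveat: for symmetric bodyparts (left/right ear), the labels must
# also be swapped; and color augmentations should be avoided when
# color carries identity information.
\end{verbatim}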
\subsection*{Model architectures}
\justify Systems for markerless pose estimation are typically composed of a \emph{backbone} network (encoder), which takes the role of the feature extractor, and one or multiple \emph{heads} (decoders). Understanding the model architectures and design choices common in deep learning based pose estimation systems requires basic understanding of convolutional neural networks. We summarize the key terms in Box~\ref{box1}, and expand on what encoders and decoders are below.
\medskip
Instead of using handcrafted features as in classical systems, deep learning based systems employ ``generic'' encoder architectures which are often based on models for object recognition. In a typical system, the encoder design affects the most important properties of the algorithms such as its inference speed, training-data requirements and memory demands. For the pose estimation algorithms so far used in neuroscience the encoders are either stacked hourglass networks~\cite{newell2016stacked}, MobileNetV2s~\cite{sandler2018mobilenetv2}, ResNets~\cite{He_2016_CVPR}, DenseNets~\cite{huang2017densely} or EfficientNets~\cite{tan2019efficientnet}.
These encoder networks are typically pre-trained on one or multiple of the larger-scale datasets introduced previously (such as ImageNet), as this has been shown to be an advantage for pose estimation on small lab-scale sized datasets~\cite{mathis2019TRANSFER, mathis2018deeplabcut, arac2019deepbehavior}.
For common architectures this pre-training step does not need to be carried out explicitly: pre-trained weights for popular architectures are already available in common deep learning frameworks.
\medskip
The impact of the encoder on DNN performance is a highly active research area. The encoders are continuously improved in regards to speed and object recognition performance~\cite{huang2017densely, sandler2018mobilenetv2, tan2019efficientnet, wu2020recent, kornblith2019better}.
Naturally, due to the importance of the ImageNet benchmark the accuracy of network architectures continuously increases (on that dataset).
For example, we were able to show that this performance increase is not merely reserved for ImageNet, or (importantly) other object recognition tasks~\cite{kornblith2019better}, but in fact that better architectures on ImageNet are also better for pose estimation~\cite{mathisimagenet2020}. However, being better on ImageNet, also comes at the cost of decreasing inference speed and increased memory demands. DeepLabCut (an open source toolbox for markerless pose estimation popular in neuroscience) thus incorporates backbones from MobileNetV2s (faster) to EfficientNets (best performance on ImageNet; \citealp{mathis2019TRANSFER,mathisimagenet2020}).
\medskip
\input{box1}
\input{box2}
In (standard) convolutional encoders, the high-resolution input images get gradually downsampled while the number of learned features increases. Regression based approaches which directly predict keypoint locations from the feature representation can potentially deal with this downsampled representation. When the learning problem is instead cast as identifying the keypoint locations on a grid of pixels, the output resolution needs to be increased first, often by deconvolutional layers~\cite{insafutdinov2016deepercut, Xiao2018}. We denote this part of the network as the decoder, which takes downsampled features, possibly from multiple layers in the encoder hierarchy, and gradually upsamples them again to arrive at the desired resolution. The first models of this class were Fully Convolutional Networks~\cite{long2015fully}, and later DeepLab~\cite{chen2017deeplab}. Many popular architectures today follow similar principles. Design choices include the use of skip connections between decoder layers, but also regarding skip connections between the encoder and decoder layers. Example encoder--decoder setups are illustrated in Figure~\ref{fig:model-architectures}. The aforementioned building blocks---encoders and decoders---can be used to form a variety of different approaches, which can be trained end-to-end directly on the target task (i.e., pose estimation).
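To make the encoder--decoder idea concrete, the following minimal sketch (PyTorch, with arbitrary layer sizes; not the architecture of any particular package) downsamples an image with strided convolutions and upsamples back to per-keypoint heatmaps with transposed (de-)convolutions:
\begin{verbatim}
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self, n_keypoints):
        super().__init__()
        # Encoder: strided convolutions downsample by 4 overall.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions upsample back to the
        # input resolution, one output map per keypoint.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, n_keypoints, 4, stride=2,
                               padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # per-keypoint heatmaps
\end{verbatim}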
\medskip
Pre-trained models can also be adapted to a particular application. For instance, DeeperCut~\cite{insafutdinov2016deepercut}, which was adapted by the animal pose estimation toolbox DeepLabCut~\cite{mathis2018deeplabcut}, was built with a ResNet~\cite{He_2016_CVPR} backbone network, but adapted the stride by atrous convolutions~\cite{chen2017deeplab} to retain a higher spatial resolution (Box~\ref{box1}). This allowed larger receptive fields for predictions, but retains a relatively high speed (i.e., for video analysis) but most importantly because ResNets can be pre-trained on ImageNet, those initialized weights could be used. Other architectures, like stacked hourglass networks~\cite{newell2016stacked} used in DeepFly3D~\cite{pavan2019} and DeepPoseKit~\cite{graving2019fast}, retain feature representations at multiple scales and pass those to the decoder (Figure~\ref{fig:model-architectures}A, B).
\begin{figure}[b]
\centering
\includegraphics[width=0.5\textwidth]{fig/figure4.jpg}
\caption{%
{\bf Schematic overview of possible design choices for model architectures and training process} {\bf(A)} A simple, but powerful variant~\cite{insafutdinov2016deepercut} is a ResNet-50~\cite{He_2016_CVPR} architecture adapted to replace the final down-sampling operations by atrous convolutions~\cite{chen2017deeplab} to keep a stride of 16, and then a single deconvolution layer to upsample to output maps with stride 8. It also forms the basis of other architectures (e.g.~\citealp{Xiao2018}). The encoder can also be exchanged for different backbones to improve speed or accuracy (see Box~\ref{box2}).
{\bf(B)} Other approaches like stacked hourglass networks~\cite{newell2016stacked}, are not pre-trained and employ skip connections between encoder and decoder layers to aid the up-sampling process. {\bf(C)} For training the network, the training data comprising input images and target heatmaps is used. The target heatmap is compared with the forward prediction. Thereby, the parameters of the network are optimized to minimize the loss that measures the difference between the predicted heatmap and the target heatmap (ground truth).
}
\label{fig:model-architectures}
\end{figure}
\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/figure5.jpg}
\caption{%
{\bf Multi-animal pose estimation approaches.} {\bf A}: Bottom-up approaches detect all the body parts (e.g. elbow and shoulder in example) as well as ``limbs'' (part confidence maps). These limbs are then used to associate the bodyparts within individuals correctly (Figure from OpenPose,~\citealp{cao2018openpose}). For both OpenPose and DeepLabCut, the bodyparts and part confidence maps, and part affinity fields (paf's) are predicted as different decoders (aka output heads) from the encoder.
{\bf B}: Top-down approaches localize individuals with bounding-box detectors and then directly predict the posture within each bounding box. This does not require part confidence maps, but is subject to errors when bounding boxes are wrongly predicted (see black bounding box encompassing two players in (c)). The displayed figures, adapted from Xiao et al.~\cite{Xiao2018}, improved this disadvantage by predicting bounding boxes per frame and forward predicting them across time via visual flow.}
\label{fig:bottom-up_top-down}
\end{figure*}
\subsection*{Loss functions: training architectures on datasets}
\justify Keypoints (i.e., bodyparts) are simply coordinates in image space. There are two fundamentally different ways for estimating keypoints (i.e., how to define the loss function). The problem can be treated as a regression problem with the coordinates as targets~\cite{ToshevDEEPPOSE, carreira2016human}. Alternatively, and more popular, the problem can be cast as a classification problem, where the coordinates are mapped onto a grid (e.g. of the same size as the image) and the model predicts a heatmap (scoremap) of location probabilities for each bodypart (Figure~\ref{fig:model-architectures}C).
In contrast to the regression approach~\cite{ToshevDEEPPOSE}, this is fully convolutional, and allows modeling of multi-modal distributions and aids the training process~\cite{tompson2014joint, newell2016stacked, insafutdinov2016deepercut, cao2018openpose}. Moreover, the heatmaps have the advantage that one can naturally predict multiple locations of the ``same'' bodypart in the same image (i.e., 2 elbows) without mode collapse (Figure~\ref{fig:bottom-up_top-down}A).
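Constructing such heatmap targets is simple; a minimal sketch that places a Gaussian bump at each keypoint location (image size and $\sigma$ are illustrative):
\begin{verbatim}
import numpy as np

def make_heatmap(x, y, height, width, sigma=2.0):
    """Target scoremap: a Gaussian bump centered on the keypoint."""
    xs = np.arange(width)[None, :]
    ys = np.arange(height)[:, None]
    d2 = (xs - x) ** 2 + (ys - y) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

# Two instances of the same bodypart (e.g., two elbows) are simply
# two bumps in the same map:
hm = np.maximum(make_heatmap(20, 30, 64, 64),
                make_heatmap(50, 10, 64, 64))
\end{verbatim}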
\medskip
Loss functions can also reflect additional priors or inductive biases about the data. For instance, DeepLabCut uses location refinement layers (locref), that counteract the downsampling inherent in encoders, by training outputs to predict corrective shifts in image coordinates relative to the downsampled output maps (Figure~\ref{fig:bottom-up_top-down}A). In pose estimation, it is possible to define a \emph{skeleton} or graph connecting keypoints belonging to subjects with the same identity (see below)~\cite{insafutdinov2016deepercut,cao2018openpose}. When estimating keypoints over time, it is also possible to employ temporal information and encourage the model to only smoothly vary its estimate among consecutive frames~\cite{insafutdinov2017cvpr,yao2019monet, xu2020eventcap,zhou2020monocular}. Based on the problem, these priors can be directly encoded and be used to regularize the model.
\medskip
How can pose estimation algorithms accommodate multiple individuals? Fundamentally, there are two different approaches: bottom-up and top-down methods (Figure~\ref{fig:bottom-up_top-down}). In top-down methods, individuals are first localized (often with another neural network trained on object localization) then pose estimation is performed per localized individual~\cite{Xiao2018,newell2016stacked,sun2019deep}. In bottom-up methods all bodyparts are localized, and networks are also trained to predict connections of bodyparts within individuals (i.e., limbs). These connections are then used to link candidate bodyparts to form individuals~\cite{cao2018openpose, insafutdinov2017cvpr,kreiss2019pifpaf,cheng2020higherhrnet}. To note, these techniques can be used on single individuals for increased performance, but often are not needed and usually imply reduced inference speed.
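The grouping step of bottom-up methods can be caricatured with a toy example; in the sketch below, an inverse-distance score stands in for the learned limb (part affinity) score, and candidates are linked greedily:
\begin{verbatim}
import numpy as np

shoulders = np.array([[100., 80.], [300., 90.]])   # candidates (x, y)
elbows    = np.array([[120., 140.], [310., 150.]])

# Toy limb score: larger for closer pairs (a stand-in for PAFs).
score = 1.0 / (1e-6 + np.linalg.norm(
    shoulders[:, None, :] - elbows[None, :, :], axis=-1))

# Greedy assignment: repeatedly link the best remaining pair.
pairs, s = [], score.copy()
for _ in range(min(len(shoulders), len(elbows))):
    i, j = np.unravel_index(np.argmax(s), s.shape)
    pairs.append((i, j))
    s[i, :], s[:, j] = -np.inf, -np.inf
print(pairs)   # shoulder-elbow pairs forming two individuals
\end{verbatim}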
\subsection*{Optimization}
\justify For pre-training, stochastic gradient descent (SGD; \citealp{bottou2010large}) with momentum~\cite{sutskever2013importance} is an established method.
Different variants of SGD are now common (such as Adam;~\citealp{kingma2014adam}) and used for fine-tuning the resulting representations.
As mentioned above, pose estimation algorithms are typically trained in a multi-stage setup where the backbone is trained first on a large (labeled) dataset of a potentially unrelated task (like image classification). Users can also download these pre-trained weights. Afterwards, the model is fine-tuned on the pose-estimation task. Once trained, the quality of the prediction can be judged in terms of the root mean square error (RMSE), which measures the distance between the ground truth keypoints and predictions~\cite{mathis2018deeplabcut,pereira2019fast}, or by measuring the percentage of correct keypoints (PCK,~\citealp{andriluka20142d, mathis2019TRANSFER}); i.e., the fraction of detected keypoints that fall within a defined distance of the ground truth.
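Both metrics are straightforward to compute from predicted and ground truth coordinates; a minimal sketch following one common convention (arrays of shape (n\_keypoints, 2), threshold in pixels):
\begin{verbatim}
import numpy as np

def rmse(pred, gt):
    """Root mean square Euclidean error over keypoints (pixels)."""
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1)))

def pck(pred, gt, threshold=5.0):
    """Fraction of keypoints within `threshold` pixels of truth."""
    dist = np.linalg.norm(pred - gt, axis=-1)
    return np.mean(dist <= threshold)
\end{verbatim}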
\medskip
To properly estimate model performance in an application setting, it is advisable to split the labeled dataset at least into train and test subsets.
If systematic deviations can be expected in the application setting (e.g., because the subjects used for training the model differ in appearance from subjects encountered at model deployment~\cite{mathis2019TRANSFER}), this should be reflected when choosing a way to split the data. For instance, if data from multiple individuals is available, distinct individuals should form distinct subsets of the data.
On the contrary, strategies like splitting data by selecting every \textit{n}-th frame in a video likely overestimates the true model performance.
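A split that holds out whole individuals is easy to implement; a minimal sketch, assuming each labeled frame carries a (hypothetical) individual identifier:
\begin{verbatim}
def split_by_individual(frames, test_ids):
    """Hold out all frames of selected individuals for testing."""
    train = [f for f in frames if f["individual"] not in test_ids]
    test  = [f for f in frames if f["individual"] in test_ids]
    return train, test

# In contrast, taking every n-th frame of one video as test data
# yields near-duplicate train/test images and inflates performance.
\end{verbatim}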
\medskip
The model is then optimized on the training dataset, while performance is monitored on the validation (test) split.
If needed, hyperparameters---like parameter settings of the optimizer, or also choices about the model architecture---of the model can be adapted based on an additional validation set.
\medskip
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{fig/figure6.jpg}
\caption{An overview of the workflow for deep learning based pose estimation, which highlights several critical decision points.}
\label{fig:workflow}
\end{figure*}
\medskip
\input{table1.tex}
All of the aforementioned choices influence the final outcome and performance of the algorithm. While some parts of the training pipeline are well-established and robust---like pre-training a model on ImageNet---choices about the dataset, architecture, augmentation, fine-tuning procedure, etc. will inevitably influence the quality of the pose estimation algorithm (Box~\ref{box2}). See Figure~\ref{fig:AUG} for a qualitative impression of augmentation effects of some of these decisions (see also Figure~\ref{fig:AUG2}). We will discuss this in more detail in the Pitfalls section.
\medskip
So far, we considered algorithms able to infer 2D keypoints from videos, by training deep neural networks on previously labeled data. Naturally, there is also much work in computer vision and machine learning towards the estimation of 3D keypoints from 2D labels, or to directly infer 3D keypoints. In the interest of space, we had to omit those, but refer the interested reader to~\cite{martinez2017simple, Mehta2017_3D, TomeRA17,chen20173d,yao2019monet} as well as, specifically for neuroscience, to~\cite{yao2019monet, pavan2019, nath2019deeplabcut, Zimmermann2020, karashchuk2020anipose, Bala2020}.
\medskip
Lastly, it is not understood how CNNs make decisions and they often find ``shortcuts''~\cite{geirhos2020shortcut}. While this active research area is certainly beyond the scope of this primer, from practical experience we know that at least within-domain---i.e., data that is similar to the training set---DNNs work very well for pose estimation, which is the typical setting relevant for downstream applications in neuroscience. It is worth noting that in order to optimize performance, there is no one-size-fits-all solution. Thus, we hope by building intuition in users of such systems, we provide the necessary tools to make these decisions with more confidence (Figure~\ref{fig:workflow}).
\section*{Scope and applications}
Markerless motion capture can excel in complicated scenes, with diverse animals, and with any camera available (monochrome, RGB, depth cameras, etc.). The only real requirement is that a human can reliably label the keypoints (manually or via alternative sources). Simply put, you need to be able to see what you want to track. Historically, due to limitations in computer vision algorithms, experimentalists would go to great lengths to simplify the environment, even in the laboratory (i.e., no bedding, white or black walls, high contrast), and this is no longer required with deep learning-based pose estimation. Now, the aesthetics one might want for photographs or videos taken in daily life are the best option.
\medskip
Indeed, the field has been able to rapidly adopt these tools for neuroscience. Deep learning-based markerless pose estimation applications in the laboratory have already been published for flies~\cite{mathis2018deeplabcut, pereira2019fast, graving2019fast, pavan2019, karashchuk2020anipose,Liu2020optiflex}, rodents~\cite{mathis2018deeplabcut,MathisWarren2018speed, pereira2019fast, graving2019fast, arac2019deepbehavior, pavan2019,Zimmermann2020,Liu2020optiflex}, horses~\cite{mathis2019TRANSFER}, dogs~\cite{yao2019monet}, rhesus macaques~\cite{berger2020wireless, yao2019monet, Bala2020, labuguen2020macaquepose} and marmosets~\cite{ebina2019arm}; the original architectures were developed for humans~\cite{insafutdinov2016deepercut, newell2016stacked, cao2018openpose}. Outside of the laboratory, DeepPoseKit was used for zebras~\cite{graving2019fast} and DeepLabCut for 3D tracking of cheetahs~\cite{nath2019deeplabcut}, for squirrels~\cite{barrett2020manual} and macaques~\cite{labuguen2020macaquepose}, highlighting the great ``in-the-wild'' utility of this new technology~\cite{mathis2020deep}. As outlined in the principles section, and illustrated by these applications, these deep learning architectures are general-purpose and can be broadly applied to any animal and condition.
\medskip
Recent research highlights the prevalent representations of action across the brain~\cite{kaplan2020brain}, which emphasizes the importance of quantifying behavior even in non-motor tasks. For instance, pose estimation tools have recently been used to elucidate the neural variability across cortex in humans during thousands of spontaneous reach movements~\cite{peterson2020behavioral}.
Pupil tracking is of great importance for visual neuroscience. One recent study by Meyer et al. used head-fixed cameras and DeepLabCut to reveal two distinct types of coupling between eye and head movements~\cite{meyer2020two}. In order to accurately correlate neural activity to visual input, tracking the gaze is crucial. The recent large, open dataset from the Allen Institute includes imaging data of six cortical and two thalamic regions in response to various stimulus classes as well as pupil tracking with DeepLabCut~\cite{siegle2019survey}. The International Brain Lab has integrated DeepLabCut into their workflow to track multiple bodyparts of decision-making mice including their pupils~\cite{Harris2020dataIBL}.
\medskip
Measuring relational interactions is another major direction that has been explored less in the literature so far, but is feasible. Since the feature detectors for pose estimation are general in nature, one can easily track not only the posture of individuals but also the tools and objects they interact with (e.g., for analyzing golf or tennis). Furthermore, social behaviors and parenting interactions (for example in mice) can now be studied noninvasively.
\medskip
Due to the general capabilities, these tools have several applications for creating biomarkers by extracting high fidelity animal traits, for instance in the pain field~\cite{tracey2019composite} and for monitoring motor function in healthy and diseased conditions~\cite{micera2020advanced}.
DeepLabCut was also integrated with tools for x-ray analysis~\cite{laurence2020integrating}. For measuring joint center locations in mammals, arguably, x-ray is the gold standard. Of course, extracting body part locations from x-ray data poses its own challenges. A recent paper shared methodology to integrate DeepLabCut with XROMM, a popular analysis suite, to advance the speed and accuracy of x-ray based analysis~\cite{laurence2020integrating}.
\section*{How do the (current) packages work?}
Here we will focus on packages that have been used in behavioral neuroscience, but the general workflow for pose estimation in computer vision research is highly similar. What has made experimentalist-focused toolboxes different is that they provide essential code to generate and train on one's own datasets. Typically, what is available in computer vision focused pose estimation repositories is code to run inference (video analysis) and/or run training of an architecture for specific datasets around which competitions happen (e.g., MS COCO;~\citealp{lin2014microsoft} and MPII pose;~\citealp{andriluka20142d}). While these are two crucial steps, they are not sufficient to develop tailored neural networks for an individual lab or experimentalist. Thus, the ``barrier to entry'' is often quite high to use these tools. It requires knowledge of deep learning languages to build appropriate data loaders, data augmentation pipelines, and training regimes. Therefore, in recent years several packages have not only focused on animal pose estimation networks, but in providing users a full pipeline that allows for (1) labeling a customized dataset (frame selection and labeling tools), (2) generating test/train datasets, (3) data augmentation and loaders, (4) neural architectures, (5) code to evaluate performance, (6) run video inference, and (7) post-processing tools for simple readouts of the acquired machine-labeled data.
Thus far, around 10 packages have become available in the past 2 years~\cite{mathis2018deeplabcut, pereira2019fast, graving2019fast, pavan2019, arac2019deepbehavior, Zimmermann2020, Bala2020, Liu2020optiflex}. Each has focused on providing slightly different user experiences, modularity, available networks, and balances to the speed/accuracy trade-off for video inference. Several include their (adapted) implementations of the original DeepLabCut or LEAP networks as well~\cite{graving2019fast, Liu2020optiflex}. But the ones we highlight have the full pipeline delineated above as a principle and are open source, i.e., at minimum inference code is available (see Table~\ref{tab:packages}). The progress gained and challenges they set out to address (and some that remain) are reviewed elsewhere~\cite{mathis2020deep, kordingLimitations2019}. Here, we discuss collective aims of these packages (see also Figure~\ref{fig:workflow}).
\medskip
Current packages for animal pose estimation have focused on primarily providing tools to train tailored neural networks to user-defined features.
Because experimentalists need flexibility and are tracking very different animals and features, the most successful packages (in terms of user base as measured by citations and GitHub engagement) are species agnostic. However, given they are all based on advances from prior art in human pose estimation, the accuracy of any one package given the breadth of options that could be deployed (i.e., data augmentation, training schedules, and architectures) will remain largely comparable, if such tools are provided to the user. What will determine performance the most is the input training data provided, and how much capacity the architectures have.
\medskip
It is notable that using transfer learning has proven to be advantageous for better robustness (i.e., its ability to generalize, see~\citealp{mathis2018deeplabcut, mathis2019TRANSFER, arac2019deepbehavior}), which was first deployed by DeepLabCut (see Table~\ref{tab:packages}). Now, training on large animal-specific datasets has recently been made available in DeepLabCut as well (such as a horse pose dataset with >8,000 annotated images of 30 horses;~\citealp{mathis2019TRANSFER}). This allows the user to bypass the only manual part of curating and labeling ground truth data, and these models can directly be used for inference on novel videos. For DeepLabCut, this is an emerging community-driven effort, with external labs already contributing models and data\footnote{\href{http://modelzoo.deeplabcut.org}{modelzoo.deeplabcut.org}}.
\medskip
\input{box3}
In the future, having the ability to skip labeling and training and run video inference with robust models will lead to more reproducible and scalable research. For example, as we show in other sections of the primer, if the labeling accuracy is not of a high quality, and the data is not diverse enough, then the networks are not able to generalize to so-called ``out-of-domain'' data. If as a community we collectively build stable and robust models that leverage the breadth of behaviors being carried out in laboratories worldwide, we can work towards models that would work in a plug-and-play fashion. We anticipate that new datasets and models will become available in the next months to years.
\medskip
All packages, just like all applications of deep learning to video, prefer access to GPU computing resources (See Box~\ref{box:hardware}). On GPUs one experiences faster training and inference times but the code can also be deployed on standard CPUs or laptops. With cloud computing services, such as Google Colaboratory and JupyterLab, many pose estimation packages can simply be deployed on remote GPU resources. This still requires (1) knowledge about these resources, and (2) toolboxes providing so-called ``notebooks'' that can be easily deployed. But, given these platforms have utility beyond just pose estimation, they are worthwhile to learn about.
\medskip
For the non-GPU aspects, only a few packages have provided easy-to-use graphical user interfaces that allow users with no programming experience to use the tool (see Table~\ref{tab:packages}). Lastly, the available packages vary in their access to 3D tools, multi-animal support, and types of architectures available to the user, which is often a concern for speed and accuracy. Additionally, some packages have limitations on only allowing the same sized videos for training and inference, while others are more flexible. These are all key considerations when deciding which eco-system to invest in learning (as every package has taken a different approach to the API).
\medskip
Perhaps the largest barrier to entry for using deep learning-based pose estimation methods is managing the computing resources (See Box~\ref{box:hardware}, Box~\ref{box:software}). From our experience, installing GPU drivers and the deep learning packages (TensorFlow, PyTorch) that all these packages rely on is the biggest challenge. To this end, in addition to documentation that is ``user-focused'' (i.e., not just an API for programmers), resources like webinars, video tutorials, workshops, Gitter and community forums (like StackOverflow and Image Forum SC) have become invaluable resources for the modern neuroscientist. Here, users can ask questions and get assistance from developers and users alike. We believe this has also been a crucial step for the success of DeepLabCut.
\medskip
While some packages provide full GUI-based control over the packages, to utilize more advanced features at least minimal programming knowledge is ideal. Thus, better training for the increasingly computational nature of neuroscience will be crucial. Making programming skills a requirement of graduate training, building better community resources, and leveraging the fast-moving world of technology to harness those computing and user resources will be crucial. In animal pose estimation, while there is certainly an attempt to make many of the packages user-friendly, i.e., to onboard users and have a scalable discussion around common problems, we found user forums to be very valuable~\cite{rueden2019scientific}. Specifically, DeepLabCut is a member of the Scientific Community Image Forum\footnote{\href{https://forum.image.sc/}{forum.image.sc}} alongside other packages that are widely used for image analysis in the life sciences such as Fiji~\cite{schindelin2012fiji}, napari, CellProfiler~\cite{McQuin2018CellProfiler3N} Ilastik~\cite{sommer2011ilastik} and scikit-image~\cite{van2014scikit}.
\medskip
\input{box4}
\section*{Practical considerations for pose estimation (with deep learning)}
\justify As deep learning-powered pose estimation is a young field that is rapidly gaining traction, it is instructive to regard its operability in light of well-established, often gold standard, techniques.
\subsection*{General considerations and pitfalls}
\justify As discussed in {\it Scope and applications} and as evidenced by the strong adoption of the tools, deep learning-based pose estimation works well in standard setups with visible animals. The most striking advantage over traditional motion capture systems is the absence of any need for body instrumentation. Although seemingly obvious, the previous statement hides the belated recognition that marker-based motion capture suffers greatly from the wobble of markers placed on the skin surface. That behavior, referred to as ``soft tissue artifact'' among movement scientists and attributable to the deformation of tissues underneath the skin such as contracting muscles or fat, is now known to be the major obstacle to obtaining accurate skeletal kinematics~\footnote{Intra-cortical pins and biplane fluoroscopy give direct, uncontaminated access to joint kinematics. The first, however, is invasive (and entails careful surgical procedures; \citealp{ramsey2003methodological}) whereas the second is only operated in very constrained and complex laboratory settings~\cite{list2017moving}. Both are local to a specific joint, and as such do not strictly address the task of pose estimation.} \cite{camomilla2017}. To make matters worse, contaminated marker trajectories may be harmful in clinical contexts, potentially invalidating injury risk assessment (e.g.~\citealp{smale2017}). Although a multitude of numerical approaches exists to tackle this issue, the most common, yet incomplete, solution is multi-body kinematics optimization (or ``inverse kinematics'' in computer graphics and robotics;~\citealp{begon2018}). This procedure uses a kinematic model and searches for the body pose that minimizes in the least-squares sense the distance between the measured marker locations and the virtual ones from the model, while satisfying the constraints imposed by the various joints~\cite{lu1999bone}. Its accuracy is, however, decisively determined by the choice of the underlying model and its fidelity to an individual's functional anatomy~\cite{begon2018}. In contrast, motion capture with deep learning elegantly circumvents the problem by learning a geometry-aware representation of the body from the data to associate keypoints to limbs~\cite{cao2018openpose,insafutdinov2016deepercut,mathis2020deep}, which, of course, presupposes that one can avoid the ``soft tissue artifact'' when labeling.
\medskip
At present, deep learning-powered pose estimation can be poorly suited to evaluate rotation about a bone's longitudinal axis. This is a known problem from early markerless techniques based on visual hull extraction~\cite{ceseracciu2014comparison}. In marker-based settings, the problem has long been addressed by tracking clusters of at least three non-aligned markers to fully reconstruct a rigid segment's six degrees of freedom~\cite{spoor1980rigid}. Performing the equivalent feat in a markerless case is difficult, but it is possible by labeling multiple points (for instance on either side of the wrist to get the lower-arm orientation). Still, recent hybrid, state-of-the-art approaches jointly training under both position and orientation supervision augur very well for video-based 3D joint angle computation~\cite{xu2020eventcap,zhou2020monocular}.
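\medskip
To make the multi-point workaround concrete, the planar orientation of a segment bracketed by two labeled points can be computed in a few lines; the sketch below is our own illustrative \texttt{numpy} code (function and variable names are ours), not part of any package.
\begin{verbatim}
import numpy as np

def segment_angle(p_medial, p_lateral):
    # Planar orientation (radians) of a segment defined by two
    # keypoints, e.g. points on either side of the wrist.
    # Inputs: arrays of shape (n_frames, 2), in pixels.
    d = p_lateral - p_medial
    return np.arctan2(d[:, 1], d[:, 0])

# Unwrap to remove 2*pi jumps before differentiating in time:
# angles = np.unwrap(segment_angle(medial_xy, lateral_xy))
# angular_velocity = np.gradient(angles, 1.0 / fps)
\end{verbatim}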
\medskip
With the notable exception of approaches leveraging radio wave signals to predict body poses through walls~\cite{zhao2018through}, deep learning-powered motion capture requires that the individuals be visible; this is impractical for kinematic measurements over wide areas. A powerful alternative is offered by Inertial Measurement Units (IMUs)---low-cost and lightweight devices typically recording linear accelerations, angular velocities and the local magnetic field. Raw inertial data can be used for coarse behavior classification across species~\cite{kays2015terrestrial,chakravarty2019novel}. They can also be integrated to track displacement with lower power consumption and higher temporal resolution than GPS~\cite{bidder2015step}, thereby providing a compact and portable way to investigate whole body dynamics (e.g.~\citealp{wilson2018biomechanics}) or, indirectly, energetics~\cite{gleiss2011making}. Recent advances in the miniaturization of electronic components now also allow precise quantification of posture in small animals~\cite{pasquet2016wireless}, and open new avenues for kinematic recordings in multiple animals at once at fine motor scales.
\medskip
\begin{figure*}[h]
\centering
\includegraphics[width=.97\textwidth]{fig/figure7.jpg}
\caption{{\bf Labeling Pitfalls: How corruptions affect performance}
{\bf (A)} Illustration of two types of labeling errors. Top is ground truth, middle is missing a label at the tailbase, and bottom is if the labeler swapped the ear identity (left to right, etc.). {\bf (B)} Using a small dataset of 106 frames, how do the corruptions in A affect the percent of correct keypoints (PCK) as the distance to ground truth increases from 0 pixels (perfect prediction) to 20 pixels (larger error)? The X-axis denotes the difference in the ground truth to the predicted location (RMSE in pixels), whereas the Y-axis is the fraction of frames considered accurate (e.g., $\approx$80\% of frames fall within 9 pixels, even on this small training dataset, for points that are not corrupted, whereas for corrupted points this falls to $\approx$65\%). The fraction of the dataset that is corrupted affects this value. Shown is when missing the tailbase label (top) or swapping the ears in $1, 5, 10$ and $20\%$ of frames (of $106$ labeled training images). Swapping vs. missing labels has a more notable adverse effect on network performance.
}
\label{fig:corruption}
\end{figure*}
Nonetheless, IMU-based full body pose reconstruction necessitates multiple sensors over the body parts of interest; commercial solutions require up to 17 of them~\cite{roetenberg2009xsens}. That burden was recently eased by utilizing a statistical body model that incorporates anatomical constraints, together with optimizing poses over multiple frames to enforce coherence between the model orientation and IMU recordings---reducing the system down to six sensors while achieving stunning motion tracking~\cite{von2017sparse}. Yet, two additional difficulties remain. The first arises when fusing inertial data in order to estimate a sensor's orientation (for a comprehensive description of mathematical formalism and implementation of common fusion algorithms, see~\citealp{sabatini2011estimating}). The process is susceptible to magnetic disturbances that distort sensor readings and, consequently, orientation estimates~\cite{fan2018magnetic}. The second stems from the necessity to align a sensor's local coordinate system to anatomically meaningful axes, a step crucial (among others) to calculating joint angles (e.g.,~\citealp{lebleu2020lower}). The calibration is ordinarily carried out by having the subject perform a set of predefined movements in sequence, whose execution determines the quality of the procedure. Yet, in some pathological populations (let alone in animals), calibration may be challenging to say the least, deteriorating pose reconstruction accuracy~\cite{vargas2016imu}.
\medskip
A compromise to making the task less arduous is to combine videos and body-worn inertial sensors. Thanks to their complementary nature, incorporating both cues mitigates the limitations of each individual system; i.e., both modalities reinforce one another in that IMUs help disambiguate occlusions, whereas videos provide disturbance-free spatial information~\cite{gilbert2019fusing}. The idea also applies particularly well to the tracking of multiple individuals---even without the use of appearance features, advantageously---by exploiting unique movement signatures contained within inertial signals to track identities over time~\cite{henschel2019simultaneous}.
\subsection*{Pitfalls of using deep learning-based \\motion capture}
\justify Despite being trained on large scale datasets of thousands of individuals, even the best architectures fail to generalize to ``atypical'' postures (with respect to the training set). This is wonderfully illustrated by the errors committed by OpenPose on yoga poses~\cite{huang2019followmeup}.
\medskip
These domain shifts are major challenges (also illustrated below), and while this is an active area of research with much progress, the easiest way to make sure that the algorithm generalizes well is to label data that is similar to the videos at inference time. Moreover, thanks to the active learning implemented in many packages, users can manually refine the labels on ``outlier'' frames.
\medskip
Another major caveat of deep learning-powered pose estimation is arguably its intrinsic reliance on high-quality labeled images. This means that a labeled dataset that reflects the variability of the behavior should be used. If one -- due to the quality of the video -- cannot reliably identify body parts in still images (e.g., due to massive motion blur, uncertainty about body part (left/right leg crossing) or animal identity), then the video quality should be fixed, or sub-optimal results should be expected.
\medskip
To give readers a concrete idea about label errors, augmentation methods, and active learning, we also provide some simple experiments with shared code and data. Code for reproducing these analyses is available at~\href{https://github.com/DeepLabCut/Primer-MotionCapture}{github.com/DeepLabCut/Primer-MotionCapture}.
\medskip
To illustrate the importance of error-free labeling, we artificially corrupted labels from the trail-tracking dataset from Mathis et al.~\cite{mathis2018deeplabcut}. The corruptions respectively simulate inattentive labeling (e.g., with left--right bodyparts being occasionally confounded) and missing annotation, or uncertainty as to whether to label an occluded bodypart. We corrupted $1, 5, 10$ and $20\%$ of the dataset (N=1,066 images) either by swapping two labels or removing one, and trained on $10\%$ of the data ($106$ images). The effect of missing labels is barely noticeable (Figure~\ref{fig:corruption}B, top). Swapping labels, on the other hand, causes a substantial drop in performance, with an approximate 10\% loss in the percentage of correct keypoints (PCK) (Figure~\ref{fig:corruption}B, bottom). We therefore reason that careful labeling, more so than labeling a very large number of images, is the safest guard against poor ground truth annotations. We believe that explicitly modeling labeling errors, as done in Johnson and Everingham~\cite{johnson2011learning}, will be an active area of research and integrated in some packages.
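\medskip
A minimal version of this corruption experiment fits in a few lines; the sketch below is our own simplified illustration (see the repository above for the actual analysis code), covering both corruption modes and the PCK metric.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def corrupt(labels, frac, mode, parts):
    # labels: dict bodypart -> array (n_frames, 2); NaN = missing.
    n = next(iter(labels.values())).shape[0]
    idx = rng.choice(n, size=int(frac * n), replace=False)
    out = {k: v.copy() for k, v in labels.items()}
    if mode == "swap":   # inattentive labeling (left/right confusion)
        a, b = parts
        out[a][idx], out[b][idx] = labels[b][idx], labels[a][idx]
    else:                # missing annotation (e.g. occluded tailbase)
        out[parts[0]][idx] = np.nan
    return out

def pck(pred, truth, thresh_px):
    # Fraction of keypoints within thresh_px of the ground truth.
    err = np.linalg.norm(pred - truth, axis=-1)
    return np.nanmean(err <= thresh_px)
\end{verbatim}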
\medskip
Even if labeled well, augmentation greatly improves results and should be used. For instance, when training on the example dataset of (highly) correlated frames from one short video of one individual, the loss nicely plateaus and shows comparable train/test errors for three different augmentation methods (Figure~\ref{fig:AUG2}A, B). The three models also give good performance and generalize to a test video of a different mouse. However, closer inspection reveals that the ``scalecrop'' augmentation method, which only performs cropping and scaling during training~\cite{nath2019deeplabcut}, leads to swaps in bodyparts on the test video of the different mouse, given this small training set from only one mouse (Figure~\ref{fig:AUG2}C, D). The other two methods, which were configured to perform rotations of the training data, could robustly track the posture of the mouse. This discrepancy becomes striking when observing the PCK plots: imgaug and tensorpack outperform scalecrop by a margin of up to $\approx$ 30\% (Figure~\ref{fig:AUG2}E). One simple way to generalize to this additional case is by active learning~\cite{nath2019deeplabcut}, which is also available for some packages. Thereby one annotates additional frames with poor performance (outlier frames) and then trains the network from the final configuration, which thus only requires a few thousand iterations. Adding 28 annotated frames from the higher resolution camera, we get good generalization for test frames from both scenarios (Figure~\ref{fig:AUG2}F). Generally, this illustrates how the lack of diversity in training data leads to worse performance, which can be fixed by adding frames with poor performance (active learning).
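\medskip
For readers who wish to reproduce the gist of this comparison, a rotation-heavy augmentation pipeline that transforms images and keypoints together can be assembled with the \texttt{imgaug} library as sketched below; the operations and parameters are illustrative, not the exact configuration used for Figure~\ref{fig:AUG2}.
\begin{verbatim}
import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage

seq = iaa.Sequential([
    iaa.Affine(rotate=(-180, 180), scale=(0.8, 1.2)),  # symmetry
    iaa.Sometimes(0.3, iaa.MotionBlur(k=7)),  # fast movements
])

def augment(image, xy):
    # Apply the same geometric transform to an image and labels.
    kps = KeypointsOnImage([Keypoint(x=x, y=y) for x, y in xy],
                           shape=image.shape)
    img_aug, kps_aug = seq(image=image, keypoints=kps)
    return img_aug, [(kp.x, kp.y) for kp in kps_aug.keypoints]
\end{verbatim}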
\subsection*{Coping with pitfalls}
\justify Fortunately, dealing with the most common pitfalls is relatively straightforward, and mostly demands caution and common sense. Rules of thumb and practical guidelines are given in Box~\ref{box:pitfalls}. Video quality should be envisaged as a trade-off between storage limitations, labeling precision, and training speed; e.g., the lower the resolution of a video, the smaller the occupied disk space and the faster the training speed, but the harder it gets to consistently identify bodyparts. In practice, DeepLabCut was shown to be very robust to downsizing and video compression, with pose reconstruction degrading only after scaling videos down to a third of their original size or compression by a factor of 1000~\cite{MathisWarren2018speed}.
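\medskip
The robustness-to-resolution result cited above is easy to probe on one's own material; the OpenCV sketch below (our illustrative code) rescales a video so that inference can be tested on progressively smaller inputs.
\begin{verbatim}
import cv2

def downscale(src_path, dst_path, scale=0.5):
    # Spatially downscale a video by a given factor.
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) * scale)
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) * scale)
    out = cv2.VideoWriter(dst_path,
                          cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (w, h))
    ok, frame = cap.read()
    while ok:
        out.write(cv2.resize(frame, (w, h)))
        ok, frame = cap.read()
    cap.release()
    out.release()
\end{verbatim}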
\medskip
Body parts should be labeled reliably and consistently across frames that preferably capture a variety of behaviors. Note that some packages provide the user with means to automatically extract frames differing in visual content based on unsupervised clustering, which simplifies the selection of relevant images for sparse behaviors.
\medskip
Utilize symmetries for training with augmentation and try to include image augmentations that are helpful. Use the strongest model (given the speed requirements). Check performance and actively grow the training set if errors are found.
\medskip
\input{box5}
\begin{figure*}[b]
\centering
\includegraphics[width=.93\textwidth]{fig/figure8.jpg}
\caption{{\bf Data Augmentation Improves Performance}
Performance of three different augmentation methods on the same dataset of around 100 training images from one short video of one mouse (thus correlated). Scalecrop is configured to only change the scale, and randomly crop images; Imgaug also performs motion blur and rotation ($\pm 180^\circ$) augmentation. Tensorpack performs Gaussian noise and rotation ($\pm 180^\circ$) augmentation. {\bf (A)} Loss over training iterations has plateaued, and {\bf (B)} test errors in pixels appear comparable for all methods. {\bf (C)} Tail base aligned skeletons across time for a video of a different mouse (displayed as a cross connecting snout to tail and left ear to right ear). Note the swap of the ``T'' in the shaded gray zone (and overlaid on the image to the right in {\bf (D)}). Imgaug and tensorpack, which also included full $180^\circ$ rotations, work perfectly. This example highlights that utilizing the rotational symmetry of the data during training can give excellent performance (without additional labeling). {\bf (E)} Performance of the networks on different mice recorded with the same camera (top) and a different camera ($\approx$ 2.5x magnification; bottom). Networks trained with tensorpack and imgaug augmentation generalize much better, and in particular generalize very well to different mice. The generalization to the other camera is difficult, but also works better for tensorpack and imgaug augmentation. {\bf (F)} Performance of networks on same data as in (E), but after an active learning step, adding $28$ training frames from the higher resolution camera and training for a few thousand iterations. Afterwards, the network generalizes well to both scenarios.
}
\label{fig:AUG2}
\end{figure*}
Pose estimation algorithms can make different types of errors: jitter, inversion (e.g. left/right), swap (e.g. associating a body part to another individual) and miss~\cite{ruggero2017benchmarking}. Depending on the type of error, different causes need to be addressed (e.g., check the data quality for any human-applied mistakes~\cite{mathis2018deeplabcut}, or use suitable augmentation methods). In some cases post-processing filters can be useful (such as Kalman filters), as can graphical models or other methods that learn the geometry of the bodyparts. We also believe that future work will explicitly model labeling errors during training.
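\medskip
As a hedged illustration of such post processing, the sketch below masks low-confidence detections, interpolates the gaps, and median-filters the trajectory; the threshold and kernel size are arbitrary choices, and a Kalman filter or a learned geometric model could stand in for the median filter.
\begin{verbatim}
import numpy as np
from scipy.signal import medfilt

def clean_trajectory(xy, conf, conf_min=0.6, kernel=5):
    # xy: (n_frames, 2) keypoint positions; conf: (n_frames,)
    # network confidence. Suppresses jitter and isolated swaps.
    xy = xy.copy()
    xy[conf < conf_min] = np.nan
    t = np.arange(len(xy))
    for d in range(2):
        good = ~np.isnan(xy[:, d])
        xy[:, d] = np.interp(t, t[good], xy[good, d])
        xy[:, d] = medfilt(xy[:, d], kernel_size=kernel)
    return xy
\end{verbatim}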
\section*{What to do with motion capture data?}
Pose estimation with deep learning relieves the user of the painfully slow digitization of keypoints. With markerless tracking, one only needs to annotate a much smaller dataset, and the trained network can be applied to new videos. Pose estimation also serves as a springboard to a plethora of other techniques. Indeed, many new tools are specifically being developed to aid users of pose estimation packages to analyze movement and behavioral outputs in a high-throughput manner. Plus, many such packages existed pre-deep learning and can now be leveraged with this new technology as well. While the general topic of what to do with the data is beyond this primer, we will provide a number of pointers. These tools fall into three classes: time series analysis, supervised, and unsupervised learning tools.
\medskip
A natural step ahead is the quantitative analysis of the keypoint trajectories. The computation of linear and angular displacements, as well as their time derivatives, lays the ground for detailed motor performance evaluation---a great introduction to elementary kinematics can be found in~\cite{Winter2009}, and a thorough description of 151 common metrics is given in~\cite{schwarz2019systematic}. These have a broad range of applications, of which we highlight a system for assessing >30 behaviors in groups of mice in an automated way~\cite{de2019real}, or an investigation of the evolution of gait invariants across animals~\cite{catavitello2018kinematic}. Furthermore, kinematic metrics are the basis from which to deconstruct complex whole-body movements into interpretable motor primitives, non-invasively probing neuromuscular control~\cite{longo2019biomechanics}. Unsupervised methods such as clustering~\cite{Pedregosa2011}, MotionMapper~\cite{Berman2014}, MoSeq~\citep{wiltschko2015mapping}, or variational autoencoders~\cite{luxem2020identifying} allow the extraction of common ``kinematic behaviors'' such as turning, running, and rearing. Supervised methods allow the prediction of human-defined labels such as ``attack'' or ``freezing.'' For this, general purpose tools such as scikit-learn~\cite{Pedregosa2011} can be ideal, or tailored solutions with integrated GUIs such as JAABA can be used~\citep{kabra2013jaaba}. Sturman et al. have developed an open source package to utilize motion capture outputs together with classifiers to automate human annotations for various behavioral tests (open field, elevated plus maze, forced swim test). They showed that these open source methods outperform commercially available platforms~\cite{sturman2020deep}.
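\medskip
Elementary kinematic features of the kind discussed above can be computed directly from keypoint trajectories; the short sketch below (with our own illustrative function names) yields speed and angular velocity, which could in turn feed, e.g., a scikit-learn classifier for supervised behavior labels.
\begin{verbatim}
import numpy as np

def kinematics(xy, fps):
    # xy: (n_frames, 2) keypoint trajectory in consistent units.
    dt = 1.0 / fps
    vel = np.gradient(xy, dt, axis=0)    # velocity components
    speed = np.linalg.norm(vel, axis=1)  # linear speed
    heading = np.unwrap(np.arctan2(vel[:, 1], vel[:, 0]))
    angular_velocity = np.gradient(heading, dt)
    return speed, angular_velocity
\end{verbatim}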
\medskip
Kinematic analysis, together with simple principles derived from physics, also allows the calculation of the energy required to move about, a methodology relevant to understanding the mechanical determinants of the metabolic cost of locomotion (e.g.~\citealp{saibene2003biomechanical}) or informing the design of bio-inspired robots (e.g.~\citealp{li2017mechanical,nyakatura2019reverse}).
\subsection*{Modeling and motion understanding}
\justify Looking forward, we also expect that the motion capture data will be used to learn task-driven and data-driven models of the sensorimotor as well as the motor pathway. We have recently provided a blueprint combining human movement data, inverse kinematics, biomechanical modeling and deep learning~\cite{sandbrink2020task}. Given the complexity of movement, as well as the highly nonlinear nature of the sensorimotor processing~\cite{madhav2020synergy, nyakatura2019reverse}, we believe that such approaches will be fruitful to leverage motion capture data to gain insight into brain function.
\section*{Perspectives}
As we highlighted thus far in this primer, markerless motion capture has reached a mature state in only a few years due to the many advances in machine learning and computer vision. While there are still some challenges left~\cite{mathis2020deep}, this is an active area of research, and advances in training schemes (such as semi-supervised and self-supervised learning) and model architectures will provide further improvements and require even less manual labor. Essentially, now every lab can train appropriate algorithms for their application and turn videos into accurate measurements of posture. If setups are sufficiently standardized, these algorithms already broadly generalize, even across multiple laboratories as in the case of the International Brain Lab~\cite{Harris2020dataIBL}. But how do we get there, and how do we make sure the needs of animal pose estimation for neuroscience applications are met?
\subsection*{Recent developments in deep learning}
\justify Innovations in the field of object recognition and detection affect all aforementioned parts of the algorithm, as we discussed already in the context of using pre-trained representations. An emerging relevant research direction in machine learning is large scale semi-supervised and self-supervised representation learning (SSL). In SSL, the problem of pre-training representations is no longer dependent on large labeled datasets, as introduced above. Instead, even larger databases comprising unlabeled examples---often multiple orders of magnitude larger than the counterparts used in supervised learning---can be leveraged.
A variety of SSL algorithms are becoming increasingly popular in all areas of machine learning.
Recently, representations obtained by large-scale self-supervised pre-training began to approach or even surpass performance of the best supervised methods.
Various SSL methods \cite{oord2018representation, logeswaran2018efficient, wu2018unsupervised, henaff2019data, tian2019contrastive, hjelm2018learning, bachman2019learning, he2019momentum, chen2020simple} made strides in image recognition \cite{chen2020simple}, speech processing \citep{schneider2019wav2vec,baevski2019vq,baevski2020wav2vec,ravanelli2020multi} and NLP~\cite{devlin2019bert,Liu2019roberta}, already starting to outperform models obtained by supervised pre-training on large datasets. Considering that recent SSL models for computer vision continue to be shared openly (e.g.~\citealp{xie2020noisy,chen2020simple}), they can be expected to impact and improve new model development in pose estimation, especially if merely replacing the backend model is required.
On top of this, SSL methods can be leveraged in end-to-end models for estimating keypoints and poses directly from raw, unlabeled video \cite{umer2020self, tung2017self, kocabas2019self}.
Approaches based on graph neural networks \cite{scarselli2008graph} can encode priors about the observed structure and model correlations between individual keypoints and across time \cite{cai2019exploiting}. For some applications (like modeling soft tissue or volume) full surface reconstructions are needed, and this area has seen tremendous progress in recent years~\cite{guler2018densepose,sanakoyeu2020transferring, Zuffi2019ICCV}. Such advances can be closely watched and incorporated in neuroscience, but we also believe neuroscience is ready to innovate in this domain too.
\subsection*{Pose estimation specifically for neuroscience}
\justify The goals of human pose estimation---aside from the purely scientific advances for object detection---range from person localization in videos, self-driving cars and pedestrian safety, to socially aware AI, and are related to, but do differ from, the applied goals of animal pose estimation in neuroscience. Here, we want tools that give us the highest precision, with the most rapid feedback options possible, and we want to train on small datasets but have them generalize well. This is a tall order, but so far we have seen that the glass is (arguably more than) half full. How do we meet these goals going forward? While much research is still required, there are essentially two ways forward: datasets and associated benchmarks, and algorithms.
\subsection*{Neuroscience needs (more) benchmarks}
\justify In order to push the field towards innovations in areas the community finds important, setting up benchmark datasets and tasks will be crucial (i.e., the animal version of ImageNet). The community can work towards sharing and collecting data of relevant tasks and curating it into benchmarks. This also has the opportunity of shifting the focus in computer vision research: instead of ``only'' doing human pose estimation, researchers will likely start evaluating on datasets directly relevant to the neuroscience community. Indeed there has been a recent interest in more animal-related work at top machine learning conferences~\cite{khan2020animalweb, sanakoyeu2020transferring}, and providing proper benchmarks for such approaches would be ideal.
\medskip
For animals, such efforts are developing: Khan et al. recently shared a dataset comprising 22.4K annotated faces from 350 diverse species~\cite{khan2020animalweb} and Labuguen announced a dataset of 13K annotated macaques~\cite{labuguen2020macaquepose}.
We recently released two benchmark datasets that can be evaluated for state-of-the-art performance~\footnote{\href{https://paperswithcode.com}{paperswithcode.com}} on within-domain and out-of-domain data~\footnote{\href{http://horse10.deeplabcut.org}{horse10.deeplabcut.org}}. The motivation is to train on a limited number of individuals and test on held-out animals (the so-called ``out-of-domain'' issue)~\cite{mathis2019TRANSFER, mathisimagenet2020}. We picked horses due to the variation in coat colors (and provide >8K labeled frames). Secondly, to directly study the inherent shift in domain between individuals, we set up a benchmark for common image corruptions, as introduced by Hendrycks et al.~\cite{Hendrycks2019}, using the image corruptions library proposed by Michaelis et al.~\cite{michaelis2019dragon}.
\medskip
Of course these aforementioned benchmarks are not sufficient to cover all the needs of the community, so we encourage consortium-style efforts to also curate data and provide additional benchmarks. Plus, making robust networks is still a major challenge, even when trained with large amounts of data~\cite{beery2018recognition, geirhos2020shortcut}. In order to make this a possibility it will be important to develop and share common keypoint estimation benchmarks for animals as well as expand the human ones to applications of interest, such as sports~\cite{huang2019followmeup}.
\subsection*{Sharing Pre-trained Models}
\justify We believe another major step forward will be sharing pre-trained pose estimation networks. If as a field we were to annotate sufficiently diverse data, we could train more robust networks that broadly generalize. Such success is suggested by other large scale datasets such as MS COCO~\cite{lin2014microsoft} and MPII pose~\cite{andriluka20142d}.
In the computer vision community, sharing model weights such that models do not need to be retrained has been critical for progress. For example, the ability to download pre-trained ImageNet weights is invaluable---training on ImageNet from scratch on a standard GPU can take more than a week. Now, the weights are downloaded within a few seconds and fine-tuned in packages like DeepLabCut.
However, even for custom training setups, sharing of code and easy access to cloud computing resources enables smaller labs to train and deploy models without investment in additional lab resources.
Pre-training a typical object recognition model on the ILSVRC is now possible on the order of minutes for less than 100 USD \cite{coleman2017dawnbench} thanks to high-end cloud computing, which is also feasible for labs lacking the necessary on-site infrastructure (Box~\ref{box:hardware}).
\medskip
In neuroscience, we should aim to fine-tune even those models; namely, sharing of mouse-specific or primate-specific weights will drive interest and momentum from researchers without access to such data, and further drive innovations. Currently, only DeepLabCut provides model weights (albeit not at the time of the original publication) as part of the recently launched Model Zoo (\href{http://modelzoo.deeplabcut.org/}{modelzoo.deeplabcut.org}). At present it contains models trained on MPII pose~\cite{insafutdinov2016deepercut}, dog and cat models, as well as contributed models for primate facial recognition, primate full body recognition~\cite{labuguen2020macaquepose} and mouse pupil detection (Figure~\ref{fig:workflow}). Researchers can also contribute in a citizen-science fashion by labeling data on the web (\href{http://contrib.deeplabcut.org}{contrib.deeplabcut.org}) or by submitting models.
\medskip
Both datasets and models will benefit from common formatting to ease sharing and testing. Candidate formats are HDF5 (also chosen by NeuroData Without Borders~\cite{teeters2015neurodata} and DeepLabCut), TensorFlow data\footnote{%
\href{https://www.tensorflow.org/api_docs/python/tf/data}{tensorflow.org/api\_docs/python/tf/data}},
and/or PyTorch data\footnote{%
\href{https://pytorch.org/docs/stable/torchvision/datasets.html}{pytorch.org/docs/stable/torchvision/datasets.html}}. Specifically, for models, proto-buffer formats for weights are useful and easy to share~\cite{Kane2020dlclive, lopes2015bonsai} for deployment to other systems.
Platforms such as OSF and Zenodo allow banking of weights, and some papers (e.g.~\citealp{barrett2020manual, sturman2020deep}) have also shared their trained models. We envision that having easy-to-use interfaces to such models will be possible in the future.
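\medskip
As a sketch of what a common format looks like in practice, pose tables can be round-tripped through HDF5 with \texttt{pandas}; the column layout below is illustrative of a DeepLabCut-style table, not a formal specification.
\begin{verbatim}
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product(
    [["snout", "tailbase"], ["x", "y", "likelihood"]],
    names=["bodyparts", "coords"])
predictions = np.random.rand(100, 6)  # placeholder predictions
df = pd.DataFrame(predictions, columns=cols)
df.to_hdf("poses.h5", key="poses")    # requires PyTables
df_back = pd.read_hdf("poses.h5", key="poses")
\end{verbatim}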
\medskip
These pre-trained pose estimation networks hold several promises: they save time and energy (as different labs do not need to annotate and train networks), and they contribute to reproducibility in science. Like many other forms of biological data, such as genome sequences or functional imaging data, behavioral data are notoriously hard to analyze in standardized ways. Lack of agreement can lead to different results, as pointed out by a recent landmark study comparing the results achieved by 70 independent researchers analyzing nine hypotheses in shared imaging data~\cite{botvinik2020variability}. To increase reproducibility in behavioral science, video is a great tool~\cite{gilmore2017video}. Analyzing behavioral data is complex, owing to its unstructured, large-scale nature, which highlights the importance of shared analysis pipelines. Thus, building robust architectures that extract the same behavioral measurements in different laboratories would be a major step forward.
\section*{Conclusions}
Deep learning based markerless pose estimation has been broadly and rapidly adopted in the past two years. This impact was, in part, fueled by open-source code: by developing and sharing packages in public repositories on GitHub, they could be easily accessed for free and at scale. These packages are built on advances (and code) in computer vision and AI, which has a strong open science culture. Neuroscience also has a strong and growing open science culture~\cite{white2019future}, which greatly impacts the field as evidenced by tools from the Allen Institute, the UCLA Miniscope~\cite{aharoni2019all}, OpenEphys~\cite{siegle2017open}, and Bonsai~\cite{lopes2015bonsai} (just to name a few).
\medskip
Moreover, Neuroscience and AI have a long history of influencing each other~\cite{hassabis2017neuroscience}, and research in Neuroscience will likely contribute to making AI more robust~\cite{SINZ2019, hassabis2017neuroscience}. The analysis of animal motion is a highly interdisciplinary field at the intersection of biomechanics, computer vision, medicine and robotics with a long tradition~\cite{klette2008understanding}. The recent advances in deep learning have greatly simplified the measurement of animal behavior, which, as we and others believe~\cite{krakauer2017neuroscience}, in turn will greatly advance our understanding of the brain.
\begin{flushleft}
\textbf{Acknowledgments:}
\end{flushleft}
We thank Yash Sharma for discussions around future directions in self-supervised learning, and Erin Diel, Maxime Vidal, Claudio Michaelis, and Thomas Biasi for comments on the manuscript.
Funding was provided by the Rowland Institute at Harvard
University (MWM, AM),
the Chan Zuckerberg Initiative (MWM, AM, JL)
and the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (StS; FKZ: 01IS18039A).
StS thanks the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and acknowledges his membership in the European Laboratory for Learning \& Intelligent Systems (ELLIS) PhD program.
The authors declare no conflicts of interest. M.W.M. dedicates this work to Adam E. Max.
\section*{References}
\input{manuscript.bbl}
\end{document}
\label{sec: introduction}
Recent experimental observations of the long-range azimuthal correlations in
high-multiplicity proton-proton (p+p) \cite{Khachatryan:2010gv} and
proton-nucleus (p+A) collisions \cite%
{CMS:2012qk,Abelev:2012ola,Aad:2012gla,Adare:2013piz} shed some new light on
our understanding of {\it fireballs} created in such interactions.
The measured two-particle correlation function as a function of the
pseudorapidity separation, $\Delta \eta =\eta _{1}-\eta _{2}$, and the
relative azimuthal angle, $\Delta \phi =\phi _{1}-\phi _{2}$, of two
particles demonstrates a great deal of similarity to that measured in peripheral heavy-ion collisions \cite%
{Chatrchyan:2013nka}. In particular, two particles separated by many units
of pseudorapidity prefer to have similar azimuthal angles thus the
two-particle correlation function is peaked at $\Delta \phi =0$. Exactly the
same phenomenon was observed in heavy-ion collisions where it is believed to
originate from hydrodynamical evolution present in such interactions \cite%
{Florkowski:book}. In this picture the initial anisotropic distribution of
matter, characterized e.g. by ellipticity, is translated to the final momentum
anisotropy with $\cos (2\Delta \phi )$ term (and higher harmonics) in the
correlation function. However, the applicability of hydrodynamics to small
systems, such as the ones created in p+p and p+A interactions, is questionable
and so far there is no consensus in this matter. Nevertheless, hydrodynamics%
\footnote{%
It should be noted that the long-range rapidity structure is put by hand into
hydrodynamic calculations.} applied to p+p and p+A collisions results in
qualitative and partly quantitative understanding of various sets of data %
\cite%
{Bozek:2011if,Shuryak:2013ke,Bozek:2013uha,Bzdak:2013zma,Qin:2013bha,Werner:2013tya,Bozek:2013ska}%
. On the other hand, the Color Glass Condensate \cite{Gelis:2010nm}, the
effective description of low-x gluons in the hadronic/nuclear wave function,
results in an equally good description of the two-particle correlation functions %
\cite{Dusling:2013oia} (see also \cite{Kovchegov:2012nd,Kovner:2012jm} for a more qualitative discussion).
The advantage of the CGC approach over hydrodynamics is its microscopic character
and internal consistency. On the other hand, hydrodynamics naturally
describes various sets of data for which the CGC predictions are often not
clear. Moreover, hydrodynamics provides a solid intuitive understanding of the observed signal which is not the case for the CGC. To summarize, at present we have two competing languages\footnote{%
In Ref. \cite{Gelis:2013rba} both physical pictures are argued to be rather connected.} to understand small systems and it is crucial to
establish the true origin of the long-range azimuthal correlation. Several
observables and arguments \cite%
{Bzdak:2013zla,Bozek:2013sda,Coleman-Smith:2013rla,Bjorken:2013boa,McLerran:2013oju,Rezaeian:2013woa,Basar:2013hea, Bzdak:2013rya,Yan:2013laa,Bzdak:2013raa,Konchakovski:2014wqa,Bzdak:2013lva,Sickles:2013yna,Noronha:2014vva}
were recently put forward which hopefully can help to resolve this interesting
issue.
In this paper, we calculate the two-particle density function, $N^{\mathrm{%
pair}}(\Delta \eta ,\Delta \phi )$, in p+p and p+Pb collisions assuming the
incoherent elastic scattering of partons, as present in a multi-phase transport
model (AMPT) \cite{Lin:2004en}. This approach is simple and intuitive, and
more importantly is closely related to quantum chromodynamics (QCD). The
cascade model with the reasonable parton-parton cross-section, $\sigma =1-10$
mb, was proved to be very successful in understanding many features of heavy-ion
collision data, see e.g. \cite{Ma:2010dv,Xu:2011fi,Solanki:2012ne,Ma:2013pha}. This
approach has one crucial advantage over hydrodynamics, namely, there is no
need to assume local thermalization. So far such a calculation was not
published and it is important to establish whether a simple incoherent
scattering of partons with a reasonable partonic cross-section can generate the long-range structure in p+p and p+A two-particle correlation
functions.\footnote{%
We note that a negative result was reported by the CMS Collaboration in
Ref. \cite{CMS:2012qk}. Our results contradict their conclusion.}
Our main result is that the incoherent elastic scattering of partons, with a
partonic cross-section of $\sigma =1.5 - 3$ mb, naturally generates the
long-range azimuthal correlation of charged particles both in p+p and p+A collisions.
A near-side peak at $\Delta \phi =0$ grows with the growing number of
produced particles due to the growing density of partons, and consequently
the larger number of partonic scatterings. The $p_{T}$ dependence of the
near-side peak is also reproduced, that is, the signal at $\Delta \phi =0$ is
best visible for $1<p_{T}<2$ GeV/$c$.
In the next section we give a brief introduction to the AMPT model. In Section \ref{sec:results} we present our results for the two-particle correlation functions in p+p and p+A collisions for various multiplicity and $p_{T}$
bins. We finish our paper with comments in Section~\ref{sec:comments} and conclusions in Section~\ref{sec:conclusions}.
\section{Model}
\label{sec:model}
The AMPT model with string melting mechanism is employed in this work (for comparison we also show some results obtained in the default model). It is initialized with a spatial and momentum distribution of minijet partons and soft string excitations from the HIJING model~\cite{Wang:1991hta}. The string melting mechanism converts all excited strings into quarks and antiquarks according to the flavor and spin structures of their valence quarks (in contrast to the default AMPT model, where only partons from minijets are present). The evolution of the quark-antiquark plasma\footnote{In our context we only need partonic scatterings, and the composition of the partonic matter is less important.} is modeled by a simple parton cascade. At present, the parton cascade includes only two-body elastic scatterings with a cross-section obtained from pQCD with a screening mass~\cite{Zhang:1997ej}. Clearly this is a simplified picture; however, we believe it captures the main
features of parton dynamics present at the early stage of a collision. The parton cascade is
followed by the hadronization, where quarks are recombined into hadrons via a simple coalescence model.
Finally, the dynamics of the subsequent hadronic matter is described by a relativistic transport model~\cite{Li:1995pra}.
For more details on the AMPT model we refer the reader to Ref. \cite{Lin:2004en}.
Recent AMPT studies show that a partonic cross-section of $1.5$ mb can describe many experimental observables at the LHC~\cite{Xu:2011fi,Xu:2011fe,Ma:2013bia,Ma:2013pha}. In particular, it was found that the long-range azimuthal correlation can be produced by the parton scatterings in Pb+Pb collisions at $\sqrt{s}=2.76$ TeV~\cite{Xu:2011jm}.
\section{Results}
\label{sec:results}
To directly compare our results with the CMS data we select events with
different values of the number of produced charged particles, $N_{\mathrm{track}}$.
In Figure \ref{fig:P_Ntrack} we present the multiplicity distributions, $P(N_{%
\mathrm{track}})$, in p+p collisions at $\sqrt{s}=7$ TeV and p+Pb interactions at $\sqrt{s}=5.02$
TeV, for charged particles produced in $|\eta |<2.4$ and $p_{T}>0.4$ GeV/$c$. Both
multiplicity distributions are in reasonable agreement with the CMS data%
\footnote{%
We do not compare directly with the CMS data since their $N_{\mathrm{track}%
}^{\mathrm{offline}}$ is not exactly our $N_{\mathrm{track}}$.}, see e.g. %
\cite{talk}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.44]{fig1.eps}
\end{center}
\par
\vspace{-5mm}
\caption{The multiplicity distribution calculated in AMPT, $P(N_{\mathrm{track}})$, as a function of the number of produced particles, $N_{\mathrm{track}}$, in p+p collisions at $\protect\sqrt{s}=7$ TeV, and p+Pb collisions at $\protect\sqrt{s}=5.02$ TeV for charged particles produced in $|\protect\eta |<2.4$ and $p_{T}>0.4$ GeV/$c$. }
\label{fig:P_Ntrack}
\end{figure}
Before we present our main results, it is instructive to illustrate the
initial parton distribution in the transverse plane in p+p and p+A
collisions with $N_{\mathrm{track}}>110$.
As seen in Figure \ref{fig:contour} the initial size of a system in p+p is roughly a factor of $2$ smaller than that in p+A. We checked that in a p+p collision partons are produced mainly in the overlap region of the two colliding protons, leading to a characteristic elliptical shape in a typical p+p event. In a p+A collision, the produced partons are localized in a few spots corresponding to the positions of the wounded nucleons \cite{Bialas:1976ed}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{fig2.eps}
\end{center}
\par
\vspace{-5mm}
\caption{The initial parton distribution in a p+p collision (left panel) and a p+Pb collision (right panel) for two typical AMPT events (with string melting mechanism) with the number of produced charged particles, $N_{\mathrm{track}}$, larger than $110$ ($|\protect\eta |<2.4$, $p_{T}>0.4$ GeV/$c$). Here $b$ is the impact parameter.}
\label{fig:contour}
\end{figure}
In Fig. \ref{fig:3D} we show the AMPT results for the two-particle density function in p+Pb collisions at $\sqrt{s}=5.02$ TeV as a function of the relative azimuthal angle, $\Delta \phi =\phi _{1}-\phi _{2}$, and the pseudorapidity separation, $\Delta \eta =\eta _{1}-\eta _{2}$, for events with $N_{\mathrm{track}}<35$ (left) and $N_{\mathrm{track}}>110$ (right). In this plot we take the pairs of charged particles with $1<p_{T}<3$ GeV/$c$. In qualitative agreement with the experimental data, the long-range near-side structure is absent for events with $N_{\mathrm{track}}<35$ and is clearly visible in events with $N_{\mathrm{track}}>110$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.44]{fig3.eps}
\end{center}
\par
\vspace{-5mm}
\caption{The AMPT two-particle density function in p+Pb collisions at $\protect\sqrt{s}=5.02$ TeV for low- (left) and high- (right) multiplicity events. The long-range near-side structure in pseudorapidity is clearly visible for high-multiplicity events.}
\label{fig:3D}
\end{figure}
To compare directly with the data, in Fig. \ref{fig:pPb_main} we present the two-particle distribution functions for
p+Pb collisions at $\sqrt{s}=5.02$ TeV and p+p at $\sqrt{s}=7$ TeV, as a function of the relative
azimuthal angle $\Delta \phi $ and averaged over the pseudorapidity region $2<|\Delta \eta |<4$,
\begin{equation}
\frac{1}{N_{\mathrm{trig}}}\frac{dN^{\mathrm{pair}}}{d\Delta \phi }=\frac{1}{4}\int_{2<|\Delta \eta |<4}\frac{1}{N_{\mathrm{trig}}}\frac{d^{2}N^{\mathrm{pair}}}{d\Delta \phi d\Delta \eta }d\Delta \eta ,
\end{equation}
for various ranges of $N_{\mathrm{track}}$ and different $p_{T}$ bins. Following the experimental procedure, the zero-yield-at-minimum (ZYAM) method is implemented to remove a constant background, $C_{\mathrm{ZYAM}}$. In this calculation we take the partonic cross-section to be $\sigma =1.5$ mb. The AMPT results (solid and dashed curves) are in very good agreement with the CMS data (full and open circles) for the near-side peak, $\Delta \phi \approx 0$. The agreement with the away-side peak, $\Delta \phi \approx \pi $, is less impressive; however, this region is heavily populated by jets, which are of lesser interest in the present investigation. It is worth noticing that in the same $N_{\mathrm{track}}$ bin, the signal at $\Delta \phi =0$ in p+p collisions is noticeably smaller than that in p+A interactions. This feature agrees very well with the CMS data.
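For completeness, a schematic implementation of this observable is sketched below (illustrative Python code, not the AMPT analysis itself); all particles are treated as triggers, ordered pairs are counted, and a simple minimum subtraction stands in for the ZYAM fit.
\begin{verbatim}
import numpy as np

def per_trigger_yield(etas, phis, n_bins=32):
    # etas, phis: lists of per-event arrays of eta and phi.
    edges = np.linspace(-0.5 * np.pi, 1.5 * np.pi, n_bins + 1)
    hist = np.zeros(n_bins)
    n_trig = 0
    for eta, phi in zip(etas, phis):
        n_trig += len(eta)
        deta = eta[:, None] - eta[None, :]
        dphi = phi[:, None] - phi[None, :]
        dphi = (dphi + 0.5 * np.pi) % (2 * np.pi) - 0.5 * np.pi
        sel = (np.abs(deta) > 2.0) & (np.abs(deta) < 4.0)
        hist += np.histogram(dphi[sel], bins=edges)[0]
    y = hist / (n_trig * np.diff(edges))
    return edges, y - y.min()  # crude ZYAM-like subtraction
\end{verbatim}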
\begin{figure*}
\begin{center}
\includegraphics[scale=0.8]{fig4.eps}
\end{center}
\par
\vspace{-5mm}
\caption{Distribution of pairs in p+p collisions at $\protect\sqrt{s}=7$ TeV and p+Pb collisions at $\protect\sqrt{s}=5.02$ TeV as a function of the relative azimuthal angle $\Delta \protect\phi $ averaged over $2<|\Delta \protect\eta |<4$ in different $p_{T}$ and $N_{\mathrm{track}}$ bins. Our results (solid and dashed curves) based on the AMPT model (with string melting, $\sigma=1.5$ mb) are compared to the CMS data (full and open circles).}
\label{fig:pPb_main}
\end{figure*}
In Figure \ref{fig:pPb_main_2} we present the results for p+Pb collisions calculated in the AMPT model
with various values of $\sigma = 0, 0.5, 1.5$, and $3$ mb. We also show the result of the default AMPT model, where only partons from minijets interact and all soft strings decay independently into particles. In this scenario the number of interacting partons is not sufficiently high to produce a visible effect. On the contrary, in the string melting scenario (in which all initial soft strings melt into partons) the number of interacting partons is significantly larger, roughly by a factor of $5$, thus allowing a sizable signal to be obtained. As seen in Figure \ref{fig:pPb_main_2}, the strength of the signal gradually increases with growing $\sigma$ and, as expected, the signal vanishes completely for $\sigma=0$ mb. This clearly demonstrates that in the AMPT model partonic scatterings are directly responsible for the signal at $\Delta\phi=0$, as observed in Figures \ref{fig:pPb_main} and \ref{fig:pPb_main_2}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{fig5.eps}
\end{center}
\par
\vspace{-5mm}
\caption{Distribution of pairs for various values of the partonic cross-section, $\sigma$, in p+Pb collisions at $\protect\sqrt{s}=5.02$ TeV as a function of the relative azimuthal angle $\Delta \protect\phi $ averaged over $2<|\Delta \protect\eta |<4$ for $N_{\mathrm{track}}>110$ and $1<p_{T}<2$ GeV/$c$. Our results (curves) from different AMPT model settings are compared with the CMS data (points). In the default AMPT model only few partons from minijets interact which is not sufficient to produce a sizable signal. In the string melting version all soft strings are converted into partons.}
\label{fig:pPb_main_2}
\end{figure}
In the last part of the paper we address the problem of the $p_{T}$ particle spectra. The measured $p_{T}$ distributions evolve towards higher $p_{T}$ with an increasing number of produced particles \cite{Chatrchyan:2013eya}. In principle this feature should be present in the AMPT model with the string melting mechanism owing to the frequent parton-parton scatterings. However, in our model the hadronization mechanism is rather crude (a simple coalescence), thus we should not expect the model to be particularly successful in describing the spectra (in contrast to the studied long-range rapidity correlation, whose presence or absence is independent of the particular mechanism of hadronization). Nevertheless, it is interesting to investigate whether the AMPT model can approximately reproduce the trends observed in the data. In Fig. \ref{fig:spectra} we present the $p_{T}$ distributions of produced pions, kaons and protons in p+Pb collisions for several centrality classes. The model, despite its simplicity, reproduces the CMS data \cite{Chatrchyan:2013eya} within the accuracy of $20\%$. The calculated spectra shift towards higher $p_{T}$ with an increasing number of produced particles, $N_{\rm{track}}$, as best visible in the rightmost plot (p+\={p}).
\begin{figure*}
\begin{center}
\includegraphics[scale=0.85]{fig6.eps}
\end{center}
\par
\vspace{-5mm}
\caption{The transverse momentum spectra (normalized to unity) in $|y|<1$ of produced pions, kaons and protons in p+Pb collisions at $\sqrt{s}=5.02$ TeV for three different centrality classes. The AMPT model (string melting) results are compared to the CMS data (full points). For better visibility, the results for $\langle N_{\mathrm{track}} \rangle_{p_T>0.4 \text{ GeV}/c} = 29$, $73$ and $133$ are shifted vertically by 0.6, 1.5 and 2.7 units, respectively.}
\label{fig:spectra}
\end{figure*}
\section{Comments}
\label{sec:comments}
It is worth noticing that the incoherent scattering of partons with basically one essential parameter, $\sigma = 1.5 - 3$ mb, allows one to capture the main features of the p+p and p+Pb data for all measured multiplicities and transverse momenta. This may be contrasted with the CGC framework \cite{Dusling:2013oia}, where the saturation scale is fitted separately for each multiplicity class and colliding system.
The presence of the near-side peak in our results originates from the parton scatterings at the early stage of a collision, see Figure \ref{fig:pPb_main_2}. Obviously the lifetime, $\tau$, of the partonic stage increases with increasing number of initial partons, and consequently with $N_{\mathrm{track}}$. We checked that in p+Pb collisions $\tau \sim N_{\mathrm{track}}^{\alpha}$ with $\alpha \sim 1/2$ and for $N_{\mathrm{track}}=50, 100, 200$ the lifetime $\tau \approx 1, 1.4, 1.7$ fm, respectively. In p+p collisions $\tau$ grows slowly from $\tau \approx 0.6$ fm for $N_{\mathrm{track}}=10$ to $\tau \approx 0.8$ fm for $N_{\mathrm{track}}=100$. Our results indicate that for small and rapidly expanding systems there is enough time for multiple parton scatterings which can translate the initial anisotropy of produced matter into the final momentum anisotropy.
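For transparency, the quoted exponent can be recovered from the three p+Pb values listed above with a simple log-log fit (illustrative code):
\begin{verbatim}
import numpy as np

n_track = np.array([50.0, 100.0, 200.0])
tau = np.array([1.0, 1.4, 1.7])  # fm, values quoted above
alpha, _ = np.polyfit(np.log(n_track), np.log(tau), 1)
print(alpha)  # ~0.4, consistent with alpha ~ 1/2
\end{verbatim}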
There are several problems in our approach that require further studies. For example, only two-to-two elastic parton scatterings are included, and higher order processes might become important at high densities. For a complete discussion of various problems in the partonic stage of the AMPT model we refer the reader to Section VII in Ref. \cite{Lin:2004en}.
Transport model calculations reported in Ref. \cite{Molnar:2004yh} suggest that a parton-parton cross-section of the order of $50$ mb is needed to generate a sizable elliptic flow in A+A collisions. However, in the AMPT model a cross-section of the order of $1.5 - 5$ mb is enough to reproduce the A+A data. It would be interesting to understand the origin of this contradiction.\footnote{We thank D. Molnar and P. Petreczky for comments on this point.}
We would like to emphasize that our goal was not to fit precisely the data. Our objective was to check if a minimal implementation of partonic scatterings, with a reasonable cross-section, can roughly reproduce the experimental data for p+p and p+Pb collisions. As seen in Figure \ref{fig:pPb_main_2}, the agreement with the experimental data is surprisingly good, suggesting that various shortcomings present in our approach are not very important.
It would be interesting to extend our discussion for peripheral Pb+Pb collisions. We leave this problem for a separate investigation. Also the detailed discussion of the elliptic and triangular \cite{Alver:2010gr} Fourier coefficients will be reported elsewhere.
\section{Conclusions}
\label{sec:conclusions}
In summary, we demonstrated that the incoherent scattering of partons in the
early stage of p+p and p+A collisions is sufficient to understand the
near-side azimuthal correlation of particles separated by a large gap in
pseudorapidity. Using the multi-phase transport model (AMPT with string melting), with a
parton-parton cross-section of $1.5$ mb, we calculated the two-particle
correlation function as a function of $\Delta \eta $ and $\Delta \phi $.
The main trends observed in the data were successfully reproduced. The
near-side peak at $\Delta \phi =0$ is gradually growing with the number of
produced particles owing to the growing density of partons. This in
consequence leads to more frequent parton-parton scatterings. Moreover, the
signal is best visible in the transverse momentum range $1<p_{T}<2$ GeV/$c$,
being in agreement with the CMS data.
In the default AMPT model, where only partons from minijets interact and soft strings decay independently into particles, the number of interacting partons is not sufficient to produce a visible signal.
Our study indicates that even in a very small system, as the one created in
a p+p collision, there is enough time for partonic scatterings before
the system becomes dilute. These scatterings translate the initial
anisotropy of matter into the final momentum anisotropy, leading to the $\cos
(2\Delta \phi )$ term (and higher harmonics) in the azimuthal correlation
function.
In this paper we focused solely on the main features of the two-particle
correlation function. Calculations of the elliptic and triangular Fourier
coefficients in p+p, p+A and peripheral A+A collisions are left for a
separate investigation.
\section*{Acknowledgments}
We thank Wei Li for clarifications regarding the CMS results.
Discussions with L. McLerran, V. Skokov and R. Venugopalan are appreciated.
G.-L.M. is supported by the Major State Basic Research Development Program
in China under Contract No. 2014CB845404, the NSFC of China under Projects
No. 11175232, No. 11035009, and No. 11375251, the Knowledge Innovation
Program of CAS under Grant No. KJCX2-EW-N01, CCNU-QLPL
Innovation Fund under Grant No. QLPL2011P01, and the ``Shanghai Pujiang
Program'' under Grant No. 13PJ1410600.
A.B. is supported through the RIKEN-BNL Research Center and the grant No. UMO-2013/09/B/ST2/00497.
Nonequilibrium processes are common in nature, but a general framework to understand them is lacking as compared to equilibrium systems. However, recent developments in the field of nonequilibrium statistical mechanics have led to the discovery of {\it fluctuation theorems} (FT) \cite{eva93,eva94,jar97,cro98,cro99,sei05,sei08}, which are exact equalities that are valid even when the system of interest is driven far away from equilibrium. For such a nonequilibrium system, the statistical distributions of thermodynamic quantities such as work, heat, entropy, etc. exhibit universal relations. These thermodynamic quantities have now been generalized to a single trajectory of the system evolving in phase space. They are random variables depending on the phase space trajectory (stochastic thermodynamics \cite{sek98}). The physical origin of FTs relies on the time reversal symmetry of the dynamics \cite{cro98,cro99}, and they are expected to have important applications in nanoscience and biophysics. The second law of thermodynamics emerges in the form of inequalities from these theorems \cite{jar97,sei05}. It can be shown that the second law is valid on average. Here averaging is done over different trajectories, thus not ruling out the possibility of transient violations of the second law for individual realisations \cite{sah11}. These theorems have helped us in understanding how thermodynamic irreversibility arises from the underlying time reversible dynamics \cite{kaw07}.
One of the FTs was initially put forward by Jarzynski \cite{jar97} in the form of the nonequilibrium work theorem, by means of which one can extract information about equilibrium changes in free energy $\Delta F$ by measuring the nonequilibrium work $W$ performed on a system by the external drive. The system is initially prepared in equilibrium, and then driven away from equilibrium using some predetermined protocol $\lambda(t)$ which runs from $t=0$ to $t=\tau$. The Jarzynski equality is given by
\begin{equation}
\la e^{-\beta W}\ra = e^{-\beta \Delta F}.
\end{equation}
The work $W$ depends on the trajectory of the system, whose initial state is sampled from the equilibrium distribution. The angular brackets denote averaging over an ensemble of such trajectories, and the free energy difference is $\Delta F=F(\lambda(\tau))-F(\lambda(0))$. A stronger fluctuation theorem was provided by Crooks \cite{cro98,cro99} in the form
\begin{equation}
\frac{P_f(W)}{P_r(-W)} = e^{\beta(W-\Delta F)},
\end{equation}
$P_f(W)$ and $P_r(W)$ being the work probability densities generated under the forward protocol $\lambda(t)$ and the reverse protocol $\lambda(\tau-t)$, respectively.
A more general FT was put forward by Seifert \cite{sei08}, which contains the Jarzynski and the Crooks theorems as special cases. A system which is in contact with a heat bath is initially prepared in some arbitrary distribution $p_0(x_0)$ of phase space points and is perturbed by varying an external parameter $\lambda(t)$ up to time $t=\tau$. In the reverse process, the system evolves from some other initial distribution $p_1(x_\tau)$ under the time-reversed protocol $\lambda(\tau-t)$. Seifert's fluctuation theorem states that the probability of a phase space trajectory along the forward process, $P[x(t)]$, is related to that along the reverse process, $\tilde P[\tilde x(t)]$, as
\begin{equation}
\frac{P[x(t)]}{\tilde P[\tilde x(t)]}=\frac{P[x(t)|x_0]p_0(x_0)}{\tilde P[\tilde x(t)|x_\tau]p_1(x_\tau)} = \frac{p_0(x_0)}{p_1(x_\tau)} \exp[{\Delta S_B}],
\end{equation}
where $\Delta S_B$ is the change in the entropy of the bath ($\Delta S_B =\frac{Q}{T}$, where $Q$ is the heat absorbed by the bath). $P[x(t)|x_0]$ is a short notation for the functional of a path starting at $x_0$, with $x(t)$ being the phase space trajectory ending at $x_\tau$. If, in particular, the distribution $p_1(x_\tau)$ is the final distribution at time $\tau$ as dictated by the dynamics, then the above relation gives the integral fluctuation theorem (IFT) for total entropy production \cite{sei08}:
\begin{equation}
\la e^{-\Delta S_{tot}}\ra = 1,
\label{IFT}
\end{equation}
where
\begin{equation}
\Delta S_{tot}=\Delta S + \Delta S_B= \ln\frac{p_0(x_0)}{p_1(x_\tau)} + \frac{Q}{T}.
\label{totent}
\end{equation}
Here $\ln\frac{p_0(x_0)}{p_1(x_\tau)}$ is the change of system entropy along a given trajectory. For details we refer to Seifert's article \cite{sei08}. If the system is in a steady state, one can also obtain the detailed entropy production fluctuation theorem (DFT), namely
\begin{equation}
\frac{p(\Delta S_{tot})}{p(-\Delta S_{tot})} = e^{\Delta S_{tot}}.
\end{equation}
The IFT follows directly from the DFT.
Using Jensen's inequality in Eq.(\ref{IFT}) we get
\begin{equation}
\la\Delta S_{tot}\ra \ge 0.
\end{equation}
This is a statement of second law of thermodynamics, expressed in the form of inequality for the average change in total entropy.
If the system is driven by a feedback-controlled protocol, which in turn depends on the measurement outcomes of the state of the system at intermediate times (information gain), then the IFT gets modified to the form \cite{lah12}
\begin{equation}
\la e^{-\Delta S_{tot}-I}\ra = 1,
\label{ModIFT}
\end{equation}
where $I$ is the mutual information, which quantifies the change in uncertainty of the state of the system upon making measurements. Application of Jensen's inequality generalizes the second law for the total entropy production:
\begin{equation}
\la \Delta S_{tot}\ra \ge -\la I\ra.
\label{ModSL}
\end{equation}
The average mutual information $\la I\ra$ is always non-negative \cite{cov}. Thus the average entropy change can be made negative by feedback control, and the lower bound is given by $-\la I\ra$. There have been a few attempts to extend the IFT (Eq.~(\ref{IFT})) to the quantum domain \cite{muk06,mon05,lut12}. In our present work we extend the IFT for $\Delta S_{tot}$ to quantum systems in the presence of multiple measurements and feedback. We assume that the measurement procedure involves errors that are classical in nature. We show the robustness of FTs against intermediate measurements of any system observable (both von Neumann projective measurements and generalized positive operator-valued measurements (POVM)).
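As a simple illustration of the information term (a minimal numerical sketch with invented probabilities, not part of the original analysis), the trajectory average of $I$ for a binary measurement channel can be computed directly and is non-negative, consistent with the bound in Eq.~(\ref{ModSL}):
\begin{verbatim}
import numpy as np

p_x = np.array([0.6, 0.4])            # distribution of the actual state x
p_y_given_x = np.array([[0.9, 0.2],   # p(y|x): rows y, columns x,
                        [0.1, 0.8]])  # encoding classical measurement errors
p_y = p_y_given_x @ p_x               # marginal outcome distribution p(y)

# trajectory variable I = ln[p(y|x)/p(y)]; its average over (x, y)
# is the mutual information, which is always >= 0
I_avg = sum(p_y_given_x[y, x] * p_x[x]
            * np.log(p_y_given_x[y, x] / p_y[y])
            for x in range(2) for y in range(2))
print(I_avg)   # ~0.27 > 0
\end{verbatim}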
We obtain these theorems for three different cases: (i) the system evolves in isolation from its surroundings; (ii) it is weakly coupled to a heat bath; and (iii) the evolution of the system coupled to a heat bath is modelled in terms of work steps and heat steps, following closely the treatment given in Ref.\ \cite{qua08}, as described in Sect.~\ref{strong}. Our treatment is based on path probabilities in state space. The measurement is assumed to be of von Neumann type, i.e., a projective measurement which results in the collapse of the system state to one of the eigenstates of the corresponding observable. Case (i), namely the isolated quantum system, is discussed in detail. The DFT is obtained for various situations: (a) the system evolving unitarily, (b) in the presence of measurement and feedback, and finally (c) in the presence of intermediate measurements of any observables of the system. The IFT follows from the DFT. For cases (ii) and (iii) we have derived the generalized IFT. In the appendix, we give a proof of the IFT in the presence of weak measurements. In passing, we note that all the extended quantum FTs retain the same form as their classical counterparts.
\section{Isolated quantum system}
\label{isoquant}
\subsection{Unitary evolution}
\label{uevol}
In this section we consider an isolated quantum system described by a Hamiltonian $H(\lambda(t))$, where $\lambda(t)$ is some external time dependent protocol. To clarify our notation and for completeness, we rederive the DFT for this system following the treatment of Ref.\ \cite{mon05}. Initially, at time $t=0$, an energy measurement is performed and the system is found to be in an eigenstate $|i_0\ra$ with energy eigenvalue $E_{0}$. It then evolves unitarily from time $0$ to $\tau$ under the protocol $\lambda(t)$. An energy measurement at the final time $\tau$ is performed and the system is found to be in the state $|i_{\tau}\ra$ with energy eigenvalue $E_{\tau}$. If the initial probability of the state $|i_0\ra$ is $p(i_0)$, then the joint probability of $|i_0\ra$ and $|i_{\tau}\ra$ (the forward state trajectory) is given by
\begin{align}
P_F(i_{\tau},i_{0})
&=p(i_{\tau}|i_{0})p(i_{0})\nn\\
&=|\langle i_{\tau}|U_{\lambda}(\tau,0)|i_{0}\rangle|^{2}p(i_{0}),
\end{align}
where $U_{\lambda}(t_2,t_1)$ denotes the unitary evolution operator for given $\lambda(t)$ from time $t_1$ to time $t_2$. It is defined as
\begin{equation}
U_{\lambda}(t_{2},t_{1})=T \exp\left( -\frac{i}{\hbar}\int^{t_{2}}_{t_{1}}H(\lambda(t))dt\right).
\end{equation}
Here, $T$ denotes time ordering.
The system entropy is defined as $S(t)=-\ln p(i_{t})$. As the system is isolated there is no generation of heat, i.e., $Q=0$. Using Eq.~(\ref{totent}), the total entropy production $\Delta S_{tot}$ during the evolution from time 0 to $\tau$ is therefore equal to the change in system entropy alone:
\begin{equation}
\Delta S_{tot}=-\ln\frac{p(i_{\tau})}{p(i_{0})},
\label{totent1}
\end{equation}
where $p(i_{\tau})$ is the final probability of state $|i_{\tau}\ra$ at time $\tau$.
The probability density $P_F(\Delta S_{tot})$ for the forward path is by definition
\begin{align}
P_F(\Delta S_{tot})&=\sum_{i_{\tau},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right) P_F(i_{\tau},i_{0})\nn\\
&=\sum_{i_{\tau},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)p(i_{\tau}|i_{0})p(i_{0}).
\label{iso1}
\end{align}
We now introduce the time reversal operator $\Theta$. The time reversed state of $|i\ra$ is defined as $|\td{i}\ra=\Theta|i\ra$.
It can be readily shown that \cite{lah12}
\begin{equation}
p(i_2|i_1)=|\langle i_{2}|U_{\lambda}(t_{2},t_1)|i_{1}\rangle|^{2}=|\langle \tilde{i}_{1}|U_{\lambda^{\dg}}(\tilde{t}_{1},\tilde{t}_{2})|\tilde{i}_{2}\rangle|^{2}=p(\td{i}_1|\td{i}_2).
\label{microrev}
\end{equation}
where $\td{t}=\tau-t$ and $\lambda^{\dg}(t)=\lambda(\tau-t)$ is the time reversed protocol of $\lambda(t)$. The evolution of the system from a given time reversed state $\Theta|i_2\ra$ to the time-reversed state $\Theta |i_1\ra$, under the time reversed protocol $\lambda^\dagger(t)$, is given by the conditional probability $p(\td{i}_1|\td{i}_2)$. We take the initial distribution of the reverse trajectory to be equal to the final distribution of the forward trajectory,
\begin{equation}
p(\td{i}_{\tau})= p(i_{\tau}).
\label{revprob}
\end{equation}
The states $|i\ra$ and $|\td{i}\ra$ are in one-to-one correspondence. Multiplying and dividing by $p(i_\tau)$ in the summand of Eq.~(\ref{iso1}) and using (\ref{microrev}) and (\ref{revprob}), we get
\begin{align}
P_F(\Delta S_{tot})&=\sum_{{i}_{\tau},{i}_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)p(\td{i}_{0}|\td{i}_{\tau})p(\td{i}_{\tau})\frac{p(i_{0})}{p(i_{\tau})}\nn\\
&=\sum_{{i}_{\tau},{i}_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)p(\td{i}_{0}|\td{i}_{\tau})p(\td{i}_{\tau})e^{\Delta S_{tot}}\nn\\
&=e^{\Delta S_{tot}}\sum_{{i}_{\tau},{i}_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(\td{i}_{\tau})}{p(\td{i}_{0})}\right)p(\td{i}_{0}|\td{i}_{\tau}) p(\td{i}_{\tau})\nn\\
&=e^{\Delta S_{tot}}\sum_{{i}_{\tau},{i}_{0}}\delta\left( \Delta S_{tot}-\ln\frac{p(\td{i}_{0})}{p(\td{i}_{\tau})}\right)P_R(\td{i}_{\tau},\td{i}_{0})\nn\\
&= e^{\Delta S_{tot}} P_R(-\Delta S_{tot}).
\label{asd}
\end{align}
To arrive at this result we have used Eq.~(\ref{totent1}) in the second step and Eq.~(\ref{revprob}) in the third step. $P_R(\td{i}_{\tau},\td{i}_{0})$ is the joint probability of the corresponding states in the reverse direction. If $\Delta S_{tot}$ is the total entropy change for the forward path, then the total entropy change in the corresponding reverse path is $-\Delta S_{tot}$. This follows from the fact that $p(\td{i}_\tau)$ and $p(\td{i}_0)$ are the initial and final probability distributions of the states in the time reversed process, because of unitary evolution. Eq.~(\ref{asd}) can be written in the form
\begin{equation}
\dfrac{P_F(\Delta S_{tot})}{P_R(-\Delta S_{tot})}=e^{\Delta S_{tot}}.
\end{equation}
This is the detailed fluctuation theorem for the change in total entropy, extended to the quantum regime. Simple cross multiplication followed by integration over $\Delta S_{tot}$ leads to the integral form of the above theorem:
\begin{equation}
\langle e^{-\Delta S_{tot}}\rangle =1.
\end{equation}
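As a consistency check (a minimal numerical sketch; the Hamiltonian, protocol and initial distribution below are arbitrary choices, not taken from the cited works), one can verify $\la e^{-\Delta S_{tot}}\ra = 1$ for a driven two-level system by approximating the time-ordered propagator with a product of short steps:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(lam):                       # driven two-level Hamiltonian (hbar = 1)
    return lam * sz + 0.5 * sx

tau, nsteps = 1.0, 400            # protocol lambda(t) = 2t/tau
dt = tau / nsteps
U = np.eye(2, dtype=complex)
for k in range(nsteps):           # time-ordered product approximating U(tau,0)
    U = expm(-1j * H(2.0 * (k + 0.5) * dt / tau) * dt) @ U

_, V0 = np.linalg.eigh(H(0.0))    # energy eigenbasis at t = 0
_, Vt = np.linalg.eigh(H(2.0))    # energy eigenbasis at t = tau

p_cond = np.abs(Vt.conj().T @ U @ V0) ** 2   # p(i_tau|i_0)
p0 = np.array([0.7, 0.3])                    # arbitrary initial distribution
pt = p_cond @ p0                             # final distribution p(i_tau)

# <exp(-Delta S_tot)>, using exp(-Delta S_tot) = p(i_tau)/p(i_0)
avg = sum(p_cond[j, i] * p0[i] * pt[j] / p0[i]
          for i in range(2) for j in range(2))
print(avg)   # -> 1.0
\end{verbatim}
The equality holds here for purely algebraic reasons: unitarity makes the matrix $p(i_\tau|i_0)$ doubly stochastic, so the average collapses to $\sum_{i_\tau} p(i_\tau) = 1$ independently of the protocol.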
\subsection{Isolated quantum system with feedback}
\label{isofeed}
So far we have been dealing with a predetermined protocol, also known as open-loop control. Often, to increase the efficiency of a physical process (e.g., engines at the nanoscale, molecular motors, etc.), we need to perform intermediate measurements and change the protocol as per the outcomes of these measurements \cite{jac03,jac04,jac06,jac08,sag10,sag11,pon10,hor10}. Such a process is known as closed-loop feedback.
Let the system evolve under some external protocol $\lambda_0(t)$ from its initial energy eigenstate $|i_0\ra$, measured at time $t=0$. At time $t_1$, we perform a measurement of some arbitrary observable and the system collapses to a state $|i_1\ra$.
We assume that the measurement process leading to information gain involves classical errors. Here $y_1$ is the measured outcome, obtained with probability $p(y_1|i_1)$ while the system's actual state is $|i_1\ra$. Depending on the value of $y_1$, the protocol is changed to $\lambda_{y_1}(t)$. Under this new protocol the system evolves unitarily up to time $t_2$, where another measurement is performed, and so on. This process terminates at time $\tau$, when the system collapses to its final energy eigenstate $|i_{\tau}\ra$. Note that the initial and final measurements are energy measurements. The joint probability of the corresponding state trajectory for $n$ intermediate measurement outcomes $y_1, y_2,\cdots, y_n$ at times $t_1,t_2,\cdots, t_n$, respectively, is \cite{lah12}
\begin{align}
P_F(i_{\tau},..,i_{1},i_{0},y_{n},..,y_{1})&=p(i_{\tau}|i_{n})\cdots p(y_2|i_2)p(i_2|i_1)p(y_1|i_1)p(i_{1}|i_{0})p(i_{0})\nn\\
&=|\langle i_{\tau}|U_{\lambda_{y_{n}}}(\tau,t_{n})|i_{n}\rangle|^{2}...p(y_{2}|i_{2}) |\langle i_{2}|U_{\lambda_{y_1}}(t_{2},t_{1})|i_{1}\rangle|^{2}p(y_{1}|i_{1}) \nn\\
&\hspace{4cm}\times|\langle i_{1}|U_{\lambda_0}(t_{1},0)|i_{0}\rangle|^{2}p(i_{0}).
\label{PFiso}
\end{align}
It may be noted that the joint probability of the path is expressed using classical probability rules. This is because we perform projective measurements on the system, which collapses to one of the eigenstates of the measured observables \cite{lah12,ran12}.
As a consequence, the measurement wipes out the previous memory of the evolution, and the post-measurement evolution becomes uncorrelated with the pre-measurement evolution.
Thus, if one performs intermediate measurements along two paths, the interference effects between the two paths disappear and the quantum effects are suppressed. Hence, in the presence of measurements, the path probability in state space obeys classical probability rules and is given by the product of the transition probabilities of the paths between consecutive measurements. Note, however, that quantum mechanics still enters through the explicit calculation of the transition probability between two consecutive states.
To generate the reverse trajectory of a path in state space given in Eq.~(\ref{PFiso}), we first choose one of the forward protocols with probability $p(y_n,\cdots,y_2,y_1)$, and then blindly time-reverse the protocol. We perform measurements at the appropriate times along the reverse path to allow the state to collapse to the corresponding time-reversed eigenstates. We do not use these measurements to perform any feedback, in order to respect causality \cite{hor10}. The expression for the joint probability of the reverse trajectory is then given by
\begin{equation}
P_R(\td{i}_{\tau},\cdots ,\td{i}_{0},y_{n},..,y_{1}) = p(\td{i}_n|\td{i}_{\tau})\cdots p(\td{i}_0|\td{i}_1) p(i_{\tau})p(y_n,\cdots ,y_1).
\label{PRisofed}
\end{equation}
The mutual information gain due to measurements between the measured values and the actual value is defined as \cite{lah12,hor10}
\begin{equation}
I=\ln \frac{p(y_n|i_n)...p(y_2|i_2)p(y_1|i_1)}{p(y_n \cdots,y_2,y_1)}.
\label{mutlinf}
\end{equation}
We now calculate the joint probability density $P_F(\Delta S_{tot},\mI)$ of the entropy production and $\mI$ along the forward path, which is
\begin{align}
P_F(\Delta S_{tot},\mathcal{I})&=\int dy_n\cdots dy_1\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)\delta\left(\mathcal{I}-I(i_{n},..,i_{1},y_{n},..,y_{1})\right)\nn\\
&\hspace{5cm} \times P_F(i_{\tau},..,i_{1},i_{0},y_{n},..,y_{1})\nn\\
&=\int dy_n\cdots dy_1\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)\delta\left(\mathcal{I}-I(i_n,..,i_{1},y_{n},..,y_{1})\right) \nn\\
&\hspace{5cm} \times p(i_{\tau}|i_{n})\cdots p(y_2|i_2)p(i_2|i_1)p(y_1|i_1)p(i_{1}|i_{0})p(i_{0})\nn\\
&=\int dy_n\cdots dy_1\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)\delta\left(\mathcal{I}-I(i_n,..,i_{1},y_{n},..,y_{1})\right)\nn\\
& \hspace{5cm} \times p(\td{i}_n|\td{i}_{\tau})\cdots p(\td{i}_0|\td{i}_1) p(i_{\tau})p(y_n,\cdots ,y_1)e^{\Delta S_{tot}+I}\nn\\
&=\int dy_n\cdots dy_1\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)\delta\left(\mathcal{I}-I(i_n,..,i_{1},y_{n},..,y_{1})\right)\nn\\
&\hspace{5cm} \times P_R(\td{i}_{\tau}\cdots ,\td{i}_{0},y_{n},..,y_{1})
e^{\Delta S_{tot}+I}\nn\\
&=e^{\Delta S_{tot}+\mI}\int dy_n\cdots dy_1\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)\delta\left(\mathcal{I}-I(i_n,..,i_{1},y_{n},..,y_{1})\right)\nn\\
&\hspace{5cm} \times P_R(\td{i}_{\tau}\cdots ,\td{i}_{0},y_{n},..,y_{1}) \nn\\
&= e^{\Delta S_{tot}+\mI} P_R(-\Delta S_{tot},\mathcal{I}).
\end{align}
In deriving the above result we have made use of Eqs.~(\ref{PFiso}), (\ref{PRisofed}) and (\ref{mutlinf}). The path variable $I(i_n,..,i_{1},y_{n},..,y_{1})$ is given by Eq.~(\ref{mutlinf}), and $\mathcal{I}$ denotes its value.
It is important to note that the probability density function $P_R(-\Delta S_{tot},\mI)$ gives the probability of reverse trajectories along which the entropy change is $-\Delta S_{tot}$ and whose corresponding forward trajectory has the mutual information $\mathcal{I}$ between its measured outcomes and actual states.
Once again, the initial and final distributions of states along forward trajectory get interchanged in the reverse trajectory because of unitary evolution between measurements. Along the reverse trajectory the change in total entropy is $-\Delta S_{tot}$. Thus we obtain the DFT
\begin{equation}
\dfrac{P_F(\Delta S_{tot},\mI)}{P_R(-\Delta S_{tot},\mI)}=e^{\Delta S_{tot}+\mI}.
\end{equation}
From the above equation, the extended versions of the IFT and the second law, Eqs.~(\ref{ModIFT}) and (\ref{ModSL}), can be readily obtained as discussed in the earlier subsection.
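The same bookkeeping can be checked numerically for a single error-prone measurement with feedback (a minimal sketch with arbitrarily chosen Hamiltonians; for simplicity both feedback branches are taken to end at a common final Hamiltonian, so that the final eigenbasis is unique):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
U = lambda H, t: expm(-1j * H * t)           # constant-H propagator (hbar = 1)

H0 = sz + 0.3 * sx                           # Hamiltonian before the measurement
Hfb = {0: sz + 1.0 * sx, 1: sz - 1.0 * sx}   # feedback branch Hamiltonians
_, V0 = np.linalg.eigh(H0)                   # initial energy basis (also taken
                                             # as the measured observable here)
_, Vf = np.linalg.eigh(2.0 * sz + 0.3 * sx)  # common final energy basis

p10 = np.abs(V0.conj().T @ U(H0, 0.5) @ V0) ** 2                  # p(i_1|i_0)
pfb = {y: np.abs(Vf.conj().T @ U(Hfb[y], 0.5) @ V0) ** 2 for y in (0, 1)}
p0, err = np.array([0.8, 0.2]), 0.2          # p(y|i) = 1-err if y == i, else err

paths, py, pt = [], np.zeros(2), np.zeros(2)
for i0 in range(2):
    for i1 in range(2):
        for y in (0, 1):
            pyi = 1 - err if y == i1 else err
            for it in range(2):
                w = pfb[y][it, i1] * pyi * p10[i1, i0] * p0[i0]
                paths.append((i0, y, pyi, it, w))
                py[y] += w; pt[it] += w      # marginals p(y_1) and p(i_tau)

# exp(-dS_tot) = pt(i_tau)/p0(i_0) and exp(-I) = p(y_1)/p(y_1|i_1)
avg = sum(w * pt[it] / p0[i0] * py[y] / pyi
          for (i0, y, pyi, it, w) in paths)
print(avg)   # -> 1.0, i.e. <exp(-dS_tot - I)> = 1
\end{verbatim}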
\subsection{Isolated system under multiple measurements}
In this subsection we restrict ourselves to the influence of intermediate measurements of arbitrary observables on the statistics of $\Delta S_{tot}$; to this end we do not involve any feedback. Following closely the discussion in Sect.~\ref{isofeed},
the path probability in state space is given by
\begin{align}
P(i_{\tau},..,i_{1},i_{0})&=p(i_{\tau}|i_{n})\cdots p(i_2|i_1)p(i_{1}|i_{0})p(i_{0})\nn\\
&=|\langle i_{\tau}|U_{\lambda}(\tau,t_{n})|i_{n}\rangle|^{2}... |\langle i_{2}|U_{\lambda}(t_{2},t_{1})|i_{1}\rangle|^{2}
|\langle i_{1}|U_{\lambda}(t_{1},0)|i_{0}\rangle|^{2}p(i_{0}).
\end{align}
Following the preceding section, we now calculate the probability density $P_F(\Delta S_{tot})$ of the total entropy change along the forward path:
\begin{align}
P_F(\Delta S_{tot})&=\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)P(i_{\tau},..,i_{1},i_{0})\nn\\
&=\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right) p(i_{\tau}|i_{n})\cdots p(i_2|i_1)p(i_{1}|i_{0})p(i_{0})\nn\\
&=\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)p(\td{i}_n|\td{i}_{\tau})\cdots p(\td{i}_0|\td{i}_1) p(i_{\tau})e^{\Delta S_{tot}}\nn\\
&=\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)P_R(\td{i}_{\tau}\cdots ,\td{i}_{0})
e^{\Delta S_{tot}}\nn\\
&=e^{\Delta S_{tot}}\sum_{i_{\tau}\cdots,i_{1},i_{0}}\delta\left( \Delta S_{tot}+\ln\frac{p(i_{\tau})}{p(i_{0})}\right)P_R(\td{i}_{\tau}\cdots ,\td{i}_{0}),
\end{align}
where $P_R(\td{i}_{\tau}\cdots ,\td{i}_{0})$ is the probability of reverse path. The DFT for $\Delta S_{tot}$ follows from the above equation:
\begin{equation}
\dfrac{P_F(\Delta S_{tot})}{P_R(-\Delta S_{tot})}=e^{\Delta S_{tot}}.
\label{asdf}
\end{equation}
We observe from Eq.~(\ref{asdf}) the robustness of this FT against intermediate measurements \cite{han10,han11}: it retains the same form as in the classical case. The path probability, however, gets modified in the presence of measurements, and the statistics of $\Delta S_{tot}$ are strongly influenced by the intermediate measurements. In the next section we derive the IFT in the presence of feedback for a quantum system coupled weakly to a bath. In the appendix, we show that the IFT for $\Delta S_{tot}$ is also robust against weak or generalized intermediate measurements.
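A corresponding numerical check (again an illustrative construction with arbitrary Hamiltonians, not taken from the references) inserts a projective $\sigma_x$ measurement, without feedback, between two unitary segments and recovers the IFT:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

HA, HB = sz + 0.4 * sx, 2.0 * sz + 0.4 * sx
U1 = expm(-1j * HA * 0.7)         # evolution before the measurement
U2 = expm(-1j * HB * 0.7)         # evolution after the measurement

_, V0 = np.linalg.eigh(HA)        # initial energy basis
_, Vm = np.linalg.eigh(sx)        # intermediate observable: sigma_x
_, Vf = np.linalg.eigh(HB)        # final energy basis

pA = np.abs(Vm.conj().T @ U1 @ V0) ** 2      # p(i_1|i_0)
pB = np.abs(Vf.conj().T @ U2 @ Vm) ** 2      # p(i_tau|i_1)
p0 = np.array([0.65, 0.35])
pt = pB @ (pA @ p0)                          # final distribution p(i_tau)

avg = sum(pB[f, m] * pA[m, i] * p0[i] * pt[f] / p0[i]
          for i in range(2) for m in range(2) for f in range(2))
print(avg)   # -> 1.0: the IFT survives the intermediate measurement
\end{verbatim}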
\section{Weakly coupled quantum system}
Consider a driven system which is weakly coupled to a bath. The total Hamiltonian will be
\begin{equation}
H(\lambda(t))=H_S (\lambda(t))+ H_B+ H_{SB}.
\end{equation}
The external time dependent drive $\lambda(t)$ only affects the system Hamiltonian $H_S (\lambda(t))$ while the bath Hamiltonian $H_B$ and interaction Hamiltonian $H_{SB}$ are time independent. As the system is weakly coupled it is assumed that $H_{SB}$ is negligibly small compared to $H_S (\lambda(t))$ and $ H_B$.
Initially the super-system (system+bath) is coupled to a large reservoir at inverse temperature $\beta$ \cite{han11,han09}. At time $t=0$ the large reservoir is decoupled from the super-system. Hence, initially, the super-system will be in a canonical distribution,
\begin{equation}
\rho(\lambda_0)=\dfrac{e^{-\beta H(\lambda(0))}}{Y(\lambda(0))},
\end{equation}
where $Y(\lambda(0))= \Tr{}e^{-\beta H(\lambda(0))}$. The system and the bath Hamiltonians commute with each other; hence we can measure
the energy eigenstates of the system and of the bath simultaneously. At $t=0$, the measured energy eigenvalues of the system and the bath are denoted by $E_0^{S}$ and $E_0^B$, respectively. We perform $N$ intermediate measurements of some arbitrary observable at times $t_1,t_2,\cdots, t_N$ between time 0 and ${\tau}$. Initially the protocol is $\lambda_0(t)$. At $t_1$ the measured output is $y_1$, obtained with probability $p(y_1|i_1)$, while the actual state is $i_1$. Now the protocol is changed to $\lambda_{y_1}(t)$ and the system evolves up to time $t_2$. Again a measurement is performed and the protocol is changed according to the output, and so on for the remaining intermediate times. Finally, at $t=\tau$, a joint measurement of the system and bath Hamiltonians is performed, and the measured eigenvalues are $E_\tau^S$ and $E_\tau^B$, respectively. The system-reservoir interaction energy can be neglected in the presence of weak coupling. Hence, during the evolution from time $t=0$ to $t=\tau$, for a single realization the change in the internal energy of the system is given by \cite{han09}
\begin{equation}
\Delta U=E_{\tau}^S-E_0^{S}
\end{equation}
and the heat dissipated to the bath is
\begin{equation}
Q=E_{\tau}^B-E_0^{B}.
\end{equation}
If $i_0$ and $i_{\tau}$ denote the initial and final system energy eigenstates, then the system entropy change is
\begin{equation}
\Delta S_{sys}=-\ln \frac{p(i_{\tau})}{p(i_0)},
\end{equation}
and the total entropy change is
\begin{equation}
\Delta S_{tot}=\Delta S_{sys}+\Delta S_{B}=-\ln \frac{p(i_{\tau})}{p(i_0)}+\frac{Q}{T},
\end{equation}
where T is the temperature of the bath. The mutual information between the state trajectory $\{i_1,i_2,\cdots, i_N\}$ and the measurement trajectory $\{y_1,y_2,\cdots y_N\}$ is
\begin{equation}
I \equiv \ln\left[\frac{p(y_1|i_1)\cdots p(y_N|i_N)}{P(y_1,\cdots, y_N)}\right].
\end{equation}
Denoting the initial and final states of the bath by $\alpha_0$ and $\alpha_{\tau}$, it can be written, from microscopic reversibility \cite{sag10,hor10},
\begin{equation}
p(i_{\tau},\alpha_{\tau}|i_0,\alpha_0)=p(\td{i}_0,\td{\alpha_0}|\td{i}_{\tau},\td{\alpha}_{\tau}).
\label{microrev1}
\end{equation}
where $ p(i_{\tau},\alpha_{\tau}|i_0,\alpha_0)$ is the total transition probability for system and reservoir to evolve from state $|i_0,\alpha_0\ra$ to $|i_\tau,\alpha_\tau\ra$ under the full Hamiltonian. Here $|\tilde i,\tilde\alpha\ra \equiv \Theta|i,\alpha\ra$ is the time-reversed state of $|i,\alpha\ra$.
To generate the reverse trajectory, proper causal protocol has to be used which has been discussed in section \ref{isofeed}. Thus the forward and the reverse path probabilities of trajectories are respectively given by
\begin{equation}
P_F(A\to B)= p(i_{\tau},\alpha_{\tau}|i_N,\alpha_N)\cdots p(y_1|i_1)p(i_{1},\alpha_{1}|i_0,\alpha_0)p(i_0,\alpha_0),
\label{P_F}
\end{equation}
\begin{equation}
P_R(A\leftarrow B)=p(\td{i}_0,\td{\alpha_0}|\td{i}_1,\td{\alpha}_1)\cdots p(\td{i}_N,\td{\alpha_N}|\td{i}_{\tau},\td{\alpha}_{\tau}) p(\td{i}_{\tau},\td{\alpha}_{\tau})
p(y_1,y_2,\cdots y_N).
\label{P_R}
\end{equation}
The notations $A$ and $B$ denote the initial and final values of the forward protocol, respectively. For the reverse trajectory we have chosen the outcomes of the forward trajectory with probability $p(y_1,y_2,\cdots y_N)$ and have blindly reversed the protocol, while performing measurements (without any feedback) at the appropriate time instants. From (\ref{P_F}) and (\ref{P_R}) we get
\begin{align}
\frac{ P_F(A\to B)}{P_R(A\leftarrow B)}&=\frac{ p(i_{\tau},\alpha_{\tau}|i_N,\alpha_N)\cdots p(y_1|i_1)p(i_{1},\alpha_{1}|i_0,\alpha_0)p(i_0,\alpha_0)}
{p(\td{i}_0,\td{\alpha_0}|\td{i}_1,\td{\alpha}_1)\cdots p(\td{i}_N,\td{\alpha_N}|\td{i}_{\tau},\td{\alpha}_{\tau}) p(\td{i}_{\tau},\td{\alpha}_{\tau})
p(y_1,y_2,\cdots y_N)}\nn\\
&=\frac{p(y_N|i_N)\cdots p(y_1|i_1)}{P(y_1,\cdots, y_N)}\frac{p(i_0,\alpha_0)}{p(\td{i}_{\tau},\td{\alpha}_{\tau})}\nn\\
&=e^{I}\hspace{0.2 cm} \frac{p(i_0)p(\alpha_0)}{p(\td{i}_{\tau})p(\td{\alpha}_{\tau})}.
\label{ratio}
\end{align}
In arriving at (\ref{ratio}), we have used microreversibility (\ref{microrev1}), and we have assumed that the system and the bath are weakly coupled.
The joint probability of system and bath states is then approximated as a product of the individual state probabilities.
Corrections to this factorized initial state are at least of second order in the system-bath interaction, and can therefore be neglected in the limit of weak coupling.
The bath distribution can be taken to be canonical at inverse temperature $\beta$.
This leads to
\begin{align}
\frac{ P_F(A\to B)}{P_R(A\leftarrow B)}&=e^{I}~e^{\Delta S_{sys}}\frac{e^{-\beta E_0^B}/Z_B}{e^{-\beta E_{\tau}^B}/Z_B}
=e^{I}~e^{\Delta S_{sys}}e^{Q/T}
=e^{\Delta S_{tot}+I}.
\label{IFT_weak}
\end{align}
A simple cross multiplication and integration over paths gives the extended IFT. It may be noted that within our framework we can also obtain the DFT, provided the system either begins and ends in equilibrium or remains in the same nonequilibrium steady state \cite{cro99}. In the next section, we prove the same IFT for $\Delta S_{tot}$ using the method developed in \cite{qua08}, based on the quantum mechanical generalization of the Crooks fluctuation theorem.
\section{IFT using quantum Crooks fluctuation theorem}
\label{strong}
We consider the system to be coupled to a bath, but no assumption is made regarding the strength of the coupling. Each time step in the entire evolution is divided into two substeps: in the first substep the protocol is changed, while in the second the protocol is kept fixed and the system relaxes by dissipation of heat. The total evolution is divided into $N$ steps. Each step starts at $t_{n}$ and ends at $t_{n+1}$, where $n=0,1,2,\cdots, N-1$. We closely follow the treatment in \cite{qua08}.
For a quantum adiabatic process the protocol changes slowly and the system remains in the same eigenstate during the work step. However, in the present case the work step is almost instantaneous and the process is non-adiabatic. As a consequence, the eigenstates before and after a work step may be different. The system starts to evolve under a predetermined protocol $\lambda_0$. For simplicity, let us take the observable measured at intermediate times to be the Hamiltonian itself; the treatment can be readily generalized to other observables. We consider the feedback to be applied at the beginning of each work step, changing the protocol according to the result obtained from the measurement, as discussed earlier.
The conditional probability $p(y_{n-1}|i_{n-1})$ denotes the probability that the measured outcome is $y_{n-1}$ while the actual collapsed state is $|i_{n-1},\lambda_{n-1}\ra$ at the beginning of the $n^{th}$ work step. Within the ket notation, $i_{n-1}$ represents the state of the system and $\lambda_{n-1}$ is the value of the control parameter. After the measurement at $t_{n-1}$, the protocol is changed from $\lambda_{n-1}(y_{n-2})$ to $\lambda_n(y_{n-1})$. During the work step, the system evolves unitarily from $t_{n-1}$ to $t'_{n-1}$, where it is measured to be in the state $|i'_{n-1},\lambda_n\ra$.
The time taken in the work substep is considered to be too small for the system to relax. In the $n^{th}$ heat step, the system relaxes from state $|i'_{n-1},\lambda_n\ra$ to $|i_n,\lambda_n\ra$. Therefore, the path followed by the system in state space of the measured eigenstates from state $|i_0,\lambda_0=A\ra$ to $|i_\tau,\lambda_\tau=B\ra$ is represented as
$|i_0,\lambda_0\ra \to|i'_0,\lambda_1\ra \to|i_1,\lambda_1\ra \to |i'_1,\lambda_2\ra \to \cdots \to |i_{N-1},\lambda_{N-1}\ra \to |i'_{N-1},\lambda_{N}\ra \to |i_{N},\lambda_{N}\ra $.
Let $E(i_n,\lambda_n)$ be the energy eigenvalue of state $|i_n,\lambda_n\ra$.
By adding the contributions from all the work steps, the total work done on the system is given by
\begin{equation}
W=\sum_{n=0}^{N-1}\left[ E(i'_n,\lambda_{n+1})-E(i_n,\lambda_n)\right],
\end{equation}
while heat dissipated into the bath is
\begin{equation}
Q=-\sum_{n=0}^{N-1}\left[ E(i_{n+1},\lambda_{n+1})-E(i'_{n},\lambda_{n+1})\right].
\label{Q}
\end{equation}
The change in internal energy of the system along the trajectory is
\begin{equation}
\Delta E=W-Q= E(i_N,\lambda_N)-E(i_0,\lambda_0).
\end{equation}
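As a bookkeeping illustration of the alternating-step decomposition (the energy table and trajectory below are invented, purely to show how $W$ and $Q$ are accumulated; this is not from Ref.\ \cite{qua08}):
\begin{verbatim}
import numpy as np

# E[n][i]: energy of eigenstate i at protocol value lambda_n (made-up numbers)
E = np.array([[0.0, 1.0],
              [0.2, 1.5],
              [0.5, 2.1]])        # N = 2 steps: lambda_0, lambda_1, lambda_2

i_path  = [0, 1, 1]               # i_n : state at the end of each heat step
ip_path = [1, 1]                  # i'_n: state right after each work step

W = sum(E[n + 1][ip_path[n]] - E[n][i_path[n]] for n in range(2))
Q = -sum(E[n + 1][i_path[n + 1]] - E[n + 1][ip_path[n]] for n in range(2))

dE = E[2][i_path[2]] - E[0][i_path[0]]
assert np.isclose(dE, W - Q)      # first law along the single trajectory
\end{verbatim}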
As before, the mutual information is
\begin{equation}
I=\ln \frac{p(y_n|i_n)...p(y_2|i_2)p(y_1|i_1)}{p(y_n \cdots,y_2,y_1)}.
\label{I}
\end{equation}
The forward and the reverse path probabilities are respectively given by
\begin{equation}
P_F(A\to B) =p(i_0,\lambda_0)\prod_{n=0}^{N-1}p(y_n|i_n)p_F(|i_n,\lambda_n\ra\to |i_n',\lambda_{n+1}\ra)~p_F(|i_n',\lambda_{n+1}\ra\to |i_{n+1},\lambda_{n+1}\ra)
\end{equation}
and
\begin{align}
P_R(A\leftarrow B)&=p(i_N,\lambda_N)p(y_n \cdots,y_1) \prod_{n=0}^{N-1}p_R(|\tilde i_n,\lambda_n\ra\leftarrow |\tilde i_n',\lambda_{n+1}\ra) p_R(|\tilde i_n',\lambda_{n+1}\ra\leftarrow|\tilde i_{n+1},\lambda_{n+1}\ra).
\end{align}
As mentioned earlier, during the work step, the system can be regarded as an isolated quantum system and evolution is completely determined by the time-dependent Hamiltonian $H_S(\lambda(t))$. Thus the evolution is unitary.
Microscopic reversibility for work step gives \cite{qua08}
\begin{equation}
p_F(|i_n,\lambda_n\ra\to |i_n',\lambda_{n+1}\ra) = p_R(|\tilde i_n,\lambda_n\ra\leftarrow |\tilde i_n',\lambda_{n+1}\ra).
\label{micrev}
\end{equation}
The heat steps or relaxation steps are assumed to be microscopically reversible and obey
the local detailed balance for all the fixed values of the external parameter $\lambda$.
The detailed balance condition in relaxation substep implies
\begin{equation}
\frac{P_F(|i'_{n},\lambda_{n+1}\ra\to |i_{n+1},\lambda_{n+1}\ra)}{P_R(|\tilde i'_{n},\lambda_{n+1}\ra\leftarrow |\tilde i_{n+1},\lambda_{n+1}\ra)} = \exp[-\beta(E(i_{n+1},\lambda_{n+1})-E(i'_{n},\lambda_{n+1}))].
\label{detbal}
\end{equation}
Using above two equations we get
\begin{align}
\frac{ P_F(A\to B)}{P_R(A\leftarrow B)} =\frac{p(y_n|i_n)\cdots p(y_2|i_2)p(y_1|i_1)}{p(y_n,\cdots,y_2,y_1)}\frac{p(i_0,\lambda_0)}{p(i_N,\lambda_N)}\prod^{N-1}_{n=0} \exp[-\beta(E(i_{n+1},\lambda_{n+1})-E(i'_{n},\lambda_{n+1}))].
\label{DFT}
\end{align}
The total entropy change along the trajectory is $\Delta S_{tot} = \Delta S+\Delta S_B$, a trajectory dependent random variable,
where $\Delta S\equiv -\ln\frac{p(i_N,\lambda_N)}{p(i_0,\lambda_0)}$ is the change in system entropy and $\Delta S_B\equiv Q/T$ is the entropy change of the bath along a single trajectory. Using (\ref{Q}) and (\ref{I}), Eq.~(\ref{DFT}) simplifies to
\begin{align}
\frac{ P_F(A\to B)}{P_R(A\leftarrow B)}
=e^{I}~e^{\Delta S_{sys}}e^{Q/T}
=e^{\Delta S_{tot}+I}.
\end{align}
This immediately leads to the generalized integral fluctuation theorem for total entropy change, in the presence of measurement and feedback.
In the above derivation, we have taken the measurements for feedback at the beginning of the work steps for simplicity. These measurements can be performed at any time in between the work steps without affecting the result; it would only make the notation more complicated and would not provide any new physical insight. Feedback cannot be performed within a heat step, which by definition requires the protocol to be held constant.
As in case (ii), the DFT for $\Delta S_{tot}$ can be obtained if the initial and final distributions are in the equilibrium or in the same nonequilibrium steady state \cite{cro99}.
\section{Conclusions}
Based on the path probability formulation in state space, we have derived generalized total entropy production fluctuation theorems for quantum systems in the presence of measurement and feedback, for three different cases. They retain the same form as in the classical case. The second law of thermodynamics gets modified in the presence of information and feedback (Eq.~(\ref{ModSL})). For an isolated quantum system with feedback, we have derived the generalized DFT for the total entropy. For this case the DFT retains the same form in the presence of multiple measurements of any system observable, thus showing the robustness of these fluctuation theorems against measurements (von Neumann type or generalized measurements). For case (ii) of a weakly coupled quantum system under feedback, we have derived the extended IFT for the total entropy. In case (iii), we have derived the extended IFT for $\Delta S_{tot}$ using the quantum Crooks fluctuation theorem, where the quantum trajectory is characterized by a sequence of alternating work and heat steps. The IFT is valid for any arbitrary initial state of the system. The DFT in cases (ii) and (iii) can be obtained only when the system either begins and ends in equilibrium or remains in the same nonequilibrium steady state. Using our approach the generalized DFT can also be proved, but we have not provided the details. The derivation of the robustness of the fluctuation theorems against intermediate measurements is given only for case (i), namely the isolated quantum system. Following the same treatment, the robustness of the fluctuation theorems can be readily demonstrated for cases (ii) and (iii) as well.
In conclusion, we have generalized the total entropy production fluctuation theorem in the presence of feedback to the quantum domain using three different approaches.
\vspace{1cm}
{\large \bf Acknowledgements}
\normalsize
\vspace{0.5cm}
One of us (AMJ) thanks DST, India for financial support.
\section{Introduction}
The age of galactic Globular Clusters (GCs) provides fundamental
information about the age of the
universe and the formation history of the Galaxy.
Recent improvements in the input physics needed for computing
stellar evolutionary models
have revived theoretical work on low-mass stars
and GC age determinations (Chaboyer \& Kim 1995,
Mazzitelli et al.\ 1995, D'Antona et
al. 1997, Salaris et al.\ 1997). Salaris et
al. (1997 - Paper~I)
have shown that the age of the supposedly oldest clusters -- the most
metal-poor ones -- is around 12 Gyr. This age reduction with respect to
earlier work (e.g.\ Chaboyer et al.\ 1992,
Salaris et al.\ 1993)
has been identified to be mainly due to the use of
an improved equation of state that
includes non-ideal gas effects (Rogers et al.\ 1996).
After this initial work, which was motivated by a possible ``conflict
over the age of the universe'', the next step is to address
the question of Galaxy formation. This means that one has to investigate many
clusters and determine their ages, looking for correlations of age with
cluster metallicity or galactocentric distance.
The position of the turn-off (TO) is the feature
in the colour-magnitude-diagrams (CMD) of stellar clusters that is most
sensitive to the age of the stellar population. The higher the cluster age,
the less luminous and redder is the TO. Two differential quantities
are suited as age indicators that are
independent of reddening and distance: the brightness difference in $V$
magnitude between
the TO and the horizontal branch (HB)
at the RR Lyrae stars region, called the $\Delta(V)$ or vertical method
(see, e.g., Sandage \& Cacciari 1990; Stetson et al.\ 1996
for a review)
and the $(B-V)$ colour difference between the TO and the base of the
Red Giant Branch (RGB), called the $\Delta(B-V)$ or horizontal method
(see, e.g., Chieffi \& Straniero 1989, VandenBerg et al.\ 1990).
In both
cases the TO position is differentially determined with respect to a
CMD branch (the HB or the RGB) whose location is virtually
independent of age in the case of old stellar populations.
Direct absolute age determinations based on the vertical method lead to
a large spread in ages, which seems to be correlated with
metallicity (Chaboyer et al.\ 1996). However, the quality and
diversity of the data do not favour this method. The horizontal
method works best if
all problems with theoretical effective temperatures (convection
theory, atmospheres) and the conversion of theoretical quantities to
observed colour and brightness are avoided. It is therefore very well
suited for obtaining relative ages of clusters of similar metallicity,
but absolute ages are very difficult to obtain.
A combination of both methods seems therefore to be promising for an
accurate determination of absolute and relative ages.
Clusters are inspected in groups of similar metallicity, and one or
more suitable ``template'' clusters
(with homogeneous and good photometry for both TO and HB region)
are chosen to determine an
absolute age directly with the vertical method. Then, the horizontal
method is used for a differential comparison with other
clusters of the same group. This is the approach we have chosen (see
also Paper~I) and it is similar to the one by
Richer et al.\ (1996).
Our work differs from their analysis in that we use the
vertical method for determining the absolute ages by virtue of theoretical
isochrones and zero-age horizontal-branch models, while their absolute
ages were obtained
by fitting Bergbusch \& VandenBerg (1992) isochrones,
without explicitly using theoretical HB
models; moreover we use new and improved stellar models
(see Paper~I), taking into account the latest developments regarding
stellar opacities and equation of state.
The questions we want to address are: (i) what is the absolute age of
one or more template clusters in each metallicity group; (ii) how do
$\Delta(B-V)$ differences between clusters within a group
translate into age differences? This information is necessary for the
more global problem of whether there is an age spread between
galactic clusters, and if so, whether it is correlated with
metallicity. Assessing clearly the existence of an age spread among
GCs and of an age-metallicity relation is fundamental for
understanding the formation of our Galaxy. Very recently
Chaboyer et al.\ (1996) found strong evidence in favour of a spread in the ages
of galactic GCs and an age-metallicity relationship.
On the contrary, Stetson et al.\ (1996) concluded that there is no evidence for a
significant spread in ages among clusters of similar metallicity and
that the case concerning age differences between metallicity groups
remains unsettled, while Richer et al.\ (1996) found that the most
metal-poor clusters may be slightly older than clusters of higher
metallicities. The results of the latter group are, however, neither
inconsistent with a
picture in which all clusters of all metallicities formed
simultaneously. Between the most metal-rich clusters there appears to
be a considerable age spread of $\sim 2 $ Gyr (VandenBerg et al.\
1990), and in addition a number of exceptional clusters were found in
all investigations.
In the present paper we restrict ourselves to halo clusters - that
means GCs with kinematic properties typical of the halo component of
the Galaxy -
and try to answer both the questions concerning the distribution of
ages within individual metallicity groups and between groups.
In Sect.~2 we will review our
method used for determining absolute and relative cluster
ages. Sect.~3 contains our results for a large group of halo clusters,
which are compared with the results by Richer et al.\ (1996). In
Sect.~4 the implications for the cluster age distribution
are discussed, and a summary of our results follows in the final section.
\section{Method}
\subsection{Stellar Models}
As in Paper~I we rely on theoretical models for all evolutionary
phases; in particular we have computed stellar models from
the main-sequence (MS) up to the zero-age horizontal branch (ZAHB).
The input physics
employed in the models is the same as in Paper~I: for the
opacities we used a combination of the latest OPAL opacities (Rogers
\& Iglesias 1992; Iglesias \& Rogers 1996) and tables from
D.~Alexander (Alexander \& Ferguson 1994; Alexander, private
communication). The metal mixtures included identical $\alpha$-element
enhancement for all opacity tables.
The equation of state consisted of the OPAL EOS (Rogers,
Swenson \& Iglesias 1996) with extensions for the lowest temperatures
and degenerate helium cores taken respectively from Chieffi \&
Straniero (1989) and Straniero (1988).
Diffusion of helium and heavy elements is not included in the calculations.
The effect of diffusion on the ages obtained
from our models is discussed in Sect.~2.4, which is dedicated to the methods
for determining absolute and relative ages.
We computed stellar models for the following compositions: $(Z,Y)$ =
(0.0002, 0.230), (0.0004, 0.230), (0.0006, 0.232),
(0.0008, 0.232), (0.001, 0.233), (0.0015, 0.235),
(0.002, 0.236), (0.004, 0.242). For $\triangle Y / \triangle Z$ we
have taken a mean value of 3 (as in Bergbusch \& VandenBerg 1992;
Mazzitelli et al.\ 1995). The first two mixtures are those already used
in Paper~I. $\alpha$-elements are always enhanced (e.g.\ $[{\rm O/Fe}]
= +0.5$); the total metal-abundance $[{\rm M/H}]$ is about
0.2-0.3 dex higher than $[{\rm Fe/H}]$.
Stellar models with masses between 0.7 and $1.0 M_\odot$ were evolved
from the zero-age main sequence
up to the RGB. The mixing length has been calibrated as
explained in Paper~I. Isochrones for different ages were computed
from these evolutionary models. ZAHB models with varying envelope
masses were calculated as described in Paper~I.
Similarly, the conversion from theoretical effective temperature and
luminosity to observable colour $(B-V)$ and visual magnitude $V$ was
done using the transformations of Buser \& Kurucz (1978, 1992).
\subsection{Globular Clusters groups}
In this paper (as in Paper~I) we have used both the vertical and the
horizontal method for determining the distribution of the halo GC
ages.
The strengths and weaknesses of these techniques have been
discussed extensively in many recent papers (VandenBerg et
al.\ 1990, Salaris et al.\ 1993, Chaboyer et al.\
1996). To summarize, the $\Delta(V)$ method appears particularly
suitable for the determination of absolute cluster ages, since
it is independent of the treatment of convection and is only weakly
sensitive
to metallicity. However, it can be safely applied only in the case of
clusters with homogeneous
photometry for both TO and HB, and with a well populated
horizontal part of the HB. Clusters with only a blue, vertical HB
are in principle excluded, because it is impossible to have
observational estimates of the absolute HB luminosity in the RR
Lyrae region or to constrain the age from the fit of theoretical ZAHB
models.
The $\Delta(B-V)$ can in principle be
applied to each kind of GC with sufficiently accurate photometry,
since each GC shows a main-sequence TO and an RGB. It is
only weakly sensitive to metallicity, but
absolute ages depend on the mixing length calibration and
on the transformations from effective temperatures
to colours. When all the problems related to the $T_{\rm eff}$
determination and the conversion to observed colours are minimized, as
in the case of clusters with similar metallicities, the horizontal
method turns out to be suitable for accurately determining the
relative ages of clusters (VandenBerg et al.\ 1990, Stetson et al.\ 1996).
Since our goal is to study the distribution of ages of a well
populated sample of halo GCs, one has to deal with clusters with very
different HB types and with photometries not always extended up to the
HB, or showing only a scarcely populated or blue HB.
Therefore, we have to apply a combination of both methods for getting
the ages of the cluster sample.
We have collected published CCD data for 25 halo clusters
(see Table~1), which span a wide range of metallicities,
galactocentric distances and HB types, divided into four groups according
to their metallicity.
The first group spans the range
$-2.1\leq{\rm [M/H]}<-1.6$ (metal poor clusters), the second
$-1.6\leq{\rm [M/H]}<-1.3$ (intermediate metal poor clusters), the third
$-1.3\leq{\rm [M/H]}<-0.9$ (intermediate metal rich clusters) and the
fourth $-0.9\leq{\rm [M/H]}<-0.6$ (metal rich clusters). The adopted
$\rm [Fe/H]$ values come from Zinn (1985), and the global metallicity has
been obtained considering an average $[\alpha{\rm/Fe}]=0.3$ and
${\rm [M/H]}={\rm[Fe/H]}+0.2$ according to the discussion in Paper~I. The
metallicity difference among the clusters in each group is about a
factor of 2. In the case of Rup106, Arp2 and Ter7, for which there
are no data in Zinn (1985), we have considered the $\rm[Fe/H]$ estimates
by Buonanno et al.\ (1993, 1995a, 1995b), to which we have added the
contribution of the $\alpha$-elements
\footnote{Carretta \& Gratton (1997) recently have
recalibrated the Zinn \& West (1984) $\rm [Fe/H]$ scale, which corresponds to
the Zinn (1985) metallicity scale for almost all GCs in our sample.
The new calibration is based on a homogeneous set of cluster $\rm
[Fe/H]$ determinations from high resolution spectroscopic data; it provides
$\rm [Fe/H]$ values for the clusters in our sample
that are on average 0.20 dex higher than the ones we used.
This simply corresponds to an almost constant shift
of the $\rm [M/H]$ ranges adopted for our GCs groups,
but the membership of each single group does not change
and similarly the absolute and relative ages we derive are only very
marginally affected by the choice of this new scale (see
Sect.~3, which is about age determination).},
assuming that such an enrichment exists for these clusters as well.
We have considered only clusters with photometries that show at least
MS, TO and RGB, and that permit a clear determination
of the TO position (within an error $\leq \pm$0.15 mag).
In each group one cluster (or two clusters, if possible) is selected as
the ``reference'' one, and its absolute age is determined directly
by means of the vertical method; the photometry of the reference cluster
has to show not only a clearly defined TO position, but also
the RGB and a well-populated HB of such morphology that the ZAHB level
can be safely determined.
Thus, CMD morphology is more important than the
overall quality of the photometry for a cluster to be suitable as a
reference cluster.
Within each group the
relative ages with respect to the reference cluster are determined by
means of the horizontal method. Where there are two reference clusters the
age difference from the vertical method can be cross-checked with that
derived by means of the horizontal one.
\subsection{Absolute age determination}
The advantage of using the vertical method for determining the
absolute age of a cluster is that it is largely independent
of all uncertainties
connected with the calculation of effective temperatures and their
conversion to colours. It does not depend on the reddening and on the
assumed distance modulus (the same holds, of course, also for the
horizontal method), although the models yield a distance scale (by
means of the comparison between observed and theoretical ZAHB level),
which can be compared to independently determined values.
To obtain the
cluster age, the procedure is the following: from the observed
brightness of HB stars the apparent ZAHB brightness is derived (see
below); with the TO-brightness as given by the observers, $\Delta V$
is determined uniquely and is compared to theoretical predictions
of $\Delta V$ as a function of age for isochrones and ZAHB models of the
appropriate metallicity. These steps are sufficient to find the
cluster age.
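Schematically, the final step is a one-dimensional interpolation (a minimal sketch with invented numbers, purely illustrative; the actual $\Delta V$--age relations follow from our isochrones and ZAHB models):
\begin{verbatim}
import numpy as np

# hypothetical theoretical Delta V(age) relation at fixed [M/H]
ages      = np.array([ 8.0, 10.0, 12.0, 14.0, 16.0])  # Gyr (illustrative)
dv_theory = np.array([3.35, 3.45, 3.54, 3.62, 3.69])  # mag (illustrative)

v_to, v_zahb = 19.10, 15.60      # observed apparent magnitudes (made up)
dv_obs = v_to - v_zahb           # reddening- and distance-independent

age = np.interp(dv_obs, dv_theory, ages)   # Delta V grows with age
print(f"Delta V = {dv_obs:.2f} mag -> age = {age:.1f} Gyr")
\end{verbatim}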
However, one can go beyond this to check the reliability of the
results. First, the difference between observed apparent and
theoretical absolute ZAHB brightness gives the distance modulus
following from our models, which can be compared to independent
determinations from other distance
indicators; thus, the distance modulus provides an independent
way to assess the reliability of the derived ages.
Second, the isochrone can be compared to the CMD. Since
age and $(m-M)_V$ already have been fixed, isochrones (and ZAHB models)
remain to be shifted in
colour to match the observed MS. This shift corresponds to the
cluster reddening $E(B-V)$.
The overall fit to the
observed CMD may serve as an additional qualitative indicator. There is,
however, the following point to be taken into account:
since our age determination method does not rest on detailed isochrone
fitting, we have taken
the metallicity directly from the literature, without trying
to improve the fit by exploring the error range in [M/H].
Changing the metallicity within the allowed range can improve the
isochrone fit (see Sect.~3.2) without affecting appreciably the
determined age.
An important step in the vertical method is therefore to fix the zero-age level
of the observed HB. In Paper~I we selected two clusters (M68 and M15) with
well-populated HBs and a sufficiently large number of RR~Lyrae
variables.
With the mean brightness ($\langle V_{\rm RR}\rangle$) of
the variables fixed, we derived the zero-age level using the relation
by Carney et al.\ (1992):
\begin{equation}
V_{\rm ZAHB}=\langle V_{\rm RR}\rangle+0.05[{\rm Fe/H}]+0.20
\label{vzahb}
\end{equation}
We have also demonstrated in Paper~I
that (assuming a constant helium content of $Y=0.23$) our theoretical
relation between ZAHB brightness (taken at $\log T_{\rm eff}=3.85$) and
$[{\rm Fe/H}]$ together with Eq.~\ref{vzahb} agrees nicely with
that found empirically by Clementini et al.\ (1995) and also supports
that of Walker (1992a). The reader is referred to Paper~I
for a deeper discussion of the problem of the `true'
observational relation between RR~Lyrae luminosities and metallicity.
In the present paper we consider an initial helium
abundance varying with $Z$, but the difference with the ZAHB
luminosities obtained in Paper~I is always less than 0.04
mag (reached at the highest metallicities considered).
In Fig.~\ref{theorhb} we show the final
relation between ZAHB luminosity and $[{\rm Fe/H}]$ from our
stellar models; in the same figure the
empirical relations by Clementini et al.\ (1995 - when considering
only ZAHB objects) and Walker (1992a) are also displayed (upper and
lower limits according to the errors given by the authors). The
agreement between the ZAHB theoretical models and both empirical
determinations is evident.
Very recently, first results on the absolute luminosities of HB stars
(Feast \& Catchpole 1997; de Boer et al.\ 1997) and
subdwarf stars (Reid 1997; Gratton et al.\ 1997)
based on HIPPARCOS data have appeared. The implications of the latter
papers for the distances to M68 and M5 will be discussed in the
respective sections.
Feast \& Catchpole (1997)
used the trigonometric parallaxes of galactic Cepheid variables for
recalibrating the zero-point of the period-luminosity relation. After
applying a correction for metallicity effects, they derive a distance
modulus of $18.70\pm0.10$ for the LMC, almost 0.20 mag higher than the
value commonly used, based on a previous Cepheids calibration and on
other distance indicators (see, e.g. Walker 1992a).
This distance modulus, coupled with the RR Lyrae observations by
Walker (1992a) for LMC clusters, provides a mean absolute magnitude
$\langle V_{\rm RR}\rangle =0.25\pm0.10$ at
${\rm [Fe/H]}=-1.9$ for RR Lyrae stars; this value corresponds, by applying
Eq.~\ref{vzahb}, to $V_{\rm ZAHB}=0.36\pm0.10$. From our theoretical
models we get an
absolute ZAHB luminosity $V_{\rm ZAHB}=0.51$ at the instability strip
for the same metallicity. This means that our theoretical value would
be higher than the upper limit of the observationally
allowed range (0.46) by 0.05 mag.
The opposite conclusion can be reached when considering the results by
de~Boer et al.\ (1997), again based on HIPPARCOS parallaxes;
they present absolute $V$ magnitudes for a group of HB field halo stars,
bluer than the instability strip. We consider for example
the reddest star in their sample (HD161817), which is also the object
with the smallest error on $M_{V}$ (the errors for the other stars are
much bigger, up to 1.25 mag). The observations give $M_{V}=0.72\pm0.25$ and
$(B-V)_{0}$=0.14. The metallicity of the objects is not
given in the paper. By using our theoretical ZAHB models with ${\rm
[Fe/H]}=-1.03$
(as done by the authors when comparing their data with the ZAHB models
by Dorman 1992) we obtain $M_{V}=0.67$ at $(B-V)_{0}$=0.14. If we
consider the models with ${\rm[Fe/H]}=-1.6$, which corresponds
approximately to the average metallicity of field halo stars, we
obtain $M_{V}=0.58$. Both values are in
agreement with the observationally allowed range.
If, however, we take into account the
fact that HD161817 is likely to be evolved from the ZAHB towards higher
luminosities (as also noted by de~Boer et al.\ 1997), then the theoretical
models are more luminous than observationally allowed.
Therefore, the two papers discussed lead to discrepant
results. Compared to Feast \& Catchpole (1997) our predicted distance
moduli appear to be too low; compared to de~Boer et al.\ (1997) they
are too large. The first results based on HIPPARCOS data do not
disprove the reliability of our theoretical ZAHB luminosities.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f1.ps}}
\end{center}
\caption{Theoretical ZAHB luminosities as a function of $\rm[Fe/H]$.
Solid and dashed lines represent the upper and lower envelopes of the
empirical determinations by Clementini et al.\ (1995) and Walker
(1992a)}
\label{theorhb}
\end{figure}
To determine the observational ZAHB brightness, the application of
Eq.~1 requires good HB photometry and the presence of a sufficient
number of RR~Lyr stars. It was not possible for all metallicity groups
to find a cluster fulfilling both requirements.
Therefore, we developed a method to determine the
observational ZAHB level when the HB is well populated. This can be
applied each time the observational HB is populated in the horizontal
part, even if there are no stars in the instability strip.
Theoretically, all observed HB stars should be at least as bright as
the ZAHB. Thus, the lower envelope (for well-populated HBs) to the
observed HB provides a reasonable estimate for the ZAHB luminosity
(Sandage 1990). In practice, however, photometric errors, field stars
and other objects not belonging to the HB will spread out this lower
limit and a more statistical approach is necessary. To this end we
have looked into the brightness-distribution of HB stars in a few
colour bins. For each colour bin count histograms for brightness bins
were created; the brightness bins were typically 0.04-0.05 mag wide
(depending on the HB population). Formally,
we set the ZAHB level to the upper brightness of that bin which shows
a decrease in star counts by a factor $\geq$ 2 and where the brighter
bins contain more than 90\% of all candidate HB stars under consideration. This
is illustrated in Fig.~\ref{hbhisto}
for M5. For all the clusters to which we have applied the vertical
method these two conditions were always
fulfilled by the same luminosity bin. For M68 and M15, our method
reproduces the ZAHB levels at the RR~Lyrae instability strip as
determined by Eq.~\ref{vzahb} (Paper~I) within 0.01 mag, i.e.\ within
the binning error. In this paper we will adopt for M68 the ZAHB
luminosity and the associated formal error (the width of the
luminosity bin) determined with the new method;
the resulting absolute age is the same as in
Paper~I, but the formal error is lower (see Table~1). Note that
the way we define the ZAHB level leads to a systematic overestimate of
the ZAHB brightness of order half the bin width and
consequently to ages slightly too high; the uncertainty, however,
remains below $\approx$ 0.04-0.05 mag.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f2.ps}}
\end{center}
\caption{Brightness distribution of M5 HB stars in the colour region
$0.45<(B-V)<0.65$. The arrow marks the ZAHB level determined by both
the total number of stars above this bin and by the decrease in the
number of stars per bin}
\label{hbhisto}
\end{figure}
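The counting criterion lends itself to a compact implementation; the following sketch reflects our reading of the rule (bin width, the factor-2 drop, the 90\% condition, and the bright bin edge taken as the returned level) and is not the actual reduction code:
\begin{verbatim}
import numpy as np

def zahb_level(v_mags, bin_width=0.05):
    # histogram the HB-star V magnitudes from bright (small V) to faint
    edges = np.arange(v_mags.min(), v_mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(v_mags, bins=edges)
    total = counts.sum()
    for k in range(1, len(counts)):
        drop = counts[k - 1] > 0 and counts[k - 1] >= 2 * counts[k]
        if drop and counts[:k].sum() > 0.9 * total:
            return edges[k]      # bright edge of the drop bin = ZAHB level
    return edges[-1]             # no clear drop found: return faintest edge

# illustrative use: HB stars pile up brighter than V = 15.6 (made-up data)
rng = np.random.default_rng(1)
v = np.concatenate([rng.uniform(15.1, 15.6, 120),   # genuine HB stars
                    rng.uniform(15.6, 15.9, 6)])    # faint stragglers
print(zahb_level(v))   # close to 15.6
\end{verbatim}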
To check the influence of the chemical composition on our
vertical-method age estimates, we have shown in Paper~I
(but see also Salaris et al.\ 1994; Chaboyer 1995)
that a variation of the metallicity by a factor of 2 or the use of an
initial helium abundance $Y$ different by 0.01 in the theoretical
isochrones
changes the derived ages by slightly less than 1 Gyr. When taking
into account also helium and heavy element diffusion, contrary to our
expectations expressed in Paper~I, the vertical-method
ages derived decrease by less than 1 Gyr because of balancing effects
on TO and ZAHB (Castellani et al.\ 1996).
\subsection{Relative age determination}
In deriving the relative GC ages we have applied the horizontal
method, following the procedure presented by VandenBerg et al.\ (1990).
The ridge lines of two clusters are shifted horizontally in order to
match their TO colours, and then vertically to force coincidence
between the main sequences at a position 0.05 mag redder than the TO.
Differences
in the RGB colour (fixed for example at a point 2.5 mag more luminous
than the MS reference point) correspond to age differences derived
from our new theoretical isochrones.
As already discussed by VandenBerg et al.\ (1990), the precise point on the
RGB is of little significance, since the RGBs run essentially parallel
to one another. We have evaluated {\em mean} $\Delta(B-V)$ differences
from the cluster fiducials by considering, when possible, a
magnitude range along the RGB of typically 0.5-1.0 mag (depending on
the extension of the fiducial line), starting approximately from the
point 2.5 mag more luminous than the MS reference point. The formal
error of the relative ages derived by means of this procedure,
estimated from the uncertainty in measuring the shift in the position
of the RGB with respect to the reference cluster is about 0.5 Gyr.
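Schematically, the registration and the measurement of the mean
$\Delta(B-V)$ can be coded as follows (again a Python sketch; the fiducial
lines are assumed to be supplied as vectorised callables, and all names are
illustrative):
\begin{verbatim}
import numpy as np

def horizontal_dbv(ref, cl, dv_rgb=2.5, dv_range=0.75, n=20):
    # ref, cl: dicts with 'to' = (colour, V) of the turn-off,
    # 'ms_v' = V(colour) along the MS and 'rgb_col' = colour(V) along
    # the RGB, each in the observed frame of the respective cluster.
    dcol = ref['to'][0] - cl['to'][0]           # match the TO colours
    c_ms = ref['to'][0] + 0.05                  # MS point 0.05 mag redder
    dv = ref['ms_v'](c_ms) - cl['ms_v'](c_ms - dcol)  # match the MSs
    # sample the RGB starting ~2.5 mag above the MS reference point
    v = ref['ms_v'](c_ms) - dv_rgb - np.linspace(0.0, dv_range, n)
    dbv = ref['rgb_col'](v) - (cl['rgb_col'](v - dv) + dcol)
    return float(np.mean(dbv))
\end{verbatim}
The returned mean colour difference is then converted into an age difference
by means of the theoretical isochrones.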
As for the reliability of the age scaling derived adopting the
$\Delta(B-V)$ technique, the following points have to be mentioned:
\noindent
i) A comparison of the $\Delta(B-V)$ scaling with respect to the age
at fixed global metallicity, by adopting the oxygen-enhanced Bergbusch
\& VandenBerg (1992), the scaled-solar Straniero \& Chieffi
(1991) and our own isochrones shows a good agreement in spite of
different codes, different effective temperature normalization and/or
completely different input physics and heavy element distribution.
The relative ages obtained in the
three cases differ by no more than 20\%.
\noindent
ii) The effect of a variation in the initial helium content is almost
negligible. A variation of $\delta Y= \pm 0.01$ induces a change in the
relative ages of only $\approx 0.15$ Gyr.
\noindent
iii) A variation in the global metallicity by a factor of 2 changes
the relative ages by less than 0.5 Gyr.
\noindent
iv) A global change of the mixing length parameter $\alpha_{\rm MLT}$ for each
metallicity affects {\em absolute} ages derived by means of the
horizontal method much more than the {\em relative} ones. A variation of
$\alpha_{\rm MLT}$ by
$\pm 0.1$ changes the relative ages by 0.10-0.15 Gyr only, while changing
the absolute ages by $\approx$ 1 Gyr.
\noindent
v) The influence of the helium and heavy element diffusion on the
relative ages obtained by means of the horizontal method is almost
negligible (Cassisi 1996, private communication). A check at $Z=0.0002$
shows that the age differences determined with and without including
diffusion agree within 10\%.
\noindent
vi) The scaling of $\Delta(B-V)$ with respect to the age is
independent of the absolute age only for ages higher than a certain
value (depending on the metallicity). Below this limit, which, for
example, is around 11-12 Gyr at $Z=0.0002$, the age
differences depend on the absolute ages assumed for the reference
clusters; it is therefore important to fix the absolute age of the
reference cluster within each group by means of the vertical method.
\section{Cluster ages -- absolute and relative}
In this section we will discuss separately the absolute and relative
age determinations of the clusters within each of the four metallicity
groups. All the selected clusters, their metallicity, HB type,
galactocentric distance ($R_{GC}$), the derived ZAHB luminosity (and the
associated error) for the reference clusters, relative and
absolute ages (with their formal errors), are displayed in Table~1.
The ZAHB luminosities refer to the instability strip for M68 and
to the red boundary of the instability strip for NGC6584, NGC3201 and M5,
which have a well developed HB but no homogeneous RR Lyrae photometry.
In the case of NGC6171 and NGC6652 the ZAHB luminosity corresponds to the red HB.
The HB types come from Chaboyer et al.\ (1996).
An evaluation of
the galactocentric distances (in kpc) has been obtained by
applying the following equation:
\begin{equation}
R_{GC}=\left[(R_{GC}^{\odot})^2+d^2-2\,d\,R_{GC}^{\odot}\cos(l)\cos(b)\right]^{1/2}
\label{galdis}
\end{equation}
where $b$ and $l$ are the cluster galactic coordinates,
$\log d=[(m-M)_{0}+5]/5-3$ with $d$ in kpc (using $A_{V}=3.3\,E(B-V)$), and
the Sun's galactocentric distance is set to $R_{GC}^{\odot}=8.0$ kpc.
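In practice, Eq.~\ref{galdis} and the distance-modulus conversion amount to
the following small helper (a Python sketch with illustrative names):
\begin{verbatim}
import numpy as np

def r_gc(m_M_V, ebv, l_deg, b_deg, r_sun=8.0):
    # Galactocentric distance in kpc from the equation above.
    m_M_0 = m_M_V - 3.3 * ebv                  # A_V = 3.3 E(B-V)
    d = 10.0 ** ((m_M_0 + 5.0) / 5.0 - 3.0)    # distance in kpc
    l, b = np.radians(l_deg), np.radians(b_deg)
    return np.sqrt(r_sun**2 + d**2
                   - 2.0 * d * r_sun * np.cos(l) * np.cos(b))
\end{verbatim}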
The apparent distance moduli were obtained using our theoretical ZAHB
models and the ZAHB luminosities given in Table 1 for the reference
clusters, or the average HB magnitudes given by Chaboyer et al.\
(1996; in the case of NGC6366 the average HB luminosity comes from
Alonso et al.\ 1997), translated into ZAHB luminosities by means of
Eq.~\ref{vzahb}. The reddening values come from our work (and paper I
in the case of M68) for the ``reference'' clusters, from Alonso et
al.\ (1997) for NGC6366, and from Chaboyer et al.\ (1996) for all
others.
A simple estimate of the formal error in the absolute age for the
reference clusters is obtained as in Paper~I, by statistically adding
the formal uncertainties on the ZAHB level (displayed in Table 1) and
on the TO luminosity (as derived from the original papers) in order to
obtain the error in the observed $\Delta(V)$ value; this error is then
transformed into an uncertainty in the age by using the theoretical
isochrones. For the other clusters the formal error in the relative
ages derived by means of the horizontal method (see the previous
section) is statistically added to the error in the age of the
corresponding reference cluster.
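Schematically (assuming that ``statistically adding'' means adding in
quadrature, and with obvious notation for the individual uncertainties),
\begin{equation}
\sigma_{\Delta V}=\left(\sigma_{\rm ZAHB}^{2}+\sigma_{\rm TO}^{2}\right)^{1/2},
\qquad
\sigma_{t}\simeq\left|\frac{\partial t}{\partial(\Delta V)}\right|\,
\sigma_{\Delta V},
\end{equation}
with the derivative evaluated on the theoretical isochrones; for the
non-reference clusters the relative-age error is combined with $\sigma_{t}$
of the reference cluster in the same way.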
At this point we have to comment on the meaning of the formal errors
given in Table~1. These errors have no true statistical meaning and
therefore the results of the statistical analyses of Sect.~4 should be
considered as being indicative and not rigorous. Chaboyer et al.\
(1996), who commented on this, have argued that the errors quoted by
the observers correspond to $1.63 \sigma$, and since these errors
referring to the HB or TO brightness are of a similar kind as those
given in Table~1, our errors could be considered as
corresponding to $1.63 \sigma$. However, we prefer to be more
conservative and assume that they are of the order of $1\sigma$. An
argument in favour of this is that according to
Chaboyer et al.\ (1996) the average $1\sigma$ error for all their
clusters is 0.083 mag -- corresponding to $\approx$1 Gyr, which is a similar age
uncertainty as we give in Table~1.
\begin{table*}
\caption[ ]{Halo Globular Cluster data. The columns display
respectively: cluster name, global metallicity (including
$\alpha$-enhancement), absolute age with the
associated formal error, relative age with respect to the reference
cluster, HB type, galactocentric distance (in kpc),
estimated level of the observational ZAHB (only for reference
clusters). The absolute age is obtained by means of the vertical method for
the reference clusters and by adding the relative ages for
all other clusters in the same group.}
\begin{tabular}{lrrrrrr}
\hline\noalign{\smallskip}
Name &$\rm [M/H]$& Age & Rel. age & HB type & $R_{GC}$ & $V_{zahb}$\\
\noalign{\smallskip} \hline\noalign{\smallskip}
& & \multicolumn{3}{c}{$-2.1\leq{\rm[M/H]}<-1.6$} \\
NGC4590~(M68)& -1.90 & 12.2$\pm$1.0 & & 0.44 & 10.2 & 15.72$\pm$0.04\\
NGC6341~(M92)& -2.04 & 11.8$\pm$1.1 & -0.4 & 0.88 & 9.5 & \\
NGC7099~(M30)& -2.03 & 12.7$\pm$1.1 & 0.5 & 0.88 & 7.3 & \\
NGC7078~(M15)& -1.95 & 12.2$\pm$1.1 & 0.0 & 0.72 & 10.5 & \\
NGC6397 & -1.70 & 12.2$\pm$1.1 & 0.0 & 0.93 & 5.9 & \\
NGC2298 & -1.65 & 12.5$\pm$1.1 & 0.3 & 0.93 & 17.0 & \\
Arp2 & -1.65 & 10.6$\pm$1.1 & -1.6 & 0.86 & 24.4 & \\
Rup106 & -1.65 & 10.1$\pm$1.1 & -2.1 &-0.82 & 18.7 & \\
& & \multicolumn{3}{c}{$-1.6\leq{\rm[M/H]}<-1.3$} \\
NGC6584 & -1.34 & 11.0$\pm$1.1 & &-0.09 & 6.6 & 16.60$\pm$0.05\\
NGC3201 & -1.36 & 10.5$\pm$1.2 & -0.5 & 0.08 & 10.1 & 14.90$\pm$0.05\\
NGC1904~(M79) & -1.49 & 11.0$\pm$1.2 & 0.0 & 0.89 & 19.0 & \\
NGC5272~(M3) & -1.46 & 11.0$\pm$1.2 & 0.0 & 0.08 & 11.9 &\\
NGC6254~(M10)& -1.40 & 11.0$\pm$1.2 & 0.0 & 0.94 & 4.7 & \\
NGC6752 & -1.34 & 10.5$\pm$1.2 & -0.5 & 1.00 & 5.2 & \\
NGC7492 & -1.31 & 11.0$\pm$1.2 & 0.0 & 0.90 & 24.8 & \\
& & \multicolumn{3}{c}{$-1.3\leq{\rm[M/H]}<-0.9$} \\
NGC5904~(M5) & -1.20 & 10.9$\pm$0.8 & & 0.37 & 6.2 & 15.15$\pm$0.05\\
Pal5 & -1.27 & 9.3$\pm$0.9 & -1.6 & -0.40 & 17.3 & \\
NGC288 & -1.20 & 9.8$\pm$0.9 & -1.1 & 0.95 & 11.7 & \\
NGC1851 & -1.13 & 8.9$\pm$0.9 & -2.0 &-0.33 & 17.6 & \\
NGC362 & -1.07 & 9.7$\pm$0.9 & -1.2 &-0.87 & 9.2 & \\
Pal12 & -0.94 & 7.5$\pm$0.9 & -4.0 &-1.00 & 17.0 & \\
& & \multicolumn{3}{c}{$-0.9\leq{\rm[M/H]}<-0.6$}\\
NGC6171~(M107) & -0.79 & 11.0$\pm$1.1 & &-0.76 & 3.6 & 15.72$\pm$0.04\\
NGC6652 & -0.69 & 8.0$\pm$1.2 & &-1.00 & 1.6 & 15.95$\pm$0.05\\
Ter7 & -0.80 & 6.5$\pm$1.2 & -4.5 &-1.00 & 27.5 & \\
NGC6366 & -0.79 & 13.2$\pm$1.2 & 2.2 &-1.00 & 4.9 & \\
\noalign{\smallskip} \hline
\end{tabular}
\end{table*}
\subsection{Metal-poor clusters: $-2.1\leq{\rm[M/H]}<-1.6$}
The clusters belonging to this group are M68 (Walker 1994), M15
(Durrell \& Harris 1993), M92 (Stetson \& Harris 1988), M30 (Richer
et al.\ 1988), NGC6397 (Buonanno et al.\
1989), NGC2298 (Janes \& Heasley 1988), Rup106 (Buonanno et al.\ 1993)
and Arp2 (Buonanno et al.\ 1995a).
The reference cluster is M68, and its age determined as in Paper~I by
means of the vertical method is 12.2$\pm$1.0 Gyr.
The distance modulus we derive from our ZAHB models is $15.26\pm0.06$.
The relative ages of the other clusters with respect to M68 have been
derived by means of the horizontal method and are displayed in
Table~1.
Recently, Reid (1997) has used HIPPARCOS parallaxes for nearby
metal-poor subdwarfs to determine improved absolute V magnitudes,
which we used in order to derive the distance modulus by means of the
MS fitting method (see, e.g., Sandquist et al.\ 1996).
We have considered only objects with an error
in the parallax determination of less than 12\% and a metallicity
(including an average $\rm [\alpha/Fe]=0.3$) not higher than $Z\approx 0.0006$,
such that the colour corrections to be applied to this
empirical subdwarf sequence to match the metallicity of M68 are minimized.
In order to have stars representative of the unevolved part of the MS,
only objects fainter than absolute magnitude 5.8 have been selected.
Six stars were found which satisfy all these requirements. Assuming the
M68 metallicity given in Tab.~1 with an error of $\pm0.20$ dex, a
reddening of $0.07\pm0.01$ as given by Walker (1992b), and the colour
corrections given by our isochrones, which we extended for this
purpose down to lower masses, we obtain $(m-M)_{V,subdw}$=15.43$\pm$0.19.
Within the errors this value, obtained by using the subdwarfs with
HIPPARCOS parallaxes, agrees with that obtained from the ZAHB.
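The fitting step itself reduces to an average of magnitude offsets once the
subdwarf colours have been corrected to the cluster metallicity (a schematic
Python fragment; the names and the interface are assumptions):
\begin{verbatim}
import numpy as np

def ms_fitting_modulus(fid_col, fid_v, sub_col, sub_MV, ebv):
    # fid_col, fid_v: observed cluster fiducial (colour increasing);
    # sub_col, sub_MV: subdwarf intrinsic colours (already corrected
    # to the cluster metallicity) and absolute magnitudes.
    v_cl = np.interp(np.asarray(sub_col) + ebv, fid_col, fid_v)
    return float(np.mean(v_cl - sub_MV))       # apparent (m-M)_V
\end{verbatim}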
Gratton et al.\ (1997) performed a similar study based on high precision
trigonometric parallaxes from HIPPARCOS, coupled with accurate high resolution
spectroscopic determinations of $\rm [Fe/H]$ and $\rm [\alpha/Fe]$ for a sample
of about 100 subdwarfs. The average $\alpha$-element enhancement that
they determine is about 0.3 dex for ${\rm [Fe/H]}<-0.5$.
With these data they define the absolute
location of the empirical MS as a function of $\rm [M/H]$, and determine the
distances to 9 GCs by means of the MS fitting method, using a relation
for the scaling of the $(B-V)$-colour of the empirical MS with respect
to the metallicity that matches the observations and is
in good agreement with the one derived from our models.
Among their sample of GCs there are two of our template clusters,
namely M68 and M5 (see below). For M68 they get
$(m-M)_{V,subdw}=15.37\pm0.10$ to be compared with our value of
15.26, which becomes $(m-M)_{V}=15.23\pm0.06$, if we take into account
the slightly higher metallicities used by Gratton et al.\
(1997). Thus, also this distance modulus derived from HIPPARCOS
subdwarfs agrees within the errors with our value.
As an example for the determination of relative ages we show in
Fig.~\ref{hor1group1} the fiducial lines of NGC2298
and M92 registered to that of M68 as described in the previous
section; the two dashed
lines parallel to the RGB of M68 correspond to an age variation of
$\pm1$ Gyr with respect to the age of M68. When neglecting Rup106
and Arp2, this group is remarkably homogeneous in age (an
occurrence already discussed by VandenBerg et al.\ 1990 and Straniero \&
Chieffi 1991), the maximum age difference with respect to M68 being
$\approx 0.5$ Gyr.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f4.ps}}
\end{center}
\caption{Comparison of CMD ridge lines registered to that of M68 as
explained in the
text. The dashed lines on both sides of the M68 RGB indicate an age
difference of $\pm 1$ Gyr with respect to this cluster}
\label{hor1group1}
\end{figure}
Rup106 and Arp2 have been included in this group following the
metallicity determinations by Buonanno et al.\ (1993, 1995a), which
are based on
the characteristics of the observed cluster RGBs. The final value for
[M/H] has been obtained by adding the contribution of the
$\alpha$-elements as for the other clusters. The main lines of
Rup106, Arp2 and M68 are displayed in Fig.~\ref{hor2group1}, shifted as
before; Rup106 and Arp2 appear to be younger by
$\approx 2$ Gyr with respect to M68. This age difference is only half
as large as that claimed by Buonanno et al.\ (1993, 1995a) with
respect to an ``average'' metal-poor cluster, obtained by averaging the
fiducial lines of M92 (Stetson \& Harris 1988), M68 (McClure et al.\
1987), NGC6397 (Buonanno et al.\ 1989), M15 (Fahlman et al.\ 1985)
and M30 (Richer et al.\ 1988).
In this work we have compared Rup106 and Arp2 with the new M68
photometry by Walker (1994), and the differences in the average
$\Delta(B-V)$ values of Rup106 and Arp2 with respect to M68 are,
respectively, +0.041 mag and +0.030 mag, almost coincident with
the values +0.041 mag and +0.028 mag found by Buonanno et al.\ (1993,
1995a). The smaller age difference is due to the fact that the
relative ages determined with the horizontal method depend on the
absolute age of the template cluster for the age range we are dealing
with (the same behaviour is found when considering, for example, the
Straniero \& Chieffi 1991 or the Bergbusch \& Vandenberg 1992 isochrones).
If an age of 16 Gyr were assumed for M68, we would find
that Rup106 is younger than M68 by $\approx$4 Gyr,
and that Arp2 is $\approx$1 Gyr older than Rup106, in
good agreement with
the results found by Buonanno et al.\ (1995a).
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f5.ps}}
\end{center}
\caption{As Fig.~\ref{hor1group1}, but for Rup106 and Arp2; in this case the dashed
lines correspond
to age differences of +1, -1 and -2 Gyr with respect to M68}
\label{hor2group1}
\end{figure}
As an independent test of the reliability of the relative ages of
Rup106 and Arp2 with respect to M68 as derived from the horizontal
method, we have checked whether the same age difference is
consistent with that obtained from the vertical method (in a very similar
way as in Buonanno et al.\ 1993). As displayed in Fig.~\ref{M68Rup106}, we have
considered the Rup106 photometry (the Arp2 photometry shows only a very
poorly populated blue HB), and we have shifted horizontally and
vertically the CMD and ridge line in order to superimpose them on the
HB and RGB of M68; note the almost coincident shapes of
the RGBs, which indicate a very similar metallicity for these two
clusters.
The TO luminosities given by the observers are 21.05 mag
for Rup106 and 19.05 mag for M68; the vertical shift applied to Rup106
is -2.2 mag, and the horizontal one is -0.17 mag.
The difference in the TO luminosities gives the age
difference from the vertical method. Shifted in this way, the TO of
Rup106 differs by 0.2 mag (Rup106 TO being more luminous) with respect
to the M68 TO, and correspondingly Rup106 is $\approx 1.7$ Gyr younger
than M68, in very good agreement with the value derived from the
horizontal method.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f6.ps}}
\end{center}
\caption{Rup106 CMD and ridge line shifted in order to superimpose RGB
and HB to the M68 ones. The cluster TO luminosities are indicated}
\label{M68Rup106}
\end{figure}
\subsection{Intermediate metal-poor clusters: $-1.6\leq{\rm[M/H]} <-1.3$}
In this metallicity range we have considered the following clusters:
NGC6584 (Sarajedini \& Forrester 1995),
M3 (Ferraro et al.\ 1996),
M79 (Ferraro et al.\ 1992), NGC6752 (VandenBerg et al.\ 1990),
NGC7492 (Cot\'e et al.\ 1991), M10 (Hurley et al.\ 1989)
and NGC3201 (Covino \& Ortolani 1997).
The reference cluster is NGC6584. From the vertical method, using a
metallicity $Z=0.001$, we derive $t=11.0\pm 1.1$ Gyr (see
Fig.~\ref{N6584fit}); we obtain
an apparent distance modulus of $(m-M)_{V} =16.01\pm 0.05$ and a
reddening of $E(B-V)=0.13$, in agreement with previous estimates ranging
between 0.07 and 0.15. By applying the horizontal method, we derive
age differences not bigger than 0.5 Gyr for all clusters in the
sample (see Table~1).
In Fig.~\ref{N6584fit} (as in Fig.~\ref{N6171fit})
the observational data appear to be quantized; this is due to the fact
that the
available files with the photometric data provide V and (B-V) values
with only two decimal digits. However, the fiducial line and the
TO luminosity we use are the ones provided in the cited papers and were
derived by the authors using the original data with more than two
decimal digits. Moreover, this quantization does not affect the derived
observational value of the ZAHB brightness, which is determined with an
error of typically $\pm$0.05 mag.
In the case of NGC6584
we have verified that using a metallicity of $Z=0.0006$, which
corresponds to the lower boundary of the error range associated
with the metallicity determination (an error on $\rm [M/H]$ of 0.2 dex),
the isochrone fit can be improved.
The absolute age is changed by only $\approx 0.5$ Gyr, and
the relative ages determined by means of the horizontal
method are affected much less.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f7.ps}}
\end{center}
\caption{Isochrones for ages between 9 and 12 Gyr and ZAHB theoretical
models compared to the CMD and the ridge line of NGC6584 (Sarajedini
\& Forrester 1995)}
\label{N6584fit}
\end{figure}
We can check the consistency of the horizontal age determination with the
absolute vertical age determination for another cluster with
a well populated horizontal part of the HB, that is NGC3201 (Covino \& Ortolani 1997).
We get an age of $10.0 \pm 1.6$ Gyr from the
vertical
method ($(m-M)_{V}=14.28\pm 0.05$, $E(B-V)=0.25$) adopting $Z=0.001$
(see Fig.~\ref{N3201fit}); this value is in very good agreement with the
age obtained by means of the horizontal method relative to NGC6584
(see Table~1 and Fig.~\ref{hor1group2}).
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f8.ps}}
\end{center}
\caption{Isochrones for ages between 9 and 12 Gyr and ZAHB theoretical
models compared to the CMD and ridge line of NGC3201 (Covino \& Ortolani 1997).
For the sake of clarity, only the ridge line is displayed for the
cluster MS}
\label{N3201fit}
\end{figure}
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f9.ps}}
\end{center}
\caption{The CMD ridge line of NGC3201 registered to
that of NGC6584 as explained in the text. The dashed lines on both sides of
the NGC6584 RGB indicate an age difference of +1 and -1 Gyr with respect to NGC6584}
\label{hor1group2}
\end{figure}
We have also calculated the age of Rup106 in a third way
by assigning it to the
intermediate metal-poor group and by deriving its relative age with
respect to NGC6584. These two clusters differ by $\approx 0.3$ dex in
$\rm[M/H]$, and in principle -- following the criteria adopted in this work
-- could belong to the same metallicity group. Rup106 is measured to be
1.3 Gyr younger than NGC6584; thus
the derived absolute age in this case is $9.7\pm 1.2$ Gyr,
consistent with the value derived in the preceding subsection (see
Tab.~1).
\subsection{Intermediate metal-rich clusters:
$-1.3\leq{\rm[M/H]}<-0.9$}
This group includes M5 (Sandquist et al.\ 1996), NGC1851 (Walker 1992b),
NGC288 (Bolte 1992), NGC362 (VandenBerg et al.\ 1990), Pal12 (Stetson et
al.\ 1989) and Pal5 (Smith et al.\ 1986).
The reference cluster is M5. By applying the vertical method to the
recent photometric data by Sandquist et al.\ (1996), and using $Z=0.0015$
we get an age of
$10.9\pm 0.8$ Gyr (see Fig.~\ref{M5fit}) and $E(B-V)=0.06$,
$(m-M)_{V}=14.55\pm0.05$. If we compute the
true distance modulus, by adopting $A_{V}=3.3\,E(B-V)$, we get
$(m-M)_{0} =14.35 \pm 0.05$, a value that agrees well with the
value of $14.37 \pm 0.18$ found by Storm et al.\ (1994) from the
Baade-Wesselink method for two cluster RR Lyrae stars.
Our $(m-M)_{V}$
is also confirmed by the recent result of Gratton et al.\ (1997),
$(m-M)_{V,subdw}=14.58\pm0.04$, which is based on HIPPARCOS subdwarf
parallaxes (see Sect.~3.1), even if we take into account the slightly
higher metallicity for M5 used by Gratton et al.\ (1997). In this
case, our distance modulus becomes $(m-M)_{V}=14.50\pm0.07$.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f10.ps}}
\end{center}
\caption{Isochrones for ages between 9 and 12 Gyr and ZAHB theoretical
models compared to the CMD and ridge line of M5 (Sandquist et al.\
1996). Only the ridge line is displayed for the
cluster MS, and along the RGB, at luminosities lower than the HB,
only a subsample of stars is shown}
\label{M5fit}
\end{figure}
The relative ages with respect to M5 are displayed in Table 1. All the
other clusters are found to be significantly younger, the youngest one
being Pal12. In particular,
when taking into account the recent photometric study by Bolte (1992)
of NGC288, we find that NGC288 and NGC362 are practically coeval
(see Fig.~\ref{hor1group3}). This confirms the qualitative result by
Stetson et al.\
(1996), who found using essentially the vertical method that
NGC1851, NGC362 and NGC288 should have the same age, thus giving
very strong evidence against age as the second parameter. Our
result is in agreement with their investigation; by using the
horizontal method, the three clusters are found to be coeval within less
than 1 Gyr (Fig.~\ref{hor1group3}).
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f11.ps}}
\end{center}
\caption{Comparison of CMD ridge lines. The dashed lines on both sides
of the M5 RGB indicate an age
difference of +1, -1 and -2 Gyr with respect to the reference cluster
M5}
\label{hor1group3}
\end{figure}
\subsection{Metal-rich clusters: $-0.9 \leq {\rm [M/H]}<-0.6$}
M107 (Ferraro et al.\ 1991), NGC6652 (Ortolani et al.\ 1994),
NGC6366 (Alonso et al.\ 1997) and Ter7
(Buonanno et al.\ 1995b) are the four clusters considered in this
group. The reference cluster is NGC6171 (=M107). By employing isochrones for
$Z=0.004$ we get from the vertical method an age of $11.0 \pm 1.1$ Gyr,
together with $E(B-V)=0.38$ and $(m-M)_{V} =15.02 \pm 0.04$ (see
Fig.~\ref{N6171fit}). The reddening we derive agrees with previous estimates,
ranging between 0.30 and 0.48.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f12.ps}}
\end{center}
\caption{Isochrones for ages between 10 and 12 Gyr and ZAHB
theoretical models compared to the CMD and ridge line of M107
(Ferraro et al.\ 1991)}
\label{N6171fit}
\end{figure}
By applying the horizontal method with
respect to M107 (see Fig.~\ref{hor1group4}), we find a large age
spread within this group: NGC6366 is $\approx 2$ Gyr older than
M107, while Ter7 is $\approx 4.5$ Gyr younger than M107. This is
the largest age spread among all the clusters considered in this
study. However some caution is required when
considering NGC6366, since this cluster (see Harris 1993 and Alonso et al.\ 1997) is
affected by differential reddening; the determination of its relative age with respect
to NGC6171 could therefore be less accurate even if the cluster
appears undoubtedly to be an ``old'' halo GC (see also the discussion
in Alonso et al.\ 1997).
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f13.ps}}
\end{center}
\caption{Comparison of CMD ridge lines.
The dashed lines on both sides of the M107 RGB indicate an
age difference of +2, -2 and -4 Gyr with respect to this cluster}
\label{hor1group4}
\end{figure}
The cluster ridge line for NGC6652 (not provided in the paper by
Ortolani et al.\ 1994) has been derived by determining the
median of the colour distribution within brightness bins. The
resulting TO luminosity is in very good agreement
with the value $V_{TO}$=19.20$\pm$0.15 given by Ortolani et al.\ (1994).
Since the cluster RGB shows a large dispersion in colour and the ridge line for this
CMD region is not well defined, we have considered the ridge line only
up to the subgiant branch. Together with a very
well defined and populated red HB, this is sufficient for directly
estimating the absolute cluster age (as given in Table 1)
by means of the vertical method. By using isochrones for $Z=0.004$ we get
an age of $8.0\pm1.2$ Gyr, $E(B-V)=0.23$, $(m-M)_{V}=15.23$
(see Fig.~\ref{N6652fit}).
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f16.ps}}
\end{center}
\caption{Isochrones for ages between 7 and 10 Gyr
and ZAHB theoretical models compared to the CMD and ridge line of
NGC6652 (Ortolani et al.\ 1994). Only the ridge line is displayed for the
cluster MS}
\label{N6652fit}
\end{figure}
The consistency between the NGC6652 absolute age and
its relative age with respect to M107 derived by means of the horizontal method
can be checked qualitatively by registering the M107
ridge line to that of NGC6652, as shown in Fig.~\ref{hor2group4}.
Since the RGB of NGC6652 shows a large dispersion in colour, it is
not possible to derive a reliable independent estimate of its relative age with respect
to M107, but we can verify the consistency of the relative
RGB positions with the absolute ages.
If we consider the magnitude range $V\approx$ 16-17, where the
NGC6652 RGB appears to be better defined, the M107 fiducial line
lies at the left boundary of the NGC6652 RGB,
while the line corresponding to an age difference of -4 Gyr with
respect to M107 lies to the right of the RGB.
Recalling that the absolute ages of M107 and NGC6652 as
obtained by means of the vertical method are, respectively, 11.0 and
8.0 Gyr, the relative
positions of the RGBs are in qualitative agreement
with the difference in absolute ages.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f17.ps}}
\end{center}
\caption{Relative age of NGC6652 with respect to M107 derived
from the horizontal method.
The dashed lines on the right side of the M107 RGB indicate an
age difference of -2 and -4 Gyr with respect to this cluster}
\label{hor2group4}
\end{figure}
\subsection{Comparison with previous results}
The GC ages displayed in Table~1 can be directly
compared with the results from the work by Richer et al.\ (1996). Their
approach has already been discussed in Sect.~1. They also arranged
the clusters into four groups, according to the cluster $\rm
[Fe/H]$-content. The
metallicity range of each group is very similar to our choice, as are the
metallicities adopted for each cluster. Only in the case of Rup106 and
Arp2 are they substantially higher.
As for the absolute ages, we find that our
values are systematically lower by $\approx 4$ Gyr, due basically to the more
up-to-date stellar models we used (see Paper~I).
The relative ages among the metal-poor GCs in common
with Richer et al.\ (1996) are in agreement with their results within the
formal errors associated with the determination of the relative ages
($\approx 0.5$ Gyr for both this paper and Richer et al.\ 1996).
When considering the second group (intermediate metal-poor clusters),
we have in common with Richer et al.\ (1996) M3, NGC6752, NGC1904 and
NGC7492,
which are coeval within 0.5 Gyr. Richer et al.\ (1996) find the same
result for the first three clusters, while they obtain an age
1.7 Gyr higher for NGC7492. This is surprising, since we are using the same
source of photometric data for example for NGC6752 and NGC7492. The
reason for this difference could be a point on the ridge line
of NGC7492 that is clearly discrepant. It is about 2.5 mag
brighter than the point on the main sequence used for registering all
clusters to the reference one. If only this point is considered for
determining the relative age, one indeed finds that
NGC7492 is around 1.7 Gyr older than NGC6752.
In the intermediate metal-poor group Richer et al.\ (1996) also
include Arp2 and Rup106. They find that Rup106 is younger by 1 Gyr
with respect to Arp2, and by 4 Gyr with respect, for example, to
NGC6752. We also determined the relative age of Rup106 with respect to
clusters of the second group (see Sect.~3.4), and the result is
that it is younger by only $\approx 1.3$ Gyr. As
previously discussed, this result is due to the dependence of the
relative ages obtained by the horizontal method on the absolute age of
the reference cluster, for ages lower than a certain value depending
on the assumed metallicity.
The intermediate metal-rich clusters in common with Richer et al.\
(1996) are M5, NGC288, NGC362, NGC1851 and Pal12. The substantial
difference with their work is that we find NGC1851, NGC288 and NGC362
to be coeval within 1 Gyr (in agreement with the result by Stetson et
al.\ 1996 obtained by the vertical method). This seems to be
due to the use of the new Bolte (1992) photometry for
NGC288. Had we used the NGC288 ridge line from Buonanno et al.\ (1989),
we would have obtained an age difference of $\approx$2 Gyr
with respect to NGC362 (NGC288 being older), in agreement with the results
by Richer et al.\
(1996). Another difference is that we have adopted the very recent M5
photometry by Sandquist et al.\ (1996), which also displays a
well-populated HB, and we find that M5 is older than NGC362 and NGC1851, while
Richer et al.\ find these three clusters to be coeval; this difference again
is due to the different data used. If we use the old data for M5 by
Richer \& Fahlman (1987, as in Richer et al.\ 1996), the results again agree
with Richer et al.\ (1996).
Among the metal-rich clusters there is only Ter7 in common with the
work by Richer et al.\ (1996). They consider 3 other clusters belonging
to the disk GC system, for which there are indications that the
original helium content could be substantially higher than $Y=0.23$ (see
Alonso et al.\ 1997 and references therein).
\section{Discussion}
The results displayed in Table 1 can be used for checking for
the existence of an age spread and an age-metallicity relation
for halo clusters, as well as for
testing the hypothesis that age is the so-called ``second parameter'',
responsible for the HB morphology of galactic GCs.
Recent work by Chaboyer et
al.\ (1996) reaches the conclusion that age is the second parameter,
but the analyses by Richer et al.\ (1996) and Stetson et al.\ (1996)
do not confirm this. Checking Table 1, it is evident that the cluster pairs
Rup106--Arp2 and NGC288--NGC362 have almost the same metallicity
and ages, but completely different HB morphologies. We support
therefore the conclusion that HB morphology must, at least in part,
be due to causes other than only metallicity and age.
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f14.ps}}
\end{center}
\caption{Age (in Gyr) of the 25 clusters in our GCs sample as a
function of their $\rm [M/H]$. The clusters with the circled dots
(Rup106, Arp2, Ter7, Pal12, NGC6366, M107, NGC6652) are
those excluded from the analysis in some cases (see text).
The error on the individual ages (of order $\pm 1$ Gyr) can be found
in Table 1, while the error
on $\rm [M/H]$ is typically of the order of 0.20 dex}
\label{tmetal}
\end{figure}
\begin{figure}
\begin{center}
\mbox{\epsfxsize=0.9\hsize\epsffile{SW_f15.ps}}
\end{center}
\caption{As in the previous figure, but in this case the age is
displayed as a function of the cluster galactocentric distance (in
kpc)}
\label{tdist}
\end{figure}
As for the cluster age distribution, we find that
if we take into account all 25 GCs considered in our investigation (see Table~1,
Fig.~\ref{tmetal}, Fig.~\ref{tdist}),
we obtain an average age of $\langle{\rm t}\rangle=10.6$ Gyr, with a
standard deviation $\sigma$=1.7 Gyr.
However, as discussed in the previous section, the
relative age of NGC6366 with respect to M107
is uncertain, due to differential reddening,
and therefore we will not consider NGC6366 in the following analysis.
Without NGC6366 (24 clusters),
$\langle{\rm t}\rangle=10.5$ Gyr and $\sigma$=1.6 Gyr, very close to
the results for the complete sample.
For the first three metallicity groups the average ages and the dispersions are,
in the order of increasing metallicity, $\langle{\rm t}\rangle=11.8,\,10.9,\,9.4$
Gyr and $\sigma=0.9,\,0.2,\,1.1$
Gyr. In the most metal-rich group we have only three clusters, which have
$\langle{\rm t}\rangle=8.5$ Gyr. The rather large variance for the
low-metallicity group results from the two clusters Arp2 and Rup106.
If we omit these clusters (see below), we obtain $\langle{\rm t}
\rangle=12.3$ Gyr and $\sigma=0.3$ Gyr.
To quantify how much of the age range among the clusters could be due
to errors and whether a real
intrinsic age range exists, we have performed the same statistical test used by
Chaboyer et al.\ (1996). We have calculated an ``expected''
distribution for the assumption of no intrinsic age range by
randomly generating 10000 ages using a Gaussian distribution. The mean
of the distribution was
given by the mean age of the clusters (10.5 Gyr), and the $\sigma$ by
the error on the individual age determinations. This is
repeated for all clusters considered, so that the final
distribution
contains 240000 points. The F-test (Press et al.\ 1992) was then
applied in order to determine if this ``expected''
distribution has the same variance as the age distribution obtained in
our analysis.
As in Chaboyer et al.\ (1996) we state that an age range exists if
the probability that the two distributions have the same variance is
smaller than 5\%.
The F-test rejects the possibility that
the clusters are coeval with a confidence level higher than 99.8\%.
The size of the true age range ($\sigma_{range}$) can be estimated according to
$\sigma_{range}=(\sigma_{obs}^2-\sigma_{exp}^2)^{0.5}$, where $\sigma_{obs}$ is the sigma
of the actual data, and $\sigma_{exp}$ is the sigma of the expected distribution
(Chaboyer et al.\ 1996); we obtain $\sigma_{range}$=1.2 Gyr.
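One possible implementation of this test reads (a Python sketch using
scipy; the arrays \texttt{ages} and \texttt{errors} stand for the Table~1
columns, and the two-sided $p$-value convention is our choice):
\begin{verbatim}
import numpy as np
from scipy.stats import f as f_dist

def coeval_test(ages, errors, n_mc=10000, seed=0):
    # F-test of the observed age variance against the "expected"
    # Monte-Carlo distribution generated under the coeval hypothesis.
    ages = np.asarray(ages)
    rng = np.random.default_rng(seed)
    expected = np.concatenate(
        [rng.normal(ages.mean(), e, n_mc) for e in errors])
    var_obs, var_exp = ages.var(ddof=1), expected.var(ddof=1)
    F = var_obs / var_exp
    dfn, dfd = len(ages) - 1, len(expected) - 1
    p_same = 2.0 * min(f_dist.sf(F, dfn, dfd), f_dist.cdf(F, dfn, dfd))
    sigma_range = np.sqrt(max(var_obs - var_exp, 0.0))
    return F, p_same, sigma_range
\end{verbatim}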
As for an age-metallicity relation, a formal linear fit to the data yields
\begin{equation}
t=(-3.27\pm0.53){\rm [M/H]}+(5.94\pm0.77).
\label{tm1}
\end{equation}
The linear Pearson correlation coefficient is -0.80,
implying that the confidence level for a linear correlation between
age and $\rm [M/H]$ is not high.
The correlation coefficient for the relation between age and $R_{\rm
GC}$ is even lower (-0.34). A visual inspection of Fig.~\ref{tdist}
confirms that there is only an indication
that the youngest clusters, with the exception of NGC6652,
seem to be located in the outer halo.
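The fit and the correlation coefficients quoted in this section correspond
to a standard linear regression; a minimal, unweighted version (with
\texttt{mh} and \texttt{ages} standing for the Table~1 columns) reads:
\begin{verbatim}
from scipy.stats import linregress

res = linregress(mh, ages)     # mh: [M/H] values; ages: ages in Gyr
slope, intercept, r = res.slope, res.intercept, res.rvalue
\end{verbatim}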
To summarize, when considering all the clusters in our sample (except
NGC6366), we find an age spread
among the GCs ($\sigma_{range}$=1.2 Gyr),
but no statistically compelling evidence for either an age-metallicity
or an age-$R_{GC}$ relation. There is however an indication
that the more metal-poor clusters
are on average older than the clusters of higher metallicities, and
that the age spread within each metallicity bin tends to be higher for
increasing metallicities. Our analysis based on new stellar models
therefore confirms the results of Richer et al.\ (1996) in this
respect.
Recently, Lin \& Richer (1992) and Buonanno et al.\ (1994) have
suggested that Pal12, Rup106, Arp2 and Ter7 (which appear to be
younger than other clusters of approximately the same metallicity)
could have been captured by the Milky Way from a companion galaxy, and
therefore represent later infall events. In this case they could not
be indicative of the halo formation phase. This argument is based
mainly on the fact that these four clusters appear to lie along a
single great circle passing through the northern tip of the Magellanic
Stream, thus suggesting a common orbit that could be the result of an
accretion event from a companion galaxy. Although it is clear that
proper motion studies are necessary for a definitive answer, the
argument is reinforced by the fact that Ter7 and Arp2 lie very close
to the Sagittarius Dwarf Galaxy (but see also the discussion in
Chaboyer et al.\ 1996), which is currently being tidally disrupted and
absorbed by the Milky Way.
If this is the case, the previous statistical analysis should be performed
excluding these 4 objects from our halo GCs sample. Furthermore, from
Fig.~\ref{tmetal} it is evident that there are only two clusters of the
metal-rich group left -- M107 and NGC6652 (at respectively 11 and 8 Gyr). In the
following we will give results in brackets for the case when
M107 and NGC6652 are not taken into account either, so that the highest
metallicity clusters are disregarded completely.
The average age of the remaining 20 (18) clusters is then
$\langle {\rm t }\rangle=10.9$ (11.0) Gyr ($\sigma$=1.2 (1.1) Gyr), which is
slightly older, but has a much narrower distribution than the complete
sample. The correlation coefficient between age and metallicity is
-0.82 (-0.86), almost the same as for the 24 clusters.
The F-test reveals that the ``coeval'' hypothesis can be rejected with
a lower confidence level of $\approx$70\% (25\%);
the derived formal $\sigma_{range}$ is equal to 0.5 (0.2) Gyr, less than
half of $\sigma_{range}$ for the complete sample.
We therefore conclude that, once Pal12, Rup106, Arp2 and Ter7 are excluded,
the genuine halo clusters formed within $\approx 1.5$ Gyr of each
other. Our smallest sample is therefore coeval, and it cannot be excluded
that the sample including M107 and NGC6652 is coeval as well.
The formal linear regression to this sample gives
$t=(-2.71\pm0.45){\rm[M/H]}+(7.03\pm0.66)$.
\section{Summary}
Our results can be summarized as follows:
\begin{enumerate}
\item We have determined the ages of a sample of 25 halo clusters by use of
stellar models which take into account all recent improvements
in stellar input physics data.
\item The method for obtaining ages is a combination of the
$\Delta(V)$-method for a few ``reference'' clusters and the $\Delta(B-V)$-method
for other clusters of a similar metallicity. The clusters are split
into four groups of similar metallicity.
\item Our results, summarized in Table~1, confirm that GCs are
$\approx 12\pm 1$ Gyr old or younger, as we already claimed in
Paper~I for a subset of three metal-poor clusters. The lower ages as
compared to previous investigations are due to our new stellar physics
input and our purely theoretical approach for the HB luminosities.
\item Since age differences depend on absolute age, we obtain smaller
age differences for a given $\Delta(B-V)$. Therefore our sample
becomes more homogeneous even if we use the same original data as in
previous papers. This applies, e.g., to Rup106 and Arp2 with respect
to M68.
\item Several cross checks (e.g.\ for Rup106) result in consistent
ages.
\item NGC6366 is the oldest cluster of our sample with $13.2\pm1.2$
Gyr, but the photometry for this cluster might be affected by
differential reddening, so we excluded it from the analysis in
Sect.~4. The next oldest cluster is M30 with $12.7\pm1.1$ Gyr.
\item We confirm earlier results that the metal-poor clusters
form a very coeval group and that the more metal-rich groups show a
larger age spread.
\item For the whole sample, which has not been selected on any
specific grounds except that the photometric quality should allow the
application of our age determination methods, we obtain a mean age of
$10.6\pm 1.7$ Gyr ($1\sigma$-error) and reject the assumption that all
clusters are coeval. A linear correlation between metallicity and age
is not confirmed.
\item For samples with all ``peculiar'' and metal-rich clusters
excluded, the mean age becomes better defined ($11.0\pm1.1$ Gyr).
The age range of a large sample of clusters with the same average age
and individual errors as ours would be only 0.2 Gyr (1$\sigma$
range). The clusters in this sample are coeval. This is
still true, if we include M107 and NGC6652, although the probability
for this hypothesis is lower.
Whether or not the assumption of a common age can be rejected safely,
depends critically on the inclusion of individual clusters.
Clearly, the sample size is too small for any reliable
conclusions.
\item There is no evidence for any correlation between age and
galactocentric distance.
\item Known counter-examples against the hypothesis that age is the second
parameter affecting HB morphology are confirmed. NGC288 and NGC362
have the same age. This result does not agree with Richer et al.\
(1996), and is due to new photometric data of NGC288, but it is in agreement
with the results by Stetson et al.\ (1996).
\item Other differences with respect to Richer et al.\ (1996) can be
explained in terms of our new models, lower absolute ages, or different
original data.
\end{enumerate}
We confirm and substantiate the results of Richer et al.\
(1996) in large parts, although differences for individual clusters
exist, and we determine significantly lower ages for all clusters.
According to both investigations the more metal-poor GCs all formed
within 1 Gyr and throughout the whole halo. The more metal-rich
systems possibly formed another Gyr later and over a somewhat longer
timescale. The cluster population is contaminated by a few clusters
not fitting into this simple picture. Consistent and high-quality
photometric data for a large sample of clusters is needed to confirm
our results.
\begin{acknowledgements}
We are grateful to Drs.~Alexander and Rogers for computing
special opacity tables for our purposes, and to E.L. Sandquist for providing us
with his excellent M5 photometry before publication. S. Covino and S. Ortolani
are acknowledged for providing us with their NGC3201 photometry.
It is a pleasure
to thank M.~Bartelmann, J.~Guerrero and A.~Piersimoni for helpful
discussions and D.~Syer for polishing our English.
\end{acknowledgements}
\section{Introduction}
In a recent paper\cite{Koike:2022ddx}, three of the present authors studied
the transverse polarization of hyperons
produced in semi-inclusive deep inelastic scattering, $ep\to e\Lambda^\uparrow X$.
For large-$P_T$ hyperon production, this process can be analyzed in the framework of the
collinear factorization, in which the
polarization appears as a twist-3 observable in the absence of a leading twist-2 effect.
For $ep\to e\Lambda^\uparrow X$, the responsible twist-3 effects are
(i) the twist-3 distribution functions (DFs) in the initial proton combined with the twist-2
transversity fragmentation
function (FF) for $\Lambda$ and (ii) the twist-3 FFs for the polarized hyperon
combined with the twist-2 unpolarized
parton DFs in the proton. The twist-3 FFs in (ii) are chiral-even, and both
(a) quark and (b) gluon types of twist-3 FFs contribute.
In \cite{Koike:2022ddx}, the twist-3 polarized cross section for $ep\to e\Lambda^\uparrow X$
from the above (i) and (ii)(a) was derived in the leading order (LO) with respect to the QCD
coupling constant. As a sequel to \cite{Koike:2022ddx}, we will derive in this paper the
LO cross section
from (ii)(b), which completes the LO twist-3 cross section for this process.
Since gluons are abundant in the collision environment and
the twist-3 quark and gluon FFs mix under renormalization, the effect of (ii)(b)
could be as important as (ii)(a). We also recall that the twist-3 fragmentation effect
is important to understand the single transverse-spin asymmetry
in $p^\uparrow p\to \pi X$\cite{Kanazawa:2014dca,Gamberg:2017gle}, which shows a similar
rising asymmetry at large $x_F$ as the polarization in $pp\to\Lambda^\uparrow X$.
Our present study has a direct relevance to
the hyperon polarization phenomenon in the future Electron-Ion-Collider (EIC) experiment.
Here we make some remarks on the phenomenological use of the twist-3 cross section.
As we will see, it contains several unknown nonperturbative
functions, the determination of which requires a global analysis of
data for various processes, such as
$ep\to e\Lambda^\uparrow X$,
$e^+ e^-\to \Lambda^\uparrow X$ and $pp\to \Lambda^\uparrow X$,
combined with an appropriate modelling of those functions.
We also recall that
in the small-$P_T$ region
the transverse-momentum-dependent (TMD) factorization holds for
$ep\to e\Lambda^\uparrow X$ and
$e^+ e^-\to \Lambda^\uparrow X$, and
we anticipate that the two frameworks
match in the intermediate region of $P_T$\footnote{Study on this matching will be reported elsewhere.}
as for the case of $p^\uparrow p\to\ell^+\ell^- X$\cite{Ji:2006ub} and
$ep^\uparrow \to e\pi X$
\cite{Ji:2006br,Koike:2007dg,Zhou:2008fb}.
Information on the TMD functions obtained from the analysis of those small-$P_T$ data will also
help to
constrain the twist-3 functions owing to the
relations between the TMD functions and the twist-3 functions\cite{Kanazawa:2015ajw,Koike:2019zxc}.
In this connection we mention the
recent data on $e^+ e^-\to \Lambda^\uparrow X$
at Belle\cite{Belle:2018ttu} and the phenomenological analyses of the data
in terms of the TMD factorization\cite{DAlesio:2020wjq,Callos:2020qtu,Chen:2021hdn, Chen:2021zrr}.
These studies will be useful
to analyze the EIC data at large-$P_T$ in terms of the twist-3 cross section derived in this work.
The formalism for calculating the twist-3 gluon FF contribution is very complicated and
was completed only recently for a similar process
in $pp$ collisions, $pp\to\Lambda^\uparrow X$\cite{Koike:2021awj}. Here we apply the method to
$ep\to e\Lambda^\uparrow X$.
Since the kinematics for this process was described in \cite{Koike:2022ddx} and
the method is in parallel to the case for $pp\to\Lambda^\uparrow X$\cite{Koike:2021awj},
our presentation in this paper will be brief, referring to those papers for the details.
The remainder of this paper is organized as follows: In section 2,
we introduce the twist-3 gluon FFs relevant in our study. In section 3,
we briefly
describe the formalism for calculating the
twist-3 gluon FF contribution to $ep\to e\Lambda^\uparrow X$
and present the LO cross section.
Section 4 is devoted to a brief summary.
\section{Twist-3 gluon fragmentation functions}
\subsection{Three types of twist-3 gluon FFs and $q\bar{q}g$ FFs}
Here we list the twist-3 gluon FFs for spin-1/2 hyperon which are necessary to
derive the twist-3 cross section for $ep\to e\Lambda^\uparrow X$
\cite{Koike:2019zxc,Koike:2021awj}.
They are classified into the intrinsic, kinematical and dynamical FFs.
First, the intrinsic gluon FFs are defined as the lightcone correlators of
the gluon's field strength $F^{\mu\nu}_a$ with color index $a$
\cite{Koike:2019zxc,Mulders:2000sh}:
\begin{align}
\label{intrinsic}
&\widehat\Gamma^{\alpha \beta}(z) = \frac{1}{N^2-1} \sum_{X}
\int \frac{\drm \lambda}{2\pi} \Exp{-i \lambda/z}
\bra{0}(\com{\infty w}{0}F^{w \beta}(0))_a
\ket{hX}\bra{hX}
(F^{w \alpha}(\lambda w)\com{\lambda w}{\infty w})_a \ket{0}{\nonumber}\\
&= -g^{\alpha \beta}_\perp \widehat G(z)
- i \epsilon^{P_h w \alpha \beta}(S \cdot w)\Delta \widehat G(z)
+ M_h \epsilon^{P_h w S_\perp \{\alpha}w^{\beta\}}\Delta\widehat G_{3\bar T}(z)
+ i M_h \epsilon^{\alpha \beta w S_\perp }
\Delta \widehat G_{3T}(z)
+\cdots,
\end{align}
where $P_h$ is the four momentum of the hyperon with its mass $M_h$.
$P_h^\mu$ can be regarded as lightlike in the twist-3 accuracy and
$w^\mu$ is another lightlike vector satisfying $P_h \cdot w=1$.
$S^\mu$ is the spin vector of the hyperon normalized as $S^2=-M_h^2$ and
can be decomposed as
$S^\mu = (S \cdot w) P_h^\mu + (S \cdot P_h)w^\mu + M_h S_\perp^\mu$
with the transverse spin vector $S_\perp^\mu$ ($S_\perp^2=-1$).
$g^{\alpha\beta}_\perp \equiv g^{\alpha\beta} - P_h^\alpha w^\beta - P_h^\beta w^\alpha$,
$N=3$ is the number of colors for $SU(N)$ and
the ellipsis denotes twist-4 or higher.
$\ket{h}$ denotes the hyperon state.
$[\lambda w, \mu w] \equiv \mathcal{P}
\exp{\left[i g \int_{\mu}^{\lambda} \drm \tau w \cdot A(\tau w)\right]}$
is the gauge-link operator which guarantees gauge invariance of the correlation function.
We use the convention for the Levi-Civita symbol as $\epsilon^{0123} =1$.
The shorthand notation
$\epsilon^{P_h w \alpha \beta}
\equiv \epsilon^{\mu\nu\alpha\beta}{P_h}_\mu w_\nu$, etc. is used,
and $\{ \alpha\,\beta \}$ denotes the symmetrization of Lorentz indices.
$\widehat G(z)$ and $\Delta \widehat G(z)$ are
twist-2 unpolarized and helicity FFs, respectively, and
$\Delta \widehat G_{3 \bar T}(z)$ and $\Delta \widehat G_{3T}(z)$ are intrinsic twist-3 FFs.
All FFs in (\ref{intrinsic}) are defined to be real and have a support on $0<z<1$.
$\Delta \widehat G_{3 \bar T}(z)$ is na\"{i}vely T-odd, and contributes to the hyperon polarization.
Second, the kinematical gluon FFs are defined from the derivative of the
correlation functions for the intrinsic one:
\begin{align}
\label{kinematical}
\widehat \Gamma^{\alpha \beta \gamma}_\partial (z) &=
\frac{1}{N^2 -1}\sum_{X} \int \frac{\drm \lambda}{2 \pi}
\Exp{- i \lambda /z}
\bra{0}(\com{\infty w}{0}F^{w \beta}(0))_a \ket{hX}\bra{hX}
(F^{w \alpha}(\lambda w)\com{\lambda w}{\infty w})_a
\ket{0}\overset{\leftarrow}{\partial}{}^\gamma {\nonumber}\\
& = -i \frac{M_h}{2}g_\perp^{\alpha \beta}
\epsilon^{P_h w S_\perp \gamma}\widehat G_T^{(1)}(z)
+ \frac{M_h}{2}\epsilon^{P_h w \alpha \beta}S_\perp^{\gamma}
\Delta \widehat G^{(1)}_T (z)
{\nonumber}\\
&
-i \frac{M_h}{8}\left(
\epsilon^{P_h w S_\perp \{\alpha}g_\perp^{\beta \} \gamma}
+\epsilon^{P_h w \gamma \{\alpha}S^{\beta \}}_\perp
\right) \Delta \widehat H^{(1)}_T (z) +\cdots,
\end{align}
where
\begin{align}
F^{w\alpha}(\lambda w)[\lambda w, \infty w]\ket{0} \overset{\leftarrow}{\partial}{}^\gamma
\equiv
\lim_{\xi \to 0} \frac{\drm}{\drm \xi_\gamma}
F^{w\alpha}(\lambda w + \xi)[\lambda w + \xi , \infty w + \xi] \ket{0} .
\end{align}
There are three twist-3 gluonic
kinematical FFs, $\widehat G^{(1)}_T (z)$, $\Delta \widehat G^{(1)}_T (z)$ and $\Delta \widehat H^{(1)}_T (z)$, which
are real functions and have a support on $0<z<1$.
Among them, $\widehat G^{(1)}_T (z)$ and $\Delta \widehat H^{(1)}_T (z)$
are na\"{i}vely T-odd contributing to the hyperon polarization, while $\Delta \widehat G^{(1)}_T (z)$ is na\"{i}vely T-even.
They can also be written as the $k^2_T / M_h^2$-moment of the TMD FFs
\cite{Mulders:2000sh}.
Third, the dynamical gluon FFs are defined from the 3-gluon correlators.
Contraction of color indices with
two structure constants for color $SU(N)$, i.e. $- i f_{abc}$ and
$d_{abc}$, yields two types of FFs\cite{Koike:2019zxc, Yabe:2019awq, Kenta:2019bxd, Gamberg:2018fwy}:
\begin{align}
\label{dynamicalFA}
& \widehat\Gamma^{\alpha \beta \gamma}_{FA}\left(\frac{1}{z_1},\frac{1}{z_2}\right) {\nonumber}\\
&= \frac{- i f_{abc}}{N^2-1}\sum_{X}\iint
\frac{\drm \lambda}{2 \pi}\frac{\drm \mu}{2 \pi}
\Exp{-i \lambda /z_1}\Exp{- i \mu (1/z_2 - 1/z_1)}
\bra{0}F^{w \beta}_b(0)\ket{hX}
\bra{hX}F^{w \alpha}_a(\lambda w) gF^{w \gamma}_c(\mu w) \ket{0}
{\nonumber}\\
&
= -M_h \left(
\widehat N_1\left(\frac{1}{z_1},\frac{1}{z_2}\right)g^{\alpha \gamma}_\perp
\epsilon^{P_h w S_\perp \beta}
+\widehat N_2\left(\frac{1}{z_1},\frac{1}{z_2}\right) g^{\beta \gamma}_\perp
\epsilon^{P_h w S_\perp \alpha}
-\widehat N_2\left(\frac{1}{z_2}-\frac{1}{z_1},\frac{1}{z_2}\right)g^{\alpha \beta}_{\perp}
\epsilon^{P_h w S_\perp \gamma}
\right) ,
\end{align}
\begin{align}
\label{dynamicalFS}
&\widehat\Gamma^{\alpha\beta\gamma}_{FS}\left(\frac{1}{z_1},\frac{1}{z_2}\right) {\nonumber}\\
&=\frac{d_{abc}}{N^2-1}\sum_{X}\iint\frac{\drm \lambda}{2\pi}
\frac{\drm \mu}{2 \pi}\Exp{-i \lambda/z_1}
\Exp{-i \mu(1/z_2 - 1/z_1)}
\bra{0}F^{w \beta}_b(0) \ket{hX}\bra{hX}F^{w\alpha}_a(\lambda w) g F^{w\gamma}_c(\mu w) \ket{0}{\nonumber}\\
&= -M_h \left(\widehat O_1\left(\frac{1}{z_1},\frac{1}{z_2}\right)g^{\alpha \gamma}_\perp
\epsilon^{P_h w S_\perp \beta}+ \widehat O_2\left(\frac{1}{z_1},\frac{1}{z_2}\right)
g^{\beta \gamma}_\perp \epsilon^{P_h w S_\perp \alpha}
+\widehat O_2\left(\frac{1}{z_2}-\frac{1}{z_1},\frac{1}{z_2}\right)g^{\alpha \beta}_\perp
\epsilon^{P_h w S_\perp \gamma}\right),
\end{align}
where the gauge-link operators are suppressed for simplicity.
There are four purely gluonic dynamical FFs,
$\widehat N_{1,2}\left(\frac{1}{z_1},\frac{1}{z_2}\right)$ and
$\widehat O_{1,2}\left(\frac{1}{z_1},\frac{1}{z_2}\right)$,
which are
complex functions and have a support on $1/z_2>1$ and $1/z_2 > 1/z_1 >0$.
Their real parts
are na\"{i}vely $T$-even,
while their imaginary parts are na\"{i}vely $T$-odd.
$\widehat N_1\left(\frac{1}{z_1},\frac{1}{z_2}\right)$ and
$\widehat O_1\left(\frac{1}{z_1},\frac{1}{z_2}\right)$ satisfy the symmetry relations
\begin{align}
\widehat N_1\left(\frac{1}{z_1},\frac{1}{z_2}\right) =
- \widehat N_1 \left(\frac{1}{z_2} - \frac{1}{z_1}, \frac{1}{z_2}\right), \qquad
\widehat O_1 \left(\frac{1}{z_1},\frac{1}{z_2}\right)=
\widehat O_1 \left(\frac{1}{z_2}-\frac{1}{z_1},\frac{1}{z_2}\right).
\label{symmetry}
\end{align}
Finally, we introduce other dynamical FFs defined from the quark-antiquark-gluon
correlators\cite{Koike:2019zxc}, which are necessary for the
derivation of the twist-3 cross section for $ep \to e\Lambda^\uparrow X$,
\begin{align}
\label{dynamicalDelta}
\widetilde\Delta^\alpha_{ij}\left(\frac{1}{z_1},\frac{1}{z_2}\right)
&=\frac{1}{N}\sum_{X}\iint \frac{\drm \lambda}{2\pi}\frac{\drm \mu}{2\pi}
\Exp{-i \lambda/z_1}\Exp{-i\mu(1/z_2-1/z_1)}
\bra{0}gF^{w\alpha}_a(\mu w) \ket{hX}\bra{hX}
\bar\psi_j(\lambda w)t^a\psi_i(0)\ket{0}{\nonumber}\\
&= M_h \left(
\epsilon^{\alpha P_h w S_\perp}(\Slash{P}_h)_{ij}
\widetilde D_{FT}\left(\frac{1}{z_1},\frac{1}{z_2}\right)
+ i S^{\alpha }_\perp (\gamma_5 \Slash{P}_h)_{ij}
\widetilde G_{FT}\left(\frac{1}{z_1},\frac{1}{z_2}\right)
\right) ,
\end{align}
where $t^a$ are the generators of $SU(N)$ and the spinor indices $i, j$ are shown explicitly.
These two functions $\widetilde{D}_{FT}\left(\frac{1}{z_1},\frac{1}{z_2}\right)$ and
$\widetilde{G}_{FT}\left(\frac{1}{z_1},\frac{1}{z_2}\right)$ are
complex functions and
have a support on $1/z_1 >0,1/z_2 <0$ and $1/z_1 -1/z_2 >1$.
Their real parts are na\"{i}vely $T$-even, while the imaginary parts are na\"{i}vely $T$-odd.
\subsection{Constraint relations among twist-3 gluon FFs}
The gluon FFs introduced above are not independent but are subject to
the QCD equation-of-motion (EOM) relations and the Lorentz invariance relations (LIRs).
The complete set of those relations was
derived in
\cite{Koike:2019zxc}. Here we quote those relations which are useful to simplify
the twist-3 cross section
for $ep\to e\Lambda^\uparrow X$.
The relevant EOM relation allows us to express the intrinsic FF in terms of the
kinematical and dynamical FFs as
\begin{align}
\label{eom1}
&\frac{1}{z}\Delta\widehat G_{3 \bar T}(z) =
- \Im \widetilde D_{FT}(z) +
\frac{1}{2}\left(\widehat G_T^{(1)}(z)+\Delta\widehat H_T^{(1)}(z) \right)
{\nonumber}\\
&\qquad+ \int \drm\left(\frac{1}{z'}\right)\frac{1}{1/z -1/z'} \Im
\left(
2 \widehat N_1\left(\frac{1}{z'},\frac{1}{z}\right) + \widehat N_2\left(\frac{1}{z'},\frac{1}{z}\right)-\widehat N_2\left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z}\right)
\right),
\end{align}
where $\widetilde D_{FT}(z)$ is defined as
\begin{align}
\widetilde D_{FT}(z) \equiv
\frac{2}{C_F}\int_{0}^{1/z}\drm \left(\frac{1}{z'}\right)
\widetilde D_{FT}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z}\right), \qquad
{\rm with}\ C_F=\frac{N^2-1}{2N}.
\label{DFTtild}
\end{align}
Other relations derived from the LIRs and the EOM relations
express the derivative of the kinematical FFs in terms of other FFs as
\begin{align}
\label{rel1}
&\frac{1}{z}\frac{\partial\widehat G_T^{(1)}(z)}{\partial(1/z)}
= -2\left( \Im \widetilde D_{FT}(z) - \widehat G_T^{(1)}(z)\right){\nonumber}\\
&\qquad+4\int\drm\left(\frac{1}{z'}\right)\frac{1}{1/z-1/z'}
\Im\left(
\widehat N_1\left(\frac{1}{z'},\frac{1}{z}\right) - \widehat N_2\left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z}\right)
\right){\nonumber}\\
&\qquad+2\int\drm\left(\frac{1}{z'}\right)\frac{1/z}{(1/z-1/z')^2}
\Im\left(
\widehat N_1\left(\frac{1}{z'},\frac{1}{z}\right) +\widehat N_2\left(\frac{1}{z'},\frac{1}{z}\right)
-2\widehat N_2\left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z}\right)
\right),
\end{align}
and
\begin{align}
\label{rel2}
& \frac{1}{z}\frac{\partial\Delta\widehat H_T^{(1)}(z)}{\partial(1/z)}
= -4\left(\Im\widetilde D_{FT}(z)-\Delta\widehat H_{T}^{(1)}(z)\right){\nonumber}\\
&\qquad+8\int\drm\left(\frac{1}{z'}\right)\frac{1}{1/z-1/z'}
\Im\left(
\widehat N_1\left(\frac{1}{z'},\frac{1}{z}\right) +\widehat N_2\left(\frac{1}{z'},\frac{1}{z}\right)
\right){\nonumber}\\
&\qquad+4\int\drm \left( \frac{1}{z'} \right)\frac{1/z}{(1/z-1/z')^2}
\Im\left(
\widehat N_1\left(\frac{1}{z'},\frac{1}{z}\right)+\widehat N_2\left(\frac{1}{z'},\frac{1}{z}\right)
\right).
\end{align}
The relations (\ref{eom1}), (\ref{rel1}) and (\ref{rel2}) show that
the purely gluonic twist-3 FFs are related to the quark-antiquark-gluon
FFs, which implies the contribution to $ep\to e\Lambda^\uparrow X$
from the latter needs to be considered together.
It has been shown that the above three relations (\ref{eom1}), (\ref{rel1}) and (\ref{rel2})
are crucial for guaranteeing the frame independence of the cross section for $pp\to\Lambda^\uparrow X$.
Using these relations, we will express the cross section in terms of
$\widehat G^{(1)}_T$, $\Delta \widehat H^{(1)}_T$,
$\Im \widehat N_{1,2}$, $\Im \widehat O_{1,2}$,
$\Im \widetilde{D}_{FT}$ and $\Im \widetilde{G}_{FT}$ (see eq. (\ref{result}) below), which
gives the most concise expression for the cross section.
We also note that, in principle,
the twist-3 kinematical FFs, $\widehat G^{(1)}_T$ and $\Delta \widehat H^{(1)}_T$,
can also be eliminated in favor of the twist-3 dynamical FFs
(see eqs. (74) and (75) of \cite{Koike:2019zxc}).
\section{Twist-3 gluon FF contribution to $ep \to e\Lambda^\uparrow X$}
\subsection{Kinematics}
\label{kinematics}
Here we briefly summarize the kinematics for the process\cite{Koike:2022ddx},
\begin{eqnarray}
e(\ell) + p (p) \to e (\ell') + \Lambda^{\uparrow}(P_h, S_\perp) + X,
\label{epeLX}
\end{eqnarray}
where $\ell$, $\ell'$, $p$ and $P_h$ are the momenta of the respective particles and
$S_\perp$ is the transverse spin vector for $\Lambda$. With the
virtual photon's momentum $q=\ell-\ell'$,
we introduce the five Lorentz invariants as
\begin{eqnarray}
&&S_{ep} \equiv (p+\ell)^2 \simeq 2 p \cdot \ell,\qquad
Q^2 \equiv -q^2 ,
\nonumber\\
&&x_{bj} \equiv \frac{Q^2}{2 p \cdot q} , \qquad
z_f \equiv \frac{p \cdot P_h}{p \cdot q},\qquad
q_T \equiv \sqrt{-q_t^2},
\end{eqnarray}
where
\begin{eqnarray}
q_t^\mu \equiv q^\mu - \frac{P_h \cdot q}{p \cdot P_h}p^\mu
-\frac{p \cdot q}{p \cdot P_h} P_h^\mu
\end{eqnarray}
is a space-like momentum satisfying $q_t \cdot p = q_t \cdot P_h = 0$.
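These orthogonality relations can be checked directly: neglecting the proton and hyperon masses ($p^2\simeq 0$, $P_h^2\simeq 0$), one finds
\begin{eqnarray}
q_t \cdot p = q \cdot p - \frac{p \cdot q}{p \cdot P_h}\, P_h \cdot p = 0, \qquad
q_t \cdot P_h = q \cdot P_h - \frac{P_h \cdot q}{p \cdot P_h}\, p \cdot P_h = 0. \nonumber
\end{eqnarray}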
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.9]{hadronframe}
\vspace{0.3cm}
\caption{Hadron frame and the transverse spin vector $\vec{S}_\perp$.
To make clear the convention for
$\Phi_S$, rotate the $Z$ and $X$ axes around the $Y$ axis by
$\theta$ (polar angle of $\vec{P}_h$)
so that the new $Z$ axis becomes parallel to $\vec{P}_h$. $\Phi_S$ is
defined to be
the azimuthal angle
of $\vec{S}_\perp$ around $\vec{P}_h$ measured from the new $X$ axis,
just as $\phi$ and $\chi$ are measured from the $x$ axis around the $z$ axis.}
\label{hadronframe}
\end{center}
\end{figure}
As in \cite{Koike:2022ddx}, we work in the hadron
frame\cite{Meng:1991da} (See Fig. \ref{hadronframe}),
where $p^\mu$ and $q^\mu$ are collinear and take the following form:
\begin{align}
&p^\mu = \frac{Q}{2 x_{bj}}(1,0,0,1),\\
&q^\mu = (0,0,0,-Q).
\end{align}
Defining the azimuthal angles for the hadron plane and the lepton plane
as $\chi$ and $\phi$, respectively, as shown in Fig. \ref{hadronframe},
$P_h^\mu$ and $\ell^\mu$ can be written as
\begin{eqnarray}
&& P_h^\mu = \frac{z_f Q}{2} \left(
1 + \frac{q_T^2}{Q^2}, \frac{2 q_T}{Q} \cos{\chi},
\frac{2 q_T}{Q} \sin{\chi}, -1 + \frac{q_T^2}{Q^2}
\right),
\label{Ph}\\
&&\ell^\mu = \frac{Q}{2}(\cosh{\psi}, \sinh{\psi}\cos{\phi},
\sinh{\psi}\sin{\phi},-1 ),
\label{lepmom}
\end{eqnarray}
where $\psi$ is defined by
\begin{eqnarray}
\cosh{\psi} \equiv \frac{2x_{bj} S_{ep}}{Q^2} -1.
\label{coshpsi}
\end{eqnarray}
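We note that $\psi$ is directly related to the conventional inelasticity $y \equiv p \cdot q / p \cdot \ell \simeq Q^2/(x_{bj} S_{ep})$ through
\begin{eqnarray}
\cosh{\psi} = \frac{2}{y} - 1. \nonumber
\end{eqnarray}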
With this parameterization,
the transverse momentum of the hyperon $P_{hT}$
is given by
$P_{hT}= z_f q_T$.
For the calculation of the cross section,
we introduce four axes by
\begin{align}
\label{txyz}
& T^\mu \equiv \frac{1}{Q}(q^\mu + 2 x_{bj} p^\mu)=(1,0,0,0),{\nonumber}\\
&Z^\mu \equiv - \frac{q^\mu}{Q} = (0,0,0,1), \nonumber\\
&X^\mu \equiv \frac{1}{q_T} \left[
\frac{P_h^\mu}{z_f} - q^\mu - \left(
1+ \frac{q_T^2}{Q^2}
\right)x_{bj} p^\mu
\right] = (0, \cos{\chi}, \sin{\chi}, 0) , {\nonumber}\\
& Y^\mu \equiv \epsilon^{\mu \nu \rho \sigma}Z_\nu T_\rho X_\sigma
= (0, - \sin{\chi}, \cos{\chi},0),
\end{align}
where the actual form in the hadron frame is given after the last equality in each equation.
The final hyperon resides in the $XZ$-plane and the transverse spin vector of the hyperon
can be written as
\begin{align}
\label{spinvector}
S_\perp^\mu = \cos{\theta}\cos{\Phi_S}X^\mu + \sin{\Phi_S}Y^\mu - \sin{\theta}\cos{\Phi_S}Z^\mu,
\end{align}
where $\theta$ is the polar angle of $\vec{P}_h$ as measured from the $Z$-axis
and $\Phi_S$ is the azimuthal angle of $\vec{S}_\perp$ around $\vec{P}_h$
as measured from the $XZ$-plane.
From (\ref{Ph}), the polar angle $\theta$ is written as
\begin{align}
\cos{\theta} &= \frac{P_{hz}}{|\vec P_h|}
= \frac{q_T^2 - Q^2}{q_T^2 + Q^2}, \\
\sin{\theta}&= \frac{P_{h T}}{|\vec P_h |}
= \frac{2 q_T Q}{q_T^2 + Q^2}.
\end{align}
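These expressions follow from (\ref{Ph}), which gives, for a massless hyperon,
\begin{eqnarray}
|\vec P_h| = P_h^0 = \frac{z_f}{2Q}\left(Q^2 + q_T^2\right), \qquad
P_{hz} = \frac{z_f}{2Q}\left(q_T^2 - Q^2\right), \nonumber
\end{eqnarray}
and they satisfy $\cos^2{\theta}+\sin^2{\theta}=1$ owing to $(q_T^2-Q^2)^2 + (2q_T Q)^2 = (q_T^2+Q^2)^2$.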
With the kinematical variables defined above,
the polarized differential cross section for (\ref{epeLX}), $\sigma\equiv\sigma(p,\ell,\ell',P_h,S_\perp)$,
takes the following form:
\begin{align}
\label{Xsec}
\frac{\drm^6 \sigma}{\drm x_{bj} \drm Q^2 \drm z_f \drm q_T^2 \drm \phi \drm \chi}
= \frac{\alpha_{em}^2}{128 \pi^4 x_{bj}^2 S_{ep}^2 Q^2}z_f
L^{\rho\sigma}(\ell , \ell')W_{\rho\sigma}(p,q,P_h) ,
\end{align}
where $\alpha_{em} = e^2/(4\pi)$ is the QED coupling constant,
$ L^{\rho\sigma}=2(\ell^\rho \ell'^\sigma + \ell^\sigma \ell'^\rho)-Q^2 g^{\rho\sigma}$
is the leptonic tensor and $W_{\rho\sigma}$ is the hadronic tensor.
Although there are two azimuthal angles, $\phi$ and $\chi$, the cross section depends on
the relative angle $\varphi \equiv \phi - \chi$ only.
Therefore it can be expressed in terms of
$S_{ep}$, $Q^2$, $x_{bj}$, $z_f$, $q_T^2$, $\varphi$ and $\Phi_S$.
\subsection{Hadronic tensor}
We now calculate the twist-3 gluon FF contribution to (\ref{Xsec})
following the formalism developed for $pp \to \Lambda^\uparrow X$\cite{Koike:2021awj}.
It occurs as a nonpole contribution from the hard part
as in the case of other twist-3 fragmentation contributions in
$ep^{\uparrow} \to e \pi X$ \cite{Kanazawa:2013uia} and
$pp \to \Lambda^{\uparrow} X$ \cite{Koike:2017fxr, Koike:2021awj}.
We first factorize the twist-2 unpolarized quark DFs $f_1(x)$ from the hadronic tensor
$W_{\rho\sigma}(p,q,P_h)$:
\begin{align}
\label{factor}
W_{\rho\sigma}(p,q,P_h) = \int \frac{\drm x}{x}f_1(x) w_{\rho\sigma}(xp,q,P_h),
\end{align}
where $x$ is the momentum fraction of the quark in the proton,
and we have omitted the factor associated with the quark's fractional electric charge as well as
summation over quark flavors.
Up to twist-3,
$w_{\rho\sigma}$ receives contributions from
the 2-gluon, 3-gluon and quark-antiquark-gluon correlation
functions corresponding to (a)-(e) of Fig. \ref{genericdiagrams}:
\begin{align}
w_{\rho \sigma} \equiv w_{\rho \sigma}^{\rm (a)} + w_{\rho \sigma}^{\rm (b)} + w_{\rho \sigma}^{\rm (c)}
+ w_{\rho \sigma}^{\rm (d)} + w_{\rho \sigma}^{\rm (e)},
\end{align}
where each term can be written as
\begin{align}
w_{\rho \sigma}^{\rm (a)} &= \int \frac{\drm^4 k}{(2\pi)^4}\Gamma^{(0)\mu\nu}_{ab}(k)
S^{ab}_{\mu \nu , \rho \sigma}(k), \\[7pt]
w_{\rho \sigma}^{\rm (b)}&= \frac{1}{2}\iint \frac{\drm^4 k}{(2\pi)^4}\frac{\drm^4 k'}{(2\pi)^4}
\Gamma^{(1)\mu \nu \lambda}_{\mathrm{L} abc}(k,k')
S^{\mathrm{L} abc}_{\mu \nu \lambda, \rho \sigma }(k,k'),
\label{gggL}\\[7pt]
w_{\rho \sigma}^{\rm (c)}&= \frac{1}{2}\iint \frac{\drm^4 k}{(2\pi)^4}\frac{\drm^4 k'}{(2\pi)^4}
\Gamma^{(1)\mu \nu \lambda}_{\mathrm{R} abc}(k,k')
S^{\mathrm{R} abc}_{\mu \nu \lambda, \rho \sigma }(k,k'),
\label{gggR}\\[7pt]
w_{\rho \sigma}^{\rm (d)}&= \Tr{} \iint \frac{\drm^4 k}{(2\pi)^4}\frac{\drm^4 k'}{(2\pi)^4}
\widetilde{\Delta}^{(1)\alpha}_{\mathrm{L} a}(k,k')\widetilde{S}^{\mathrm{L} a}_{\alpha, \rho \sigma}(k,k'),
\label{tildeL}\\[7pt]
w_{\rho \sigma}^{\rm (e)}&= \Tr{} \iint \frac{\drm^4 k}{(2\pi)^4}\frac{\drm^4 k'}{(2\pi)^4}
\widetilde{\Delta}^{(1)\alpha}_{\mathrm{R} a}(k,k')\widetilde{S}^{\mathrm{R} a}_{\alpha, \rho \sigma}(k,k').
\label{tildeR}
\end{align}
Here
$S^{ab}_{\mu\nu, \rho\sigma}(k)$,
$S^{\mathrm{L}(\mathrm{R}) abc}_{\mu\nu\lambda, \rho\sigma}(k,k')$,
and
$\widetilde{S}^{\mathrm{L}(\mathrm{R}) a}_{\alpha,\rho\sigma}(k,k')$
represent the partonic hard parts
with
$k$ and $k'$ the momenta of partons fragmenting into the final hyperon,
and the dependence on $q$ is suppressed for simplicity.
$\Gamma^{(0)\mu\nu}_{ab}$,
$\Gamma^{(1)\mu\nu\lambda}_{\mathrm{L}(\mathrm{R}) abc}$
and
$\widetilde{\Delta}^{(1)\alpha}_{\mathrm{L}(\mathrm{R}) a}$
denote the fragmentation matrix elements defined as
\begin{align}
\Gamma^{(0)\mu\nu}_{ab}(k) &=
\sum_{X}\int\drm^4 \xi \Exp{- i k \cdot \xi}
\bra{0}A^{\nu}_{b}(0)\ket{hX}\bra{hX}A^{\mu}_{a}(\xi)\ket{0}, \\[7pt]
\Gamma^{(1)\mu\nu\lambda}_{\mathrm{L} abc}(k,k')&=
\sum_{X}\iint\drm^4 \xi \drm^4 \eta \Exp{- i k \cdot \xi}\Exp{-i(k'-k)\cdot \eta}
\bra{0}A^{\nu}_{b}(0)\ket{hX}\bra{hX}A^{\mu}_{a}(\xi)gA^{\lambda}_c(\eta)\ket{0}, \\[7pt]
\Gamma^{(1)\mu\nu\lambda}_{\mathrm{R} abc}(k,k')&=
\sum_{X}\iint\drm^4 \xi \drm^4 \eta \Exp{- i k \cdot \xi}\Exp{-i(k'-k)\cdot \eta}
\bra{0}A^{\nu}_{b}(0)gA^{\lambda}_c(\eta)\ket{hX}\bra{hX}A^{\mu}_{a}(\xi)\ket{0},
\label{qqbargL}\\[7pt]
\widetilde{\Delta}^{(1)\alpha}_{\mathrm{L} a,ij}(k,k')&=
\sum_{X}\iint\drm^4 \xi \drm^4 \eta \Exp{- i k \cdot \xi}\Exp{-i(k'-k)\cdot \eta}
\bra{0}gA^{\alpha}_{a}(\eta)\ket{hX}\bra{hX}\psi_i(0)\bar \psi_j(\xi)\ket{0},
\label{qqbargR}\\[7pt]
\widetilde{\Delta}^{(1)\alpha}_{\mathrm{R} a,ij}(k,k')&=
\sum_{X}\iint\drm^4 \xi \drm^4 \eta \Exp{- i k \cdot \xi}\Exp{-i(k'-k)\cdot \eta}
\bra{0}\psi_i(0)\bar \psi_j(\xi)\ket{hX}\bra{hX}gA^{\alpha}_{a}(\eta)\ket{0}.
\end{align}
The contributions with two parton lines on the left (right) of the cut
in Fig. \ref{genericdiagrams} (b)-(e) are characterized
by the symbol $\mathrm{L}$ ($\mathrm{R}$) in the hard parts and the fragmentation matrix elements.
The superscripts $(0)$ and $(1)$ indicate the order of the gauge coupling $g$
corresponding, respectively, to the 2-parton
and 3-parton correlation functions.
The factor $1/2$ in (\ref{gggL}) and (\ref{gggR}) takes into account the exchange symmetry in the
corresponding matrix element.
In (\ref{tildeL}) and (\ref{tildeR}), the hard parts and the fragmentation matrix elements
are matrices both in color and spinor spaces for the quark and ${\rm Tr}$ indicates trace over
both indices.
The hard parts and the fragmentation matrix elements satisfy
$ \Gamma^{(1)\mu \nu \lambda}_{\mathrm{R} abc}(k,k')=
\Gamma^{(1)\nu \mu \lambda}_{\mathrm{L} bac} (k',k)^*$ ,
$ \widetilde \Delta^{(1)\alpha}_{\mathrm{R} a}(k,k')=
\gamma^0 \widetilde \Delta^{(1)\alpha}_{\mathrm{L} a}(k',k)^\dagger\gamma^0$ ,
$S^{\mathrm{R} abc}_{ \mu \nu \lambda,\rho\sigma}(k,k')= S^{\mathrm{L} bac}_{ \nu\mu\lambda,\sigma\rho}(k',k)^*$
and
$\widetilde S^{\mathrm{R} a}_{\alpha,\rho\sigma}(k,k')= \gamma^0 \widetilde S^{\mathrm{L} a}_{\alpha,\sigma\rho}(k',k)^\dagger \gamma^0$.
We thus have
\begin{eqnarray}
w_{\rho\sigma} = w^{\rm (a)}_{\rho\sigma} + 2 \Re\, w^{\rm (b)}_{\rho\sigma}
+ 2 \Re\, w^{\rm (d)}_{\rho\sigma}.
\end{eqnarray}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.93]{cutdiagrams}
\end{center}
\caption{Cut diagrams for the twist-3 gluon fragmentation contribution to $ep\to e\Lambda^\uparrow X$.
In each diagram, the lower blob represents the unpolarized quark distribution, the middle one
represents the partonic hard cross section and the upper one represents the fragmentation matrix elements
for the final hyperon.}
\label{genericdiagrams}
\end{figure}
To extract the twist-3 contribution to $ep\to e\Lambda^\uparrow X$
we apply the collinear expansion to the hard part,
$S^{ab}_{\mu\nu,\rho\sigma}$, $S^{\mathrm{L} abc}_{\mu\nu\lambda,\rho\sigma}$
and $\widetilde{S}^{\mathrm{L} a}_{\alpha,\rho\sigma}$,
with respect to the momenta $k$ and $k'$
around $P_h/z$ and $P_h/z'$, respectively, taking into account the following
Ward-Takahashi (WT) identities\cite{Koike:2021awj}:
\begin{align}
& k^\mu S^{ab}_{\mu \nu,\rho\sigma}(k) = k^\nu S^{ab}_{\mu \nu,\rho\sigma}(k) = 0,
\label{WTI1}\\[7pt]
& k^\mu S^{\mathrm{L} abc}_{\mu\nu\lambda,\rho\sigma}(k,k')=\frac{i f^{abc}}{N^2 -1}S_{\lambda \nu,\rho\sigma}(k'), \\[7pt]
&k'^\nu S^{\mathrm{L} abc}_{\mu \nu\lambda,\rho\sigma}(k,k')= 0, \\[7pt]
&(k'-k)^\lambda S^{\mathrm{L} abc}_{\mu\nu\lambda,\rho\sigma}(k,k')= \frac{- i f^{abc}}{N^2 -1}S_{\mu \nu,\rho\sigma}(k'), \\[7pt]
& (k-k')^\alpha \widetilde S^{\mathrm{L} a}_{\alpha,\rho\sigma}(k,k') =0,
\label{WTI2}
\end{align}
where $S_{\mu\nu,\rho\sigma}(k)\equiv \delta^{ab}S^{ab}_{\mu\nu,\rho\sigma}(k)$.
We note that, unlike the case of $pp\to\Lambda^\uparrow X$, no ghost-like terms appear
in the WT identities (\ref{WTI1})-(\ref{WTI2}) for the present process.
This way, one obtains the hadronic tensor $w_{\rho\sigma}$
in terms of the gauge invariant FFs as (See eq. (51) of
\cite{Koike:2021awj} and eq. (56) of \cite{Kanazawa:2013uia})
\begin{eqnarray}
&&w_{\rho\sigma}=
\Omega^{\alpha}_{\;\; \mu} \Omega^{\beta}_{\;\; \nu}\int \drm \left( \frac{1}{z} \right) z^2
{\widehat \Gamma^{\mu\nu}(z)}
{S_{\alpha\beta,\rho\sigma}\left({1\over z}\right)}
{\nonumber}\\[7pt]
&&\qquad - i \Omega^{\alpha}_{\;\; \mu}\Omega^{\beta}_{\;\; \nu}\Omega^{\gamma}_{\;\; \lambda}
\int\drm \left(\frac{1}{z}\right)z^2
{\widehat \Gamma^{\mu\nu\lambda}_{\partial}(z) }
\left. \frac{\partial
{S_{\alpha\beta , \rho\sigma}(k)}
}{\partial k^\gamma}\right|_{k=P_h/z} {\nonumber}\\[7pt]
&&\qquad + \Re \left[
i \Omega^{\alpha}_{\;\; \mu}\Omega^{\beta}_{\;\; \nu}\Omega^{\gamma}_{\;\; \lambda}
\iint \drm \left(\frac{1}{z}\right)\drm \left(\frac{1}{z'}\right)zz' \frac{1}{1/z-1/z'} \right.
{\nonumber}\\[4pt]
&& \qquad\qquad\times \left.
\left\{ -\frac{i f^{abc}}{N}
{\widehat \Gamma^{\mu\nu\lambda}_{FA}\left(\frac{1}{z'},\frac{1}{z}\right) }
+\frac{N d^{abc}}{N^2-4}
{\widehat \Gamma^{\mu\nu\lambda}_{FS}\left(\frac{1}{z'},\frac{1}{z}\right)}
\right\}
{S^{\mathrm{L} abc}_{\alpha\beta\gamma,\rho\sigma}\left(\frac{1}{z'},\frac{1}{z}\right)}
\right]{\nonumber}\\[7pt]
&& \qquad
+ \Re\left[ i \Omega^{\alpha}_{\;\; \mu} \iint \drm\left(\frac{1}{z}\right)
\drm\left(\frac{1}{z'}\right) z\,
{\rm Tr}_{\rm s}\left\{
{\widetilde \Delta^{\mu}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z}\right)}
{\widetilde S^{\mathrm{L}}_{\alpha,\rho\sigma}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z}\right)}
\right\}\right],
\label{hadronic tensor}
\end{eqnarray}
where
$\widehat \Gamma^{\mu\nu}(z)$,
$\widehat \Gamma^{\mu\nu\lambda}_\partial (z)$,
$\widehat \Gamma^{\mu\nu\lambda}_{FA}\left(\frac{1}{z'},\frac{1}{z}\right)$,
$\widehat \Gamma^{\mu\nu\lambda}_{FS}\left(\frac{1}{z'},\frac{1}{z}\right)$
and
$\widetilde \Delta^{\mu}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z}\right)$
are given by
(\ref{intrinsic}), (\ref{kinematical}), (\ref{dynamicalFA}), (\ref{dynamicalFS}) and
(\ref{dynamicalDelta}).
For the hard part we have used the notation
$S_{\alpha\beta,\rho\sigma}\left({1\over z}\right)$ for
$S_{\alpha\beta,\rho\sigma}\left({P_h\over z}\right)$ and
$S^{\mathrm{L} abc}_{\alpha\beta\gamma,\rho\sigma}\left(\frac{1}{z'},\frac{1}{z}\right)$ for
$S^{\mathrm{L} abc}_{\alpha\beta\gamma,\rho\sigma}\left(\frac{P_h}{z'},\frac{P_h}{z}\right)$,
etc., suppressing $P_h$ for brevity.
In the last term of (\ref{hadronic tensor}),
$\widetilde{S}^{\mathrm{L}}_{\alpha,\rho\sigma}$ is defined from
$\widetilde{S}^{\mathrm{L} a}_{\alpha, \rho \sigma}$ in (\ref{tildeL}) by
$\displaystyle \left(\widetilde{S}^{\mathrm{L} a}_{\alpha, \rho \sigma} \right)_{rs}
= {1\over 2N}t^a_{rs}\widetilde{S}^{\mathrm{L}}_{\alpha,\rho\sigma}$,
where $r,s$ indicate the color indices of the quark, and
${\rm Tr}_{\rm s}$ denotes the trace in the spinor space.
The LO diagrams for the hard parts of Figs. \ref{genericdiagrams}(a), (b) and (d)
are, respectively, shown in Figs. \ref{hardS}, \ref{hardSL} and \ref{hardSLtilde}.
It is easy to show that the hadronic tensor $w_{\rho \sigma}$ satisfies
the electromagnetic gauge invariance,
$q^\rho w_{\rho\sigma} = q^\sigma w_{\rho\sigma} = 0$,
owing to the WT identity in QED.
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.95]{S}
\end{center}
\caption{The lowest order
Feynman diagrams for $S_{\alpha\beta,\rho\sigma}\left({1\over z}\right)$ in (\ref{hadronic tensor}).
We set $\hat p \equiv xp$ and $ \hat P_h \equiv P_h / z$.
The symbol $\otimes$ indicates the fragmentation to the final hadron
and $p_d$ is the momentum of an unobserved parton in the final state.
}
\label{hardS}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.95]{SL}
\end{center}
\caption{
The lowest order Feynman diagrams for
$S^{\mathrm{L} abc}_{\alpha\beta\gamma,\rho\sigma}\left(\frac{1}{z'},\frac{1}{z}\right)$ in (\ref{hadronic tensor}).
We set $\hat P'_h \equiv P_h / z'$.
Three crosses ($\times$) on the quark line in the upper diagrams
indicate that the virtual photon line with a cross at one end needs to be attached to one of these crosses,
and all three diagrams have to be included.
Thus the number of diagrams
for $S^{\mathrm{L} abc}_{\alpha\beta\gamma,\rho\sigma}$
is $ (3 + 3) \times 2 = 12$. The meaning of the other symbols is the same as
in Fig. \ref{hardS}.
}
\label{hardSL}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.95]{tildeSL}
\end{center}
\caption{The lowest order Feynman diagrams for
$\widetilde{S}^{\mathrm{L}}_{\alpha,\rho\sigma}\left(\frac{1}{z'}, \frac{1}{z'}-\frac{1}{z}\right)$.
The meaning of the symbols is the same as that in Fig. \ref{hardSL}.
The total number of diagrams for $\widetilde{S}^{\mathrm{L}}_{\alpha,\rho\sigma}$ is $(4 + 2) \times 2 = 12$.}
\label{hardSLtilde}
\end{figure}
\subsection{Spin dependent cross section}
The calculation of
$L^{\rho\sigma}W_{\rho\sigma}$ in (\ref{Xsec}) can be done in the same way
as \cite{Koike:2022ddx}:
$W^{\rho\sigma}$ can be expanded in terms of the six tensors\cite{Meng:1991da}
$\mathscr{V}_k^{\rho\sigma}$ ($k=1,\cdots,4, 8 ,9)$ defined by
\begin{align}
&\mathscr{V}^{\mu \nu}_1 = X^\mu X^\nu + Y^\mu Y^\nu ,&
& \mathscr{V}^{\mu \nu}_2 = g^{\mu \nu}+ Z^\mu Z^\nu, &
& \mathscr{V}^{\mu \nu}_3 = T^\mu X^\nu + X^\mu T^\nu ,
{\nonumber}\\
&\mathscr{V}^{\mu \nu}_4 = X^\mu X^\nu - Y^\mu Y^\nu ,&
& \mathscr{V}^{\mu \nu}_8 = T^\mu Y^\nu + Y^\mu T^\nu,&
&\mathscr{V}^{\mu \nu}_9 = X^\mu Y^\nu + Y^\mu X^\nu.
\end{align}
By introducing the inverses of $\mathscr{V}_k^{\rho\sigma}$, $\tilde{\mathscr{V}}_k^{\rho\sigma}$
satisfying $\mathscr{V}_k^{\rho\sigma}\tilde{\mathscr{V}}_{k'\,\rho\sigma}=\delta_{kk'}$,
as
\begin{align}
& \tilde{\mathscr{V}}^{\mu \nu}_1 = \frac{1}{2}(2T^\mu T^\nu + X^\mu X^\nu + Y^\mu Y^\nu) ,\quad
\tilde{\mathscr{V}}^{\mu \nu}_2 = T^\mu T^\nu, &
&\tilde{\mathscr{V}}^{\mu \nu}_3 = - \frac{1}{2}(T^\mu X^\nu + X^\mu T^\nu), {\nonumber}\\
&\tilde{\mathscr{V}}^{\mu \nu}_4 = \frac{1}{2}(X^\mu X^\nu - Y^\mu Y^\nu) , \quad
\tilde{\mathscr{V}}^{\mu \nu}_8 = -\frac{1}{2}(T^\mu Y^\nu + Y^\mu T^\nu), &
&\tilde{\mathscr{V}}^{\mu \nu}_9 = \frac{1}{2}(X^\mu Y^\nu + Y^\mu X^\nu),
\end{align}
$W^{\mu\nu}$ can be expanded as
\begin{align}
W^{\mu \nu} = \sum_{k=1,\cdots,4,8,9}
\mathscr{V}^{\mu \nu}_k[W_{\rho \sigma}\tilde{\mathscr{V}}^{\rho \sigma}_k].
\end{align}
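As a check of the normalization $\mathscr{V}_k^{\rho\sigma}\tilde{\mathscr{V}}_{k'\,\rho\sigma}=\delta_{kk'}$, note for instance that
\begin{eqnarray}
\mathscr{V}_2^{\rho\sigma}\tilde{\mathscr{V}}_{2\,\rho\sigma} = g^{\rho\sigma}T_\rho T_\sigma + (Z \cdot T)^2 = T^2 = 1, \qquad
\mathscr{V}_1^{\rho\sigma}\tilde{\mathscr{V}}_{2\,\rho\sigma} = (X \cdot T)^2 + (Y \cdot T)^2 = 0, \nonumber
\end{eqnarray}
which follow from $T^2=-X^2=-Y^2=-Z^2=1$ and the mutual orthogonality of the axes (\ref{txyz}).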
Then one obtains
\begin{eqnarray}
L^{\mu \nu}W_{\mu \nu}=
\sum_{k=1,\cdots,4,8,9}
[L_{\mu \nu}\mathscr{V}^{\mu \nu}_k]
[W_{\rho \sigma}\tilde{\mathscr{V}}^{\rho \sigma}_k]
= Q^2 \sum_{k=1,\cdots,4,8,9}
\mathscr{A}_k(\phi-\chi)
[W_{\rho \sigma}\tilde{\mathscr{V}}^{\rho \sigma}_k],
\label{LWcont}
\end{eqnarray}
where
$\mathscr{A}_k(\varphi) \equiv L_{\mu \nu}\mathscr{V}^{\mu\nu}_k/Q^2$
are given by
\begin{align}
& \mathscr{A}_1(\varphi) = 1 + \cosh^2{\psi}, &
&\mathscr{A}_2(\varphi) = -2, &
&\mathscr{A}_3(\varphi) = - \cos{\varphi}\sinh{2\psi},{\nonumber} \\
&\mathscr{A}_4(\varphi) = \cos{2\varphi} \sinh^2{\psi} ,&
&\mathscr{A}_8(\varphi) = - \sin{\varphi}\sinh{2\psi}, &
&\mathscr{A}_9(\varphi) = \sin{2\varphi} \sinh^2{\psi},
\label{Aks}
\end{align}
with $\psi$ defined in (\ref{coshpsi}).
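For example, the value $\mathscr{A}_2=-2$ follows from
\begin{eqnarray}
L_{\mu\nu}\, g^{\mu\nu} = 4\, \ell \cdot \ell' - 4Q^2 = -2Q^2, \qquad
L_{\mu\nu} Z^\mu Z^\nu = 4 (\ell \cdot Z)(\ell' \cdot Z) + Q^2 = 0, \nonumber
\end{eqnarray}
where we used $2\ell\cdot\ell' = Q^2$ for massless leptons and $\ell \cdot Z = -\ell' \cdot Z = Q/2$ in the hadron frame.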
From (\ref{LWcont}) and (\ref{Aks}), one sees that the cross section
can be decomposed into five structure functions with different azimuthal dependences,
which are carried by the $\mathscr{A}_k(\varphi)$'s (note that $\mathscr{A}_1$ and $\mathscr{A}_2$ carry the same trivial $\varphi$-dependence).
Substituting (\ref{intrinsic}), (\ref{kinematical}), (\ref{dynamicalFA}),
(\ref{dynamicalFS}) and (\ref{dynamicalDelta})
into (\ref{hadronic tensor}), we find that the cross section (\ref{Xsec}) takes the
following structure:
\begin{align}
&\frac{\drm^6 \sigma}{\drm x_{bj}\drm Q^2 \drm z_f \drm q_\mathrm{T}^2
\drm \phi \drm \chi} {\nonumber}\\[7pt]
&= \frac{\alpha_{em}^2 \alpha_s M_h}{16\pi^2 x_{bj}^2 S_{ep}^2Q^2}
\sum_{k}
\mathscr{A}_k(\varphi) \mathcal{S}_k
\iint \drm x \drm \left(\frac{1}{z}\right) \frac{z^3}{x} f_1(x)
\delta \left(\frac{q_T^2}{Q^2} - \left(1-\frac{1}{\hat x} \right)\left(1-\frac{1}{\hat z} \right) \right)
{\nonumber}\\[7pt]
&\times\left\{
\frac{1}{z}\Delta\widehat G_{3\bar T}(z)\hat \sigma^k_{int}
+ \widehat G^{(1)}_T(z)\hat \sigma^k_{NDG}
+\frac{1}{z}\frac{\partial \widehat G_{T}^{(1)}(z)}{\partial (1/z)}\hat \sigma^k_{DG}
+\Delta \widehat H^{(1)}_T (z)\hat \sigma^k_{NDH}
+\frac{1}{z}\frac{\partial \Delta\widehat H_{T}^{(1)}(z)}{\partial (1/z)}\hat \sigma^k_{DH}
\right.
{\nonumber}\\[7pt]
&+ \frac{1}{2}\int \drm\left(\frac{1}{z'}\right)
\left[
\sum_{i=1}^{3} \Im \widehat N_i\left(\frac{1}{z'},\frac{1}{z}\right)
\left(
\frac{1}{1/z-1/z'}\hat \sigma^{-(1)}_{i,k}
+ \frac{1}{z}\left(\frac{1}{1/z-1/z'}\right)^2 \hat\sigma^{-(2)}_{i,k}
+ z' \hat \sigma^{-(3)}_{i,k}
+ \frac{z'^2}{z}\hat\sigma^{-(4)}_{i,k}
\right)
\right.
{\nonumber}\\[7pt]
&\left.
+ \sum_{i=1}^{3} \widehat O_i\left(\frac{1}{z'},\frac{1}{z}\right)
\left(
\frac{1}{1/z-1/z'}\hat \sigma^{+(1)}_{i,k}
+ \frac{1}{z}\left(\frac{1}{1/z-1/z'}\right)^2 \hat\sigma^{+(2)}_{i,k}
+ z' \hat \sigma^{+(3)}_{i,k}
+ \frac{z'^2}{z}\hat\sigma^{+(4)}_{i,k}
\right)
\right]
{\nonumber}\\[7pt]
&
+ \int\drm\left(\frac{1}{z'}\right)\frac{2}{C_F} \left[
\Im
{\widetilde D_{FT}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z} \right)} \right.
\left(
{\hat \sigma^{k}_{DF}}
+ \frac{1}{z}\frac{1}{1/z-1/z'}
{\hat \sigma^k_{DF2}}
+ \frac{z'}{z}
{\hat \sigma^k_{DF3}} \right. {\nonumber}\\[7pt]
& \left. \qquad\qquad + \frac{1}{1-(1-q_T^2 / Q^2)z_f/z'}
{\hat \sigma^k_{DF4}}
+ \frac{1}{1-(1-q_T^2/Q^2)z_f(1/z-1/z')}
{\hat \sigma^k_{DF5}}
\right)
{\nonumber} \\[7pt]
&\qquad\qquad+\Im
{\widetilde G_{FT}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z} \right)}
\left(
{\hat \sigma^{k}_{GF}}
+ \frac{1}{z}\frac{1}{1/z-1/z'}
{\hat \sigma^k_{GF2}}
+ \frac{z'}{z}
{\hat \sigma^k_{GF3}} \right. {\nonumber}\\[7pt]
&\left. \qquad\qquad \left. \left. + \frac{1}{1-(1-q_T^2 / Q^2)z_f/z'}
{\hat \sigma^k_{GF4}}
+ \frac{1}{1-(1-q_T^2/Q^2)z_f(1/z-1/z')}
{\hat \sigma^k_{GF5}}
\right)
\right]
\right\},
\label{Xsec2}
\end{align}
where
\begin{eqnarray}
\mathcal{S}_{1,2,3,4}\equiv\sin{\Phi_S}, \quad\mathcal{S}_{8,9}\equiv\cos{\Phi_S},\quad
\hat x = x_{bj}/x,\quad
\hat z= z_f/z,
\end{eqnarray}
and we have set
\begin{eqnarray}
\widehat N_3\left(\frac{1}{z'},\frac{1}{z}\right)
\equiv -\widehat N_2\left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z}\right),\qquad
\widehat O_3\left(\frac{1}{z'},\frac{1}{z}\right)
\equiv \widehat O_2\left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z}\right)
\label{N3O3}
\end{eqnarray}
for convenience.
Partonic hard parts for each FF can be computed from the
corresponding diagrams, Figs. \ref{hardS}, \ref{hardSL} and \ref{hardSLtilde}.
We have reached
the form (\ref{Xsec2}) based on the observation that the $z'$-dependence of the hard parts
for the dynamical FFs appears in the cross section only through the factors explicitly shown in
(\ref{Xsec2}) (See Appendix C of
\cite{Koike:2021awj}), and hence we can define
all the partonic hard cross sections $\hat{\sigma}$'s in (\ref{Xsec2}) as functions
of $\hat{x}$, $\hat{z}$, $Q$ and $q_T$. In addition, we found by
explicit calculation of the LO diagrams that
\begin{eqnarray}
&&\hat \sigma^{\pm(3)}_{i,k}= \hat \sigma^{\pm (1)}_{i,k},
\label{sigma1}\\
&&\hat \sigma^k_{DF}= \hat \sigma^k_{GF} = 0.
\label{sigma2}
\end{eqnarray}
In order to transform the cross section (\ref{Xsec2}) into a more concise form, we note
the following points:
(I) Owing to the symmetry property under $1/z'\leftrightarrow 1/z-1/z'$
of $\widehat N_1$ and $\widehat O_1$ (\ref{symmetry})
and the relations (\ref{N3O3}),
the terms of $\hat \sigma^{\pm(3)}_{i,k}$ and $\hat \sigma^{\pm(4)}_{i,k}$
can be combined, respectively, with those of $\hat \sigma^{\pm(1)}_{i,k}$
and $\hat \sigma^{\pm(2)}_{i,k}$, taking into account the relation (\ref{sigma1}).
(II) Using (\ref{eom1}), (\ref{rel1}) and (\ref{rel2}), one can eliminate
the intrinsic FF and the derivative of the kinematical FFs in favor of the kinematical
and the dynamical FFs.
This way we finally obtain the twist-3 gluon FF contribution to
$ep\to e\Lambda^\uparrow X$ as\footnote{As was noted at the end of Sec. 2, the kinematical FFs
$\widehat G^{(1)}_T (z)$ and $\Delta \widehat H^{(1)}_T (z)$
can, in principle, be eliminated in terms of the dynamical FFs.
}
\begin{align}
\label{result}
&\frac{\drm^6 \sigma}{\drm x_{bj}\drm Q^2 \drm z_f \drm q_\mathrm{T}^2
\drm \phi \drm \chi} {\nonumber}\\[7pt]
&= \frac{\alpha_{em}^2 \alpha_s M_h}{16\pi^2 x_{bj}^2 S_{ep}^2Q^2}
\sum_{k}
\mathscr{A}_k(\phi - \chi) \mathcal{S}_k
\int^1_{x_{min}} \frac{\drm x}{x}\int^1_{z_{min}} \frac{\drm z}{z}
z^2 f_1(x)
\delta \left(\frac{q_T^2}{Q^2} - \left(1-\frac{1}{\hat x} \right)\left(1-\frac{1}{\hat z} \right) \right)
{\nonumber}\\[7pt]
&\times\left\{
{\widehat G^{(1)}_T (z)}
{ \hat \sigma^k_G }
+
{\Delta \widehat H^{(1)}_T (z)}
{\hat \sigma^k_H} \right. {\nonumber}\\[7pt]
&+ \int\drm\left(\frac{1}{z'}\right)\left[
\frac{1}{1/z-1/z'}\Im \left(
{\widehat N_1 \left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat\sigma^k_{N1}}
+
{\widehat N_2 \left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{N2}}
+
{\widehat N_2 \left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{N3}}
\right)
\right. {\nonumber}\\[7pt]
&+ \frac{1}{z}\left(\frac{1}{1/z-1/z'}\right)^2 \Im\left(
{\widehat N_1\left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{DN1}}
+
{\widehat N_2 \left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{DN2}}
+
{\widehat N_2 \left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{DN3}}
\right) {\nonumber}\\[7pt]
&+
\frac{1}{1/z-1/z'}\Im \left(
{\widehat O_1 \left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat\sigma^k_{O1}}
+
{\widehat O_2 \left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{O2}}
+
{\widehat O_2 \left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{O3}}
\right){\nonumber}\\[7pt]
&\left. + \frac{1}{z}\left(\frac{1}{1/z-1/z'}\right)^2 \Im\left(
{\widehat O_1\left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{DO1}}
+
{\widehat O_2 \left( \frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{DO2}}
+
{\widehat O_2 \left(\frac{1}{z}-\frac{1}{z'},\frac{1}{z} \right)}
{\hat \sigma^k_{DO3}}
\right)\right] {\nonumber}\\[7pt]
&
+ \int\drm\left(\frac{1}{z'}\right)\frac{2}{C_F} \left[
\Im
{\widetilde D_{FT}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z} \right)} \right.
\left(
{\hat \sigma^{k}_{DF1}}
+ \frac{1}{z}\frac{1}{1/z-1/z'}
{\hat \sigma^k_{DF2}}
+ \frac{z'}{z}
{\hat \sigma^k_{DF3}} \right. {\nonumber}\\[7pt]
& \left. + \frac{1}{1-(1-q_T^2 / Q^2)z_f/z'}
{\hat \sigma^k_{DF4}}
+ \frac{1}{1-(1-q_T^2/Q^2)z_f(1/z-1/z')}
{\hat \sigma^k_{DF5}}
\right)
{\nonumber} \\[7pt]
&+\Im
{\widetilde G_{FT}\left(\frac{1}{z'},\frac{1}{z'}-\frac{1}{z} \right)}
\left(
\frac{1}{z}\frac{1}{1/z-1/z'}
{\hat \sigma^k_{GF2}}
+ \frac{z'}{z}
{\hat \sigma^k_{GF3}} \right. {\nonumber}\\[7pt]
&\left. \left. \left. + \frac{1}{1-(1-q_T^2 / Q^2)z_f/z'}
{\hat \sigma^k_{GF4}}
+ \frac{1}{1-(1-q_T^2/Q^2)z_f(1/z-1/z')}
{\hat \sigma^k_{GF5}}
\right)
\right]
\right\},
\end{align}
where the lower limits of $x$ and $z$ are, respectively, given by $x_{min} =x_{bj}
\left(1 + \frac{z_f}{1-z_f}\frac{q_T^2}{Q^2}\right)$ and
$z_{min} = z_f \left( 1 + \frac{x_{bj}}{1-x_{bj}}\frac{q_T^2}{Q^2}\right)$.
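These limits follow from the support of the delta function in (\ref{result}) combined with $x \leq 1$ and $z \leq 1$: setting $z=1$ in
\begin{eqnarray}
\frac{q_T^2}{Q^2} = \left(1-\frac{1}{\hat x}\right)\left(1-\frac{1}{\hat z}\right) \nonumber
\end{eqnarray}
gives $1/\hat{x} = 1+\frac{z_f}{1-z_f}\frac{q_T^2}{Q^2}$, i.e. $x = x_{min}$, and setting $x=1$ likewise gives $z=z_{min}$.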
The partonic hard cross sections which appear newly in (\ref{result})
are defined from those in (\ref{Xsec2}) as
\begin{align}
&\hat \sigma^k_G = \frac{1}{2} \hat \sigma^k_{int} + \hat \sigma^k_{NDG} + 2 \hat \sigma^k_{DG},
\label{Xsecparton1}\\
&\hat \sigma^k_H = \frac{1}{2} \hat \sigma^k_{int} + \hat \sigma^k_{NDH} + 4 \hat \sigma^k_{DH}, \\
&\hat \sigma^k_{N1} = 2 \hat \sigma^k_{int} + 4 \hat \sigma^k_{DG} + 8 \hat \sigma^k_{DH} ,
\label{sigmaN1}\\
&\hat \sigma^k_{N2} = \hat \sigma^k_{int} + 8 \hat \sigma^k_{DH} + \frac{1}{2}(\hat \sigma^{-(1)}_{2,k} -\hat \sigma^{-(1)}_{3,k}), \\
&\hat \sigma^k_{N3} = -\hat \sigma^k_{int} - 4\hat \sigma^k_{DG} + \frac{1}{2}(\hat \sigma^{-(1)}_{2,k} -\hat \sigma^{-(1)}_{3,k}) , \\
&\hat \sigma^k_{DN1} = 2\hat \sigma^k_{DG} +4\hat \sigma^k_{DH} + \frac{1}{2}(\hat \sigma^{-(2)}_{1,k} -\hat \sigma^{-(4)}_{1,k}), \\
&\hat \sigma^k_{DN2} = 2\hat \sigma^k_{DG} + 4\hat \sigma^k_{DH} + \frac{1}{2}(\hat \sigma^{-(2)}_{2,k} -\hat \sigma^{-(4)}_{3,k}), \\
&\hat \sigma^k_{DN3} = -4\hat \sigma^k_{DG} + \frac{1}{2}(\hat \sigma^{-(4)}_{2,k} -\hat \sigma^{-(2)}_{3,k}),\\
&\hat \sigma^k_{O1} =\hat \sigma^{+(1)}_{1,k} , \\
&\hat \sigma^k_{O2} =\frac{1}{2}(\hat \sigma^{+(1)}_{2,k} + \hat \sigma^{+(1)}_{3,k}),
\label{sigmaO2}\\
&\hat \sigma^k_{O3} = \frac{1}{2}(\hat \sigma^{+(1)}_{2,k} + \hat \sigma^{+(1)}_{3,k}) ,
\label{sigmaO3}\\
&\hat \sigma^k_{DO1} = \frac{1}{2}(\hat \sigma^{+(2)}_{1,k} +\hat \sigma^{+(4)}_{1,k}),\\
&\hat \sigma^k_{DO2} = \frac{1}{2}(\hat \sigma^{+(2)}_{2,k} + \hat \sigma^{+(4)}_{3,k}), \\
&\hat \sigma^k_{DO3} = \frac{1}{2}(\hat \sigma^{+(4)}_{2,k} +\hat \sigma^{+(2)}_{3,k}),\\
&\hat \sigma^k_{DF1} = -\hat \sigma^k_{int} -
2 \hat \sigma^k_{DG} - 4 \hat \sigma^k_{DH},
\label{Xsecparton2}
\end{align}
and others are the same as those appearing in (\ref{Xsec2}).
Although $\hat \sigma^k_{DF}=0$ as shown in (\ref{sigma2}), the $\hat \sigma^k_{DF1}$ term appears
in (\ref{result})
due to the relations (\ref{eom1}), (\ref{rel1}) and (\ref{rel2}).
To write down the partonic hard cross sections in (\ref{result}),
we further take into account the following relations:
\begin{eqnarray}
&& \hat \sigma^k_{O2}= \hat \sigma^k_{O3},
\label{O2O3}\\[5pt]
&& \hat \sigma^k_{DF1}=-{1\over 2} \hat \sigma^k_{N1},
\label{DF1N1}\\[5pt]
&& \hat \sigma^k_{DN1} = \hat \sigma^k_{DN2}=\hat \sigma^k_{DO1}= \hat \sigma^k_{DO2},
\label{DN12DO12}\\[5pt]
&& \hat \sigma^k_{DN3}= - \hat \sigma^k_{DO3}.
\label{DN3DO3}
\end{eqnarray}
The relations (\ref{O2O3}) and (\ref{DF1N1}) are obvious from
(\ref{sigmaO2}), (\ref{sigmaO3}), (\ref{sigmaN1}) and (\ref{Xsecparton2}),
and (\ref{DN12DO12}) and (\ref{DN3DO3}) are obtained by explicit calculation of the LO diagrams.
Then the independent hard cross sections
are given as follows.
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{G}= C_F
\frac{2 Q^2}{q_T^3}
\frac{(-1 + \hat z)^2 (-1 -\hat x^2 + \hat z^2 (1-6 \hat x + 6 \hat x^2))}{ \hat x \hat z^3},\\[7pt]
\hat \sigma^2_{G}= C_F
\frac{8}{q_T}
{\hat x (-1 + \hat z)}, \\[7pt]
\hat \sigma^3_{G}= C_F
\frac{2 Q}{q_T^2}
\frac{(-1 +\hat z) ( \hat z - \hat x -2 \hat z \hat x + \hat z^2(-2 + 4 \hat x))}{\hat z^2},\\[7pt]
\hat \sigma^4_{G}= \frac{1}{2}\hat \sigma^2_G,
\\[7pt]
\hat \sigma^8_{G}= C_F
\frac{2Q}{q_T^2}
\frac{(-1+ \hat z) (- \hat x + \hat z(-1 + 2 \hat x))}{\hat z^2},\\[7pt]
\hat \sigma^9_{G}= C_F
\frac{4}{q_T}
\frac{\hat x (-1 + \hat z)}{\hat z},
\end{dcases}
\label{sigmaG}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{H}= -C_F
\frac{4Q^2}{q_T^3}
\frac{(-1 + \hat z)^2}{\hat z^2}, \\[7pt]
\hat \sigma^2_{H}= 0,\\[7pt]
\hat \sigma^3_{H}= -C_F
\frac{2Q}{q_T^2}
\frac{(-1+\hat z)(-1 + 2 \hat z)}{\hat z^2},\\[7pt]
\hat \sigma^4_{H}= -C_F
\frac{4}{q_T}
\frac{(-1 + \hat z)}{\hat z},\\[7pt]
\hat \sigma^8_{H}= \hat \sigma^3_H,\\[7pt]
\hat \sigma^9_{H}= \hat \sigma^4_H,
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{N1}= C_F
\frac{8 Q^2}{q_T^3}
\frac{ (-1+ \hat z)^2 (3 \hat z (1-2\hat x)\hat x + \hat x (1+ \hat x) +
\hat z^2(1-6\hat x + 6\hat x^2))}{\hat x \hat z^3 },\\[7pt]
\hat \sigma^2_{N1}= C_F
\frac{32}{q_T}
\frac{\hat x (-1+\hat z)^2}{\hat z}, \\[7pt]
\hat \sigma^3_{N1}= C_F
\frac{8Q}{q_T^2}
\frac{ (-1 + \hat z)^2(-1 -2\hat x + \hat z(-2 + 4 \hat x))}{\hat z^2},\\[7pt]
\hat \sigma^4_{N1}= -C_F
\frac{8}{q_T}
\frac{(-1+\hat z)(1 -2(-1 + \hat z)\hat x)}{\hat z}, \\[7pt]
\hat \sigma^8_{N1}= -C_F
\frac{8Q}{q_T^2}
\frac{ (-1 + \hat z)^2}{\hat z^2},\\[7pt]
\hat \sigma^9_{N1}=- C_F
\frac{8}{q_T}
\frac{(-1+ \hat z)}{\hat z},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{N2}=- C_F
\frac{4Q^2}{q_T^3}
\frac{(-1+ \hat z)^2 (-1+(-1 + 3 \hat z)\hat x)}{\hat z^3 },\\[7pt]
\hat \sigma^2_{N2}=- C_F
\frac{8}{q_T}
\frac{\hat x (-1+\hat z)}{\hat z }, \\[7pt]
\hat \sigma^3_{N2}=- C_F
\frac{2Q}{q_T^2}
\frac{(-1+\hat z)(-3(1+ \hat x)+ \hat z (3+4 \hat x))}{\hat z^2 },\\[7pt]
\hat \sigma^4_{N2}= -C_F
\frac{4}{q_T}
\frac{(-1+\hat z)(2+ \hat x)}{\hat z},\\[7pt]
\hat \sigma^8_{N2}= -C_F
\frac{2Q}{q_T^2}
\frac{(-1+\hat z) (-3 - \hat x + \hat z(3 + 2 \hat x))}{\hat z^2},\\[7pt]
\hat \sigma^9_{N2}= \hat \sigma^4_{N2},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{N3}= -C_F
\frac{4 Q^2}{q_T^3}
\frac{(-1+\hat z)^2(3 \hat z (2- 3 \hat x)\hat x + \hat x (1+ \hat x) + 2\hat z^2(1-6\hat x + 6 \hat x^2))}{\hat x \hat z^3 },\\[7pt]
\hat \sigma^2_{N3}= -C_F
\frac{8}{q_T}
\frac{\hat x (-1+\hat z)(-3 + 4 \hat z)}{\hat z},\\[7pt]
\hat \sigma^3_{N3}= -C_F
\frac{2 Q}{q_T^2}
\frac{(-1+\hat z)(1 + \hat z(7 - 20 \hat x) + 5\hat x + 8\hat z^2(-1 + 2\hat x))}{\hat z^2 },\\[7pt]
\hat \sigma^4_{N3}= \frac{1}{2}\hat \sigma^2_{N3},\\[7pt]
\hat \sigma^8_{N3}= -C_F
\frac{2Q}{q_T^2}
\frac{(-1+\hat z)(1 - \hat x + \hat z(-1 + 2 \hat x))}{\hat z^2},\\[7pt]
\hat \sigma^9_{N3}= -C_F
\frac{4}{q_T}
\frac{\hat x (-1+\hat z) }{\hat z},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{DN1}= C_F
\frac{2Q^2}{q_T^3}
\frac{(-1+\hat z)^2(2 \hat z (1-3\hat x)\hat x + (1+\hat x)^2 + \hat z^2(1-6\hat x +6\hat x^2))}{\hat x \hat z^3 },\\[7pt]
\hat \sigma^2_{DN1}= \frac{1}{4}\hat \sigma^2_{N1}, \\[7pt]
\hat \sigma^3_{DN1}= C_F
\frac{4Q}{q_T^2}
\frac{(-1+\hat z)^2(-1 -\hat x + \hat z(-1 + 2\hat x))}{\hat z^2 },\\[7pt]
\hat \sigma^4_{DN1}= -C_F
\frac{4}{q_T}
\frac{(-1+\hat z)(1 + \hat x - \hat x \hat z)}{\hat z},\\[7pt]
\hat \sigma^8_{DN1}= \frac{1}{2}\hat \sigma^8_{N1},\\[7pt]
\hat \sigma^9_{DN1}= \frac{1}{2}\hat \sigma^9_{N1},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{DN3}=- C_F
\frac{4Q^2}{q_T^3}
\frac{(-1+\hat z)^2(1 + 2 \hat z(2-3 \hat x)\hat x + \hat x^2 + \hat z^2(1-6\hat x + 6\hat x^2))}{\hat x \hat z^3 },\\[7pt]
\hat \sigma^2_{DN3}= -2 \hat \sigma^2_{DN1},\\[7pt]
\hat \sigma^3_{DN3}= -C_F
\frac{8Q}{q_T^2}
\frac{(-1+\hat z)^2( -\hat x + \hat z(-1 + 2\hat x))}{\hat z^2 },\\[7pt]
\hat \sigma^4_{DN3}= - \hat \sigma^2_{DN1},\\[7pt]
\hat \sigma^8_{DN3}= 0,\\[7pt]
\hat \sigma^9_{DN3}=0,
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{O1}= -C_F
\frac{8Q^2}{q_T^3}
\frac{ (-1+\hat z)^2(-1-\hat x + \hat z(-2 + 3 \hat x))}{\hat z^3 }, \\[7pt]
\hat \sigma^2_{O1}= - C_F
\frac{16}{q_T}
\frac{ \hat x (-1+\hat z)}{\hat z}, \\[7pt]
\hat \sigma^3_{O1}= -C_F
\frac{4Q}{q_T^2}
\frac{ (-1+\hat z)(-1 -3\hat x + \hat z(-1 + 4\hat x))}{\hat z^2 }, \\[7pt]
\hat \sigma^4_{O1}= \frac{1}{2}\hat \sigma^2_{O1}, \\[7pt]
\hat \sigma^8_{O1}= - C_F
\frac{4Q}{q_T^2}
\frac{(-1+\hat z) (-1 -\hat x + \hat z(-1 + 2 \hat x))}{\hat z^2 }, \\[7pt]
\hat \sigma^9_{O1}= \frac{1}{2}\hat \sigma^2_{O1},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{O2}= C_F
\frac{4Q^2}{q_T^3}
\frac{ (-1 + \hat z)^2 (1 + \hat x + 3\hat z(2 - 3 \hat x)\hat x + 2 \hat x^2 + \hat z^2(1-6 \hat x + 6\hat x^2))}{\hat x \hat z^3 }, \\[7pt]
\hat \sigma^2_{O2}= - C_F
\frac{8}{q_T}
\frac{\hat x (-1 +\hat z)(3-2 \hat z)}{\hat z}, \\[7pt]
\hat \sigma^3_{O2}= C_F
\frac{2Q}{q_T^2}
\frac{(-1+ \hat z)(1 + \hat z(5-16 \hat x)+ 7 \hat x + \hat z^2(-4 + 8 \hat x))}{\hat z^2 }, \\[7pt]
\hat \sigma^4_{O2}= \frac{1}{2}\hat \sigma^2_{O2}, \\[7pt]
\hat \sigma^8_{O2}= \frac{1}{2} \hat \sigma^8_{O1}, \\[7pt]
\hat \sigma^9_{O2}= \frac{1}{2} \hat \sigma^9_{O1},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{DF2}= \frac{C_F}{N}
\frac{Q^2}{q_T^3}
\frac{(-1+\hat z)(1 + \hat x - \hat z \hat x)}{\hat x \hat z^2 }, \\[7pt]
\hat \sigma^2_{DF2}= 0, \\[7pt]
\hat \sigma^3_{DF2}= - \frac{C_F}{N}
\frac{Q}{q_T^2}
\frac{(-1 + \hat z)}{\hat z}, \\[7pt]
\hat \sigma^4_{DF2}= -\frac{C_F}{N}
\frac{1}{q_T}, \\[7pt]
\hat \sigma^8_{DF2}= \hat \sigma^3_{DF2} ,\\[7pt]
\hat \sigma^9_{DF2}= \hat \sigma^4_{DF2},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{DF3}= -\frac{C_F}{N}
\frac{1}{q_T}
\frac{3\hat z(1-2\hat x)\hat x + \hat x (1+\hat x) + \hat z^2(1-6\hat x +6\hat x^2)}{\hat z^2}, \\[7pt]
\hat \sigma^2_{DF3}= -\frac{C_F}{N}
\frac{4 q_T }{Q^2}
\hat x^2 , \\[7pt]
\hat \sigma^3_{DF3}= -\frac{C_F}{N}
\frac{1}{Q}
\frac{\hat x(-1 -2 \hat x + \hat z(-2 + 4 \hat x))}{\hat z }, \\[7pt]
\hat \sigma^4_{DF3}= -\frac{C_F}{N}
\frac{1}{q_T}
\frac{(-1+ \hat x) (-1 + 2(-1 + \hat z)\hat x)}{\hat z}, \\[7pt]
\hat \sigma^8_{DF3}= \frac{C_F}{N}
\frac{1}{Q}
\frac{\hat x}{\hat z }, \\[7pt]
\hat \sigma^9_{DF3}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{-1 + \hat x}{\hat z},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{DF4}=
\frac{C_F}{N}
\frac{Q^2}{q_T^3}
\frac{(-1+\hat z)(-1 + \hat z + 5 \hat x -6 \hat z \hat x -6 \hat x^2 + 6 \hat z \hat x^2)}{\hat x^2} \\
\ \ +\frac{C_F}{q_T}
\frac{(\hat x-1)^2(1+\hat x) - \hat z (\hat x-1)^2
(1+6 \hat x) - \hat z^3(1-6\hat x + 6\hat x^2) + \hat z^2(1+\hat x - 6\hat x^2 +
6\hat x^3)}
{\hat z \hat x(-1 + \hat z + \hat x)},\\[7pt]
\hat \sigma^2_{DF4}= \frac{C_F}{N}
\frac{4}{q_T}
{\hat z (-1 + \hat z)}
-C_F
\frac{4q_T}{Q^2}
\frac{\hat z (1 + \hat z - \hat x)\hat x}{(-1 + \hat z + \hat x)},
\\[7pt]
\hat \sigma^3_{DF4}= \frac{C_F}{N}
\frac{2Q}{q_T^2}
\frac{(-1+\hat z) (-\hat x + \hat z(-1 + 2\hat x))}{ \hat x}
+
\frac{2C_F}{Q}
\frac{(\hat z +\hat z^2 + \hat x - 2\hat z \hat x -2 \hat z^2 \hat x -
\hat x^2 + 2 \hat z \hat x^2)}{(-1 + \hat z + \hat x)},\ \\[7pt]
\hat \sigma^4_{DF4}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{(1 + \hat x + 2\hat z^2 \hat x - \hat z(1+2 \hat x))}{\hat x}
-C_F
\frac{2}{q_T}
\frac{(-1 + \hat x)(1 + \hat z^2 \hat x + \hat x^2 -
\hat z(1+ \hat x^2))}{\hat x(-1 + \hat z +\hat x)},\\[7pt]
\hat \sigma^8_{DF4}= \frac{C_F}{N}
\frac{2 }{Q} \hat z
-C_F
\frac{4}{Q}
\frac{ \hat z (-1 + \hat x)}{(-1 + \hat z + \hat x)},\\[7pt]
\hat \sigma^9_{DF4}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{ (1 - \hat x + \hat z( -1 + 2\hat x))}{\hat x}
- C_F
\frac{2}{q_T}
\frac{(-1+ \hat x)(1- \hat x + \hat z(-1 + 2 \hat x))}{\hat x (-1 + \hat z + \hat x)},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{DF5}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{(-1 + \hat x)^2(1 + \hat x + 6 \hat z^2 \hat x - \hat z(1 + 6 \hat x))}{ \hat z^2 \hat x} \\
\quad -
\frac{C_F}{q_T}
\frac{(\hat x-1)^2(1+\hat x) - \hat z (\hat x-1)^2(1+6 \hat x)
-\hat z^3(1-6\hat x + 6\hat x^2) + \hat z^2(1+\hat x - 6\hat x^2 + 6\hat x^3)}{
\hat z \hat x(-1 + \hat z + \hat x)}, \\[7pt]
\hat \sigma^2_{DF5}= \frac{C_F}{N}
\frac{4q_T}{Q^2}
{ (-1 + \hat x) \hat x}
+C_F
\frac{4 q_T}{Q^2}
\frac{\hat z (1 + \hat z - \hat x)\hat x}{(-1 + \hat z + \hat x)}, \\[7pt]
\hat \sigma^3_{DF5}= \frac{C_F}{N}
\frac{2}{Q}
\frac{(-1 + \hat x) (-\hat x + \hat z(-1 + 2\hat x))}{\hat z }
-\frac{2C_F}{Q}
\frac{(\hat z +\hat z^2 + \hat x - 2\hat z \hat x -2 \hat z^2 \hat x - \hat x^2 + 2 \hat z \hat x^2)}{(-1 + \hat z + \hat x)},\\[7pt]
\hat \sigma^4_{DF5}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{ (-1+ \hat x)(-1 + \hat z + \hat x - 2\hat z \hat x -2 \hat x^2 + 2 \hat z \hat x^2 )}{\hat x \hat z }\\
\qquad\quad+C_F
\frac{2}{q_T}
\frac{(-1+ \hat x) (1 + \hat z^2 \hat x + \hat x^2 - \hat z(1+ \hat x^2))}{\hat x (-1 + \hat z +\hat x)},\\[7pt]
\hat \sigma^8_{DF5}= - \frac{C_F}{N}
\frac{2}{Q}
{(-1 + \hat x)}
+C_F
\frac{4}{Q}
\frac{\hat z (-1 + \hat x)}{(-1 + \hat z + \hat x)},\\[7pt]
\hat \sigma^9_{DF5}= -\frac{C_F}{N}
\frac{1}{q_T}
\frac{ (-1 + \hat x)(1 - \hat x + \hat z( -1 + 2\hat x))}{\hat x \hat z}
+ C_F
\frac{2}{q_T}
\frac{(-1 + \hat x) (1- \hat x + \hat z(-1 + 2 \hat x))}{\hat x (-1 + \hat z + \hat x)},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{GF2}= -\frac{C_F}{N}
\frac{Q^2}{q_T^3}
\frac{(-1+\hat z)(1 - \hat x + \hat z \hat x)}{\hat x \hat z^2 }, \\[7pt]
\hat \sigma^2_{GF2}= 0, \\[7pt]
\hat \sigma^3_{GF2}= \hat \sigma^3_{DF2}, \\[7pt]
\hat \sigma^4_{GF2}=\hat \sigma^4_{DF2}, \\[7pt]
\hat \sigma^8_{GF2}= \hat \sigma^8_{DF2}, \\[7pt]
\hat \sigma^9_{GF2}= \hat \sigma^9_{DF2},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{GF3}= -\frac{C_F}{N}
\frac{1}{q_T}
\frac{\hat z (5-6 \hat x)\hat x + \hat x (-1+\hat x)+\hat z^2(1-6\hat x + 6\hat x^2)}{\hat z^2}, \\[7pt]
\hat \sigma^2_{GF3}= \hat \sigma^2_{DF3}, \\[7pt]
\hat \sigma^3_{GF3}= -\frac{C_F}{N}
\frac{1}{Q}
\frac{\hat x(-1+2\hat z)(-1+2\hat x)}{\hat z }, \\[7pt]
\hat \sigma^4_{GF3}= -\frac{C_F}{N}
\frac{1}{q_T}
\frac{(-1+ \hat x) (1 + 2(-1 + \hat z)\hat x)}{\hat z}, \\[7pt]
\hat \sigma^8_{GF3}= - \hat \sigma^8_{DF3}, \\[7pt]
\hat \sigma^9_{GF3}=- \hat \sigma^9_{DF3},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{GF4}= \frac{C_F}{N}
\frac{Q^2}{q_T^3}
\frac{(-1+\hat z) (-1 + \hat z + 7 \hat x -6 \hat z \hat x -6 \hat x^2 + 6 \hat z \hat x^2)}{\hat x^2 } \\
\qquad\qquad-C_F
\frac{1}{q_T}
\frac{(-1+\hat x)^2 -6\hat z(-1+\hat x)\hat x + \hat z^2(1 - 6\hat x + 6\hat x^2)}{ \hat z \hat x},\\[7pt]
\hat \sigma^2_{GF4}= \frac{C_F}{N}
\frac{4}{q_T}
{ \hat z (-1+\hat z)}
-C_F
\frac{4q_T}{Q^2}
{ \hat z \hat x},\\[7pt]
\hat \sigma^3_{GF4}= \frac{C_F}{N}
\frac{2Q}{q_T^2}
\frac{(-1+\hat z) (1 -\hat x + \hat z(-1 + 2\hat x))}{ \hat x}
-C_F
\frac{2}{Q}
{(1-\hat x + \hat z(-1+2\hat x))},\\[7pt]
\hat \sigma^4_{GF4}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{ (-1 +\hat z + \hat x -2\hat z \hat x + 2\hat z^2 \hat x)}{\hat x}
-C_F
\frac{2}{q_T}
\frac{ (-1+\hat x) (1+ (-1+\hat z)\hat x) }{\hat x},\\[7pt]
\hat \sigma^8_{GF4}= \frac{C_F}{N}
\frac{2Q}{q_T^2}
{(-1+\hat z)}
-C_F
\frac{2 }{Q},\\[7pt]
\hat \sigma^9_{GF4}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{ (-1+\hat z - \hat x + 2\hat z \hat x)}{\hat x}
- C_F
\frac{2}{q_T}
\frac{(-1 +\hat x)}{\hat x},
\end{dcases}
\end{eqnarray}
\begin{eqnarray}
\begin{dcases}
\hat \sigma^1_{GF5}= \frac{C_F}{N}
\frac{1}{q_T}
\frac{(-1 +\hat x)^2 (-1 + \hat z+\hat x -6 \hat z \hat x +6\hat z^2 \hat x)}{ \hat z^2 \hat x} \\
\qquad\qquad -C_F
\frac{1}{q_T}
\frac{(-1+\hat x)^2 -6\hat z(-1+\hat x)\hat x + \hat z^2(1 - 6\hat x + 6\hat x^2)}{\hat z \hat x},\\[7pt]
\hat \sigma^2_{GF5}= \frac{C_F}{N}
\frac{4q_T}{Q^2}
{(-1+\hat x) \hat x}
-C_F
\frac{4q_T}{Q^2}
{ \hat z \hat x},\\[7pt]
\hat \sigma^3_{GF5}=\frac{C_F}{N}
\frac{2}{Q}
\frac{(-1+\hat x)(1 -\hat x + \hat z(-1 + 2\hat x))}{ \hat z}
-C_F
\frac{2}{Q}
{(1-\hat x + \hat z(-1+2\hat x))},\\[7pt]
\hat \sigma^4_{GF5}=\frac{C_F}{N}
\frac{1}{q_T}
\frac{ (-1+\hat x)(-1 +\hat z +3 \hat x -2\hat z \hat x -2\hat x^2 + 2\hat z \hat x^2)}{\hat x \hat z} \\
\qquad\qquad -C_F
\frac{2}{q_T}
\frac{ (-1+\hat x) (1+ (-1+\hat z)\hat x) }{\hat x},\\[7pt]
\hat \sigma^8_{GF5}= - \frac{C_F}{N}
\frac{2}{Q}
\frac{(-1+\hat z)(-1+\hat x)}{\hat z}
-C_F
\frac{2 }{Q},\\[7pt]
\hat \sigma^9_{GF5}= - \frac{C_F}{N}
\frac{1}{q_T}
\frac{(-1+\hat x) (1-3\hat x +\hat z(-1 + 2\hat x))}{\hat x \hat z}
- C_F
\frac{2}{q_T}
\frac{(-1+\hat x)}{\hat x}.
\end{dcases}
\label{sigmaGF5}
\end{eqnarray}
Eqs. (\ref{sigmaG})-(\ref{sigmaGF5}) and the relations (\ref{O2O3}),
(\ref{DF1N1}),
(\ref{DN12DO12}),
(\ref{DN3DO3}) specify all the partonic cross sections in the final
formula (\ref{result}).
\section{Summary}
In this paper we have studied the transversely polarized spin-1/2 hyperon production
in SIDIS, $ep\to e\Lambda^\uparrow X$. Specifically,
we have derived the LO twist-3 gluon FF contribution
to the polarized cross section. Since the twist-3 gluon FFs are related to
the $q\bar{q}g$-FFs through the EOM relations and the LIRs, we have consistently taken
the latter contribution into account as well.
This completes the twist-3 LO cross section for this process,
together with the results for the contribution from the twist-3 DF and the
twist-3 quark FFs derived in \cite{Koike:2022ddx}.
The final result for the cross section is given in
(\ref{result}). It consists of five components with different azimuthal structures as
\begin{align}
\frac{\drm^6 \sigma}{\drm x_{bj} \drm Q^2 \drm z_f \drm q_T^2 \drm \phi \drm \chi} =
&\mathcal{F}_1 \sin \Phi_S + \mathcal{F}_2 \sin \Phi_S
\cos \varphi + \mathcal{F}_3 \sin \Phi_S \cos 2\varphi {\nonumber}\\
&+\mathcal{F}_4 \cos \Phi_S \sin \varphi + \mathcal{F}_5 \cos \Phi_S \sin 2\varphi,
\end{align}
where $\varphi=\phi-\chi$ is the relative azimuthal angle between the lepton ($\phi$) and the
hadron ($\chi$) planes and $\Phi_S$ is the azimuthal angle of
the transverse spin vector of $\Lambda^\uparrow$ measured from the hadron plane
with the structure functions
$\mathcal{F}_{1,2,3,4,5}$ given as convolutions of the twist-3 FFs, the unpolarized quark DF
in the proton and the partonic hard cross sections.
The LO cross section given in \cite{Koike:2022ddx}
and in the present study contains several unknown nonperturbative functions, and their
determination requires global analyses of many twist-3 processes in which the same twist-3 functions
appear. Information from analyses of small-$P_T$ data in terms of the TMD factorization is
also of great help to constrain some of the twist-3 functions.
In any case, our twist-3 cross section formula
is the starting point for analyzing the large-$P_T$ hyperon polarization in SIDIS,
which we hope will be measured in future EIC experiments.
\section*{Acknowledgments}
This work has been supported by
the establishment of Niigata University fellowships towards the creation of science
technology innovation (R.I.),
the Grant-in-Aid for
Scientific Research from the Japan Society for the Promotion of Science
under Contract Nos.~19K03843 (Y.K.) and 18J11148 (K.Y.),
the National Natural Science Foundation of China
under Grant No. 11950410495, the Guangdong Natural Science Foundation under
No. 2020A1515010794
and research startup funding at South China
Normal University (S.Y.).
\bibliographystyle{unsrt}
\section{Introduction}
The vector $\textbf{a}=(a_0,\ldots, a_{d-1})$ is called \textit{unimodal} if for some (not necessarily unique) index $i$, $(a_0,\ldots, a_i)$ is non-decreasing and $(a_i,\ldots, a_{d-1})$ is non-increasing. If that is the case, we say that the unimodal vector $\textbf{a}$ \textit{peaks} at $i$. We say that the vector $\textbf{a}$ \textit{dips} at $i$ if $a_{j}>a_i<a_{k}$ for some $0\leq j<i<k\leq d-1$. Clearly, the vector $\textbf{a}$ is unimodal if and only if it does not dip anywhere. The question of unimodality of the members of certain classes of vectors has been of long-standing interest in algebra, combinatorics, graph theory and geometry (see e.g. \cite{B1,sta}).
By \textit{$f$-vector} (\textit{face vector}) we mean the vector $(f_0,\ldots,f_{d-1})$, where $f_i$ is the number of $i$-dimensional proper faces of a $d$-polytope. The unimodality of face vectors of polytopes is extensively studied (see e.g. \cite{bj1,eck,maj,SZ1}). In 1961 (according to Bj\"orner \cite{bj1}), Motzkin conjectured that the $f$-vector of any polytope is unimodal. The Unimodality Conjecture for polytopes was also stated by Welsh \cite{wel}. Danzer already showed in 1964 (see \cite[Section 2.1]{zi1}) that the conjecture cannot stand in its full generality, still leaving open the question: which natural classes of polytopes have unimodal $f$-vectors?
Examples of simplicial polytopes with non-unimodal face vectors were first published by Bj\"orner \cite{bj2}. Bj\"orner's original counterexamples were $24$-dimensional, but subsequently also $20$-dimensional counterexamples were constructed by Bj\"orner \cite{bj3} and independently by Lee \cite{bil,lee}. It was shown by Eckhoff \cite{eck} that, in fact, this is the smallest dimension in which simplicial counterexamples can be found. In Section \ref{non}, we construct a $12$-dimensional cubical polytope with non-unimodal face vector, and we show in Section \ref{small} that there are no cubical counterexamples in dimensions less than 11.
Bj\"orner conjectured a partial unimodality property for polytopes. Namely, the face vectors of polytopes increase on the first quarter, and they decrease on the last quarter.
Bj\"orner has proved in \cite{bj1}, that this conjecture holds for simplicial polytopes, the face vectors of simplicial polytopes moreover increase up to the middle, and they decrease
on the last quarter. In Section \ref{par}, we prove a similar statement for cubical polytopes: their face vectors can dip only on the middle one-third part.
\section{Cubes and cubical polytopes}
The \textit{$d$-cube} (denote by $C^d$) is a polytope
combinatorially equivalent to the unit cube $[0, 1]^d$.
It is a well-known fact that the face vector of the $d$-cube is unimodal and peaks at $\lfloor \frac{d}{3} \rfloor$ (in addition, it also peaks at $\lfloor \frac{d+1}{3} \rfloor$). This fact can also be expressed as follows: let $j\in \{0,1,2\}$ such that $d\equiv j \mod 3$, then the $f$-vector of the $d$-cube peaks at $\frac{d-j}{3}$. If $j=2$, then it also peaks at $\frac{d-j}{3}+1$. The $f$-vector of the $d$-cube has no more peaks.
The $f$-vector of the $d$-cube is strictly increasing up to $\lfloor \frac{d}{3} \rfloor$ and it is strictly decreasing from $\lfloor \frac{d+1}{3} \rfloor$ on. That is,
\begin{equation}\label{szig}
f_0(C^d)<\cdots <f_{\lfloor \frac{d}{3} \rfloor}(C^d)\hspace{3mm} \text{and}\hspace{3mm} f_{\lfloor \frac{d+1}{3}\rfloor}(C^d)>\cdots > f_{d-1}(C^d)
\end{equation}
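Indeed, since $f_i(C^d)=2^{d-i}\binom{d}{i}$, these properties can be read off from the ratio $f_{i+1}(C^d)/f_i(C^d)=\frac{d-i}{2(i+1)}$, which is greater than $1$ for $i<\frac{d-2}{3}$, equal to $1$ for $i=\frac{d-2}{3}$, and less than $1$ otherwise. For instance, $\textbf{f}(C^5)=(32,80,80,40,10)$ peaks at both $1=\lfloor \frac{5}{3} \rfloor$ and $2=\lfloor \frac{6}{3} \rfloor$.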
A $d$-polytope is called \textit{cubical} provided all its facets are combinatorially equivalent to $(d-1)$-cubes (in other words, its all proper faces are cubes).
The following important combinatorial invariant of a cubical polytope was introduced by Adin \cite{adi}.
Let $P$ be a cubical $d$-polytope with $f$-vector $\textbf{f}=(f_0,\ldots,f_{d-1})$ and let $H$ be the $d\times d$ matrix given by
\begin{equation}\label{fhH}
H(i,j)=2^{-j}\binom{d-i-1}{d-j-1}, \hspace{4mm} \text{for} \hspace{3mm} 0\leq i,j \leq d-1
\end{equation}
Define the \textit{short cubical h-vector} $\textbf{h}^{(sc)}=(h_0^{(sc)},\ldots,h_{d-1}^{(sc)})$ of $P$ by $\textbf{h}^{(sc)}=\textbf{f}\cdot H^{-1}$.
Equivalently, the face vector of $P$ can be expressed by
\begin{equation}\label{fh}
\textbf{f}=\textbf{h}^{(sc)}\cdot H
\end{equation}
\begin{lemma}\label{egy} Let $d$ be a positive integer and let $H$ be the matrix defined in \text{\textnormal{(\ref{fhH})}}.
\vspace{1.5mm}Then
\begin{enumerate}
\item[(\textit{i})] $H(i,i)<H(i,i+1)<\cdots <H(i, \lfloor\frac{d+2i}{3}\rfloor-1)\leq H(i, \lfloor\frac{d+2i}{3}\rfloor)>\cdots >H(i,d-1)$ \\ for $0\leq i \leq d-1$
\item[(\textit{ii})]$H(i,j)=H(i+1,j)+2H(i+1,j+1)$ for $0\leq i,j \leq d-2$
\end{enumerate}
\end{lemma}
\begin{proof}Denote the $i^{th}$ row of $H$ by $H(i,*)$. Let us note that for all $0\leq i \leq d-1$, $H(i,*)$ can be viewed as the concatenation of two vectors. The first one is the null vector (with $i$ components) and the second one is $2^{1-d}\cdot \textbf{f}(C^{d-i-1})$, where $\textbf{f}(C^{d-i-1})$ is the face vector (supplemented with the last component $f_{d-i-1}=1$) of a $(d-i-1)$-dimensional cube. Therefore, the inequalities of $(i)$ follow from (\ref{szig}). Thus, $H(i,*)$ is unimodal and peaks at $\lfloor \frac{d-i-1+1}{3} \rfloor+i=\lfloor \frac{d-i}{3} \rfloor+i=\lfloor \frac{d+2i}{3} \rfloor$ and also at $\lfloor \frac{d-i-1}{3} \rfloor+i=\lfloor \frac{d+2i-1}{3} \rfloor$ for all $0\leq i \leq d-1$.
The recursion of $(ii)$ follows from Pascal's rule, i.e. $\binom{n}{k}= \binom{n-1}{k-1}+\binom{n-1}{k}$.
\end{proof}
\begin{remark}\label{rem} Alternatively, we can separate three different cases as follows: let $j\in \{0,1,2\}$ such that $d-i\equiv j \mod 3$, then $H(i,*)$ peaks at $\frac{d+2i-j}{3}$. If $j=0$, then it also peaks at $\frac{d+2i-j}{3}-1$. The vector $H(i,*)$ has no more peaks. \end{remark}
%
%
%
The following lemma
can be verified through a case-by-case analysis. The proof is based on the inequalities $(i)$ and the recursion $(ii)$ of Lemma \ref{egy} and some elementary properties of the binomial coefficients. We omit the details.
Alternatively, it can also be proved by adapting the methods (with some necessary modifications) applied by Bj\"orner in the proof of Lemma 6 and Lemma 7 of \cite{bj1}.
%
\begin{lemma}\label{bi}
Let $d$ be a positive integer and $0\leq i\leq k\leq d-1$. Let $H$ be the matrix defined in \text{\textnormal{(\ref{fhH})}}. Let $a_j=H(i,j)+H(k,j)$ for $0\leq j \leq d-1$. Then
$$a_0<\cdots <a_{\lfloor\frac{d+2i}{3}\rfloor-1}\vspace{0mm}\leq a_{ \lfloor\frac{d+2i}{3}\rfloor}>\cdots >a_{d-1}$$
\end{lemma}
%
The following important lemma
is needed to prove Theorem \ref{thp} and Theorem \ref{kis}. Statement $(ii)$ is a brief formulation of the Cubical Dehn-Sommerville Equations.
\begin{lemma}[Adin \cite{adi}, Lemma 1, Corollary 10]\label{ad1}
Let $P$ be a cubical $d$-polytope. Then
\begin{enumerate}
\item[(\textit{i})] all the components of $\textbf{h}^{(sc)}(P)$ are positive integers,
\item[(\textit{ii})] $\textbf{h}^{(sc)}(P)$ is symmetric: $h_i^{(sc)}=h_{d-i-1}^{(sc)}$, $(0\leq i \leq d-1),$
\item[(\textit{iii})] $\textbf{h}^{(sc)}(P)$ is unimodal.
\end{enumerate}
\vspace{-1mm}\end{lemma}
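For illustration, for $d=3$ the matrix defined in (\ref{fhH}) is
\begin{displaymath}
H=\left( \begin{array}{ccc}
1 & 1 & \frac{1}{4} \\
0 & \frac{1}{2} & \frac{1}{4} \\
0 & 0 & \frac{1}{4}
\end{array} \right),
\end{displaymath}
and the $3$-cube with $\textbf{f}(C^3)=(8,12,6)$ has $\textbf{h}^{(sc)}(C^3)=(8,8,8)$, in accordance with the properties $(i)$-$(iii)$.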
\section{Capped and neighborly cubical polytopes}\label{cnc}
There is a cubical analogue of the simplicial stacking operation, the so-called \textit{capping operation} described by Jockusch \cite{joco} as follows: let $Q$ be a cubical $d$-polytope, then the polytope $P$ is called a \textit{capped polytope over} $Q$ if there is a
$d$-cube $C$ such that $P = Q \cup C$ and $Q \cap C$ is a facet of both $Q$ and $C$. Roughly speaking, $P$ is obtained by glueing a cube onto a facet of $Q$. A polytope $P$ is said to be an $n$\textit{-fold capped polytope over} $Q$ if there is a sequence $P_0, P_1,\ldots,P_n$ ($1\leq n$) of polytopes such that for $i=0,\ldots,n-1$
\begin{enumerate}
\item[(\textit{i})] $P_{i+1}$ is a capped polytope over $P_i$,
\item[(\textit{ii})] $P_0=Q$ and $P_n=P$.
\end{enumerate}
For $n=0$, the $n$\textit-fold capped polytope over $Q$ is $Q$ itself. A polytope is said to be $n$\textit{-fold capped} (or simply \textit{capped}) if it is an $n$-fold capped polytope over a cube. Capped polytopes are the cubical analogues of the (simplicial) stacked polytopes.
Since the capping operation destroys a cubical facet while it creates $2d-1$ new ones, it increases the component $f_k$ of the face vector by $2^{d-k}\binom{d}{k}-2^{d-k-1}\binom{d-1}{k}$ if $0\leq k \leq d-2$ and by $2(d-1)$ if $k=d-1$. Hence, if $P$ is an $n$-fold capped polytope over $Q$, then we have
\begin{equation}\label{cap}
f_k(P)=\left\{ \begin{array}{ll}
f_k(Q)+n\Big(2^{d-k}\binom{d}{k}-2^{d-k-1}\binom{d-1}{k}\Big) &
\hspace{3mm}\vspace{0mm}\textrm{if $0\leq k \leq d-2$} \\
f_k(Q)+2n(d-1) & \hspace{3mm}\textrm{if $k=d-1$}
\end{array} \right.
\end{equation}
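For example, capping a single facet of the $3$-cube ($Q=C^3$, $d=3$, $n=1$) yields $f_0=8+4=12$, $f_1=12+8=20$ and $f_2=6+4=10$; the resulting vector $(12,20,10)$ is unimodal and peaks at $1=\lfloor \frac{3+1}{3} \rfloor$.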
By using (\ref{cap}) and the fact that the face vector of the $d$-cube is unimodal and peaks at $\lfloor \frac{d+1}{3} \rfloor$, it is not difficult to show that the face vector of a capped $d$-polytope is also unimodal and also peaks at $\lfloor \frac{d+1}{3} \rfloor$.
The \textit{$k$-skeleton} of a $d$-polytope is the union of its $k$-dimensional faces. A cubical $d$-polytope (with $2^n$ vertices for some $n\geq d$) is called \textit{neighborly cubical} provided its $(\lfloor \frac{d}{2} \rfloor -1)$-skeleton is combinatorially equivalent to the $(\lfloor \frac{d}{2} \rfloor -1)$-skeleton of a cube. The concept of neighborly cubical polytopes was introduced by Babson, Billera and Chan \cite{bill}. Neighborly cubical polytopes can be considered as the cubical analogues of the (simplicial) cyclic polytopes.
It is proved in \cite{maj1} that the Unimodality Conjecture holds for neighborly cubical polytopes. The number of vertices and the dimension of a neighborly cubical polytope determine its $f$-vector, which is given by (see \cite{maj1})
\begin{displaymath}
f_k=\left\{ \begin{array}{ll}
2^{n-k}\displaystyle\sum_{i=0}^{\frac{d-2}{2}}
\textstyle
\Big(\binom{d-i-1}{k-i}+\binom{i}{k-d+i+1}\Big)\binom{n-d+i}{i} &
\textrm{if $d$ is even} \\
2^{n-k}\Bigg(\displaystyle\sum_{i=0}^{\frac{d-3}{2}}
\textstyle
\Big(\binom{d-i-1}{k-i}+\binom{i}{k-d+i+1}\Big)\binom{n-d+i}{i}+\displaystyle\sum_{j=0}^{n-d}2^{-j}\textstyle\binom{\frac{d-1}{2}}{d-k-1}
\binom{n-\frac{d+3}{2}-j}{n-d-j}\Bigg) & \textrm{if $d$ is odd}
\end{array} \right.
\end{displaymath}
By using the above explicit formula, it can be shown that the $f$-vector of a neighborly cubical polytope peaks approximately at $\lfloor \frac{2d}{3} \rfloor$ if $n$ is large enough.\vspace{2mm}
\section{Partial unimodality for cubical polytopes}\label{par}
In this section, we show that the maximal component of the face vector of a cubical polytope can occur only in the middle one-third part (i.e., between $\lfloor \frac{d}{3} \rfloor$ and $\lfloor \frac{2d}{3} \rfloor$); furthermore, a violation of the Unimodality Conjecture is possible only in this part.
\begin{theorem}[partial unimodality for cubical polytopes]\label{thp}
Let $P$ be a cubical $d$-polytope with face vector $(f_0,\ldots,f_{d-1})$. Then
\begin{enumerate}
\item[(\textit{i})] $f_0<\cdots <f_{\lfloor \frac{d}{3} \rfloor-1}\leq f_{\lfloor \frac{d}{3} \rfloor}$,\vspace{1mm}
\item[(\textit{ii})] $f_{\lfloor \frac{2d}{3} \rfloor}>\cdots>f_{d-1}$
\end{enumerate}
\end{theorem}
\begin{proof}Let $H$ be the matrix defined in \text{\textnormal{(\ref{fhH})}}. For $0\leq i\leq {d-1}$, denote the $i^{th}$ row of $H$ by $H(i,*)$. For $0\leq i\leq \lfloor \frac{d-1}{2}\rfloor$, let us define the vectors $\textbf{b}^i$ by
\begin{displaymath}
\textbf{b}^i=\left\{ \begin{array}{ll}
H(i,*)+H(d-i-1,*) &
\hspace{3mm}\textrm{if $2i\neq d-1$} \vspace{2mm}\\
H(i,*) & \hspace{3mm}\textrm{if $2i=d-1$}
\end{array} \right.
\end{displaymath}
By using the above notation and the symmetric property of the short $h$-vector (see $(ii)$ of Lemma \ref{ad1}), the relation (\ref{fh}) can be rewritten as
$$\textbf{f}(P)=\sum^{\lfloor\frac{d-1}{2}\rfloor}_{i=0}h^{(sc)}_i\textbf{b}^i$$
From Lemma \ref{bi} it follows that $\textbf{b}^i$ is unimodal and peaks at $\lfloor \frac{d+2i}{3} \rfloor$ for all $0\leq i \leq \lfloor\frac{d-1}{2} \rfloor$. Furthermore, we have
$$\textbf{b}^i_i<\cdots <\textbf{b}^i_{\lfloor\frac{d+2i}{3}\rfloor-1}\leq \textbf{b}^i_{ \lfloor\frac{d+2i}{3}\rfloor}>\cdots >\textbf{b}^i_{d-1}.$$
Therefore, all the vectors $\textbf{b}^i$ peak between $\lfloor \frac{d+2\cdot 0}{3} \rfloor=\lfloor \frac{d}{3} \rfloor$ and $\lfloor \frac{d+2\lfloor \frac{d-1}{2} \rfloor}{3} \rfloor\leq\lfloor \frac{2d}{3} \rfloor$, and
$$\textbf{b}^i_0\leq\cdots \leq\textbf{b}^i_{\lfloor\frac{d}{3}\rfloor-1}\leq \textbf{b}^i_{ \lfloor\frac{d}{3}\rfloor} \text{ and }\textbf{b}^i_{ \lfloor\frac{2d}{3}\rfloor}>\cdots >\textbf{b}^i_{d-1}$$
for all $0\leq i\leq \lfloor \frac{d-1}{2}\rfloor$. Furthermore, $\textbf{b}^i_0<\cdots <\textbf{b}^i_{\lfloor\frac{d}{3}\rfloor-1}\leq \textbf{b}^i_{ \lfloor\frac{d}{3}\rfloor}$ for $i=0$. Consequently, $\textbf{f}(P)$ has the stated property, since $\textbf{f}(P)$ is a linear combination of the vectors $\textbf{b}^i$ with positive coefficients (see $(i)$ of Lemma \ref{ad1}).
\end{proof}
\begin{remark}\label{r2} In fact, we can state a little bit more about $\textbf{f}(P)$. Namely, it can be easily checked that if $d\equiv j \pmod 6$ for some $j\in \{0,2,3\}$, then
$\textbf{b}^{\lfloor \frac{d-1}{2} \rfloor}$ peaks at $\lfloor \frac{2d}{3} \rfloor-1$, and hence every $\textbf{b}^i$ peaks at a position at most $\lfloor \frac{2d}{3} \rfloor-1$. Consequently, the sequence $(f_{\lfloor \frac{2d}{3} \rfloor-1},f_{\lfloor \frac{2d}{3} \rfloor},\ldots,f_{d-1})$ is strictly decreasing if the dimension of $P$ is congruent to $0$, $2$ or $3$ modulo $6$.\end{remark}
\section{Non-unimodal $f$-vectors}\label{non}
According to Theorem \ref{thp}, capped polytopes and neighborly cubical polytopes are extremal among all cubical polytopes, in the sense that the maximal components of their $f$-vectors occur as far from the middle as possible. Since the peaks of the $f$-vectors of a capped and a neighborly cubical polytope are situated as far away from each other as possible, it seems reasonable to involve these polytopes in constructing counterexamples to the Unimodality Conjecture for cubical polytopes. In fact, most of the non-cubical counterexamples were also based on this idea. For instance, Danzer used stacked polytopes (with peaks at $\lfloor \frac{d}{2} \rfloor$) and crosspolytopes (with peaks at $\lfloor \frac{2d}{3} \rfloor$) for his first counterexamples (according to Ziegler \cite[Section 2.1]{zi1}).
According to Theorem \ref{thp} and Remark \ref{r2}, for all cubical $12$-polytopes,
$$f_0<\cdots <f_3\leq f_4 \hspace{2mm} \text{and} \hspace{2mm} f_7>\cdots >f_{11}$$
Therefore, the possible positions for a dip are $5$ and $6$. The starting point of our construction is a neighborly cubical $12$-polytope, to which we apply the capping operation. Due to (\ref{cap}), each application of the capping operation adds a vector with peak at $\lfloor \frac{d}{3} \rfloor$ to a vector whose peak is at $\lfloor \frac{2d}{3} \rfloor$.
Joswig and Ziegler proved in \cite{jos} that there exists a $d$-dimensional neighborly cubical polytope with $2^n$ vertices for any $n\geq d\geq 2$. They constructed neighborly cubical polytopes as linear projections of cubes. As the base of our counterexample, we choose a neighborly cubical $12$-polytope with $2^{131}$ vertices, to which we apply the capping operation $1.841\cdot 10^{42}$ times. We thus obtain a cubical polytope with a non-unimodal $f$-vector.
By using (\ref{cap}) and the formula for the $f$-vector of neighborly cubical polytopes (see Section \ref{cnc}), one can compute that the $f$-vector of this polytope indeed dips at $5$.
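This verification can also be carried out mechanically. The following Python sketch simply transcribes (\ref{cap}) and the formula from Section \ref{cnc}; the function and variable names are ours, and \texttt{caps} is the rounded count $1.841\cdot 10^{42}$ quoted above, so the reader should compare the printed values of $f_4$, $f_5$ and $f_6$ directly:
\begin{verbatim}
from math import comb

def binom(a, b):  # binomial coefficient, zero outside the usual range
    return comb(a, b) if 0 <= b <= a else 0

d, n = 12, 131
caps = 1841 * 10**39  # the (rounded) number of capping operations

def f_nc(k):  # f-vector of the neighborly cubical 12-polytope (d even)
    return 2**(n - k) * sum((binom(d - i - 1, k - i)
                             + binom(i, k - d + i + 1)) * binom(n - d + i, i)
                            for i in range((d - 2) // 2 + 1))

def f(k):  # f-vector after the capping operations, by (cap)
    if k <= d - 2:
        return f_nc(k) + caps * (2**(d - k) * binom(d, k)
                                 - 2**(d - k - 1) * binom(d - 1, k))
    return f_nc(k) + caps * 2 * (d - 1)

print(f(4), f(5), f(6))
print(f(4) > f(5) < f(6))  # a dip at 5 should print True
\end{verbatim}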
\begin{theorem}
There exists a $12$-dimensional cubical polytope with $3.770370722\cdot 10^{45}$
vertices for which $f_4 >f_5 < f_6$. Consequently, the Unimodality Conjecture fails for cubical polytopes in general.
\end{theorem}
As a matter of historical interest, we mention that Lee's simplicial counterexample was one of the first applications of the sufficiency part of the famous $g$-theorem (see Billera and Lee, Corollary 2 in \cite{bil}).
\section{Unimodality for small dimensional cubical polytopes}\label{small}
Using the relations between the $f$-vectors and the $g$-vectors of simplicial polytopes, Eckhoff \cite{eck} has shown that the $f$-vector of a simplicial polytope is unimodal if its dimension is less than $20$. We prove the following statement in a similar way.
\begin{theorem}\label{kis}
The $f$-vectors of cubical $d$-polytopes are unimodal for all $d\leq 10$.
\end{theorem}
\begin{proof}
First let $P$ be a cubical $10$-polytope with $f$-vector $\textbf{f}=(f_0,\ldots,f_9)$. It follows from Theorem \ref{thp} that $f_0<f_1< f_2\leq f_3$ and $f_6>\cdots> f_9$. Hence, $\textbf{f}$ can possibly dip only at $4$ or $5$. Using Lemma \ref{ad1} and the relation $\textbf{f}=\textbf{h}^{(sc)}\cdot H$, one can show that all the assumptions $$f_3>f_4<f_5,\hspace{1mm} f_3>f_4<f_6, \hspace{1mm}f_3>f_5<f_6 \hspace{1mm}\text{ and }\hspace{1mm} f_4>f_5<f_6$$ lead to contradictions. Consequently, the face vector of $P$ is indeed unimodal.
For $d<10$, similar reasoning completes the proof.
\end{proof}
The method of the above proof does not lead to an analogous result in the case $d=11$. Although we could construct a symmetric unimodal vector $\textbf{v}$ (whose components are positive integers) such that $\textbf{v}\cdot H$ is non-unimodal, we could not guarantee that there exists an $11$-polytope whose short cubical $h$-vector equals $\textbf{v}$. Since the complete combinatorial characterization of cubical polytopes is not known, the following question still remains open:
\begin{question}Is there any cubical $11$-polytope with non-unimodal face vector?
\end{question}
To violate unimodality in dimension $d=11$, the candidate for the role of the $h$-vector needs an ``outlandish shape'': its middle component should be relatively large compared to the other components. Hence, by considering the characterization of simplicial polytopes, one may believe that the answer to the above question is negative.
\section{Introduction}
Let $\mathcal C$ be a class of topological spaces. A topological space $Y$ is called an absolute extender (AE) for the
class $\mathcal{C}$ if, given a space $X\in\mathcal{C}$, a closed set $A\subset X$ and a continuous function
$f\from A\to Y$, there is a continuous extension $F\from X\to Y$. In this sense, Dugundji's extension theorem tells us that
locally convex spaces are absolute extenders for the class of metric spaces. With this in mind, consider a functor
$\textbf{F}\from \textbf{Tych} \to \mathcal C$ that goes from the class of Tychonoff spaces to the class of
topological spaces $\mathcal{C}$ and that assigns to each topological space $X$ a topological space $F(X)\in \mathcal{C}$
so that $F(X)$ contains a closed embedded copy of $X$ and so that every continuous function
$f\from X\to Y$, $Y\in \mathcal C$, has a continuous extension $f_\#\from F(X)\to Y$. If $A\subset X$ is such that
$F(A)$ is a subspace of $F(X)$, we will say that $A$ is $F$-embedded in $X$. Given a topological space $X$ and a subspace $A$, if every continuous function $f\from A\to Y$, $Y\in \mathcal C$, has a continuous extension $\tilde f\from X\to Y$, we will say that $A$ is an {\it $F$-valued retract} (or an {\it $F$-retract}) of $X$. Accordingly,
a continuous function $r\from X\to F(A)$ is called an {\it $F$-retraction}, or an {\it $F$-valued retraction}, if the
restriction of $r$ to $A$ coincides with the embedding of $A$ in $F(A)$ and there is a continuous retraction
$r_\#\from F(X)\to F(A)$ that extends $r$. In particular, $A$ is an {\it $L$-retract\/} of $X$ if $A$ is $L$-embedded in
$X$ and the subspace $L(A)$ of $L(X)$ spanned by $A$ is a linear retract of $L(X)$.
Let us denote by $\textbf{HLocon}$ the category that has as objects the Hausdorff locally convex spaces and whose arrows are
the continuous linear mappings between them. Considering all of the above, we are especially interested in studying
the $L$-valued retracts, where $\textbf{L}\from \textbf{Tych} \to \textbf{HLocon}$ is the functor that assigns to each
Tychonoff space $X$ its free locally convex space $L(X)$.
This is largely motivated by the recent interest in free locally convex spaces in mathematical research.
In addition, as is well known, free locally convex spaces have a strong link with weak spaces of continuous functions,
and although in general it is not possible to establish a natural topology $\eta$ on $C(X)$ so that $L(X)$ and $(C(X),\eta)$ form a dual pair just as $L_p(X)$ ($L(X)$ endowed with its $*$-weak topology) and $C_p(X)$ do, we can introduce concepts for
$C_p(X)$ motivated by concepts in the theory of $L(X)$. In particular, we will see that the $L$-retracts lead to a concept
in the $C_p$-theory that is stronger than the notion of an $\ell$-embedded set.
Specifically, these concepts are so similar that, based on them, we will try to carry out a study of the relation of
$L$-equivalence in the same way that the relation of $\ell$-equivalence (which derives from the constructions of the weak
spaces of continuous functions) has been investigated. In fact, we will see that the relation of $L$-equivalence can be
studied within $C_p$-theory, and that this connection requires only a minor extra condition. Although our
results focus on the $L$-equivalence of continuous mappings, we should keep in mind that this implies the $L$-equivalence of
topological spaces.
\section{Basic properties of free locally convex spaces}
In what follows, every topological space is assumed to be Tychonoff, that is, $T_1$ and completely regular. Likewise,
all topological vector spaces are assumed to be Hausdorff and to be over $\reals$. The weak topological dual of a locally convex space $E$ will be denoted by $E'$. We say that $E$ is {\it weak\/} if $E$ is topologically isomorphic to $(E')'$
(equivalently, the topology of $E$ is projective with respect to $E'$).
We define the free locally convex space (in the Markov sense) over a topological space $X$ as a pair $(\delta_X, L(X))$
formed by a continuous injection $\delta_X\from X\to L(X)$ and a locally convex space $L(X)$ such that $L(X)$ is the linear
span of $\delta_X(X)$ and for each continuous function $f\from X\to E$ to a locally convex space $E$ there is a continuous
linear mapping $f_\#\from L(X)\to E$ such that $f=f_\#\circ \delta_X$. Similarly to Graev, we define the free locally convex
space in the Graev sense over the topological space with a distinguished point $(X, x_{0})$ as a pair
$(\delta_{X}, GL(X,x_0))$ formed by a continuous injection $\delta_X\from X\to GL(X,x_0)$ with $\delta_{X}(x_0)=0$ and
a locally convex space $GL(X,x_0)$ such that $GL(X,x_0)$ is the linear span of $\delta_X(X)$ and for every continuous
function $f\from X\to E$ where $E$ is a locally convex space and $f(x_0)=0$, there is a unique continuous linear mapping
$f_\#\from GL(X,x_{0})\to E$ such that $f=f_\#\circ \delta_X$.
The mapping $\delta_X$ is known as {\it Dirac's embedding}, and for each $x\in X$, $\delta_X(x)=\delta_x$ is a
linear functional that assigns to each $f\in \reals^X$ its value at $x$, that is, $\delta_x(f)=f(x)$. In this sense, we can view
the set $L(X)$ ($GL(X,x_0)$) as the set of finite linear combinations
$\lambda_1\delta_{x_1}+\dots +\lambda_n\delta_{x_n}$ with $n\in \mathbb N$, $\lambda_i\in \reals$ and
$x_i\in X$ ($x_i\in X\setminus \{x_{0}\}$). The following facts are well known \cite\refGabriyelyan:
\begin{theorem}\label{Fundamental}
Let $X$ be a topological space and $x_{0}, x_1 \in X$ two different points. Then
\begin{enumerate}
\item[(1)] The spaces $L(X)$ and $GL(X,x_{0})$ always exist and are unique up to a topological isomorphism;
\item[(2)] $\delta_X(X)$ is a Hamel base for $L(X)$, and $\delta_X(X\setminus \{x_0\})$ is a Hamel base for $GL(X,x_{0})$;
\item[(3)] The topologies of $L(X)$ and $GL(X,x_{0})$ are Hausdorff and make Dirac's embedding a topological embedding, so that $X$ is embedded in $L(X)$ and $GL(X,x_{0})$ as a closed subspace;
\item[(4)] For any $x_0, x_1\in X$, the spaces $GL(X,x_0)$ and $GL(X,x_1)$ are topologically isomorphic.
\end{enumerate}
\end{theorem}
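For instance, if $X=\{x\}$ is a singleton, then by $(2)$ we have $L(X)=\reals\delta_x\cong \reals$, while $GL(X,x)=\{0\}$.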
To simplify notation, we will assume in what follows that $X$ is a subset of $L(X)$. The next statement is immediate from the
definition.
\begin{coro}
A linear mapping $f\from L(X)\to E$ to a locally convex space $E$ is continuous if and only if the restriction $f|X$
is continuous.
\end{coro}
In a categorical context, if we denote by $\textbf{Tych}_*$ the category of Tychonoff spaces with distinguished point and
continuous functions that preserve the distinguished points, Theorem \ref{Fundamental} tells us that the forgetful
functors $\textbf{U}\from \textbf{HLocon}\to \textbf{Tych}$ and $\textbf{U}_{*}\from \textbf{HLocon}\to \textbf{Tych}_{*}$ have
left adjoint functors $\textbf{L}\from \textbf{Tych} \to \textbf{HLocon}$ and
$\textbf{GL}\from \textbf{Tych}_* \to \textbf{HLocon}$. Note that there is also an adjunction
$\textbf{V}\from \textbf{Tych}_*\to \textbf{Tych}$ and $\textbf{P}\from \textbf{Tych}\to \textbf{Tych}_*$, where
$\textbf{V}$ is the forgetful functor and $\textbf{P}$ is the functor that assigns to each topological space $X$ the
topological space $X^+=(X\oplus\{a_X\},a_X)$, where $a_X$ is an isolated point that does not belong to $X$, and to each continuous
function $f\from X\to Y$ assigns $f^+\from X^+\to Y^+$ so that $f^+|X=f$ and $f^+(a_X)=a_Y$. Taking this into account, we
have the following results.
\begin{coro}\label{natiso}
The functors $\textbf{L}$ and $\textbf{GL}\circ \textbf{P}$ are naturally isomorphic; moreover, both $\textbf{L}$ and
$\textbf{GL}$ respect finite coproducts.
\end{coro}
\begin{coro}\label{Coro-GL y P}
Let $X$ and $Y$ be topological spaces, $x_0$ a point of $X$, and $X\oplus Y$ their topological sum. Then
$GL(X\oplus Y, x_0)=GL(X,x_{0})\oplus L(Y)$.
\end{coro}
Let us show a more explicit relationship between $L(X)$ and $GL(X,x_0)$.
Consider the function $e_X\from X\to\reals$ such that $e_X(x)=1$ for all $x\in X$, and let
$(e_X)_\#\from L(X)\to \reals$ be the unique linear mapping that extends $e_X$. Denote the kernel of
$(e_X)_\#$ by $L^0(X)$. Observe that
\begin{equation*}
L^0(X)=\left\{ \displaystyle\sum_{i=1}^n\lambda_i\delta_{x_i}: n\in \mathbb{N},\quad \lambda_i\in \reals,
\quad x_i\in X, \quad 1\leq i\leq n, \quad\sum_{i=1}^n\lambda_i=0 \right\}.
\end{equation*}
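Note that, for any $x_0\in X$, every $\alpha\in L(X)$ decomposes as
$$\alpha=\big(\alpha-(e_X)_\#(\alpha)\delta_{x_0}\big)+(e_X)_\#(\alpha)\delta_{x_0},$$
where the first summand lies in $L^0(X)=\ker (e_X)_\#$; since $(e_X)_\#$ is continuous, this yields the topological decomposition $L(X)=L^0(X)\oplus \reals\delta_{x_0}$, which will be used below in the form $L(X)=GL(X)\oplus\reals$.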
We will say that a topological isomorphism $\varphi\from L(X)\to L(Y)$ is {\it special\/} if the composition
$(e_Y)_\#\circ \varphi \from L(X)\to \reals$ is constant on $X$.
As shown in \cite{\refOkuneva}, if there is a topological isomorphism between $L(X)$ and $L(Y)$, then
there is always a special topological isomorphism between them. Moreover, the following statements are easily
derived from \cite[Theorem 3.7]\refOkuneva:
\begin{prop}\label{Prop-especial}
Given a topological isomorphism $\psi\from L(X)\to L(Y)$, there is always a topological isomorphism
$\varphi\from L(X)\to L(Y)$ such that $(e_Y)_\#\circ \varphi =(e_X)_\#$.
\end{prop}
\begin{prop}
Let $x_0$ be a point of $X$. The spaces $L^0(X)$ and $GL(X,x_0)$ are topologically isomorphic.
\end{prop}
This proposition reflects the fact that the free locally convex space in the sense of Graev does not depend
(up to a topological isomorphism) on the choice of the distinguished point. For this reason, in what follows
the free locally convex space in the sense of Graev will be denoted just by $GL(X)$.
\begin{coro}\label{CoroMarkovGraev}
Let $X$ and $Y$ be topological spaces. The spaces $L(X)$ and $L(Y)$ are topologically isomorphic
if and only if $GL(X)$ and $GL(Y)$ are topologically isomorphic.
\end{coro}
\begin{proof}
If the spaces $L(X)$ and $L(Y)$ are topologically isomorphic, then by Proposition \ref{Prop-especial}
there is a special topological isomorphism $\varphi\from L(X)\rightarrow L(Y)$ such that
$(e_Y)_\#\circ \varphi = (e_X)_\#$. It follows that $\varphi|{L^0(X)}\from L^0(X)\to L^0(Y)$ is a topological
isomorphism, so the spaces $GL(X)$ and $GL(Y)$ are topologically isomorphic. On the other hand, if the spaces
$GL(X)$ and $GL(Y)$ are topologically isomorphic, then $L(X)=GL(X)\oplus\reals$ and
$L(Y)=GL(Y)\oplus \reals$ are also topologically isomorphic.
\end{proof}
Given the close relationship between the spaces $L(X)$ and $GL(X)$, we can define the relation of $L$-equivalence as follows: the spaces $X$ and $Y$ are called {\it $L$-equivalent\/} ($X\stackrel{\mathit{L}}{\sim} Y$) if their free locally convex
spaces $L(X)$ and $L(Y)$ are topologically isomorphic. Furthermore, following \cite\refOkunevb, we can extend this relation
to continuous mappings between topological spaces. We say that two continuous mappings $f\from X\to Y$ and
$g\from Z\to T$ are {\it $L$-equivalent\/} ($f\stackrel{\mathit{L}}{\sim} g$) if there are topological isomorphisms
$\varphi \from L(X)\to L(Z)$ and $\psi\from L(Y)\to L(T)$ such that $\psi \circ f_\#=g_\#\circ \varphi$.
Clearly, these are equivalence relations. Likewise, any topological property of spaces or mappings that is preserved
by the relation of $L$-equivalence will be called {\it $L$-invariant\/}. It is worth noting that the $L$-equivalence
of the identity mappings $\id_X\from X\to X$ and $\id_Y\from Y\to Y$ is the same as the $L$-equivalence of the spaces
$X$ and $Y$.
In a similar vein, we can define the free weak topological vector space $L_p(X)$ over a topological space
$X$ as a pair $(\delta_X, L_p(X))$ formed by a continuous injection $\delta_X\from X\to L_p(X)$ and a weak topological
vector space $L_p(X)$ so that for every continuous function $f\from X\to E$ where $E$ is a weak topological vector space,
there is a unique continuous linear mapping $f_\#\from L_p(X)\to E$ such that $f=f_\#\circ \delta_X$. In addition,
Theorem \ref{Fundamental}, as well as the rest of the subsequent statements, remains valid for these new spaces.
Naturally, this leads us to the concepts of $L_p$-equivalent spaces and mappings, and of $L_p$-invariant
properties. It should be noted that the concept of $L_p$-equivalence is often linked to the functor $\mathbf C_p$, in which
case we say that two spaces $X$ and $Y$ are $\ell$-equivalent if their spaces of continuous real functions $C_p(X)$ and $
C_p(Y)$ are topologically isomorphic. This should not worry us, since the spaces $C_p(X)$ and $L_p(X)$ are in duality,
so $C_p(X)$ is topologically isomorphic to $C_p(Y)$ if and only if $L_p(X)$ is topologically isomorphic to $L_p(Y)$.
Therefore, following the notation already established, the relation of $L_p$-equivalence is the same as the relation of
$\ell$-equivalence, and the properties that are $L_p$-invariant are $\ell$-invariant.
Finally, we will briefly describe the relation between the topologies of the spaces $L(X)$ and $L_p(X)$. First,
from the definitions of these objects, it is easy to see that the identity $(\id_X)_\#\from L(X)\to L_p(X)$ is
a continuous linear mapping; accordingly, the underlying sets of the spaces $L(X)$ and $L_p(X)$ are the same, and it is
also clear that the topology of $L_p(X)$ is the $*$-weak topology of $L(X)$. Second, there is a relatively simple way
to describe these topologies: the spaces $L(X)$ and $C(X)$ are in (algebraic) duality, and since any locally convex
topology on a space $E$ is the topology of uniform convergence on the equicontinuous sets of its topological dual
$E'$, the topology of $L(X)$ is the topology of uniform convergence on the equicontinuous pointwise bounded sets of
$C(X)$ \cite{\refFlood}. Similarly, since the topology of $L_p(X)$ is weak, and since we can embed
$L_p(X)$ in $C_p(C_p(X))$, whose topology is also weak, we get that the topology of $L_p(X)$ is the topology inherited
from $C_p(C_p(X))$. Thus, a local base at zero in $L(X)$ (respectively, $L_p(X)$) is the family of sets of the form
\begin{equation}
\nonumber V[0,F, \varepsilon]=\left\{ \alpha\in L(X) : |\alpha(f)|= |f_\#(\alpha)|<\varepsilon,
\quad f\in F \right\}.
\end{equation}
where $F\subset C(X)$ is an equicontinuous pointwise bounded set (respectively, a finite set) and $\varepsilon>0$.
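Explicitly, if $\alpha=\sum_{i=1}^{n}\lambda_i\delta_{x_i}$, then $f_\#(\alpha)=\sum_{i=1}^{n}\lambda_i f(x_i)$, so $\alpha\in V[0,F,\varepsilon]$ means that $\big|\sum_{i=1}^{n}\lambda_i f(x_i)\big|<\varepsilon$ for every $f\in F$.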
\section{$L$-retracts}
As mentioned at the beginning, we will see what useful properties the $\ell$-embedded sets have and then we will try to
translate them into the language of free locally convex spaces. We start with a definition: let $X$ be a topological space
and $Y$ a subset of $X$. An {\it extender} is a mapping $\phi\from C(Y)\to C(X)$ such that $\phi(f)|Y=f$ for every
$f\in C(Y)$. An extender may be linear or not, but what really matters to us is the situation in which it is continuous.
If there is a continuous (respectively, continuous linear) extender $\phi\from C_p(Y)\to C_p(X)$, we will say that $Y$ is
{\it $t$-embedded\/} (respectively, {\it $\ell$-embedded\/}) in $X$. A basic fact about $t$-embedded sets is that they always turn
out to be closed. Clearly, every $\ell$-embedded set is also $t$-embedded, and it is easy to verify that $X$ is always
$\ell$-embedded in $L_p(X)$. The following statement is also easy to prove.
\begin{prop}\label{Propl-encajaretract}
Let $Y$ be a subspace of $X$. The following statements are equivalent:
\begin{enumerate}
\item $Y$ is $\ell$-embedded in $X$;
\item There is a linear and continuous retraction $r\from L_p(X)\to L_p(Y)$;
\item There is a continuous function $f\from X\to L_p(Y)$ such that $f|Y=\delta_{Y}$;
\item Every continuous function from $Y$ to a weak topological linear space $E$ extends to a continuous function from
$X$ to $E$.
\end{enumerate}
\end{prop}
The previous proposition tells us that the $\ell$-embedded sets are simply the $L_p$-retracts; however, in order not to multiply the notation, we will continue to use the term $\ell$-embedded set. On the other hand, with respect to
free locally convex spaces, if $Y$ is any subspace of $X$, it is not true that $L(Y)$ is necessarily a locally convex
subspace of $L(X)$, even if $Y$ is closed in $X$. Therefore, if $Y$ is a subspace of $X$ such that $L(Y)$ is a locally convex
subspace of $L(X)$, we will say that $Y$ is {\it $L$-embedded\/} in $X$.
In comparison, if $Y\subset X$, we have that $Y$ is $P$-embedded in $X$ if every continuous pseudometric on $Y$ can be
extended to a continuous pseudometric on $X$. The concept of a $P$-embedded set has several characterizations; the
one given by Yamazaki \cite[Theorem 3.1]\refYamazaki{} is the one used in the proof of the following statement.
\begin{prop}
Let $Y$ be a subspace of $X$. The following statements are equivalent:
\begin{itemize}
\item $Y$ is $L$-embedded in $X$;
\item Any equicontinuous pointwise bounded subset of $C(Y)$ can be extended to an equicontinuous pointwise bounded
subset of $C(X)$;
\item $Y$ is $P$-embedded in $X$.
\end{itemize}
\end{prop}
Taking into account that the concept of an $L$-embedded set is related to simultaneous extension of equicontinuous
pointwise bounded sets, we can ask, of course, what relationship exists between the notions of an $\ell$-embedded set and
an $L$-embedded set.
\begin{ex}
{\sl An $L$-embedded set need not be $\ell$-embedded.}
\smallskip
Consider the space $X=\omega_1+1$ with the order topology, and let $Y$ be the dense subspace $\omega_1$. Recall that $Y$
is a pseudocompact non-compact space, and that $X$ is the Stone-\v Cech compactification of $Y$. Since the square of $Y$ is
pseudocompact, $X^2$ is the Stone-\v Cech compactification of $Y^2$, so $Y$ is $P$-embedded in $X$, that is,
$Y$ is $L$-embedded in $X$. Since $Y$ is not a closed set in $X$, $Y$ cannot be $\ell$-embedded.
\end{ex}
\begin{ex}\label{l no L}
{\sl An $\ell$-embedded set need not be $L$-embedded.}
\smallskip
Let $Y$ be an uncountable discrete space, and let $X=L_p(Y)$. Then $Y$ is $\ell$-embedded in $X$,
and $X$ has the Souslin property. By \cite[Theorem 1.2]\refHoshina{}, $Y$ cannot be $P$-embedded in
$X$, and hence $Y$ is not $L$-embedded in $X$.
\end{ex}
As we see, the $\ell$-embedded sets and free locally convex spaces do not have a direct relationship; this is
another reason to study the $L$-retracts, because these have all the good properties of the $\ell$-embedded and
$L$-embedded sets. As we will see, this combination of concepts improves their properties.
\begin{prop}\label{PropFL-encajaimpliLlB-encaja}
Every $L$-retract is an $L$-embedded and $\ell$-embedded set. In particular, every $L$-retract is a closed set.
\end{prop}
We still do not know if the converse of the previous proposition holds, that is, whether every $L$-embedded
and $\ell$-embedded set is an $L$-retract. We only can guarantee the following.
\begin{theorem}\label{Teopreservadecuad}
Let $Y$ be a subspace of $X$. $Y$ is an $L$-retract of $X$ if and only if there is a continuous linear extender
$\phi\from C_p(Y)\to C_{p}(X)$ such that if $B\subset C(Y)$ is an equicontinuous pointwise bounded set,
then $\phi(B)$ is also an equicontinuous pointwise bounded set.
\end{theorem}
\begin{proof}
Suppose that $Y$ is an $L$-retract of $X$. Then there is a continuous linear retraction $r\from L(X)\to L(Y)$.
Define $\phi\from C_p(Y)\to C_p(X)$ by $\phi(f)= (f_\#\circ r)|X$, where
$f_\#\from L(Y)\to\reals$ is the linear extension of the function $f$ to $L(Y)$; then $\phi$ is a continuous linear extender.
Let $B\subset C(Y)$ be an equicontinuous pointwise bounded set; let us verify that the set
$\phi(B)=\{f_\# \circ r : f\in B \}$ is equicontinuous and pointwise bounded in $C(X)$.
By the definition of equicontinuity in a topological linear space \cite{\refSchaefer}, it suffices to note that given
an $\varepsilon>0$, the set
\begin{equation*}
\bigcap_{f\in B} \left(f_\#\circ r\right)^{-1} (-\varepsilon,\varepsilon)
=r^{-1} \left( \bigcap_{f\in B} f^{-1}_\# (-\varepsilon,\varepsilon) \right)
\end{equation*}
is a neighborhood of zero. Thus, $\phi(B)$ is an equicontinuous pointwise bounded subset of $C(X)$.
It only remains to prove that if such a continuous linear extender exists, then $Y$ is an $L$-retract of $X$.
Define $q\from X\to L(Y)$ by $q(x)=\delta_x\circ\phi$ and let $r\from L(X)\to L(Y)$ be the linear extension of $q$. Note that
$q(x)$ is a continuous linear function on $C_p(Y)$, so $q(x)\in L_p(Y)$; thus, $q(x)$ is also an element of $L(Y)$, so $q$
is well-defined. Furthermore, for $x\in Y$ we have $q(x)(f)=\phi(f)(x)=f(x)$ because $\phi$ is an extender, so the
restriction $r|Y$ coincides with Dirac's embedding of $Y$ in $L(Y)$ and $r$ is a retraction. Thus, it remains only to
verify that $r$ is continuous.
Let $U=U[0,A,\varepsilon]$ be a neighborhood of zero in $L(Y)$, where $A\subset C(Y)$
is an equicontinuous pointwise bounded set and $\varepsilon>0$. Observe that $f_\#\circ r=(\phi(f))_\#$ for every
$f\in C(Y)$, since both sides are linear and coincide with $\phi(f)$ on $X$. Since $\phi(A)$ is an equicontinuous
pointwise bounded subset of $C(X)$, the set $V=V[0,\phi(A), \varepsilon]$ is a neighborhood of zero in $L(X)$, and
$r(V)\subset U$. As $r$ is linear, this shows that $r$ is continuous (and hence so is $q=r\circ \delta_X$),
so $Y$ is an $L$-retract of $X$.
\end{proof}
If $\varphi\from C_p(Y)\to C_p(X)$ is a continuous linear mapping such that for every equicontinuous pointwise bounded set
$A$ in $C(Y)$ the image $\varphi(A)$ is an equicontinuous pointwise bounded set in $C(X)$, we will say that $\varphi$
{\it preserves equicontinuous pointwise bounded sets}.
\begin{coro}
The spaces $X$ and $Y$ are $L$-equivalent if and only if there is a topological isomorphism
$\varphi\from C_p(X)\to C_p(Y)$ such that both $\varphi$ and $ \varphi^{-1}$ preserve equicontinuous pointwise bounded sets.
\end{coro}
\begin{proof}
First let us suppose that $X$ and $Y$ are $L$-equivalent, that is, there is a topological isomorphism
$\psi\from L(X)\to L(Y)$. Consider the mapping $\varphi \from C_p(X)\to C_p(Y)$ defined by the rule
$\varphi(f)=f_\# \circ \psi^{-1} \circ \delta_Y$. It is clear that $\varphi$ is continuous, linear and has
the inverse topological isomorphism $\varphi^{-1}(g)= g_\# \circ \psi \circ \delta_X$. It remains to show that given
an $\varepsilon>0$ and an equicontinuous pointwise bounded set $A\subset C(X)$, the set
\begin{equation*}
\bigcap_{f\in A} \left(f_\# \circ \psi^{-1} \right)^{-1} (-\varepsilon,\varepsilon)
=\psi \left(\bigcap_{f\in A} {f}^{-1}_\# (-\varepsilon,\varepsilon) \right)
\end{equation*}
is a neighborhood of zero, but this is straightforward.
Conversely, if there is a topological isomorphism $\varphi\from C_p(X)\to C_p(Y)$ such that both
$\varphi$ and $\varphi^{-1}$ preserve equicontinuous pointwise bounded sets, we can consider the map
$\psi\from L(X)\to L(Y)$ defined by $\psi(\alpha)=\alpha\circ\varphi^{-1}$. Note that $\alpha\circ \varphi^{-1}$
is a continuous linear function on $C_p(Y)$, so $\alpha\circ \varphi^{-1}$ is in $L_p(Y)$, and therefore in $L(Y)$.
Of course, $\psi$ has an inverse topological isomorphism given by $\psi^{-1}(\beta)=\beta \circ \varphi$.
Since both $\varphi$ and $\varphi^{-1}$ preserve equicontinuous pointwise bounded sets, both $\psi$ and $\psi^{-1}$
are continuous.
\end{proof}
The following statement only reinforces the known fact that in the class of $b_f$-spaces (this property is
$\ell$-invariant), if two spaces are $\ell$-equivalent, then they are $L$-equivalent. Recall that a function
$f\from X\to\reals$ is {\it $b$-continuous} if for every bounded set $A\subset X$ there is a continuous function
$g\from X\to\reals$ such that $g|A=f|A$. A space $X$ is called a {\it $b_f$-space} if every $b$-continuous real function
is continuous. The class of $b_f$-spaces is larger than the class of $k$-spaces. Moreover, if $X$ is a $b_f$-space,
a set $B\subset C_b(X)$ is compact if and only if $B$ is closed, equicontinuous and pointwise bounded \cite{\refUspenskii}.
\begin{coro}
Let $X$ and $Y$ be two $b_f$-spaces that are $\ell$-equivalent. Then $X$ and $Y$ are $L$-equivalent.
\end{coro}
\begin{proof}
Let $\varphi\from C_p(X)\to C_p(Y)$ be a topological isomorphism. It is not difficult to see that
$\varphi\from C_b(X)\to C_b(Y)$ is a topological isomorphism ($C_b(X)$ is the space $C(X)$ endowed with the topology of
uniform convergence on the bounded sets of $X$). Now, let us take a set $A\subset C_p(X)$ which is equicontinuous and
pointwise bounded. Since $X$ is a $b_f$-space, we have that $[A]_b$, the closure of $A$ in $C_b(X)$, is compact.
Hence, $\varphi([A]_b)$ is compact in $C_b(Y)$ and is therefore equicontinuous and pointwise bounded, and so is
$\varphi(A)\subset \varphi([A]_b)$; that is, $\varphi$ preserves equicontinuous pointwise bounded sets (note that
$[A]_b=[A]_p$, the closure in $C_{p}(X)$).
\end{proof}
\begin{ex}\rm
It is known that if $X$ is an uncountable discrete space, then the spaces $L_p(X)$ and $L_p(X)\oplus X$ are
$\ell$-equivalent and they are not $L$-equivalent, that is, there is no topological isomorphism between $C_p(L_p(X))$ and
$C_p(L_p(X) \oplus X)$ that preserves equicontinuous pointwise bounded sets. On the other hand, each topological
isomorphism $\varphi\from C_p(X)\to C_p(Y)$, where $X$ and $Y$ are compact spaces, preserves equicontinuous pointwise
bounded sets.
\end{ex}
Returning to the consequences of Theorem \ref{Teopreservadecuad}, we have the following statement.
\begin{coro}\label{Criterio_L-retracto}
The following assertions are equivalent:
\begin{enumerate}
\item $Y$ is an $L$-retract of $X$;
\item There is a continuous linear retraction $r\from L(X)\to L(Y)$;
\item There is a continuous linear extender $\varphi\from C_p(Y) \to C_p(X)$ such that $\varphi$ preserves
equicontinuous pointwise bounded sets;
\item Every continuous function from $Y$ to a locally convex space $E$ extends to a continuous function from $X$ to $E$.
\end{enumerate}
\end{coro}
From this proposition it follows immediately that, in the same way that $X$ is $\ell$-embedded in $L_p(X)$
($X$ is an $\ell$-retract of $L_p(X)$), $X$ is an $L$-retract of $L(X)$. Also, note that in view of
Example \ref{l no L}, $X$ is not always an $L$-retract of $L_{p}(X)$.
It is time to apply our results. First, based on Dugundji's extension theorem we have the following:
\begin{theorem}\label{TeoDugundji}
Let $X$ be a metric space and $Y$ a subspace of $X$. The following statements are equivalent:
\begin{itemize}
\item $Y$ is a closed subset of $X$;
\item $Y$ is an $L$-retract of $X$;
\item $Y$ is $\ell$-embedded in $X$.
\end{itemize}
\end{theorem}
\begin{proof}
Let $Y$ be a closed subset of the metric space $X$ and $\delta_Y\from Y\to L(Y)$ Dirac's embedding of $Y$ in
$L(Y)$. Applying Dugundji's extension theorem we get a continuous function $f\from X\to L(Y)$ such that
$f|Y=\delta_{Y}$; hence, $Y$ is an $L$-retract of $X$. The other implications are clear.
\end{proof}
Dugundji's extension theorem has been generalized in several ways; specifically, Borges generalized it to
stratifiable spaces, and Stares did the same for the decreasing $(G)$ spaces (in the sense of \cite{\refCollins}).
On the other hand, note that each stratifiable space is a decreasing $(G)$ space, and each decreasing $(G)$ space is
hereditarily paracompact, so we could ask ourselves whether, for hereditarily paracompact spaces, it is true that every
closed set is an $L$-retract. The answer is ``no''.
\begin{ex}
{\sl There is a hereditarily paracompact space $X$ and a closed set $Y$ in $X$ such that $Y$ is not an $L$-retract of $X$.}
\smallskip
{\rm Let $X$ be the Michael line (\cite[Example 5.1.32]\refEngelking). In this space, the set $Y=\mathbb Q$ of rational numbers
is a closed $P$-embedded set. On the other hand, consider the set $\mathbb P$ of irrational numbers with the topology inherited from
the Euclidean metric; the space $C_k(\mathbb{P})$ of continuous functions with the compact-open topology is a
locally convex space. From \cite{\refSennott} we obtain a continuous function
$f\from \mathbb Q \to C_k(\mathbb P)$ that has no continuous extension to $X$, namely, the function
$f(x)(y)=\frac{1}{x-y}$, where $x\in \mathbb Q$ and $y\in \mathbb P$. This verifies that $Y$ is not an $L$-retract of
$X$.}
\end{ex}
The previous example shows that, in general, we must impose stronger conditions on the subset $Y$ to ensure that
$Y$ is an $L$-retract of $X$. For instance, we will see that some of them involve metrizability as an additional condition.
A set $A\subset X$ is called {\it strongly discrete} if there is a discrete family $\{U_a: a\in A\}$ of disjoint open sets
in $X$ such that $a\in U_a$ for every $a\in A$. Taking into account the final observation of \cite{\refMichael} we easily get
the following.
\begin{coro}
Let $Y$ be a subspace of $X$. Then
\begin{enumerate}
\item If $X$ is paracompact and $Y$ is closed and metrizable, then $Y$ is an $L$-retract of $X$;
\item If $X$ is normal and $Y$ is closed, metrizable and separable, then $Y$ is an $L$-retract of $X$;
\item If $X$ is Tychonoff and $Y$ is compact and metrizable, then $Y$ is an $L$-retract of $X$;
\item If $X$ is Tychonoff and $Y$ is strongly discrete, then $Y$ is an $L$-retract of $X$.
\end{enumerate}
\end{coro}
\begin{proof}
The first three statements are obvious. In \cite{\refArhangel} it was shown that if $Y$ is a strongly discrete subspace,
then $Y$ is $\ell$-embedded in $X$. We will reproduce the original proof, emphasizing that the extender defined there
preserves equicontinuous pointwise bounded sets. Let $\mathcal U=\{U_y : y\in Y\}$ be a discrete family of disjoint
open sets in $X$ such that $y\in U_{y}$ for every $y\in Y$; also, for each $y\in Y$ let $h_y\in C(X)$ be a function
such that $h_y(X)\subset [0,1]$, $h_y(y)=1$ and $h_y(X\setminus U_y)\subset \{0\}$. Define the function
$\psi(x)=\sum_{y\in Y}h_y(x)$. Since the family $\mathcal U$ is discrete, the function $\psi$ is defined on $X$ and is
continuous. Hence, the linear extender $\phi\from C_p(Y)\to C_{p}(X)$ defined by the rule
$\phi(f)=\sum_{y\in Y}f(y)\cdot h_y$ is continuous.
Let $\mathcal F\subset C_p(Y)$ be an equicontinuous and pointwise bounded family of functions. We will verify that
$\phi(\mathcal F)=\{\phi(f): f\in \mathcal F\}$ is equicontinuous and pointwise bounded. For each $y\in Y$,
let $M_y>0$ be such that $\{f(y):f\in \mathcal F\}\subset [-M_y, M_y]$. Given an $\varepsilon>0$ and $x\in X$,
if $x$ has a neighborhood disjoint from $\bigcup\mathcal U$, then $\phi(f)$ vanishes on that neighborhood for every
$f\in C_{p}(Y)$, and equicontinuity at $x$ is trivial. Otherwise, there is a neighborhood $U$ of $x$ such that $U\cap U_y\neq\emptyset$ for
a unique $y\in Y$. Put $V=h_y^{-1}(h_{y}(x)-\varepsilon/M_y, h_y(x)+\varepsilon/M_y)$ and $W=U\cap V$. Then $W$ is an
open neighborhood of $x$, and for each $z\in W$ and $f\in\mathcal F$ we have
\begin{equation*}
\left | \phi(f)(x)-\phi(f)(z) \right |=\left | f(y)\left( h_{y}(x)-h_{y}(z) \right) \right |\leq M_{y} \left | h_{y}(x)-h_{y}(z) \right |< M_{y}\cdot \frac{\varepsilon}{M_{y}}=\varepsilon.
\end{equation*}
Thus, $\phi(\mathcal F)$ is an equicontinuous set, and it is clearly pointwise bounded.
\end{proof}
Note that, although in the class of metric spaces the $L$-retracts and the $\ell$-embedded sets are the same,
in the generalizations of Dugundji's extension theorem we cannot, in general, weaken the condition that $Y$ is an
$L$-retract to the condition that $Y$ is an $\ell$-embedded set.
\begin{ex}
\rm Let $Y$ be the discrete space of cardinality $\omega_1$ and $X=L_{p}(Y)$. It is clear that $Y$ is $\ell$-embedded in $X$.
The function $\delta_Y\from Y\to L(Y)$ has no continuous extension to $X$, because otherwise we would
have that $Y$ is an $L$-retract of $X$, which is false ($Y$ is not $L$-embedded in $L_p(Y)$).
\smallskip
Even if both $X$ and $Y$ are compact spaces, $Y$ need not be an $L$-retract of $X$. Indeed, let $X=\beta \mathbb{N}$ and
$Y=\beta \mathbb{N}\setminus \mathbb{N}$; then $Y$ is not $t$-embedded in $X$, that is, there is no continuous extender
$\varphi \from C_p(Y)\to C_p(X)$ \cite\refArhangel.
\end{ex}
\section{A method for constructing examples of $L$-equivalent spaces}
Now we will concentrate on finding a method that generates examples of $L$-equivalent spaces. Clearly, the method described
by Okunev in \cite[Theorem 2.4]\refOkunevb{} already generates examples of $L$-equivalent spaces; however, the notion of a retract
is quite restrictive, and, as is easy to see, every retract is an $L$-retract. Thus, we will show that the notion of an $L$-retract is
sufficient for establishing our method.
Let $K_1$ and $K_2$ be two $L$-retracts of a space $X$. We will say that $K_1$ and $K_2$ are {\it parallel\/} if there are
continuous linear retractions $r_1\from L(X)\to L(K_1)$ and $r_2\from L(X)\to L(K_2)$ such that $r_1\circ r_2=r_1$ and
$r_2\circ r_1=r_2$.
\begin{prop}
$K_1$ and $K_2$ are parallel $L$-retracts of $X$ if and only if there is a continuous linear retraction
$r_1\from L(X)\to L(K_1)$ such that the restriction $r_1|L(K_2)$ is a topological isomorphism from $L(K_2)$ onto $L(K_1)$.
In particular, $K_1$ is $L$-equivalent to $K_2$.
\end{prop}
\begin{proof}
Suppose $K_{1}$ and $K_{2}$ are parallel $L$-retracts of $X$. Let $r_1\from L(X)\to L(K_1)$ and $r_2\from L(X)\to L(K_2)$ be
continuous linear retractions such that $r_1\circ r_2=r_1$ and $r_2\circ r_1=r_2$.
Then $i=r_1|L(K_2)$ is a topological isomorphism of $L(K_2)$ onto $L(K_1)$ with the inverse $j=r_2|L(K_1)$.
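Indeed, for $\alpha\in L(K_1)$ we have $i(j(\alpha))=r_1(r_2(\alpha))=(r_1\circ r_2)(\alpha)=r_1(\alpha)=\alpha$, and, symmetrically, $j(i(\beta))=\beta$ for every $\beta\in L(K_2)$.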
Conversely, if there is a continuous linear retraction $r_1\from L(X)\to L(K_1)$ such that the restriction
$i=r_1|L(K_2)$ is a topological isomorphism from $L(K_2)$ onto $L(K_1)$, let $j=i^{-1}$ and put $r_2=j\circ r_1$.
Then $r_2$ is a continuous linear retraction from $L(X)$ to $L(K_2)$, $r_1\circ r_2=r_1$, and $r_2\circ r_1=r_2$.
\end{proof}
Recall that a continuous mapping $p\from X\to Y$ is called {\it $\reals$-quotient\/} if $p(X)=Y$ and whenever
$f$ is a real function on $Y$ such that the composition $f\circ p\from X\to \reals$ is continuous, $f$ is continuous
\cite{\refKarnik}. The following statement is Proposition 1.1 in \cite{\refOkunevb}.
\begin{prop}\label{rquot-tych}
If $p\from X\to Y$ is an $\reals$-quotient mapping, $Z$ is a completely regular space, and $f\from Y\to Z$ is a function
such that the composition $f\circ p$ is continuous, then $f$ is continuous.
\end{prop}
\begin{prop} \label{r-quot-open}
A mapping $p\from X\to Y$ is $\reals$-quotient if and only if its extension $p_\#\from L(X)\to L(Y)$ is open.
\end{prop}
\begin{proof} Suppose that $p_\#$ is open, and let $f\from Y\to \reals$ be a function such that $f\circ p$ is continuous.
Let $p_\#\from L(X)\to L(Y)$ and $f_\#\from L(Y)\to \reals$ be the linear extensions of $p$ and $f$. Then
$f_\#\circ p_\#=(f\circ p)_\#$ is continuous, and since $p_\#$ is open, $f_\#$ is continuous. Thus, $f=f_\#|Y$ is continuous.
\smallskip
Conversely, if $p$ is $\reals$-quotient, then, by continuity, the subspace $H=\ker p_\#$ is closed. Let $L=L(X)/H$ be the quotient space.
The space $L$ is locally convex and Hausdorff, hence Tychonoff. Furthermore, there is a continuous bijection $i\from L\to L(Y)$
such that $p_\#=i\circ \pi$ where $\pi\from L(X)\to L$ is the natural projection. Let us verify that the mapping
$j=i^{-1}\from L(Y)\to L$ is continuous. It suffices to verify that the restriction $f=j|Y$ is continuous. We have
$f\circ p=(j\circ p_\#)|X=\pi|X$, so $f\circ p$ is continuous; since $p$ is $\reals$-quotient, it follows that $f$ is
continuous. Thus, $j$ is continuous, so $i$ is a topological isomorphism, and since $\pi$ is open, $p_\#$ is open.
\end{proof}
There is a simple characterization of $L$-equivalence of $\reals$-quotient mappings.
\begin{prop}\label{criterio}
Two $\reals$-quotient mappings $f\from X\to Y$ and $g\from Z\to T$ are $L$-equivalent if and only if
there is a topological isomorphism $i\from L(X)\to L(Z)$ such that $i(\ker f_\#)=\ker g_\#$.
\end{prop}
\begin{proof}
If $f$ and $g$ are $L$-equivalent, then there are topological isomorphisms $i\from L(X)\to L(Z)$ and
$j\from L(Y)\to L(T)$ such that $j\circ f_\#=g_\#\circ i$. Let $A=\ker f_\#$ and $B=\ker g_\#$. Then
$j\circ f_\#(A)=\{0\}$, and from $j\circ f_\#=g_\# \circ i$ we get $\{0\}=g_\#\circ i(A)=g_\#(i(A))$, that is,
$i(A)\subset B$. From $g_\#=j\circ f_\#\circ i^{-1}$, we obtain that $\{0\}=g_\#(B)=j\circ f_\#\circ i^{-1}(B)$;
since $j$ is bijective, we have $f_\#\circ i^{-1}(B)=\{0\}$, hence $i^{-1}(B)\subset A$, and this is enough
to establish the equality $i(A)=B$.
\smallskip
Conversely, suppose that there is a topological isomorphism $i\from L(X)\to L(Z)$ such that $i(\ker f_\#)=\ker g_\#$.
Then there is an (algebraic) isomorphism $j\from L(Y)=L(X)/\ker f_\#\to L(T)=L(Z)/\ker g_\#$ such that
$j\circ f_\#=g_\#\circ i$. Since $g_\#$ and $i$ are continuous and $f_\#$ is open, $j$ is continuous.
Similarly, $j^{-1}\circ g_\#=f_\#\circ i^{-1}$, $f_\#$ and $i^{-1}$ are continuous, and $g_\#$ is open, so $j^{-1}$
is continuous. Thus, $i$ and $j$ are topological isomorphisms as required in the definition of $L$-equivalent mappings.
\end{proof}
Continuing with the $\reals$-quotient mappings, we will define the $\reals$-quotient spaces. Let $p\from X\to Y$
be a mapping of $X$ onto a set $Y$. It is known that there is a unique completely regular topology on the set $Y$ that makes $p$ an
$\reals$-quotient mapping (this topology may be described as the weakest topology with respect to which all real-valued
functions on $Y$ with continuous compositions with $p$ are continuous). This topology is called the {\it $\reals$-quotient
topology}, and $Y$ endowed with this topology is the {\it $\reals$-quotient space with respect to the mapping $p$}
(or simply the {\it $\reals$-quotient space} if the mapping $p$ is clear from the context).
In this situation we say that $p$ is {\it the natural mapping}.
Now, if $X$ is a space and $K$ is a closed set in $X$, let us denote $X/K=(X\setminus K) \cup \{K\}$, and let
$p(x)=x$ for $x\in X\setminus K$, and $p(x)=K$ for each $x\in K$. Therefore, there is only one completely regular topology
on $X/K$ that makes it the $\reals$-quotient space with respect to $p$. It is shown in \cite{\refOkunevb} that this space
is Tychonoff. Also note that $p|(X\setminus K)\from X\setminus K \to X/K\setminus p(K)$ is a homeomorphism
\cite[Corollary 1.7]\refOkunevb.
With all this we can establish our method:
\begin{theorem}\label{Principal}
If $K_1$ and $K_2$ are parallel $L$-retracts of $X$, then the $\reals$-quotient mappings $p_1\from X\to X/K_1$ and
$p_2\from X\to X/K_2$ are $L$-equivalent. In particular, the spaces $X/K_1$ and $X/K_2$ are $L$-equivalent.
\end{theorem}
\begin{proof}
Let $r_1\from L(X)\to L(K_1)$ and $r_2\from L(X)\to L(K_2)$ be parallel $L$-retractions. We define a mapping
$i\from L(X)\to L(X)$ by the rule $i(\alpha)=r_1(\alpha)+r_2(\alpha)-\alpha$ for all $\alpha\in L(X)$. Clearly,
$i$ is linear and continuous. Moreover, $i\circ i(\alpha)=\alpha$, that is, $i$ is its own inverse, so $i$ is a topological
isomorphism. Let us put $s_2=r_2|L(K_1)$; then $s_2$ is a topological isomorphism such that $s_2\circ r_1=r_2\circ i$.
It follows that $i(L(K_1))=L(K_2)$ and that $i(\ker r_1)=\ker r_2$.
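To spell out the involution property: using the idempotence of $r_1$ and $r_2$ together with $r_1\circ r_2=r_1$ and $r_2\circ r_1=r_2$, we get $r_1(i(\alpha))=r_1(\alpha)+r_1(\alpha)-r_1(\alpha)=r_1(\alpha)$ and, similarly, $r_2(i(\alpha))=r_2(\alpha)$, whence
$$i(i(\alpha))=r_1(i(\alpha))+r_2(i(\alpha))-i(\alpha)=r_1(\alpha)+r_2(\alpha)-\big(r_1(\alpha)+r_2(\alpha)-\alpha\big)=\alpha.$$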
Clearly, $\ker (p_i)_\#=L^0(K_i)=\ker (e_{K_i})_\#$, $i=1,2$. Since $K_1$ and $K_2$ are $L$-equivalent, there is
a special topological isomorphism $k\from L(K_1)\to L(K_2)$ such that $(e_{K_2})_\#\circ k=(e_{K_1})_\#$. Let
$g=k\times j$, where $j=i|\ker r_1$. The mappings $\eta_i\from L(X)\to L(K_i)\times \ker r_i$,
$i=1,2$, defined by $\eta_i(\alpha)=(r_i(\alpha),\alpha-r_i(\alpha))$ are topological isomorphisms, and their inverses
$\xi_i\from L(K_i)\times \ker r_i\to L(X)$ are given by $\xi_i(\alpha,\beta)=\alpha+\beta$, $i=1,2$. Defining a mapping $\psi$
by
\begin{equation*}
\psi(\alpha)=\xi_2\circ g\circ \eta_1(\alpha)=\xi_2\circ g(r_1(\alpha),\alpha-r_1(\alpha))
=\xi_2(k(r_1(\alpha)), j(\alpha-r_1(\alpha)))=k(r_1(\alpha))+j(\alpha-r_1(\alpha)),
\end{equation*}
we obtain a topological isomorphism such that $\psi(L^0(K_1))=L^0(K_2)$. Thus, by Proposition \ref{criterio},
$p_1$ is $L$-equivalent to $p_2$.
\end{proof}
\begin{coro}\label{Corolario20}
Let $X$ be a topological space and $K\subset X$ an $L$-retract of $X$. Then the spaces $X^+$ and $X/K\oplus K$ are
$L$-equivalent.
\end{coro}
\begin{proof}
Let $K'$ be a homeomorphic copy of $K$ disjoint from $X$ and $\varphi\from K\to K'$ a homeomorphism.
Put $Z=X\oplus K'$. Then $L(Z)$ is topologically isomorphic to $L(X)\oplus L(K')$.
Let $r\from L(X)\to L(K)$ be an $L$-retraction. Define $r_1\from L(Z)\to L(K)$ by putting $r_1|L(X)=r$ and
$r_1|L(K')=\varphi_\#^{-1}$ and $r_2\from L(Z)\to L(K')$ by putting $r_2|L(X)=\varphi_\#\circ r$ and
$r_2| L(K')=\id_{L(K')}$. Then $(r_1\circ r_1)| L(X)=r_1\circ r=r=r_1|L(X)$ and $(r_1\circ r_1)| L(K')=
r_1\circ \varphi_\#^{-1}=\varphi_\#^{-1}$ (because $r_1|L(K)$ is the identity), so $(r_1\circ r_1)|L(K')=r_1|L(K')$.
We conclude that $r_1\circ r_1=r_1$, so $r_1$ is a retraction. Similarly, $(r_2\circ r_2)|L(X)=r_2\circ \varphi_\#\circ r|L(X)
=\varphi_\# \circ r=r_2|L(X)$, because $r_2|L(K')$ is the identity, and $(r_2\circ r_2)| L(K')=r_2|L(K')$. Thus,
$r_2\circ r_2=r_2$, and $r_2$ is a retraction.
Furthermore, $(r_1\circ r_2)|L(X)=r_1\circ \varphi_\#\circ r=\varphi_\#^{-1}\circ\varphi_\#\circ r=r=r_1|L(X)$ and
$(r_1\circ r_2)|L(K')=r_1|L(K')$ because $r_2|L(K')$ is the identity. Thus, $r_1\circ r_2=r_1$. Similarly,
$(r_2\circ r_1)|L(X)=r_2\circ r=\varphi_\#\circ r=r_2|L(X)$ and
$(r_2\circ r_1)|L(K')=(r_2|L(K))\circ\varphi_\#^{-1}=((\varphi_\#\circ r)|L(K))\circ\varphi_\#^{-1}=
\varphi_\#\circ\varphi_\#^{-1}=\id_{L(K')}=r_2|L(K')$, so $r_2\circ r_1=r_2$. Thus $r_1$ and $r_2$ are parallel
$L$-retractions. By Theorem \ref{Principal}, the spaces $Z/K$ and $Z/K'$ are $L$-equivalent. Clearly, $Z/K$ is homeomorphic
to $X/K\oplus K$ and $Z/K'$ is homeomorphic to $X^+$.
\end{proof}
Note that in the proof of Theorem \ref{Principal}, the fact that the $L$-retracts are parallel served to guarantee
the existence of a pair of topological isomorphisms $s_2$ and $i$ such that $s_2\circ r_1=r_2\circ i$.
Therefore, in the case that two sets $K_1$ and $K_2$ are $L$-retracts of $X$ and there are topological isomorphisms
$i\from L(X)\to L(X)$, $j\from L(K_1)\to L(K_2)$ and continuous linear retractions $r_1\from L(X)\to L(K_1)$ and
$r_2\from L(X)\to L(K_2)$ such that $j\circ r_1=r_2\circ i$ we will say that these sets are {\it equivalent $L$-retracts}.
\begin{prop}\label{PropGLigualkernel}
Let $r\from L(X)\to L(K)$ be a continuous linear retraction, where $K\subset X$. Then $L(X)$ is topologically isomorphic to
$GL(X/K)\times L(K)$, and $GL(X/K)$ is topologically isomorphic to $\ker r$.
\end{prop}
\begin{proof}
The first part follows from the fact that $X^{+}$ is $L$-equivalent to $X/K \oplus K$ (Corollary \ref{Corolario20});
therefore, applying Corollary \ref{CoroMarkovGraev} we obtain that $GL(X^{+})$ is topologically isomorphic to $GL(X/K\oplus K)$,
that is, $L(X)$ is topologically isomorphic to $GL(X/K)\oplus L(K)$ (Corollary \ref{Coro-GL y P}).
We will write $L\cong E$ if the topological linear spaces $L$ and $E$ are topologically isomorphic.
The second part is due to the observation that if $r\from L(X)\to L(K)$ is a continuous linear retraction, then
$L(X)\cong L(K)\times \ker r$. Thus, $L(K)\times \ker r \cong L(K)\oplus GL(X/K) \cong L(K)\times GL(X/K)$. To end the proof,
note that the function $\theta\from X\to X/K\subset GL(X/K)$ given by $\theta(x)=p(x)$ is $\reals$-quotient, so
$\theta_\#\from L(X)\to GL(X/K)$ is open and onto. Since $\ker \theta_\#=L(K)$, we have $L(X)/L(K)\cong GL(X/K)$.
On the other hand, the function $\psi\from L(X)\to \ker r$ given by $\psi(\alpha)=\alpha-r(\alpha)$ is linear, continuous,
open, and its kernel is $L(K)$. Thus, $L(X)/L(K)\cong \ker r$. We conclude that $GL(X/K)\cong \ker r$.
\end{proof}
In a way, given an $L$-retraction $r\from L(X)\to L(K)$, we can obtain enough information about $L(X)$ from $L(K)$.
As a corollary of the previous proposition, we obtain that $L(X)$ is topologically isomorphic
to $\ker p_\#\oplus \ker r \oplus \reals$, where $p_\#$ is the continuous linear extension of the natural mapping
$p\from X\to X/K$.
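Indeed, combining Proposition \ref{PropGLigualkernel} with the equality $\ker p_\#=L^0(K)$ and the decomposition $L(K)=L^0(K)\oplus\reals$, we get
$$L(X)\cong GL(X/K)\times L(K)\cong \ker r\times\big(L^0(K)\oplus\reals\big)\cong \ker p_\#\oplus \ker r\oplus\reals.$$
This motivates the following statements: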
\begin{prop}\label{p-equiv}
Let $K_1$ and $K_2$ be two $L$-retracts of $X$. If the natural mappings $p_1\from X\to X/K_1$ and $p_2\from X\to X/K_2$
are $L$-equivalent, then $K_1$ and $K_2$ are equivalent $L$-retracts.
\end{prop}
\begin{proof}
Since the natural mappings $p_1$ and $p_2$ are $L$-equivalent, there are topological isomorphisms $i\from L(X)\to L(X)$ and
$j\from L(X/K_1)\to L(X/K_2)$ such that $j\circ (p_1)_\#=(p_2)_\#\circ i$. In particular, $X/K_1$ is
$L$-equivalent to $X/K_2$, so by Proposition \ref{PropGLigualkernel} $\ker r_1$ is topologically isomorphic to $\ker r_2$; let us denote by $t$
such a topological isomorphism. Also, from $i(L^0(K_1))=i(\ker (p_1)_\#)=\ker (p_2)_\#=L^0(K_2)$, we obtain that $L(K_1)$ is
topologically isomorphic to $L(K_2)$; let $k$ be such an isomorphism. Then $w=k\times t$ is a topological isomorphism
between $L(K_1)\times \ker r_1$ and $L(K_2)\times \ker r_2$. Thus, we have a topological isomorphism $\varphi\from L(X)\to L(X)$
given by the formulas:
\begin{equation*}
\varphi(a)=\xi_2\circ w\circ \eta_1(a)=\xi_2\circ w(r_1(a),a-r_{1}(a))=\xi_2(k(r_1(a)), t(a-r_1(a)))=k(r_1(a))+t(a-r_1(a)).
\end{equation*}
Here $\eta_1$ and $\xi_2$ are as in the proof of Theorem \ref{Principal}. We quickly notice that
$\varphi(\ker r_1)=\ker r_2$ and $r_2\circ \varphi=k\circ r_1$; hence, putting $\psi=k$, we obtain a topological
isomorphism $\psi\from L(K_1)\to L(K_2)$ such that $\psi\circ r_1=r_2\circ \varphi$,
proving that $K_1$ and $K_2$ are equivalent $L$-retracts.
\end{proof}
\begin{coro}
Let $K_1$ and $K_2$ be $L$-retracts of $X$, and $p_1\from X\to X/K_1$, $p_2\from X\to X/K_2$ the corresponding natural
mappings. The following statements are equivalent:
\begin{enumerate}
\item $K_1$ and $K_2$ are equivalent $L$-retracts;
\item $p_1$ and $p_2$ are $L$-equivalent;
\item $K_1$ is $L$-equivalent to $K_2$, and $X/K_1$ is $L$-equivalent to $X/K_2$.
\end{enumerate}
\end{coro}
\begin{proof}
The equivalence between items 1 and 2 is obvious. That 1 implies 3 is easy to verify. Therefore, we will only prove
that statement 3 implies statement 1. First, by the hypothesis, we have $GL(X/K_1)\cong GL(X/K_2)$, and accordingly,
due to Proposition \ref{PropGLigualkernel} we have that $\ker r_1$ is topologically isomorphic to $\ker r_2$. Then, using the
technique described in the previous propositions, we obtain topological isomorphisms $i\from L(X)\to L(X)$ and
$j\from L(K_1)\to L(K_2)$ such that $i(\ker r_1)=\ker r_2$ and $j\circ r_1=r_2\circ i$.
It follows that $K_1$ and $K_2$ are equivalent $L$-retracts.
\end{proof}
\begin{coro}
Let $r_1\from X\to K_1$ and $r_2\from X\to K_2$ be retractions in $X$, and $p_1\from X\to X/K_1$, $p_2\from X\to X/K_2$
the natural mappings. The following statements are equivalent:
\begin{enumerate}
\item $r_1$ is $L$-equivalent to $r_2$;
\item $p_1$ is $L$-equivalent to $p_2$;
\item $K_1$ is $L$-equivalent to $K_2$, and $X/K_1$ is $L$-equivalent to $X/K_2$.
\end{enumerate}
\end{coro}
\begin{coro}
Two retractions to the same retract are $L$-equivalent.
\end{coro}
\begin{coro}
Let $X$ and $Y$ be two $L$-equivalent spaces, and let $K_1$ and $K_2$ be retracts of $X$ and $Y$, respectively,
which are $L$-equivalent and such that $X/K_1$ is $L$-equivalent to $Y/K_2$. Then any two retractions
$X\to K_1$ and $Y\to K_2$ are $L$-equivalent; moreover, the corresponding natural mappings are also $L$-equivalent.
\end{coro}
\begin{ex} \rm Consider the retractions $r_1, r_2\from [0,1]\to [0,1/2]$ defined by $r_1(x)=x$ if $x\in [0,1/2]$ and
$r_1(x)=1-x$ if $x\in [1/2,1]$, and $r_2(x)=x$ if $x\in [0,1/2]$ and $r_2(x)=1/2$ if $x\in [1/2,1]$. These retractions
are $L$-equivalent, $r_1$ is perfect and open, while $r_2$ is also perfect but not open.
\end{ex}
\begin{coro}
Being an open mapping is not preserved under the relation of $L$-equivalence, even within the class of perfect retractions.
\end{coro}
\begin{ex}\rm
In \cite[Theorem 4.2]\refSanchez{} it is shown that there is a weakly pseudocompact, not locally compact space
that has a single non-isolated point, of the form $K=[0,\omega_1]\cup \{A_\alpha: \alpha\in\omega_1\}$. Moreover,
it is proved that $K$ is $L$-equivalent to $Y=K_1\oplus K_2$, where $K_1=[0,\omega_1]$ and $K_2=(K\setminus K_1)\cup
\{\omega_1\}$. Let $Z$ be a pseudocompact space that contains $Y$ as a closed subspace. We consider the spaces
$X_1=K\oplus Z$ and $X_2=Y\oplus Z$, and the retractions $r_1\from X_1 \to Z$ such that $r_1|K$ is the embedding of $K$
in $Z$ ($K$ can be seen as a subspace of $Y$) and $r_1|Z$ is the identity; $r_2\from X_2 \to Z$ such that $r_2|Y$ is the
embedding of $Y$ in $Z$ and $r_2|Z$ is the identity. These retractions are $L$-equivalent and finite-to-one,
but $r_1$ is perfect and $r_2$ is not (it is not closed).
\end{ex}
\begin{coro}
Being a closed mapping is not preserved under the relation of $L$-equivalence, even within the class of
finite-to-one retractions. In particular, being a perfect function is not $L$-invariant.
\end{coro}
\begin{ob}\rm
Note the following relationship: each pair of parallel retracts is a pair of parallel $L$-retracts, and therefore they are
equivalent $L$-retracts. On the other hand, we know that there are parallel $L$-retracts that are not parallel retracts;
in fact, if $K_1$ and $K_2$ are two $L$-equivalent spaces that are not homeomorphic, then $X=K_1\oplus K_2$ contains both
spaces as parallel $L$-retracts that clearly are not parallel retracts. However, the following question arises:
{\sl are two equivalent $L$-retracts always parallel $L$-retracts?}
\end{ob}
\bibliographystyle{abbrv}
\section{Introduction}
Affine Lagrangian (totally real or purely real)
submanifolds are
``maximally non-complex" submanifolds
in almost complex manifolds
defined by relaxing the Lagrangian condition (Definition \ref{def of aff Lag}).
The affine Lagrangian condition is an open condition
and hence there are many examples.
Borrelli \cite{Borrelli}
defined a canonical volume of an affine Lagrangian submanifold
called the $J$-volume.
He obtained the stability result for the $J$-volume
as in the Lagrangian case \cite{Chen}.
Lotay and Pacini \cite{LotayPacini} pointed out the importance of
affine Lagrangian submanifolds
in the study of geometric flows.
Opozda \cite{Opozda} showed that
the moduli space of (special) affine Lagrangian submanifolds
is a smooth Fr\'{e}chet manifold.
In this paper,
we study the odd dimensional analogue.
First, we introduce the notion of
affine Legendrian submanifolds
in Sasakian manifolds
and
define a canonical volume called the $\phi$-volume
as
odd dimensional analogues of affine Lagrangian geometry.
See Definitions \ref{def of aff Leg} and \ref{def of phi vol}.
Then we compute the first variation of the $\phi$-volume
and characterize
a critical point for the $\phi$-volume
by the vanishing of some vector field $H_{\phi}$ (Proposition \ref{1st var phi Vol}),
which is a generalization of the mean curvature vector (Remark \ref{MC in Leg case}).
We call an affine Legendrian submanifold {\bf $\phi$-minimal} if $H_{\phi} = 0$.
Then we compute the second variation of the $\phi$-volume
and obtain the following.
\begin{thm} \label{2nd var phi Vol}
Let $(M^{2n+1}, g, \eta, \xi, \phi)$ be a
$(2n+1)$-dimensional Sasakian manifold
and $\iota : L^{n} \hookrightarrow M$
be an affine Legendrian immersion
of a compact oriented $n$-dimensional manifold $L$.
Let
$\iota_{t} : L \hookrightarrow M$
be a one-parameter family of affine Legendrian immersions
satisfying $\iota_{0} = \iota$.
Suppose that
$\frac{\partial \iota_{t}}{\partial t}|_{t=0} = Z
= \phi Y + f \xi$,
where $Y \in \mathfrak{X}(L)$ is a vector field on $L$
and $f \in C^{\infty}(L)$ is a smooth function.
Then we have
\begin{align*}
\left. \frac{d^{2}}{d t^{2}}
\int_{L} {\rm vol}_{\phi} [\iota_{t}] \right|_{t=0}
= &
\int_{L}
\left(
(2n+2) \eta (Y)^{2} - 2 g(Y, Y) - {\rm Ric}(Y, Y) \right. \\
&
\left.
- g(\pi_{L}[Z, Y], H_{\phi})
+ g(Y, H_{\phi})^{2}
+
\left (
\frac{{\rm div}(\rho_{\phi} [\iota] Y)}{\rho_{\phi} [\iota]}
\right)^{2}
\right)
{\rm vol}_{\phi}[\iota],
\end{align*}
where
${\rm vol}_{\phi}[\iota]$ is the $\phi$-volume form of $\iota$ given in Definition \ref{def of phi vol},
${\rm Ric}$ is the Ricci curvature of $(M,g)$,
$\pi_{L}: \iota^{*}TM \rightarrow \iota_{*} TL$
is the canonical projection given in (\ref{canonical proj}),
$\rho_{\phi} [\iota]$ is the function on $L$ given in Definition \ref{def of phi vol} and
$H_{\phi}$ is the vector field on $L$ given in Definition \ref{def of H phi}.
\end{thm}
\begin{rem}
For Legendrian submanifolds,
the $\phi$-volume agrees with the standard Riemannian volume (Lemma \ref{equality rho phi}).
When $\iota$ is minimal Legendrian and all of $\iota_{t}$'s are Legendrian,
Theorem \ref{2nd var phi Vol} agrees with \cite[Theorem 1.1]{Ono}.
When $\iota$ is Legendrian-minimal Legendrian
and all of $\iota_{t}$'s are Legendrian,
Theorem \ref{2nd var phi Vol} agrees with \cite[Theorem 1.1]{Kajigaya}.
See Remark \ref{relation Lmin}.
\end{rem}
Following the Riemannian case,
we call a $\phi$-minimal affine Legendrian submanifold {\bf $\phi$-stable}
if the second variation of the $\phi$-volume is nonnegative.
Now, suppose that
a $(2n+1)$-dimensional Sasakian manifold $(M^{2n+1}, g, \eta, \xi, \phi)$
is $\eta$-Einstein with the
$\eta$-Ricci constant $A \in \mathbb{R}$.
(See Definition \ref{def of eta Einstein}.)
Then we obtain the following.
\begin{thm} \label{thm stablity}
Let $(M^{2n+1}, g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional
$\eta$-Einstein Sasakian manifold with the
$\eta$-Ricci constant $A \leq -2$.
Then any $\phi$-minimal affine Legendrian submanifold in $M$ is
$\phi$-stable.
\end{thm}
This is a generalization of \cite[Theorem 1.2]{Ono}.
In \cite{Ono}, further results were obtained
by restricting the variations of a minimal Legendrian submanifold to Legendrian variations.
In our case, since the affine Legendrian condition is an open condition,
any small variation is affine Legendrian.
Thus there is no analogue of these results.
Similarly, using the notion of convexity
in the space of affine Legendrian submanifolds (Definition \ref{def of convex}),
we easily see the following.
\begin{thm} \label{thm convexity}
Let $(M^{2n+1}, g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional
$\eta$-Einstein Sasakian manifold with the
$\eta$-Ricci constant $A \leq -2$.
Then the $\phi$-volume functional
on the space of affine Legendrian submanifolds is convex.
\end{thm}
For affine Legendrian submanifolds in an $\eta$-Einstein Sasakian manifold with the
$\eta$-Ricci constant $A > -2$,
we have the following.
\begin{thm} \label{thm obstruction}
Let $(M^{2n+1}, g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional
$\eta$-Einstein Sasakian manifold with the
$\eta$-Ricci constant $A > -2$.
Then there are no
$\phi$-minimal affine Legendrian submanifolds
which are $\phi$-stable.
\end{thm}
Next,
we define a special affine Legendrian submanifold in
a Sasaki-Einstein manifold
with a Calabi-Yau structure on its cone
by requiring that its cone is special affine Lagrangian
(Definition \ref{def of special aff Leg}).
This notion is a generalization of that of special Legendrian submanifolds.
By a slight generalization of the general deformation theory of
Moriyama \cite[Proposition 2.2]{Moriyama},
we
study the moduli space of special affine Legendrian submanifolds
and obtain the following.
\begin{thm} \label{smooth moduli affine Leg}
Let $M$ be a Sasakian manifold with
a Calabi-Yau structure on its cone
and $L$ be a compact connected manifold
admitting a special affine Legendrian embedding $L \hookrightarrow M$.
Then
the moduli space
of special affine Legendrian embeddings of $L$
is an infinite dimensional smooth Fr\'{e}chet manifold
modeled on the Fr\'{e}chet vector space
$\{ (g, \alpha) \in C^{\infty}(L) \oplus \Omega^{1}(L); (n+1)g + d^{*} \alpha = 0 \} \cong \Omega^{1}(L)$,
which is identified with
the direct sum of
the space of functions with integral $0$ and that of coclosed 1-forms.
It is a submanifold of the
moduli space of smooth affine Legendrian embeddings of $L$.
\end{thm}
\begin{rem}
Theorem \ref{smooth moduli affine Leg} shows
the different property of
the moduli space of special affine Legendrian submanifolds
from
that of special Legendrian submanifolds.
In general,
there are obstructions of special Legendrian deformations.
See \cite[Section 4.2]{Moriyama}.
\end{rem}
This paper is organized as follows.
In Section 2, we review the fundamental
facts of Sasakian geometry.
In Section 3,
we review affine Lagrangian geometry
and introduce its odd dimensional analogue,
namely,
affine Legendrian geometry.
In Section 4, we compute the first variation of the $\phi$-volume.
In Section 5,
we compute the second variation of the $\phi$-volume
to obtain Theorems \ref{2nd var phi Vol},
\ref{thm stablity}, \ref{thm convexity} and \ref{thm obstruction}.
In Section 6,
we consider the $\phi$-volume in Sasaki-Einstein manifolds
and introduce the notion of
special affine Legendrian submanifolds.
In Section 7,
we study the moduli space of special affine Legendrian submanifolds
and prove Theorem \ref{smooth moduli affine Leg}.
\noindent{{\bf Acknowledgements}}:
The author owes a great intellectual debt to
the work of Lotay and Pacini \cite{LotayPacini},
which motivates him to study the subject of this paper.
He is grateful to Professor Takayuki Moriyama
for letting him know of his work and
for fruitful discussions.
He also thanks the referee for
useful comments on an earlier version of this paper.
\section{Sasakian geometry}
\begin{definition} \label{def of Sasakian 1}
Let $M^{2n + 1}$ be a $(2n + 1)$-dimensional manifold.
Suppose that
there exist
a contact form $\eta$,
a Riemannian metric $g$, a Killing vector field $\xi$
and a type (1, 1)-tensor $\phi$ on $M$.
We call
$(M, g, \eta, \xi, \phi)$
a {\bf Sasakian manifold} if
we have
\begin{align*}
\eta (\xi) &= 1, \\
\phi^{2} &= -id_{TM}+ \eta \otimes \xi, \\
g (\phi (X), \phi (Y)) &= g(X,Y) -\eta(X) \eta(Y), \\
d \eta &= 2 g (\cdot, \phi (\cdot) ), \\
[\phi,\phi](X, Y)+ d \eta (X, Y) \xi &= 0,
\end{align*}
where
$X, Y \in TM$,
$d \eta (X,Y) = X \eta(Y) - Y \eta(X) - \eta([X, Y])$ and
$[\phi, \phi](X, Y ) =
\phi^{2} [X, Y] + [\phi(X),\phi(Y)] -
\phi [\phi(X),Y]-\phi [X, \phi(Y )]$.
\end{definition}
We can also define a Sasakian manifold
in terms of a Riemannian cone.
\begin{definition} \label{def of Sasakian 2}
An odd dimensional Riemannian manifold
$(M, g)$ is a {\bf Sasakian manifold} if
its Riemannian cone
$(C(M), \bar{g}) = (\mathbb{R}_{>0} \times M, dr^{2} + r^{2} g)$
is a K\"ahler manifold with respect to some complex structure $J$ over $C(M)$.
\end{definition}
Here, $r$ is a standard coordinate of $\mathbb{R}_{>0}$ and we regard $r$ as
the function on $C(M)$.
We identify $M$ with the submanifold $\{1\} \times M \subset C(M)$.
It is known that
Definitions \ref{def of Sasakian 1} and \ref{def of Sasakian 2} are equivalent.
From Definition \ref{def of Sasakian 2}, we see that
Sasakian geometry is regarded as the odd-dimensional analogue of K\"{a}hler geometry.
Tensors in Definition \ref{def of Sasakian 1}
are recovered as follows.
Define the vector field $\tilde{\xi}$ and the 1-form $\tilde{\eta}$
on $C(M)$ by
\begin{align*}
\tilde{\xi} = - J \left( r\frac{\partial}{\partial r} \right), \qquad
\tilde{\eta} = \frac{1}{r^{2}}\bar{g}(\tilde{\xi},\cdot)=\frac{dr \circ J}{r}=-2d^{c}_{J}{\rm log}r,
\end{align*}
where $d^{c}_{J} f = -df \circ J/2 = i (\bar{\partial} - \partial)f/2$ for the function $f$ on $C(M)$.
Then we have
\begin{align*}
\xi = \tilde{\xi}|_{r=1}, \qquad
\eta= \tilde{\eta}|_{r=1}.
\end{align*}
By the decomposition
$TM = {\rm ker}\eta \oplus \mathbb{R}\xi$,
the endomorphism $\phi \in C^{\infty}(M, {\rm End}(TM))$ is given by
\begin{align*}
\phi|_{{\rm ker}\eta}=J|_{{\rm ker}\eta}, \qquad
\phi|_{\mathbb{R}\xi}=0.
\end{align*}
The metric $g$ on $M$, $J|_{ \{ r=1 \} }$
and the K\"{a}hler form $\bar{\omega}$ of $\bar{g}$
are described as
\begin{align*}
g &= \frac{1}{2}d\eta (\phi(\cdot), \cdot)+\eta \otimes \eta, \\
J|_{ \{ r=1 \} }&= \phi - \xi \otimes dr + \frac{\partial}{\partial r} \otimes \eta, \\
\bar{\omega}&= \frac{1}{2}d(r^{2}\tilde{\eta})=-\frac{1}{2}dd^{c}_{J}r^{2}.
\end{align*}
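The following standard example is included only for illustration; the formulas
are direct consequences of the sign conventions fixed above.
\begin{rem}
The unit sphere $S^{2n+1} \subset \mathbb{C}^{n+1}$ with the round metric is
Sasakian, since its Riemannian cone is $\mathbb{C}^{n+1} \setminus \{ 0 \}$ with
the flat K\"{a}hler structure. Writing $z_{j} = x_{j} + i y_{j}$, we compute from
$\tilde{\xi} = -J (r \frac{\partial}{\partial r})$ and $\tilde{\eta} = (dr \circ J)/r$ that
\begin{align*}
\xi = \sum_{j=1}^{n+1} \left( y_{j} \frac{\partial}{\partial x_{j}} - x_{j} \frac{\partial}{\partial y_{j}} \right), \qquad
\eta = \sum_{j=1}^{n+1} \left( y_{j} d x_{j} - x_{j} d y_{j} \right)
\end{align*}
on $S^{2n+1}$, so that $\eta (\xi) = \sum_{j=1}^{n+1} (x_{j}^{2} + y_{j}^{2}) = 1$,
and $\xi$ generates the Hopf $S^{1}$-action.
\end{rem}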
We summarize basic equations in Sasakian geometry.
See \cite[Section 2]{Ono}.
\begin{lem} \label{Sasaki eq1}
Let $(M, g, \eta, \xi, \phi)$ be a (2n+1)-dimensional Sasakian manifold.
Then we have
\begin{align*}
\phi (\xi) =0, \qquad
\eta \circ \phi =0, \qquad
\eta = g(\xi, \cdot), \qquad
i (\xi) d \eta = 0,
\end{align*}
where $i(\cdot )$ is an inner product.
\end{lem}
\begin{lem} \label{Sasaki eq2}
Let $(M, g, \eta, \xi, \phi)$ be a Sasakian manifold.
Then we have
\begin{align*}
\nabla_{X} \xi =& - \phi (X), \\
(\nabla_{X} \phi) (Y) =& g(X, Y) \xi - \eta(Y) X, \\
R(X, Y)\xi =& \eta (Y) X - \eta(X) Y, \\
R(X, Y) (\phi (Z))
=&
\phi (R(X, Y)Z) - g(Y, Z) \phi(X)
+g(\phi(X), Z) Y \\
&+ g(X, Z) \phi (Y)
- g(\phi (Y), Z) X, \\
{\rm Ric} (\xi, X)
=&
\left\{ \begin{array}{ll}
2n & (X=\xi) \\
0 & (X \in \ker \eta) \\
\end{array} \right.
\end{align*}
where $X, Y, Z \in \mathfrak{X}(M)$ are vector fields on $M$,
$\nabla$ is the Levi-Civita connection of $(M, g)$,
$R$ is the curvature tensor of $(M, g)$
and
${\rm Ric}$ is the Ricci curvature tensor of $(M, g)$.
\end{lem}
Note that when $(M, g)$ is Einstein, the scalar curvature is equal to $2n(2n+1)$.
\begin{definition} \label{def of eta Einstein}
A $(2n+1)$-dimensional Sasakian manifold $(M^{2n+1}, g, \eta, \xi, \phi)$
is called {\bf $\eta$-Einstein} if we have
\begin{align*}
{\rm Ric} = A g + (2n-A) \eta \otimes \eta
\end{align*}
for some $A \in \mathbb{R}$.
The constant
$A$ is called the {\bf $\eta$-Ricci constant}.
\end{definition}
This condition is necessary to prove Theorems \ref{thm stablity} and \ref{thm convexity}.
Note that $g$ is Einstein if $A=2n$.
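To illustrate the definition, take the trace of the $\eta$-Einstein equation with
respect to $g$; using ${\rm tr}_{g}\, g = 2n+1$ and
${\rm tr}_{g} (\eta \otimes \eta) = g(\xi, \xi) = 1$, we obtain
\begin{align*}
{\rm scal} = A(2n+1) + (2n - A) = 2n (A+1),
\end{align*}
which recovers the value $2n(2n+1)$ when $A = 2n$. Similarly,
${\rm Ric}(\xi, \xi) = A + (2n - A) = 2n$ for any $A$, consistently with
Lemma \ref{Sasaki eq2}. For instance, the round sphere $S^{2n+1}$ is
$\eta$-Einstein with $A = 2n$.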
Let $L$ be an $n$-dimensional manifold admitting an immersion
$\iota: L \hookrightarrow M$ into a $(2n+1)$-dimensional Sasakian manifold.
The immersion $\iota$ induces the immersion
\begin{align} \label{induced imm}
\bar{\iota}: C(L) = \mathbb{R}_{>0} \times L
\ni (r, x) \mapsto
(r, \iota(x))
\in
\mathbb{R}_{>0} \times M = C(M).
\end{align}
\begin{definition}
An immersion $\iota: L \hookrightarrow M$ is {\bf Legendrian}
if $\iota^{*} \eta = 0$.
This is equivalent to the condition that
the induced immersion $\bar{\iota}: C(L) \hookrightarrow C(M)$ given by (\ref{induced imm})
is Lagrangian: $\bar{\iota}^{*} \bar{\omega} = 0$.
\end{definition}
\section{Affine Lagrangian and affine Legendrian submanifolds}
First, we review affine Lagrangian geometry by \cite{Borrelli, LotayPacini}.
\subsection{Affine Lagrangian submanifolds}
Let $(X, h, J, \omega)$ be a real $2n$-dimensional
almost Hermitian manifold,
where $h$ is a Hermitian metric,
$J$ is an almost complex structure
and $\omega$ is an associated K\"{a}hler form.
Let $N$ be an oriented $n$-dimensional manifold
admitting
an immersion $f: N \hookrightarrow X$.
\begin{definition} \label{def of aff Lag}
An immersion $f$ is called {\bf affine Lagrangian} if
\begin{align} \label{def aff Lag decomp}
T_{f (x)} X = f_{*} (T_{x}N) \oplus J f_{*} (T_{x}N)
\end{align}
for any $x \in N$.
\end{definition}
\begin{rem}
If $N$ is Lagrangian (i.e. $f^{*} \omega = 0$), (\ref{def aff Lag decomp})
is an orthogonal decomposition.
The affine Lagrangian condition does not require
the orthogonality of the decomposition (\ref{def aff Lag decomp}).
\end{rem}
\begin{rem}
The affine Lagrangian condition is an open condition.
The metric is not needed in the definition,
and hence we can define affine Lagrangian submanifolds
in an almost complex manifold.
\end{rem}
Next,
we recall the $J$-volume
introduced by Borrelli \cite{Borrelli}.
\begin{definition} \label{def of J vol}
Let $f: N^{n} \hookrightarrow X^{2n}$ be
an affine Lagrangian immersion.
Define the
{\bf $J$-volume form} ${\rm vol}_{J} [f]$ of $f$,
which is the $n$-form on $N$,
by
\begin{align*}
{\rm vol}_{J} [f] = \rho_{J} [f] {\rm vol}_{f^{*} h},
\end{align*}
where
${\rm vol}_{f^{*} h}$ is the Riemannian volume form of $f^{*} h$ and
the function $\rho_{J} [f]$ on $N$ is defined by
\begin{align*}
\rho_{J}[f](x) =
\sqrt
{{\rm vol}_{h}(f_{*} e_{1}, \cdots, f_{*} e_{n}, J f_{*} e_{1}, \cdots, J f_{*} e_{n})}
\end{align*}
for $x \in N$,
where $\{e_{1},\cdots, e_{n} \}$ is an orthonormal basis of $T_{x} N$.
When $N$ is compact, define the
{\bf $J$-volume} ${\rm Vol}_{J} [f]$ of $N$ by
\begin{align*}
{\rm Vol}_{J} [f] = \int_{N} {\rm vol}_{J} [f].
\end{align*}
\end{definition}
\begin{rem}
The
definition of the $J$-volume form ${\rm vol}_{J} [f]$
is independent of the choice of the Hermitian metric $h$.
See \cite[Section 3.2, 4.1]{LotayPacini}.
Thus the $J$-volume is also defined in an almost complex manifold.
\end{rem}
By definition, the following is easy to prove and we omit the proof.
\begin{lem} \label{equality rho J}
We have $0 \leq \rho_{J}[f] \leq 1$.
The equality $\rho_{J}[f] = 1$ holds if and only if $f$ is Lagrangian.
The equality $\rho_{J}[f] = 0$ holds if and only if $f$ is partially complex,
namely, $f_{*} T_{x} N$ contains a complex line for any $x \in N$.
\end{lem}
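The following elementary computation in $\mathbb{C}^{2}$, with coordinates
$z_{j} = x_{j} + i y_{j}$ and the standard structure, is included only to
illustrate Lemma \ref{equality rho J}. For $\theta \in \mathbb{R}$, let
$P_{\theta} \subset \mathbb{C}^{2}$ be the plane spanned by the orthonormal vectors
\begin{align*}
e_{1} = \frac{\partial}{\partial x_{1}}, \qquad
e_{2} = \cos \theta \frac{\partial}{\partial x_{2}} + \sin \theta \frac{\partial}{\partial y_{1}}.
\end{align*}
Then $\omega (e_{1}, e_{2}) = \sin \theta$, so $P_{\theta}$ is Lagrangian if and
only if $\sin \theta = 0$, while
${\rm vol}_{h} (e_{1}, e_{2}, J e_{1}, J e_{2}) = \cos^{2} \theta$ up to
orientation, so that $\rho_{J} = |\cos \theta|$. In particular,
$\rho_{J} = 1$ exactly in the Lagrangian case, and $\rho_{J} = 0$ exactly when
$e_{2} = \pm J e_{1}$, that is, when $P_{\theta}$ is a complex line;
$P_{\theta}$ is affine Lagrangian if and only if $\cos \theta \neq 0$.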
\begin{lem} \label{diff equiv aff Lag}
For any diffeomorphism $\varphi \in {\rm Diff}^{\infty}(N)$ of $N$, we have
\begin{align*}
{\rm vol}_{J} [f \circ \varphi] = \varphi^{*} {\rm vol}_{J} [f].
\end{align*}
Thus when $N$ is compact, we obtain
\begin{align*}
{\rm Vol}_{J} [f \circ \varphi] = {\rm Vol}_{J} [f].
\end{align*}
\end{lem}
\begin{proof}
We easily see that
$\rho_{J}[f \circ \varphi] (x) = \rho_{J}[f] (\varphi (x))$,
${\rm vol}_{(f \circ \varphi)^{*} h} = \varphi^{*} {\rm vol}_{f^{*} h}$,
which imply the statement.
\end{proof}
\subsection{Affine Legendrian submanifolds}
Next, we introduce the odd dimensional analogue of affine Lagrangian geometry,
namely, affine Legendrian geometry.
Let $(M,g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional Sasakian manifold
and $L$ be an oriented $n$-dimensional manifold
admitting
an immersion $\iota: L \hookrightarrow M$.
\begin{definition} \label{def of aff Leg}
An immersion $\iota$ is called {\bf affine Legendrian} if
\begin{align} \label{def aff Leg decomp}
T_{\iota (x)} M = \iota_{*} (T_{x}L) \oplus \phi \iota_{*} (T_{x}L) \oplus \mathbb{R} \xi_{\iota (x)}
\end{align}
for any $x \in L$.
\end{definition}
Then we can define canonical projections
\begin{align}\label{canonical proj}
\pi_{L}: T_{x} M \rightarrow \iota_{*} (T_{x}L), \qquad
\pi_{\phi}: T_{x} M \rightarrow \phi \iota_{*} (T_{x}L), \qquad
\pi_{\xi}: T_{x} M \rightarrow \mathbb{R} \xi_{\iota (x)}.
\end{align}
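By construction, these projections resolve the identity:
\begin{align*}
\pi_{L} + \pi_{\phi} + \pi_{\xi} = {\rm id}_{T_{x} M},
\end{align*}
which will be used implicitly in the computations below.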
\begin{rem}
If $L$ is Legendrian (i.e. $\iota^{*} \eta = 0$), (\ref{def aff Leg decomp})
is an orthogonal decomposition.
The affine Legendrian condition does not require
the orthogonality of the decomposition (\ref{def aff Leg decomp}).
\end{rem}
\begin{rem}
We can define affine Legendrian submanifolds
in an almost contact manifold.
To simplify the computations, especially in Sections 4 and 5,
we assume that $M$ is Sasakian.
\end{rem}
By definition, we easily see the following.
\begin{rem}
An immersion $\iota: L \rightarrow M$
is affine Legendrian
if and only if
$\bar{\iota}: C(L) \rightarrow C(M)$ given by (\ref{induced imm})
is affine Lagrangian.
\end{rem}
Next,
we define the $\phi$-volume
as an analogue of the $J$-volume.
Recall that
the Riemannian volume form ${\rm vol}_{\bar{\iota}^{*} \bar{g}}$
of $\bar{\iota}^{*} \bar{g}$ on $C(L)$
and
the Riemannian volume form ${\rm vol}_{\iota^{*} g}$
of $\iota^{*} g$ on $L$
are related by
$
{\rm vol}_{\bar{\iota}^{*} \bar{g}} = r^{n} dr \wedge {\rm vol}_{\iota^{*} g}.
$
As an analogue of this fact,
we define the $\phi$-volume.
\begin{definition} \label{def of phi vol}
Let $\iota: L^{n} \hookrightarrow M^{2n+1}$ be
an affine Legendrian immersion into
a Sasakian manifold.
Define the
{\bf $\phi$-volume form} ${\rm vol}_{\phi} [\iota]$ of $\iota$,
which is the $n$-form on $L$,
by
\begin{align*}
{\rm vol}_{J} [\bar{\iota}] = r^{n} dr \wedge {\rm vol}_{\phi} [\iota].
\end{align*}
When $L$ is compact, define the
{\bf $\phi$-volume} ${\rm Vol}_{\phi} [\iota]$ of $L$ by
\begin{align*}
{\rm Vol}_{\phi} [\iota] = \int_{L} {\rm vol}_{\phi} [\iota].
\end{align*}
\end{definition}
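The normalization can be read off directly from this definition: integrating
${\rm vol}_{J} [\bar{\iota}] = r^{n} dr \wedge {\rm vol}_{\phi} [\iota]$ over the
truncated cone $(0, 1] \times L$ gives
\begin{align*}
\int_{(0, 1] \times L} {\rm vol}_{J} [\bar{\iota}]
= \int_{0}^{1} r^{n} dr \cdot {\rm Vol}_{\phi} [\iota]
= \frac{1}{n+1} {\rm Vol}_{\phi} [\iota],
\end{align*}
so the $\phi$-volume of $L$ is, up to the universal factor $n+1$, the $J$-volume
of the unit truncated cone over $L$.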
The $\phi$-volume form ${\rm vol}_{\phi} [\iota]$ is described
more explicitly as follows.
Define the function $\rho_{\phi}[\iota]$ on $L$ by
\begin{align*}
\rho_{\phi}[\iota](x) &= \rho_{J}[\bar{\iota}] (1, x)\\
&=
\sqrt
{{\rm vol}_{g}(\iota_{*} e_{1}, \cdots, \iota_{*} e_{n}, -\xi, \phi \iota_{*} e_{1}, \cdots, \phi \iota_{*} e_{n})}
\end{align*}
for $x \in L$, where $\{e_{1},\cdots, e_{n} \}$ is an orthonormal basis of $T_{x} L$.
Then we see that
\begin{align*}
{\rm vol}_{\phi} [\iota] = \rho_{\phi} [\iota] {\rm vol}_{\iota^{*} g}.
\end{align*}
As in the affine Lagrangian case, we easily see the following.
\begin{rem}
The
definition of the $\phi$-volume form ${\rm vol}_{\phi} [\iota]$
is independent of the choice of the Sasakian metric $g$.
\end{rem}
\begin{lem} \label{equality rho phi}
We have $0 \leq \rho_{\phi}[\iota] \leq 1$.
The equality $\rho_{\phi}[\iota] = 1$ holds if and only if $\iota$ is Legendrian.
The equality $\rho_{\phi}[\iota] = 0$ holds if and only if,
for any $x \in L$, there exists $0 \neq X \in T_{x} L$ such that
$\phi \iota_{*} X \in \iota_{*} T_{x} L \oplus \mathbb{R} \xi_{\iota (x)}$
(this includes the case where $\iota_{*} X$ is a multiple of $\xi_{\iota (x)}$,
for then $\phi \iota_{*} X = 0$).
\end{lem}
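We record the elementary verification of the Legendrian case, which uses only the
structure equations of Section 2. If $\iota^{*} \eta = 0$, then also
$\iota^{*} d \eta = 0$, so for an orthonormal basis $\{ e_{i} \}$ of $T_{x} L$
we have
\begin{align*}
g (e_{i}, \phi (e_{j})) = \frac{1}{2} d \eta (e_{i}, e_{j}) = 0, \qquad
g (\phi (e_{i}), \phi (e_{j})) = \delta_{ij}, \qquad
g (\xi, e_{i}) = g (\xi, \phi (e_{i})) = 0.
\end{align*}
Hence $\{ e_{1}, \cdots, e_{n}, \phi (e_{1}), \cdots, \phi (e_{n}), \xi \}$ is an
orthonormal basis of $T_{\iota (x)} M$ and $\rho_{\phi}[\iota] = 1$.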
\begin{lem} \label{diff equiv aff Leg}
For any diffeomorphism $\varphi \in {\rm Diff}^{\infty}(L)$ of $L$, we have
\begin{align*}
{\rm vol}_{\phi} [\iota \circ \varphi] = \varphi^{*} {\rm vol}_{\phi} [\iota].
\end{align*}
Thus when $L$ is compact, we obtain
\begin{align*}
{\rm Vol}_{\phi} [\iota \circ \varphi] = {\rm Vol}_{\phi} [\iota].
\end{align*}
\end{lem}
\subsection{Geodesics and convexity}
In \cite[Section 3.1]{LotayPacini}, the notion of geodesics
in the space of affine Lagrangian submanifolds
was introduced.
Analogously,
we define the notion of geodesics
in the space of affine Legendrian submanifolds.
Let $(M, g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional Sasakian manifold and
$L$ be an oriented $n$-dimensional manifold admitting an embedding $L \hookrightarrow M$.
Let $\mathcal{P}$ be the space of all affine Legendrian embeddings of $L$:
\begin{align*}
\mathcal{P} = \{ \iota: L \hookrightarrow M; \iota \mbox{ is an affine Legendrian embedding} \}.
\end{align*}
The group ${\rm Diff}^{\infty}(L)$ of diffeomorphisms of $L$
acts freely on $\mathcal{P}$ on the right
by reparametrizations.
Set $\mathcal{A} = \mathcal{P}/ {\rm Diff}^{\infty}(L)$.
Thus we can regard $\mathcal{P}$ as
a principal ${\rm Diff}^{\infty}(L)$-bundle over $\mathcal{A}$.
Denote by $\pi: \mathcal{P} \rightarrow \mathcal{A}$ the canonical projection.
For each $\iota \in \mathcal{P}$, define the subspaces of $T_{\iota} \mathcal{P}$ by
\begin{align*}
V_{\iota} = \{ \iota_{*} X; X \in \mathfrak{X}(L) \}, \qquad
H_{\iota} = \{ \phi \iota_{*} Y + f \xi \circ \iota; Y \in \mathfrak{X}(L), f \in C^{\infty}(L) \}.
\end{align*}
We easily see that
$V_{\iota} = \ker ((d \pi)_{\iota} : T_{\iota} \mathcal{P} \rightarrow T_{\pi (\iota)} \mathcal{A})$
and
we have a decomposition
$T_{\iota} \mathcal{P} = V_{\iota} \oplus H_{\iota}$.
As in the proof of \cite[Lemma 3.1]{LotayPacini}, we see that
the distribution $\iota \mapsto H_{\iota}$ on $\mathcal{P}$
is ${\rm Diff}^{\infty}(L)$-invariant.
Thus
the distribution $\iota \mapsto H_{\iota}$ defines a connection on
the principal ${\rm Diff}^{\infty}(L)$-bundle $\mathcal{P}$.
It is known that
the associated vector bundle
$
\mathcal{P} \times_{{\rm Diff}^{\infty}(L)} (\mathfrak{X}(L) \times C^{\infty}(L))
$
to the standard action of ${\rm Diff}^{\infty}(L)$ on
$\mathfrak{X}(L) \times C^{\infty}(L)$
is isomorphic to
the tangent bundle $T \mathcal{A}$:
\begin{align*}
\mathcal{P} \times_{{\rm Diff}^{\infty}(L)} (\mathfrak{X}(L) \times C^{\infty}(L))
\cong T \mathcal{A}
\end{align*}
via $[\iota, (Y, f)] \mapsto (d \pi)_{\iota} (\phi \iota_{*} Y + f \xi \circ \iota)$.
Then the connection on $\mathcal{P}$ induces a connection on $T \mathcal{A}$.
We define the geodesic
$\{ L_{t} \}$ on $\mathcal{A}$ by requiring that
$\frac{d L_{t}}{dt}$ is parallel with respect to this connection.
\begin{lem} \label{def of geodesic}
A curve $\{ L_{t} \} \subset \mathcal{A}$ is a
{\bf geodesic} if and only if there exists a curve
of affine Legendrian embeddings $\{ \iota_{t} \}$, a fixed vector field $Y \in \mathfrak{X}(L)$
and a function $f \in C^{\infty}(L)$
such that
$\pi (\iota_{t}) = L_{t}$ and
\begin{align*}
\frac{d \iota_{t}}{dt} = \phi (\iota_{t})_{*} Y + f \xi \circ \iota_{t}.
\end{align*}
This implies that
$[ (\iota_{t})_{*} Y, \phi (\iota_{t})_{*} Y + f \xi \circ \iota_{t}] = 0$
for all $t$ for which $\{ L_{t} \}$ is defined.
\end{lem}
\begin{proof}
Let $\{ L_{t} \}_{t \in (a, b)} \subset \mathcal{A}$ be a geodesic
and $\{ x(s) \}_{s \in (c, d)}$ be an integral curve
of $Y$ on $L$.
Then
\begin{align*}
F: (c, d) \times (a, b) \ni (s, t) \mapsto \iota_{t} (x(s)) \in M
\end{align*}
is an embedded surface in $M$ and
\begin{align*}
\frac{\partial F}{\partial t} = \phi (\iota_{t})_{*} Y + f \xi \circ \iota_{t}, \qquad
\frac{\partial F}{\partial s} = (\iota_{t})_{*} Y,
\end{align*}
which implies that these two vector fields commute, as claimed.
\end{proof}
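\begin{rem}
As a simple example, take $Y = 0$ and $f \equiv 1$ in Lemma \ref{def of geodesic}:
the curve $\iota_{t}$ obtained by flowing $\iota$ along the Reeb vector field
satisfies $\frac{d \iota_{t}}{dt} = \xi \circ \iota_{t}$ and hence projects to a
geodesic in $\mathcal{A}$. Since the flow of $\xi$ preserves $g$, $\eta$ and
$\phi$, each $\iota_{t}$ is again affine Legendrian.
\end{rem}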
\begin{definition} \label{def of convex}
A functional $F : \mathcal{A} \rightarrow \mathbb{R}$ is {\bf convex}
if and only if
$\{ F \circ L_{t} \}$
is a convex function in one variable for any geodesic $\{ L_{t} \}$ in $\mathcal{A}$.
\end{definition}
\begin{rem}
The existence theory of a geodesic for any
$Y \in \mathfrak{X}(L)$ and $f \in C^{\infty}(L)$ is not known
as in the case of the standard Riemannian geometry.
\end{rem}
\section{First variation of the $\phi$-volume} \label{first var}
Let $(M^{2n+1}, g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional Sasakian manifold.
Let $\iota : L^{n} \hookrightarrow M$
be an affine Legendrian immersion
of an oriented $n$-dimensional manifold $L$.
For simplicity, we identify
$\iota_{*} X$ with $X$ for $X \in TL$.
Fix a point $x \in L$.
Let
$\{e_{1}, \cdots, e_{n} \}$
be an orthonormal basis of $T_{x}L$.
Since $\iota$ is affine Legendrian,
$\{ e_{1}, \cdots, e_{n}, \phi (e_{1}), \cdots, \phi (e_{n}), \xi \}$
is the basis of $T_{\iota (x)} M$.
Let
$\{e^{1}, \cdots, e^{n}, f^{1}, \cdots, f^{n}, \eta^{*} \} \subset T^{*}_{\iota (x)} M$
be the dual basis.
We easily see the following.
\begin{lem} \label{basic relation of dual coframe}
Use the notation in (\ref{canonical proj}). We have
\begin{align*}
e^{i} &= g(\pi_{L} (\cdot), e_{i}), \qquad
\eta^{*} = g(\xi, \pi_{\xi} (\cdot)) = \eta (\pi_{\xi} (\cdot)) = \eta - \eta \circ \pi_{L},\\
e^{i} &= f^{i} \circ \phi, \qquad
f^{i} = -e^{i} \circ \phi.
\end{align*}
In particular, we have
\begin{align*}
\eta \circ \pi_{L} \circ \phi = - \eta^{*} \circ \phi.
\end{align*}
\end{lem}
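For later use, we check the identity $e^{i} = f^{i} \circ \phi$ on the basis
$\{ e_{j}, \phi (e_{j}), \xi \}$; the computation uses only
$\phi^{2} = - id_{TM} + \eta \otimes \xi$ and $\phi (\xi) = 0$:
\begin{align*}
f^{i} (\phi (e_{j})) &= \delta_{ij} = e^{i} (e_{j}), \\
f^{i} (\phi (\phi (e_{j}))) &= f^{i} (- e_{j} + \eta (e_{j}) \xi) = 0 = e^{i} (\phi (e_{j})), \\
f^{i} (\phi (\xi)) &= 0 = e^{i} (\xi).
\end{align*}
The identity $f^{i} = - e^{i} \circ \phi$ is checked in the same way.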
Now, we compute the first variation of the $\phi$-volume form.
We first give all the statements in this section and then prove them.
\begin{prop} \label{1st var phi vol form}
Let
$\iota_{t} : L \hookrightarrow M$
be a one-parameter family of affine Legendrian immersions
satisfying $\iota_{0} = \iota$.
Set $\frac{\partial \iota_{t}}{\partial t}|_{t=0} = Z \in C^{\infty}(L, \iota^{*} TM)$.
Then
at $x \in L$
we have
\begin{align*}
\left. \frac{\partial}{\partial t} {\rm vol}_{\phi} [\iota_{t}] \right|_{t=0}
=
\left(
\sum_{i=1}^{n}
e^{i} (\nabla_{e_{i}} Z)
- \eta^{*} (\phi (Z))
\right)
{\rm vol}_{\phi} [\iota].
\end{align*}
\end{prop}
\begin{definition} \label{def of H phi}
Define the vector field $H_{\phi} \in \mathfrak{X}(L)$ on $L$ by
\begin{align*}
H_{\phi} =
-\left(
\phi {\rm tr}_{L} (\pi^{t}_{\phi} \nabla \pi^{t}_{L})
\right)^{\top}
+
\xi^{\top},
\end{align*}
where
${\rm tr}_{L}$ is a trace on $L$,
$\top: \iota^{*} TM \rightarrow \iota_{*} TL$ is the tangential projection defined by
the orthogonal decomposition of $\iota^{*} TM$ by the metric $g$
and
\begin{align*}
\pi^{t}_{L} : \iota^{*} TM \rightarrow (\phi (\iota_{*} TL) \oplus \mathbb{R} \xi \circ \iota )^{\perp},
\end{align*}
where
$(\phi (\iota_{*} TL) \oplus \mathbb{R} \xi \circ \iota )^{\perp}$
is the orthogonal complement of $\phi (\iota_{*} TL) \oplus \mathbb{R} \xi \circ \iota$
with respect to $g$,
is the transposed operator of $\pi_{L}$ defined in (\ref{canonical proj})
via the metric $g$, namely
\begin{align*}
g(\pi^{t}_{L} X, Y) = g(X, \pi_{L} Y)
\end{align*}
for any $X, Y \in \iota^{*} TM$.
Similarly, we can define transposed operators
$\pi^{t}_{\phi}: \iota^{*} TM \rightarrow (\iota_{*} TL \oplus \mathbb{R} \xi \circ \iota)^{\perp}$
and $\pi^{t}_{\xi} : \iota^{*} TM \rightarrow (\iota_{*} TL \oplus \phi (\iota_{*} TL))^{\perp}$
of $\pi_{\phi}$ and $\pi_{\xi}$, respectively.
\end{definition}
The vector field $\phi H_{\phi}$ is a generalization of a mean curvature vector.
See Remark \ref{MC in Leg case}.
\begin{cor} \label{1st var cor}
Let $X, Y \in \mathfrak{X}(L)$ be vector fields on $L$
and $f \in C^{\infty}(L)$ be a smooth function.
Then we have
\begin{align*}
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} (X+\phi Y + f \xi))
=
\frac{{\rm div} (\rho_{\phi} [\iota] X)}{\rho_{\phi}[\iota]}
-
g (Y, H_{\phi}) + \eta (Y).
\end{align*}
\end{cor}
From
Proposition \ref{1st var phi vol form} and Corollary \ref{1st var cor},
we immediately see the following first variation formula
of the $\phi$-volume.
\begin{prop} \label{1st var phi Vol}
Use the notation of Proposition \ref{1st var phi vol form}.
Suppose that $L$ is compact and
$\frac{\partial \iota_{t}}{\partial t}|_{t=0} = Z
= \phi Y + f \xi$,
where $Y \in \mathfrak{X}(L)$ is a vector field on $L$
and $f \in C^{\infty}(L)$ is a smooth function.
Then we have
\begin{align*}
\left. \frac{d}{dt} {\rm Vol}_{\phi} [\iota_{t}] \right|_{t=0}
=
- \int_{L} g (Y, H_{\phi}) {\rm vol}_{\phi} [\iota].
\end{align*}
In particular,
$\iota$ is a critical point for the $\phi$-volume
if and only if $H_{\phi} = 0$.
\end{prop}
Note that $f$ does not appear in this formula.
We call an immersion {\bf $\phi$-minimal}
if $H_{\phi} = 0$.
\begin{rem} \label{MC in Leg case}
Suppose that $L$ is Legendrian.
Let $H$ be the mean curvature vector of $L$.
Then we have
\begin{align*}
H_{\phi} = - \phi H, \qquad \phi H_{\phi} = H.
\end{align*}
\end{rem}
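Before proceeding to the proofs, we note a consistency check of Proposition
\ref{1st var phi Vol} in the Legendrian case. If $\iota$ is Legendrian, then
$\rho_{\phi}[\iota] = 1$ and $H_{\phi} = - \phi (H)$, and the skew-symmetry
$g(X, \phi (Y)) = - g(\phi (X), Y)$, which follows from
$d \eta = 2 g(\cdot, \phi (\cdot))$, gives
\begin{align*}
- \int_{L} g(Y, H_{\phi}) \, {\rm vol}_{\phi} [\iota]
= \int_{L} g(Y, \phi (H)) \, {\rm vol}_{\iota^{*} g}
= - \int_{L} g(Z, H) \, {\rm vol}_{\iota^{*} g},
\end{align*}
where we used $g(\xi, H) = \eta (H) = 0$ for Legendrian $\iota$. This is the
classical first variation of the Riemannian volume in the direction
$Z = \phi (Y) + f \xi$.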
\begin{proof}[Proof of Proposition \ref{1st var phi vol form}]
Denote by ${\rm vol}_{\phi} [\iota_{t}]$ the $\phi$-volume form of $\iota_{t}$.
Let $\{e_{1}(t), \cdots, e_{n}(t) \}$ be an oriented orthonormal basis of $T_{x} L$
with respect to $\iota_{t}^{*} g$
and
$\{e^{1}(t), \cdots, e^{n}(t) \} \subset T^{*}_{x} L$ be the dual basis.
Then
we have
\begin{align*}
{\rm vol}_{\phi} [\iota_{t}]_{x}
&= \rho_{\phi}[\iota_{t}] (x) e^{1}(t) \wedge \cdots \wedge e^{n}(t), \\
\rho_{\phi} [\iota_{t}] (x)
&=
\sqrt{
({\rm vol}_{g})_{\iota_{t}(x)}
(
(\iota_{t})_{*} e_{1}(t), \cdots, (\iota_{t})_{*} e_{n}(t),
-\xi_{\iota_{t}(x)},
\phi (\iota_{t})_{*} e_{1}(t), \cdots, \phi (\iota_{t})_{*} e_{n}(t)
)
}.
\end{align*}
Since
\begin{align*}
(\iota_{t})_{*} (e_{1} \wedge \cdots \wedge e_{n})
=
(e^{1}(t) \wedge \cdots \wedge e^{n}(t))(e_{1}, \cdots, e_{n}) \cdot
(\iota_{t})_{*} (e_{1}(t) \wedge \cdots \wedge e_{n}(t)),
\end{align*}
it follows that
\begin{align*}
{\rm vol}_{\phi} [\iota_{t}]_{x} &= \rho_{\phi} (t) (x) \cdot {\rm vol}_{\iota^{*}g},
\end{align*}
where
\begin{align*}
\rho_{\phi} (t) (x)
=
\sqrt{
({\rm vol}_{g})_{\iota_{t}(x)}
(
(\iota_{t})_{*} e_{1}, \cdots, (\iota_{t})_{*} e_{n},
-\xi_{\iota_{t}(x)},
\phi (\iota_{t})_{*} e_{1}, \cdots, \phi (\iota_{t})_{*} e_{n}
)
}.
\end{align*}
Thus it suffices to compute $\left. \frac{\partial}{\partial t} \rho_{\phi}(t) (x) \right|_{t=0}$.
Set
\begin{align*}
\nabla_{Z} e_{i} &= \nabla_{\frac{\partial}{\partial t}} (\iota_{t})_{*} e_{i} |_{t=0}, \qquad
\nabla_{Z} (\phi e_{i}) = \nabla_{\frac{\partial}{\partial t}} (\phi (\iota_{t})_{*} e_{i}) |_{t=0}.
\end{align*}
Since the volume form is parallel, we have
\begin{align*}
& \left. \frac{\partial}{\partial t} \rho_{\phi}(t) (x) \right|_{t=0}
\\
=&
\frac{\sum_{i=1}^{n}
({\rm vol}_{g})_{\iota (x)}
(
e_{1}, \cdots, \nabla_{Z} e_{i}, \cdots, e_{n}, -\xi_{\iota (x)},
\phi e_{1}, \cdots, \phi e_{n}
)
}{2 \rho_{\phi}[\iota]}\\
&+
\frac{\sum_{i=1}^{n}
({\rm vol}_{g})_{\iota (x)}
( e_{1}, \cdots, e_{n},
-\xi_{\iota (x)},
\phi e_{1}, \cdots,
\nabla_{Z} (\phi e_{i}), \cdots,
\phi e_{n}
)
}{2 \rho_{\phi}[\iota]}\\
&+
\frac{
({\rm vol}_{g})_{\iota (x)}
( e_{1}, \cdots, e_{n},
- \nabla_{\frac{\partial}{\partial t}} \xi_{\iota_{t}(x)} |_{t=0},
\phi e_{1}, \cdots, \phi e_{n}
)
}{2 \rho_{\phi}[\iota]}.
\end{align*}
Using the notation at the beginning of Section \ref{first var},
we have
\begin{align*}
&
\left.
\frac{\partial}{\partial t} \rho_{\phi}(t) (x) \right|_{t=0}
\\
=&
\frac{\sum_{i=1}^{n}
e^{i} (\nabla_{Z} e_{i}) \rho_{\phi}[\iota]^{2}
}{2 \rho_{\phi}[\iota]}
+
\frac{\sum_{i=1}^{n}
f^{i} (\nabla_{Z} (\phi e_{i})) \rho_{\phi}[\iota]^{2}
}{2 \rho_{\phi}[\iota]}
-
\frac{
\eta^{*} (\phi (Z)) \rho_{\phi}[\iota]^{2}
}{2 \rho_{\phi}[\iota]}.
\end{align*}
By Lemmas \ref{Sasaki eq2} and \ref{basic relation of dual coframe},
it follows that
\begin{align*}
\nabla_{Z} (\phi e_{i})
&= g(Z, e_{i}) \xi - \eta (e_{i}) Z + \phi (\nabla_{Z} e_{i}), \\
\sum_{i=1}^{n} f^{i} (\nabla_{Z} (\phi e_{i}))
&=
\sum_{i=1}^{n} e^{i} (\nabla_{Z} e_{i})
- \eta^{*} (\phi (Z)).
\end{align*}
Thus we obtain
\begin{align*}
\left.
\frac{\partial}{\partial t} \rho_{\phi}(t) (x) \right|_{t=0}
=
\left(
\sum_{i=1}^{n}
e^{i} (\nabla_{Z} e_{i})
- \eta^{*} (\phi (Z))
\right)
\rho_{\phi}[\iota].
\end{align*}
Here,
let
$(x_{1}, \cdots, x_{n})$ be a
normal coordinate at $x \in L$ of $L$
satisfying
$e_{i} = \frac{\partial}{\partial x_{i}}$
at $x \in L$.
Define the map
$\tilde{\iota}: L \times (-\epsilon, \epsilon)
\rightarrow M$
by
$\tilde{\iota} (x, t) = \iota_{t} (x)$.
Then
$\tilde{\iota}_{*} (\frac{\partial}{\partial t})|_{t=0} = Z$.
Since we may regard $(x_{1}, \cdots, x_{n}, t)$
as local coordinates on $L \times (-\epsilon, \epsilon)$ near
$(x, 0)$,
we have $\nabla_{Z} e_{i} = \nabla_{e_{i}} Z$.
Thus we obtain the statement.
\end{proof}
\begin{proof}[Proof of Corollary \ref{1st var cor}]
Setting $Z=X \in \mathfrak{X}(L)$
in Proposition \ref{1st var phi vol form}, we have
\begin{align*}
\left. \frac{\partial}{\partial t} {\rm vol}_{\phi} [\iota_{t}] \right|_{t=0}
=
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} X) {\rm vol}_{\phi} [\iota].
\end{align*}
By Lemma \ref{diff equiv aff Leg}, it follows that
\begin{align*}
\left. \frac{\partial}{\partial t} {\rm vol}_{\phi} [\iota_{t}] \right|_{t=0}
=
L_{X} {\rm vol}_{\phi} [\iota]
=
{\rm div}(\rho_{\phi}[\iota] X) {\rm vol}_{\iota^{*}g}.
\end{align*}
Thus we obtain
\begin{align*}
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} X)
=
\frac{{\rm div} (\rho_{\phi} [\iota] X)}{\rho_{\phi}[\iota]}.
\end{align*}
Using the notation of (\ref{canonical proj}), we have
\begin{align*}
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} (\phi Y))
&=
\sum_{i=1}^{n} g(\pi_{L} (\nabla_{e_{i}} (\phi Y)), e_{i}) \\
&=
-\sum_{i=1}^{n} g( \pi_{\phi} (\phi Y), \nabla_{e_{i}} (\pi^{t}_{L} e_{i}))
=
g (Y, -H_{\phi} + \xi^{\top})
\end{align*}
by Lemma \ref{basic relation of dual coframe}.
It is easy to show that
$
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} (f \xi))
=0
$
and the proof is done.
\end{proof}
Note that we can also prove
Corollary \ref{1st var cor}
by a direct computation.
\begin{proof}[Proof of Remark \ref{MC in Leg case}]
Since $L$ is Legendrian,
we have $\pi_{L}^{t} = \pi_{L}$ and $\pi_{\phi}^{t} = \pi_{\phi}$.
Then
\begin{align*}
H_{\phi} = - \left( \phi \sum_{i=1}^{n} \pi_{\phi} \nabla_{e_{i}} e_{i} \right)^{\top}
= - \phi \sum_{i=1}^{n} \pi_{\phi} \nabla_{e_{i}} e_{i}.
\end{align*}
Let
$\pi_{\perp}: \iota^{*}TM = \iota_{*} TL \oplus (\iota_{*} TL)^{\perp} \rightarrow (\iota_{*} TL)^{\perp}$
be the normal projection with respect to $g$.
Then we see that
\begin{align*}
H &= \pi_{\perp} \left( \sum_{i=1}^{n} \nabla_{e_{i}} e_{i} \right)
= \pi_{\phi} \left(\sum_{i=1}^{n} \nabla_{e_{i}} e_{i} \right)
+ \pi_{\xi} \left(\sum_{i=1}^{n} \nabla_{e_{i}} e_{i} \right), \\
\pi_{\xi} \left(\nabla_{e_{i}} e_{i} \right)
&=
g(\nabla_{e_{i}} e_{i}, \xi) \xi
=
e_{i} (g(e_{i}, \xi)) \xi + g(e_{i}, \phi e_{i}) \xi
= 0,
\end{align*}
which implies the statement.
\end{proof}
\section{Second variation of the $\phi$-volume}
In this section,
we compute the second variation of the $\phi$-volume
and prove Theorems \ref{2nd var phi Vol},
\ref{thm stablity}, \ref{thm convexity} and \ref{thm obstruction}.
Use the notation in Section \ref{first var}.
First, we compute the second variation of the $\phi$-volume form.
\begin{prop} \label{2nd var phi vol form}
Let
$\iota_{t} : L \hookrightarrow M$
be a one-parameter family of affine Legendrian immersions
satisfying $\iota_{0} = \iota$.
Set $\frac{\partial \iota_{t}}{\partial t}|_{t=0} = Z \in C^{\infty}(L, \iota^{*}TM).$
Then at $x \in L$ we have
\begin{align*}
\left. \frac{\partial^{2}}{\partial t^{2}} {\rm vol}_{\phi} [\iota_{t}] \right|_{t=0}
=&
\left \{
-2 \eta^{*} (\phi (Z)) \sum_{i} e^{i} (\nabla_{e_{i}}Z) \right. \\
&- \sum_{i, j} e^{i} (\nabla_{e_{j}} Z) e^{j} (\nabla_{e_{i}} Z)
+ \sum_{i, j} f^{i} (\nabla_{e_{j}} Z) f^{j} (\nabla_{e_{i}} Z) \\
&+
\sum_{i} e^{i} (R(Z, e_{i})Z + \nabla_{e_{i}} \nabla_{Z} Z) \\
&- \eta(\pi_{L} Z) \eta^{*}(Z) - \eta^{*}(\phi (\nabla_{Z} Z)) - g(Z, \pi_{\phi} Z)\\
&-2 \sum_{i} f^{i}(Z) \eta^{*}(\nabla_{e_{i}} Z)
+2 \sum_{i} e^{i}(Z) \eta^{*}(\phi (\nabla_{e_{i}} Z)) \\
&\left. + \left( \sum_{i} e^{i} (\nabla_{e_{i}} Z) \right)^{2}
\right \}
{\rm vol}_{\phi} [\iota],
\end{align*}
where
$R$ is the curvature tensor of $(M, g)$.
In particular,
when $Z= X \in \mathfrak{X}(L)$, we have
\begin{align*}
\frac{\partial^{2}}{\partial t^{2}} {\rm vol}_{\phi} [\iota_{t}]|_{t=0}
=& {\rm div} ({\rm div} (\rho_{\phi}[\iota] X) X) {\rm vol}_{\iota^{*} g}\\
=&
\left \{
- \sum_{i, j} e^{i} (\nabla_{e_{j}} X) e^{j} (\nabla_{e_{i}} X)
+ \sum_{i, j} f^{i} (\nabla_{e_{j}} X) f^{j} (\nabla_{e_{i}} X)
\right.
\\
&+
\sum_{i} e^{i} (R(X, e_{i})X + \nabla_{e_{i}} \nabla_{X} X) \\
&+ \eta^{*}(\phi (\nabla_{X} X))
\left. + \left( \sum_{i} e^{i} (\nabla_{e_{i}} X) \right)^{2}
\right \}
{\rm vol}_{\phi} [\iota].
\end{align*}
\end{prop}
By Proposition \ref{2nd var phi vol form},
we obtain the second variation formula of the $\phi$-volume (Theorem \ref{2nd var phi Vol}).
\begin{proof}[Proof of Proposition \ref{2nd var phi vol form}]
Use the notation in the proof of Proposition \ref{1st var phi vol form}.
Set
\begin{align*}
\nabla_{Z} e_{i} &= \nabla_{\frac{\partial}{\partial t}} (\iota_{t})_{*} e_{i} |_{t=0}, \qquad
\nabla_{Z} (\phi e_{i}) = \nabla_{\frac{\partial}{\partial t}} (\phi (\iota_{t})_{*} e_{i}) |_{t=0}, \\
\nabla_{Z} \nabla_{Z} e_{i} &=
\nabla_{\frac{\partial}{\partial t}} \nabla_{\frac{\partial}{\partial t}}
(\iota_{t})_{*} e_{i} |_{t=0}, \qquad
\nabla_{Z} \nabla_{Z} (\phi e_{i}) =
\nabla_{\frac{\partial}{\partial t}} \nabla_{\frac{\partial}{\partial t}}
(\phi (\iota_{t})_{*} e_{i}) |_{t=0}.
\end{align*}
Then we have
\begin{align*}
\left.
\frac{\partial^{2}}{\partial t^{2}} \rho_{\phi}^{2} (t)
\right|_{t=0}
= \sum_{i=1}^{11} h_{i},
\end{align*}
where
\begin{align*}
h_{1} &=
\sum_{i \neq j} {\rm vol}_{g} (e_{1}, \cdots, \nabla_{Z} e_{i}, \cdots, \nabla_{Z} e_{j}, \cdots, e_{n},
-\xi, \phi e_{1}, \cdots, \phi e_{n}), \\
h_{2} &=
\sum_{i=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, \nabla_{Z} \nabla_{Z} e_{i}, \cdots, e_{n},
-\xi, \phi e_{1}, \cdots, \phi e_{n}), \\
h_{3} &=
\sum_{i=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, \nabla_{Z} e_{i}, \cdots, e_{n},
-\nabla_{Z} \xi, \phi e_{1}, \cdots, \phi e_{n}), \\
h_{4} &=
\sum_{i,j=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, \nabla_{Z} e_{i}, \cdots, e_{n},
-\xi, \phi e_{1}, \cdots, \nabla_{Z} (\phi e_{j}), \cdots, \phi e_{n}), \\
h_{5} &=
\sum_{i \neq j} {\rm vol}_{g} (e_{1}, \cdots, e_{n},
-\xi, \phi e_{1}, \cdots, \nabla_{Z} (\phi e_{i}), \cdots, \nabla_{Z} (\phi e_{j}), \cdots, \phi e_{n}), \\
h_{6} &=
\sum_{i=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, e_{n},
-\xi, \phi e_{1}, \cdots, \nabla_{Z} \nabla_{Z} (\phi e_{i}), \cdots, \phi e_{n}), \\
h_{7} &=
\sum_{i=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, e_{n},
-\nabla_{Z} \xi, \phi e_{1}, \cdots, \nabla_{Z} (\phi e_{i}), \cdots, \phi e_{n}), \\
h_{8} &=
\sum_{i,j=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, \nabla_{Z} e_{i}, \cdots, e_{n},
-\xi, \phi e_{1}, \cdots, \nabla_{Z} (\phi e_{j}), \cdots, \phi e_{n}), \\
h_{9} &=
\sum_{i=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, \nabla_{Z} e_{i}, \cdots, e_{n},
-\nabla_{Z} \xi, \phi e_{1}, \cdots, \phi e_{n}), \\
h_{10} &=
\sum_{i=1}^{n} {\rm vol}_{g} (e_{1}, \cdots, e_{n},
-\nabla_{Z} \xi, \phi e_{1}, \cdots, \nabla_{Z} (\phi e_{i}), \cdots, \phi e_{n}), \\
h_{11} &=
{\rm vol}_{g} (e_{1}, \cdots, e_{n},
-\nabla_{Z} \nabla_{Z} \xi, \phi e_{1}, \cdots, \phi e_{n}).
\end{align*}
We divide $h_{i}$'s into the following four classes and
compute in each class.
\begin{itemize}
\item[class 1:] $h_{1}, h_{5}$,
\item[class 2:] $h_{2}, h_{6}, h_{11},$
\item[class 3:] $h_{3}= h_{9}, h_{7} = h_{10},$
\item[class 4:] $h_{4} = h_{8}$.
\end{itemize}
We simplify $h_{i}$'s by Lemmas \ref{Sasaki eq1}, \ref{Sasaki eq2} and \ref{basic relation of dual coframe}.
First, we compute $h_{i}$'s in class 1.
It is easy to see that
\begin{align*}
h_{1} = \left \{
\left( \sum_{i} e^{i} (\nabla_{Z} e_{i}) \right)^{2}
-
\sum_{i, j} e^{i} (\nabla_{Z} e_{j}) e^{j} (\nabla_{Z} e_{i})
\right \}
\rho_{\phi}[\iota]^{2}.
\end{align*}
Since
\begin{align*}
\nabla_{Z} (\phi e_{i}) &= g(Z, e_{i}) \xi - \eta (e_{i}) Z + \phi (\nabla_{Z} e_{i}), \\
f^{i}(\nabla_{Z} (\phi e_{j})) &= - \eta (e_{j}) f^{i}(Z) + e^{i} (\nabla_{Z} e_{j}),
\end{align*}
we have
\begin{align*}
h_{5} &=
\sum_{i \neq j}
\left \{
f^{i}(\nabla_{Z} (\phi e_{i})) f^{j}(\nabla_{Z} (\phi e_{j}))
-
f^{j}(\nabla_{Z} (\phi e_{i})) f^{i}(\nabla_{Z} (\phi e_{j}))
\right \}
\rho_{\phi}[\iota]^{2}\\
&=
\sum_{i, j}
\left \{
-2 \eta (e_{i}) f^{i} (Z) e^{j}(\nabla_{Z} e_{j})
+2 \eta (e_{i}) f^{j}(Z) e^{i}(\nabla_{Z} e_{j})
\right \}
\rho_{\phi}[\iota]^{2}
+ h_{1}\\
&=
\left \{
2 \eta (\pi_{L} \phi (Z))
\sum_{j} e^{j}(\nabla_{Z} e_{j})
+2 \sum_{j} \eta (\pi_{L} \nabla_{Z} e_{j}) f^{j}(Z)
\right \}
\rho_{\phi}[\iota]^{2}
+ h_{1}\\
&=
\left \{
- 2 \eta^{*} (\phi (Z))
\sum_{j} e^{j}(\nabla_{Z} e_{j})
+2 \sum_{j} \eta (\pi_{L} \nabla_{Z} e_{j}) f^{j}(Z)
\right \}
\rho_{\phi}[\iota]^{2}
+ h_{1}.
\end{align*}
Next, we compute $h_{i}$'s in class 2.
We easily see that
\begin{align*}
h_{2} &= \sum_{i} e^{i} (\nabla_{Z} \nabla_{Z} e_{i}) \rho_{\phi}[\iota]^{2}, \qquad
h_{6} = \sum_{i} f^{i} (\nabla_{Z} \nabla_{Z} (\phi e_{i})) \rho_{\phi}[\iota]^{2}, \\
h_{11} &=
\eta^{*} (\nabla_{Z} \nabla_{Z} \xi) \rho_{\phi}[\iota]^{2}.
\end{align*}
Set $\nabla_{Z} Z = \nabla_{\frac{\partial}{\partial t}} \tilde{\iota}_{*} (\frac{\partial}{\partial t}) |_{t=0}$,
where $\tilde{\iota}$ is given
in the proof of Proposition \ref{1st var phi vol form}.
Since
\begin{align*}
\nabla_{Z} \nabla_{Z} (\phi e_{i})
=&
Z(g(Z, e_{i})) \xi - g(Z, e_{i}) \phi (Z)
+
\{ g(\phi(Z), e_{i}) - \eta (\nabla_{Z} e_{i}) \} Z \\
&- \eta (e_{i}) \nabla_{Z} Z
+g(Z, \nabla_{Z} e_{i}) \xi - \eta (\nabla_{Z} e_{i}) Z +
\phi (\nabla_{Z} \nabla_{Z} e_{i}), \\
\nabla_{Z} \nabla_{Z} \xi
=&
- g(Z, Z) \xi + \eta (Z) Z - \phi (\nabla_{Z} Z),
\end{align*}
we obtain
\begin{align*}
h_{6}
=&
\left \{
- g(Z, \pi_{L} Z)
- g(Z, \pi_{\phi} Z) \right.
\\
&\left. -2 \sum_{i} \eta (\nabla_{Z} e_{i}) f^{i}(Z)
- \eta^{*} (\phi (\nabla_{Z} Z))
\right \}
\rho_{\phi}[\iota]^{2}
+
h_{2}, \\
h_{11} =&
\{ - g(Z, Z) + \eta (Z) \eta^{*} (Z) - \eta^{*} (\phi (\nabla_{Z} Z)) \} \rho_{\phi}[\iota]^{2}\\
=&
\{ - g(Z, \pi_{L} Z) - g(Z, \pi_{\phi} Z) - \eta^{*} (\phi (\nabla_{Z} Z)) \} \rho_{\phi}[\iota]^{2}.
\end{align*}
Compute $h_{i}$'s in class 3 to obtain
\begin{align*}
h_{3} = h_{9}
=&
- \sum_{i}
\left \{
e^{i} (\nabla_{Z} e_{i}) \eta^{*} (\phi(Z))
+
\eta^{*} (\nabla_{Z} e_{i}) f^{i} (Z)
\right \} \rho_{\phi}[\iota]^{2}, \\
h_{7} = h_{10}
=&
\sum_{i}
\left \{
- \eta^{*}(\phi(Z)) f^{i} (\nabla_{Z} (\phi e_{i}))
+
f^{i} (\phi (Z)) \eta^{*} (\nabla_{Z} (\phi e_{i}))
\right \} \rho_{\phi}[\iota]^{2}\\
=&
\sum_{i}
\left \{
\eta^{*}(\phi(Z)) (\eta (e_{i}) f^{i}(Z) - e^{i} (\nabla_{Z} e_{i})) \right. \\
&\left. +
e^{i} (Z)
\left( g(Z, e_{i}) - \eta (e_{i}) \eta^{*} (Z) + \eta^{*} (\phi (\nabla_{Z} e_{i})) \right)
\right \} \rho_{\phi}[\iota]^{2}\\
=&
\left \{
\eta^{*}(\phi(Z))^{2}
-\eta^{*}(\phi(Z)) \sum_{i} e^{i} (\nabla_{Z} e_{i}) \right. \\
&
\left.
+g(Z, \pi_{L} Z) - \eta (\pi_{L} Z) \eta^{*}(Z) + \sum_{i} e^{i}(Z) \eta^{*} (\phi (\nabla_{Z} e_{i}))
\right \} \rho_{\phi}[\iota]^{2}.
\end{align*}
Compute $h_{i}$'s in class 4 to obtain
\begin{align*}
h_{4} = h_{8}
=& \sum_{i, j} \left \{
e^{i} (\nabla_{Z} e_{i}) f^{j} (\nabla_{Z} (\phi e_{j}))
-
f^{j} (\nabla_{Z} e_{i}) e^{i} (\nabla_{Z} (\phi e_{j}))
\right \}
\rho_{\phi}[\iota]^{2}\\
=&
\sum_{i, j} \left \{
e^{i} (\nabla_{Z} e_{i}) ( - \eta (e_{j}) f^{j}(Z) +e^{j} (\nabla_{Z} e_{j})) \right. \\
&\left. -
f^{j} (\nabla_{Z} e_{i})
\left( - \eta (e_{j}) e^{i}(Z) + e^{i} (\phi (\nabla_{Z} e_{j})) \right)
\right \}
\rho_{\phi}[\iota]^{2}\\
=&
\left \{
- \eta^{*} (\phi (Z))
\sum_{i} e^{i} (\nabla_{Z} e_{i})
+
\left(\sum_{i} e^{i} (\nabla_{Z} e_{i}) \right)^{2} \right.\\
& \left.
+
\sum_{i} \eta^{*} (\phi (\nabla_{Z} e_{i}))e^{i}(Z) +
\sum_{i, j} f^{i}(\nabla_{Z} e_{j}) f^{j} (\nabla_{Z} e_{i})
\right \}
\rho_{\phi}[\iota]^{2}.
\end{align*}
Then by a direct computation, we obtain
\begin{align*}
\left.
\frac{\partial^{2}}{\partial t^{2}} \rho_{\phi}(t) \right|_{t=0}
=&
\frac{1}{2 \rho_{\phi}[\iota]}
\left(
\left.
\frac{\partial^{2} (\rho_{\phi}(t)^{2})}{\partial t^{2}} \right|_{t=0}
- 2
\left(
\frac{\partial \rho_{\phi}(t)}{\partial t}
\right)^{2}
\right)\\
=&
\frac{1}{2 \rho_{\phi}[\iota]}
\left(
\sum_{i=1}^{11} h_{i}
- 2
\left(
\sum_{i} e^{i} (\nabla_{Z} e_{i}) - \eta^{*} (\phi (Z))
\right)^{2} \rho_{\phi}[\iota]^{2}
\right) \\
=&
\left \{
-2 \eta^{*} (\phi (Z)) \sum_{i} e^{i} (\nabla_{Z} e_{i}) \right. \\
&- \sum_{i, j} e^{i} (\nabla_{Z} e_{j}) e^{j} (\nabla_{Z} e_{i})
+ \sum_{i, j} f^{i} (\nabla_{Z} e_{j}) f^{j} (\nabla_{Z} e_{i}) \\
&+
\sum_{i} e^{i} (\nabla_{Z} \nabla_{Z} e_{i}) \\
&- \eta(\pi_{L} Z) \eta^{*}(Z) - \eta^{*}(\phi (\nabla_{Z} Z)) - g(Z, \pi_{\phi} Z)\\
&-2 \sum_{i} f^{i}(Z) \eta^{*}(\nabla_{Z} e_{i})
+2 \sum_{i} e^{i}(Z) \eta^{*}(\phi (\nabla_{Z} e_{i})) \\
&\left. + \left( \sum_{i} e^{i} (\nabla_{Z} e_{i}) \right)^{2}
\right \}
\rho_{\phi}[\iota].
\end{align*}
Here,
take the normal coordinate $(x_{1}, \cdots, x_{n})$
at $x \in L$ in the proof of Proposition \ref{1st var phi vol form}.
Then we have
\begin{align*}
\nabla_{Z} e_{i} = \nabla_{e_{i}} Z, \qquad
\nabla_{Z} \nabla_{Z} e_{i} = R(Z, e_{i})Z + \nabla_{e_{i}} \nabla_{Z} Z
\end{align*}
at $x$, which give the first statement of Proposition \ref{2nd var phi vol form}.
When $Z=X \in \mathfrak{X}(L)$,
Lemma \ref{diff equiv aff Leg} implies that
\begin{align*}
\left.
\frac{\partial^{2}}{\partial t^{2}} {\rm vol}_{\phi}[\iota_{t}] \right|_{t=0}
&=
L_{X} L_{X} {\rm vol}_{\phi}[\iota]\\
&=
d (i(X) d (i(X) \rho_{\phi}[\iota] {\rm vol}_{\iota^{*}g}))
=
{\rm div} ({\rm div}(\rho_{\phi}[\iota] X)X) {\rm vol}_{\iota^{*}g},
\end{align*}
which gives the second statement.
\end{proof}
We next prove a lemma needed for the proof of
Theorem \ref{2nd var phi Vol}.
\begin{lem} \label{computation 2nd var 1}
Suppose that
$Z= \phi Y + f\xi$,
where $Y \in \mathfrak{X}(L)$ and $f \in C^{\infty}(L)$.
Then we have
\begin{align*}
\sum_{i, j} e^{i} (\nabla_{e_{j}} Z) e^{j} (\nabla_{e_{i}} Z)
&=
n \eta(Y)^{2} +
2 \eta(Y) \sum_{i} f^{i} (\nabla_{e_{i}} Y)
+
\sum_{i, j} f^{i} (\nabla_{e_{j}} Y) f^{j} (\nabla_{e_{i}} Y),\\
\sum_{i, j} f^{i} (\nabla_{e_{j}} Z) f^{j} (\nabla_{e_{i}} Z)
&=
n f^{2}
- 2 f \sum_{i} e^{i} (\nabla_{e_{i}} Y)
+
\sum_{i, j} e^{i} (\nabla_{e_{j}} Y) e^{j} (\nabla_{e_{i}} Y),
\end{align*}
\begin{align*}
\sum_{i} e^{i}(R(Z, e_{i})Z + R(Y, e_{i})Y)
=
- {\rm Ric} (Y, Y) + g(Y,Y)+ n \eta (Y)^{2} - n f^{2},
\end{align*}
where
${\rm Ric}$ is the Ricci curvature of $(M,g)$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{computation 2nd var 1}]
The first two equations follow by the next equation:
\begin{align*}
\nabla_{X} Z = g(X, Y) \xi - \eta(Y) X
+ \phi (\nabla_{X} Y) + X(f) \xi -f \phi (X),
\end{align*}
where $X \in \mathfrak{X}(L)$ is a vector field on $L$.
We prove the third equation.
By Lemma \ref{Sasaki eq2},
we see that
\begin{align*}
\sum_{i} e^{i} (R(Z, e_{i})Z)
=&
\sum_{i} e^{i} (R(Z, e_{i})(\phi Y + f \xi))\\
=&
\sum_{i}
e^{i}
\left( \phi (R(Z, e_{i})Y) - g(e_{i}, Y) \phi (Z)
+g(\phi Z, Y) e_{i} \right. \\
&\left. + g(Z, Y) \phi (e_{i}) - g(\phi e_{i}, Y) Z
+ f (\eta (e_{i}) Z - \eta (Z) e_{i})
\right)\\
=&
\sum_{i} e^{i} \circ \phi (R(Z, e_{i})Y)
+ (-n+1) g(Y,Y)
+n \eta(Y)^{2} - n f^{2}.
\end{align*}
The first term is computed as
\begin{align*}
\sum_{i} e^{i} \circ \phi (R(Z, e_{i})Y)
=
\sum_{i} g(\pi_{L} \phi R(Z, e_{i}) Y, e_{i})
=
- \sum_{i} g(R(Y, \phi \pi_{L}^{t} e_{i}) Z, e_{i}),
\end{align*}
\begin{align*}
R(Y, \phi \pi_{L}^{t} e_{i}) Z
=&
\phi (R(Y, \phi \pi_{L}^{t} e_{i})Y) - g(\phi \pi_{L}^{t} e_{i}, Y) \phi (Y)
+ g(\phi Y, Y) \phi \pi_{L}^{t} e_{i} \\
&+ g(Y, Y) \phi^{2} \pi_{L}^{t} e_{i}
- g(\phi^{2} \pi_{L}^{t} e_{i}, Y) Y
+ f(\eta(\phi \pi_{L}^{t} e_{i}) Y - \eta(Y) \phi \pi_{L}^{t} e_{i})\\
=&
\phi (R(Y, \phi \pi_{L}^{t} e_{i})Y)
- g(Y, Y) \pi_{L}^{t} e_{i}
+ g(e_{i}, Y) Y
- f \eta(Y) \phi \pi_{L}^{t} e_{i}.
\end{align*}
Thus we obtain
\begin{align*}
\sum_{i} e^{i} \circ \phi (R(Z, e_{i})Y)
=&
\sum_{i}
g(-\phi (R(Y, \phi \pi_{L}^{t} e_{i})Y)
+ g(Y, Y) \pi_{L}^{t} e_{i}
- g(e_{i}, Y) Y, e_{i})\\
=&
\sum_{i}
g(R(Y, \phi e_{i}) Y, \phi \pi_{L}^{t} e_{i})
+ (n-1) g(Y, Y)\\
=&
- \sum_{i} g(\pi_{L} \phi R(Y, \phi e_{i})Y, e_{i})
+ (n-1) g(Y, Y)\\
=&
\sum_{i} f^{i} (R(Y, \phi e_{i})Y) + (n-1) g(Y, Y).
\end{align*}
Since
\begin{align*}
\eta^{*} (R(\xi, Y)Y)
&=
g(\pi_{\xi} R(\xi, Y)Y, \xi)
=
g(R(Y, \pi_{\xi}^{t} \xi)\xi, Y)\\
&=
g(\eta(\pi_{\xi}^{t} \xi)Y - \eta (Y) \pi_{\xi}^{t} \xi, Y)
=
g(Y, Y),
\end{align*}
it follows that
\begin{align*}
&\sum_{i} e^{i} (R(Z, e_{i})Z + R(Y, e_{i})Y) \\
=&
- \sum_{i} (f^{i} (R(\phi e_{i}, Y)Y) + e^{i} (R(e_{i},Y)Y))
+ (n-1) g(Y,Y) \\
&+ (-n+1) g(Y,Y)
+ n \eta (Y)^{2} - n f^{2}\\
=&
- {\rm Ric} (Y, Y) + g(Y,Y) + n \eta (Y)^{2} - n f^{2}.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{2nd var phi Vol}]
Set
$Z= \phi Y + f\xi$.
By Proposition \ref{2nd var phi vol form}
and
Lemma
\ref{computation 2nd var 1},
we have
\begin{align*}
\left. \frac{\partial^{2}}{\partial t^{2}} \rho_{\phi}(t) \right|_{t=0}
=&
\rho_{\phi}[\iota]
\left \{
-2 \eta(Y) \sum_{i} e^{i} (\nabla_{e_{i}}Z) \right. \\
&- \sum_{i, j} e^{i} (\nabla_{e_{j}} Z) e^{j} (\nabla_{e_{i}} Z)
+ \sum_{i, j} f^{i} (\nabla_{e_{j}} Z) f^{j} (\nabla_{e_{i}} Z) \\
&+
\sum_{i} e^{i} (R(Z, e_{i})Z + \nabla_{e_{i}} \nabla_{Z} Z)
+ \left( \sum_{i} e^{i} (\nabla_{e_{i}} Z) \right)^{2}
\\
&
\left.
- \eta^{*}(\phi (\nabla_{Z} Z))
- g(Z, \phi Y) -2 \eta^{*}(\nabla_{Y} Z)
\right \}\\
=&
- {\rm div} ({\rm div} (\rho_{\phi}[\iota] Y) Y)\\
&+\rho_{\phi}[\iota]
\left \{
-2 \eta(Y) \sum_{i} e^{i} (\nabla_{e_{i}} Z)
-2 \eta(Y) \sum_{i} f^{i} (\nabla_{e_{i}} Y) \right. \\
&
-2 f \sum_{i} e^{i} (\nabla_{e_{i}} Y)
- {\rm Ric}(Y, Y)
+ \sum_{i} e^{i}(\nabla_{e_{i}}(\nabla_{Z} Z + \nabla_{Y} Y))\\
&
+
\left( \sum_{i} e^{i} (\nabla_{e_{i}} Z) \right)^{2}
+
\left( \sum_{i} e^{i} (\nabla_{e_{i}} Y) \right)^{2}\\
&
\left.
- \eta^{*}(\phi (\nabla_{Z} Z)) + \eta^{*}(\phi (\nabla_{Y} Y))
-2 \eta^{*}(\nabla_{Y} Z)
+ \eta (Y)^{2}
\right \}.
\end{align*}
From Corollary \ref{1st var cor}, we have
\begin{align*}
\sum_{i} e^{i} (\nabla_{e_{i}} Z) = - g(Y, H_{\phi}) + \eta (Y), \qquad
\sum_{i} f^{i} (\nabla_{e_{i}} Y) = g(Y, H_{\phi}) - (n+1) \eta (Y),
\end{align*}
which imply that
\begin{align} \label{comp 2nd var 1}
-2 \eta (Y) \sum_{i} e^{i} (\nabla_{e_{i}} Z)
-2 \eta (Y) \sum_{i} f^{i} (\nabla_{e_{i}} Y)
=
2n \eta(Y)^{2}.
\end{align}
Since we know that
\begin{align*}
\nabla_{Z} Z
&=
- \eta(Y) Z + \phi (\nabla_{Z} Y) + f Y + Z(f) \xi, \\
\nabla_{Y} Y
&=
- \phi \nabla_{Y} (\phi Y) + \eta (\nabla_{Y} Y) \xi - \eta(Y) \phi (Y), \\
\nabla_{Z} Z + \nabla_{Y} Y
&=
\phi ([Z, Y]) -2 \eta(Y) \phi (Y)
+2 f Y + (Z(f)+ \eta(\nabla_{Y} Y) -2 f \eta (Y)) \xi,
\end{align*}
we see by Corollary \ref{1st var cor} that
\begin{align*}
- \eta^{*}(\phi (\nabla_{Z} Z))
=
\eta (Y)^{2} - \eta (\pi_{L} (\nabla_{Z} Y)), \qquad
\eta^{*}(\phi (\nabla_{Y} Y))
=
- \eta (\pi_{L} (\nabla_{Y} Z)) - \eta (Y)^{2}.
\end{align*}
Using the equation
$\eta (\nabla_{Y} Z) = Y(f) + g(Y, Y) - \eta (Y)^{2}$,
it follows that
\begin{align} \label{comp 2nd var 2}
\begin{split}
- \eta^{*}(\phi (\nabla_{Z} Z)) + \eta^{*}(\phi (\nabla_{Y} Y))
-2 \eta^{*}(\nabla_{Y} Z)\\
=
\eta(\pi_{L} [Y, Z])
-2 Y(f) -2 g(Y,Y) +2 \eta(Y)^{2}.
\end{split}
\end{align}
We can also compute
\begin{align} \label{comp 2nd var 3}
\begin{split}
\sum_{i} e^{i}(\nabla_{e_{i}}(\nabla_{Z} Z + \nabla_{Y} Y))
=&
\frac{{\rm div}(\rho_{\phi}[\iota] (\pi_{L} \phi ([Z, Y]) + 2 f Y))}{\rho_{\phi}[\iota]}\\
&+
g(- \pi_{L} [Z, Y] + 2 \eta (Y) Y, H_{\phi}) \\
&+ \eta (\pi_{L} [Z, Y]) -2 \eta (Y)^{2}.
\end{split}
\end{align}
Note that
\begin{align} \label{comp 2nd var 4}
f {\rm div} (\rho_{\phi}[\iota] Y) + \rho_{\phi}[\iota] Y(f)
=
{\rm div} (f \rho_{\phi}[\iota] Y).
\end{align}
Then by
(\ref{comp 2nd var 1}), (\ref{comp 2nd var 2}), (\ref{comp 2nd var 3}),
(\ref{comp 2nd var 4})
and Corollary \ref{1st var cor},
we obtain
\begin{align*}
\left.
\frac{\partial^{2}}{\partial t^{2}} \rho_{\phi}(t) \right|_{t=0}
=&
- {\rm div} ({\rm div} (\rho_{\phi}[\iota] Y) Y)
+ {\rm div}(\rho_{\phi}[\iota] (\pi_{L} \phi ([Z, Y]) + 2f Y))
-2 {\rm div} (f \rho_{\phi}[\iota] Y)
\\
&+\rho_{\phi}[\iota]
\left \{
(2n+2) \eta(Y)^{2} - {\rm Ric}(Y, Y) -2 g(Y, Y) \right. \\
&\left.
- g(\pi_{L} [Z, Y], H_{\phi})
+
g(Y, H_{\phi})^{2}
+
\frac{({\rm div}(\rho_{\phi}[\iota] Y))^{2}}{\rho_{\phi}[\iota]^{2}}
\right \},
\end{align*}
which implies Theorem \ref{2nd var phi Vol}.
\end{proof}
We now investigate the relation
between Theorem \ref{2nd var phi Vol}
and previous works.
Define the standard Riemannian volume of $\iota$ by
${\rm Vol}[\iota] = \int_{L} {\rm vol}_{\iota^{*}g}$.
\begin{rem} \label{relation Lmin}
We call a Legendrian immersion $\iota$ Legendrian-minimal Legendrian
if it is a critical point of the standard Riemannian volume functional
under Legendrian variations.
Suppose that $\iota$ is Legendrian-minimal Legendrian
and all of $\iota_{t}$'s are Legendrian in Theorem \ref{2nd var phi Vol}.
Then for any $t$, the $\phi$-volume agrees with the standard Riemannian volume
and the second variation formula of Theorem \ref{2nd var phi Vol}
is given by
\begin{align*}
\left. \frac{d^{2}}{dt^{2}} {\rm Vol}[\iota_{t}] \right|_{t=0} = &
\int_{L}
\left(
\frac{1}{4} (\Delta f)^{2} - 2 g(Y, Y) - {\rm Ric}(\phi Y, \phi Y)
\right. \\
&
\left.
-2 g(\nabla_{Y} Y, H)
+ g(Y, H)^{2}
\right)
{\rm vol}_{\iota^{*}g},
\end{align*}
where $H$ is the mean curvature vector of $L$
and $\Delta$ is the Laplacian acting on $C^{\infty}(L)$.
This formula agrees with \cite[Theorem 1.1]{Kajigaya}.
Thus
when $\iota$ is minimal Legendrian and all of $\iota_{t}$'s are Legendrian,
it agrees with \cite[Theorem 1.1]{Ono}.
\end{rem}
\begin{proof}
Since $\iota$ is Legendrian, we see that
\begin{align} \label{Lmin comp 1}
\eta (Y)=0, \qquad
\rho_{\phi}[\iota] = 1, \qquad
H_{\phi} = -\phi H,
\end{align}
by Lemma \ref{equality rho phi} and Remark \ref{MC in Leg case}.
Since all of $\iota_{t}$'s are Legendrian, we have $L_{Z} \eta = 2 g(Y, \cdot) + df =0$,
which implies that
\begin{align} \label{Lmin comp 2}
{\rm div}(Y) = \frac{1}{2} \Delta f.
\end{align}
A direct computation gives
\begin{align} \label{Lmin comp 3}
\begin{split}
{\rm Ric}(Y, Y) &= {\rm Ric}(\phi Y, \phi Y), \\
-g(\pi_{L}[Z, Y], H_{\phi}) &= g(\nabla_{Z} Y - \nabla_{Y} Z, \phi H), \\
g(\nabla_{Z} Y , \phi H) &= - g(\nabla_{Z}Z, H) \\
g(\nabla_{Y} Z, \phi H) &= g(\nabla_{Y}Y, H).
\end{split}
\end{align}
By \cite[Lemma 4.1]{Kajigaya}, we have
\begin{align} \label{Lmin comp 4}
\int_{L} g(\nabla_{Z} Z, H) {\rm vol}_{\iota^{*}g}
=
\int_{L} g(\nabla_{Y} Y, H) {\rm vol}_{\iota^{*}g},
\end{align}
where we use
an integration by parts argument
and
the Legendrian-minimality of $\iota$,
which is equivalent to ${\rm div}(\phi H) = 0$
\cite[Theorem 3.6]{Kajigaya}.
Then we obtain the statement by
(\ref{Lmin comp 1}), (\ref{Lmin comp 2}), (\ref{Lmin comp 3}) and (\ref{Lmin comp 4}).
\end{proof}
Now we prove Theorems \ref{thm stablity}, \ref{thm convexity} and \ref{thm obstruction}.
\begin{proof}[Proof of Theorem \ref{thm stablity}]
Let $\iota: L^{n} \hookrightarrow M^{2n+1}$ be
a $\phi$-minimal affine Legendrian submanifold.
By definition, we have $H_{\phi} = 0$.
Since $M^{2n+1}$ is an $\eta$-Einstein Sasakian manifold with the
$\eta$-Ricci constant $A$,
we see from Definition \ref{def of eta Einstein} that
\begin{align} \label{eta Einstein eq}
(2n+2) \eta (Y)^{2} - 2 g(Y, Y) - {\rm Ric}(Y, Y)
= (A+2) \left( \eta(Y)^{2} - g(Y,Y) \right)
\end{align}
for $Y \in TL$.
By the third equation of Definition \ref{def of Sasakian 1},
we have $\eta(Y)^{2} - g(Y,Y) \leq 0$.
Then Theorem \ref{2nd var phi Vol}
implies Theorem \ref{thm stablity}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm convexity}]
Recall Lemma \ref{def of geodesic}.
Let $\{ L_{t} \} \subset \mathcal{A}$ be a geodesic.
Then there exists
a curve
of affine Legendrian embeddings $\{ \iota_{t} \}$, a fixed vector field $Y \in \mathfrak{X}(L)$
and a function $f \in C^{\infty}(L)$
such that
$\pi (\iota_{t}) = L_{t}$,
\begin{align} \label{Y Z commute}
\frac{d \iota_{t}}{dt} = \phi (\iota_{t})_{*} Y + f \xi \circ \iota_{t} \qquad \mbox{and} \qquad
[ (\iota_{t})_{*} Y, \phi (\iota_{t})_{*} Y + f \xi \circ \iota_{t}] = 0.
\end{align}
Then
by Theorem \ref{2nd var phi Vol}, (\ref{eta Einstein eq}) and $(\ref{Y Z commute})$,
we obtain Theorem \ref{thm convexity}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm obstruction}]
Let $\iota: L^{n} \hookrightarrow M^{2n+1}$ be a $\phi$-minimal affine
Legendrian submanifold.
Since $M^{2n+1}$ is an $\eta$-Einstein Sasakian manifold with the
$\eta$-Ricci constant $A > -2$,
we have
\begin{align*}
(2n+2) \eta (Y)^{2} - 2 g(Y, Y) - {\rm Ric}(Y, Y)
= (A+2) \left( \eta(Y)^{2} - g(Y,Y) \right) <0
\end{align*}
for any $0 \neq Y \in TL$
by the third equation of Definition \ref{def of Sasakian 1} and Definition \ref{def of aff Leg}.
Take a 1-form
$0 \neq\alpha\in\Omega^{1}(L)$ such that
$d^{*} \alpha= 0$.
For example, set
$\alpha = d^{*} \beta$ for a 2-form $\beta$ with $d^{*} \beta \neq 0$.
Define the vector field $Y \in \mathfrak{X}(L)$ on $L$ via
\begin{align*}
\iota^{*} g (\rho_{\phi}[\iota] Y, \cdot) = \alpha.
\end{align*}
Then we easily see that ${\rm div}(\rho_{\phi}[\iota] Y) = - d^{*} \alpha = 0$.
By Theorem \ref{2nd var phi Vol},
the second variation
of the $\phi$-volume
for this $Y$ is given by
\begin{align*}
\left. \frac{d^{2}}{d t^{2}}
\int_{L} {\rm vol}_{\phi} [\iota_{t}] \right|_{t=0}
=
\int_{L}
\left(
(A+2) \left( \eta(Y)^{2} - g(Y,Y) \right)
\right)
{\rm vol}_{\phi}[\iota] < 0,
\end{align*}
which implies that $\iota: L \hookrightarrow M$ is not $\phi$-stable.
\end{proof}
\section{$\phi$-volume in Sasaki-Einstein manifolds}
\subsection{$J$-volume in Calabi-Yau manifolds} \label{J-vol in CY}
\begin{definition}
Let $(X, h, J, \omega)$ be a real $2n$-dimensional K\"{a}hler manifold with a K\"{a}hler metric $h$,
a complex structure $J$ and an associated K\"{a}hler form $\omega$.
Suppose that there exists
a nowhere vanishing holomorphic $(n, 0)$-form $\Omega$ on $X$ satisfying
\begin{align} \label{CYcondition}
\omega^{n}/n! = (-1)^{n(n-1)/2} (i/2)^{n} \Omega \wedge \bar{\Omega}.
\end{align}
Then the quintuple $(X, h, J, \omega, \Omega)$ is called a
{\bf Calabi-Yau manifold}.
\end{definition}
\begin{rem}
The condition (\ref{CYcondition}) implies that
$h$ is Ricci-flat and
$\Omega$ is parallel with respect to the Levi-Civita connection of $h$.
\end{rem}
In Section \ref{J-vol in CY},
we suppose that $(X, h, J, \omega, \Omega)$ is a real $2n$-dimensional Calabi-Yau manifold.
\subsubsection{Special Lagrangian geometry}
Define the {\bf Lagrangian angle} $\theta_{N}: N \rightarrow \mathbb{R}/ 2 \pi \mathbb{Z}$
of a Lagrangian immersion $f: N \hookrightarrow X$ by
\begin{align*}
f^{*} \Omega = e^{i \theta_{N}} {\rm vol}_{f^{*}h}.
\end{align*}
This is well-defined because
$
|f^{*} \Omega (e_{1}, \cdots, e_{n})| = 1
$
for any orthonormal basis $\{ e_{1}, \cdots, e_{n} \}$ of $T_{x}N$ for $x\in N$,
which is proved in \cite[Theorem I\hspace{-.1em}I\hspace{-.1em}I.1.7]{HarveyLawson}.
It also implies that
Re$(e^{i \theta} \Omega)$ defines a calibration on $X$ for any $\theta \in \mathbb{R}$.
A Lagrangian immersion $f: N \hookrightarrow X$
is called
{\bf special Lagrangian}
if $f^{*} {\rm Re} \Omega = {\rm vol}_{f^{*}h}$,
namely, the Lagrangian angle is $0$.
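For instance, in the flat Calabi-Yau manifold $\mathbb{C}^{n}$ with the
standard K\"{a}hler structure and
$\Omega = dz_{1} \wedge \cdots \wedge dz_{n}$,
the real subspace $\mathbb{R}^{n} \subset \mathbb{C}^{n}$ is special
Lagrangian: $\Omega$ restricts on $\mathbb{R}^{n}$ to
$dx_{1} \wedge \cdots \wedge dx_{n} = {\rm vol}$,
so the Lagrangian angle vanishes identically.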
Minimal Lagrangian submanifolds
are characterized
in terms of special Lagrangian submanifolds
as follows.
For example, see \cite[Lemma 8.1]{LotayPacini}.
\begin{lem} \label{equivalent min Lag}
Let $f: N \hookrightarrow X$ be an immersion
of an oriented connected $n$-dimensional manifold $N$.
The following are equivalent.
\begin{enumerate}[(a)]
\item $f^{*} {\rm Re}(e^{i \theta} \Omega) = {\rm vol}_{f^{*} h}$ for some $\theta \in \mathbb{R}$;
\item $f^{*} \omega = 0$ and $f^{*} {\rm Im}(e^{i \theta} \Omega) = 0$ for some $\theta \in \mathbb{R}$;
\item $f^{*} \omega = 0$ and the Lagrangian angle $\theta_{N}$ is constant;
\item $f: N \hookrightarrow X$ is minimal Lagrangian.
\end{enumerate}
\end{lem}
\subsubsection{Special affine Lagrangian geometry}
By using a $J$-volume,
we can generalize the notion of calibrations.
\begin{lem} [{\cite[Lemma 8.2]{LotayPacini}}] \label{J calibration}
Let $f: N \hookrightarrow X$ be an affine Lagrangian immersion
of an oriented $n$-dimensional manifold $N$.
Then we have
\begin{align*}
f^{*} {\rm Re}\Omega \leq {\rm vol}_{J}[f] \leq {\rm vol}_{f^{*}h}.
\end{align*}
The equality holds
\begin{itemize}
\item
in the first relation if and only if $f^{*} {\rm Im}\Omega =0$ and $f^{*} {\rm Re}\Omega > 0$,
\item
and in the second relation if and only if $f$ is Lagrangian.
\end{itemize}
\end{lem}
Following \cite[Section 7.1]{LotayPacini},
define the {\bf affine Lagrangian angle} $\theta_{N}: N \rightarrow \mathbb{R}/ 2 \pi \mathbb{Z}$
of an affine Lagrangian immersion $f: N \hookrightarrow X$ by
\begin{align*}
f^{*} \Omega = e^{i \theta_{N}} {\rm vol}_{J}[f].
\end{align*}
This is well-defined because
\begin{align} \label{norm Omega rho J}
|f^{*} \Omega (e_{1}, \cdots, e_{n})| = \rho_{J}[f]
\end{align}
for any orthonormal basis $\{ e_{1}, \cdots, e_{n} \}$ of $T_{x}N$ for $x\in N$.
The equation (\ref{norm Omega rho J})
is proved in \cite[Lemma 7.2]{LotayPacini}.
We can also prove this directly by a pointwise calculation.
We call an affine Lagrangian immersion $f: N \hookrightarrow X$
{\bf special affine Lagrangian}
if $f^{*} {\rm Re}\Omega = {\rm vol}_{J}[f] $,
namely, the affine Lagrangian angle is $0$.
\begin{rem}
When $f: N \hookrightarrow X$ is Lagrangian, we have
${\rm vol}_{J}[f] = {\rm vol}_{f^{*}h}$ by Lemma \ref{equality rho J}.
Then
the affine Lagrangian angle agrees with
the standard Lagrangian angle.
\end{rem}
We have an analogue of Lemma \ref{equivalent min Lag}
given in \cite[Lemma 8.3]{LotayPacini}.
\begin{lem} \label{equivalent J min Lag}
Let $f: N \hookrightarrow X$ be an affine Lagrangian immersion
of an oriented connected real $n$-dimensional manifold $N$.
The following are equivalent.
\begin{enumerate}[(a)]
\item $f^{*} {\rm Re}(e^{i \theta} \Omega) = {\rm vol}_{J}[f]$ for some $\theta \in \mathbb{R}$;
\item $f^{*} {\rm Im}(e^{i \theta} \Omega) = 0$ for some $\theta \in \mathbb{R}$;
\item the affine Lagrangian angle $\theta_{N}$ is constant;
\item $f: N \hookrightarrow X$ is a critical point for the $J$-volume.
\end{enumerate}
\end{lem}
\begin{proof}
Define $H_{J} \in C^{\infty}(N, J f_{*} TN)$ by
\begin{align} \label{def of H_J}
H_{J} = -J ((J {\rm tr}_{N} (\pi_{J}^{t} \nabla^{X} \pi_{N}^{t}))^{\top}),
\end{align}
where $\nabla^{X}$ is the Levi-Civita connection of $h$,
$\top: f^{*} TX \rightarrow f_{*} TN$ is the tangential projection defined by $h$,
and
$\pi_{N}^{t}$ and $\pi_{J}^{t}$
are the transposed operators of the canonical projections
$\pi_{N}: f^{*} TX \rightarrow f_{*} TN$ and
$\pi_{J}: f^{*} TX \rightarrow J f_{*} TN$
defined via the decomposition $f^{*} TX = f_{*} TN \oplus J f_{*} TN$,
respectively.
By \cite[Proposition 5.2]{LotayPacini},
$f: N \hookrightarrow X$ is a critical point for the $J$-volume if and only if
$H_{J} = 0$.
By \cite[Corollary 7.4]{LotayPacini}, we have
\begin{align} \label{H_J angle}
H_{J} = J (d \theta_{N})^{\sharp},
\end{align}
where $\sharp$ is the metric dual with respect to $f^{*} h$.
Then by (\ref{norm Omega rho J}) and (\ref{H_J angle}),
we see the equivalence.
\end{proof}
\subsection{$\phi$-volume in Sasaki-Einstein manifolds} \label{phi-vol in SE}
The odd dimensional analogue of a Calabi-Yau manifold
is a Sasaki-Einstein manifold.
The following is a well-known fact.
For example, see \cite[Lemma 11.1.5]{BoyerGalicki}.
\begin{lem}
Let $(M, g, \eta, \xi, \phi)$ be a $(2n+1)$-dimensional Sasakian manifold.
If $g$ is Einstein, the cone $(C(M), \bar{g})$ is Ricci-flat.
\end{lem}
Thus the canonical bundle of $C(M)$ is diffeomorphically trivial.
In addition,
suppose that
the cone $C(M)$ is a Calabi-Yau manifold,
namely,
there exists
a nowhere vanishing holomorphic $(n+1, 0)$-form $\Omega$ on $C(M)$ such that
\begin{align} \label{CYcondition cone}
\bar{\omega}^{n+1}/(n+1)! = (-1)^{n(n+1)/2} (i/2)^{n+1} \Omega \wedge \overline{\Omega},
\end{align}
where $\bar{\omega} = \bar{g}(J \cdot, \cdot)$ is the associated K\"ahler form on $C(M)$.
Then
the canonical bundle of $C(M)$ is holomorphically trivial.
\begin{lem}[{\cite[Corollary 11.1.8]{BoyerGalicki}}]
If $M$ is a compact simply-connected Sasaki-Einstein manifold,
$C(M)$ is a Calabi-Yau manifold.
\end{lem}
\begin{rem}
The holomorphic volume form $\Omega$ is not unique.
For any $\theta \in \mathbb{R}$,
$e^{i \theta} \Omega$ also satisfies (\ref{CYcondition cone}).
\end{rem}
In Section \ref{phi-vol in SE}, we suppose that
$M$ is a $(2n+1)$-dimensional Sasaki-Einstein manifold with
a Calabi-Yau structure $(\bar{g}, J, \bar{\omega}, \Omega)$ on $C(M)$.
Define a complex valued $n$-form on $M$ by
\begin{align*}
\psi = u^{*} \left( i \left(r \frac{\partial}{\partial r} \right) \Omega \right),
\end{align*}
where $u: M =\{ 1 \} \times M \hookrightarrow C(M)$ is an inclusion.
Note that we can recover $\Omega$ from $\psi$ via
\begin{align} \label{Omega psi}
\Omega = (dr - i \eta) \wedge r^{n} \psi.
\end{align}
\subsubsection{Special Legendrian geometry}
Define the {\bf Legendrian angle} $\theta_{L}: L \rightarrow \mathbb{R}/ 2 \pi \mathbb{Z}$
of a Legendrian immersion $\iota: L \hookrightarrow M$ by
\begin{align*}
\iota^{*} \psi = e^{i \theta_{L}} {\rm vol}_{\iota^{*} g}.
\end{align*}
Note that
the Legendrian angle $\theta_{L}$ of a Legendrian immersion $\iota: L \hookrightarrow M$
agrees with the Lagrangian angle
of the induced Lagrangian immersion $\bar{\iota}: C(L) \hookrightarrow C(M)$ given by (\ref{induced imm}).
\begin{definition}
Let $L$ be an oriented $n$-dimensional manifold
admitting an immersion $\iota: L \hookrightarrow M$.
An immersion $\iota: L \hookrightarrow M$ is called {\bf special Legendrian} if
$\iota^{*} {\rm Re} \psi = {\rm vol}_{\iota^{*}g}$.
This is equivalent to the condition that
the induced immersion $\bar{\iota}: C(L) \hookrightarrow C(M)$ given by (\ref{induced imm})
is special Lagrangian.
\end{definition}
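For instance, let $M = S^{2n+1}$ be the round sphere, so that
$C(M) = \mathbb{C}^{n+1} \setminus \{ 0 \}$ with
$\Omega = dz_{1} \wedge \cdots \wedge dz_{n+1}$.
Since
$r \partial / \partial r = \sum_{j} (z_{j} \partial_{z_{j}} + \bar{z}_{j} \partial_{\bar{z}_{j}})$,
we have
\begin{align*}
\psi = u^{*} \left( \sum_{j=1}^{n+1} (-1)^{j-1} z_{j}\,
dz_{1} \wedge \cdots \wedge \widehat{dz_{j}} \wedge \cdots \wedge dz_{n+1} \right),
\end{align*}
and the real form $S^{n} = S^{2n+1} \cap \mathbb{R}^{n+1}$ is special
Legendrian: restricted to $S^{n}$, the above form reduces to the standard
volume form of the unit sphere, so
$\iota^{*} {\rm Re}\, \psi = {\rm vol}_{\iota^{*} g}$.
Equivalently, its cone $\mathbb{R}^{n+1} \setminus \{ 0 \}$ is special
Lagrangian in $\mathbb{C}^{n+1}$.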
We have an analogue of Lemmas \ref{equivalent min Lag} and \ref{equivalent J min Lag}.
This is a direct consequence of Lemma \ref{equivalent min Lag}.
Note that $\iota$ is minimal if and only if $\bar{\iota}$ is minimal.
\begin{lem} \label{equivalent min Leg}
Let $\iota: L \hookrightarrow M$ be an immersion
of an oriented connected $n$-dimensional manifold $L$.
The following are equivalent.
\begin{enumerate}[(a)]
\item $\iota^{*} {\rm Re}(e^{i \theta} \psi) = {\rm vol}_{\iota^{*}g}$ for some $\theta \in \mathbb{R}$;
\item $\iota^{*} \eta = 0$ and $\iota^{*} {\rm Im}(e^{i \theta} \psi) = 0$ for some $\theta \in \mathbb{R}$;
\item $\iota^{*} \eta = 0$ and the Legendrian angle $\theta_{L}$ is constant;
\item $\iota: L \hookrightarrow M$ is minimal Legendrian.
\end{enumerate}
\end{lem}
\subsubsection{Special affine Legendrian geometry}
From Lemma \ref{J calibration}, we immediately see the following.
\begin{lem} \label{phi calibration}
Let $\iota: L \hookrightarrow M$ be an affine Legendrian immersion
of an oriented $n$-dimensional manifold $L$.
Then we have
\begin{align*}
\iota^{*} {\rm Re}\psi \leq {\rm vol}_{\phi}[\iota] \leq {\rm vol}_{\iota^{*}g}.
\end{align*}
The equality holds
\begin{itemize}
\item
in the first relation if and only if $\iota^{*} {\rm Im}\psi =0$ and $\iota^{*} {\rm Re}\psi > 0$,
\item
and in the second relation if and only if $\iota$ is Legendrian.
\end{itemize}
\end{lem}
Define the {\bf affine Legendrian angle} $\theta_{L}: L \rightarrow \mathbb{R}/ 2 \pi \mathbb{Z}$
of an affine Legendrian immersion $\iota: L \hookrightarrow M$ by
\begin{align*}
\iota^{*} \psi = e^{i \theta_{L}} {\rm vol}_{\phi}[\iota].
\end{align*}
This is well-defined by (\ref{norm Omega rho J}).
Note that
the affine Legendrian angle $\theta_{L}$ of an affine Legendrian immersion $\iota: L \hookrightarrow M$
agrees with the affine Lagrangian angle
of the induced affine Lagrangian immersion
$\bar{\iota}: C(L) \hookrightarrow C(M)$ given by (\ref{induced imm}).
\begin{definition} \label{def of special aff Leg}
Let $L$ be an oriented $n$-dimensional manifold
admitting an immersion $\iota: L \hookrightarrow M$.
An immersion $\iota: L \hookrightarrow M$ is called {\bf special affine Legendrian} if
$\iota^{*} {\rm Re} \psi = {\rm vol}_{\phi}[\iota]$.
This is equivalent to the condition that
the induced immersion $\bar{\iota}: C(L) \hookrightarrow C(M)$ given by (\ref{induced imm})
is special affine Lagrangian.
\end{definition}
It is natural to expect an analogue of Lemmas \ref{equivalent min Lag},
\ref{equivalent J min Lag} and \ref{equivalent min Leg}.
By Lemma \ref{phi calibration},
we immediately see that the following three conditions are equivalent.
\begin{itemize}
\item $\iota^{*} {\rm Re}(e^{i \theta} \psi) = {\rm vol}_{\phi}[\iota]$ for some $\theta \in \mathbb{R}$;
\item $\iota^{*} {\rm Im}(e^{i \theta} \psi) = 0$ for some $\theta \in \mathbb{R}$;
\item the affine Legendrian angle $\theta_{L}$ is constant.
\end{itemize}
However,
in the affine Legendrian setting,
none of these conditions is equivalent to saying that
$\iota$ is a critical point for the $\phi$-volume.
In fact, we have the following.
\begin{prop} \label{relation angle H phi}
Let $\iota: L \hookrightarrow M$ be an affine Legendrian immersion
of an oriented connected $n$-dimensional manifold $L$.
We have
\begin{align*}
(d \theta_{L})^{\sharp} = - (n+1) \xi^{\top} + H_{\phi},
\end{align*}
where $\sharp$ is the metric dual with respect to $\iota^{*} g$ on $L$,
$\top: \iota^{*} TM \rightarrow TL$ is the tangential projection defined by
the orthogonal decomposition of $\iota^{*} TM$ by the metric $g$
and $H_{\phi}$ is given in Definition \ref{def of H phi}.
\end{prop}
Thus an analogue of Lemmas \ref{equivalent min Lag},
\ref{equivalent J min Lag} and \ref{equivalent min Leg}
holds if
an affine Legendrian immersion
$\iota: L \hookrightarrow M$ is Legendrian.
It may be necessary to modify the notion of the $\phi$-volume
so that an analogue of Lemmas \ref{equivalent min Lag},
\ref{equivalent J min Lag} and \ref{equivalent min Leg} holds,
or
to characterize
the critical points of the $\phi$-volume that are not minimal Legendrian.
\begin{lem} \label{H_J H_phi}
Let $H_{J} \in C^{\infty}(C(L), J \bar{\iota}_{*} TC(L))$
be defined by (\ref{def of H_J}). Then we have
\begin{align*}
(J H_{J})|_{r=1} = (n+1) \xi^{\top} - H_{\phi}.
\end{align*}
\end{lem}
\begin{proof}
By (\ref{def of H_J}),
we have for any vector field $Y$ on $C(L)$
\begin{align*}
\bar{\iota}^{*} \bar{g} (Y, J H_{J}) &=
\sum_{i=1}^{n} \bar{\iota}^{*} \bar{g}
\left( \pi_{C(L)} (\bar{\nabla}_{\frac{e_{i}}{r}} (JY)), \frac{e_{i}}{r} \right)
+ \bar{\iota}^{*} \bar{g} \left (\pi_{C(L)} (\bar{\nabla}_{\frac{\partial}{\partial r}} (JY)),
\frac{\partial}{\partial r} \right) \\
&=
\sum_{i=1}^{n} \bar{\iota}^{*} \bar{g}
\left( \pi_{C(L)} (\bar{\nabla}_{\frac{e_{i}}{r}} (JY)), \frac{e_{i}}{r} \right),
\end{align*}
where $\{ e_{1}, \cdots, e_{n} \}$ is a local orthonormal frame of $TL$
with respect to $\iota^{*}g$.
Using the notation in Section \ref{first var},
we have for any vector field $Y$ on $L$
\begin{align*}
\bar{\nabla}_{e_{i}} (JY) &=
J \left(\nabla_{e_{i}} Y - \frac{\bar{g}(e_{i}, Y)}{r^{2}} \cdot r \frac{\partial}{\partial r} \right),\\
\nabla_{e_{i}} Y &=
\sum_{j=1}^{n} e^{j} (\nabla_{e_{i}} Y) e_{j} + \sum_{j=1}^{n} f^{j} (\nabla_{e_{i}} Y) \phi e_{j}
+ \eta^{*} (\nabla_{e_{i}} Y) \xi \\
&=
\sum_{j=1}^{n} e^{j} (\nabla_{e_{i}} Y) e_{j}
+ \sum_{j=1}^{n} f^{j} (\nabla_{e_{i}} Y)
\left (J e_{j} - \eta (e_{j}) r \frac{\partial}{\partial r} \right)
+ \eta^{*} (\nabla_{e_{i}} Y) \xi.
\end{align*}
Hence we have at the point of $\{ r=1 \}$
\begin{align*}
\sum_{i=1}^{n} \bar{\iota}^{*} \bar{g} (\pi_{C(L)} (\bar{\nabla}_{e_{i}} (JY)), e_{i})
&=
- \sum_{i=1}^{n} f^{i} (\nabla_{e_{i}} Y)\\
&=
\sum_{i=1}^{n} e^{i} (\phi (\nabla_{e_{i}} Y))\\
&=
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} (\phi Y) - (\nabla_{e_{i}} \phi) (Y))\\
&=
\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} (\phi Y)) + n \eta(Y).
\end{align*}
Since we know that
$\sum_{i=1}^{n} e^{i} (\nabla_{e_{i}} (\phi Y)) = - g (Y, H_{\phi}) + \eta (Y)$
by Corollary \ref{1st var cor},
the proof is done.
\end{proof}
\begin{proof}[Proof of Proposition \ref{relation angle H phi}]
By (\ref{H_J angle}), we have
\begin{align*}
d \theta_{C(L)} = - \bar{\iota}^{*} \bar{g} (J H_{J}, \cdot),
\end{align*}
where $\theta_{C(L)}$ is the affine Lagrangian angle of
$\bar{\iota}: C(L) \hookrightarrow C(M)$ given by (\ref{induced imm}).
Since we know that
$u^{*} \theta_{C(L)} = \theta_{L}$ for the inclusion $u: L \hookrightarrow C(L)$,
Lemma \ref{H_J H_phi} implies the statement.
\end{proof}
\begin{rem}
We can also prove Proposition \ref{relation angle H phi} by using the tensors on $L$.
We give an outline of the proof.
Define the $1$-form $\xi_{\phi}$ on $L$ by $u^{*} \xi_{J}$:
the pullback of the Maslov form $\xi_{J}$ of $C(L)$ defined in \cite[Section 3.2]{LotayPacini}
by $u: L \hookrightarrow C(L)$.
Define the complex valued $n$-form $\psi_{L}$ on $L$ by
$\psi_{L} = u^{*} (i(\partial/ \partial r) \Omega_{C(L)})$,
where $\Omega_{C(L)}$ is the canonical section
of the canonical bundle of $C(L)$
defined in \cite[Section 3.2]{LotayPacini}.
Then as in \cite[Lemma 7.2]{LotayPacini}, we have $\psi = e^{i \theta_{L}} \psi_{L}$.
By a direct computation, we have
\begin{align*}
\xi_{\phi} = - \sum_{i=1}^{n} f^{i} (\nabla e_{i}), \qquad
\xi_{\phi}^{\sharp}= (n+1) \xi^{\top} - H_{\phi}.
\end{align*}
Let $\bar{\nabla}$ and $\nabla$ be the Levi-Civita connections of
$\bar{g}$ and $g$, respectively.
By the equations $\bar{\nabla} \Omega = 0$ and (\ref{Omega psi}), we deduce that
\begin{align*}
\nabla \psi = - i \eta \wedge \psi.
\end{align*}
By the equations $\bar{\nabla} \Omega_{C(L)} = i \xi_{J} \otimes \Omega_{C(L)}$
and $\Omega_{C(L)} = (dr - i \eta) \wedge r^{n} \psi_{L}$,
we deduce that
\begin{align*}
\nabla \psi_{L} = i (-\eta \wedge \psi_{L} + \xi_{\phi} \otimes \psi_{L}).
\end{align*}
Then we obtain $\xi_{\phi} = - d \theta_{L}$ from $\psi = e^{i \theta_{L}} \psi_{L}$.
\end{rem}
\section{Moduli space of the special affine Legendrian submanifolds}
In this section, we prove Theorem \ref{smooth moduli affine Leg}.
First, we study the moduli space of submanifolds
characterized by differential forms
following \cite{Moriyama}
to obtain Proposition \ref{general moduli}.
As a corollary of Proposition \ref{general moduli},
we prove Theorem \ref{smooth moduli affine Leg}.
Let $(M, g)$ be a Riemannian manifold
and $L$ be a compact connected manifold
admitting an embedding into $M$.
Denote by $C^{\infty}_{emb}(L, M)$
the set of all embeddings from $L$ to $M$:
\begin{align*}
C^{\infty}_{emb}(L, M) = \{ \iota: L \hookrightarrow M; \iota \mbox{ is an embedding} \}.
\end{align*}
Set $\mathcal{M}(L, M) = C^{\infty}_{emb}(L, M)/ {\rm Diff}^{\infty}(L)$,
where ${\rm Diff}^{\infty}(L)$ is the group of $C^{\infty}$ diffeomorphisms of $L$.
By \cite[Theorem 3.3]{Opozda},
$\mathcal{M}(L, M)$ is a smooth Fr\'{e}chet manifold
modeled on the Fr\'{e}chet vector space
$C^{\infty}(L, \mathcal{N}_{\iota})$ for $\iota \in C^{\infty}_{emb}(L, M)$,
where
$\mathcal{N}_{\iota}$ is any vector bundle transversal to $\iota$
and
$C^{\infty}(L, \mathcal{N}_{\iota})$
is the space of all sections of $\mathcal{N}_{\iota} \rightarrow L$.
Now we choose
a system
$\Phi = (\varphi_{1}, \cdots, \varphi_{m}) \in \oplus_{i=1}^{m} \Omega^{k_{i}} (M)$
of smooth differential forms on $M$.
These forms are not necessarily closed.
\begin{definition} \label{def of phi embed}
The embedding $\iota \in C^{\infty}_{emb}(L, M)$
is called a {\bf $\Phi$-embedding} if
\begin{align*}
\iota^{*} \Phi = (\iota^{*} \varphi_{1}, \cdots, \iota^{*} \varphi_{m}) = 0.
\end{align*}
Define the moduli space
$\mathcal{M}_{L} (\Phi)$
of
$\Phi$-embeddings of $L$ by
\begin{align*}
\mathcal{M}_{L} (\Phi)
=
\{ \iota \in C^{\infty}_{emb}(L, M); \iota^{*} \Phi =0 \}/ {\rm Diff}^{\infty}(L).
\end{align*}
\end{definition}
We want to study the structure of $\mathcal{M}_{L} (\Phi)$.
Fix $\iota \in C^{\infty}_{emb}(L, M)$
satisfying $\iota^{*} \Phi = 0$
and
a vector bundle $\mathcal{N}_{\iota} \rightarrow L$ which is transversal to $\iota$.
Set
\begin{align*}
V_{1} = C^{\infty}(L, \mathcal{N}_{\iota}), \qquad
V_{2} = \oplus_{i=1}^{m} \Omega^{k_{i}} (L) = C^{\infty} (L, \oplus_{i=1}^{m} \wedge^{k_{i}} T^{*}L).
\end{align*}
By the tubular neighborhood theorem there exists a neighborhood
of $L$ in $M$ which is identified with
an open neighborhood $\mathcal{U} \subset \mathcal{N}_{\iota}$ of the zero section by
the exponential map.
Set
\begin{align*}
U = \{ v \in V_{1}; v_{x} \in \mathcal{U} \mbox{ for any } x \in L \}.
\end{align*}
The exponential map induces the embedding
${\rm exp}_{v}: L \hookrightarrow M$
by ${\rm exp}_{v}(x) = {\rm exp}_{x} (v_{x})$
for $v \in U$ and $x \in L$.
Define the first order differential operator $F: U \rightarrow V_{2}$ by
\begin{align*}
F(v) =
{\rm exp}_{v}^{*} \Phi
=
({\rm exp}_{v}^{*} \varphi_{1}, \cdots, {\rm exp}_{v}^{*} \varphi_{m}).
\end{align*}
Then
${\rm exp}_{v}: L \hookrightarrow M$ is a $\Phi$-embedding
if and only if $F(v)=0$.
Thus
a neighborhood of $[\iota]$ in $\mathcal{M}_{L} (\Phi)$
is identified
with that of $0$ in $F^{-1}(0)$ (in the $C^{1}$ sense).
Let $D_{1}$ be the
linearization of $F$ at $0$:
\begin{align*}
D_{1} = (dF)_{0}: V_{1} \rightarrow V_{2}.
\end{align*}
First, we prove the following,
which is a slight generalization of \cite[Proposition 2.2]{Moriyama}.
It will be useful for deciding
whether the moduli space of submanifolds
characterized by some differential forms
is smooth.
We use the notion of a Fr\'{e}chet manifold given in \cite{Hamilton}.
\begin{prop} \label{general moduli}
Suppose that there exist a vector bundle $E \rightarrow L$
and a first order differential operator $D_{2}: V_{2} \rightarrow V_{3}$,
where $V_{3} = C^{\infty}(L, E)$ is a space of
smooth sections of $E \rightarrow L$,
such that
\begin{align*}
V_{1} \overset{D_{1}}{\longrightarrow} V_{2} \overset{D_{2}}{\longrightarrow} V_{3}
\end{align*}
is a differential complex.
Namely, $D_{2} \circ D_{1} = 0$.
Denote by $D_{i}^{*} : V_{i+1} \rightarrow V_{i}$
the formal adjoint operator of $D_{i}$.
\begin{enumerate}
\item
Suppose that $P_{2} = D_{1} D_{1}^{*} + D_{2}^{*} D_{2}: V_{2} \rightarrow V_{2}$
is elliptic and
${\rm Im}(F) \subset {\rm Im}(D_{1})$.
Then
around $[\iota]$,
the moduli space
$\mathcal{M}_{L} (\Phi)$ is a smooth Fr\'{e}chet manifold
and it is a submanifold of $\mathcal{M} (L, M)$.
\item
In addition to the assumptions of 1,
suppose further that
$P_{1} = D_{1}^{*} D_{1}: V_{1} \rightarrow V_{1}$ is elliptic.
Then the moduli space $\mathcal{M}_{L} (\Phi)$ is
a finite dimensional smooth manifold around $[\iota]$
and its dimension is equal to $\dim \ker (D_{1})$.
\end{enumerate}
\end{prop}
\begin{proof}
Consider the case 1.
First, we extend the above spaces and operators to those of class $C^{k, a}$,
where $k \geq 1$ is an integer and $0 < a <1$. Set
\begin{align*}
V_{1}^{k, a} &= C^{k, a} (L, \mathcal{N}_{\iota}), \qquad
V_{2}^{k, a} = C^{k, a} (L, \oplus_{i=1}^{m} \wedge^{k_{i}} T^{*}L), \\
U^{k, a} &= \{ v \in V_{1}^{k, a}; v_{x} \in \mathcal{U} \mbox{ for any } x \in L \},\\
F^{k, a}&: U^{k, a} \rightarrow V_{2}^{k-1, a}, \qquad
D_{1}^{k, a} = (dF^{k, a})_{0}, \\
\mathcal{M}_{L}^{k, a} (\Phi)
&=
\{ \iota \in C^{k, a}_{emb}(L, M); \iota^{*} \Phi =0 \}/ {\rm Diff}^{k, a}(L).
\end{align*}
Similarly,
a neighborhood of $[\iota]$ in $\mathcal{M}^{k, a}_{L} (\Phi)$
is identified
with that of $0$ in $(F^{k, a})^{-1}(0)$
(in the $C^{1}$ sense).
We prove that
$\mathcal{M}^{k, a}_{L} (\Phi)$
is smooth around $[\iota]$
in the sense of Banach.
To apply the implicit function theorem,
we prove the following.
\begin{lem} \label{prepare for imp fn}
\begin{enumerate}[(a)]
\item ${\rm Im}(D_{1}^{k,a}) \subset V_{2}^{k-1, a}$ is a closed subspace.
\item ${\rm Im}(F^{k,a}) \subset {\rm Im}(D_{1}^{k,a})$.
\item $D_{1}^{k,a}: V_{1}^{k,a} \rightarrow {\rm Im}(D_{1}^{k,a})$ has a right inverse.
\end{enumerate}
\end{lem}
\begin{proof}
By the Hodge decomposition, we have
\begin{align} \label{Hodge decomp}
V_{2}^{k-1, a} = \ker P_{2} \oplus D_{1}^{k,a} (V_{1}^{k,a}) \oplus (D_{2}^{*})^{k,a}(V_{3}^{k,a}),
\end{align}
where $(D_{2}^{*})^{k,a}: V_{3}^{k,a} \rightarrow V_{2}^{k-1,a}$ is a canonical extension of $D_{2}^{*}$.
This is an $L^{2}$-orthogonal decomposition
and
${\rm Im}(D_{1}^{k,a})$ is the orthogonal complement of
$\ker P_{2} \oplus (D_{2}^{*})^{k,a}(V_{3}^{k,a})$. Thus we see (a).
We prove (b).
For any $f \in U^{k, a}$, there exists a sequence $\{ f_{n} \} \subset U$
such that $f_{n} \rightarrow f$.
By
$F (f_{n}) \in {\rm Im}(F) \subset {\rm Im}(D_{1}) \subset {\rm Im}(D_{1}^{k,a})$,
$F^{k,a}(f) = \lim_{n \rightarrow \infty} F(f_{n})$ and (a),
we see that
$F^{k,a}(f) \in {\rm Im}(D_{1}^{k,a})$.
We prove (c).
Let $G$ be the Green's operator of $P_{2}$.
Then for any $f \in {\rm Im}(D_{1}^{k,a})$, we have
$f = P_{2} G(f) = D_{1}^{k,a} (D_{1}^{*})^{k+1,a} G(f) + (D_{2}^{*})^{k,a} D_{2}^{k+1,a} G(f)$.
By (\ref{Hodge decomp}), we deduce that
\begin{align}\label{eq Im D1}
D_{1}^{k,a} (D_{1}^{*})^{k+1,a} G(f) = f, \qquad
(D_{2}^{*})^{k,a} D_{2}^{k+1,a} G(f) = 0.
\end{align}
Thus we see that
$(D_{1}^{*})^{k+1,a} G|_{{\rm Im}(D_{1}^{k,a})} : {\rm Im}(D_{1}^{k,a}) \rightarrow V_{1}^{k,a}$
is a right inverse of $D_{1}^{k,a}: V_{1}^{k,a} \rightarrow {\rm Im}(D_{1}^{k,a})$.
\end{proof}
By Lemma \ref{prepare for imp fn} (a),
we obtain the smooth map
$F^{k,a}: U^{k,a} \rightarrow {\rm Im}(D_{1}^{k,a})$.
The smoothness of this map is proved in \cite[Theorem 2.2.15]{Baier}.
It is clear that
$(d F^{k,a})_{0} = D_{1}^{k,a}: V_{1}^{k,a} \rightarrow {\rm Im}(D_{1}^{k,a})$
is surjective
and $V_{1}^{k,a}$ is the direct sum of the kernel of $D_{1}^{k,a}$
and the image of the right inverse of $D_{1}^{k,a}: V_{1}^{k,a} \rightarrow {\rm Im}(D_{1}^{k,a})$.
By the proof of Lemma \ref{prepare for imp fn} (c), we have
\begin{align*}
V_{1}^{k,a} = X_{1}^{k,a} \oplus Y_{1}^{k,a},
\end{align*}
where
$X_{1}^{k,a} = \ker (D_{1}^{k,a})$ and
$Y_{1}^{k,a} = (D_{1}^{*})^{k+1,a} G ({\rm Im}(D_{1}^{k,a}))$.
Note that
both spaces are closed in $V_{1}^{k,a}$.
Then we can apply the implicit function theorem.
There exist
an open neighborhood $A_{1}^{k,a} \subset X_{1}^{k,a}$ of $0$,
an open neighborhood $B_{1}^{k,a} \subset Y_{1}^{k,a}$ of $0$,
and a smooth mapping
$\hat{G}^{k,a}: A_{1}^{k,a} \rightarrow B_{1}^{k,a}$ such that
\begin{align*}
(F^{k,a})^{-1}(0) \cap (A_{1}^{k,a} \oplus B_{1}^{k,a})
=
\{ x + \hat{G}^{k,a}(x); x \in A_{1}^{k,a} \},
\end{align*}
which implies that $\mathcal{M}_{L}^{k,a} (\Phi)$ is smooth around $[\iota]$
in the sense of Banach.
Next, we prove that
$\mathcal{M}_{L} (\Phi)$
is smooth
around $[\iota]$ in the sense of Fr\'{e}chet.
The proof is an analogue of that of
\cite[Theorem 4.1]{Opozda}.
The open set $A_{1}^{k,a}$
and the map $\hat{G}^{k,a}$
depend on $k$ and $a$.
We have to show that
we can take $A_{1}^{k,a}$ and $\hat{G}^{k,a}$
``uniformly''.
Namely, set
\begin{align*}
G^{k,a} = \hat{G}^{1,a}|_{A_{1}^{1,a} \cap V_{1}^{k,a}}:
A_{1}^{1,a} \cap V_{1}^{k,a} \rightarrow B_{1}^{1,a}.
\end{align*}
In the following,
by shrinking $A_{1}^{1,a}$ if necessary,
we prove that for any $k \geq 1$
\begin{itemize}
\item ${\rm Im}(G^{k,a}) \subset Y_{1}^{k,a} = Y_{1}^{1,a} \cap V_{1}^{k,a}$,
\item and $G^{k,a}: A_{1}^{1,a} \cap V_{1}^{k,a} \rightarrow Y_{1}^{k,a}$ is smooth in the sense of Banach.
\end{itemize}
Then we see that
${\rm Im}(\hat{G}^{1,a}|_{A_{1}^{1,a} \cap V_{1}}) \subset Y_{1}^{1,a} \cap V_{1}$
and
$\hat{G}^{1,a}|_{A_{1}^{1,a} \cap V_{1}}$
is smooth in the sense of Fr\'{e}chet.
Hence we see that $\mathcal{M}_{L} (\Phi)$
is smooth around $[\iota]$.
First, we show that
${\rm Im} (G^{k,a}) \subset Y_{1}^{k,a}$
by the elliptic regularity theorem.
For any $\gamma \in V_{1}$, define
the second order differential operator $F_{\gamma}: V_{2} \rightarrow V_{2}$ by
\begin{align*}
F_{\gamma} (\beta) = F(\gamma + D_{1}^{*} \beta) + D_{2}^{*} D_{2} \beta.
\end{align*}
Denote by
$F_{\gamma}^{1,a}$
the extension of $F_{\gamma}$ on $V_{2}^{1,a}$.
Since the linearization of $F_{0}$ at $0$,
which is given by
$(dF_{0})_{0} = D_{1} D_{1}^{*} + D_{2}^{*} D_{2} = P_{2}$, is elliptic
and
the ellipticity is an open condition, we see that
there exist
an open neighborhood $\mathcal{U}_{0} \subset V_{1}^{1,a}$ of $0$
and an open neighborhood $\mathcal{V}_{0} \subset V_{2}^{2,a}$ of $0$
such that
$(dF^{1,a}_{\gamma})_{\beta}$ is elliptic
for any $(\gamma, \beta) \in \mathcal{U}_{0} \times \mathcal{V}_{0}.$
Set
\begin{align*}
\mathcal{U}_{1} = (G^{1,a})^{-1}( (D_{1}^{*})^{2,a} (\mathcal{V}_{0} \cap G ({\rm Im}(D_{1}^{1,a})))
\cap B_{1}^{1,a}) \cap \mathcal{U}_{0},
\end{align*}
which is an open subset of $A_{1}^{1,a}$ because
\begin{align*}
(D_{1}^{*})^{2,a}|_{G ({\rm Im} (D_{1}^{1,a}))}: G ({\rm Im} (D_{1}^{1,a})) \rightarrow
(D_{1}^{*})^{2,a} G ({\rm Im} (D_{1}^{1,a})) = Y_{1}^{1,a}
\end{align*}
is an isomorphism.
\begin{lem} \label{image Ck}
For any $k \geq 1$, we have
\begin{align*}
G^{1,a} (\mathcal{U}_{1} \cap V_{1}^{k,a}) \subset Y_{1}^{k,a}.
\end{align*}
\end{lem}
\begin{proof}
Let $\alpha \in \mathcal{U}_{1} \cap V_{1}^{k,a}$.
Since $F$ is a first order differential operator,
the differential operator $F_{\alpha}$ is of class $C^{k-1,a}$.
By the definition of $\mathcal{U}_{1}$,
there exists $\beta \in \mathcal{V}_{0} \cap G ({\rm Im}(D_{1}^{1,a}))$
satisfying
$G^{1,a}(\alpha) = (D_{1}^{*})^{2,a} (\beta)$. Then
\begin{align*}
F^{1,a}_{\alpha} (\beta) = F^{1,a} (\alpha + (D_{1}^{*})^{2,a} \beta) +
(D_{2}^{*})^{1,a} D_{2}^{2,a} \beta = 0
\end{align*}
by the definition of $G^{1,a}$ and (\ref{eq Im D1}).
Since $(\alpha, \beta) \in \mathcal{U}_{0} \times \mathcal{V}_{0}$,
$(dF^{1,a}_{\alpha})_{\beta}$ is elliptic.
Hence Schauder theory implies that
$\beta$ is of class $C^{k+1, a}$.
Thus
$G^{1,a}(\alpha) = (D_{1}^{*})^{2,a} (\beta)$ is of class $C^{k,a}$.
\end{proof}
Next, we show that $G^{k,a}$ is a smooth map.
Since
$(d F^{1,a})_{0}|_{Y_{1}^{1,a}} = D_{1}^{1,a}|_{Y_{1}^{1,a}} : Y_{1}^{1,a}
\rightarrow Z_{2}^{0,a} = D_{1}^{1,a} (V_{1}^{1,a})$
is an isomorphism
and being an isomorphism is an open condition,
there is an open neighborhood $\mathcal{U}_{2} \subset V_{1}^{1,a}$ of $0$ such that
$
(d F^{1,a})_{\gamma}|_{Y_{1}^{1,a}}
:Y_{1}^{1,a} \rightarrow Z_{2}^{0,a}
$
is an isomorphism for any $\gamma \in \mathcal{U}_{2}$.
Set $\mathcal{U}_{3} = \mathcal{U}_{2} \cap \mathcal{U}_{0}$.
\begin{lem} \label{isom Ck}
For any $\gamma \in \mathcal{U}_{3} \cap V_{1}^{k,a}$,
\begin{align*}
(d F^{1,a})_{\gamma}|_{Y_{1}^{k,a}}
:
Y_{1}^{1,a} \cap V_{1}^{k,a} =
Y_{1}^{k,a} \rightarrow
D_{1}^{k,a} (V_{1}^{k,a}) = Z_{2}^{0,a} \cap V_{2}^{k-1,a}
\end{align*}
is an isomorphism.
\end{lem}
\begin{proof}
The injectivity of $(d F^{1,a})_{\gamma}|_{Y_{1}^{k,a}}$
follows from the fact that
$(d F^{1,a})_{\gamma}|_{Y_{1}^{k,a}}$
is a restriction of
the isomorphism
$(d F^{1,a})_{\gamma}|_{Y_{1}^{1,a}}
:Y_{1}^{1,a} \rightarrow Z_{2}^{0,a}$.
The equation
$(d F^{1,a})_{\gamma}|_{V_{1}^{k,a}} = (d F^{k,a})_{\gamma}$
and the smoothness of
$F^{k,a}: V_{1}^{k,a} \rightarrow D_{1}^{k,a} (V_{1}^{k,a})$
imply that
$(d F^{1,a})_{\gamma}|_{Y_{1}^{k,a}}$ is continuous.
We prove that
$(d F^{1,a})_{\gamma}|_{Y_{1}^{k,a}}$ is surjective.
Take any $\mu \in D_{1}^{k,a} (V_{1}^{k,a})$.
Since
$(d F^{1,a})_{\gamma}|_{Y_{1}^{1,a}}
:Y_{1}^{1,a} \rightarrow Z_{2}^{0,a}$
is an isomorphism,
there exists $\beta \in G({\rm Im} (D_{1}^{1,a})) \subset V_{2}^{2,a}$
satisfying
$(d F^{1,a})_{\gamma} ((D_{1}^{*})^{2,a} \beta) = \mu$.
Now we have
\begin{align*}
(d F^{1,a})_{\gamma} ((D_{1}^{*})^{2,a} \beta)
=&
\left.
\frac{d}{dt} F^{1,a} (\gamma + t (D_{1}^{*})^{2,a} \beta ) \right|_{t=0}\\
=&
\left.
\frac{d}{dt} \left( F^{1,a}(\gamma + t (D_{1}^{*})^{2,a} \beta)
+ (D_{2}^{*})^{1,a} D_{2}^{2,a} (t \beta)
\right) \right|_{t=0}\\
=&
(d F^{1,a}_{\gamma})_{0} (\beta)
\end{align*}
by (\ref{eq Im D1}).
Since $\gamma \in \mathcal{U}_{0} \cap V_{1}^{k,a}$,
the differential operator $(d F^{1,a}_{\gamma})_{0}$ is
an elliptic operator of class $C^{k-1, a}$.
Hence by Schauder theory,
$\beta$ is of class $C^{k+1,a}$,
which implies that
$\mu = (d F^{1,a})_{\gamma} ((D_{1}^{*})^{2,a} \beta) \in (d F^{1,a})_{\gamma} (Y_{1}^{k,a})$.
\end{proof}
Define the map $\tilde{G}^{1,a}: A_{1}^{1,a} \rightarrow V_{1}^{1,a} = X_{1}^{1,a} \oplus Y_{1}^{1,a}$
by $\tilde{G}^{1,a}(\alpha) = \alpha + G^{1,a} (\alpha)$. Set
\begin{align*}
\mathcal{U}_{4} = (\tilde{G}^{1,a})^{-1}(\mathcal{U}_{3}) \cap \mathcal{U}_{1},
\end{align*}
which is an open set of $A_{1}^{1,a}$.
\begin{lem} \label{smoothness G k,a}
For any $k \geq 1$,
$G^{k,a} |_{\mathcal{U}_{4} \cap V_{1}^{k,a}}: \mathcal{U}_{4} \cap V_{1}^{k,a}
\rightarrow Y_{1}^{k,a}$
is smooth.
\end{lem}
\begin{proof}
We only have to prove that
$G^{k,a}$ is smooth around any $\alpha_{0} \in \mathcal{U}_{4} \cap V_{1}^{k,a}$.
Set $\gamma_{0} = \tilde{G}^{1,a}(\alpha_{0}) = \alpha_{0} + G^{1,a} (\alpha_{0})$.
By Lemma \ref{image Ck}, $\gamma_{0} \in \mathcal{U}_{3} \cap V_{1}^{k,a}$.
By Lemma \ref{isom Ck},
$
(d F^{1,a})_{\gamma_{0}}|_{Y_{1}^{k,a}}
:
Y_{1}^{k,a} \rightarrow
D_{1}^{k,a} (V_{1}^{k,a})
$
is an isomorphism.
Set $\tilde{X}_{1}^{k,a} = \ker (d F^{1,a})_{\gamma_{0}} \cap V_{1}^{k,a}$.
Then we have
\begin{align*}
F^{1,a}(\gamma_{0}) =0, \qquad
V_{1}^{k,a} = \tilde{X}_{1}^{k,a} \oplus Y_{1}^{k,a}.
\end{align*}
Let $\tilde{\pi}:
V_{1}^{k,a} = \tilde{X}_{1}^{k,a} \oplus Y_{1}^{k,a} \rightarrow \tilde{X}_{1}^{k,a}$
be the canonical projection
and set $\tilde{\alpha}_{0} = \tilde{\pi} (\alpha_{0})$;
note that $\tilde{\pi}$ is a smooth mapping between Banach spaces.
Applying the implicit function theorem
to $F^{k,a} = F^{1,a}|_{V_{1}^{k,a}}: V_{1}^{k,a} \rightarrow D_{1}^{k,a} (V_{1}^{k,a})$,
there exist
an open neighborhood $\tilde{U}_{1}^{k,a} \subset \tilde{X}_{1}^{k,a}$ of $\tilde{\alpha}_{0}$,
an open set $\tilde{V}_{1}^{k,a} \subset Y_{1}^{k,a}$
and a smooth map
$H^{k,a}: \tilde{U}_{1}^{k,a} \rightarrow \tilde{V}_{1}^{k,a}$
such that
\begin{align*}
(F^{k,a})^{-1}(0) \cap (\tilde{U}_{1}^{k,a} \oplus \tilde{V}_{1}^{k,a})
=
\{ \tilde{\alpha} + H^{k,a}(\tilde{\alpha}); \tilde{\alpha} \in \tilde{U}_{1}^{k,a} \}.
\end{align*}
Now recall that
for any
$\alpha \in A_{1}^{1,a} \cap (\tilde{U}_{1}^{k,a} \oplus \tilde{V}_{1}^{k,a})$,
we have
$F^{1, a}(\alpha + G^{1, a}(\alpha)) = 0$
and
$G^{k, a}(\alpha) = G^{1, a}(\alpha) \in Y_{1}^{k, a}$ by Lemma \ref{image Ck}.
Then there exists $\tilde{\alpha} \in \tilde{U}_{1}^{k,a}$
satisfying $\alpha + G^{k, a}(\alpha) = \tilde{\alpha} + H^{k,a}(\tilde{\alpha})$.
Taking $\tilde{\pi}$ of both sides, we obtain
$\tilde{\pi}(\alpha) = \tilde{\alpha}$, which implies that
\begin{align*}
G^{k,a}(\alpha) = \tilde{\pi} (\alpha) + H^{k,a} (\tilde{\pi}(\alpha)) - \alpha.
\end{align*}
Thus
$G^{k,a}|_{\tilde{U}_{1}^{k,a} \oplus \tilde{V}_{1}^{k,a}}$ is smooth.
\end{proof}
By Lemma \ref{smoothness G k,a},
it follows that
$\hat{G}^{1,a} |_{\mathcal{U}_{4} \cap V_{1}}$ is smooth
in the sense of Fr\'{e}chet,
which implies that
$\mathcal{M}_{L} (\Phi)$ is smooth around $[\iota]$.
Next, we prove that $\mathcal{M}_{L} (\Phi)$ is a submanifold of $\mathcal{M}(L,M)$
around $[\iota]$.
Set $\mathfrak{U} = (\mathcal{U}_{4} \cap V_{1}) \oplus Y_{1}$,
where $Y_{1} = Y_{1}^{1,a} \cap V_{1}$.
Setting $X_{1} = X_{1}^{1,a} \cap V_{1}$,
we have $V_{1} = X_{1} \oplus Y_{1}$.
Let $p: V_{1} = X_{1} \oplus Y_{1} \rightarrow X_{1}$
be the canonical projection.
Define the map $\psi: \mathfrak{U} \rightarrow \mathfrak{U}$ by
$\psi (z) = z - \hat{G}^{1,a} \circ p (z)$.
By Lemma \ref{image Ck}, the image of $\psi$ is contained in $\mathfrak{U}$.
This is bijective
and the inverse $\psi^{-1}$ is given by
$\psi^{-1}(z) = z + \hat{G}^{1,a} \circ p (z)$.
Both mappings are smooth in the sense of Fr\'{e}chet.
It is clear that
\begin{align*}
\psi \left(\{ \alpha + \hat{G}^{1,a} (\alpha); \alpha \in \mathcal{U}_{4} \cap X_{1} \} \right)
=
\mathcal{U}_{4} \cap X_{1}
\end{align*}
since
$\psi (\alpha + \hat{G}^{1,a} (\alpha)) = \alpha + \hat{G}^{1,a} (\alpha) - \hat{G}^{1,a} (\alpha) = \alpha$.
Thus $\mathcal{M}_{L}(\Phi)$ is locally identified
with the closed subspace
$X_{1}$ of $V_{1}$,
which implies that
$\mathcal{M}_{L} (\Phi)$ is a submanifold of $\mathcal{M}(L,M)$ around $[\iota]$.
Finally, we prove the case 2.
Since $P_{1} = D_{1}^{*} D_{1}$ is elliptic,
we have $\dim \ker (D_{1}) \leq \dim \ker P_{1} < \infty$.
Then we see the statement from the case 1.
\end{proof}
Since the affine Legendrian condition is an open condition,
we see as in the proof of \cite[Theorem 3.4]{Opozda} that
the moduli space of affine Legendrian submanifolds,
namely,
$\{ [\iota] \in \mathcal{M}(L,M); \iota \mbox{ is affine Legendrian} \}$,
is a smooth Fr\'{e}chet manifold
and it is open in $\mathcal{M}(L,M)$.
Applying Proposition \ref{general moduli},
we prove Theorem \ref{smooth moduli affine Leg}.
\begin{proof}[Proof of Theorem \ref{smooth moduli affine Leg}]
Use the notation after Definition \ref{def of phi embed}.
The moduli space of special affine Legendrian embeddings of $L$
is given by $\mathcal{M}_{L}({\rm Im} \psi).$
Fix any $[\iota] \in \mathcal{M}_{L}({\rm Im} \psi).$
Set
$\mathcal{N}_{\iota} = \phi \iota_{*} TL \oplus \mathbb{R} \xi \circ \iota$.
Define the map $F: U \rightarrow C^{\infty}(L)$ by
\begin{align*}
F(v) = * (\exp_{v}^{*} ({\rm Im} \psi)),
\end{align*}
where $*$ is the Hodge star operator of $\iota^{*}g$.
Then the linearization $(dF)_{0}$ of $F$ at $0$ is given by
\begin{align*}
(dF)_{0}(v) = * \iota^{*} L_{v} {\rm Im} \psi
=
* (\iota^{*} ( i(v)d {\rm Im} \psi + d i(v) {\rm Im} \psi )).
\end{align*}
By \cite[Proposition 3.2]{Moriyama}, we have
$d \psi = - (n+1) i \eta \wedge \psi$.
Since $\iota$ is special affine Legendrian, we have $\iota^{*} {\rm Re}(\psi) = {\rm vol}_{\phi}[\iota]$.
Then we compute
\begin{align*}
\iota^{*} ( i(v) d \,{\rm Im}\, \psi )
=
(n+1) (- \eta(v) {\rm vol}_{\phi}[\iota]
+ \iota^{*} (\eta \wedge i(v) {\rm Re} \psi)).
\end{align*}
Denoting $v = \phi \iota_{*} Y + f \xi$ where $Y \in \mathfrak{X}(L), f \in C^{\infty}(L)$, we have
\begin{align*}
i(v) \psi
&=
i (\phi \iota_{*} Y) i \left(r \frac{\partial}{\partial r} \right) \Omega |_{r=1}\\
&=
i (J \iota_{*} Y) i \left(r \frac{\partial}{\partial r} \right) \Omega |_{r=1}\\
&=
i \cdot i (\iota_{*} Y) i \left(r \frac{\partial}{\partial r} \right) \Omega |_{r=1}
=
i \cdot i (\iota_{*} Y) \psi,
\end{align*}
which implies that
\begin{align*}
\iota^{*} (i(v) {\rm Re} \psi) = 0, \qquad
\iota^{*} d (i(v) {\rm Im} \psi) = d (i(Y) {\rm vol}_{\phi}[\iota]).
\end{align*}
Then we obtain
\begin{align*}
D_{1}(v) = (dF)_{0}(v)
&= * (-(n+1)* (\rho_{\phi}[\iota] f) + d * (\iota^{*} g (\rho_{\phi}[\iota]Y, \cdot)))\\
&=
-(n+1) \rho_{\phi}[\iota] f - d^{*} (\iota^{*} g (\rho_{\phi}[\iota]Y, \cdot)).
\end{align*}
Via the identification
\begin{align*}
\begin{array}{cccc}
C^{\infty}(L, \mathcal{N}_{\iota}) =&
C^{\infty}(L, \phi \iota_{*} TL \oplus \mathbb{R} \xi \circ \iota) & \longrightarrow &
C^{\infty}(L) \oplus \Omega^{1}(L) \\
&\rotatebox{90}{$\in$} & & \rotatebox{90}{$\in$} \\
&\phi \iota_{*}Y + f \xi & \longmapsto & (\rho_{\phi}[\iota]f, \iota^{*} g (\rho_{\phi}[\iota]Y, \cdot)),
\end{array}
\end{align*}
the map $D_{1}$ is given by
\begin{align*}
C^{\infty}(L) \oplus \Omega^{1}(L) \ni (g, \alpha) \mapsto
-(n+1) g - d^{*} \alpha
\in C^{\infty}(L).
\end{align*}
Since $D_{1}^{*}(h) = (-(n+1)h, -dh)$, we see that
\begin{align*}
D_{1} D_{1}^{*}(h) = (n+1)^{2} h + d^{*} d h,
\end{align*}
which is clearly elliptic.
We easily see that ${\rm Im}(F) \subset {\rm Im}(D_{1})$ since ${\rm Im}(D_{1}) = C^{\infty}(L)$.
Then
setting $D_{2}=0$ and $V_{3} = \{ 0 \}$, we can apply Proposition \ref{general moduli}
to see that
the moduli space
of special affine Legendrian embeddings of $L$
is an infinite dimensional smooth Fr\'{e}chet manifold
modeled on the Fr\'{e}chet vector space
$\{ (g, \alpha) \in C^{\infty}(L) \oplus \Omega^{1}(L); (n+1)g + d^{*} \alpha = 0 \} \cong \Omega^{1}(L)$.
Note that
we have
$\Omega^{1}(L) = \{ \alpha \in \Omega^{1}(L); d^{*} \alpha = 0 \} \oplus d C^{\infty}(L)$
by the Hodge decomposition
and $d C^{\infty}(L) \cong C^{\infty}(L) / \mathbb{R}$ is identified with
the space of functions with integral $0$.
Since the moduli space of affine Legendrian submanifolds is open in $\mathcal{M}(L,M)$
and special affine Legendrian submanifolds are affine Legendrian,
the proof is done.
\end{proof}
\begin{rem}
Applying Proposition \ref{general moduli} to the affine Lagrangian case,
we can also deduce \cite[Theorem 1.1]{Opozda}.
\end{rem}
\section{\bf Scientific Background}
\input{sciback-l}
\section{\bf Methods}
\input{methods-l}
\section{\bf Contributions}
\input{res-l}
\section{\bf Experimental Evaluations}
\input{xp-l}
\section{\bf Conclusion}
\input{conclu-l}
\section*{\bf Acknowledgments}
This work has been partly funded by IDEX UCA$^{\textsc{jedi}}$.
\section{\bf The Colored-Table Method}
First of all, let us formally define the problem and analyze its complexity.
\begin{definition}[\DSECP]
The (finite) Discrete Dynamical Systems Solving Equations on Components Problem is a problem which takes as input
$C^1_p$ and $C^n_q$ and outputs the list of all the solutions $X$ of the equation $C^1_p \odot X = C^n_q$.
\end{definition}
Solving \DSECP\ is hard but still tractable. Indeed, the following lemma classifies our problem in \EnumP.
Recall that \EnumP\ is the complexity class of enumeration problems for which a solution can be verified in polynomial time~\cite{phdStrozecki}. It can be seen as the enumeration counterpart of the \NP\ complexity class.
\begin{lemma}
\DSECP\ is in \EnumP.
\end{lemma}
\begin{proof}
One just needs to be able to check if a given value is a solution in polynomial time. This can be done in linear time using
Lemma~\ref{lem:prodm}.
\end{proof}
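Concretely, the verification can be sketched as follows. We assume here the
single-cycle product rule
$C^1_p \odot C^1_m = C^{\gcd(p,m)}_{\texttt{lcm}\xspace(p,m)}$
(our reading of Lemma~\ref{lem:prodm}); the encoding of a candidate $X$ as a
list of (length, multiplicity) pairs is also an illustrative choice.
\begin{lstlisting}[caption={Verifying a candidate solution in linear time (illustrative Python sketch).}]
from math import gcd

def verify(p, X, n, q):
    # Check whether C^1_p (.) X = C^n_q, where X is a list of
    # (m, k) pairs encoding the (+)-sum of k copies of C^1_m.
    # Each factor C^1_p (.) C^1_m contributes gcd(p, m) cycles
    # of length lcm(p, m), and every length must equal q.
    total = 0
    for m, k in X:
        g = gcd(p, m)
        if p * m // g != q:      # a cycle of the wrong length
            return False
        total += g * k
    return total == n

# One of the solutions of C^1_6 (.) X = C^6_6 found in the
# first example below:
assert verify(6, [(1, 1), (2, 1), (3, 1)], 6, 6)
\end{lstlisting}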
\paragraph{Notation.} For any $n, p, q\in\mathbb{N}\xspace^{\star}$, let $T_{p,q}^n$ denote the set of solutions of Equation~\eqref{eq:simple}
and $S_{p,q}^n$ the set of solutions returned by the \texttt{colored-tree}\xspace method.
\medskip
The \texttt{colored-tree}\xspace method is rather involved, so we prefer to start by illustrating it with an example.
\begin{example}
Consider the following equation $C_6^1 \odot X = C_6^6$.
The algorithm consists in two distinct phases: tree building and solution aggregation.
In the first phase, the algorithm enumerates all the divisors $\mathcal{D}$ of $6$ \emph{i.e.}\@\xspace $\set{6, 3, 2, 1}$. It then applies a making-change decomposition algorithm (MCDA)~\cite{AdamaszekA10} in which the total sum is $6$ and the allowed set of coins is $\mathcal{D'}=\mathcal{D}\setminus\set{6}$. MCDA decomposes $6$ as $3 + 3$ (which is an optimal decomposition).
MCDA is then applied recursively (always using $\mathcal{D} \setminus\set{i}$ as the set of coins to decompose $i$).
We obtain $(6= 3+3)$, $(3= 2+1)$ and $(2= 1+1)$ as reported in Table~\ref{table:ExColoredTable}.
%
\begin{table}[!b]
\centering
\caption{Final data-structure storing all the decompositions, each solution for each value and at each step, the set of all solutions for a given value.}\label{table:ExColoredTable}
\scalebox{0.7}{
\begin{tabular}{cccc}
\toprule
\textbf{Node} & \textbf{Splits} & \textbf{Node solution} & \textbf{Subtree solutions set} \\ \midrule
6 & [3,3][2,2,2] & $C_6^1$ &
\begin{tabular}{@{}c@{}}
$\{ C^1_6,C^2_3,C^1_1 \oplus C^1_2 \oplus C^1_3,C^1_3 \oplus C^3_1,$\\
$C^1_2 \oplus C^4_1,C^6_1,C^3_2,C^2_1 \oplus C^2_2 \}$
\end{tabular} \\ \rowcolor{gray!40}
3 & [2,1] & $C_3^1$ & $\set{ C^1_3,C^1_1 \oplus C^1_2,C^3_1}$ \\
2 & [1,1] & $C_2^1$ & $\set{ C^2_1,C^1_2}$ \\ \rowcolor{gray!40}
1 & $\emptyset$ & $C_1^1$ & $\set{ C_1^1}$ \\
\bottomrule
\end{tabular}}
\end{table}
%
At this point, a check is performed to ensure that all possible ways of decomposing $6$ using $\mathcal{D'}$ are present in the tree.
In our case, we already have $[3,3]$ found by the first run of MCDA. We also found: $[3,2,1]$, $[2,2,1,1]$, $[1,1,2,1,1]$, $[1,1,1,1,1,1]$ by the recursive application of MCDA. By performing the check, we discover that the decomposition of $6$ as $[2,2,2]$ is not represented in the current tree. For this reason, $[2,2,2]$ is added to the set of decompositions of $6$ as illustrated in Figure~\ref{fig:treetable}, it is assigned a new color and a recursive application of MCDA is started on the newly added nodes. A new check ensures that all decompositions are
present. This ends the building phase. The resulting tree is reported in Figure~\ref{fig:treetable}.
%
\begin{figure}[!t]
\centering
\begin{tikzpicture}[scale=.45,-latex,auto, semithick, font=\footnotesize,
state/.style ={circle,draw,minimum width=4mm},
bleu/.style={fill=bleu!40},
vert/.style={fill=vert!40}
]
\node[state] (a) at (0,0) {$6$};
\node[state,bleu] (b) at (-7,-2) {$3$};
\node[state,bleu] (c) at (-3,-2) {$3$};
\node[state,vert] (d) at (0,-2) {$2$};
\node[state,vert] (e) at (4,-2) {$2$};
\node[state,vert] (f) at (7,-2) {$2$};
\node[state,bleu] (g) at (-6,-4) {$1$};
\node[state,bleu] (h) at (-8,-4) {$2$};
\node[state,bleu] (i) at (-4,-4) {$1$};
\node[state,bleu] (l) at (-2,-4) {$2$};
\node[state,vert] (m) at (0,-4) {$1$};
\node[state,vert] (n) at (2,-4) {$1$};
\node[state,vert] (o) at (4,-4) {$1$};
\node[state,vert] (p) at (6,-4) {$1$};
\node[state,vert] (q) at (8,-4) {$1$};
\node[state,vert] (r) at (10,-4) {$1$};
\node[state,bleu] (s) at (-9,-6) {$1$};
\node[state,bleu] (t) at (-7,-6) {$1$};
\node[state,bleu] (u) at (-3,-6) {$1$};
\node[state,bleu] (v) at (-1,-6) {$1$};
\draw[->] (a) to (b);
\draw[->] (a) to (c);
\draw[->] (a) to (d);
\draw[->] (a) to (e);
\draw[->] (a) to (f);
\draw[->] (b) to (g);
\draw[->] (b) to (h);
\draw[->] (c) to (i);
\draw[->] (c) to (l);
\draw[->] (d) to (m);
\draw[->] (d) to (n);
\draw[->] (e) to (o);
\draw[->] (e) to (p);
\draw[->] (f) to (q);
\draw[->] (f) to (r);
\draw[->] (h) to (s);
\draw[->] (h) to (t);
\draw[->] (l) to (u);
\draw[->] (l) to (v);
\end{tikzpicture}
\caption{The colored tree for the equation $C^1_6 \odot X=C^6_6$ after the completeness check.}
\label{fig:treetable}
\end{figure}
After this first phase of construction of the tree, the aggregation of solutions starts. Remark that each node $m$ represents the equation $C^1_p \odot X =C^m_q$, which we call the \textbf{node equation}. The single-component solution is called the \textbf{node solution}; by Lemma~\ref{lem:prodm}, it is $C^1_{{\frac{q}{p}} \times m}$ whenever a \textbf{feasible solution} exists, \emph{i.e.}\@\xspace if $\gcd(p,{\frac{q}{p}} \times m)=m$ and $\texttt{lcm}\xspace(p,{\frac{q}{p}} \times m)=q$. For example, for $m=3$ one finds $x=C_3^1$.
To find all the solutions for the current node one must also take the Cartesian product of the solutions sets in the subtrees of the same
color and then the union of the solution sets of nodes of different colors (different splits). All the solutions can be found in Table \ref{table:ExColoredTable}.
\end{example}
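The completeness check above amounts to enumerating all multisets of allowed
divisors summing to the target value. The following Python sketch (ours; the
function name and the output encoding are illustrative) makes the check
concrete:
\begin{lstlisting}[caption={Enumerating all decompositions of a value into allowed coins (illustrative Python sketch).}]
def decompositions(total, coins):
    # Enumerate all multisets of 'coins' summing to 'total',
    # as tuples sorted in decreasing order. Every such tuple
    # must appear as a split somewhere in the colored tree.
    coins = sorted(coins, reverse=True)

    def rec(rest, idx):
        if rest == 0:
            yield ()
            return
        for i in range(idx, len(coins)):
            c = coins[i]
            if c <= rest:
                for tail in rec(rest - c, i):
                    yield (c,) + tail

    yield from rec(total, 0)

# For C^1_6 (.) X = C^6_6, the allowed coins are D' = {3, 2, 1}:
# [(3, 3), (3, 2, 1), (3, 1, 1, 1), (2, 2, 2), (2, 2, 1, 1),
#  (2, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1)]
print(list(decompositions(6, [3, 2, 1])))
\end{lstlisting}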
\begin{example}
Consider the equation $C_2^1 \odot X = C_4^5$.
In the first phase, the algorithm enumerates all the divisors $\mathcal{D}$ of $4$ \emph{i.e.}\@\xspace $\set{4, 2, 1}$.
It then applies a making-change decomposition algorithm (MCDA)~\cite{AdamaszekA10}. MCDA decomposes $5$ as $4 + 1$ (which is an optimal decomposition).
MCDA is then applied recursively always using $\mathcal{D} \setminus\set{i}$ as the set of coins to decompose $i$.
We obtain $(5= 4+1)$, $(4= 2+2)$ and $(2= 1+1)$ as reported in Table~\ref{table:ExColoredTablenosol}.
\begin{table}[!ht]
\centering
\scalebox{1.0}{
\begin{tabular}{cccc}
\toprule
\textbf{Node} & \textbf{Splits} & \textbf{Node solution} & \textbf{Subtree solutions set} \\ \midrule
5 & [4,1] & $\{\}$ &
$\{\} $ \\ \rowcolor{gray!40}
4 & [2,2] & $\{\}$ & $\{ C^2_4 \}$ \\
2 & [1,1] & $C_4^1$ & $\{ C^1_4\}$ \\ \rowcolor{gray!40}
1 & $\emptyset$ & $\{\}$ & $\{\}$ \\
\bottomrule
\end{tabular}}
\caption{\label{table:ExColoredTablenosol}Final data-structure storing all the decompositions, each solution for each value and at each step, the set of all solutions for a given value.}
\end{table}
At this point, a check is performed to ensure that all possible ways of decomposing $5$, using $\mathcal{D} \setminus\set{i}$ as the set of coins to decompose each value $i$, are represented in the tree.
In our case, we already have $[4,1]$ found by the first run of MCDA. We also found: $[2,2,1]$, $[2,1,1,1]$, $[1,1,1,1,1]$ by the recursive application of MCDA.
By performing the check, we discover that all the possible decompositions of $5$ are represented in the current tree.
This ends the building phase.
The resulting tree is reported in Figure~\ref{fig:treetablenosol}.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=.50,-latex,auto, semithick , state/.style ={circle,draw,minimum width=0.5cm}]
\node[state] (a) at (0,0) {$5$};
\node[state,fill=bleu!40] (c) at (-2,-2) {$4$};
\node[state,fill=bleu!40] (e) at (2,-2) {$1$};
\node[state,fill=bleu!40] (i) at (-5,-4) {$2$};
\node[state,fill=bleu!40] (l) at (1,-4) {$2$};
\node[state,fill=bleu!40] (u) at (-1,-6) {$1$};
\node[state,fill=bleu!40] (v) at (3,-6) {$1$};
\node[state,fill=bleu!40] (s) at (-3,-6) {$1$};
\node[state,fill=bleu!40] (r) at (-7,-6) {$1$};
\draw[->] (a) to[] (c);
\draw[->] (a) to[] (e);
\draw[->] (c) to[] (i);
\draw[->] (c) to[] (l);
\draw[->] (l) to[] (u);
\draw[->] (l) to[] (v);
\draw[->] (i) to[] (s);
\draw[->] (i) to[] (r);
\end{tikzpicture}
\caption{The tree represented in the table for $C^1_2 \odot X=C^5_4$, after the completeness check.}
\label{fig:treetablenosol}
\end{figure}
After this first phase of construction of the tree, the aggregation of solutions starts. In this case the tree has only one color. Remark that if an empty set is involved in the Cartesian product, the result of the operation is the empty set. For example, for $m=2$,
the node solution is $C_4^1$. From the subtrees of the node one obtains an empty set, but after taking the union with the node solution, the subtree solutions set for $m=2$ is $\set{C_4^1}$.
Moreover, the final solution set for the node $5$ is the empty set, since the node $m=1$ (whose solution set is empty) is involved in the Cartesian product. In this case the method returns an empty set of solutions, which shows that the equation has no solutions.
\end{example}
\begin{example}
Consider the equation $C_2^1 \odot X = C_6^{12}$.
In the first phase, the algorithm enumerates all the divisors $\mathcal{D}$ of $6$ \emph{i.e.}\@\xspace $\set{6,3, 2, 1}$.
It then applies a making-change decomposition algorithm (MCDA)~\cite{AdamaszekA10}. MCDA decomposes $12$ as $6 + 6$ (which is an optimal decomposition).
MCDA is then applied recursively always using $\mathcal{D} \setminus\set{i}$ as the set of coins to decompose $i$.
We obtain $(12= 6+6)$, $(6= 3+3)$, $(3= 2+1)$ and $(2= 1+1)$ as reported in Table~\ref{table:ExColoredTablemoresol}.
\begin{table}[h!]
\centering
\scalebox{1.0}{
\begin{tabular}{cccc}
\toprule
\textbf{Node} & \textbf{Splits} & \textbf{Node solution} & \textbf{Subtree solutions set} \\ \midrule
12 & [6,6] & $\{\}$ &
\begin{tabular}{@{}c@{}}
$\{ C^4_3 \oplus C^4_6,C^{12}_3,C^6_6,C^6_3 \oplus C^3_6,$\\
$C^2_6 \oplus C^8_3,C^2_3 \oplus C^5_6,C^1_6 \oplus C^{10}_3\}$
\end{tabular} \\ \rowcolor{gray!40}
6 &[3,3] [2,2,2] & $\{\}$ & $\{ C^6_3 , C^2_6 \oplus C^2_3,C^4_3 \oplus C^1_6,C^3_6\}$ \\
3 & [2,1] & $\{\}$ & $\{ C^3_3,C_6^1 \oplus C_3^1 \}$ \\ \rowcolor{gray!40}
2 & [1,1] & $C_6^1$ & $\{ C^1_6,C_3^2 \}$ \\
1 & $\emptyset$ & $C_3^1$ & $\{C_3^1\}$ \\ \rowcolor{gray!40}
\bottomrule
\end{tabular}
}
\caption{\label{table:ExColoredTablemoresol}Final data-structure storing all the decompositions, each solution for each value and at each step, the set of all solutions for a given value.}
\end{table}
At this point, a check is performed to ensure that all possible ways of decomposing $12$ using $\mathcal{D'}$ are present in the tree.
In our case, the decomposition of $6$ as $[2,2,2]$ is added at each occurrence of $6$. This ends the building phase. The resulting tree is reported in Figure~\ref{fig:treetablemoresol}.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=.50,-latex,auto, semithick , state/.style ={circle,draw,minimum width=0.5cm}]
\node[state] (a) at (0,0) {$12$};
\node[state,fill=bleu!40] (b) at (-5,-2) {$6$};
\node[state,fill=bleu!40] (c) at (5,-2) {$6$};
\node[state,fill=rose!40] (d) at (-11,-5) {$3$};
\node[state,fill=rose!40] (e) at (-9,-5) {$3$};
\node[state,fill=vert!40] (f) at (-5,-5) {$2$};
\node[state,fill=vert!40] (g) at (-3,-5) {$2$};
\node[state,fill=vert!40] (h) at (-1,-5) {$2$};
\node[state,fill=yellow!40] (i) at (1,-5) {$3$};
\node[state,fill=yellow!40] (l) at (3,-5) {$3$};
\node[state,fill=orange!40] (m) at (6,-5) {$2$};
\node[state,fill=orange!40] (n) at (9,-5) {$2$};
\node[state,fill=orange!40] (o) at (11,-5) {$2$};
\draw[->] (a) to[] (c);
\draw[->] (a) to[] (b);
\draw[->] (b) to[] (d);
\draw[->] (b) to[] (e);
\draw[->] (b) to[] (f);
\draw[->] (b) to[] (g);
\draw[->] (b) to[] (h);
\draw[->] (c) to[] (i);
\draw[->] (c) to[] (l);
\draw[->] (c) to[] (m);
\draw[->] (c) to[] (n);
\draw[->] (c) to[] (o);
\end{tikzpicture}
\caption{The first two levels of the tree represented in the table for $C^1_2 \odot X=C^{12}_6$, after the completeness check.}
\label{fig:treetablemoresol}
\end{figure}
After this first phase of construction of the tree, the aggregation of solutions starts. To find the solutions for the current node one must also take the Cartesian product of the solution sets in the subtrees of the same
color and then the union of the solution sets of nodes of different colors (different splits). For example, for $m=12$ (\emph{i.e.}\@\xspace the root node), the Cartesian product between the two subtrees rooted at $6$ is computed, while for $m=6$ (at each occurrence) two Cartesian products and a union are necessary.
Therefore, the final solution set for the node $12$ is $\set{ C^4_3 \oplus C^4_6,C^{12}_3,C^6_6,C^6_3 \oplus C^3_6,C^2_6 \oplus C^8_3,C^2_3 \oplus C^5_6,C^1_6 \oplus C^{10}_3}$.
\end{example}
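The aggregation rule used in these examples (Cartesian products within one
color, unions across different colors, the empty set annihilating a product)
can be sketched in Python as follows; the encoding of a solution as a tuple
of (length, multiplicity) pairs and the function names are illustrative
choices of ours:
\begin{lstlisting}[caption={Aggregating subtree solution sets (illustrative Python sketch).}]
from itertools import product

def merge(sol_a, sol_b):
    # (+)-sum of two solutions: add the multiplicities of cycles
    # of equal length. A solution is a tuple of (p, k) pairs,
    # standing for C^k_p components.
    counts = {}
    for p, k in sol_a + sol_b:
        counts[p] = counts.get(p, 0) + k
    return tuple(sorted(counts.items()))

def aggregate(node_solutions, colors):
    # node_solutions: set of node solutions (possibly empty).
    # colors: one list of child solution sets per color (split).
    result = set(node_solutions)
    for children in colors:
        if not children:
            continue
        combined = {()}
        for child_set in children:
            # same color: Cartesian product; an empty child set
            # annihilates the whole color
            combined = {merge(a, b)
                        for a, b in product(combined, child_set)}
            if not combined:
                break
        # different colors: union
        result |= combined
    return result
\end{lstlisting}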
We can now describe our algorithm with a pseudocode.
\begin{lstlisting}[caption=\textbf{Colored-Tree} - Complete algorithm for the enumeration problem.,
label=coloredtable,
escapechar=^]
procedure Colored-Tree(p, n, q):
  // input 'p, n, q': the parameters of the equation
  // output: all the solutions of the equation
  node, splits, nodeSolution, SubTreeSolutions = []
  D = divisors(q)
  node.add(n, 1)
  // phase 1: tree building
  for i in node.length do
    if (node[i] != 1) then
      splits[i] = MCDA(node[i], D \ node[i])
      generateNewNodes(splits[i])
    end
  end
  checkRepresented()
  // node solutions, via Lemma lem:prodm
  for i in node.length do
    nodeSolution[i] = computeSingleSolution(node[i])
    SubTreeSolutions[i].add(nodeSolution[i])
  end
  IncreaseOrder()
  // phase 2: bottom-up solution aggregation
  for i in node.length do
    if (node[i] != 1) then
      solutionsSplits = []
      for j in splits[i] do
        // same color: Cartesian product of the children sets
        solutionsSplits.add(cartesian(splits[i][j]))
      end
      // different colors: union
      SubTreeSolutions[i].add(union(solutionsSplits))
    end
  end
  return SubTreeSolutions[node.length]
\end{lstlisting}
Listing~\ref{coloredtable} presents the procedure, which relies on the following auxiliary functions:
\begin{itemize}
\item \textbf{generateNewNodes} adds the elements of the split that are needed for the decomposition but are not yet represented in the node set.
\item \textbf{MCDA} computes an optimal solution of the making-change problem for a node value and a set of coins.
\item \textbf{computeSingleSolution} returns the node solution of the node equation associated with a node.
\item \textbf{checkRepresented} checks whether all the possible decompositions of the root are represented and, if not, adds the corresponding sub-trees.
\item \textbf{IncreaseOrder} permutes the rows of the table into increasing order of the node values.
\end{itemize}
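For concreteness, the feasibility test behind \textbf{computeSingleSolution}
(the characterization quoted from Lemma~\ref{lem:prodm}) can be written in a
few lines of Python; the encoding of $C^{k}_{p}$ as a pair $(p, k)$ is an
illustrative choice of ours:
\begin{lstlisting}[caption={Node solution of the node equation (illustrative Python sketch).}]
from math import gcd

def compute_single_solution(p, q, m):
    # Node solution of C^1_p (.) X = C^m_q: the candidate is
    # C^1_{(q/p)*m}, feasible iff gcd(p, (q/p)*m) = m and
    # lcm(p, (q/p)*m) = q. Returns None when infeasible.
    if q % p != 0:
        return None
    cand = (q // p) * m
    if gcd(p, cand) == m and p * cand // gcd(p, cand) == q:
        return (cand, 1)   # encodes C^1_{(q/p)*m}
    return None

# m = 3 in the first example: the node solution is C^1_3
assert compute_single_solution(6, 6, 3) == (3, 1)
# m = 1 in the second example: no node solution
assert compute_single_solution(2, 4, 1) is None
\end{lstlisting}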
Now we can sketch some proofs about its soundness, completeness and termination.
\begin{proposition}[Soundness]
For all $n,p,q\in\mathbb{N}\xspace^{\star},\;S_{p,q}^n\subseteq T_{p,q}^n$.
\end{proposition}
\begin{proof}
Let us prove the soundness by induction on the depth of the tree from leaves to root.
\textit{Induction base}: if there is only one step, we know by Lemma~\ref{lem:prodm} that a solution found is feasible iff $\gcd(p,{\frac{q}{p}} \times m)=m$ and $\texttt{lcm}\xspace(p,{\frac{q}{p}} \times m)=q$, and because there is only one leaf in the base, we therefore obtain all the solutions.
\textit{Induction hypothesis}: let us assume that we have all the possible solutions at a depth $n$ and let us show that we can obtain all the solutions at a depth $n+1$.
\textit{Induction step}: It is easy to see that a solution exists if and only if it comes from a decomposition.
Thus, by performing a Cartesian product between the set of solutions at depth $n$ (which is true by IH) and the node solution (which is true by Induction base, since the node can be seen as a leaf), we know that we will obtain all the solution coming from the possible decomposition in the sub-tree.
If a solution is coming from another sub-tree, since we perform an exhaustive check where we assign a different color to the other sub-tree, we know again, by IH and because we are taking the union of all the possible solutions, that we have all the possible solutions at a depth $n+1$.
\end{proof}
\begin{proposition}[Completeness]
For all $n,p,q\in\mathbb{N}\xspace^{\star},\;T_{p,q}^n\subseteq S_{p,q}^n$.
\end{proposition}
\begin{proof}
By contradiction, let us assume that there exists a solution $r \in T_{p,q}^n$ and that $r \not\in S_{p,q}^n$. This means that the colored-tree method does not return it. This implies that there exists a decomposition of $n$ leading to $r$ that is not in the tree. This is impossible since an exhaustive check is performed to ensure that all the decompositions are there. Therefore, all solutions are returned.
\end{proof}
\begin{proposition}[Termination]
The colored-tree method always terminates.
\end{proposition}
\begin{proof}
The building phase always terminates since the colored tree has maximal depth $\mathcal{D'}=div(q,n)$ and the number of different
possible colors is bounded by $2^{k}$, where $k$ is the size of the multi-set containing $n/p_{i}$ copies of the divisor $p_{i}$ for each divisor in $\mathcal{D'}$. The aggregation phase always terminates since it performs a finite number of operations for each node of the colored tree.
\end{proof}
Now that we have defined the problem, analyzed its complexity, and presented a sound and complete algorithm to solve it, it is time to evaluate the algorithm experimentally in order to study its scalability.
\section{Introduction}
In the 1950's Wigner introduced random real symmetric matrices to
model the highly excited energy levels of heavy nuclei (see
\cite{Po65}). From the experimental data, a natural statistic to
calculate empirically is the distribution of the spacing between
consecutive levels, normalized so that the spacing is unity. For
random real symmetric matrices $X$ with independent Gaussian entries
such that the joint probability density function (p.d.f.) for the
elements is proportional to $e^{-{\rm Tr}(X^2)/2}$ (such matrices
are said to form the Gaussian orthogonal ensemble, abbreviated GOE),
Wigner used heuristic reasoning to surmise that the spacing
distribution is well approximated by the functional form
\begin{equation}\label{pW}
p_1^{\rm W}(s) := {\pi s \over 2} e^{- \pi s^2/4}.
\end{equation}
In the limit of infinite matrix size, it was subsequently proved by
Gaudin that the exact spacing distribution is given by
\begin{equation}\label{pd}
p_1(s) = {d^2 \over d s^2} \det (\mathbb{I} - K_{(0,s)}^{\rm bulk,
+} ),
\end{equation}
where $\mathbb{I}$ stands for the identity operator and where
$K_{(0,s)}^{\rm bulk, +}$ is the integral operator supported on
$(0,s)$ with kernel
\begin{equation}\label{pd1}
{\sin \pi (x - y) \over \pi (x - y) }
\end{equation}
restricted to its even eigenfunctions. It was shown that this
integral operator commutes with the differential operator for the
so-called prolate spheroidal wave functions, and from the numerical
determination of the corresponding eigenvalues (\ref{pd}) was
computed and shown to differ from the approximation (\ref{pW}) by no
more than a few percent.
The (Fredholm) determinant in (\ref{pd}) is itself a probabilistic
quantity. Thus let $E_1^{\rm bulk}(0;(0,s))$ denote the probability
that for the infinite GOE, scaled so that the mean spacing is
unity, the interval $(0,s)$ of the spectrum contains no eigenvalues.
Then
\begin{equation}\label{S1}
E_1^{\rm bulk}(0;(0,s)) = \det ( \mathbb{I} - K_{(0,s)}^{\rm bulk,
+} ).
\end{equation}
In applications of random matrices to the eigenspectrum of quantum
Hamiltonians, two other ensembles in addition to the GOE are
relevant. These are the Gaussian unitary ensemble (GUE) of complex
Hermitian matrices, and the Gaussian symplectic ensemble (GSE) of
Hermitian matrices with real quaternion elements. For the infinite
limit of such ensembles of matrices, scaled so that the mean density
is unity, let $E_2^{\rm bulk}(0;(0,s))$ and $E_4^{\rm
bulk}(0;(0,s))$ respectively denote the probabilities that the
interval $(0,s)$ is free of eigenvalues. Then it is known
\cite{Dy62,DM} that
\begin{align}\label{1.5}
E_2^{\rm bulk}(0;(0,s)) & = \det ( \mathbb{I} - K_{(0,s)}^{\rm bulk} ) \nonumber \\
& = \det ( \mathbb{I} - K_{(0,s)}^{\rm bulk,+} )
\det ( \mathbb{I} - K_{(0,s)}^{\rm bulk,-} )
\end{align}
while
\begin{equation}\label{S3}
E_4^{\rm bulk}(0;(0,s)) = {1 \over 2} \Big ( \det ( \mathbb{I} -
K_{(0,2s)}^{\rm bulk, +} ) +
\det ( \mathbb{I} - K_{(0,2s)}^{\rm bulk, -} ) \Big )
\end{equation}
where $K^{\rm bulk, -}_J$ denotes the integral operator $K^{\rm
bulk}_J$ restricted to odd eigenfunctions.
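We remark in passing that determinants such as these are readily
evaluated numerically by discretizing the integral operator with
Gauss--Legendre quadrature, in the spirit of Bornemann's quadrature
method for Fredholm determinants. The following sketch (Python/NumPy;
the number of quadrature points is illustrative) approximates
$\det (\mathbb{I} - K_{(0,s)}^{\rm bulk})$, and thus $E_2^{\rm bulk}(0;(0,s))$
by the first line of (\ref{1.5}).
\begin{lstlisting}[language=Python]
import numpy as np

def E2_bulk(s, n=60):
    # Gauss-Legendre nodes/weights mapped from (-1,1) to (0,s)
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * s * (nodes + 1.0)
    w = 0.5 * s * weights
    # sine kernel: np.sinc(t) = sin(pi t)/(pi t)
    K = np.sqrt(np.outer(w, w)) * np.sinc(np.subtract.outer(x, x))
    return np.linalg.det(np.eye(n) - K)
\end{lstlisting}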
The remarkable structure exhibited by (\ref{S1})--(\ref{S3}) can
also be seen in certain Painlev\'e transcendent evaluations of the
gap probabilities \cite{Fo06}. These expressions are given in terms
of the solution of the $\sigma$-form of the $P_{\rm III'}$ equation
\begin{equation}\label{1.6a}
(t \sigma '')^2 - v_1 v_2 (\sigma')^2 + \sigma'(4 \sigma' - 1)
(\sigma - t \sigma') - {1 \over 4^3} (v_1 - v_2)^2 = 0
\end{equation}
with
$$
v_1 = v_2 = a = \pm {1 \over 2}
$$
subject to the boundary condition
\begin{equation}\label{1.6b}
\sigma(t;a) \: \mathop{\sim}\limits_{t \to 0^+} \: {t^{1 + a} \over
2^{2 + 2a} \Gamma(1 + a) \Gamma(2 + a) }.
\end{equation}
In terms of this solution, introduce the corresponding
$\tau$-functions by
\begin{equation}\label{1.6c}
\tau_{\rm III'}(s;a) := \exp \Big ( - \int_0^s {\sigma(t;a) \over t}
\, dt \Big ).
\end{equation}
Then
\begin{align}
E_1^{\rm bulk}(0;(0,2s)) & = \tau_{\rm III'}((\pi s)^2;-1/2)
\label{t1}
\\
E_2^{\rm bulk}(0;(0,2s)) & = \tau_{\rm III'}((\pi s)^2;-1/2)
\,\tau_{\rm III'}((\pi s)^2;1/2) \label{t2}
\\
E_4^{\rm bulk}(0;(0,2s)) & = {1 \over 2} \Big ( \tau_{\rm
III'}((\pi s)^2;-1/2) + \tau_{\rm III'}((\pi s)^2;1/2) \Big ).
\label{t3}
\end{align}
Comparison of the results (\ref{S1})--(\ref{S3}) with the results
(\ref{t1})--(\ref{t3}) shows
\begin{equation}\label{g1}
\det ( \mathbb{I} - K_{(0,2s)}^{\rm bulk, \pm}) = \tau_{\rm
III'}((\pi s)^2; \mp 1/2).
\end{equation}
It is the objective of this paper to give formulas analogous to
(\ref{g1}) for both the soft and hard edge scalings. In so doing we
will be relating known $\tau$-function evaluations of these
quantities to some recently derived Fredholm determinant formulas in
the case of the soft edge, and to some new Fredholm determinant
formulas in the case of the hard edge. Further, these identities
will be generalized to include a generating function type parameter
$\xi$.
\section{Soft edge scaling}
\setcounter{equation}{0} Soft edge scaling refers to shifting the
origin to the neighbourhood of the largest, or smallest, eigenvalue
where it is required that the support of the eigenvalue density is
unbounded beyond this eigenvalue, and then scaling so that the
average eigenvalue spacings in this neighbourhood are of order
unity.
The soft edge scaling can be made precise in the case of the
Gaussian and Laguerre ensembles. For this let us define a random
matrix ensemble by its eigenvalue p.d.f., assumed to be of the
functional form
\begin{equation}\label{g}
{1 \over C} \prod_{l=1}^N g(x_l) \prod_{1 \le j < k \le N} |x_k -
x_j|^\beta,
\end{equation}
and denote the corresponding probability that the interval $J$ is
free of eigenvalues by $E_\beta(0;J;g(x);N)$. For the Gaussian
ensembles with $\beta = 1$ or 2, the soft edge scaling is defined by
\begin{equation}\label{1G}
E_\beta^{\rm soft}(0;(s,\infty)) := \lim_{N \to \infty} E_\beta
\Big ( 0;(\sqrt{2N} + {s \over \sqrt{2} N^{1/6} }, \infty);
e^{-\beta x^2/2};N \Big )
\end{equation}
while for $\beta = 4$ a more natural definition (see the formulas of
\cite{AFNV00}) is
\begin{equation}\label{4G}
E_4^{\rm soft}(0;(s,\infty)) := \lim_{N \to \infty} E_4 \Big (
0;(\sqrt{2N} + {s \over \sqrt{2} N^{1/6} }, \infty); e^{- x^2};N/2
\Big )
\end{equation}
It is expected that for a large class of weights $g(x)$ in
(\ref{g}), the soft edge limit of the gap probabilities exists and
is equal to that for the Gaussian ensembles (see \cite{De99} for
some proofs related to this statement). This can be checked
explicitly in the case of the Laguerre ensembles (i.e.~the weight
$g(x) = x^a e^{-x}$, $x>0$ in (\ref{g}), up to scaling of $x$). Thus
for $\beta = 1$ or 2 we have
\begin{equation}\label{1L}
\lim_{N \to \infty} E_\beta \Big ( 0;(4N + 2 (2N)^{1/3} s, \infty);
x^a e^{-\beta x/2};N \Big ) = E_\beta^{\rm soft}(0;(s,\infty))
\end{equation}
while for $\beta = 4$
\begin{equation}\label{1L1}
\lim_{N \to \infty} E_4 \Big ( 0;(4N + 2 (2N)^{1/3} s, \infty); x^a
e^{- x};2N \Big ) = E_4^{\rm soft}(0;(s,\infty)).
\end{equation}
A number of exact expressions are known for the $E_\beta^{\rm
soft}$. Let us consider first those involving Painlev\'e
transcendents. These can in turn be grouped into two types. The
first of these relates to the particular Painlev\'e II transcendent
$q(s)$, specified as the solution of the Painlev\'e II equation
\begin{equation}\label{2.50}
q'' = s q + 2 q^3 + \alpha
\end{equation}
with $\alpha = 0$ and subject to the boundary condition
\begin{equation}\label{2.5a}
q(s) \mathop{\sim}\limits_{s \to \infty} {\rm Ai}(s)
\end{equation}
where ${\rm Ai}(s)$ denotes the Airy function. One has
\cite{TW94a,TW96} (see \cite{Fo00} for a simplified derivation of
the latter two)
\begin{align}
E_2^{\rm soft}(0;(s,\infty)) & = \exp \Big ( - \int_s^\infty (t - s) q^2(t) \, dt \Big ) \\
E_1^{\rm soft}(0;(s,\infty)) & = \exp \Big ( - {1 \over 2}
\int_s^\infty (t - s) q^2(t) \, dt \Big ) \exp \Big ( {1 \over 2}
\int_s^\infty q(t) \, dt \Big )
\label{2.7}\\
E_4^{\rm soft}(0;(s,\infty)) & = {1 \over 2} \exp \Big ( - {1 \over
2} \int_s^\infty (t - s) q^2(t) \, dt \Big )
\nonumber \\
& \qquad \times \Big ( \exp \Big ( {1 \over 2} \int_s^\infty q(t) \,
dt \Big ) +
\exp \Big ( - {1 \over 2} \int_s^\infty q(t) \, dt \Big ) \Big ).
\end{align}
The alternative Painlev\'e expressions relate to the $\sigma$-form
of the P${}_{\rm II}$ equation
\begin{equation}\label{2.9b}
(H_{II}'')^2 + 4 (H_{II}')^3 + 2 H_{II}'(t H_{II}' - H_{II} ) - {1
\over 4} ( \alpha + {1 \over 2} )^2 = 0.
\end{equation}
Introduce the auxiliary Hamiltonian
\begin{equation}\label{hH}
h_{II}(t;\alpha) := H_{II}(t;\alpha) + {t^2 \over 8}
\end{equation}
and the corresponding $\tau$-function
\begin{equation}\label{th}
\tau_{II}(s;\alpha) = \exp \Big ( - \int_s^\infty h_{II}(t;\alpha)
\, dt \Big ).
\end{equation}
Then from \cite{FW02} we know that
\begin{align}
E_1^{\rm soft}(0;(s,\infty)) & = \tau_{II}^+(s;0) \label{2.11a} \\
E_2^{\rm soft}(0;(s,\infty)) & = \tau_{II}^+(s;0) \tau_{II}^-(s,0) \label{2.11b} \\
E_4^{\rm soft}(0;(s,\infty)) & = {1 \over 2} \Big (
\tau_{II}^+(s;0) + \tau_{II}^-(s;0) \Big ) \label{E12}
\end{align}
where $\tau_{II}^{\pm}(s,0)$ is specified by (\ref{th}) with
$h_{II}(t;0)$ in (\ref{hH}) subject to the boundary condition
$h_{II}(t;0) \sim \pm {1 \over 2} {\rm Ai}(t)$ as $t \to \infty$.
We turn our attention now to Fredholm determinant expressions for
the gap probabilities at the soft edge. The best known is the $\beta
= 2$ result \cite{Fo93}
\begin{equation}\label{A}
E_2^{\rm soft}(0;(s,\infty)) = \det (\mathbb{I} - K^{\rm
soft}_{(s,\infty)} )
\end{equation}
where $ K^{\rm soft}_{(s,\infty)}$ is the integral operator on
$(s,\infty)$ with kernel
\begin{equation}
K^{\rm soft}(x,y) = { {\rm Ai }(x) {\rm Ai}'(y) - {\rm Ai }(y)
{\rm Ai}'(x) \over x - y}.
\end{equation}
This can be rewritten \cite{TW94a}
\begin{equation}\label{2.14}
E_2^{\rm soft}(0;(s,\infty)) = \det (\mathbb{I} - \tilde{K}^{\rm
soft}_{(0,\infty)} )
\end{equation}
where $ \tilde{K}^{\rm soft}_{(0,\infty)}$ is the integral operator
on $(0,\infty)$ with kernel
$$
\tilde{K}^{\rm soft}(x,y) = \int_0^\infty {\rm Ai}(s+x+t) {\rm Ai}(s+y+t)
\, dt,
$$
which in turn implies
\begin{equation}\label{2V}
E_2^{\rm soft}(0;(s,\infty)) = \det ( \mathbb{I} - V^{\rm
soft}_{(0,\infty)} ) \det ( \mathbb{I} + V^{\rm soft}_{(0,\infty)} )
\end{equation}
where $V^{\rm soft}_{(0,\infty)}$ is the integral operator on
$(0,\infty)$ with kernel
\begin{equation}
V^{\rm soft}(x,u) = {\rm Ai}(x+u+s).
\end{equation}
Recently it has been conjectured by Sasamoto \cite{Sa}, and
subsequently proved by Ferrari and Spohn \cite{FS05} that
\begin{equation}\label{2V1}
E_1^{\rm soft}(0;(s,\infty)) = \det (\mathbb{I} - V^{\rm
soft}_{(0,\infty)} ),
\end{equation}
which is the soft edge analogue of the evaluation of $E_1^{\rm
bulk}(0;(0,s))$ (\ref{S1}). Comparing (\ref{2V}), (\ref{2V1}) with
(\ref{2.11b}), we see immediately that
\begin{equation}\label{2.23}
\tau_{II}^{\pm}(s;0) = \det (\mathbb{I} \mp V^{\rm
soft}_{(0,\infty)} ).
\end{equation}
This is the soft edge analogue of the bulk identity (\ref{g1}).
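The identity (\ref{2.23}) also lends itself to a direct numerical
check: truncating the half-line to $(0,L)$ is harmless since the Airy
function decays super-exponentially, after which $V^{\rm soft}_{(0,\infty)}$
can be discretized by quadrature as above. A sketch (Python/SciPy; the
truncation $L$ and grid size are illustrative):
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.special import airy

def soft_edge_dets(s, L=12.0, n=80):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * L * (nodes + 1.0)
    w = 0.5 * L * weights
    # kernel V^soft(x,u) = Ai(x+u+s); airy(...)[0] is Ai
    V = np.sqrt(np.outer(w, w)) * airy(np.add.outer(x, x) + s)[0]
    I = np.eye(n)
    # det(I - V) approximates E_1^soft(0;(s,oo));
    # the product of the two determinants approximates E_2^soft
    return np.linalg.det(I - V), np.linalg.det(I + V)
\end{lstlisting}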
\section{Hard edge scaling}
\setcounter{equation}{0} The Laguerre ensemble has its origin in
positive definite matrices $X^\dagger X$ where $X$ is an $n \times
N$ matrix $(n \ge N)$ with real $(\beta = 1)$, complex $(\beta = 2$)
or real quaternion $(\beta = 4)$ entries. Being positive definite
the eigenvalue density is strictly zero for $x<0$; for this reason
the neighbourhood of $x=0$ is referred to as the hard edge. The hard
edge scaling limit takes $N \to \infty$ while keeping the mean
spacing between eigenvalues near $x=0$ of order unity. In relation
to the gap probabilities, this can be accomplished by the limits
$$
E_\beta^{\rm hard}(0;(0,s);a) := \lim_{N \to \infty}
E_\beta\left(0;(0, {s \over 4 N}); x^a e^{-\beta x /2}; N\right)
$$
for $\beta = 1,2$, while for $\beta = 4$
$$
E_4^{\rm hard}(0;(0,s);a) := \lim_{N \to \infty} E_4 \left(0;(0, {s \over 4N});
x^a e^{- x }; N/2\right).
$$
As for the soft edge, there are two classes of Painlev\'e
evaluations of the gap probability at the hard edge. The first
involves the solution $\tilde{q}(t)$ of the nonlinear equation
\begin{equation}\label{2.63}
t (\tilde{q}^2 - 1) (t \tilde{q}')' = \tilde{q} (t \tilde{q}')^2 +
{1 \over 4} (t - a^2) \tilde{q} + {1 \over 4} t
\tilde{q}^3(\tilde{q}^2 - 2)
\end{equation}
(a transformed version of the Painlev\'e V equation) subject to the
boundary condition
\begin{equation}\label{3.1b}
\tilde{q}(t;a) \: \mathop{\sim}\limits_{t \to 0^+} \: {1 \over 2^a
\Gamma(1+a)} t^{a/2}.
\end{equation}
Thus \cite{TW94b}
\begin{equation}
E_2^{\rm hard}(0;(0,s);a) = \exp \Big ( - {1 \over 4} \int_0^s \Big
( \log {s \over t} \Big ) \tilde{q}^2(t;a) \, dt \Big )
\end{equation}
while \cite{Fo00}
\begin{align}
E_1^{\rm hard}(0;(0,s);{\textstyle {a-1 \over 2}}) &=
\exp \Big ( - {1 \over 8} \int_0^s \Big ( \log {s \over t}
\Big ) \tilde{q}^2(t;a) \, dt \Big ) \exp \Big ( - {1 \over 4}
\int_0^s {\tilde{q}(t;a) \over \sqrt{t} } \, dt \Big )
\label{3.3}\\
E_4^{\rm hard}(0;(0,s);a+1)
&={1 \over 2}
\exp \Big ( - {1 \over 8} \int_0^s \Big ( \log {s \over t}
\Big ) \tilde{q}^2(t;a) \, dt \Big ) \nonumber \\
&\qquad \times \Big ( \exp \Big ( - {1 \over 4} \int_0^s
{\tilde{q}(t;a) \over \sqrt{t} } \, dt \Big ) + \exp \Big ( {1 \over
4} \int_0^s {\tilde{q}(t;a) \over \sqrt{t} } \, dt \Big ) \Big ).
\end{align}
For the second class of Painlev\'e evaluations at the hard edge, we
recall the $\sigma$-form of the $P_V$ equation
\begin{multline}\label{sV}
(t \sigma'')^2 - \Big ( \sigma - t \sigma' + 2 (\sigma')^2 +
(\nu_0 + \nu_1 + \nu_2 + \nu_3) \sigma' \Big )^2
\\ + 4(\nu_0 + \sigma') (\nu_1 + \sigma') (\nu_2 + \sigma') (\nu_3 +
\sigma') = 0.
\end{multline}
Set
\begin{equation}\label{sV1}
\nu_0 = 0, \quad \nu_1 = v_2 - v_1, \quad \nu_2 = v_3 - v_1, \quad
\nu_3 = v_4 - v_1
\end{equation}
and let
\begin{equation}\label{3.6a}
x \tilde{h}_V^{\pm}(x;a) = \sigma^{\pm}(x;a) - {1 \over 4} x^2 +
{a-1 \over 2} x - {a (a -1) \over 4}
\end{equation}
where $\sigma^{\pm}(x;a)$ satisfies (\ref{sV}) with $t \mapsto 2x$,
subject to the boundary condition consistent with
\begin{equation}\label{3.7}
x \tilde{h}_V^{\pm}(x;a)
\: \mathop{\sim}\limits_{x \to 0^+} \: \mp {x^{a+1} \over 2^{a+1} \Gamma(a+1) }.
\end{equation}
Further, introduce the $\tau$-function
\begin{equation}
\tau_V^{\pm}(s;a) = \exp \int_0^s \tilde{h}_V^{\pm}(x;a) \, dx.
\end{equation}
In terms of this quantity \cite{FW02}
\begin{align}
E_1^{\rm hard}(0;(0,s); {a - 1 \over 2} ) & = \tau_V^+(\sqrt{s};a) \label{3.9y} \\
E_2^{\rm hard}(0;(0,s); a ) & = \tau_V^+(\sqrt{s};a) \tau_V^-(\sqrt{s};a) \label{3.9x} \\
E_4^{\rm hard}(0;(0,s); a +1 ) & = {1 \over 2} \Big (
\tau_V^+(\sqrt{s};a) + \tau_V^-(\sqrt{s};a) \Big ) \label{3.9}
\end{align}
where the parameters (\ref{sV1}) are specified by
$$
v_1 = - v_3 = - (a-1)/4, \qquad v_2 = - v_4 = (a+1)/4.
$$
In relation to Fredholm determinant expressions for the gap
probabilities at the hard edge, analogous to (\ref{A}) we have
\cite{Fo93}
\begin{equation}\label{3.xx}
E_2^{\rm hard}((0,s);a) = \det (\mathbb{I} - K^{\rm hard}_{(0,s)})
\end{equation}
where $ K^{\rm hard}_{(0,s)}$ is the integral operator on $(0,s)$
with kernel
\begin{equation}\label{hardkernel}
K^{\rm hard}(x,y) = {J_a(\sqrt{x}) \sqrt{y} J_a'(\sqrt{y}) -
\sqrt{x} J_a'(\sqrt{x}) J_a(\sqrt{y}) \over x - y}.
\end{equation}
This can be rewritten \cite{TW94b}
\begin{equation}\label{3.12}
E_2^{\rm hard}((0,s);a) = \det (\mathbb{I} - \tilde{K}^{\rm
hard}_{(0,1)})
\end{equation}
where $ \tilde{K}^{\rm hard}_{(0,1)}$ is the integral operator on
$(0,1)$ with kernel
\begin{equation}
\tilde{K}^{\rm hard}(x,y) = {s \over 4} \int_0^1 J_a(\sqrt{sxu}) J_a(\sqrt{syu}) \, du.
\end{equation}
Because
\begin{equation}\label{3.19}
\tilde{K}^{\rm hard}_{(0,1)} = (V_{(0,1)}^{\rm hard})^2
\end{equation}
where $V_{(0,1)}^{\rm hard}$ is the integral operator on $(0,1)$
with kernel
\begin{equation}\label{defV}
V^{\rm hard}(x,y) = {\sqrt{s} \over 2} J_a(\sqrt{sxy}),
\end{equation}
it follows that
\begin{equation}\label{3.20}
E_2^{\rm hard}((0,s);a) = \det (\mathbb{I} - V_{(0,1)}^{\rm hard})
\det (\mathbb{I} + V_{(0,1)}^{\rm hard}).
\end{equation}
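The same quadrature strategy as at the soft edge applies here. In the
following sketch (Python/SciPy) the operator $V_{(0,1)}^{\rm hard}$ is
discretized directly; we caution that for non-integer $a$ the behaviour
of $J_a$ near the origin slows the convergence of the quadrature.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.special import jv

def hard_edge_dets(s, a, n=80):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (nodes + 1.0)   # quadrature points on (0,1)
    w = 0.5 * weights
    # kernel V^hard(x,y) = (sqrt(s)/2) J_a(sqrt(s x y))
    V = 0.5 * np.sqrt(s) * np.sqrt(np.outer(w, w)) \
        * jv(a, np.sqrt(s * np.outer(x, x)))
    I = np.eye(n)
    return np.linalg.det(I - V), np.linalg.det(I + V)
\end{lstlisting}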
For $\beta = 1$, a Fredholm determinant expression analogous to the
result (\ref{2V1}) holds true. This is proved with the help of the
three following lemmas, which are modeled on the strategy used in
\cite{FS05} to prove (\ref{2V1}).
\begin{lemma}\label{L1} Let $V=V_{(0,1)}^{\rm hard}$ and $\rho(x)=1/\sqrt{x}$ for $x>0$. Let $\langle
f|g\rangle_{(0,1)}=\int_0^1f(x)g(x)dx$ be the scalar product in
$\mathfrak{L}^2(0,1)$. Let also $\delta_1$ denote the delta
function at $1$; that is, $\langle \delta_1|f\rangle_{(0,1)}=f(1)$.
Then,
$$
\left(E_1^{\rm
hard}\left((0,s);{\textstyle\frac{a-1}{2}}\right)\right)^2=\det
(\mathbb{I} - V)\det (\mathbb{I} + V)\langle\delta_1|(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}.
$$
\end{lemma}
\begin{proof}
We know from \cite{Fo00} that
$$\left(E_1^{\rm
hard}\left((0,s);{\textstyle\frac{a-1}{2}}\right)\right)^2=\det
\left(\mathbb{I}-K^{\mathrm{hard}}_{(0,s)} -C\otimes D\right),$$
where $K^{\mathrm{hard}}_{(0,s)}$ and $C\otimes D$ are integral
operators on $(0,s)$ whose kernels are respectively
$K^{{\mathrm{hard}}}(x,y)$ (see Eq.~(\ref{hardkernel})) and
$J_a(\sqrt{x})\frac{1}{2\sqrt{y}}\int_{\sqrt{y}}^\infty J_a(t)dt$.
Note that $f\otimes g$ stands for an integral operator with kernel
\begin{equation}\label{tensprod}(f\otimes
g)(x,y)=f(x)g(y).\end{equation}We now make use of
$\sqrt{s}J_a(\sqrt{x})=2(V\delta_1)(x)$ and $\int_0^\infty
J_a(y)dy=1$ for showing that
\begin{align}
(C\otimes Df)(x)&=
\frac{J_a(\sqrt{x})}{2}\int_0^s\left(1-\int_0^{\sqrt{y}}J_a(t)dt\right)\frac{f(y)}{\sqrt{y}}dy\nonumber\\
&=\frac{\sqrt{s}}{2}J_a(\sqrt{x})
\int_0^1\left(\frac{1}{\sqrt{y}}-\frac{\sqrt{s}}{2}\int_0^1\frac{J_a(\sqrt{syt})}{\sqrt{t}}dt
\right)f(sy)dy\nonumber\\
&=(V\delta_1)(x)\int_0^1\Big(\rho(y)-(V\rho)(y)\Big)f(sy)dy.
\nonumber
\end{align}
Then by recalling Eqs \eqref{3.xx}--\eqref{3.19}, we get
\begin{align} \left(E_1^{\rm
hard}\left((0,s);{\textstyle\frac{a-1}{2}}\right)\right)^2&=\det
\left(\mathbb{I}-V^2
-V\delta_1\otimes(\mathbb{I} -V)\rho\right)\nonumber\\
&=\det(\mathbb{I} -V)\det(\mathbb{I} +V)\det\left(\mathbb{I}
-(\mathbb{I} +V)^{-1}\rho\otimes V\delta_1\right)\label{blabla}
\end{align}
But $\mathbb{I} -(\mathbb{I} +V)^{-1}\rho\otimes V\delta_1$ is a rank-one
perturbation of the identity (see e.g.\ \cite[Eq.\ (17)]{TW96}), so its
determinant can be evaluated explicitly. This means that
Eq.~\eqref{blabla} can be written as
$$ \left(E_1^{\rm
hard}\left((0,s);{\textstyle\frac{a-1}{2}}\right)\right)^2=\det(\mathbb{I}
-V)\det(\mathbb{I} +V)\left(1 -\langle\delta_1|(\mathbb{I}
+V)^{-1}V\rho\rangle_{(0,1)}\right).$$The use of
$\langle\rho|\delta_1\rangle=1$ finishes the proof.
\end{proof}
\begin{lemma}\label{L2}
Let $\Delta$ be the operator defined by $(\Delta f)(x)=x\partial_x
f(x)$ and let $\otimes$ be the direct product defined in
Eq.~(\ref{tensprod}). Then, for $V=\ensuremath{V_{(0,1)}^{ \mathrm{\mathrm{hard}}}}$,
$$ 2s\frac{\partial}{\partial s}V=(\mathbb{I} +2\Delta)V,\qquad
\Delta V=-V\Delta+V\delta_1\otimes\delta_1-V,$$ and consequently,
$$s\frac{\partial}{\partial s}(\mathbb{I} +V)^{-1}=\frac{1}{2}(\mathbb{I} -V^2)^{-1}V(\mathbb{I} +2\Delta)-(\mathbb{I} -V^2)^{-1}V\delta_1\otimes\delta_1(\mathbb{I} +V)^{-1}.$$
\end{lemma}
\begin{proof}
Firstly, the definition of $V=\ensuremath{V_{(0,1)}^{ \mathrm{\mathrm{hard}}}}$ ( given in Eq.~\eqref{defV}) and
the property
$s\partial_sJ_{a}(\sqrt{sxt})=x\partial_xJ_{a}(\sqrt{sxt})$ directly
imply that$$ s\frac{\partial}{\partial s}(Vf)(x)=
s\frac{\partial}{\partial
s}\left(\frac{\sqrt{s}}{2}\int_0^1J_{a}(\sqrt{sxt})f(t)dt\right)=\frac{1}{2}(Vf)(x)+(\Delta
Vf)(x)$$ which is the desired result. Secondly, by using
$x\partial_xJ_{a}(\sqrt{sxt})=t\partial_tJ_{a}(\sqrt{sxt})$ and by
integrating by parts, we find
\begin{align} (\Delta
Vf)(x)&=\frac{\sqrt{s}}{2}\int_0^1t\frac{\partial}{\partial
t}\left(J_{a}(\sqrt{sxt})\right)f(t)dt\nonumber\\
&=\frac{\sqrt{s}}{2}J_a(\sqrt{sx})f(1)-\frac{\sqrt{s}}{2}\int_0^1J_{a}(\sqrt{sxt})f(t)dt\nonumber\\
&\qquad\qquad-\frac{\sqrt{s}}{2}\int_0^1J_{a}(\sqrt{sxt})\,t\frac{\partial}{\partial
t}\left(f(t)\right)dt\nonumber\\
&=(V\delta_1)(x)\langle\delta_1|f\rangle_{(0,1)}-(Vf)(x)-(V\Delta
f)(x),\nonumber\end{align} as expected. Finally, by exploiting
$2s{\partial_s}V=(\mathbb{I}+2\Delta)V$, $(\mathbb{I} +V)^{-1}=
\sum_{n\geq0}(-1)^n V^n$ and $(\mathbb{I}
+V)^{-2}=\sum_{n\geq0}(-1)^n(n+1) V^n$, we get
\begin{align}
2s\frac{\partial}{\partial s}(\mathbb{I} +V)^{-1}&=\sum_{n\geq
1}(-1)^n
\,2s\frac{\partial}{\partial s}V^n\nonumber\\
&=\sum_{n\geq 1}(-1)^n \sum_{k=0}^{n-1}V^k
\left(2s\frac{\partial}{\partial
s}V\right)V^{n-k-1}\nonumber\\
&=-V(\mathbb{I} +V)^{-2}+2\sum_{n\geq 1}(-1)^n \sum_{k=0}^{n-1}V^k
\Delta V^{n-k},\nonumber
\end{align}
But,
for any operators $O$ and $P$ such that $OV=-VO-P$, we have
\cite[Lemma 3]{FS05}
$$
\sum_{n\geq 1}(-1)^n \sum_{k=0}^{n-1}V^k O V^{n-k}=(\mathbb{I}
-V^2)^{-1}VO+(\mathbb{I} -V^2)^{-1}P(\mathbb{I} +V)^{-1}.
$$In our case, $O=\Delta$ and $P=-V\delta_1\otimes\delta_1+V$.
Therefore,
\begin{multline*}2s\frac{\partial}{\partial s}(\mathbb{I}
+V)^{-1}=-V(\mathbb{I} +V)^{-2}+2(\mathbb{I} -V^2)^{-1}V(\mathbb{I}
+V)^{-1}\\+2(\mathbb{I} -V^2)^{-1}V\Delta-2(\mathbb{I}
-V^2)^{-1}V\delta_1\otimes\delta_1(\mathbb{I}
+V)^{-1}.\end{multline*} This turns out to be equivalent to the last
equation we wanted to prove.
\end{proof}
\begin{lemma}\label{L3} Let $M$ be a symmetric,
trace class operator in $\mathfrak{L}^2(0,1)$. Then,
$$\mathrm{Tr}\left[(\mathbb{I} +2\Delta)M\right]=\langle\delta_1|M\delta_1\rangle_{(0,1)}.$$
\end{lemma}
\begin{proof} Let $\{f_i\}$ and $\{\lambda_i\}$ denote, respectively, the
orthonormal eigenfunctions and the eigenvalues of $M$. On the one
hand, we have
$$\langle\delta_1|M\delta_1\rangle_{(0,1)}=\sum_i\lambda_if_i(1)^2.$$
On the other hand, we have
$$\mathrm{Tr}[(\mathbb{I} +2\Delta)M]=\sum_i\langle
f_i|(1+2\Delta)Mf_i\rangle_{(0,1)}=\sum_i\lambda_i\left(1+2\langle
f_i|\Delta f_i\rangle_{(0,1)}\right).$$ But integration by parts
gives
$$\langle
f_i|\Delta
f_i\rangle_{(0,1)}=\int_0^1f_i(x)x\frac{\partial}{\partial
x}f_i(x)dx=f_i(1)^2-1-\int_0^1f_i(x)x\frac{\partial}{\partial
x}f_i(x)dx.$$ Consequently, $ \mathrm{Tr}[(\mathbb{I}
+2\Delta)M]=\sum_i\lambda_i f_i(1)^2$ and the lemma follows.
\end{proof}
\begin{prop}
We have
\begin{equation}\label{3.18}
E_1^{\rm hard}\left((0,s);{a-1 \over 2}\right) = \det (\mathbb{I} -
V_{(0,1)}^{\rm hard}),
\end{equation}and consequently $$ \tau_V^+(\sqrt{s};a)=\det (\mathbb{I} -
V_{(0,1)}^{\rm hard}).$$
\end{prop}
\begin{proof} From Lemma \ref{L1}, we know that the proposition is true if
\begin{equation}
\label{3.22a} \det\left((\mathbb{I} -V)(\mathbb{I} +V)^{-1}\right)=
\langle\rho|(\mathbb{I} +V)^{-1}\delta_1\rangle_{(0,1)}
\end{equation}
or equivalently, if
\begin{equation} \label{PE1}\ln\det\left((\mathbb{I} -V)(\mathbb{I} +V)^{-1}\right)= \ln
\langle\delta_1|(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}.\end{equation}But from the fact that
$V\rightarrow0$ as $s\rightarrow0$, we deduce that Eq.~\eqref{PE1}
holds if and only if
$$
s\frac{\partial}{\partial s}\ln\det\left((\mathbb{I} -V)(\mathbb{I} +V)^{-1}\right)= s\frac{\partial}{\partial s}\ln
\langle\delta_1|(\mathbb{I} +V)^{-1}\rho\rangle_{(0,1)}.$$By virtue
of $s\partial_s \ln (\det M)=\mathrm{Tr}(M^{-1}s\partial_sM)$, the
latter equation reads
\begin{equation}\label{PE2}
\mathrm{Tr}\left[(\mathbb{I} -V^2)^{-1}2 s\frac{\partial}{\partial
s}V\right]=
-\frac{\langle\delta_1|s\frac{\partial}{\partial s}(\mathbb{I} +V)^{-1}\rho\rangle_{(0,1)}}{
\langle\delta_1|(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}}.\end{equation}Using the cyclicity of
the trace and Lemma \ref{L3}, we find that
\begin{equation}\label{PE3}
\mathrm{Tr}\left[(\mathbb{I} -V^2)^{-1}2 s\frac{\partial}{\partial
s}V\right]=\mathrm{Tr}\left[(\mathbb{I} -V^2)^{-1}(\mathbb{I}
+2\Delta)V\right]=\langle\delta_1|(\mathbb{I}
-V^2)^{-1}V\delta_1\rangle_{(0,1)}.\end{equation} Furthermore, Lemma
\ref{L2} and $(\mathbb{I} +2\Delta)\rho=0$ imply that
\begin{multline}\label{PE4}
-\frac{\langle\delta_1|s\frac{\partial}{\partial s}(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}}{ \langle\delta_1|(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}}\\=\frac{\langle\delta_1|(\mathbb{I}
-V^2)^{-1}V\delta_1\otimes\delta_1(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}}{ \langle\delta_1|(\mathbb{I}
+V)^{-1}\rho\rangle_{(0,1)}}=\langle\delta_1|(\mathbb{I}
-V^2)^{-1}V\delta_1\rangle_{(0,1)}.\end{multline} The comparison of
Eqs (\ref{PE3})--(\ref{PE4}) finally establishes the validity of
Eq.~(\ref{PE2}), and the proposition follows.\end{proof}
By comparing (\ref{3.18}) with (\ref{3.9y}), and then equating
(\ref{3.9x}) and (\ref{3.20}), we obtain the hard edge analogue of
(\ref{2.23}).
\begin{cor}
One has
\begin{equation}\label{3.23}
\tau_V^{\pm}(\sqrt{s};a) = \det (\mathbb{I} \mp V_{(0,1)}^{\rm hard}).
\end{equation}
\end{cor}
We remark that the evaluation of the hard edge gap probability
(\ref{3.18}), and the identity (\ref{3.23}), contain the
evaluation of the soft edge gap probability (\ref{2V1}), and the
identity (\ref{2.23}), as a limiting case. This follows from the
limit formula (see e.g.~\cite{BF03}),
$$
E_1^{\rm soft}(0;(s,\infty)) = \lim_{a \to \infty} E_1^{\rm
hard}\left(0;(0,a^2-(2a^2)^{2/3}s); {a - 1 \over 2} \right).
$$
\section{Generating function generalization}
\setcounter{equation}{0} The probabilistic quantity $E_2^{\rm
bulk}(0;(0,s))$ is the first member of the sequence $\{E_2^{\rm
bulk}(n;(0,s))\}_{n=0,1,\dots}$ where $E_2^{\rm bulk}(n;(0,s))$
denotes the probability that the interval $(0,s)$ contains exactly
$n$ eigenvalues. Introducing the generating function for this
sequence by
\begin{equation}\label{4.0}
E_2^{\rm bulk}((0,s);\xi) := \sum_{n=0}^\infty (1 - \xi)^n E_2^{\rm
bulk}(n;(0,s)),
\end{equation}
it is well known that \cite{Gaudin}
\begin{eqnarray}\label{4.1}
E_2^{\rm bulk}((0,s);\xi) & = & \det (\mathbb{I} - \xi K_{(0,s)}^{\rm bulk}) \nonumber \\
& = & \det (\mathbb{I} - \xi K_{(0,s)}^{\rm bulk,+})\det (\mathbb{I}
- \xi K_{(0,s)}^{\rm bulk,-}).
\end{eqnarray}
Thus to obtain from the Fredholm determinant expressions (\ref{1.5})
for $E_2^{\rm bulk}(0;(0,s))$ expressions for the generating
function (\ref{4.0}), one merely multiplies the kernel by $\xi$.
This immediately raises the question as to whether the formula
(\ref{g1}) admits a generalization upon multiplying the kernel by
$\xi$? The answer is that it does, with the only change being in the
initial condition (\ref{1.6b}) satisfied by the transcendent
$\sigma(t;a)$ in (\ref{1.6c}). Thus specify $\sigma(t;a)$ as again
satisfying (\ref{1.6a}), but now subject to the boundary condition
$$
\sigma(t;a;\xi) \: \mathop{\sim}\limits_{t \to 0^+} \: {\xi t^{1 +
a} \over 2^{2 + 2a} \Gamma(1 + a) \Gamma(2 + a) }.
$$
Then with
$$
\tau_{\rm III'}(s;a;\xi) := \exp \Big ( - \int_0^s {\sigma(t;a;\xi)
\over t} \, dt \Big )
$$
we have \cite{TW94b,Fo06}
\begin{equation}\label{4.2a}
\det (\mathbb{I}- \xi K_{(0,2s)}^{\rm bulk,\pm}) = \tau_{\rm III'}((\pi s)^2, \mp 1/2; \xi).
\end{equation}
Now, the gap probabilities at the soft and hard edges can similarly
be generalized to generating functions. Thus, in an obvious notation
\begin{align*}
E_2^{\rm soft}((s,\infty);\xi) & = \sum_{n=0}^\infty (1 - \xi)^n
E_2^{\rm soft}(n;(s,\infty)) \\
E_2^{\rm hard}((0,s);a;\xi) & = \sum_{n=0}^\infty (1 - \xi)^n
E_2^{\rm hard}(n;(0,s);a).
\end{align*}
Analogous to (\ref{4.1}), it is fundamental in random matrix theory
that (\ref{2.14}) and (\ref{3.12}) generalize (see e.g.~\cite{Fo02})
to give
\begin{align}\label{4.4}
E_2^{\rm soft}((s,\infty);\xi) & = \det (\mathbb{I}- \xi
\tilde{K}^{\rm soft}_{(0,\infty)} )
\nonumber \\
& = \det ( \mathbb{I} - \sqrt{\xi} V^{\rm soft}_{(0,\infty)} ) \det
( \mathbb{I} + \sqrt{\xi} V^{\rm soft}_{(0,\infty)} )
\end{align}
and
\begin{align}\label{4.5}
E_2^{\rm hard}((0,s);a;\xi) & = \det (\mathbb{I} - \xi \tilde{K}^{\rm
hard}_{(0,1)})
\nonumber \\
& =
\det (\mathbb{I} - \sqrt{\xi} V_{(0,1)}^{\rm hard}) \det (\mathbb{I} + \sqrt{\xi} V_{(0,1)}^{\rm hard}).
\end{align}
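In numerical terms the $\xi$-generalization is immediate: in a
quadrature sketch such as the hard edge one given in Section 3, the
discretized kernel matrix is simply multiplied by $\sqrt{\xi}$ before
the two determinants are formed. Schematically (reusing the matrix $V$
and grid size $n$ from that sketch):
\begin{lstlisting}[language=Python]
# continuing the earlier hard edge sketch:
V_xi = np.sqrt(xi) * V
E2_xi = np.linalg.det(np.eye(n) - V_xi) * np.linalg.det(np.eye(n) + V_xi)
\end{lstlisting}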
Also, analogous to the situation with $E_2^{\rm bulk}((0,s);\xi)$ we
know from \cite{TW94a,TW94b,FW02} that the $\tau$-function formulas
in (\ref{E12}) and (\ref{3.9x}) for $E_2^{\rm soft}(0;(s,\infty))$
and $E_2^{\rm hard}(0;(0,s))$ require only modification to the
boundary condition satisfied by the corresponding transcendent to
generalize to $\tau$-function formulas for the generating functions.
Explicitly, in relation to $E_2^{\rm soft}$, in (\ref{2.9b}) and
(\ref{hH}) again set $\alpha = 0$, but now require that $H_{\rm II}$
and thus $h_{\rm II}$ depend on an auxiliary parameter $\xi$ by
specifying the boundary condition
\begin{equation}\label{Ap}
h_{\rm II}^\pm(t;0;\xi) \mathop{\sim}\limits_{t \to \infty} \pm
{\sqrt{\xi} \over 2} {\rm Ai}(t).
\end{equation}
Then, with
$$
\tau_{\rm II}^\pm(s;\alpha;\xi) = \exp \Big ( - \int_s^\infty h_{\rm
II}^\pm(t;\alpha;\xi) \, dt \Big ),
$$
we have \cite{TW94a}
\begin{equation}\label{4.6a}
E_2^{\rm soft}((s,\infty);\xi) = \tau_{\rm II}^+(s;0;\xi) \tau_{\rm
II}^-(s;0;\xi)
\end{equation}
where the superscripts refer to the corresponding sign in
(\ref{Ap}). And generalizing the identity implied by the equality
between (\ref{2.7}) and (\ref{2.11a}) $\tau_{\rm II}^+$ admits the
further Painlev\'e transcendent form \cite{TW96, FW02}
\begin{equation}\label{4.6}
\tau_{\rm II}^{\pm}(s;0;\xi) = \exp \Big ( - {1 \over 2}
\int_s^\infty (t - s) q^2(t;\xi) \, dt \Big ) \exp \Big ( \mp {1
\over 2} \int_s^\infty q(t;\xi) \, dt \Big )
\end{equation}
where $q(t;\xi)$ satisfies (\ref{2.50}) with $\alpha = 0$ subject to
the boundary condition
\begin{equation}
q(s;\xi) \mathop{\sim}\limits_{s \to \infty} \sqrt{\xi} {\rm Ai}(s).
\end{equation}
At the hard edge again specify $\tilde{h}_V^\pm$ in terms of
$\sigma^\pm$ by (\ref{3.6a}), but now modify the boundary condition
(\ref{3.7}) by multiplying it by $\sqrt{\xi}$ and thus requiring
that
$$
x \tilde{h}_V^{\pm}(x;a;\xi)
\: \mathop{\sim}\limits_{x \to 0^+} \: \mp {\sqrt{\xi} x^{a+1} \over 2^{a+1} \Gamma(a+1) }.
$$
With the corresponding $\tau$ function specified by
$$
\tau_V^{\pm}(s;a;\xi) = \exp \int_0^s \tilde{h}_V^{\pm}(x;a;\xi) \,
dx
$$
we then have \cite{FW02}
\begin{equation}\label{4.8a}
E_2^{\rm hard}((0,s);a;\xi) = \tau_V^+(\sqrt{s};a;\xi) \tau_V^-(\sqrt{s};a;\xi).
\end{equation}
Analogous to (\ref{4.6}) $\tau_V^\pm$ admits the further Painlev\'e
transcendent form \cite{Fo00,FW02}
\begin{equation}\label{4.8}
\tau_V^\pm(s;a;\xi) =
\exp \left( - {1 \over 8} \int_0^s \Big ( \log {s \over t}
\Big ) \tilde{q}^2(t;a;\xi) \, dt \right) \exp \left( \mp {1 \over
4} \int_0^s {\tilde{q}(t;a;\xi) \over \sqrt{t} } \, dt \right)
\end{equation}
where $\tilde{q}(t;a;\xi)$ satisfies (\ref{2.63}) but now with the
boundary condition
$$
\tilde{q}(t;a;\xi) \mathop{\sim}\limits_{t \to 0^+} {\sqrt{\xi}
\over 2^a \Gamma(1+a) } t^{a/2}.
$$
This with $\xi = 1$ reduces (in the "+" case) to the equality
implied by (\ref{3.9y}) and (\ref{3.3}).
The general $\xi$ bulk identity (\ref{4.2a}) leads us to investigate
whether, as is true at $\xi = 1$ according to (\ref{2.23}) and
(\ref{3.23}), the factors in the Fredholm determinant
factorizations (\ref{4.4}), (\ref{4.5}) coincide with those in the
$\tau$-function factorizations (\ref{4.6a}), (\ref{4.8a}). The
answer is that they do coincide, but to show this requires some
intermediate working. We will detail this working for the soft edge,
and be content with a sketch in the hard edge, as the strategy is
very similar.
\begin{lemma}\label{U1}
With $q(t;\xi)$ as in (\ref{4.6})
\begin{equation}\label{La1}
\exp \Big ( - \int_s^\infty q(t;\xi) \, dt \Big ) = 1 -
\int_s^\infty [(\mathbb{I}- \xi K^{\rm soft})^{-1} A^{\rm s}](y)
B^{\rm s}(y) \, dy
\end{equation}
where $A^{\rm s}$ is the operator which multiplies by $\sqrt{\xi}
{\rm Ai}(x)$, while
\begin{equation}\label{w0}
B^{\rm s}(y) := 1 - \sqrt{\xi} \int_y^\infty {\rm Ai}(x) \, dx.
\end{equation}
\end{lemma}
\begin{proof} We closely follow the working in \cite{Fo00}, referring to
equations therein as required. Introduce the notation
$$
\phi(x) = \sqrt{\xi} {\rm Ai}(x), \qquad Q(x) = [(\mathbb{I} - \xi
K^{\rm soft})^{-1} \phi](x)
$$
so that
\begin{equation}\label{w1}
\int_s^\infty[(\mathbb{I} - \xi K^{\rm soft})^{-1} A^{\rm s}](y)
B^{\rm s}(y) \, dy = \int_s^\infty dy \, Q(y) \Big ( 1 -
\int_y^\infty \phi(v) \, dv \Big ) =: u_\epsilon.
\end{equation}
The strategy is to derive coupled differential equations for
$u_\epsilon$ and
\begin{equation}\label{w2}
q_\epsilon := \int_s^\infty dy \, \rho(s,y) \Big ( 1 -
\int_y^\infty \phi(v) \, dv \Big ),
\end{equation}
where $ \rho(s,y)$ denotes the kernel of the integral operator $
(\mathbb{I} - \xi K^{\rm soft})^{-1}$.
According to the working of \cite[eqs.~(3.11)--(3.14)]{Fo00} the
sought equations are
\begin{align}
{d u_\epsilon \over ds} & = - q(s;\xi) q_\epsilon \label{x1} \\
{d q_\epsilon \over ds} & = q(s;\xi) (1 - u_\epsilon ), \label{x2}
\end{align}
where $q(s;\xi)$ enters via the fact that $Q(s) = q(s;\xi)$. Since
$Q(y)$ is smooth while $\rho(s,y)$ is equal to the delta function
$\delta(s-y)$ plus a smooth term, we see from (\ref{w1}), (\ref{w2})
that the equations (\ref{x1}), (\ref{x2}) must be solved subject to
the boundary conditions
$$
u_\epsilon \to 0, \qquad q_\epsilon \to 1 \qquad {\rm as} \quad s \to \infty
$$
It is simple to verify that the solution subject to these boundary
conditions is
$$
u_\epsilon(s) = 1 - q_\epsilon(s) = 1 - \exp\Big ( - \int_s^\infty
q(x;\xi) \, dx\Big ),
$$
and (\ref{La1}) follows.\end{proof}
\begin{lemma}\label{U2}
One has
\begin{equation}\label{x4}
1 - \int_s^\infty [(\mathbb{I} - \xi K^{\rm soft})^{-1} A^{\rm
s}](y) B^{\rm s}(y) \, dy = \langle \delta_0 | (\mathbb{I} +
\sqrt{\xi} V^{\rm soft}_{(0,\infty)})^{-1} 1 \rangle_{(0,\infty)}.
\end{equation}
\end{lemma}
\begin{proof}
Changing variables $y \mapsto y + s$ and noting from
(\ref{w0}) that
$$
B^{\rm s}(y+s) = [(\mathbb{I}- \sqrt{\xi} V_{(0,\infty)}^{\rm soft})(1)](y)
$$
shows that the left hand side of (\ref{x4}) is equal to
$$
1 - \langle \delta_0 | \sqrt{\xi} V_{(0,\infty)}^{\rm soft}
(\mathbb{I} + \sqrt{\xi} V_{(0,\infty)}^{\rm soft} )^{-1} 1
\rangle_{(0,\infty)}.
$$
This reduces to the right hand side upon noting $ \langle \delta_0
|1 \rangle_{(0,\infty)}= 1$. \end{proof}
The sought $\xi$ generalization of (\ref{2.23}) can now be
established.
\begin{prop}
One has
\begin{equation}\label{T1}
\tau_{II}^{\pm}(s;0;\xi) = \det (\mathbb{I} \mp \sqrt{\xi}
V_{(0,\infty)}^{\rm soft}).
\end{equation}
\end{prop}
\begin{proof}
The well known fact \cite{TW94b} that
\begin{equation}\label{qK}
\exp \Big ( - \int_s^\infty (t - s) q^2(t;\xi) \, dt \Big ) = \det
(\mathbb{I} - \xi \tilde{K}^{\rm soft}_{(0,\infty)})
\end{equation}
together with (\ref{4.6}), Lemma \ref{U1} and Lemma \ref{U2} tell us
that
$$
(\tau_{\rm II}^+(s;0;\xi) )^2 = \det ( \mathbb{I} - \xi
\tilde{K}^{\rm soft}) \langle \delta_0 | (\mathbb{I} + \sqrt{\xi}
V_{(0,\infty)}^{\rm soft})^{-1} 1 \rangle.
$$
Recalling (\ref{4.4}) we see that (\ref{T1}) in the "+" case is
equivalent to the identity
\begin{equation}\label{V11}
\det ( \mathbb{I} - \sqrt{\xi} V_{(0,\infty)}^{\rm soft}) =
\det(\mathbb{I} + \sqrt{\xi} V_{(0,\infty)}^{\rm soft} )\langle
\delta_0 | (\mathbb{I} + \sqrt{\xi} V_{(0,\infty)}^{\rm soft})^{-1}
1 \rangle.
\end{equation}
With $\xi = 1$ this is precisely the identity established in
\cite{FS05}. Inspection of the details of the derivation (on which,
as already mentioned, our Lemmas \ref{L1}--\ref{L3} are based) show
that the workings remain valid upon multiplying
$V_{(0,\infty)}^{\rm soft}$ by a scalar, so (\ref{V11}) is true, and
thus so is (\ref{T1}) in the "+" case. The validity of the "$-$"
case now follows from use of (\ref{qK}) and the plus case in
(\ref{4.4}). \end{proof}
At the hard edge, analogous to the result (\ref{T1}) we would like
to show that (\ref{3.23}) admits a $\xi$-generalization. The
$\xi$-generalization of the $\tau$-function on the left hand side is
given by (\ref{4.8a}). In relation to that expression we know that
\cite{TW94b}
$$
\exp \Big ( - {1 \over 4} \int_0^s \Big ( \log {s \over t}
\Big ) \tilde{q}^2(t;a;\xi) \, dt \Big ) = \det (\mathbb{I} - \xi
\tilde{K}^{\rm hard}_{(0,1)})
$$
while the workings of \cite{Fo00} allow us to deduce that
\begin{equation}\label{ss1}
\exp \Big ( - {1 \over 2} \int_0^s {\tilde{q}(t;a;\xi) \over
\sqrt{t} } \, dt \Big ) = 1 - \int_0^s [(\mathbb{I} - \xi K^{\rm
hard})^{-1} A^{\rm h}](y) B^{\rm h}(y) \, dy
\end{equation}
where $A^{\rm h}$ is the operator which multiplies by
$\sqrt{\xi}J_a(\sqrt{x})$, while
$$
B^{\rm h}(y) = {1 \over 2 \sqrt{y}} \Big ( 1 - \sqrt{\xi} \int_0^{\sqrt{y}}
J_a(t) \, dt \Big )
$$
(cf.~(\ref{La1})). Proceeding as in the proof of Lemma \ref{L1} (and
using the notation therein) shows that the right hand side of
(\ref{ss1}) is equal to
$$
\langle \delta_1 | (\mathbb{I}+ \sqrt{\xi} V_{(0,1)}^{\rm
hard})^{-1} \rho \rangle_{(0,1)}.
$$
With these preliminaries noted, our sought result can be established.
\begin{prop}
One has
\begin{equation}
\tau_V^{\pm}(\sqrt{s};a;\xi) = \det (\mathbb{I} \mp \sqrt{\xi}
V_{(0,1)}^{\rm hard}).
\end{equation}
\end{prop}
\begin{proof}
According to the above results, the "+" case is equivalent to
the identity
\begin{equation}\label{vv}
\det( \mathbb{I} - \sqrt{\xi} V_{(0,1)}^{\rm hard} ) = \det(
\mathbb{I} + \sqrt{\xi} V_{(0,1)}^{\rm hard} ) \langle \delta_1 |
(\mathbb{I} + \sqrt{\xi} V_{(0,1)}^{\rm hard})^{-1} \rho
\rangle_{(0,1)},
\end{equation}
which in the case $\xi=1$ is precisely (\ref{3.22a}). The derivation
given of the latter identity carries over unchanged with $V \mapsto
\sqrt{\xi} V$, thus verifying (\ref{vv}). The "minus" case can now
be deduced from (\ref{4.5}). \end{proof}
We conclude by noting a $\xi$-generalization which holds in the bulk
but not at the hard or soft edge. Thus in the bulk, with the
generating function for $\{E_1^{\rm bulk}(n;(0,s))\}_{n=0,1,\dots}$
specified by
$$
E_1^{{\rm bulk, \mp}}((0,s);\xi) = \sum_{n=0}^\infty (1 - \xi)^n
\Big ( E_1^{\rm bulk}(2n;(0,s)) + E_1^{\rm bulk}(2n \mp 1;(0,s))\Big
) ,
$$
the identity (\ref{S1}) admits the simple generalization (see
e.g.~[9])
\begin{equation}\label{f.1}
E_1^{{\rm bulk, \mp}}((0,s);\xi) = \det (\mathbb{I} - \sqrt{\xi}
K_{(0,s)}^{\rm bulk, \pm}).
\end{equation}
However the corresponding $\xi$ generalizations of (\ref{2V1}) and
(\ref{3.18}) cannot hold true, as the corresponding integral
operators are not positive definite, but rather have both positive
and negative eigenvalues. The Fredholm determinant $\det(\mathbb{I}
- \xi V^{\rm soft}_{(0,\infty)})$ (for example) thus vanishes for
some negative $\xi$, in contradiction to the behaviour of
$\sum_{n=0}^\infty (1 - \xi)^n E_1^{\rm soft}(n,(s,\infty))$.
\section*{Acknowledgement}
The work of P.J.F.~has been supported by the Australian Research
Council.
P.D.~is grateful to the Natural Sciences and Engineering Research
Council of Canada for a postdoctoral fellowship. We thank
N.S.~Witte for comments relating to \cite{FS05}.
\section{Introduction}
\subsection{Statement of the problem}
The present paper deals with the inverse problem of
determining the magnetic field and
the time-dependent electric potential in the magnetic Schr\"odinger equation
from the knowledge of boundary observations. Let $\Omega \subset\mathbb{R}^{n}$, $n\geq 3$, be a
bounded and simply connected domain with $\mathcal{C}^{\infty}$ boundary
$\Gamma$. We denote by $\Delta_{A}$ the Laplace operator associated to the
real valued magnetic potential $A\in \mathcal{C}^{3}(\Omega)$ which is defined
by
$$\Delta_{A}=\sum_{j=1}^{n}(\partial_{j}+ia_{j})^{2}=\Delta+2iA\cdot \nabla+i\,\mbox{div}(A)-|A|^{2}.$$
Given $T>0$, we denote by
$Q=\Omega\times (0,T)$
and $\Sigma=\Gamma\times (0,T)$. We consider the following initial boundary
problem for the Schr\"odinger equation
\begin{equation}\label{Eq1}
\left\{
\begin{array}{ll}
( i\partial_{t}+\Delta_{A}+q(x,t))u=0, & \mbox{in} \,Q, \\
u(.,0)=u_{0}, & \mbox{in}\, \Omega, \\
u=f, & \mbox{on} \,\Sigma,
\end{array}
\right.
\end{equation}
where the real valued bounded function $q\in W^{2,\infty}(0,T; W^{1,\infty}(\Omega))$ is the electric
potential. We define the Dirichlet-to-Neumann map associated to the magnetic
Schr\"odinger equation (\ref{Eq1}) as
$$\begin{array}{ccc}
\Lambda_{A,q}: H^{2}(\Omega)\times H^{2,1}(\Sigma)&\longrightarrow& H^{1}(\Omega)\times L^{2}(\Sigma)\\
(u_{0},f)&\longmapsto&\displaystyle\Big(u(.,T),(\partial_{\nu}+iA\cdot \nu)u\displaystyle\Big),
\end{array}$$
where $\nu(x)$ denotes the unit outward normal to $\Gamma$ at $x$, and
$\partial_{\nu} u$ stands for $\nabla u\cdot\nu$. Here $H^{2,1}(\Sigma)$ is a
Sobolev space we shall make precise below. We aim to know whether the knowledge of
the Dirichlet-to-Neumann map $\Lambda_{A,q}$ can uniquely determine the
magnetic and the electric potentials.
The problem of recovering coefficients in the magnetic Schr\"odinger equation
was treated by many authors. In \cite{[MC]}, Bellassoued and Choulli
considered the problem of recovering the magnetic potential $A$ from the
knowledge of the Dirichlet-to-Neumann map
$\Lambda_{A}(f)=(\partial_{\nu}+i\nu\cdot A)u$ \,for \,$f\in L^{2}(\Sigma),$
associated to the Schr\"odinger equation with zero initial data. As it was
noted in \cite{[E]}, the Dirichlet-to-Neumann map $\Lambda_{A}$ is invariant
under the gauge transformation of the magnetic potential. Namely, given
$\varphi \in \mathcal{C}^{1}(\overline{\Omega})$ such that
$\varphi_{|\Gamma}=0$, we have
\begin{equation}\label{eq25}
e^{-i\varphi}\Delta_{A} e^{i\varphi}=\Delta_{A+\nabla\varphi},\,\,\,\,\, e^{-i\varphi}\Lambda_{A}e^{i\varphi}=\Lambda_{A+\nabla\varphi},
\end{equation}
and $\Lambda_{A}=\Lambda_{A+\nabla \varphi}$. Therefore, the magnetic potential $A$ cannot be uniquely determined by the Dirichlet-to-Neumann map $\Lambda_{A}$. In geometric terms, the magnetic potential $A$ defines the connection given by the one-form $\alpha_{A}=\sum_{j=1}^{n}a_{j}dx_{j}.$ The non-uniqueness manifested in (\ref{eq25}) says that the best one can hope to recover from the Dirichlet-to-Neumann map is the 2-form
$$d\alpha_{A}=\sum_{i,j=1}^{n}\Big(\frac{\partial a_{i}}{\partial x_{j}}-\frac{\partial a_{j}}{\partial x_{i}} \Big)dx_{j}\wedge dx_{i},$$
called the magnetic field. Bellassoued and Choulli proved in dimension $n\geq 2$ that the knowledge of the Dirichlet-to-Neumann map $\Lambda_{A}$ H\"older stably determines the magnetic field $d\alpha_{A}$.
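As a quick sanity check, the first identity in (\ref{eq25}) can be
verified symbolically. The sketch below uses Python/SymPy in dimension
$n=2$; the potential $A$ and the gauge function $\varphi$ are sample
choices made purely for illustration.
\begin{lstlisting}[language=Python]
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.Function('u')(x, y)

def mag_laplacian(A, f):
    # Delta_A f = sum_j (d_j + i a_j)^2 f
    out = 0
    for xj, aj in zip((x, y), A):
        g = sp.diff(f, xj) + sp.I * aj * f
        out += sp.diff(g, xj) + sp.I * aj * g
    return sp.expand(out)

phi = x**2 * y                  # sample gauge function
A = (sp.sin(y), x * y)          # sample magnetic potential
gphi = (sp.diff(phi, x), sp.diff(phi, y))
A2 = tuple(a + g for a, g in zip(A, gphi))

lhs = sp.exp(-sp.I * phi) * mag_laplacian(A, sp.exp(sp.I * phi) * u)
rhs = mag_laplacian(A2, u)
print(sp.simplify(lhs - rhs))   # expected output: 0
\end{lstlisting}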
In the presence of a time-independent electric potential, the inverse problem of determining the magnetic field $d\alpha_{A}$ and the electric potential
$q$ from boundary observations was first considered by Sun \cite{[sun]}, in the case $n\geq 3$. He showed that $d\alpha_{A}$ and $q$ can be
uniquely determined when $A\in W^{2,\infty}$, $q\in L^{\infty}$ and $d\alpha_{A}$ is small in the $L^{\infty}$ norm. In \cite{[H]}, Benjoud studied the inverse problem of recovering the magnetic field $d\alpha_{A}$ and the electric potential $q$
from the knowledge of the Dirichlet-to-Neumann map. Assuming that the
potentials are known in a neighborhood of the boundary, she proved a stability estimate with respect to arbitrary partial boundary observations.\\
In the Riemannian case, Bellassoued \cite{[MB]} proved recently a
H\"older-type stability estimate in the recovery of the magnetic field
$d\alpha_{A}$ and the time-independent electric potential $q$ from the
knowledge of the Dirichlet-to-Neumann map associated to the Shr\"odinger
equation with zero initial data. In the absence of the magnetic potential
$A$, the problem of recovering the electric potential $q$ on a compact
Riemannian manifold was solved by Bellassoued and Dos Santos Ferreira \cite{[BD2]}.\\
In recent years significant progress has been made in the recovery of time-dependent and time-independent coefficients appearing in
hyperbolic equations, see for instance \cite{[BJY],[RS],[Is]}. We also refer to the work of Bellassoued and Benjoud \cite{[BH]} in which
they prove that the Dirichlet-to-Neumann map determines
uniquely the magnetic field in a magnetic wave equation. In \cite{[ES]},
Eskin proved that the Dirichlet-to-Neumann map uniquely determines
coefficients depending analytically on the time variable. In \cite{[Stef]},
Stefanov proved that the time-dependent potential $q$ appearing in the wave
equation is uniquely determined from the knowledge of scattering data. In
\cite{[Ram]}, Ramm and Sj\"ostrand proved a uniqueness result in recovering
the time-dependent potential $q$ from the Dirichlet-to-Neumann map, on the
infinite time-space cylindrical domain $\mathbb{R}_{t}\times\Omega$. As for
stability results, we refer to Salazar \cite{[S]}, Waters \cite{[Waters]},
Ben A\"icha \cite{[ibtissem]} and Kian \cite{[Yv]}.
The problem of determining time-dependent electromagnetic potentials
appearing in a Schr\"odinger equation was treated by Eskin \cite{[E]}.
Using a geometric optics construction, he prove the uniqueness for
this problem in domains with obstacles. In unbounded domains and in the
absence of the magnetic potential, Choulli , Kian and Soccorsi \cite{MYE}
treated the problem of recovering the time-dependent scalar potential $q$
appearing in the Schr\"odinger equation from boundary observations. Assuming that the domain is a
$1$-periodic cylindrical waveguide, they proved logarithmic
stability for this problem.
In the present paper, we address the uniqueness and the stability issues in the inverse problem
of recovering the magnetic field $d\alpha_{A}$ and the time-dependent potential $q$ in the dynamical Schr\"odinger equation, from the knowledge
of the operator $\Lambda_{A,q}$. By means of techniques used in
\cite{[MB],[H]}, we prove a "$\log$-type" stability estimate in the recovery of
the magnetic field and a "$\log$-$\log$-$\log$-type" stability inequality in the
determination of the
time-dependent electric potential.
From a physical view point, our inverse problem consists in determining the magnetic field $d\alpha_{A}$
induced by the magnetic potential $A$, and the electric potential $q$ of an
inhomogeneous medium by probing it with disturbances generated on the
boundary. Here we assume that the medium is quiet initially and $f$ denotes the
disturbance used to probe the medium. Our data are
the response $(\partial_{\nu}+iA\cdot\nu)u$ performed on the boundary $\Sigma$, and the measurement $u(.,T)$,
for different choices of $f$ and for all possible initial data $u_{0}$.
\subsection{Well-posedness of the magnetic Schr\"odinger equation and main results }
In order to state our main results, we need the following existence and
uniqueness result. To this end, we introduce the following Sobolev
space
$$H^{2,1}(\Sigma)=H^{2}(0,T;L^{2}(\Gamma))\cap L^{2}(0,T;H^{1}(\Gamma)),$$
equipped with the norm
$$\|f\|_{H^{2,1}(\Sigma)}=\|f\|_{H^{2}(0,T;L^{2}(\Gamma))}+\|f\|_{L^{2}(0,T;H^{1}(\Gamma))},$$
and we set
$$H^{2,1}_{0}(\Sigma)=\{f\in H^{2,1}(\Sigma), \,\,f(.,0)=\partial_{t}f(.,0)=0\}.$$
Then we have the following theorem.
\begin{theorem}\label{Thm1.1}
Let $T>0$ and let $q\in W^{1,\infty}(Q)$, $A\in\mathcal{C}^{1}(\Omega)$ and
$u_{0}\in H^{1}_{0}(\Omega)\cap H^{2}(\Omega)$. Suppose that $f\in
H^{2,1}_{0}(\Sigma)$. Then, there exists a unique solution $u\in
\mathcal{C}(0,T; H^{1}(\Omega))$ of the Shr\"odinger equation (\ref{Eq1}).
Furthermore, we have $\partial_{\nu}u\in L^{2}(\Sigma)$ and there exists a constant
$C>0$ such that
$$\|u(.,t)\|_{H^{1}(\Omega)}+\|\partial_{\nu}u\|_{L^{2}(\Sigma)}\leq C\para{\|u_{0}\|_{H^{2}(\Omega)}+\|f\|_{H^{2,1}(\Sigma)}}.$$
\end{theorem}
As a corollary, the Dirichlet-to-Neumann map $\Lambda_{A,q}$ is
bounded from $ H^{2}(\Omega)\times H^{2,1}(\Sigma)$ to $H^{1}(\Omega)\times
L^{2}(\Sigma).$
The proof of Theorem \ref{Thm1.1} is given in Appendix A.\\
In order to express the main results of
this article, we first define the following admissible sets of unknown
coefficients $A$ and $q$: for $\varepsilon >0$, $M>0$, we set
$$\mathcal{A}_{\varepsilon}=\{A\in C^{3}(\Omega),\,\,\,\|A\|_{W^{3,\infty}(\Omega)}\leq \varepsilon,\,\,\,\,\,\,\,A_{1}=A_{2}\,\,\mbox{on}\,\Gamma\},$$
$$\mathcal{Q}_{M}=\{q\in \mathcal{X}=W^{2,\infty}(0,T;W^{1,\infty}(\Omega)),\,\,\,\|q\|_{\mathcal{X}}\leq M,\,\,\,\,\,\,q_{1}=q_{2}\,\,\,\,\mbox{on}\,\Gamma\}.$$
Our first main result claims stable determination of the magnetic field
$d\alpha_{A}$, from full boundary measurement $\Lambda_{A,q}$ on the
cylindrical domain $Q$.
\begin{theorem}\label{Thm1} Let $\alpha>\frac{n}{2}+1$. Let $q_{i}\in
\mathcal{Q}_{M}$, $A_{i}\in \mathcal{A}_{\varepsilon}$, such that
$\|A_{i}\|_{H^{\alpha}(\Omega)}\leq M$, for $i=1,\,2$. Then, there exist three
constants $C>0$ and $\mu,s\in(0,1),$ such that we have
$$\|d{\alpha_{A_{1}}}-d{\alpha_{A_{2}}}\|_{L^{\infty}(\Omega)}\leq C\para{\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{1/2}+|\log \|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\||^{-\mu}}^{s}.$$
Here $C$ depends only on $\Omega$, $\varepsilon$, $M$ and $T$.
\end{theorem}
Next, assuming that the magnetic potential $A$ is divergence free, we
can stably retrieve the electric potential.
\begin{theorem}\label{Thm2}
Let $q_{i}\in \mathcal{Q}_{M}$, $A_{i}\in \mathcal{A}_{\varepsilon}$, for
$i=1,\,2$. Assume that div $A_{i}=0$. Then there exist three constants $C>0$, and
$m, \mu\in(0,1)$, such that we have
$$\|q_{1}-q_{2}\|_{H^{-1}(Q)}\leq
C \Phi_{m}(\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|),$$
where
$$
\Phi_{m}(\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|)=
\left\{
\begin{array}{lll}
| \log\,|\log|\log\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\||^{\mu} |\,|^{-1} &\,\mbox{if}\,\,\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|<m,\\
\\
\displaystyle\frac{1}{m} \|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\| &\,\mbox{if}\,\,\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|\geq m.
\end{array}
\right.
$$
Here $C$ depends on $\Omega$, $M$, $\varepsilon$ and $T$.
\end{theorem}
The text is organized as follows. Section \ref{Sec2} is devoted to the construction of special
geometrical optics solutions to the Shr\"odinger equation (\ref{Eq1}). Using
these particular solutions, we establish in sections \ref{Sec3} and \ref{Sec4}, two
stability estimates for the magnetic field and the electric potential. In
Appendix A, we develop the proof of Theorem\ref{Thm1.1}. Appendix B contains the proof of several
technical results used in the derivation of the main results.
\section{Preliminaries and geometrical optics solutions }\label{Sec2}
The present section is devoted to the construction of suitable geometrical
optics solutions, which are key ingredients in the proof of our main results.
We start by collecting several known lemmas from \cite{[R1],[R2]}.
\subsection{Preliminaries}
Let $\omega=\omega_{\Re}+i\omega_{\Im}$ be a vector with
$\omega_{\Re},\,\omega_{\Im}\in \mathbb{S}^{n-1}$, and
$\omega_{\Re}\cdot\omega_{\Im}=0$. We shall see that the differential
operator $N_{\omega}=\omega\cdot\nabla$ is invertible
and we have
$$N_{\omega}^{-1}(g)(x)=\displaystyle\frac{1}{({2\pi})^{n}}\displaystyle\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\displaystyle
\para{\displaystyle\frac{\hat{g}(\xi)}{\omega\cdot\xi}}d\xi=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}\frac{1}{y_{1}+iy_{2}}g(x-y_{1}\omega_{\Re}-y_{2}\omega_{\Im})
\,dy_{1}\,dy_{2}.$$ Notice that the differential operator
Notice that, in dimension $n=2$, the Cauchy--Riemann operator
$\overline{\partial}$ corresponds (up to a factor) to $N_{\omega}$ with $\omega_{\Re}=(1,0)$ and $\omega_{\Im}=(0,1)$.
\begin{Lemm}\label{Lm2.1}
Let $r>0$, $k>0$ and let $g \in W^{k,\infty}(\mathbb{R}^{n})$ be such that Supp $g
\subseteq B(0,r)=\{x\in\mathbb{R}^{n},\,\,\,|x|\leq r\}$. Then the function
$\phi = N_{\omega}^{-1}(g) \in W^{k,\infty}(\mathbb{R}^{n})$ solves $N_{\omega}(\phi)=g$, and satisfies the estimate
$$\|\phi\|_{W^{k,\infty}(\mathbb{R}^{n})}\leq C \,\|g\|_{W^{k,\infty}(\mathbb{R}^{n})},$$
where $C$ is a positive constant depending only on $r$.
\end{Lemm}
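For intuition, $N_{\omega}^{-1}$ can be realized numerically on a
periodic box by division in Fourier space: in dimension $n=2$,
$\omega\cdot\xi=\omega_{\Re}\cdot\xi+i\,\omega_{\Im}\cdot\xi$ vanishes
only at $\xi=0$, since $\omega_{\Re}$ and $\omega_{\Im}$ span
$\mathbb{R}^{2}$. A rough sketch (Python/NumPy; the box size and grid
are illustrative, and the zero Fourier mode is projected out, which is
harmless since $N_{\omega}$ annihilates constants):
\begin{lstlisting}[language=Python]
import numpy as np

def N_omega_inverse(g, L, w_re, w_im):
    # g: n-by-n samples of g on [-L,L)^2, supported well inside the box
    n = g.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)
    KX, KY = np.meshgrid(k, k, indexing='ij')
    wdotk = (w_re[0] + 1j * w_im[0]) * KX + (w_re[1] + 1j * w_im[1]) * KY
    ghat = np.fft.fft2(g)
    ghat[0, 0] = 0.0   # project out the zero mode
    wdotk[0, 0] = 1.0  # avoid 0/0; the numerator there is already zero
    return np.fft.ifft2(ghat / (1j * wdotk))
\end{lstlisting}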
We recall from \cite{[Sa]}, the following technical result.
\begin{Lemm}\label{Lm2.2}
Let $A\in C_{c}(\mathbb{R}^{n})$, $\xi\in\mathbb{R}^{n}$, and
$\omega=\omega_{\Re}+i\omega_{\Im}$ with
$\omega_{\Re},\,\omega_{\Im}\in\mathbb{S}^{n-1}$ and
$\omega_{\Re}\cdot\omega_{\Im}=\omega_{\Re}\cdot\xi=\omega_{\Im}\cdot\xi=0$.
Then we have the following identity
$$\int_{\mathbb{R}^{n}}\omega\cdot A(x)e^{iN_{\omega}^{-1}(-\omega\cdot A)(x)}e^{i\xi\cdot x}\,dx=\int_{\mathbb{R}^{n}}\omega\cdot A(x)e^{i\xi\cdot x}dx.$$
\end{Lemm}
\subsection{Geometrical optics solutions}
In this section, we build special solutions to the magnetic Schr\"odinger
equation (\ref{Eq1}), inspired by techniques used in elliptic problems. For
this purpose, we consider a vector $\omega=\omega_{\Re}+i\,\omega_{\Im}$,
such that $\omega_{\Re},\,\omega_{\Im}\in \mathbb{S}^{n-1}$ and
$\omega_{\Re}\,.\,\omega_{\Im}=0$. For $\sigma>1$, we define the complex
variable $\rho$ as follows
\begin{equation}\label{Eq2}
\rho=\sigma\omega+y,
\end{equation}
where $y\in B(0,1)$ is fixed and independent of $\sigma$. In what follows,
$P(D)$ denotes a differential operator with constant coefficients:
$$P(D)=\sum_{|\alpha|\leq m}a_{\alpha}\,D^{\alpha},\,\,\,\,\,\,\,\,\,\,D=-i(\partial_{t},\partial_{x}).$$
We associate to the operator $P(D)$ its symbol $p(\xi,\tau)$ defined by
$$p(\xi,\tau)=\sum_{|\alpha|\leq m}
a_{\alpha}(\xi,\tau)^{\alpha},\,\,\,\,\,\,\,\,\,\,(\xi,\tau)\in\mathbb{R}^{n+1}.$$
Moreover, we set
$$\widetilde {p}(\xi,\tau)=\para{\sum_{\beta\in\mathbb{N}} \sum_{\alpha\in\mathbb{N}^{n}}|\partial_{\tau}^{\beta}\partial_{\xi}^{\alpha}p(\xi,\tau)|^{2}}^{\frac{1}{2}},\,\,
\,\,\,(\xi,\tau)\in\mathbb{R}^{n+1},$$ and introduce the operators
$$\Delta_{\rho}=\Delta-2i\rho\cdot\nabla\,\,\,\,\,\, \mbox{and}\,\,\,\,\,\, \nabla_{\rho}=\nabla-i\rho.$$
We turn now to building particular solutions to the magnetic Shr\"odinger equation. We
proceed with a succession of lemmas. The first result is inspired by
H\"ormander \cite{[L.H]} (see Appendix B).
\begin{Lemm}\label{Lemme 2.3}
Let $P\neq0$ be a differential operator with constant coefficients. There exists a linear operator $E\in\mathcal{L}(
L^{2}(0,T;H^{1}(\Omega))),$
such that:
$$P(D)E f=f, \,\,\,\,\,\,\mbox{for\,\,any}\,\, f\in L^{2}(0,T;H^{1}(\Omega)).$$
Moreover, for any linear operator $S$ with constant coefficients such that
$\displaystyle\frac{|S(\xi,\tau)|}{\tilde{p}(\xi,\tau)}$ is bounded
in $\mathbb{R}^{n+1}$, we have the following estimate
\begin{equation}\label{Eq2.5}
\|S(D)E f\|_{L^{2}(0,T;
H^{1}(\Omega))}\leq C \,\displaystyle\sup_{\mathbb{R}^{n+1}}\frac{|S(\xi,\tau)|}{\tilde{p}(\xi,\tau)}\|f\|_{L^{2}(0,T;H^{1}(\Omega))}.
\end{equation}
Here $C$ depends only on the degree of $P$, $\Omega$ and $T$.
\end{Lemm}
\begin{Lemm}\label{Lemme 2.4}
There exists a bounded operator $E_{\rho}:L^{2}(0,T;H^{1}(\Omega))\longrightarrow
L^{2}(0,T;H^{2}(\Omega))$ such that
$$P_{\rho}(D) E_{\rho}f=(i\partial_{t}+\Delta_{\rho})E_{\rho}f=f\quad\mbox{for\,any}\quad f\in L^{2}(0,T;H^{1}(\Omega)).$$
Moreover, there exists a constant $C(\Omega,T)>0$ such that
\begin{equation}\label{Eq2.6}
\|E_{\rho}f\|_{L^{2}(0,T;H^{k}(\Omega))}\leq \frac{C}{\sigma^{2-k}}\|f\|_{L^{2}(0,T;H^{1}(\Omega))},\,\,\,\,\,k=1,\,2.
\end{equation}
\end{Lemm}
\begin{proof}{}
From Lemma \ref{Lemme 2.3}, we deduce the existence of a linear operator
$E_{\rho}\in \mathcal{L}\Big(L^{2}(0,T;H^{1}(\Omega))\Big)$ such that
$P_{\rho}(D) E_{\rho}f=f$.
Moreover, since $|\widetilde{p}_{\rho}(\xi,\tau)|>\sigma$,
we get from (\ref{Eq2.5})
\begin{equation}\label{aj1}
\|E_{\rho}f\|_{L^{2}(0,T;H^{1}(\Omega))}\leq \frac{C}{
\sigma} \|f\|_{L^{2}(0,T;H^{1}(\Omega))}.
\end{equation}
Similarly, since $\displaystyle\frac{|\xi|}{\widetilde{p_{\rho}}(\xi,\tau)}$ is bounded on
$\mathbb{R}^{n+1}$, we get
$$\|\nabla E_{\rho}f\|_{L^{2}(0,T;H^{1}(\Omega))}\leq C
\|f\|_{L^{2}(0,T;H^{1}(\Omega))}.$$ From this and (\ref{aj1}) we see that $E_{\rho}$ is bounded from $L^{2}(0,T;H^{1}(\Omega))$ into $L^{2}(0,T;H^{2}(\Omega))$.
\end{proof}
Let us now deduce the following statement from the above lemma.
\begin{Lemm}\label{Lemme 2.5}
There exists $\varepsilon>0$ such that for all $A\in W^{1,\infty}(\Omega)$ obeying
$\|A\|_{W^{1,\infty}(\Omega)}\leq \varepsilon,$ we may build a bounded operator
$F_{\rho}:L^{2}(0,T;H^{1}(\Omega))\longrightarrow L^{2}(0,T;H^{2}(\Omega))$
such that:
\begin{equation}\label{aj3}
\big(i\partial_{t}+\Delta_{\rho}+2iA\cdot\nabla\big)
F_{\rho}f=f,\quad\mbox{for\,any}\quad f\in L^{2}(0,T;H^{1}(\Omega)).
\end{equation}
Moreover, there exists a constant $C(\Omega, T)>0$ such that
\begin{equation}\label{eq2.9}
\|F_{\rho}f\|_{L^{2}(0,T;H^{k}(\Omega))}\leq \frac{C}{\sigma^{2-k}}\|f\|_{L^{2}(0,T;H^{1}(\Omega))},\,\,\,\,\,k=1,\,2.
\end{equation}
\end{Lemm}
\begin{proof}{}
Let $f\in L^{2}(0,T; H^{1}(\Omega)).$
We start by introducing the following operator
$$\begin{array}{rrr}
S_{\rho}: L^{2}(0,T; H^{2}(\Omega))&\longrightarrow &L^{2}(0,T; H^{2}(\Omega))\\
g&\longmapsto& E_{\rho}(-2i A\cdot\nabla g+f).
\end{array}$$
Since $\|A\|_{W^{1,\infty}(\Omega)}\leq
\varepsilon,$ we deduce from (\ref{Eq2.6}) with $k=2$ that
\begin{eqnarray}\label{Equation 2.9}
\|S_{\rho}(h)-S_{\rho}(g)\|_{L^{2}(0,T; H^{2}(\Omega))}&\leq& C\varepsilon \|h-g\|_{L^{2}(0,T; H^{2}(\Omega))},
\end{eqnarray}
for any $h,\,g\in L^{2}(0,T;H^{2}(\Omega))$. Thus, $S_{\rho}$ is a contraction from $L^{2}(0,T;H^{2}(\Omega))$ into $L^{2}(0,T;H^{2}(\Omega))$ for $\varepsilon$ small enough. Then, $S_{\rho}$ admits a unique fixed
point $g\in L^{2}(0,T; H^{2}(\Omega))$. Put $F_{\rho}f=g$. It is clear that
$F_{\rho}f$ is a solution to (\ref{aj3}). Then, taking into account the
identity $S_{\rho}F_{\rho} f=E_{\rho}(-2iA\cdot \nabla F_{\rho}f+f)$ and the
estimate (\ref{Equation 2.9}), we get
$$\begin{array}{lll}
\|F_{\rho}f\|_{L^{2}(0,T;H^{2}(\Omega))}&\leq&\|S_{\rho}F_{\rho}f-S_{\rho}(0)\|_{L^{2}(0,T; H^{2}(\Omega))}+\|S_{\rho}(0)\|_{L^{2}(0,T; H^{2}(\Omega))} \\
&\leq& C \varepsilon
\|F_{\rho}f\|_{L^{2}(0,T;H^{2}(\Omega))}+\|E_{\rho}f\|_{L^{2}(0,T;H^{2}(\Omega))}.
\end{array}$$
From this and (\ref{Eq2.6}) with $k=2$, we end up getting for $\varepsilon$ small enough
\begin{equation}\label{l'aequation 2.11}
\|F_{\rho}f\|_{L^{2}(0,T;H^{2}(\Omega))}\leq
{C}\|f\|_{L^{2}(0,T;H^{1}(\Omega))}.
\end{equation}
This being said, it remains to show (\ref{eq2.9}) for $k=1$. To see this, we
notice from (\ref{Eq2.6}) with $k=1$ that
$$\begin{array}{lll}
\|F_{\rho}f\|_{L^{2}(0,T;H^{1}(\Omega))}&\leq& \|E_{\rho}(-2iA\cdot \nabla F_{\rho}f+f)\|_{L^{2}(0,T;H^{1}(\Omega))}\\
&\leq& \displaystyle\frac{C}{\sigma}\para{\varepsilon\|F_{\rho}f\|_{L^{2}(0,T;H^{2}(\Omega))}+\|f\|_{L^{2}(0,T;H^{1}(\Omega))}}.
\end{array}$$
Then the estimate (\ref{eq2.9}) for $k=1$ follows readily from this and
(\ref{l'aequation 2.11}).
\end{proof}
\begin{Lemm}\label{Lemme 2.6}
There exists $\varepsilon>0$ such that for all $A\in W^{1,\infty}(\Omega)$ obeying
$\|A\|_{W^{1,\infty}(\Omega)}\leq \varepsilon,$ we may build a bounded operator
$G_{\rho}:L^{2}(0,T;H^{1}(\Omega))\longrightarrow L^{2}(0,T;H^{2}(\Omega))$
such that:
\begin{equation}\label{l'equation 2.12}
\big(i\partial_{t}+\Delta_{\rho}+2i A\cdot\nabla_{\rho}\big)G_{\rho}f=f\quad \mbox{for\,any}\quad f\in L^{2}(0,T;H^{1}(\Omega)).
\end{equation}
Moreover, there exists a constant $C(\Omega,T)>0 $ such that
\begin{equation}\label{l'equation 2.13}
\|G_{\rho}f\|_{L^{2}(0,T;H^{k}(\Omega))}\leq \frac{C}{\sigma^{2-k}}\|f\|_{L^{2}(0,T;H^{1}(\Omega))},
\,\,\,\,\,\,k=1,\,2.
\end{equation}
\end{Lemm}
\begin{proof}{}
Let $f\in L^{2}(0,T;H^{1}(\Omega))$. We introduce the following operator
$$\begin{array}{rrr}
R_{\rho}:L^{2}(0,T;H^{1}(\Omega))&\longrightarrow& L^{2}(0,T;H^{1}(\Omega))\\
g&\longmapsto& F_{\rho}(-2\rho\cdot A g+f)
\end{array}$$
From (\ref{Eq2}), we see that $|\rho|< 3\sigma$. Thus, arguing as in the
proof of Lemma \ref{Lemme 2.5}, we prove the existence of a unique solution
$G_{\rho}f=g$ to the equation (\ref{l'equation 2.12}). Moreover, there exists
a constant $C>0$ such that
\begin{equation}\label{l'equation 2.14}
\|G_{\rho}f\|_{L^{2}(0,T;H^{1}(\Omega))}\leq \frac{C}{\sigma}\|f\|_{L^{2}(0,T;H^{1}(\Omega))}.
\end{equation}
Further, combining the definition of $R_{\rho}$ with (\ref{eq2.9}) and (\ref{l'equation 2.14}), we deduce
(\ref{l'equation 2.13}) for $k=2$.
\end{proof}
Armed with Lemma \ref{Lemme 2.6}, we are now in position to establish the
main result of this section, which can
be stated as follows.
\begin{Lemm}\label{Prop3.1} Let $M>0$, $\varepsilon>0$, $\omega\in\mathbb{S}^{n-1}$ and $A\in \mathcal{A}_{\varepsilon}$ satisfy $\|A\|_{W^{1,\infty}(\Omega)}\leq
\varepsilon$. Put $\phi=N_{\omega}^{-1}(-\omega\cdot A)$. Then, for all
$\sigma\geq \sigma_{0}>0$ the magnetic Schr\"odinger equation
\begin{equation}\label{Eq6}
(i\partial_{t}+\Delta_{A}+q(x,t))u(x,t)=0,\,\,\,\,\,\mbox{in}\,\,Q
\end{equation}
admits a solution $u\in H^{2}(0,T;H^{1}(\Omega))\cap
L^{2}(0,T;H^{2}(\Omega)),$ of the form
\begin{equation}\label{Eq7}
u(x,t)=e^{-i\big((\rho\cdot\rho)t+x\cdot\rho\big)}\big(e^{i\phi(x)}+w(x,t)\big),
\end{equation}
in such a way that
\begin{equation}\label{eq}
\omega\cdot\nabla\phi(x)=-\omega\cdot A(x),\,\,\,\,\,x\in\mathbb{R}^{n}.
\end{equation}
Moreover, $w\in H^{2}(0,T;H^{1}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))$
satisfies
\begin{equation}\label{Equation 2.17}
\sigma\|w\|_{H^{2}(0,T;H^{1}(\Omega))}+\|w\|_{L^{2}(0,T;H^{2}(\Omega))}\leq C,
\end{equation}
where the constants $C$ and $\sigma_{0}$ depend only on $\Omega, T$ and
$M.$
\end{Lemm}
Here we extended $A$ by zero outside $\Omega$.
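Let us also note, as a sketch of why such solutions are only exponentially large in $\sigma$: since $|\omega_{\Re}|=|\omega_{\Im}|=1$ and $\omega_{\Re}\cdot\omega_{\Im}=0$, we have $\omega\cdot\omega=|\omega_{\Re}|^{2}-|\omega_{\Im}|^{2}+2i\,\omega_{\Re}\cdot\omega_{\Im}=0$, and hence, $y$ being real,
$$\rho\cdot\rho=2\sigma\,\omega\cdot y+|y|^{2},\,\,\,\,\,\,\,\,\,\,\big|e^{-i\big((\rho\cdot\rho)t+x\cdot\rho\big)}\big|=e^{2\sigma t\,\omega_{\Im}\cdot y+\sigma\,\omega_{\Im}\cdot x}\leq e^{C\sigma}\quad\mbox{on}\,\,Q.$$
This computation is behind the bounds of the type $\|g\|\leq Ce^{C\sigma}$ and $\|\phi\|\leq Ce^{C\sigma}$ used in Sections \ref{Sec3} and \ref{Sec4}.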
\begin{proof}{} To prove our lemma, it is enough to show that $w\in H^{2}(0,T;H^{1}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega)) $
satisfies the estimate (\ref{Equation 2.17}). Substituting (\ref{Eq7}) into
the equation (\ref{Eq6}), one gets
$$\begin{array}{lll}
\displaystyle\Big(i\partial_{t}+\Delta_{\rho}+2iA(x)\cdot\nabla_{\rho}+h(x,t)\Big)w(x,t)
\!\!\!&=&\!\!\!-e^{i\phi(x)}\Big(i\Delta\phi(x)-|\nabla\phi(x)|^{2}+2\sigma\omega\cdot \nabla\phi(x)+2\sigma\omega\cdot A(x)\\
&&\,\,\,\,\,+2y\cdot \nabla\phi(x)+2A(x)\cdot y-2A(x)\cdot \nabla\phi(x)+h(x,t)\Big),
\end{array}$$
where $h(x,t)=i\mbox{div}A(x)-|A(x)|^{2}+q(x,t)$. Equating the coefficient of
the power $\sigma$ to zero, we get $\omega\cdot\nabla\phi(x)=-\omega\cdot
A(x)$ for all $x\in\mathbb{R}^{n}.$ Then $w$ solves the following equation
\begin{equation}\label{Equation 2.19}
\para{i\partial_{t}+\Delta_{\rho}+2iA(x)\cdot\nabla_{\rho}+h(x,t)}w(x,t)=L(x,t),
\end{equation}
where
\begin{equation}\label{h}
L(x,t)=-e^{i\phi(x)}\big(i\Delta\phi(x)-|\nabla\phi(x)|^{2}+2y\cdot\nabla\phi(x)+2A(x)\cdot
y-2A\cdot\nabla\phi(x)+h(x,t)\big).
\end{equation}
In light of (\ref{Equation 2.19}), we introduce the following map
$$\begin{array}{rrr}
U_{\rho}:L^{2}(0,T;H^{1}(\Omega))&\longrightarrow& L^{2}(0,T;H^{1}(\Omega)),\\
w&\longmapsto&G_{\rho}(-w\,h+L).
\end{array}$$
Applying (\ref{l'equation 2.13}) with $k=1$ and $f=h\,(w-\tilde{w})$, we get
for all $w,\tilde{w}\in L^{2}(0,T;H^{1}(\Omega))$ that
$$\begin{array}{lll}
\|U_{\rho}(w)-U_{\rho}(\tilde{w})\|_{L^{2}(0,T;H^{1}(\Omega))}&=&\|G_{\rho}(h\,(w-\tilde{w}))\|_{L^{2}(0,T;H^{1}(\Omega))}\\
&\leq&\displaystyle\frac{C}{\sigma}\|h\|_{\mathcal{X}}\|w-\tilde{w}\|_{L^{2}(0,T;H^{1}(\Omega))}.
\end{array}$$
Take $\sigma_{0}$ sufficiently large so that
$\sigma_{0}>2C\|h\|_{\mathcal{X}}$; then, for each $\sigma>\sigma_{0}$, $U_{\rho}$
admits a unique fixed point $w\in L^{2}(0,T;H^{1}(\Omega))$ such that
$U_{\rho}(w)=w$. Again, applying (\ref{l'equation 2.13}) with $k=1$ and
$f=-hw+L$, one gets
$$\begin{array}{lll}
\|w\|_{L^{2}(0,T;H^{1}(\Omega))}&=&\|G_{\rho}(-hw+L)\|_{L^{2}(0,T;H^{1}(\Omega))}\\
&\leq&\displaystyle \frac{1}{2}\|w\|_{L^{2}(0,T;H^{1}(\Omega))}+\frac{C}{\sigma}\|L\|_{L^{2}(0,T;H^{1}(\Omega))}.
\end{array}$$
Therefore, in view of Lemma \ref{Lm2.1} and (\ref{h}), we get
\begin{equation}\label{w}
\|w\|_{L^{2}(0,T;H^{1}(\Omega))}
\leq \displaystyle\frac{C}{\sigma}.
\end{equation}
Next, differentiating the equation (\ref{Equation 2.19}) once and twice with respect
to $t$,
taking
into account that $\|h\|_{\mathcal{X}}$ is uniformly bounded with respect to
$\sigma$, and proceeding as before, we show that
\begin{equation}\label{wt}
\|\partial_{t}^{k}w\|_{L^{2}(0,T;H^{1}(\Omega))}\leq
\frac{C}{\sigma},\,\,\,\,\,k=1,2.\end{equation}
Finally, from (\ref{w}) and Lemma \ref{Lm2.1}, we obtain
\begin{eqnarray}\label{l'equation 2.23}
\|w\|_{L^{2}(0,T;H^{2}(\Omega))}&\leq& C\|-wh+L\|_{L^{2}(0,T;H^{1}(\Omega))}\cr
&\leq&C\Big(\displaystyle\frac{C}{\sigma}\|h\|_{\mathcal{X}}+C \Big)\cr
&\leq&C,
\end{eqnarray}
by applying (\ref{l'equation
2.13}) with $k=2$ and $f=-wh+L$. Thus, we get the desired result by combining (\ref{w})-(\ref{l'equation
2.23}).
\end{proof}
\section{Stability estimate for the magnetic field}\label{Sec3}
In this section, we prove Theorem \ref{Thm1} by means of the geometrical
optics solutions
\begin{equation}\label{Eq10}
u_{j}(x,t)=e^{-i\big((\rho_{j}\cdot\rho_{j})t+x\cdot\rho_{j}\big)}\Big(e^{i\phi_{j}(x)}+w_{j}(x,t)\Big),\quad
j=1,2,
\end{equation}
associated with $A_{j}$ and $q_{j}$. Here we choose
$\rho_{j}=\sigma\omega_{j}^{*}$ and we recall that the correction term $w_{j}$
satisfies (\ref{Equation 2.17}) and
that $\phi_{j}(x)=N^{-1}_{\omega_{j}^{*}}(-\omega_{j}^{*}.A_{j})$ solves the
transport equation
$$\omega_{j}^{*}\cdot\nabla \phi_{j}(x)=-\omega_{j}^{*}\cdot A_{j}(x),\,\,\,\,\,\,x\in\mathbb{R}^{n}.$$
Let us specify the choice of $\rho_{j}$: we consider $\xi\in\mathbb{R}^{n}$ and $\omega=\omega_{\Re}+i\omega_{\Im}$ with $\omega_{\Re},\,\omega_{\Im}\in \mathbb{S}^{n-1}$ and $\omega_{\Re}\cdot\omega_{\Im}=\xi\cdot\omega_{\Re}=\xi\cdot\omega_{\Im}=0$. For each $\sigma>|\xi|/{2}$, we
denote
\begin{equation}\label{Eq8}
\rho_{1}=\sigma\para{i\omega_{\Im}+\para{-\frac{\xi}{2\sigma}+\sqrt{1-\frac{|\xi|^{2}}{4\sigma^{2}}}\omega_{\Re}}}=\sigma\omega_{1}^{*},
\end{equation}
\begin{equation}\label{Eq9}
\rho_{2}=\sigma\para{-i\omega_{\Im}+\para{\frac{\xi}{2\sigma}+\sqrt{1-\frac{|\xi|^{2}}{4\sigma^{2}}}\omega_{\Re}}}=\sigma\omega_{2}^{*}.
\end{equation}
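Notice that $\rho_{j}\cdot\rho_{j}=0$ and $\rho_{2}-\overline{\rho_{1}}=\xi$. As a quick verification sketch (using only $|\omega_{\Re}|=|\omega_{\Im}|=1$ and the orthogonality relations above, which make all cross terms vanish):
$$\rho_{j}\cdot\rho_{j}=\sigma^{2}\para{-|\omega_{\Im}|^{2}+\frac{|\xi|^{2}}{4\sigma^{2}}+\para{1-\frac{|\xi|^{2}}{4\sigma^{2}}}}=0,$$
while $\rho_{2}$ and $\overline{\rho_{1}}$ have the same imaginary part $-\sigma\omega_{\Im}$ and their real parts differ exactly by $\xi$.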
In this section, we aim to recover
the magnetic field $d\alpha_{A}$ from the boundary operator
$$\begin{array}{ccc}
\Lambda_{A,q}:L^{2}(\Omega)\times H^{2,1}(\Sigma)&\longrightarrow& H^{1}(\Omega)\times L^{2}(\Sigma)\\
g=(u_{0},f)&\longmapsto&\displaystyle\Big(u(.,T),(\partial_{\nu}+iA\cdot\nu)u\displaystyle\Big).
\end{array}$$
We denote by
$$\Lambda_{A,q}^{1}=u(.,T),\quad \Lambda_{A,q}^{2}=(\partial_{\nu}+iA\cdot \nu)u.$$
We first establish an orthogonality identity for the magnetic potential $A=A_{1}-A_{2}$.
\subsection{A basic identity for the magnetic potential}
In this section, we derive an identity relating the magnetic
potential $A$ to the solutions $u_{j}$. We start by the following result.
\begin{Lemm}\label{Lm4.2}
Let $\varepsilon>0$, $A_{j}\in\mathcal{A}_{\varepsilon}$ and $u_{j}$ be the
solutions given by (\ref{Eq10}) $j=1,\,2$. Then, for all $\xi\in\mathbb{R}^{n}$ and
$\sigma>\max(\sigma_{0},|\xi|/2)$, we have
$$\int_{Q}iA(x)\cdot\big(\overline{u_{1}}\nabla u_{2}-u_{2}\nabla \overline{u_{1}}\big)\,dx\,dt
=\int_{Q}A(x)\cdot(\rho_{2}+\overline{\rho_{1}})e^{-ix\cdot\xi}e^{i(\phi_{2}-\overline{\phi_{1}})(x)}\,dx\,dt+I(\xi,\sigma),$$
where the remainder term $I(\xi,\sigma)$ is bounded uniformly with respect to
$\sigma$ and $\xi$.
\end{Lemm}
\begin{proof}{}
In light of (\ref{Eq10}), we have by direct computation
$$\begin{array}{lll}
\overline{u_{1}}\nabla u_{2}-u_{2}\nabla\overline{u_{1}}&=&e^{-ix\cdot(\rho_{2}-\overline{\rho_{1}})}\Big[-i \rho_{2}e^{i(\phi_{2}-\overline{\phi_{1}})}
-i\overline{\rho_{1}}e^{i(\phi_{2}-\overline{\phi_{1}})}\\
&&+i\nabla \phi_{2} e^{i(\phi_{2}-\overline{\phi_{1}})}+i\nabla \overline{\phi_{1}}e^{i(\phi_{2}-\overline{\phi_{1}})}-i\rho_{2}w_{2}e^{-i
\overline{\phi_{1}}}-i\overline{\rho_{1}}\overline{w_{1}}e^{i\phi_{2}}\\
&&+\nabla w_{2}e^{-i\overline{\phi_{1}}}-\nabla \overline{w_{1}}e^{i\phi_{2}}-i\rho_{2}\overline{w_{1}}e^{i\phi_{2}}-i\overline{\rho_{1}}
w_{2}e^{-i\overline{\phi_{1}}}+i\nabla \phi_{2} \overline{w_{1}}e^{i\phi_{2}}\\
&&+iw_{2}\nabla \overline{\phi_{1}}e^{-i\overline{\phi_{1}}}-i\rho_{2}w_{2}\overline{w_{1}}-i\overline{\rho_{1}}\overline{w_{1}}w_{2}
+\nabla w_{2}\overline{w_{1}}-\nabla \overline{w_{1}} w_{2}\Big].\\
\end{array}$$
Therefore, as we have $\rho_{2}-\overline{\rho_{1}}=\xi$, this yields that
$$\begin{array}{lll}
\displaystyle\int_{Q}iA(x)\cdot\para{\overline{u_{1}}\nabla u_{2}-u_{2}\nabla\overline{u_{1}}}dx\,dt&=&
\displaystyle\int_{Q}
A(x)\cdot(\rho_{2}+\overline{\rho_{1}})e^{-ix.\xi}e^{i(\phi_{2}-\overline{\phi_{1}})}\,dx\,dt+I(\xi,\sigma),
\end{array}$$
where
$I(\xi,\sigma)=\displaystyle\int_{Q}iA(x)\cdot\Big(\psi_{1}(x,t)+\psi_{2}(x,t)\Big)\,dx\,dt,$
and $\psi_{1}$, $\psi_{2}$ stand for
$$\psi_{1}=-i(\rho_{2}+\overline{\rho_{1}})\left( w_{2}e^{-i\overline{\phi_{1}}}+\overline{w_{1}}e^{i\phi_{2}}+w_{2}\overline{w_{1}}\right),$$
$$\begin{array}{lll}
\psi_{2}&=&e^{i\phi_{2}}\big(i\nabla\phi_{2}\overline{w_{1}}-\nabla \overline{w_{1}}\big)+e^{-i\overline{\phi_{1}}}\big(\nabla w_{2}
+i\nabla\overline{\phi_{1}}w_{2}\big)
+\nabla w_{2}\overline{w_{1}}-\nabla \overline{w_{1}}w_{2}
+i\big(\nabla\phi_{2}+\nabla\overline{\phi_{1}}\big)e^{i(\phi_{2}-\overline{\phi_{1}})}.
\end{array}$$
In view of bounding $|I(\xi,\sigma)|$ uniformly with respect to $\xi$ and
$\sigma$, we use the fact that $A$ is extended by zero outside $\Omega$ and use Lemma \ref{Lm2.1}
to get
$$\|\phi_{j}\|_{L^{\infty}(\Omega)}\leq C \|A_{j}\|_{L^{\infty}(\mathbb{R}^{n})}\leq C \varepsilon,\,\,\,\,\,\,\,j=1,\,2.$$
Recalling (\ref{eq}) and (\ref{Equation 2.17}) and applying Lemma \ref{Lm2.1}, we
get
\begin{equation}\label{mag1}\|\psi_{j}\|_{L^{1}(Q)}\leq C\para{C +\frac{1}{\sigma}}\leq
C,\,\,\,\,\,\,\,j=1,\,2,
\end{equation}
which yields the desired result.
\end{proof}
With the help of the above lemma we may now derive the following
orthogonality identity for the magnetic potential.
\begin{Lemm}\label{Lm4.3}
Let $\xi\in\mathbb{R}^{n}$ and $\sigma>\max(\sigma_{0},|\xi|/{2})$. Then, we have the following
identity
$$\begin{array}{lll}
\displaystyle\int_{Q}A(x)\!\!\!\!\!&\cdot&\!\!\!\! (\rho_{2}+\overline{\rho_{1}})e^{-ix\cdot\xi}e^{i(\phi_{2}-\overline{\phi_{1}})}\,dx\,dt
=2\sigma T\displaystyle\int_{\Omega}\overline{\omega}\cdot A(x) e^{-ix\cdot\xi}\,dx+J(\xi,\sigma),
\end{array}$$
with $|J(\xi,\sigma)|\leq C|\xi|$, where $C$ is independent of $\sigma$ and
$\xi$.
\end{Lemm}
\begin{proof}{}
In view of (\ref{Eq8}) and (\ref{Eq9}), we have
\begin{eqnarray}\label{mag3}
\displaystyle\int_{Q}A(x)\!\!\!\!\!&\cdot&\!\!\!\!\!(\rho_{2}+\overline{\rho_{1}})
e^{-ix\cdot\xi}e^{i (\phi_{2}-\overline{\phi_{1}})} \,dx\,dt
=2\sigma \displaystyle\int_{Q}\overline{\omega}\cdot A(x) e^{-ix\cdot\xi}e^{i(\phi_{2}-\overline{\phi_{1}})}\,dx\,dt\cr
&&-2\sigma \displaystyle\para{1-\displaystyle\sqrt{1-\displaystyle|\xi|^{2}/4\sigma^{2}}}
\displaystyle\int_{Q}\omega_{\Re}\cdot A(x)e^{-ix\cdot\xi}e^{i(\phi_{2}-\overline{\phi_{1}})}\,dx\,dt,
\end{eqnarray}
where we recall that
$$\overline{\phi_{1}}=N^{-1}_{\overline{\omega_{1}^{*}}}(-\overline{\omega_{1}^{*}}\cdot A_{1}),\,\,\,\,\,\,\,\,\,\,\,\phi_{2}=N^{-1}_{\omega^{*}_{2}}(-\omega^{*}_{2}\cdot
A_{2}).$$ Set
$\overline{\Psi_{1}}=N^{-1}_{\overline{\omega}}(-\overline{\omega}\cdot
A_{1})$ and $\Psi_{2}=N^{-1}_{\overline{\omega}}(-\overline{\omega}\cdot
A_{2})$ in such a way that we have
$$\Psi_{2}-\overline{\Psi_{1}}=N^{-1}_{\overline{\omega}}(-(-\overline{\omega}\cdot A))=-N_{\overline{\omega}}^{-1}(-\overline{\omega}\cdot A).$$
Then, we infer from (\ref{mag3}) that
$$\begin{array}{lll}
\displaystyle\int_{Q}A(x)\cdot(\rho_{2}+\overline{\rho_{1}})e^{-ix\cdot
\xi}e^{i(\phi_{2}-\overline{\phi_{1}})}dxdt&=&
J_{1}(\xi,\sigma)+J_{2}(\xi,\sigma)+J_{3}(\xi,\sigma),
\end{array}$$
where we have set
$$J_{1}(\xi,\sigma)=2\sigma\displaystyle\int_{Q}\overline{\omega}\cdot A(x)e^{-ix\cdot
\xi}e^{i(\Psi_{2}-\overline{\Psi_{1}})}\,dx\,dt,$$
$$J_{2}(\xi,\sigma)=-2\sigma \displaystyle\int_{Q}\overline{\omega}\cdot A(x)e^{-ix\cdot \xi}\displaystyle\para{e^{i(\Psi_{2}-\overline{\Psi_{1}})}-
e^{i(\phi_{2}-\overline{\phi_{1}})}}\,dx\,dt,$$
and
$$J_{3}(\xi,\sigma)=-2\sigma \displaystyle\para{1-\displaystyle\sqrt{1-\displaystyle|\xi|^{2}/4\sigma^{2}}}\displaystyle\int_{Q}\omega_{\Re}\cdot A(x)e^{-ix\cdot \xi}e^{i(\phi_{2}-\overline{\phi_{1}})}\,dx\,dt.$$
Using Lemma \ref{Lm2.2}, one can see that
$$\begin{array}{lll}
J_{1}(\xi,\sigma)&=
&2\sigma T\displaystyle\int_{\Omega}\overline{\omega}\cdot A(x)e^{iN^{-1}_{\overline{\omega}}(-\overline{\omega}\cdot (-A))} e^{-ix\cdot \xi}\,dx\\
&=&2\sigma T\, \displaystyle\int_{\Omega}\overline{\omega}\cdot A(x) e^{-ix\cdot \xi}\,dx.
\end{array}$$
Now it remains to upper bound the absolute value of
$J:=J_{2}+J_{3}$. We start by inserting $e^{i(\Psi_{2}-\overline{\phi_{1}})}$
into $J_{2}(\xi,\sigma)$, getting
$$\begin{array}{lll}
J_{2}(\xi,\sigma)
&=&-2\sigma T \displaystyle\int_{\Omega}\overline{\omega}\cdot A(x)e^{-ix\cdot \xi}\displaystyle\para{e^{i\Psi_{2}}\displaystyle\para{e^{-i\overline{\Psi_{1}}}
-e^{-i\overline{\phi_{1}}}}
+e^{-i\overline{\phi_{1}}}\displaystyle\para{e^{i\Psi_{2}}-e^{i\phi_{2}}}} \,dx.
\end{array}$$
Further, as $N^{-1}_{\omega}(-\omega\cdot A)$ depends continuously on $\omega$,
according to Lemma $2.4$ in \cite{[L]}, we get for all $|\xi|\leq2\sigma$
$$|J_{2}(\xi,\sigma)|\leq C_{T}\sigma \para{|\overline{\omega}-\overline{\omega_{1}^{*}}|+|\overline{\omega}-\omega_{2}^{*}
|}.$$
Hence, as $1-\sqrt{1-|\xi|^{2}/4\sigma^{2}}\leq |\xi|^{2}/4\sigma^{2}$
for all $|\xi|\leq2\sigma$, we deduce from (\ref{Eq8}), (\ref{Eq9}) and the above inequality that
$$|J_{2}(\xi,\sigma)|\leq C_{T}\para{\sigma \frac{|\xi|^{2}}{4\sigma^{2}}+|\xi|}\leq C_{T}\,|\xi|.$$
Arguing in the same way, we find that $ |J_{3}(\xi,\sigma)|\leq C_{T} |\xi|,
$ for some positive constant $C_{T}$ which is independent of $\xi$ and
$\sigma$.
\end{proof}
\subsection{Estimating the Fourier transform of the magnetic field}
We aim to relate the Fourier transform of the magnetic field
$d{\alpha_{A_{1}}}-d{\alpha_{A_{2}}}$ to the measurement
$\Lambda_{A_{1},q_{1}}-\Lambda_{A_{2},q_{2}}$. To this end, we introduce the
following notation: we put
$${a}_{k}(x)=(A_{1}-A_{2})(x)\cdot e_{k}=A(x)\cdot e_{k},$$
where $(e_{k})_{k}$ is the canonical basis of $\mathbb{R}^{n}$, and
\begin{equation}\label{def}
\sigma_{j,k}(x)=\frac{\partial {a}_{k}}{\partial x_{j}}(x)-\frac{\partial {a_{j}}}{\partial
x_{k}}(x), \,\,\,\,\,\,j,\,k=1,...,n.
\end{equation}
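Since the $a_{j}$ are extended by zero outside $\Omega$, an integration by parts gives, with the convention $\widehat{f}(\xi)=\int_{\mathbb{R}^{n}}f(x)e^{-ix\cdot\xi}\,dx$ assumed in this sketch,
$$\widehat{\sigma}_{j,k}(\xi)=i\para{\xi_{j}\,\widehat{a}_{k}(\xi)-\xi_{k}\,\widehat{a}_{j}(\xi)};$$
this identity is behind the last step in the proof of Lemma \ref{Lemme3.3} below.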
We recall that the Green formula for the magnetic Laplacian
\begin{equation}\label{green}
\int_{\Omega} (\Delta_{A}u \overline{v}-u\overline{\Delta_{A} v})
\,dx=-\int_{\Gamma} \Big( (\partial_{\nu}+i\nu.A
)u\overline{v}-u\overline{(\partial_{\nu}+iA.\nu)v}\Big)\,d\sigma_{x},
\end{equation}
holds for any $u,\,v\in H^{1}(\Omega)$ such that $\Delta u,\,\Delta v\in
L^{2}(\Omega)$. Here $d\sigma_{x}$ is the Euclidean surface measure on
$\Gamma$. We estimate the Fourier transform of $\sigma_{j,k}$ as
follows.
\begin{Lemm}\label{Lemme3.3}
Let $\xi\in\mathbb{R}^{n}$ and $\sigma>\max(\sigma_{0},|\xi|/2)$, where
$\sigma_{0}$ is as in Lemma \ref{Prop3.1}. Then we have
$$<\xi>^{-1}|\widehat{\sigma}_{j,k}(\xi)|\leq C\para{e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+\frac{1}{\sigma}+\frac{|\xi|}{\sigma} },$$
where $C$ is independent of $\xi$ and $\sigma$.
\end{Lemm}
\begin{proof}{} First, for $\sigma>\sigma_{0}$, Lemma \ref{Prop3.1} guarantees the
existence of a geometrical optics solution $u_{2}$, of the form
$$u_{2}(x,t)=e^{-ix.\rho_{2}}(e^{i\phi_{2}(x)}+w_{2}(x,t))$$
to the magnetic Schr\"odinger equation
\begin{equation}
\left\{
\begin{array}{ll}
(i\partial_{t}+\Delta_{A_{2}}+q_{2}(x,t))u_{2}(x,t)=0, & \mbox{in}\,\,Q, \\
u_{2}(x,0)=u_{0} , & \mbox{in}\,\,\Omega, \\
\end{array}
\right.
\end{equation}
where $\rho_{2}$ is given by (\ref{Eq9}).
Let us denote by $f_{\sigma}:=u_{2|\Sigma}$. We consider a solution $v$ to
the following non homogeneous boundary value problem
\begin{equation}\label{u}
\left\{
\begin{array}{ll}
(i\partial_{t}+\Delta_{A_{1}}+q_{1}(x,t))v=0, & \mbox{in}\,\,Q, \\
v(.,0)=u_{2}(.,0)=u_{0}, & \mbox{in}\,\,\Omega, \\
v=u_{2}=f_{\sigma}, & \mbox{on}\,\Sigma.
\end{array}
\right.
\end{equation}
Then, $u=v-u_{2}$ is a solution to the following homogenous boundary value
problem for the magnetic Schr\"odinger equation
$$
\left\{
\begin{array}{ll}
(i\partial_{t}+\Delta_{A_{1}}+q_{1}(x,t))u=2iA\cdot \nabla u_{2}+h(x,t) u_{2}, & \mbox{in}\,Q, \\
u(x,0)=0, & \hbox{in}\, \Omega,\\
u(x,t)=0, & \mbox{on}\,\Sigma,
\end{array}
\right.
$$
where
$$A=A_{1}-A_{2},\quad q=q_{1}-q_{2}\quad \mbox{and}\quad h=i \,\mbox{div}
A-(|A_{1}|^{2}-|A_{2}|^{2})+q.$$ On the other hand, with reference to Lemma \ref{Prop3.1} we consider a solution
$u_{1}$ to the magnetic Shr\"odinger equation (\ref{Eq6}), associated with
the potentials $A_{1}$ and $q_{1}$, of the form
$$u_{1}(x,t)=e^{-ix.\rho_{1}}(e^{i\phi_{1}(x)}+w_{1}(x,t)),$$
where $\rho_{1}$ is given by (\ref{Eq8}).
Integrating by parts in the following integral, and using the Green Formula (\ref{green}), we get
\begin{eqnarray}\label{ap} \displaystyle\int_{Q}(i\partial_{t}+\Delta_{
A_{1}}+q_{1})u\overline{u_{1}}dxdt\!\!\!&=&\!\!\!\!\!\!\displaystyle\int_{Q}2iA\cdot\nabla
u_{2}\overline{u_{1}}dxdt
+\displaystyle\int_{Q}\!\!\Big(i\mbox{div}A-(|A_{1}|^{2}-|A_{2}|^{2})+q\Big)u_{2}\overline{u_{1}} dx dt\cr
&=&i\displaystyle\int_{\Omega} u(.,T)\overline{u_{1}}(.,T)\,dx-\displaystyle\int_{\Sigma}(\partial_{\nu}+iA_{1}.\nu)u \overline{u_{1}}\,d\sigma_{x}\,dt.
\end{eqnarray}
This entails that
$$\begin{array}{lll}
\displaystyle\int_{Q}2iA\cdot \nabla u_{2} \overline{u_{1}}dx\,dt&=&-i\displaystyle\int_{\Omega} (\Lambda_{A_{2},q_{2}}^{1}-\Lambda_{A_{1},q_{1}}^{1})(g)\overline{u_{1}}(.,T)\,dx
+\displaystyle\int_{\Sigma}(\Lambda_{A_{2},q_{2}}^{2}
-\Lambda_{A_{1},q_{1}}^{2})(g) \overline{u_{1}}\,d\sigma_{x}\,dt\\
&&-\displaystyle\int_{Q}\!\!\Big(i\mbox{div}A-(|A_{1}|^{2}-|A_{2}|^{2})+q\Big)u_{2}\overline{u_{1}} dx dt,
\end{array}$$
where $g=(u_{2|t=0},u_{2|\Sigma})$.
Upon applying the Stokes formula and using the fact that $A_{|\Gamma}=0$,
we get
\begin{eqnarray}\label{Eq11}
\displaystyle\int_{Q}\!\! i A\!\cdot\!\big( \overline{u_{1}}\nabla u_{2}-u_{2}\nabla \overline{u_{1}}\big)dxdt
\!\!\!\!\!&=&\!\!\!\!\!-i\displaystyle\int_{\Omega}\!\! \displaystyle\para{\Lambda_{A_{2},q_{2}}^{1}\!-\!
\Lambda_{A_{1},q_{1}}^{1}}(g)\,\overline{u_{1}}(.,T)\,dx+\!\displaystyle\int_{\Sigma}\!\!
\displaystyle\para{\Lambda_{A_{2},q_{2}}^{2}\!-\!\Lambda_{A_{1},q_{1}}^{2}}(g)\,\overline{u_{1}}d\sigma_{x}dt\cr
&&+\displaystyle\int_{Q}\Big(|A_{1}|^{2}-|A_{2}|^{2}+q\Big)u_{2}\overline{u_{1}}\,dx\,dt.
\end{eqnarray}
This, Lemma \ref{Lm4.2} and Lemma \ref{Lm4.3}, yield
$$\Big|\int_{\Omega}\overline{\omega}.A(x)e^{-ix.\xi}\,dx\Big|\leq \frac{C_{T}}{\sigma}\Big( \|\Lambda_{A_{2},q_{2}}-
\Lambda_{A_{1},q_{1}}\|\,\|g\|_{H^{2}(\Omega)\times
H^{2,1}(\Sigma)}\|\phi\|_{L^{2}(\Sigma)\times L^{2}(\Omega)}+C+|\xi|
\Big),$$ where $\phi=(\overline{u_{1}}_{|\Sigma},\overline{u_{1}}_{t=T})$.
Here we used the fact that $ \|u_{2}\overline{u_{1}}\|_{L^{1}(Q)}\leq C_{T} $,
for $\sigma$ sufficiently large.
Hence, bearing in mind that
$$\|g\|_{H^{2}(\Omega)\times H^{2,1}(\Sigma)}\leq C e^{C\sigma},\,\,\,\,\,\,\,\mbox{and}\,\,\,\,\,\|\phi\|_{L^{2}(\Sigma)\times L^{2}(\Omega) }\leq C e^{C\sigma},$$
we get for $\sigma>|\xi|/2$,
\begin{equation}\label{aj4}
\Big|\int_{\Omega}\overline{\omega}\cdot A(x) e^{-ix\cdot\xi}\,dx\Big|\leq C\para{e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+\frac{1}{\sigma}
+\frac{|\xi|}{\sigma}}.
\end{equation}
Arguing as in the derivation of (\ref{aj4}), we prove by replacing
$\overline{\omega}$ by $-\omega$, that
\begin{equation}\label{aj5}
\Big|\int_{\Omega}-\omega\cdot A(x)e^{-ix\cdot\xi}\,dx\Big|\leq C \para{e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+\frac{1}{\sigma}+\frac{|\xi|}{\sigma}}.
\end{equation}
Thus, choosing
$\omega_{\Im}=\frac{\xi_{j}e_{k}-\xi_{k}e_{j}}{|\xi_{j}e_{k}-\xi_{k}e_{j}|},$
multiplying (\ref{aj4}) and (\ref{aj5}) by $|\xi_{j}e_{k}-\xi_{k}e_{j}|$, and adding the obtained inequalities together, we find that
$$\Big|\int_{\Omega}e^{-ix\cdot\xi}\para{\xi_{j}{a}_{k}(x)-\xi_{k}{a}_{j}(x)}\,dx\Big|\leq C\,|\xi_{j}e_{k}-\xi_{k}e_{j}|\para{e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+\frac{1}{\sigma}+\frac{|\xi|}{\sigma} }.$$
From this and (\ref{def}) we deduce that
$$|\widehat{\sigma}_{j,k}(\xi)|\leq C <\xi>\para{e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+\frac{1}{\sigma}+\frac{|\xi|}{\sigma} },\,\,\,\,\,\,\,j,\,k=1,\dots,n.$$ This ends the proof.
\end{proof}
\subsection{Stability estimate}
Armed with Lemma \ref{Lemme3.3}, we are now in position to complete the proof
of the stability estimate for the magnetic field. To do so, we first need to
bound the $H^{-1}(\mathbb{R}^{n})$ norm of
$d\alpha_{A_{1}}-d\alpha_{A_{2}}$. In light of the above reasoning, this can be achieved by taking
$\sigma>R>0$ and decomposing the $H^{-1}(\mathbb{R}^{n})$ norm of $\sigma_{j,k}$ as
$$
\|\sigma_{j,k}\|^{2}_{H^{-1}(\mathbb{R}^{n})}=\displaystyle\int_{|\xi|\leq
R}|\widehat{\sigma}_{j,k}(\xi)|^{2}<\xi>^{-2}\,d\xi
+\displaystyle\int_{|\xi|>R}|\widehat{\sigma}_{j,k}(\xi)|^{2}
<\xi>^{-2}\,d\xi.$$
Then, we have
$$\|\sigma_{j,k}\|^{2}_{H^{-1}(\mathbb{R}^{n})}\leq C \Big[R^{n}\|<\xi>^{-1}\widehat{\sigma}_{j,k}\|^{2}_{L^{\infty}(B(0,R))}+\frac{1}{R^{2}}
\|\sigma_{j,k}\|_{L^{2}(\mathbb{R}^{n})}^{2}\Big],$$
which entails that
$$\|\sigma_{j,k}\|^{2}_{H^{-1}(\mathbb{R}^{n})}\leq C\Big[R^{n}\para{e^{C\sigma}\|\Lambda_{A_{2},q_{2}}
-\Lambda_{A_{1},q_{1}}\|^{2}+\frac{1}{\sigma^{2}}+\frac{R^{2}}{\sigma^{2}}}+\frac{1}{R^{2}}\Big],
$$
by Lemma \ref{Lemme3.3}. The next step is to choose $R>0$ in such a way that
$\frac{R^{n+2}}{\sigma^{2}}=\frac{1}{R^{2}}$, that is, $R=\sigma^{\frac{2}{n+4}}$. In this case we get, for
$\sigma>\max(\sigma_{0},|\xi|/2)$, that
\begin{eqnarray}\label{aj6}
\|\sigma_{j,k}\|^{2}_{H^{-1}(\mathbb{R}^{n})}&\leq& C\para{\sigma^{\frac{2n}{n+4}}e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{2}+\sigma^{\frac{-4}{n+4}}}\cr
&\leq& C \para{e^{C_{0}\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{2}+\frac{1}{\sigma^{\mu}}},
\end{eqnarray}
where $\mu\in (0,1)$. Thus, assuming that $\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|\leq c=e^{-C_{0}\max\para{\sigma_{0},|\xi|/2}}$, and taking
$\sigma=\frac{1}{C_{0}}|\log\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\||$ in (\ref{aj6}), we get that
$$\|\sigma_{j,k}\|_{H^{-1}(\mathbb{R}^{n})}\leq C
\para{\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{1/2}+|\log\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\||^{-\mu'}},$$
for some positive $\mu'\in(0,1)$. Since the above estimate remains true when
$\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|\geq c$, as we have
$$\|\sigma_{j,k}\|_{H^{-1}(\mathbb{R}^{n})}\leq \frac{2 M}{c^{1/2}}c^{1/2}\leq \frac{2M}{c^{1/2}}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{1/2}, $$
we have obtained that
$$\begin{array}{lll}
\|d\alpha_{A_{1}}-d{\alpha_{A_{2}}}\|_{H^{-1}(\Omega)}
&\leq& C\para{\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{1/2}+|\log\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\||^{-\mu'}}.
\end{array}$$
In order to complete the proof of the theorem, we consider $\delta>0$ such
that $\alpha:=s-1=\frac{n}{2}+2\delta$ and use the Sobolev embedding theorem to
find
$$\begin{array}{lll}
\|d\alpha_{A_{1}}-d\alpha_{A_{2}}\|_{L^{\infty}(\Omega)}&\leq& C \|d\alpha_{A_{1}}-d\alpha_{A_{2}}\|_{H^{\frac{n}{2}+\delta}(\Omega)}\\
&\leq& C \|d\alpha_{A_{1}}-d\alpha_{A_{2}}\|_{H^{-1}(\Omega)}^{1-\beta}\|d\alpha_{A_{1}}-d\alpha_{A_{2}}\|_{H^{s-1}(\Omega)}^{\beta}\\
&\leq&C\para{\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|^{1/2}+|\log \|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\||^{-\mu'}}^{1-\beta},
\end{array}$$
by interpolating with $\beta\in (0,1)$. This completes the proof of Theorem \ref{Thm1}.
This theorem is a key ingredient in the proof of the result of the next section.
\section{Stability result for the electric potential}\label{Sec4}%
This section contains the proof of Theorem \ref{Thm2}. Using the geometric optics solutions constructed in Section \ref{Sec2}, we will prove, with the aid of the stability estimate obtained for the magnetic field,
that the time-dependent electric potential depends stably on the
Dirichlet-to-Neumann map $\Lambda_{A,q}$.
To do this, we should normally apply the Hodge decomposition to
$A=A_{1}-A_{2}=A'+\nabla\varphi$ and use this estimate
\begin{equation}\label{o}
\|A'\|_{W^{1,p}(\Omega)}\leq C \|\mbox{curl}\,A'\|_{L^{p}(\Omega)},
\end{equation}
which holds for any $p>n$ (see Appendix B).
But in this paper, since $u_{0}$ is not frozen to zero, we do not have
invariance under gauge transformations, so we will further assume that $A$
is divergence free,
in such a way that the estimate (\ref{o}) holds with $A'=A$.
For a fixed $y\in B(0,1)$, we consider solutions $u_{j}$ to the Schr\"odinger
equation of the form (\ref{Eq10}) with $\rho_{j}= \sigma\omega_{j}^{*}+y$, where
$\xi\in \mathbb{R}^{n}$ and $\omega\in\mathbb{S}^{n-1}$ are as in Section \ref{Sec3},
and $\omega_{j}^{*}$, $j=1,2$, are given by (\ref{Eq8}) and (\ref{Eq9}).
In contrast to Section \ref{Sec3}, $y$ is no longer equal to zero, as we need
to estimate the Fourier transform of $q$ with
respect to $x$ and $t$.
\subsection{An identity for the electric potential }
Let us first establish the following identity for the electric
potential.
\begin{Lemm}\label{Lem3.3}
Let $u_{j}$ be the solutions given by (\ref{Eq10}) for $j=1,2$. For all $\sigma\geq
\sigma_{0}$ and $\xi\in\mathbb{R}^{n}$ such that $|\xi|<2\sigma$, we have the
following identity
$$\int_{Q}q(x,t) u_{2}\overline{u_{1}}\,dx\,dt=\int_{Q}q(x,t) e^{-i(2y.\xi t+x.\xi)}\,dx\,dt+P_{1}(\xi,y,\sigma)+P_{2}(\xi,y,\sigma),$$
where $P_{1}(\xi,y,\sigma)$ and $P_{2}(\xi,y,\sigma)$ satisfy the estimates
$$|P_{1}(\xi,y,\sigma)|\leq C\para{\|A\|_{L^{\infty}(\Omega)}+\frac{|\xi|}{\sigma}},\,\,\,\,\,\,\,|P_{2}(\xi,y,\sigma)|\leq \frac{C}{\sigma}\,.$$
Here $\sigma_{0}$ is as in Lemma \ref{Prop3.1} and $C$ is independent of
$\sigma,\,y,$ and $\,\xi$.
\end{Lemm}
\begin{proof}{}
In light of (\ref{Eq8}), (\ref{Eq9}) and (\ref{Eq10}), a direct calculation
gives us
\begin{eqnarray}\label{X}
u_{2}\overline{u_{1}}&=&e^{-i\Big((\rho_{2}.\rho_{2}-\overline{\rho_{1}.\rho_{1}})t+x.(\rho_{2}-\overline{\rho_{1}})\Big)}
\Big(e^{i(\phi_{2}-\overline{\phi_{1}})}+e^{-i\overline{\phi}_{1}}w_{2}+e^{i\phi_{2}}\overline{w_{1}}+w_{2}\overline{w}_{1}\Big)\cr
&=&e^{-i(2y.\xi t+x.\xi)}e^{-i(\overline{\phi_{1}}-\phi_{2})} +e^{-i(2y.\xi t+x.\xi)}\para{e^{-i\overline{\phi_{1}}}w_{2}
+e^{i\phi_{2}}\overline{w_{1}}+w_{2}\overline{w_{1}}},
\end{eqnarray}
which yields
\begin{eqnarray}\label{Eq3.19}
\int_{Q}q(x,t)u_{2}\overline{u_{1}}\,dx\,dt=\int_{Q}q(x,t)e^{-i(2y.\xi t +x.\xi)}\,dx\,dt+P_{1}(\xi,y,\sigma)+P_{2}(\xi,y,\sigma),
\end{eqnarray}
where we have set
$$\begin{array}{lll}P_{1}(\xi,y,\sigma)&=&\displaystyle\int_{Q}q(x,t)e^{-i(2y\cdot\xi
t+x\cdot\xi)}e^{-i\overline{\phi}_{1}}\para{e^{i\phi_{2}}-e^{i\overline{\phi}_{1}}}\,dx\,dt,\\
P_{2}(\xi,y,\sigma)&=&\displaystyle\int_{Q}q(x,t)e^{-i(2y\cdot\xi t+x\cdot\xi)}\para{e^{-i\overline{\phi}_{1}}w_{2}+e^{i\phi_{2}}\overline{w_{1}}+w_{2}\overline{w_{1}}}\,dx\,dt.
\end{array}$$
Recalling that $\phi_{j}=N^{-1}_{\omega_{j}^{*}}(-\omega_{j}^{*}\cdot A_{j})$, for $j=1,\,2$, we deduce from the definition of $P_{1}$ that
$$\begin{array}{lll}
|P_{1}(\xi,y,\sigma)|
&\leq& C\Big(\|e^{i N^{-1}_{\omega_{2}^{*}}(-\omega_{2}^{*}\cdot A_{2})}-e^{i N^{-1}_{\omega_{2}^{*}}(-\omega_{2}^{*}\cdot A_{1})}\|_{L^{\infty}(\Omega)}+
\|e^{i N_{\omega_{2}^{*}}^{-1}(-\omega_{2}^{*}\cdot A_{1})}-e^{iN_{\overline{\omega_{1}}^{*}}^{-1}(-\overline{\omega_{1}}^{*}\cdot A_{1})}\|_{L^{\infty}(\Omega)}\Big),
\end{array}$$
where $C>0$ depends on $T$, $M$, $\Omega$ and $\|A_{1}\|$. Using the
continuity of $N_{\omega}^{-1}(-\omega\cdot A)$ with respect to $\omega$ (see
Lemma 2.4 in \cite{[L]}), we get that
$$\begin{array}{lll}
|P_{1}(\xi,y,\sigma)|&\leq& C\para{ \|N_{\omega_{2}^{*}}^{-1}(-\omega_{2}^{*}\cdot A_{2})-N_{\omega_{2}^{*}}^{-1}(-\omega_{2}^{*}\cdot A_{1})\|_{L^{\infty}(\Omega)}
+|\omega^{*}_{2}-\overline{\omega}_{1}^{*}|}\\
&\leq&C\para{\|A\|_{L^{\infty}(\Omega)}+\displaystyle\frac{|\xi|}{\sigma}}.
\end{array}$$
On the other hand, from the Cauchy-Schwarz
inequality, Lemma \ref{Lm2.1} and (\ref{Equation 2.17}), we get
$$\begin{array}{lll}
|P_{2}(\xi,y,\sigma)|&\leq& C\para{\|w_{2}\|_{L^{2}(Q)}\|e^{-i\overline{\phi_{1}}}\|_{L^{2}(Q)}+\|e^{i\phi_{2}}\|_{L^{2}(Q)}\|\overline{w_{1}}\|_{L^{2}(Q)}
+\|w_{2}\|_{L^{2}(Q)}\|\overline{w_{1}}\|_{L^{2}(Q)}}\\
&\leq& \displaystyle\frac{C}{\sigma}.
\end{array}$$
This completes the proof of Lemma \ref{Lem3.3}.
\end{proof}
\subsection{Estimate of the Fourier transform}
In view of relating the Fourier transform of the electric potential
$q=q_{1}-q_{2}$ to $\Lambda_{A_{1},q_{1}}-\Lambda_{A_{2},q_{2}}$, we first
establish the following auxiliary result.
\begin{Lemm}\label{Lm3.4}
For any $\sigma \geq \sigma_{0}$ and $\xi\in \mathbb{R}^{n}$ such that
$|\xi|<2\sigma$, we have the following estimate
$$
|\widehat{q}(\xi,2y.\xi)|\leq C\Big( e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+e^{C\sigma}\|d{\alpha_{A_{1}}}-d{\alpha_{A_{2}}}\|_{L^{\infty}(\Omega)}+\frac{|\xi|}{\sigma} +\frac{1}{\sigma} \Big),
$$
for some $C$ that is independent of $|\xi|$ and $\sigma.$
\end{Lemm}
\begin{proof}{}
First, for $\sigma>\sigma_{0}$, Lemma \ref{Prop3.1} guarantees the existence
of a geometrical optics solution $u_{2}$ of the form
$$u_{2}(x,t)=e^{-i\para{(\rho_{2}.\rho_{2})t+x.\rho_{2}}}(e^{i\phi_{2}(x)}+w_{2}(x,t)),$$
to the magnetic Schr\"odinger equation
\begin{equation}
\left\{
\begin{array}{ll}
(i\partial_{t}+\Delta_{A_{2}}+q_{2}(x,t))u_{2}(x,t)=0, &\mbox{in}\,Q,\\
u_{2}(x,0)=u_{0}, &\mbox{in}\,\,\Omega,
\end{array}
\right.
\end{equation}
where $\rho_{2}$ is given by (\ref{Eq9}) and $w_{2}(x,t)$ satisfies
\begin{equation}\label{Eq3.28}
\sigma\|w_{2}\|_{H^{2}(0,T;H^{1}(\Omega))}+\|w_{2}\|_{L^{2}(0,T;H^{2}(\Omega))}\leq C.
\end{equation}
Let us denote by $f_{\sigma}:=u_{2|\Sigma}$. We consider a solution $v$ to
the following non homogeneous boundary value problem
\begin{equation}\label{u2}
\left\{
\begin{array}{ll}
(i\partial_{t}+\Delta_{A_{1}}+q_{1}(x,t))v=0, & \mbox{in}\,\,Q, \\
v(.,0)=u_{2}(.,0)=u_{0}, & \mbox{in}\,\,\Omega, \\
v=u_{2}=f_{\sigma}, & \mbox{on}\,\Sigma.
\end{array}
\right.
\end{equation}
Denote $u=v-u_{2}$, then $u$ is a solution to the following homogenous
boundary value problem for the magnetic Schr\"odinger equation
$$
\left\{
\begin{array}{ll}
(i\partial_{t}+\Delta_{A_{1}}+q_{1}(x,t))u=2iA\cdot \nabla u_{2}+h(x,t) u_{2}, & \mbox{in}\,Q, \\
u(x,0)=0, & \hbox{in}\, \Omega,\\
u(x,t)=0, & \mbox{on}\,\Sigma,
\end{array}
\right.
$$
where we recall that
$$A=A_{1}-A_{2},\quad q=q_{1}-q_{2}\quad \mbox{and}\quad h=i \,\mbox{div}
A-(|A_{1}|^{2}-|A_{2}|^{2})+q.$$
On the other hand, we consider a solution $u_{1}$ of the magnetic
Shr\"odinger equation (\ref{Eq6}) corresponding to the potentials $A_{1}$ and
$q_{1}$, of the form
$$u_{1}(x,t)=e^{-i\para{(\rho_{1}.\rho_{1})t+x.\rho_{1}}}(e^{i\phi_{1}(x)}+w_{1}(x,t)),$$
where $\rho_{1}$ is given by (\ref{Eq8}) and $w_{1}(x,t)$ satisfies
\begin{equation}\label{eq3.29}
\sigma\|w_{1}\|_{H^{2}(0,T;H^{1}(\Omega))}+\|w_{1}\|_{L^{2}(0,T;H^{2}(\Omega))}\leq C.
\end{equation}
Integrating by parts and using the Green Formula (\ref{green}), we get
$$\begin{array}{lll}
\displaystyle\int_{Q}q(x,t) u_{2}\overline{u_{1}}\,dx\,dt&=&i\displaystyle\int_{\Omega}(\Lambda_{A_{2},q_{2}}^{1}-\Lambda_{A_{1},q_{1}}^{1})(g)\overline{u_{1}}(.,T)\,dx
-\displaystyle\int_{\Sigma}(\Lambda_{A_{2},q_{2}}^{2}-\Lambda_{A_{1},q_{1}}^{2})(g)\overline{u_{1}}\,d\sigma_{x}\,dt\\
&&+\displaystyle\int_{Q}i A(x)\cdot (\overline{u_{1}}\nabla u_{2}-u_{2}\nabla \overline{u_{1}})\,dx\,dt
-\int_{Q}(|A_{1}|^{2}-|A_{2}|^{2})u_{2}\overline{u_{1}}\,dx\,dt,
\end{array}
$$
where $g=(u_{2|t=0},u_{2|\Sigma}).$ To bring the Fourier transform of
$q$ out of the above identity, we extend $q$ by zero outside the cylindrical domain $Q$, use Lemma
\ref{Lem3.3}, and take into account that
$$\|u_{2}\overline{u_{1}}\|_{L^{1}(Q)}\leq C,\,\,\,\mbox{and}\,\,\,\,\,\|\overline{u_{1}}\nabla u_{2}\|_{L^{1}(Q)}+\|u_{2}\nabla \overline{u_{1}}\|_{L^{1}(Q)}\leq C\sigma,$$
and get
$$\begin{array}{lll}
|\widehat{q}(\xi,2y\cdot\xi)|\leq C \Big(\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|\|g\|_{H^{2}(\Omega)\times H^{2,1}(\Sigma)}
\|\phi\|_{L^{2}(\Sigma)\times L^{2}(\Omega)}+C\sigma\|A\|_{L^{\infty}(\Omega)}+\displaystyle\frac{|\xi|}{\sigma}+\displaystyle\frac{1}{\sigma}\Big),
\end{array}$$
where $\phi=(\overline{u_{1}}_{|\Sigma},\,\overline{u_{1}}_{|t=T})$.
Now, bearing in mind that
$$\|g\|_{H^{2}(\Omega)\times H^{2,1}(\Sigma)}\leq C e^{C\sigma},\,\,\,\,\,\,\,\mbox{and}\,\,\,\,\,\|\phi\|_{L^{2}(\Sigma)\times L^{2}(\Omega)}\leq C e^{C\sigma},$$
we get for all $\xi\in\mathbb{R}^{n}$ such that $|\xi|<2\sigma$ and for all $y\in
B(0,1)$,
\begin{equation}
|\widehat{q}(\xi,2y.\xi)|\leq C\Big( e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|+e^{C\sigma}\|A\|_{L^{\infty}(\Omega)}+\frac{|\xi|}{\sigma} +\frac{1}{\sigma} \Big).
\end{equation}
Finally, using the fact that $\|A\|_{W^{1,\infty}(\Omega)}\leq C
\|\mbox{curl}\,A\|_{L^{\infty}(\Omega)},$ (see Lemma \ref{rot} in Appendix B), we obtain the desired result.
\end{proof}
We are now in position to estimate $\widehat{q}(\xi,\tau)$ for all
$(\xi,\tau)$ in the following set
$$E_{\alpha}=\{(\xi,\tau)\in (\mathbb{R}^{n}\setminus\{0\})\times \mathbb{R},\,\,|\xi|<2\alpha,\,\,\,|\tau|<2|\xi|\},$$
for any fixed $0<\alpha<\sigma$.
\begin{Lemm}\label{Lm3.5}
Suppose that the conditions of Lemma \ref{Lm3.4} are satisfied. Then we have
for all $(\xi,\tau)\in E_{\alpha}$,
\begin{equation}\label{ball}
|\widehat{q}(\xi,\tau)|\leq C\Big(
e^{C\sigma}\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|
+e^{C\sigma}\|d{\alpha_{A_{1}}}-d{\alpha_{A_{2}}}\|_{L^{\infty}(\Omega)}+\frac{\alpha}{\sigma}
+\frac{1}{\sigma} \Big).
\end{equation}
Here $C$ is independent of $|\xi|$ and $\sigma$.
\end{Lemm}
\begin{proof}{}
Fix $(\xi,\tau)\in E_{\alpha}$ and set $y=\frac{\tau}{2|\xi|^{2}}\,
\xi$, in such a way that $y\in B(0,1)$ and $2y\cdot\xi=\tau$. Since
$\alpha<\sigma$ we have $|\xi|<2\alpha<2\sigma$. Hence,
Lemma \ref{Lm3.4} yields the desired result.
\end{proof}
\subsection{Stability estimate}
In order to complete the proof of the stability estimate for
the electric potential, we use an argument for analytic functions
proved in \cite{[new]} (see also \cite{[A],[V]}). For $\gamma\in
\mathbb{N}^{n+1}$, we put $|\gamma|=\gamma_{1}+...+\gamma_{n+1}.$ We have the
following statement, which provides conditional stability for analytic continuation.
\begin{Lemm}\label{Lm3.6}
Let $O$ be a non empty open set of $B(0,1)$ and let $F$ be an analytic
function in $B(0,2)$, obeying
$$\|\partial^{\gamma}F\|_{L^{\infty}(B(0,2))}\leq \frac{M\, |\gamma|!}{\eta^{|\gamma|}},\,\,\,\,\,\,\forall \gamma\in \mathbb{N}^{n+1},$$
for some $M>0$ and $\eta>0$. Then we have
$$\|F\|_{L^{\infty}(B(0,1))}\leq (2M)^{1-\mu}\|F\|^{\mu}_{L^{\infty}(O)},$$
where $\mu\in(0,1)$ depends on $n$, $\eta$ and $|O|$.
\end{Lemm}
We refer to Lavrent'ev \cite{[Lav]} for classical results of this type. For
fixed $0<\alpha<\sigma$, let us set
$$F_{\alpha}(\xi,\tau)=\widehat{q}(\alpha(\xi,\tau)),\,\,\,\,\,\,\,\,(\xi,\tau)\in\mathbb{R}^{n+1}.$$
It is easily seen that $F_{\alpha}$ is analytic and that
$$\begin{array}{lll}
|\partial^{\gamma}F_{\alpha}(\xi,\tau)|=|\partial^{\gamma}\widehat{q}(\alpha(\xi,\tau))|&=&\Big|\partial^{\gamma}\displaystyle\int_{\mathbb{R}^{n+1}}q(x,t) e^{-i\alpha (x,t)\cdot(\xi,\tau)}\,dx\,dt\Big|\\
&=&\Big| \displaystyle\int_{\mathbb{R}^{n+1}}q(x,t)(-i)^{|\gamma|}\alpha^{|\gamma|}(x,t)^{\gamma}e^{-i\alpha (x,t).(\xi,\tau)}\,dx\,dt\Big|.
\end{array}$$
Hence one gets
$$ |\partial^{\gamma}F_{\alpha}(\xi,\tau)|\leq \displaystyle\int_{\mathbb{R}^{n+1}}|q(x,t)|\,\alpha^{|\gamma|}(|x|^{2}+t^{2})^{\frac{|\gamma|}{2}}\,dx\,dt\leq \|q\|_{L^{1}(Q)}\,\alpha^{|\gamma|}(2T^{2})^{\frac{|\gamma|}{2}}\leq C\frac{|\gamma|!}{(\sqrt{2}\,T)^{-|\gamma|}}e^{\alpha},$$
where we used $\alpha^{|\gamma|}\leq |\gamma|!\,e^{\alpha}$.
Applying Lemma \ref{Lm3.6} on the set $O=E_{1}\cap B(0,1)$ with $M=C
e^{\alpha}$ and $\eta=(\sqrt{2}\,T)^{-1}$, we may find a constant $\mu\in (0,1)$ such that
we have
$$|F_{\alpha}(\xi,\tau)|=|\widehat{q}(\alpha(\xi,\tau))|\leq Ce^{\alpha(1-\mu)}\|F_{\alpha}\|^{\mu}_{L^{\infty}(O)},\,\,\,\,\,\,(\xi,\tau)\in B(0,1).$$
Now the idea is to estimate the Fourier transform of $q$ in a
suitable ball. Bearing in mind that $\alpha E_{1}=E_{\alpha}$, we have for
all $(\xi,\tau)\in B(0,\alpha)$,
\begin{eqnarray}\label{3.25}
|\widehat{q}(\xi,\tau)|=|F_{\alpha}(\alpha^{-1}(\xi,\tau))|&\leq& Ce^{\alpha(1-\mu)}\|F_{\alpha}\|^{\mu}_{L^{\infty}(O)}\cr
&\leq& Ce^{\alpha(1-\mu)}\|\widehat{q}\|^{\mu}_{L^{\infty}(B(0,\alpha)\cap E_{\alpha})}\cr
&\leq& Ce^{\alpha (1-\mu)}\|\widehat{q}\|^{\mu}_{L^{\infty}(E_{\alpha})}.
\end{eqnarray}
The next step of the proof is to get an estimate linking the
coefficient $q$ to the measurement
$\Lambda_{A_{1},q_{1}}-\Lambda_{A_{2},q_{2}}$. To do that we first
decompose the $H^{-1}(\mathbb{R}^{n+1})$ norm of $q$ as follows
$$\begin{array}{lll}
\|q\|_{H^{-1}(\mathbb{R}^{n+1})}^{\frac{2}{\mu}}\!\!&=&\!\!\Big( \displaystyle\int_{|(\xi,\tau)|<\alpha}\!\!\!\!\!\!<(\xi,\tau)>^{-2}|\widehat{q}(\xi,\tau)|^{2}\,d\xi \,d\tau
+\displaystyle\int_{|(\xi,\tau)|\geq \alpha}\!\!\!\! \!\! <(\xi,\tau)>^{-2}|\widehat{q}(\xi,\tau)|^{2}\,d\tau\,d\xi\Big)^{\frac{1}{\mu}}\\
&\leq&C\para{\alpha^{n+1}\|\widehat{q}\|^{2}_{L^{\infty}(B(0,\alpha))}+\alpha^{-2}\|q\|_{L^{2}(\mathbb{R}^{n+1})}^{2}}^{\frac{1}{\mu}}.
\end{array}$$
It follows from (\ref{3.25}) and Lemma \ref{Lm3.5}, that
\begin{equation}
\|q\|_{H^{-1}(\mathbb{R}^{n+1})}^{\frac{2}{\mu}}\leq C\Big[\alpha^{\frac{n+1}{\mu}}e^{\frac{2\alpha (1-\mu)}{\mu}}\Big(e^{C\sigma}\eta^{2}+
e^{C\sigma}\|d\alpha_{A_{1}}-d\alpha_{A_{2}}\|^{2}_{L^{\infty}(\Omega)}+\frac{\alpha^{2}}{\sigma^{2}}+\frac{1}{\sigma^{2}}
\Big)+\frac{1}{\alpha^{\frac{2}{\mu}}} \Big],
\end{equation}
where we have set $\eta=\|\Lambda_{A_{2},q_{2}}-\Lambda_{A_{1},q_{1}}\|$. In
light of Theorem \ref{Thm1}, one gets
\begin{equation}\label{eq4.47}\|q\|_{H^{-1}(\mathbb{R}^{n+1})}^{\frac{2}{\mu}}\leq C\Big[
\alpha^{\frac{n+1}{\mu}}
e^{\frac{2\alpha(1-\mu)}{\mu}}\Big(e^{C\sigma}\eta^{2}+e^{C\sigma}\eta^{s}+e^{C\sigma}
|\log \eta|^{-2\mu
s}+\frac{\alpha^{2}}{\sigma^{2}}+\frac{1}{\sigma^{2}}\Big)+\frac{1}{\alpha^{\frac{2}{\mu}}}
\Big].
\end{equation}
The above statements are valid provided $\sigma$ is sufficiently large. Then, for $\alpha$ large enough, we choose
$\sigma=\alpha^{\frac{2\mu+n+3}{2\mu}}e^{\frac{\alpha(1-\mu)}{\mu}},$ and
hence $\alpha^{\frac{2\mu+n+1}{\mu}}e^{\frac{2\alpha
(1-\mu)}{\mu}}\sigma^{-2}=\alpha^{\frac{-2}{\mu}}$, so the estimate
(\ref{eq4.47}) yields
\begin{equation}\label{eq4.48}\|q\|_{H^{-1}(\mathbb{R}^{n+1})}^{\frac{2}{\mu}}\leq C
\Big[ e^{Ce^{N\alpha}}(\eta^{2}+\eta^{s}+|\log \eta|^{-2\mu
s})+\alpha^{\frac{-2}{\mu}}\Big],
\end{equation} where $N$ depends on $\mu$ and $n$. Thus, if
$\eta\in (0,1)$, we have
\begin{equation}\label{eq4.49}\|q\|_{H^{-1}(\mathbb{R}^{n+1})}^{\frac{2}{\mu}}\leq
C\Big(e^{Ce^{N\alpha}}|\log \eta |^{-2\mu s} +\alpha^{\frac{-2}{\mu}}
\Big).
\end{equation}
Finally, if $\eta$ is small enough, taking
$\alpha=\frac{1}{N}\log \Big(\log|\log \eta|^{\frac{\mu s}{C}}\Big),$
we get from (\ref{eq4.49}) that
$$\|q\|_{H^{-1}(\mathbb{R}^{n+1})}^{\frac{2}{\mu}}\leq
C\Big[ |\log\eta|^{-\mu s}+\Big[ \log\para{\log|\log\eta|^{\frac{\mu s}{C}}
}\Big]^{-\frac{2}{\mu}} \Big].$$
}\Big]^{-\frac{2}{\mu}} \Big].$$
This completes the proof of
Theorem \ref{Thm2}.
\section*{Introduction}
Thanks to a result by J. Stelzig (\cite[Theorem 3]{Ste18}) we obtain the explicit algebraic structures of the complexes of differential forms of the Iwasawa manifold and some of its small deformations. We introduce these examples with the aim of studying some properties of the Fr\"olicher spectral sequence; in particular, for each deformation analysed, we compute the explicit dimensions of the components of the successive pages. We observe that:
\begin{enumerate}[(i)]
\item the Fr\"olicher spectral sequence of the Iwasawa manifold degenerates at the second page, but there are some deformations for which it degenerates at the first page;
\item for some deformations the dimensions of the components of the second page of the Fr\"olicher spectral sequence can be in general higher or lower than those of the corresponding components for the Iwasawa manifold;
\item for each page of the Fr\"olicher spectral sequence and for every $p,q\in\mathbb{Z}$ the component of type $(p,q)$ has the same dimension as the component of type $(3-p,3-q)$.
\end{enumerate}
Fact (i) was already known, by considering the Hodge numbers of the Iwasawa manifold and its small deformations determined by I. Nakamura in \cite[p. 96]{Nak75}, which show that degeneration at the second page is not a stable property under small deformations. Fact (ii) shows instead that the dimensions of the various components vary, in general, neither upper nor lower semi-continuously under small deformations.
These two considerations were already provided by M. Maschio in \cite{Mas18}, where he considered the Nakamura manifold, which is an example of a solvmanifold, and a family of its small deformations. In that case the behaviour of the Fr\"olicher spectral sequence is different; in fact, it degenerates at the second page for the Nakamura manifold, while it degenerates at higher pages for all the other deformations.
Finally, fact (iii) is a direct consequence of a generalization of the Serre duality, recently proved by A. Milojevic in \cite{Mil19}.
\section{Preliminaries}
A double complex over a field $K$ is a triad $(A,\partial_1,\partial_2)$, where $A=\bigoplus_{p,q\in\mathbb{Z}}A^{p,q}$ is a bigraded $K$-vector space and $\partial_1$, $\partial_2$ are two endomorphisms of bidegree $(1,0)$ and $(0,1)$, i.e. such that for each $p,q\in\mathbb{Z}$
\[
\partial_1(A^{p,q})\subseteq A^{p+1,q},\quad \partial_2(A^{p,q})\subseteq A^{p,q+1},
\]
satisfying the \emph{boundary conditions} $\partial_1^2=0$, $\partial_2^2=0$ and
the anti-commutativity property $\partial_1\partial_2+\partial_2\partial_1=0$.
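These two conditions say precisely that the total differential $d\coloneqq\partial_1+\partial_2$ squares to zero on the associated total complex $\mathrm{Tot}^k(A)\coloneqq\bigoplus_{p+q=k}A^{p,q}$; indeed,
\[
d^2=\partial_1^2+(\partial_1\partial_2+\partial_2\partial_1)+\partial_2^2=0.
\]
From now on, by the term \emph{complex} we will refer to a double complex over a field $K$.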
A result by J. Stelzig (\cite[Theorem 3]{Ste18}) guarantees that every bounded complex can be decomposed, in an essentially unique way, as a direct sum of \emph{indecomposable} complexes, i.e. those complexes that cannot be written as the direct sum of two nontrivial complexes. This class is composed of \emph{squares}, defined as those complexes whose representation is of the type
\[
\xymatrix{
A^{p,q+1} \ar[r]^{\partial_1} & A^{p+1,q+1} \\
A^{p,q} \ar[u]^{\partial_2} \ar[r]^{\partial_1} & A^{p+1,q} \ar[u]^{\partial_2}
},
\]
and \emph{zigzags}, given by
\begin{gather*}
\xymatrix{
A^{p,q}
},\quad
\xymatrix{
A^{p,q} \ar[r]^{\partial_1} & A^{p+1,q}
},\quad
\xymatrix{
A^{p,q+1} \\
A^{p,q} \ar[u]^{\partial_2}
},\quad
\xymatrix{
A^{p,q+1} & \\
A^{p,q} \ar[u]^{\partial_2} \ar[r]^{\partial_1} & A^{p+1,q}
}, \\
\xymatrix{
A^{p,q+1} \ar[r]^{\partial_1} & A^{p+1,q+1} \\
& A^{p+1,q} \ar[u]^{\partial_2}
},\quad
\xymatrix{
A^{p,q+1} \ar[r]^{\partial_1} & A^{p+1,q+1} \\
& A^{p+1,q} \ar[u]^{\partial_2} \ar[r]^{\partial_1} & A^{p+2,q+1}
},\quad \dots
\end{gather*}
where in both cases all the drawn components represent one-dimensional vector spaces and all drawn arrows represent isomorphisms. The number of non-zero components is called the \emph{length} of the zigzag.
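Note, for later use, that a square carries no cohomological information for the Fr\"olicher spectral sequence: each of its columns is a two-term complex whose differential is an isomorphism, so its column cohomology vanishes and the square contributes zero already to the first page $E_1$, hence to every subsequent page. Only the zigzags can contribute to the dimensions computed below.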
Let us consider now the complex of complex-valued differential forms on a complex manifold $X$, denoted by $\big(\Omega_\mathbb{C}(X),\partial,\bar{\partial}\big)$. If $X=\Gamma\backslash G$ is a \emph{nilmanifold}, i.e. a compact quotient of a connected and simply connected nilpotent Lie group $G$ by a co-compact discrete subgroup $\Gamma$, endowed with a left-invariant complex structure, then, under appropriate hypotheses, all cohomological information can be obtained from the subcomplex of left-invariant forms, namely differential forms whose pullback to $G$ is invariant by left-translations, denoted by $\big(\Omega_\mathbb{C}^\text{inv}(X),\partial,\bar{\partial}\big)$ (see \cite[Chapter 3]{Ang14} and references therein), which is isomorphic to the complex $\big(\bigwedge^{\bullet,\bullet} \mathfrak{g}_\mathbb{C}^*,\partial,\bar{\partial}\big)$, where $\mathfrak{g}_\mathbb{C}\coloneqq\mathfrak{g}\otimes\mathbb{C}$ denotes the complexification of the Lie algebra $\mathfrak{g}$ naturally associated to $G$. Using the results obtained by L. A. Cordero, M. Fernández, L. Ugarte and A. Gray in \cite[Theorem 1, Theorem 3]{CFUG97}, this last complex also permits us to compute easily the dimensions of the successive pages of the Fr\"olicher spectral sequence ${\{(E_r^{\bullet,\bullet},\mathop{}\!\mathrm{d}_r)\}}_{r\in\mathbb{N}}$.
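We recall, for the reader's convenience, that for a compact complex manifold $X$ the first page of the Fr\"olicher spectral sequence is given by Dolbeault cohomology and the sequence converges to de Rham cohomology:
\[
E_1^{p,q}\simeq H^{p,q}_{\bar{\partial}}(X),\qquad \bigoplus_{p+q=k}E_\infty^{p,q}\simeq H^{k}_{dR}(X;\mathbb{C}).
\]
In particular, degeneration at the first page means that $\dim_{\mathbb{C}} E_1^{p,q}=\dim_{\mathbb{C}} E_\infty^{p,q}$ for every $p,q\in\mathbb{Z}$.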
A classical example of nilmanifold endowed with a left-invariant complex structure is the \emph{Iwasawa manifold}, defined as the quotient
\[
\mathbb{I}\coloneqq\mathbb{H}\big(3;\mathbb{Z} [i]\big)\big\backslash\mathbb{H}(3;\mathbb{C}),
\]
where $\mathbb{H}(3;\mathbb{C})$ is the $3$-dimensional \emph{Heisenberg group} over $\mathbb{C}$, defined by
\[
\mathbb{H}(3;\mathbb{C})\coloneqq\Set{
\begin{pmatrix}
1 & z_1 & z_3 \\
0 & 1 & z_2 \\
0 & 0 & 1
\end{pmatrix} \in \mathrm{GL}(3,\mathbb{C})
| z_1,z_2,z_3\in\mathbb{C}}
\]
and $\mathbb{H}\big(3;\mathbb{Z}[i]\big)$ corresponds to the lattice
\[
\mathbb{H}\big(3;\mathbb{Z}[i]\big)\coloneqq\mathbb{H}(3;\mathbb{C})\cap \mathrm{GL}(3,\mathbb{Z}[i]).
\]
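Explicitly, writing an element of $\mathbb{H}(3;\mathbb{C})$ as a triple $(z_1,z_2,z_3)$, matrix multiplication gives the group law
\[
(z_1,z_2,z_3)\cdot(w_1,w_2,w_3)=(z_1+w_1,\,z_2+w_2,\,z_3+w_3+z_1w_2),
\]
which exhibits $\mathbb{H}(3;\mathbb{C})$ as a connected, simply connected, $2$-step nilpotent complex Lie group.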
I. Nakamura determined in \cite[pp. 94-96]{Nak75} a family of small deformations $\{X_{\mathbf{t}}=(\mathbb{I},J_{\mathbf{t}})\}_{\mathbf{t}\in\Delta(0,\varepsilon)}$ depending on six parameters
\[
\mathbf{t}=(t_{11},t_{12},t_{21},t_{22},t_{31},t_{32})\in\Delta(0,\varepsilon),
\]
with
\[
\Delta(0,\varepsilon)\coloneqq\Set{\mathbf{s}\in\mathbb{C}^6|\lvert \mathbf{s}\rvert <\varepsilon},
\]
where $\varepsilon>0$ is small enough.
Furthermore, he provided a co-frame $(\varphi_\mathbf{t}^1,\varphi_\mathbf{t}^2,\varphi_\mathbf{t}^3)$ of left-invariant $(1,0)$-forms on $X_\mathbf{t}$ satisfying the structure equations given by
\[
\begin{cases}
\mathop{}\!\mathrm{d}\varphi_\mathbf{t}^1=0\\
\mathop{}\!\mathrm{d}\varphi_\mathbf{t}^2=0\\
\mathop{}\!\mathrm{d}\varphi_\mathbf{t}^3=\sigma_{12}\,\varphi_\mathbf{t}^1\wedge\varphi_\mathbf{t}^2+\sigma_{1\bar{1}}\,\varphi_\mathbf{t}^1\wedge\bar{\varphi}_\mathbf{t}^1+\sigma_{1\bar{2}}\,\varphi_\mathbf{t}^1\wedge\bar{\varphi}_\mathbf{t}^2+\sigma_{2\bar{1}}\,\varphi_\mathbf{t}^2\wedge\bar{\varphi}_\mathbf{t}^1+\sigma_{2\bar{2}}\,\varphi_\mathbf{t}^2\wedge\bar{\varphi}_\mathbf{t}^2
\end{cases}
\]
where $\sigma_{12}, \sigma_{1\bar{1}}, \sigma_{2\bar{2}}, \sigma_{2\bar{1}}, \sigma_{1\bar{2}}\in\mathbb{C}$ are parameters depending on $\mathbf{t}$.
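For instance, at $\mathbf{t}=0$ one recovers the Iwasawa manifold itself: in Nakamura's conventions one may take $\varphi^1=\mathop{}\!\mathrm{d} z_1$, $\varphi^2=\mathop{}\!\mathrm{d} z_2$, $\varphi^3=\mathop{}\!\mathrm{d} z_3-z_1\mathop{}\!\mathrm{d} z_2$, so that
\[
\mathop{}\!\mathrm{d}\varphi^3=-\varphi^1\wedge\varphi^2,
\]
i.e. $\sigma_{12}=-1$ and $\sigma_{1\bar{1}}=\sigma_{1\bar{2}}=\sigma_{2\bar{1}}=\sigma_{2\bar{2}}=0$ (up to the sign conventions chosen for the structure constants).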
Defining the matrix $S$ as
\[
S=
\begin{pmatrix}
\overline{\sigma_{1\bar{1}}} & \overline{\sigma_{2\bar{2}}} &\overline{\sigma_{1\bar{2}}} &\overline{\sigma_{2\bar{1}}} \\
\sigma_{1\bar{1}} & \sigma_{2\bar{2}} & \sigma_{2\bar{1}} & \sigma_{1\bar{2}}
\end{pmatrix},
\]
the small deformations of the Iwasawa manifold can be classified into three classes, based on their Hodge numbers, further subdivided into subclasses according to their Bott-Chern numbers:
\begin{enumerate}[(i)]
\item $t_{11}=t_{12}=t_{21}=t_{22}=0$;
\item $D(\mathbf{t})=0$ and $(t_{11},t_{12},t_{21},t_{22})\neq(0,0,0,0)$:
\begin{enumerate}[(i)]
\item[(ii.a)] $D(\mathbf{t})=0$ and $\rank S=1$;
\item[(ii.b)] $D(\mathbf{t})=0$ and $\rank S=2$;
\end{enumerate}
\item[(iii)] $D(\mathbf{t})\neq 0$:
\begin{enumerate}[(i)]
\item[(iii.a)] $D(\mathbf{t})\neq 0$ and $\rank S=1$;
\item[(iii.b)] $D(\mathbf{t})\neq 0$ and $\rank S=2$.
\end{enumerate}
\end{enumerate}
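Here $D(\mathbf{t})$ denotes the determinant $D(\mathbf{t})=t_{11}t_{22}-t_{12}t_{21}$ (see \cite{Nak75}).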
Thanks to some results contained in \cite{Rol09}, \cite{Ang13} and the references therein, we can make some considerations about the de Rham, Dolbeault, Aeppli and Bott-Chern cohomologies, in particular for each $\mathbf{t}\in\Delta(0,\varepsilon)$ we get the isomorphisms
\begin{gather*}
H_{dR}^\bullet(\mathfrak{h},\mathbb{R})\simeq H^\bullet_{dR}(X_\mathbf{t},\mathbb{R}),\quad H_{\bar{\partial}}^{\bullet,\bullet}(\mathfrak{h}_\mathbb{C}
)\simeq H^{\bullet,\bullet}_{\bar{\partial}}(X_\mathbf{t}),\\
H_{BC}^{\bullet,\bullet}(\mathfrak{h}_\mathbb{C})\simeq H^{\bullet,\bullet}_{BC}(X_\mathbf{t}),\quad H_{A}^{\bullet,\bullet}(\mathfrak{h}_\mathbb{C})\simeq H^{\bullet,\bullet}_{A}(X_\mathbf{t}),
\end{gather*}
where $\mathfrak{h}$ is the Lie algebra naturally associated to $\mathbb{H}(3;\mathbb{C})$ and $H_*^{\bullet,\bullet}(\mathfrak{h}_\mathbb{C})$ represents the corresponding cohomology of the subcomplex of left-invariant forms; therefore, to compute the dimensions of the cohomologies, it is sufficient to analyze the complex of left-invariant forms. For this reason, in the next section we will restrict our attention to this last complex.
\section{Double complex of left-invariant forms on the Iwasawa manifold and its small deformations}
We are now going to analyze some explicit examples of complexes of differential forms; in particular we will give an explicit graphic representation, as in the Stelzig process, of the structure of the complex related to the Iwasawa manifold and some of its small deformations. Once we describe the required decomposition as a direct sum of squares and zigzags, it will be easy to obtain the dimensions of the successive pages of the Fr\"olicher spectral sequence.
Given an arbitrary bounded complex, the decomposition described by J. Stelzig is based on the construction of an ascending filtration, which further splits into simpler pieces, thus allowing us to identify the squares and zigzags appearing in the complex. The procedure works as follows: for each $k\in\mathbb{Z}$ we determine the subcomplex generated by all components of total degree at most $k$ (that is, the span of those components together with their images under $\partial_1$, $\partial_2$ and $\partial_1\partial_2$) and then a complement, inside it, of the subcomplex generated by the components of total degree at most $k-1$. This permits us to obtain, for every $k\in\mathbb{Z}$, a simpler subcomplex composed of components of total degree $k$, $k+1$ and $k+2$, whose direct sum is the complex we started from. As a result, it is easier to determine the various squares and zigzags contained in every subcomplex and thus to obtain the desired decomposition.
In the case of the complex of left-invariant differential forms of the Iwasawa manifold and its small deformations the procedure consists of five steps, based respectively on the computation of the images of the left-invariant differential forms of total degree between $2$ and $5$ under the operators $\partial$ and $\bar{\partial}$.
\begin{figure}[ht]
\begin{center}
a)\quad
\begin{tikzpicture}[scale=1.5]
\draw[help lines,black] (0,0) grid (4,4);
\draw [line width=1.5,->,red] (3/2,1/2) -- (5/2-0.03,1/2);
\draw [line width=1.5,->,red] (1/2,3/2) -- (1/2,5/2-0.03);
\draw [line width=1.5,->,green] (7/4,5/4) -- (9/4-0.03,5/4);
\draw [line width=1.5,->,green] (7/4,3/2) -- (9/4-0.03,3/2);
\draw [line width=1.5,->,green] (5/4,7/4) -- (5/4,9/4-0.03);
\draw [line width=1.5,->,green] (3/2,7/4) -- (3/2,9/4-0.03);
\draw [line width=1.5,->,green] (7/4,7/4) -- (9/4-0.03,7/4);
\draw [line width=1.5,->,green] (7/4,7/4) -- (7/4,9/4-0.03);
\draw [line width=1.5,->,green] (9/4,7/4) -- (9/4,9/4-0.03);
\draw [line width=1.5,->,green] (7/4,9/4) -- (9/4-0.03,9/4);
\draw [line width=1.5,->,blue] (7/4,5/2) -- (9/4-0.03,5/2);
\draw [line width=1.5,->,blue] (7/4,11/4) -- (9/4-0.03,11/4);
\draw [line width=1.5,->,blue] (5/2,7/4) -- (5/2,9/4-0.03);
\draw [line width=1.5,->,blue] (11/4,7/4) -- (11/4,9/4-0.03);
\draw [line width=1.5,->,orange] (7/2,3/2) -- (7/2,5/2-0.03);
\draw [line width=1.5,->,orange] (3/2,7/2) -- (5/2-0.03,7/2);
\draw [fill] (1/2,1/2) circle [radius=0.03];
\draw [fill=red] (5/4,1/4) circle [radius=0.03];
\draw [fill=red] (3/2,1/2) circle [radius=0.03];
\draw [fill=red] (7/4,3/4) circle [radius=0.03];
\draw [fill=green] (9/4,1/4) circle [radius=0.03];
\draw [fill=red] (5/2,1/2) circle [radius=0.03];
\draw [fill=green] (11/4,3/4) circle [radius=0.03];
\draw [fill=blue] (7/2,1/2) circle [radius=0.03];
\draw [fill=red] (1/4,5/4) circle [radius=0.03];
\draw [fill=red] (1/2,3/2) circle [radius=0.03];
\draw [fill=red] (3/4,7/4) circle [radius=0.03];
\draw [fill=green] (5/4,5/4) circle [radius=0.03];
\draw [fill=green] (3/2,5/4) circle [radius=0.03];
\draw [fill=green] (7/4,5/4) circle [radius=0.03];
\draw [fill=green] (5/4,3/2) circle [radius=0.03];
\draw [fill=green] (3/2,3/2) circle [radius=0.03];
\draw [fill=green] (7/4,3/2) circle [radius=0.03];
\draw [fill=green] (5/4,7/4) circle [radius=0.03];
\draw [fill=green] (3/2,7/4) circle [radius=0.03];
\draw [fill=green] (7/4,7/4) circle [radius=0.03];
\draw [fill=green] (9/4,5/4) circle [radius=0.03];
\draw [fill=blue] (5/2,5/4) circle [radius=0.03];
\draw [fill=blue] (11/4,5/4) circle [radius=0.03];
\draw [fill=green] (9/4,3/2) circle [radius=0.03];
\draw [fill=blue] (5/2,3/2) circle [radius=0.03];
\draw [fill=blue] (11/4,3/2) circle [radius=0.03];
\draw [fill=green] (9/4,7/4) circle [radius=0.03];
\draw [fill=blue] (5/2,7/4) circle [radius=0.03];
\draw [fill=blue] (11/4,7/4) circle [radius=0.03];
\draw [fill=orange] (13/4,5/4) circle [radius=0.03];
\draw [fill=orange] (7/2,3/2) circle [radius=0.03];
\draw [fill=orange] (15/4,7/4) circle [radius=0.03];
\draw [fill=green] (1/4,9/4) circle [radius=0.03];
\draw [fill=red] (1/2,5/2) circle [radius=0.03];
\draw [fill=green] (3/4,11/4) circle [radius=0.03];
\draw [fill=green] (5/4,9/4) circle [radius=0.03];
\draw [fill=green] (3/2,9/4) circle [radius=0.03];
\draw [fill=green] (7/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/4,5/2) circle [radius=0.03];
\draw [fill=blue] (3/2,5/2) circle [radius=0.03];
\draw [fill=blue] (7/4,5/2) circle [radius=0.03];
\draw [fill=blue] (5/4,11/4) circle [radius=0.03];
\draw [fill=blue] (3/2,11/4) circle [radius=0.03];
\draw [fill=blue] (7/4,11/4) circle [radius=0.03];
\draw [fill=green] (9/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/2,9/4) circle [radius=0.03];
\draw [fill=blue] (11/4,9/4) circle [radius=0.03];
\draw [fill=blue] (9/4,5/2) circle [radius=0.03];
\draw [fill=orange] (5/2,5/2) circle [radius=0.03];
\draw [fill=orange] (11/4,5/2) circle [radius=0.03];
\draw [fill=blue] (9/4,11/4) circle [radius=0.03];
\draw [fill=orange] (5/2,11/4) circle [radius=0.03];
\draw [fill=orange] (11/4,11/4) circle [radius=0.03];
\draw [fill=magenta] (13/4,9/4) circle [radius=0.03];
\draw [fill=orange] (7/2,5/2) circle [radius=0.03];
\draw [fill=magenta] (15/4,11/4) circle [radius=0.03];
\draw [fill=blue] (1/2,7/2) circle [radius=0.03];
\draw [fill=orange] (5/4,13/4) circle [radius=0.03];
\draw [fill=orange] (3/2,7/2) circle [radius=0.03];
\draw [fill=orange] (7/4,15/4) circle [radius=0.03];
\draw [fill=magenta] (9/4,13/4) circle [radius=0.03];
\draw [fill=orange] (5/2,7/2) circle [radius=0.03];
\draw [fill=magenta] (11/4,15/4) circle [radius=0.03];
\draw [fill] (7/2,7/2) circle [radius=0.03];
\end{tikzpicture}\qquad b)\quad
\begin{tikzpicture}[scale=1.5]
\draw[help lines,black] (0,0) grid (4,4);
\draw [line width=1.5,->,red] (5/4,1/4) -- (9/4-0.03,1/4);
\draw [line width=1.5,->,red] (1/4,5/4) -- (1/4,9/4-0.03);
\draw [line width=1.5,->,red] (1/4,5/4) -- (5/4-0.03,5/4);
\draw [line width=1.5,->,red] (5/4,1/4) -- (5/4,5/4-0.03);
\draw [line width=1.5,->,green] (1/2,5/2) -- (5/4-0.03,5/2);
\draw [line width=1.5,->,green] (7/4,5/4) -- (5/2-0.03,5/4);
\draw [line width=1.5,->,green] (7/4,3/2) -- (9/4-0.03,3/2);
\draw [line width=1.5,->,green] (5/4,7/4) -- (5/4,5/2-0.03);
\draw [line width=1.5,->,green] (5/2,1/2) -- (5/2,5/4-0.03);
\draw [line width=1.5,->,green] (3/2,7/4) -- (3/2,9/4-0.03);
\draw [line width=1.5,->,green] (7/4,7/4) -- (9/4-0.03,7/4);
\draw [line width=1.5,->,green] (7/4,7/4) -- (7/4,9/4-0.03);
\draw [line width=1.5,->,green] (9/4,7/4) -- (9/4,9/4-0.03);
\draw [line width=1.5,->,green] (7/4,9/4) -- (9/4-0.03,9/4);
\draw [line width=1.5,->,blue] (7/4,5/2) -- (9/4-0.03,5/2);
\draw [line width=1.5,->,blue] (11/4,3/2) -- (7/2-0.03,3/2);
\draw [line width=1.5,->,blue] (3/2,11/4) -- (9/4-0.03,11/4);
\draw [line width=1.5,->,blue] (5/2,7/4) -- (5/2,9/4-0.03);
\draw [line width=1.5,->,blue] (3/2,11/4) -- (3/2,7/2-0.03);
\draw [line width=1.5,->,blue] (11/4,3/2) -- (11/4,9/4-0.03);
\draw [line width=1.5,->,orange] (15/4,7/4) -- (15/4,11/4-0.03);
\draw [line width=1.5,->,orange] (7/4,15/4) -- (11/4-0.03,15/4);
\draw [line width=1.5,->,orange] (11/4,11/4) -- (15/4-0.03,11/4);
\draw [line width=1.5,->,orange] (11/4,11/4) -- (11/4,15/4-0.03);
\draw [fill] (1/2,1/2) circle [radius=0.03];
\draw [fill=red] (5/4,1/4) circle [radius=0.03];
\draw [fill=red] (3/2,1/2) circle [radius=0.03];
\draw [fill=red] (7/4,3/4) circle [radius=0.03];
\draw [fill=red] (9/4,1/4) circle [radius=0.03];
\draw [fill=green] (5/2,1/2) circle [radius=0.03];
\draw [fill=green] (11/4,3/4) circle [radius=0.03];
\draw [fill=blue] (7/2,1/2) circle [radius=0.03];
\draw [fill=red] (1/4,5/4) circle [radius=0.03];
\draw [fill=red] (1/2,3/2) circle [radius=0.03];
\draw [fill=red] (3/4,7/4) circle [radius=0.03];
\draw [fill=red] (5/4,5/4) circle [radius=0.03];
\draw [fill=green] (3/2,5/4) circle [radius=0.03];
\draw [fill=green] (7/4,5/4) circle [radius=0.03];
\draw [fill=green] (5/4,3/2) circle [radius=0.03];
\draw [fill=green] (3/2,3/2) circle [radius=0.03];
\draw [fill=green] (7/4,3/2) circle [radius=0.03];
\draw [fill=green] (5/4,7/4) circle [radius=0.03];
\draw [fill=green] (3/2,7/4) circle [radius=0.03];
\draw [fill=green] (7/4,7/4) circle [radius=0.03];
\draw [fill=green] (5/2,5/4) circle [radius=0.03];
\draw [fill=blue] (11/4,5/4) circle [radius=0.03];
\draw [fill=green] (9/4,3/2) circle [radius=0.03];
\draw [fill=blue] (5/2,3/2) circle [radius=0.03];
\draw [fill=blue] (11/4,3/2) circle [radius=0.03];
\draw [fill=green] (9/4,7/4) circle [radius=0.03];
\draw [fill=blue] (5/2,7/4) circle [radius=0.03];
\draw [fill=orange] (13/4,5/4) circle [radius=0.03];
\draw [fill=blue] (7/2,3/2) circle [radius=0.03];
\draw [fill=orange] (15/4,7/4) circle [radius=0.03];
\draw [fill=red] (1/4,9/4) circle [radius=0.03];
\draw [fill=green] (1/2,5/2) circle [radius=0.03];
\draw [fill=green] (3/4,11/4) circle [radius=0.03];
\draw [fill=green] (3/2,9/4) circle [radius=0.03];
\draw [fill=green] (7/4,9/4) circle [radius=0.03];
\draw [fill=green] (5/4,5/2) circle [radius=0.03];
\draw [fill=blue] (3/2,5/2) circle [radius=0.03];
\draw [fill=blue] (7/4,5/2) circle [radius=0.03];
\draw [fill=blue] (5/4,11/4) circle [radius=0.03];
\draw [fill=blue] (3/2,11/4) circle [radius=0.03];
\draw [fill=green] (9/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/2,9/4) circle [radius=0.03];
\draw [fill=blue] (11/4,9/4) circle [radius=0.03];
\draw [fill=blue] (9/4,5/2) circle [radius=0.03];
\draw [fill=orange] (5/2,5/2) circle [radius=0.03];
\draw [fill=orange] (11/4,5/2) circle [radius=0.03];
\draw [fill=blue] (9/4,11/4) circle [radius=0.03];
\draw [fill=orange] (5/2,11/4) circle [radius=0.03];
\draw [fill=orange] (11/4,11/4) circle [radius=0.03];
\draw [fill=magenta] (13/4,9/4) circle [radius=0.03];
\draw [fill=magenta] (7/2,5/2) circle [radius=0.03];
\draw [fill=orange] (15/4,11/4) circle [radius=0.03];
\draw [fill=blue] (1/2,7/2) circle [radius=0.03];
\draw [fill=orange] (5/4,13/4) circle [radius=0.03];
\draw [fill=blue] (3/2,7/2) circle [radius=0.03];
\draw [fill=orange] (7/4,15/4) circle [radius=0.03];
\draw [fill=magenta] (9/4,13/4) circle [radius=0.03];
\draw [fill=magenta] (5/2,7/2) circle [radius=0.03];
\draw [fill=orange] (11/4,15/4) circle [radius=0.03];
\draw [fill] (7/2,7/2) circle [radius=0.03];
\draw [fill=blue] (5/4,9/4) circle [radius=0.03];
\draw [fill=blue] (9/4,5/4) circle [radius=0.03];
\draw [fill=blue] (11/4,7/4) circle [radius=0.03];
\draw [fill=blue] (7/4,11/4) circle [radius=0.03];
\end{tikzpicture}
\bigskip
c)\quad
\begin{tikzpicture}[scale=1.5]
\draw[help lines,black] (0,0) grid (4,4);
\draw [line width=1.5,->,red] (3/2,1/2) -- (5/2-0.03,1/2);
\draw [line width=1.5,->,red] (1/2,3/2) -- (1/2,5/2-0.03);
\draw [line width=1.5,->,red] (1/2,3/2) -- (3/2-0.03,3/2);
\draw [line width=1.5,->,red] (3/2,1/2) -- (3/2,3/2-0.03);
\draw [line width=1.5,->,green] (1/4,9/4) -- (5/4-0.03,9/4);
\draw [line width=1.5,->,green] (3/4,11/4) -- (3/2-0.03,11/4);
\draw [line width=1.5,->,green] (7/4,5/4) -- (9/4-0.03,5/4);
\draw [line width=1.5,->,green] (7/4,3/2) -- (11/4-0.03,3/2);
\draw [line width=1.5,->,green] (5/4,7/4) -- (5/4,9/4-0.03);
\draw [line width=1.5,->,green] (9/4,1/4) -- (9/4,5/4-0.03);
\draw [line width=1.5,->,green] (11/4,3/4) -- (11/4,3/2-0.03);
\draw [line width=1.5,->,green] (3/2,7/4) -- (3/2,11/4-0.03);
\draw [line width=1.5,->,green] (7/4,7/4) -- (9/4-0.03,7/4);
\draw [line width=1.5,->,green] (7/4,7/4) -- (7/4,9/4-0.03);
\draw [line width=1.5,->,green] (9/4,7/4) -- (9/4,9/4-0.03);
\draw [line width=1.5,->,green] (7/4,9/4) -- (9/4-0.03,9/4);
\draw [line width=1.5,->,blue] (5/4,5/2) -- (9/4-0.03,5/2);
\draw [line width=1.5,->,blue] (11/4,7/4) -- (15/4-0.03,7/4);
\draw [line width=1.5,->,blue] (5/2,5/4) -- (13/4-0.03,5/4);
\draw [line width=1.5,->,blue] (7/4,11/4) -- (9/4-0.03,11/4);
\draw [line width=1.5,->,blue] (5/2,5/4) -- (5/2,9/4-0.03);
\draw [line width=1.5,->,blue] (7/4,11/4) -- (7/4,15/4-0.03);
\draw [line width=1.5,->,blue] (5/4,5/2) -- (5/4,13/4-0.03);
\draw [line width=1.5,->,blue] (11/4,7/4) -- (11/4,9/4-0.03);
\draw [line width=1.5,->,orange] (7/2,3/2) -- (7/2,5/2-0.03);
\draw [line width=1.5,->,orange] (3/2,7/2) -- (5/2-0.03,7/2);
\draw [line width=1.5,->,orange] (5/2,5/2) -- (7/2-0.03,5/2);
\draw [line width=1.5,->,orange] (5/2,5/2) -- (5/2,7/2-0.03);
\draw [fill] (1/2,1/2) circle [radius=0.03];
\draw [fill=red] (5/4,1/4) circle [radius=0.03];
\draw [fill=red] (3/2,1/2) circle [radius=0.03];
\draw [fill=red] (7/4,3/4) circle [radius=0.03];
\draw [fill=green] (9/4,1/4) circle [radius=0.03];
\draw [fill=red] (5/2,1/2) circle [radius=0.03];
\draw [fill=green] (11/4,3/4) circle [radius=0.03];
\draw [fill=blue] (7/2,1/2) circle [radius=0.03];
\draw [fill=red] (1/4,5/4) circle [radius=0.03];
\draw [fill=red] (1/2,3/2) circle [radius=0.03];
\draw [fill=red] (3/4,7/4) circle [radius=0.03];
\draw [fill=green] (5/4,5/4) circle [radius=0.03];
\draw [fill=green] (7/4,5/4) circle [radius=0.03];
\draw [fill=red] (3/2,3/2) circle [radius=0.03];
\draw [fill=green] (7/4,3/2) circle [radius=0.03];
\draw [fill=green] (5/4,7/4) circle [radius=0.03];
\draw [fill=green] (3/2,7/4) circle [radius=0.03];
\draw [fill=green] (7/4,7/4) circle [radius=0.03];
\draw [fill=blue] (11/4,7/4) circle [radius=0.03];
\draw [fill=blue] (5/2,5/4) circle [radius=0.03];
\draw [fill=green] (11/4,3/2) circle [radius=0.03];
\draw [fill=green] (9/4,7/4) circle [radius=0.03];
\draw [fill=blue] (13/4,5/4) circle [radius=0.03];
\draw [fill=orange] (7/2,3/2) circle [radius=0.03];
\draw [fill=blue] (15/4,7/4) circle [radius=0.03];
\draw [fill=green] (1/4,9/4) circle [radius=0.03];
\draw [fill=red] (1/2,5/2) circle [radius=0.03];
\draw [fill=green] (3/4,11/4) circle [radius=0.03];
\draw [fill=green] (7/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/4,5/2) circle [radius=0.03];
\draw [fill=green] (3/2,11/4) circle [radius=0.03];
\draw [fill=blue] (7/4,11/4) circle [radius=0.03];
\draw [fill=green] (9/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/2,9/4) circle [radius=0.03];
\draw [fill=blue] (11/4,9/4) circle [radius=0.03];
\draw [fill=blue] (9/4,5/2) circle [radius=0.03];
\draw [fill=orange] (5/2,5/2) circle [radius=0.03];
\draw [fill=blue] (9/4,11/4) circle [radius=0.03];
\draw [fill=orange] (11/4,11/4) circle [radius=0.03];
\draw [fill=magenta] (13/4,9/4) circle [radius=0.03];
\draw [fill=orange] (7/2,5/2) circle [radius=0.03];
\draw [fill=magenta] (15/4,11/4) circle [radius=0.03];
\draw [fill=blue] (1/2,7/2) circle [radius=0.03];
\draw [fill=blue] (5/4,13/4) circle [radius=0.03];
\draw [fill=orange] (3/2,7/2) circle [radius=0.03];
\draw [fill=blue] (7/4,15/4) circle [radius=0.03];
\draw [fill=magenta] (9/4,13/4) circle [radius=0.03];
\draw [fill=orange] (5/2,7/2) circle [radius=0.03];
\draw [fill=magenta] (11/4,15/4) circle [radius=0.03];
\draw [fill] (7/2,7/2) circle [radius=0.03];
\draw [fill=green] (3/2,5/4) circle [radius=0.03];
\draw [fill=green] (5/4,3/2) circle [radius=0.03];
\draw [fill=green] (5/4,9/4) circle [radius=0.03];
\draw [fill=green] (9/4,5/4) circle [radius=0.03];
\draw [fill=blue] (11/4,5/4) circle [radius=0.03];
\draw [fill=blue] (9/4,3/2) circle [radius=0.03];
\draw [fill=blue] (5/2,3/2) circle [radius=0.03];
\draw [fill=blue] (5/2,7/4) circle [radius=0.03];
\draw [fill=blue] (3/2,9/4) circle [radius=0.03];
\draw [fill=blue] (3/2,5/2) circle [radius=0.03];
\draw [fill=blue] (7/4,5/2) circle [radius=0.03];
\draw [fill=blue] (5/4,11/4) circle [radius=0.03];
\draw [fill=orange] (11/4,5/2) circle [radius=0.03];
\draw [fill=orange] (5/2,11/4) circle [radius=0.03];
\end{tikzpicture}\qquad d)\quad
\begin{tikzpicture}[scale=1.5]
\draw[help lines,black] (0,0) grid (4,4);
\draw [line width=1.5,->,red] (3/2,1/2) -- (5/2-0.03,1/2);
\draw [line width=1.5,->,red] (1/2,3/2) -- (1/2,5/2-0.03);
\draw [line width=1.5,->,red] (1/2,3/2) -- (5/4-0.03,3/2);
\draw [line width=1.5,->,red] (3/2,1/2) -- (3/2,5/4-0.03);
\draw [line width=1.5,->,green] (1/4,9/4) -- (5/4-0.03,9/4);
\draw [line width=1.5,->,green] (3/4,11/4) -- (3/2-0.03,11/4);
\draw [line width=1.5,->,green] (7/4,5/4) -- (9/4-0.03,5/4);
\draw [line width=1.5,->,green] (7/4,3/2) -- (11/4-0.03,3/2);
\draw [line width=1.5,->,green] (5/4,7/4) -- (5/4,9/4-0.03);
\draw [line width=1.5,->,green] (9/4,1/4) -- (9/4,5/4-0.03);
\draw [line width=1.5,->,green] (11/4,3/4) -- (11/4,3/2-0.03);
\draw [line width=1.5,->,green] (3/2,7/4) -- (3/2,11/4-0.03);
\draw [line width=1.5,->,green] (7/4,7/4) -- (9/4-0.03,7/4);
\draw [line width=1.5,->,green] (7/4,7/4) -- (7/4,9/4-0.03);
\draw [line width=1.5,->,green] (9/4,7/4) -- (9/4,9/4-0.03);
\draw [line width=1.5,->,green] (7/4,9/4) -- (9/4-0.03,9/4);
\draw [line width=1.5,->,blue] (5/4,5/2) -- (9/4-0.03,5/2);
\draw [line width=1.5,->,blue] (11/4,7/4) -- (15/4-0.03,7/4);
\draw [line width=1.5,->,blue] (5/2,5/4) -- (13/4-0.03,5/4);
\draw [line width=1.5,->,blue] (7/4,11/4) -- (9/4-0.03,11/4);
\draw [line width=1.5,->,blue] (5/2,5/4) -- (5/2,9/4-0.03);
\draw [line width=1.5,->,blue] (7/4,11/4) -- (7/4,15/4-0.03);
\draw [line width=1.5,->,blue] (5/4,5/2) -- (5/4,13/4-0.03);
\draw [line width=1.5,->,blue] (11/4,7/4) -- (11/4,9/4-0.03);
\draw [line width=1.5,->,orange] (7/2,3/2) -- (7/2,5/2-0.03);
\draw [line width=1.5,->,orange] (3/2,7/2) -- (5/2-0.03,7/2);
\draw [line width=1.5,->,orange] (11/4,5/2) -- (7/2-0.03,5/2);
\draw [line width=1.5,->,orange] (5/2,11/4) -- (5/2,7/2-0.03);
\draw [fill] (1/2,1/2) circle [radius=0.03];
\draw [fill=red] (5/4,1/4) circle [radius=0.03];
\draw [fill=red] (3/2,1/2) circle [radius=0.03];
\draw [fill=red] (7/4,3/4) circle [radius=0.03];
\draw [fill=green] (9/4,1/4) circle [radius=0.03];
\draw [fill=red] (5/2,1/2) circle [radius=0.03];
\draw [fill=green] (11/4,3/4) circle [radius=0.03];
\draw [fill=blue] (7/2,1/2) circle [radius=0.03];
\draw [fill=red] (1/4,5/4) circle [radius=0.03];
\draw [fill=red] (1/2,3/2) circle [radius=0.03];
\draw [fill=red] (3/4,7/4) circle [radius=0.03];
\draw [fill=green] (5/4,5/4) circle [radius=0.03];
\draw [fill=red] (3/2,5/4) circle [radius=0.03];
\draw [fill=green] (7/4,5/4) circle [radius=0.03];
\draw [fill=red] (5/4,3/2) circle [radius=0.03];
\draw [fill=green] (3/2,3/2) circle [radius=0.03];
\draw [fill=green] (7/4,3/2) circle [radius=0.03];
\draw [fill=green] (5/4,7/4) circle [radius=0.03];
\draw [fill=green] (3/2,7/4) circle [radius=0.03];
\draw [fill=green] (7/4,7/4) circle [radius=0.03];
\draw [fill=blue] (11/4,7/4) circle [radius=0.03];
\draw [fill=blue] (5/2,5/4) circle [radius=0.03];
\draw [fill=green] (11/4,3/2) circle [radius=0.03];
\draw [fill=green] (9/4,7/4) circle [radius=0.03];
\draw [fill=blue] (13/4,5/4) circle [radius=0.03];
\draw [fill=orange] (7/2,3/2) circle [radius=0.03];
\draw [fill=blue] (15/4,7/4) circle [radius=0.03];
\draw [fill=green] (1/4,9/4) circle [radius=0.03];
\draw [fill=red] (1/2,5/2) circle [radius=0.03];
\draw [fill=green] (3/4,11/4) circle [radius=0.03];
\draw [fill=green] (7/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/4,5/2) circle [radius=0.03];
\draw [fill=green] (3/2,11/4) circle [radius=0.03];
\draw [fill=blue] (7/4,11/4) circle [radius=0.03];
\draw [fill=green] (9/4,9/4) circle [radius=0.03];
\draw [fill=blue] (5/2,9/4) circle [radius=0.03];
\draw [fill=blue] (11/4,9/4) circle [radius=0.03];
\draw [fill=blue] (9/4,5/2) circle [radius=0.03];
\draw [fill=orange] (5/2,5/2) circle [radius=0.03];
\draw [fill=orange] (11/4,5/2) circle [radius=0.03];
\draw [fill=blue] (9/4,11/4) circle [radius=0.03];
\draw [fill=orange] (5/2,11/4) circle [radius=0.03];
\draw [fill=orange] (11/4,11/4) circle [radius=0.03];
\draw [fill=magenta] (13/4,9/4) circle [radius=0.03];
\draw [fill=orange] (7/2,5/2) circle [radius=0.03];
\draw [fill=magenta] (15/4,11/4) circle [radius=0.03];
\draw [fill=blue] (1/2,7/2) circle [radius=0.03];
\draw [fill=blue] (5/4,13/4) circle [radius=0.03];
\draw [fill=orange] (3/2,7/2) circle [radius=0.03];
\draw [fill=blue] (7/4,15/4) circle [radius=0.03];
\draw [fill=magenta] (9/4,13/4) circle [radius=0.03];
\draw [fill=orange] (5/2,7/2) circle [radius=0.03];
\draw [fill=magenta] (11/4,15/4) circle [radius=0.03];
\draw [fill] (7/2,7/2) circle [radius=0.03];
\draw [fill=green] (5/4,9/4) circle [radius=0.03];
\draw [fill=green] (9/4,5/4) circle [radius=0.03];
\draw [fill=blue] (11/4,5/4) circle [radius=0.03];
\draw [fill=blue] (9/4,3/2) circle [radius=0.03];
\draw [fill=blue] (5/2,3/2) circle [radius=0.03];
\draw [fill=blue] (5/2,7/4) circle [radius=0.03];
\draw [fill=blue] (3/2,9/4) circle [radius=0.03];
\draw [fill=blue] (3/2,5/2) circle [radius=0.03];
\draw [fill=blue] (7/4,5/2) circle [radius=0.03];
\draw [fill=blue] (5/4,11/4) circle [radius=0.03];
\end{tikzpicture}
\end{center}
\caption{\label{fig1} Representation of the structure of the complexes of complex-valued differential forms related to the Iwasawa manifold and some of its small deformations. Horizontal and vertical arrows represent respectively the maps $\mathop{}\!\mathrm{\partial}$ and $\mathop{}\!\mathrm{\bar{\partial}}$. The different colours correspond to the various steps of the Stelzig process.}
\end{figure}
In Figure \ref{fig1}, we see the graphs corresponding to some deformations of the Iwasawa manifold obtained in this way. The graph a) corresponds to the complex of differential forms of the Iwasawa manifold itself. Its structure, already determined by D. Angella in \cite[Section 4]{Ang13}, is quite simple and can be represented as a direct sum of thirty-six zigzags of length $1$, twelve zigzags of length $2$ and a square.
The graph b) instead represents the complex related to a deformation of the Iwasawa manifold in the case where
\[
t_{21}\neq 0,\quad t_{11}=t_{12}=t_{22}=t_{31}=t_{32}=0,
\]
that is, of class (ii.a). It has a slightly more complicated structure, with twenty-eight zigzags of length $1$, four zigzags of length $2$, four zigzags of length $3$, two zigzags of length $5$ and a square.
Finally, the graphs c) and d) concern a deformation obtained with parameters
\[
t_{11}\neq 0,\quad t_{22}\neq0,\quad t_{12}=t_{21}=t_{31}=t_{32}=0
\]
respectively when $|t_{11}|=|t_{22}|$ and $|t_{11}|\neq|t_{22}|$. In the first case the deformation gives an example of class (iii.a), while in the second case it belongs to class (iii.b).
The only difference between the two graphs is given by two zigzags of length $5$ in the first, which split into four zigzags of length $3$ when the two norms are different. In addition to these, we also find eight more zigzags of length $3$ and a square.
By the results mentioned above, these representations give another way to compute the dimensions of the de Rham, Dolbeault, Bott-Chern and Aeppli cohomologies of the manifolds considered, already determined in \cite[p. 96]{Nak75} and \cite[Section 5]{Ang13}. For example, if we want to check the dimensions of Dolbeault cohomology, we have to remove the vertical arrows in the graphs, together with their endpoint dots, and count the remaining points in each bidegree (see the table in \cite[p. 96]{Nak75} for the corresponding values); a mechanical version of this count is sketched below.
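In the hypothetical encoding used above, this count reads as follows; again this is only a minimal sketch of ours, in which every vertical ($\mathop{}\!\mathrm{\bar{\partial}}$) arrow cancels its source against its target and the dots surviving in bidegree $(p,q)$ give $\dim H^{p,q}_{\mathop{}\!\mathrm{\bar{\partial}}}$.
\begin{verbatim}
from collections import Counter

def dolbeault_dims(dots, vertical):
    # 'dots' lists the bidegree (p, q) of every dot (with repetitions);
    # 'vertical' lists the d-bar arrows as pairs ((p, q), (p, q + 1)).
    dims = Counter(dots)
    for src, tgt in vertical:
        dims[src] -= 1  # each d-bar arrow removes its two endpoint dots
        dims[tgt] -= 1
    return {pq: d for pq, d in dims.items() if d > 0}

# Toy check: two dots in bidegree (1, 0), one in (1, 1) and one d-bar
# arrow between them leave a single class in H^{1,0}.
print(dolbeault_dims([(1, 0), (1, 0), (1, 1)], [((1, 0), (1, 1))]))
# {(1, 0): 1}
\end{verbatim}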
\section{Properties of the Fr\"olicher spectral sequence under
small deformations}
Thanks to the explicit description given in \cite{CFUG97}, we can easily use the graphs in Figure \ref{fig1} to determine the dimensions of the successive pages of the Fr\"olicher spectral sequence of the related manifolds, which we collect in the tables of Figure \ref{fig2}.
In particular, we observe that for the Iwasawa manifold and the deformation of class (ii) analysed here the Fr\"olicher spectral sequence degenerates at the second page, while for the deformations of class (iii), in both the examples considered, it degenerates at the first page, in accordance with the Hodge numbers of the Iwasawa manifold and its small deformations already determined by I. Nakamura in \cite[p.96]{Nak75}.
These facts permit us to make the following observations.
\begin{re}{1}
M. Maschio showed in \cite[Theorem 1]{Mas18} that, given any compact complex manifold $X$, if the dimensions of the Dolbeault cohomology are constant under deformations and the Fr\"olicher spectral sequence of $X$ degenerates at the second page, then for each small enough deformation of $X$ the Fr\"olicher spectral sequence degenerates at the second page too. In \cite[Section 5]{Mas18} he also showed that the hypothesis of constancy is strictly necessary to guarantee the stability of the degeneration at the second page under small deformations: indeed, using the Nakamura manifold, which is an example of a holomorphically parallelizable complex three-dimensional solvmanifold whose Fr\"olicher spectral sequence degenerates at the second page, he considered a family of small deformations and proved (\cite[Theorem 9]{Mas18}) that their Fr\"olicher spectral sequences degenerate at higher steps.
Here we have provided, by means of the Iwasawa manifold and its small deformations, another example showing the necessity of this condition, in this case using a nilmanifold with a rather simple structure.
\end{re}
\begin{figure}
{\small
\[
\begin{array}{|c|cc|ccc|cccc|ccc|cc|}
\toprule
\text{a)} & {1,0} & {0,1} &{2,0} & {1,1} &{0,2} & {3,0} &{2,1} & {1,2} & {0,3} & {3,1} &{2,2} & {1,3} &{3,2} & {2,3} \\
\midrule
E_1^{\bullet,\bullet} & 3 & 2 & 3 & 6 & 2 & 1 & 6 & 6 & 1 & 2 & 6 & 3 & 2 & 3\\
E_2^{\bullet,\bullet} & 2 & 2 & 2 & 4 & 2 & 1 & 4 & 4 & 1 & 2 & 4 & 2 & 2 & 2\\
\bottomrule
\end{array}
\]
\[
\begin{array}{|c|cc|ccc|cccc|ccc|cc|}
\toprule
\text{b)} & {1,0} & {0,1} &{2,0} & {1,1} &{0,2} & {3,0} &{2,1} & {1,2} & {0,3} & {3,1} &{2,2} & {1,3} &{3,2} & {2,3}\\
\midrule
E_1^{\bullet,\bullet} & 2 & 2 & 2 & 5 & 2 & 1 & 5 & 5 & 1 & 2 & 5 & 2 & 2 & 2\\
E_2^{\bullet,\bullet} & 2 & 2 & 2 & 4 & 2 & 1 & 4 & 4 & 1 & 2 & 4 & 2 & 2 & 2
\\
\bottomrule
\end{array}
\]
\[
\begin{array}{|c|cc|ccc|cccc|ccc|cc|}
\toprule
\text{c), d)} & {1,0} & {0,1} &{2,0} & {1,1} &{0,2} & {3,0} &{2,1} & {1,2} & {0,3} & {3,1} &{2,2} & {1,3} &{3,2} & {2,3} \\
\midrule
E_1^{\bullet,\bullet} & 2 & 2 & 1 & 5 & 2 & 1 & 4 & 4 & 1 & 2 & 5 & 1 & 2 & 2\\
\bottomrule
\end{array}
\]
}
\caption{\label{fig2} Successive pages of the Fr\"olicher spectral sequence of the Iwasawa manifold (table a)) and some of its small deformations (tables b) and c), d)).}
\end{figure}
\begin{re}{2}
The example of the Nakamura manifold used by M. Maschio also permits one to observe (\cite[Remark 2]{Mas18}) that, in general, the dimensions of the components $E_r^{p,q}(X_\mathbf{t})$ of the Fr\"olicher spectral sequence vary neither upper semi-continuously nor lower semi-continuously under small deformations, i.e. if $\{X_{\mathbf{t}}=(X,J_{\mathbf{t}})\}_{\mathbf{t}\in\Delta(0,\varepsilon)}$ is a family of small deformations of a compact complex manifold $X$, then for every $p,q\in\mathbb{Z}$ and $r\in\mathbb{Z}\setminus\{0,1\}$ the map
\[
\mathbf{t}\mapsto \dim\big(E_r^{p,q}(X_\mathbf{t})\big)
\]
is, in general, neither upper nor lower semi-continuous.
We can also deduce this by inspecting the values collected in Figure \ref{fig2}, in particular by comparing the numbers corresponding to the components of type $(2,0)$ and $(1,1)$ in tables a) and c).
\end{re}
\begin{re}{3}
It is evident that the Fr\"olicher spectral sequences of the considered manifolds present also a symmetry property: for each $p,q\in \mathbb{N}$ we have
\[
\dim\big(E_2^{p,q}(X_\mathbf{t})\big)=\dim\big(E_2^{3-p,3-q}(X_\mathbf{t})\big).
\]
This equality can be checked directly in the tables of Figure \ref{fig2}: in table a), for instance, one reads $\dim E_2^{1,0}=2=\dim E_2^{2,3}$ and $\dim E_2^{2,1}=4=\dim E_2^{1,2}$. It represents a generalization of the Serre duality for Dolbeault cohomology, corresponding for each $p,q\in\mathbb{Z}$ to the isomorphism
\[
H_{\mathop{}\!\mathrm{\bar{\partial}}}^{p,q}(X_\mathbf{t})\simeq H_{\mathop{}\!\mathrm{\bar{\partial}}}^{n-p,n-q}(X_\mathbf{t})^*.
\]
This symmetry was already noticed by L. Ugarte in \cite{Uga00} for a hypothetical complex structure on $S^6$, and it has recently been proved by A. Milojevic in \cite[Corollary 7]{Mil19}, who shows that Serre symmetry is valid in general on every page of the Fr\"olicher spectral sequence.
\end{re}
\section{Introduction}\label{sec:1}
One of the most beautiful results in the theory of Milnor fibrations is the formula for the (local) Milnor monodromy zeta functions obtained by Varchenko \cite{Varchenko} (see also \cite{Kushnirenko} and \cite{Oka-2} for details on this subject). In his formula, the Milnor monodromy zeta function $\zeta_f(t)\in {\mathbb C} (t)^*$ at $0 \in {\mathbb C}^n$ of a polynomial $f(x) \in {\mathbb C} [x_1, x_2, \ldots , x_n]$ on ${\mathbb C}^n$ such that $f(0)=0$ is expressed in terms of the geometry of the Newton polygon of $f$ (for a similar and more precise result on Hodge structures, see also Tanabe \cite{Tanabe}). To prove it, he constructed a toric modification of ${\mathbb C}^n$ on which the pull-back of $f$ defines a hypersurface with only normal crossing singularities. Since ${\mathbb C}^n$ is a very special toric variety, it is natural to generalize his formula to Milnor fibers over general singular toric varieties. In this paper, we realize this idea with the help of sheaf-theoretical methods, such as nearby cycles and constructible sheaves. In particular, in Theorem \ref{thm:3-4} we prove a formula for the monodromy zeta functions of Milnor fibers over general (not necessarily normal) toric varieties. Note that general theories of Milnor fibers over complete intersection varieties were developed by Looijenga \cite{Looijenga} and Oka \cite{Oka-2} etc. However, toric varieties are in general neither complete intersections nor varieties with only isolated singularities. Also, for Milnor fibers over varieties with determinantal singularities, see Esterov \cite{Esterov}.
In order to give the precise statement of our theorem, let $\SS$ be a finitely generated subsemigroup of the lattice $M \simeq {\mathbb Z}^n$ such that $0 \in \SS$. Denote by $K(\SS)$ the convex hull of $\SS$ in $M_{{\mathbb R}} ={\mathbb R} \otimes_{{\mathbb Z}}M$. For simplicity, assume that $K(\SS)$ is a strongly convex polyhedral cone in $M_{{\mathbb R}}$ (for the general case, see Remark \ref{rem:3-7}) such that $\d K(\SS) =n$, and let $M(\SS)$ be the ${\mathbb Z}$-sublattice of rank $n$ in $M$ generated by $\SS$. Then $X(\SS) ={\rm Spec}({\mathbb C}[\SS])$ is a (not necessarily normal) toric variety of dimension $n$ (see \cite{Fulton}, \cite{G-K-Z} and \cite{Oda} etc. for details) on which the algebraic torus $T ={\rm Spec}({\mathbb C}[M(\SS)]) \simeq ({\mathbb C}^*)^n$ acts. By our assumption, there exists a unique $T$-fixed point in $X(\SS)$, which we denote simply by $0$. Let $f \colon X(\SS) \longrightarrow {\mathbb C}$ be a non-zero polynomial function on $X(\SS)$ (i.e. $f=\sum_{v \in \SS} a_v \cdot v$, $a_v \in {\mathbb C}$) such that $f(0)=0$. Denote by $F_0$ the Milnor fiber of $f \colon X(\SS) \longrightarrow {\mathbb C}$ at $0 \in X(\SS)$ (see, for example, \cite{Takeuchi} for a review of this subject). We define the monodromy zeta function $\zeta_{f,0}(t) \in {\mathbb C} (t)^*$ of $f$ at $0 \in X(\SS)$ by
\begin{equation}
\zeta_{f, 0}(t)=\prod_{j=0}^{\infty} \det({\rm id} -t\Phi_{j,0})^{(-1)^j},
\end{equation}
where
\begin{equation}
\Phi_{j,0} \colon H^j(F_0;{\mathbb C}) \overset{\sim}{\longrightarrow} H^j(F_0;{\mathbb C}) \qquad \ (j=0,1,\ldots)
\end{equation}
are the isomorphisms induced by the geometric monodromy automorphism $F_0 \overset{\sim}{\longrightarrow} F_0$. Then we can give a formula for the zeta function $\zeta_{f,0}(t)$ as follows. First, we define the Newton polygon $\Gamma_+(f) \subset K(\SS)$ of $f$ just as in the classical case of polynomials on ${\mathbb C}^n$ (see Definition \ref{dfn:3-1}). For each face $\Delta \prec K(\SS)$ of the cone $K(\SS)$ such that $\Gamma_+(f) \cap \Delta \neq \emptyset$, let $\gamma_1^{\Delta}, \gamma_2^{\Delta},\ldots, \gamma_{n(\Delta)}^{\Delta}$ be the compact faces of $\Gamma_+(f) \cap \Delta$ such that $\d \gamma_i^{\Delta}=\d \Delta -1$. Let ${\mathbb L}(\Delta)$ be the linear subspace of $M_{{\mathbb R}}$ spanned by $\Delta$ and denote by $M(\SS \cap \Delta )$ the sublattice of $M(\SS)$ generated by $\SS \cap \Delta$. Then we can define the lattice distance $d_i^{\Delta}\in {\mathbb Z}_{>0}$ from $\gamma_i^{\Delta}$ to $0 \in {\mathbb L}(\Delta)$ with respect to the lattice $M(\SS \cap \Delta ) \subset {\mathbb L}(\Delta)$ (see Definition \ref{dfn:3-3}). Finally, let ${\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta}) \in {\mathbb Z}$ be the normalized ($\d \Delta -1$)-dimensional volume of $\gamma_i^{\Delta}$ with respect to the lattice $M(\SS \cap \Delta) \cap {\mathbb L}(\gamma_i^{\Delta})$.
\begin{theorem}\label{thm:1-1}
Assume that $f$ is non-degenerate (in the sense of Definition \ref{dfn:3-2} below). Then the monodromy zeta function $\zeta_{f,0}(t)$ of $f$ at $0 \in X(\SS)$ is given by
\begin{equation}
\zeta_{f,0} (t) =\prod_{\Gamma_+(f) \cap \Delta \neq \emptyset} \zeta_{\Delta}(t),
\end{equation}
where for each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f) \cap \Delta \neq \emptyset$ we set
\begin{equation}
\zeta_{\Delta}(t) = \prod_{i=1}^{n(\Delta)} \left(1-t^{d_i^{\Delta}}\right)^{(-1)^{\d \Delta -1}{\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta})}.
\end{equation}
\end{theorem}
We will prove this theorem by decomposing the problem into subproblems on the closures of $T$-orbits in $X(\SS)$, with the help of the nearby cycle functors introduced by Deligne \cite{Deligne} (see also \cite[Chapter VIII]{K-S} etc.). Recall the following basic correspondence ($0 \leq k \leq n$):
\begin{equation}
\{ \text{$k$-dimensional faces in $K(\SS)$} \}\overset{\text{1:1}}{\longleftrightarrow} \{ \text{$k$-dimensional $T$-orbits in $X(\SS)$}\}.
\end{equation}
For a face $\Delta$ of $K(\SS)$, denote by $T_{\Delta}$ the corresponding $T$-orbit in $X(\SS)$. Then we obtain a decomposition $X(\SS)=\bigsqcup_{\Delta \prec K(\SS)} T_{\Delta}$ of $X(\SS)$ into $T$-orbits. To prove Theorem \ref{thm:1-1}, we first interpret the classical notions of Milnor fibers in the language of nearby cycle sheaves and reduce the problem to the computation of the monodromy zeta functions of the nearby cycle sheaves $\psi_f({\mathbb C}_{T_{\Delta}})$ of the constructible sheaves ${\mathbb C}_{T_{\Delta}}$ on $X(\SS)$. Then by Proposition \ref{prp:2-9} we can study the monodromy zeta function of $\psi_f({\mathbb C}_{T_{\Delta}})$ on the closure $\overline{T_{\Delta}}$ of $T_{\Delta}$. This simple idea greatly simplifies the classical arguments and allows us to avoid the topological difficulties we usually encounter in treating Milnor fibers over singular varieties. Indeed, even the original proof of Varchenko's theorem in \cite{Varchenko} would also be simplified by our idea of decomposing ${\mathbb C}^n$ into smaller tori $({\mathbb C}^*)^d$. Moreover, by applying the same idea to complete intersection subvarieties $\{ f_1=f_2= \cdots =f_k=0\}$ in $X(\SS)$, in Theorem \ref{thm:3-12} we also obtain a generalization of the deeper results of Kirillov \cite{Kirillov} and Oka \cite{Oka-1}, \cite{Oka-2} to Milnor fibers over complete intersection subvarieties of singular toric varieties. In our Theorem \ref{thm:3-12}, even on the smooth toric variety ${\mathbb C}^n$ we could remove some technical assumptions (see \cite[Chapter IV, \S 4, page 205]{Oka-2}) imposed by \cite{Oka-1} and \cite{Oka-2}. For example, in our Theorem \ref{thm:3-12} we do not assume any condition on the Newton polygons of the polynomial functions $f_1, f_2, \ldots, f_k$ on $X(\SS)$. Note that Theorem \ref{thm:3-12} is also a natural generalization of the formula for the local multiplicities of toric varieties in \cite[Theorem 3.16]{G-K-Z}. The proof of Theorem \ref{thm:3-12} is very simple and also follows from the functorial property (Proposition \ref{prp:2-9}) of the nearby cycle functor. Note that on ${\mathbb C}^n$ Gusev \cite{Gusev} independently obtained a similar result in a special case as a corollary of his main result. In Section \ref{sec:5}, we extend our results to the monodromy zeta functions of $T$-invariant constructible sheaves.
Finally, let us mention that the methods we developed in this paper can also be applied to other related problems. For example, in \cite{M-T-new3} we used this idea to compute the monodromy zeta functions at infinity. In another paper \cite{M-T-new1}, some applications of our methods to $A$-discriminant varieties are also given.
\bigskip
\noindent{\bf Acknowledgements:} After submitting this paper to a preprint server, we were informed by Professor Gusev that he obtained a similar result on ${\mathbb C}^n$. We thank him cordially for showing us his very interesting paper \cite{Gusev}.
\section{Preliminary notions and results}\label{sec:2}
In this section, we introduce basic notions and results which will be used in this paper. In this paper, we essentially follow the terminology of \cite{Dimca}, \cite{H-T-T} and \cite{K-S}. For example, for a topological space $X$ we denote by ${\bf D}^{b}(X)$ the derived category whose objects are bounded complexes of sheaves of ${\mathbb C}_X$-modules on $X$.
\begin{definition}\label{dfn:2-1}
Let $X$ be an algebraic variety over ${\mathbb C}$. Then
\begin{enumerate}
\item We say that a sheaf ${\cal F}$ on $X$ is constructible if there exists a stratification $X=\bigsqcup_{\alpha} X_{\alpha}$ of $X$ such that ${\cal F}|_{X_{\alpha}}$ is a locally constant sheaf of finite rank for any $\alpha$.
\item We say that an object ${\cal F}$ of ${\bf D}^{b}(X)$ is constructible if the cohomology sheaf $H^j({\cal F})$ of ${\cal F}$ is constructible for any $j \in {\mathbb Z}$. We denote by ${\bf D}_{c}^{b}(X)$ the full subcategory of ${\bf D}^{b}(X)$ consisting of constructible objects ${\cal F}$.
\end{enumerate}
\end{definition}
Recall that for any morphism $f \colon X \longrightarrow Y$ of algebraic varieties over ${\mathbb C}$ there exists a functor
\begin{equation}
Rf_* \colon {\bf D}^{b}(X) \longrightarrow {\bf D}^{b}(Y)
\end{equation}
of direct images. This functor preserves the constructibility and we also obtain a functor
\begin{equation}
Rf_* \colon {\bf D}_{c}^{b}(X) \longrightarrow {\bf D}_{c}^{b}(Y).
\end{equation}
For other basic operations $Rf_!$, $f^{-1}$, $f^!$ etc. in derived categories, see \cite{K-S} for details.
Next we introduce the notion of constructible functions and explain its relation with that of constructible sheaves.
\begin{definition}\label{dfn:2-2}
Let $X$ be an algebraic variety over ${\mathbb C}$ and $G$ an abelian group. Then we say that a $G$-valued function $\rho \colon X \longrightarrow G$ on $X$ is constructible if there exists a stratification $X=\bigsqcup_{\alpha} X_{\alpha}$ of $X$ such that $\rho|_{X_{\alpha}}$ is constant for any $\alpha$. We denote by ${\rm CF}_G(X)$ the abelian group of $G$-valued constructible functions on $X$.
\end{definition}
Let ${\mathbb C}(t)^*={\mathbb C}(t) \setminus \{0\}$ be the multiplicative group of the function field ${\mathbb C}(t)$ of the scheme ${\mathbb C}$. In this paper, we consider ${\rm CF}_G(X)$ only for $G={\mathbb Z}$ or ${\mathbb C}(t)^*$. For a $G$-valued constructible function $\rho \colon X \longrightarrow G$, by taking a stratification $X=\bigsqcup_{\alpha}X_{\alpha}$ of $X$ such that $\rho|_{X_{\alpha}}$ is constant for any $\alpha$ as above, we set
\begin{equation}
\int_X \rho :=\displaystyle \sum_{\alpha}\chi(X_{\alpha}) \cdot \rho(x_{\alpha}) \in G,
\end{equation}
where $x_{\alpha}$ is a reference point in $X_{\alpha}$. Then we can easily show that $\int_X\rho \in G$ does not depend on the choice of the stratification $X=\bigsqcup_{\alpha} X_{\alpha}$ of $X$. Hence we obtain a homomorphism
\begin{equation}
\int_X \colon {\rm CF}_G(X) \longrightarrow G
\end{equation}
of abelian groups. For $\rho \in {\rm CF}_G(X)$, we call $\int_X \rho \in G$ the topological (Euler) integral of $\rho$ over $X$. More generally, for any morphism $f \colon X \longrightarrow Y$ of algebraic varieties over ${\mathbb C}$ and $\rho \in {\rm CF}_G(X)$, we define the push-forward $\int_f \rho \in {\rm CF}_G(Y)$ of $\rho$ by
\begin{equation}
\( \int_f \rho \) (y):=\int_{f^{-1}(y)} \rho
\end{equation}
for $y \in Y$. This defines a homomorphism
\begin{equation}
\int_f \colon {\rm CF}_G(X) \longrightarrow {\rm CF}_G(Y)
\end{equation}
of abelian groups. If $G={\mathbb Z}$, these operations $\int_X$ and $\int_f$ correspond respectively to the operations $R\varGamma(X;\ \cdot \ )$ and $Rf_*$ in the derived categories, as follows. For an algebraic variety $X$ over ${\mathbb C}$, consider a free abelian group
\begin{equation}
{\mathbb Z}({\bf D}_{c}^{b}(X)):=\left\{ \left. \displaystyle \sum_{j \colon \text{finite}}a_j [{\cal F}_j] \ \right| \ a_j \in {\mathbb Z}, \ {\cal F}_j \in {\bf D}_{c}^{b}(X)\right\}
\end{equation}
generated by the objects ${\cal F}_j \in {\bf D}_{c}^{b}(X)$ in ${\bf D}_{c}^{b}(X)$ and take its subgroup
\begin{eqnarray}
R&:=&\langle [{\cal F}_2]-[{\cal F}_1]-[{\cal F}_3] \ | {\cal F}_1 \longrightarrow {\cal F}_2 \longrightarrow {\cal F}_3 \overset{+1}{\longrightarrow} \ \text{ is a distinguished triangle} \rangle\nonumber\\
&\subset &{\mathbb Z}({\bf D}_{c}^{b}(X)).
\end{eqnarray}
We set ${\bf K}_{c}^{b}(X):={\mathbb Z}({\bf D}_{c}^{b}(X))/R$ and call it the Grothendieck group of ${\bf D}_{c}^{b}(X)$. Then the following result is well-known (see for example \cite[Theorem 9.7.1]{K-S}).
\begin{theorem}\label{thm:2-3}
The homomorphism
\begin{equation}
\chi_X \colon {\bf K}_{c}^{b}(X) \longrightarrow {\rm CF}_{{\mathbb Z}}(X)
\end{equation}
defined by taking the local Euler-Poincar{\'e} indices:
\begin{equation}
\chi_X([{\cal F}])(x):=\displaystyle \sum_{j \in {\mathbb Z}} (-1)^j \d_{{\mathbb C}}H^j({\cal F})_x \hspace{5mm}(x
\in X)
\end{equation}
is an isomorphism.
\end{theorem}
For any morphism $f \colon X \longrightarrow Y$ of algebraic varieties over ${\mathbb C}$, there exists also a commutative diagram
\begin{equation}
\xymatrix{
{\bf K}_{c}^{b}(X) \ar[r]^{Rf_*} \ar[d]^{\wr}_{\chi_X}& {\bf K}_{c}^{b}(Y) \ar[d]^{\wr}_{\chi_Y} \\
{\rm CF}_{{\mathbb Z}}(X) \ar[r]^{\int_f} & {\rm CF}_{{\mathbb Z}}(Y).}
\end{equation}
In particular, if $Y$ is the one-point variety $\{{\rm pt}\}$ (${\bf K}_{c}^{b}(Y) \simeq {\rm CF}_{{\mathbb Z}}(Y) \simeq {\mathbb Z}$), we obtain a commutative diagram
\begin{equation}
\xymatrix@R=2.5mm@C=30mm{
{\bf K}_{c}^{b}(X) \ar[rd]^{\chi(R\varGamma(X; \ \cdot \ ))} \ar[dd]_{\wr}^{\chi_X}& \\
& {\mathbb Z}. \\
{\rm CF}_{{\mathbb Z}}(X) \ar[ru]^{\int_X}}
\end{equation}
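Both operations $\int_X$ and $\int_f$ are straightforward to carry out once a constructible function is presented by a finite stratification. The following minimal sketch (ours) uses a hypothetical encoding in which each stratum is recorded by its Euler characteristic and by the constant value of $\rho$ on it; for the push-forward we further assume that the strata have been refined so that the Euler characteristic of $X_{\alpha} \cap f^{-1}(y)$ is constant for $y$ in a given stratum of $Y$.
\begin{verbatim}
from collections import defaultdict

def euler_integral(strata):
    # strata: list of pairs (chi(X_alpha), rho(x_alpha)) for G = Z.
    return sum(chi * val for chi, val in strata)

def push_forward(strata):
    # strata: list of triples (beta, chi, val), where beta labels the
    # stratum of Y over which X_alpha lies and chi is the (constant)
    # Euler characteristic of X_alpha intersected with a fiber of f.
    out = defaultdict(int)
    for beta, chi, val in strata:
        out[beta] += chi * val
    return dict(out)

# Toy check: C = C^* U {0} with chi(C^*) = 0 and chi({0}) = 1, so the
# constant function 1 on C integrates to chi(C) = 1.
print(euler_integral([(0, 1), (1, 1)]))  # 1
\end{verbatim}
For $G={\mathbb C} (t)^*$ the same bookkeeping applies multiplicatively, with the products $\chi \cdot \rho$ replaced by the powers $\rho(x_{\alpha})^{\chi(X_{\alpha})}$.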
Among various operations in derived categories, the following nearby cycle functors introduced by Deligne will be frequently used in this paper (see \cite[Section 4.2]{Dimca} for an excellent survey of this subject).
\begin{definition}\label{dfn:2-4}
Let $f \colon X \longrightarrow {\mathbb C}$ be a non-constant regular function on an algebraic variety $X$ over ${\mathbb C}$. Set $X_0:= \{x\in X\ |\ f(x)=0\} \subset X$ and let $i_X \colon X_0 \DOTSB\lhook\joinrel\longrightarrow X$, $j_X \colon X \setminus X_0 \DOTSB\lhook\joinrel\longrightarrow X$ be inclusions. Let $p \colon \tl{{\mathbb C}^*} \longrightarrow {\mathbb C}^*$ be the universal covering of ${\mathbb C}^* ={\mathbb C} \setminus \{0\}$ ($\tl{{\mathbb C}^*} \simeq {\mathbb C}$) and consider the Cartesian square
\begin{equation}\label{eq:2-13}
\xymatrix@R=2.5mm@C=2.5mm{
\tl{X \setminus X_0} \ar[rr] \ar[dd]^{p_X} & &\tl{{\mathbb C}^*} \ar[dd]^p \\
& \Box & \\
X \setminus X_0 \ar[rr]^f & & {\mathbb C}^*.}
\end{equation}
Then for ${\cal F} \in {\bf D}^{b}(X)$ we set
\begin{equation}
\psi_f({\cal F}) := i_X^{-1}R(j_X \circ p_X)_*(j_X \circ p_X)^{-1}{\cal F} \in {\bf D}^{b}(X_0)
\end{equation}
and call it the nearby cycle of ${\cal F}$.
\end{definition}
Since the nearby cycle functor preserves the constructibility, in the above situation we obtain a functor
\begin{equation}
\psi_f \colon {\bf D}_{c}^{b}(X) \longrightarrow {\bf D}_{c}^{b}(X_0).
\end{equation}
As we see in the next proposition, the nearby cycle functor $\psi_f$ generalizes the classical notion of Milnor fibers. First, let us recall the definition of Milnor fibers and Milnor monodromies over singular varieties (see for example \cite{Takeuchi} for a review on this subject). Let $X$ be a subvariety of ${\mathbb C}^m$ and $f \colon X \longrightarrow {\mathbb C}$ a non-constant regular function on $X$. Namely we assume that there exists a polynomial function $\tl{f} \colon {\mathbb C}^m \longrightarrow {\mathbb C}$ on ${\mathbb C}^m$ such that $\tl{f}|_X=f$. For simplicity, assume also that the origin $0 \in {\mathbb C}^m$ is contained in $X_0=\{x \in X \ |\ f(x)=0\}$. Then the following lemma is well-known (see for example \cite[Definition 1.4]{Massey}).
\begin{lemma}\label{lem:2-5}
For sufficiently small $\varepsilon >0$, there exists $\eta_0 >0$ with $0<\eta_0 \ll \varepsilon$ such that for any $\eta$ with $0 < \eta <\eta_0$ the restriction of $f$:
\begin{equation}
X \cap B(0;\varepsilon) \cap \tl{f}^{-1}(D_{\eta}^*) \longrightarrow D_{\eta}^*
\end{equation}
is a topological fiber bundle over the punctured disk $D_{\eta}^*:=\{ z \in {\mathbb C} \ |\ 0<|z|<\eta\}$, where $B(0;\varepsilon)$ is the open ball in ${\mathbb C}^m$ with radius $\varepsilon$ centered at the origin.
\end{lemma}
\begin{definition}\label{dfn:2-6}
A fiber of the above fibration is called the Milnor fiber of the function $f \colon X\longrightarrow {\mathbb C}$ at $0 \in X$ and we denote it by $F_0$.
\end{definition}
For $x \in X_0$, denote by $F_x$ the Milnor fiber of $f\colon X \longrightarrow {\mathbb C}$ at $x$.
\begin{proposition}{\rm \bf(\cite[Proposition 4.2.2]{Dimca})}\label{prp:2-7-2} For any ${\cal F}\in {\bf D}_{c}^{b}(X)$, $x \in X_0$ and $j \in {\mathbb Z}$, there exists a natural isomorphism
\begin{equation}\label{eq:2-24}
H^j(F_x ;{\cal F}) \simeq H^j(\psi_f({\cal F}))_x.
\end{equation}
\end{proposition}
By this proposition, we can study the cohomology groups $H^j(F_x;{\mathbb C})$ of the Milnor fiber $F_x$ by using sheaf theory. Recall also that in the above situation, in the same way as in the case of polynomial functions on ${\mathbb C}^n$ (see \cite{Milnor}), we can define the Milnor monodromy operators
\begin{equation}
\Phi_{j,x} \colon H^j(F_x;{\mathbb C}) \overset{\sim}{\longrightarrow} H^j(F_x;{\mathbb C}) \ (j=0,1,\ldots)
\end{equation}
and the zeta-function
\begin{equation}
\zeta_{f,x}(t):=\prod_{j=0}^{\infty} \det({\rm id} -t\Phi_{j,x})^{(-1)^j}
\end{equation}
associated with it. Since the above product is in fact finite, $\zeta_{f,x}(t)$ is a rational function of $t$ and its degree in $t$ is the topological Euler characteristic $\chi(F_x)$ of the Milnor fiber $F_x$. For instance, if $X$ is smooth at $x$ and $df(x) \neq 0$, then $F_x$ is contractible and hence $\zeta_{f,x}(t)=1-t$. This classical notion of Milnor monodromy zeta functions can also be generalized as follows.
\begin{definition}\label{dfn:2-8}
Let $f \colon X \longrightarrow {\mathbb C}$ be a non-constant regular function on $X$ and ${\cal F} \in {\bf D}_{c}^{b}(X)$. Set $X_0 :=\{x\in X\ |\ f(x)=0\}$. Then there exists a monodromy automorphism
\begin{equation}
\Phi({\cal F}) \colon \psi_f({\cal F}) \overset{\sim}{\longrightarrow} \psi_f({\cal F})
\end{equation}
of $\psi_f({\cal F})$ in ${\bf D}_{c}^{b}(X_0)$ associated with a generator of the group ${\rm Deck}(\tl{{\mathbb C}^*}, {\mathbb C}^*)\simeq {\mathbb Z}$ of the deck transformations of $p \colon \tl{{\mathbb C}^*} \longrightarrow {\mathbb C}^*$ in the diagram \eqref{eq:2-13}. We define a ${\mathbb C}(t)^*$-valued constructible function $\zeta_f({\cal F}) \in {\rm CF}_{{\mathbb C}(t)^*}(X_0)$ on $X_0$ by
\begin{equation}
\zeta_{f,x}({\cal F})(t):=\prod_{j \in {\mathbb Z}} \det\({\rm id} -t\Phi({\cal F})_{j,x}\)^{(-1)^j}
\end{equation}
for $x \in X_0$, where $\Phi({\cal F})_{j,x} \colon (H^j(\psi_f({\cal F})))_x \overset{\sim}{\longrightarrow} (H^j(\psi_f({\cal F})))_x$ is the stalk at $x \in X_0$ of the sheaf homomorphism
\begin{equation}
\Phi({\cal F})_j \colon H^j(\psi_f({\cal F})) \overset{\sim}{\longrightarrow} H^j(\psi_f({\cal F}))
\end{equation}
associated with $\Phi({\cal F})$.
\end{definition}
The following proposition will play a crucial role in the proofs of Theorems \ref{thm:3-4} and \ref{thm:3-12}. For the proof, see, for example, \cite[p.170-173]{Dimca} and \cite{Schurmann}.
\begin{proposition}\label{prp:2-9}
Let $\pi \colon Y \longrightarrow X$ be a proper morphism of algebraic varieties over ${\mathbb C}$ and $f \colon X \longrightarrow {\mathbb C}$ a non-constant regular function on $X$. Set $g:=f \circ \pi \colon Y \longrightarrow {\mathbb C}$, $X_0:=\{x\in X\ |\ f(x)=0\}$ and $Y_0:=\{y\in Y\ |\ g(y)=0\}=\pi^{-1}(X_0)$. Then for any ${\cal G}\in {\bf D}_{c}^{b}(Y)$ we have
\begin{equation}
\int_{\pi|_{Y_0}} \zeta_g({\cal G}) =\zeta_f(R\pi_*{\cal G})
\end{equation}
in ${\rm CF}_{{\mathbb C}(t)^*}(X_0)$, where
\begin{equation}
\int_{\pi|_{Y_0}}\colon {\rm CF}_{{\mathbb C}(t)^*}(Y_0) \longrightarrow {\rm CF}_{{\mathbb C}(t)^*}(X_0)
\end{equation}
is the push-forward of ${\mathbb C}(t)^*$-valued constructible functions by $\pi|_{Y_0} \colon Y_0 \longrightarrow X_0$.
\end{proposition}
Finally, we recall Bernstein-Khovanskii-Kushnirenko's theorem \cite{Khovanskii}.
\begin{definition}
Let $g_1, g_2, \ldots , g_p$ be Laurent polynomials on $({\mathbb C}^*)^n$. Then we say that the subvariety $Z^*=\{ x\in ({\mathbb C}^*)^n \ |\ g_1(x)=g_2(x)= \cdots =g_p(x)=0 \}$ of $({\mathbb C}^*)^n$ is a non-degenerate complete intersection if the $p$-form $dg_1 \wedge dg_2 \wedge \cdots \wedge dg_p$ does not vanish on it.
\end{definition}
\begin{definition}
Let $g(x)=\sum_{v \in {\mathbb Z}^n} a_vx^v$ be a Laurent polynomial on $({\mathbb C}^*)^n$ ($a_v\in {\mathbb C}$). We call the convex hull of ${\rm supp}(g):=\{v\in {\mathbb Z}^n \ |\ a_v\neq 0\} \subset {\mathbb Z}^n \subset {\mathbb R}^n$ in ${\mathbb R}^n$ the Newton polygon of $g$ and denote it by $NP(g)$.
\end{definition}
\begin{theorem}[\cite{Khovanskii}]\label{thm:2-14}
Let $g_1, g_2, \ldots , g_p$ be Laurent polynomials on $({\mathbb C}^*)^n$. Assume that the subvariety $Z^*=\{ x\in ({\mathbb C}^*)^n \ |\ g_1(x)=g_2(x)= \cdots =g_p(x)=0 \}$ of $({\mathbb C}^*)^n$ is a non-degenerate complete intersection. Set $\Delta_i:=NP(g_i)$ for $i=1,\ldots, p$. Then we have
\begin{equation}
\chi(Z^*)=(-1)^{n-p}\displaystyle \sum_{\begin{subarray}{c} a_1,\ldots,a_p \geq 1\\ a_1+\cdots +a_p=n\end{subarray}}{\rm Vol}_{{\mathbb Z}}(\underbrace{\Delta_1,\ldots,\Delta_1}_{\text{$a_1$-times}},\ldots,\underbrace{\Delta_p,\ldots,\Delta_p}_{\text{$a_p$-times}}),
\end{equation}
where ${\rm Vol}_{{\mathbb Z}}(\underbrace{\Delta_1,\ldots,\Delta_1}_{\text{$a_1$-times}},\ldots,\underbrace{\Delta_p,\ldots,\Delta_p}_{\text{$a_p$-times}})\in {\mathbb Z}$ is the normalized $n$-dimensional mixed volume of $\underbrace{\Delta_1,\ldots,\Delta_1}_{\text{$a_1$-times}},\ldots,\underbrace{\Delta_p,\ldots,\Delta_p}_{\text{$a_p$-times}}$ with respect to the lattice ${\mathbb Z}^n \subset {\mathbb R}^n$.
\end{theorem}
\begin{remark}\label{rem:2-13}
Let $Q_1,Q_2,\ldots,Q_n$ be integral polytopes in $({\mathbb R}^n, {\mathbb Z}^n)$. Then their normalized $n$-dimensional mixed volume ${\rm Vol}_{{\mathbb Z}}(Q_1,Q_2,\ldots,Q_n) \in {\mathbb Z}$ is given by the formula
\begin{eqnarray}
\lefteqn{n! {\rm Vol}_{{\mathbb Z}}(Q_1, Q_2, \ldots , Q_n)}\nonumber\\
&=&{\rm Vol}_{{\mathbb Z}}(Q_1+ Q_2+ \cdots + Q_n)-\sum_{i=1}^n {\rm Vol}_{{\mathbb Z}}(Q_1+ \cdots +Q_{i-1}+Q_{i+1} + \cdots + Q_n)\nonumber\\
& &+\sum_{1 \leq i<j \leq n} {\rm Vol}_{{\mathbb Z}}(Q_1+ \cdots +Q_{i-1}+Q_{i+1} + \cdots +Q_{j-1}+Q_{j+1} + \cdots + Q_n)\nonumber\\
& &+\cdots + (-1)^{n-1} \sum_{i=1}^n {\rm Vol}_{{\mathbb Z}}(Q_i),
\end{eqnarray}
where ${\rm Vol}_{{\mathbb Z}}(\ \cdot\ )\in {\mathbb Z}$ is the normalized $n$-dimensional volume.
\end{remark}
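For example, in the case $n=2$ the above formula reduces to $2 \, {\rm Vol}_{{\mathbb Z}}(Q_1,Q_2)={\rm Vol}_{{\mathbb Z}}(Q_1+Q_2)-{\rm Vol}_{{\mathbb Z}}(Q_1)-{\rm Vol}_{{\mathbb Z}}(Q_2)$ and can be evaluated directly from the vertices of the polytopes. The following minimal sketch (ours, not part of the mathematical content) computes ${\rm Vol}_{{\mathbb Z}}$, i.e. twice the Euclidean area, by the shoelace formula, and Minkowski sums by summing all pairs of vertices.
\begin{verbatim}
from itertools import product

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    # Andrew's monotone chain convex hull, counter-clockwise.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def vol_Z(points):
    # Normalized 2-dimensional volume = 2 * Euclidean area (shoelace).
    h = hull(points)
    return abs(sum(h[i][0]*h[(i+1) % len(h)][1]
                   - h[(i+1) % len(h)][0]*h[i][1] for i in range(len(h))))

def mixed_vol_Z(P, Q):
    # The inclusion-exclusion formula of the remark above, for n = 2.
    PQ = [(p[0]+q[0], p[1]+q[1]) for p, q in product(P, Q)]
    return (vol_Z(PQ) - vol_Z(P) - vol_Z(Q)) // 2

# Toy check: the two unit segments have mixed volume 1, matching the single
# solution of x - 1 = y - 1 = 0 in the torus.
print(mixed_vol_Z([(0, 0), (1, 0)], [(0, 0), (0, 1)]))  # 1
\end{verbatim}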
\section{Milnor fibers over singular toric varieties}\label{sec:3}
In this section, we give explicit formulas for the monodromy zeta functions of non-degenerate polynomials over possibly singular toric varieties. These formulas can be regarded as natural generalizations of the fundamental results obtained by Kushnirenko \cite{Kushnirenko}, Varchenko \cite{Varchenko}, Kirillov \cite{Kirillov} and Oka \cite{Oka-1}, \cite{Oka-2} etc.
Let $M \simeq {\mathbb Z}^n$ be a ${\mathbb Z}$-lattice of rank $n$ and set $M_{{\mathbb R}}:={\mathbb R} \otimes_{{\mathbb Z}}M$. We take a finitely generated subsemigroup $\SS$ of $M$ such that $0 \in \SS$ and denote by $K(\SS)$ the convex hull of $\SS$ in $M_{{\mathbb R}}$. For simplicity, assume that $K(\SS)$ is a strongly convex polyhedral cone in $M_{{\mathbb R}}$ (for the general case, see Remark \ref{rem:3-7}) such that $\d K(\SS) =n$. Then the semigroup algebra ${\mathbb C}[\SS]$ is finitely generated over ${\mathbb C}$ and $X(\SS):={\rm Spec}({\mathbb C}[\SS])$ is a (not necessarily normal) toric variety of dimension $n$ (see \cite{Fulton}, \cite{G-K-Z} and \cite{Oda} etc. for details). Indeed, let $M(\SS)$ be the ${\mathbb Z}$-sublattice of rank $n$ in $M$ generated by $\SS$ and consider the algebraic torus $T:={\rm Spec}({\mathbb C}[M(\SS)]) \simeq ({\mathbb C}^*)^n$. Then the affine toric variety $X(\SS)$ admits a natural action of $T={\rm Spec}({\mathbb C}[M(\SS)])$ and has a unique $0$-dimensional orbit. We denote this orbit point by $0$ and call it the $T$-fixed point of $X(\SS)$. Recall that a polynomial function $f \colon X(\SS) \longrightarrow {\mathbb C}$ on $X(\SS)$ corresponds to an element $f=\sum_{v \in \SS} a_v \cdot v$ ($a_v \in {\mathbb C}$) of ${\mathbb C}[\SS]$.
\begin{definition}\label{dfn:3-1}
Let $f =\sum_{v \in \SS} a_v \cdot v$ ($a_v \in {\mathbb C}$) be a polynomial function on $X(\SS)$.
\begin{enumerate}\renewcommand{\labelenumi}{{\rm (\roman{enumi})}}
\item We define the support ${\rm supp}(f)$ of $f$ by
\begin{equation}
{\rm supp}(f) :=\{ v \in \SS \ |\ a_v \neq 0\} \subset \SS .
\end{equation}
\item We define the Newton polygon $\Gamma_+(f)$ of $f$ to be the convex hull of $\bigcup_{v \in {\rm supp}(f)}(v+ K(\SS))$ in $K(\SS)$.
\end{enumerate}
\end{definition}
Now let us fix a function $f\in {\mathbb C}[\SS]$ such that $0 \notin {\rm supp}(f)$ (i.e. $f \colon X(\SS) \longrightarrow {\mathbb C}$ vanishes at the $T$-fixed point $0$) and consider its Milnor fiber $F_0$ at $0 \in X(\SS)$. Choose a ${\mathbb Z}$-basis of $M(\SS)$ and identify $M(\SS)$ with ${\mathbb Z}^n$. Then each element $v$ of $\SS \subset M(\SS)$ is identified with a ${\mathbb Z}$-vector $v=(v_1,\ldots,v_n)$ and to any $g=\sum_{v \in \SS}b_v \cdot v \in {\mathbb C}[\SS]$ we can associate a Laurent polynomial $L(g)=\sum_{v \in \SS}b_v \cdot x^v$ on $T=({\mathbb C}^*)^n$. One can easily prove that the following definition does not depend on the choice of the ${\mathbb Z}$-basis of $M(\SS)$.
\begin{definition}\label{dfn:3-2}
We say that $f=\sum_{v \in \SS}a_v \cdot v \in {\mathbb C}[\SS]$ is non-degenerate if for any compact face $\gamma$ of $\Gamma_+(f)$ the complex hypersurface
\begin{equation}
\{ x=(x_1,\ldots,x_n) \in ({\mathbb C}^*)^n \ |\ L(f_{\gamma}) (x)=0\}
\end{equation}
in $({\mathbb C}^*)^n$ is smooth and reduced, where we set $f_{\gamma}:=\sum_{v \in \gamma\cap \SS}a_v \cdot v$.
\end{definition}
For each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f) \cap \Delta \neq \emptyset$, let $\gamma_1^{\Delta}, \gamma_2^{\Delta},\ldots, \gamma_{n(\Delta)}^{\Delta}$ be the compact faces of $\Gamma_+(f) \cap \Delta$ such that $\d \gamma_i^{\Delta}=\d \Delta -1$. Let ${\mathbb L}(\Delta)$ be the linear subspace of $M_{{\mathbb R}}$ spanned by $\Delta$ and denote by $M(\SS \cap \Delta )$ the sublattice of $M(\SS)$ generated by $\SS \cap \Delta$. Note that the rank of $M(\SS \cap \Delta)$ is $\d \Delta$ and we have $M(\SS \cap \Delta)_{{\mathbb R}}={\mathbb R} \otimes_{{\mathbb Z}} M(\SS \cap \Delta) \simeq {\mathbb L}(\Delta)$. Then there exists a unique primitive vector $u_i^{\Delta}$ in the dual lattice $M(\SS \cap \Delta)^*$ of $M(\SS \cap \Delta)$ which attains its minimum on $\Gamma_+(f) \cap \Delta$ exactly on $\gamma_i^{\Delta} \subset \Gamma_+(f) \cap \Delta$.
\begin{definition}\label{dfn:3-3}
We define the lattice distance $d_i^{\Delta}\in {\mathbb Z}_{>0}$ from $\gamma_i^{\Delta}$ to the origin $0 \in {\mathbb L}(\Delta)$ to be the value of $u_i^{\Delta} $ on $\gamma_i^{\Delta}$.
\end{definition}
Then by using the normalized $(\d \Delta -1)$-dimensional volume ${\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta}) \in {\mathbb Z}$ of $\gamma_i^{\Delta}$ with respect to the lattice $M(\SS \cap \Delta) \cap {\mathbb L}(\gamma_i^{\Delta})$ we have the following result.
\begin{theorem}\label{thm:3-4} Assume that $f=\sum_{v \in \SS}a_v \cdot v \in {\mathbb C}[\SS]$ is non-degenerate. Then the monodromy zeta function $\zeta_{f,0}(t)$ of $f \colon X(\SS) \longrightarrow {\mathbb C}$ at $0 \in X(\SS)$ is given by
\begin{equation}
\zeta_{f,0}(t) =\prod_{\Gamma_+(f) \cap \Delta \neq \emptyset} \zeta_{\Delta}(t),
\end{equation}
where for each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f) \cap \Delta \neq \emptyset$ we set
\begin{equation}
\zeta_{\Delta}(t) = \prod_{i=1}^{n(\Delta)} \left(1-t^{d_i^{\Delta}}\right)^{(-1)^{\d \Delta -1}{\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta})}.
\end{equation}
\end{theorem}
We will prove this theorem as the special case of Theorem \ref{thm:3-12} (see Section \ref{sec:4}).
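To illustrate the statement, consider the simplest classical case (the following computation is included only as a sanity check and reproduces a well-known example). Let $\SS ={\mathbb Z}_{\geq 0}^2 \subset M={\mathbb Z}^2$, so that $X(\SS)={\mathbb C}^2$, and let $f=x^2+y^3$. The faces $\Delta \prec K(\SS)={\mathbb R}^2_{\geq 0}$ with $\Gamma_+(f) \cap \Delta \neq \emptyset$ are the two $1$-dimensional faces and $K(\SS)$ itself. On the first coordinate axis the unique compact face $\gamma_1^{\Delta}$ of dimension $\d \Delta -1=0$ is the point $(2,0)$, so that $d_1^{\Delta}=2$ and ${\rm Vol}_{{\mathbb Z}}(\gamma_1^{\Delta})=1$ (the normalized $0$-dimensional volume of a point being $1$); on the second axis we get $d_1^{\Delta}=3$ in the same way. For $\Delta =K(\SS)$ the unique compact $1$-dimensional face of $\Gamma_+(f)$ is the segment $\gamma$ joining $(2,0)$ and $(0,3)$, on which the primitive vector $u=(3,2)$ takes the value $d=6$, and ${\rm Vol}_{{\mathbb Z}}(\gamma)=1$. Hence we obtain
\begin{equation}
\zeta_{f,0}(t)=\frac{(1-t^2)(1-t^3)}{1-t^6},
\end{equation}
which is the classical monodromy zeta function of the cusp singularity; in particular, $\chi (F_0)=2+3-6=-1$, in accordance with Corollary \ref{cor:3-5} below.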
Let $\Gamma_i^{\Delta}$ be the convex hull of $\gamma_i^{\Delta} \sqcup \{0\}$ in ${\mathbb L}(\Delta)$. Then the normalized $(\d \Delta)$-dimensional volume ${\rm Vol}_{{\mathbb Z}}(\Gamma_i^{\Delta})\in {\mathbb Z}$ of $\Gamma_i^{\Delta}$ with respect to the lattice $M(\SS \cap \Delta)$ is equal to $d_i^{\Delta} \cdot {\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta})$ and we obtain the following result.
\begin{corollary}\label{cor:3-5} Assume that $f=\sum_{v \in \SS}a_v \cdot v \in {\mathbb C}[\SS]$ is non-degenerate. Then the Euler characteristic of the Milnor fiber $F_0$ of $f \colon X(\SS) \longrightarrow {\mathbb C}$ at $0 \in X(\SS)$ is given by
\begin{equation}
\chi (F_0)=\sum_{\Gamma_+(f) \cap \Delta \neq \emptyset} (-1)^{\d\Delta -1} \sum_{i=1}^{n(\Delta)} {\rm Vol}_{{\mathbb Z}}(\Gamma_i^{\Delta}).
\end{equation}
\end{corollary}
Now recall the following correspondence ($0 \leq k \leq n$):
\begin{equation}
\{ \text{$k$-dimensional faces in $K(\SS)$} \}\overset{\text{1:1}}{\longleftrightarrow} \{ \text{$k$-dimensional $T$-orbits in $X(\SS)$}\}.
\end{equation}
For a face $\Delta$ of $K(\SS)$, we denote by $T_{\Delta}$ the corresponding $T$-orbit in $X(\SS)$. Namely, we set $T_{\Delta}:={\rm Spec}({\mathbb C}[M(\SS\cap\Delta)])$. Then we obtain a decomposition $X(\SS)=\bigsqcup_{\Delta \prec K(\SS)} T_{\Delta}$ of $X(\SS)$. By the proof of Theorem \ref{thm:3-12} below, we obtain the following local version of Bernstein-Khovanskii-Kushnirenko's theorem, which expresses the Euler characteristic $\chi (T_{\Delta} \cap F_0)$ of $T_{\Delta} \cap F_0$ in terms of the Newton polygon of $f$.
\begin{corollary}\label{cor:3-6} Assume that $f=\sum_{v \in \SS}a_v \cdot v \in {\mathbb C}[\SS]$ is non-degenerate. Then we have
\begin{eqnarray}
\chi (T_{\Delta} \cap F_0)
&=& \chi(R\varGamma(F_0;{\mathbb C}_{T_{\Delta} \cap F_0}))\\
&=& \chi(\psi_f({\mathbb C}_{T_{\Delta}})_0)\\
&=& (-1)^{\d \Delta -1} \sum_{i=1}^{n(\Delta)}{\rm Vol}_{{\mathbb Z}}(\Gamma_i^{\Delta}).
\end{eqnarray}
\end{corollary}
\begin{proof}
In the proof of Theorem \ref{thm:3-12} below, we will prove that
\begin{equation}
\chi(\psi_f({\mathbb C}_{T_{\Delta}})_0)= (-1)^{\d \Delta -1} \sum_{i=1}^{n(\Delta)}{\rm Vol}_{{\mathbb Z}}(\Gamma_i^{\Delta}).
\end{equation}
Moreover by using Proposition \ref{prp:2-7-2} we see easily that $\chi(R\varGamma(F_0;{\mathbb C}_{T_{\Delta} \cap F_0}))$ is equal to $\chi(\psi_f({\mathbb C}_{T_{\Delta}})_0)$. Since $T_{\Delta}$ is a $T$-orbit, the decomposition $X(\SS)=\bigsqcup_{\Delta \prec K(\SS)} T_{\Delta}$ of $X(\SS)$ satisfies the Whitney regularity condition along $T_{\Delta}$. Then the decomposition $F_0 =\bigsqcup_{\Delta \prec K(\SS)} (T_{\Delta} \cap F_0) $ of $F_0 \subset X(\SS)$ is also a Whitney stratification ($f \colon X(\SS) \longrightarrow {\mathbb C}$ has the isolated stratified critical value $0 \in {\mathbb C}$ by \cite[Proposition 1.3]{Massey}). Finally, by applying \cite[Theorem 4.1.22]{Dimca} to the constructible sheaf ${\mathbb C}_{T_{\Delta} \cap F_0}$ on the Whitney stratified analytic space $F_0 =\bigsqcup_{\Delta \prec K(\SS)} (T_{\Delta} \cap F_0)$ we obtain
\begin{equation}
\chi(R\varGamma(F_0;{\mathbb C}_{T_{\Delta} \cap F_0}))=\chi (T_{\Delta} \cap F_0).
\end{equation}
This completes the proof.
\hfill $\Box$
\end{proof}
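In the example of the cusp discussed after Theorem \ref{thm:3-4} (again only an illustration of ours), this gives $\chi (T_{\Delta} \cap F_0)=2$ when $\Delta$ is the first coordinate axis (the two points of $\{ x^2=\delta \}$ for small $\delta \neq 0$), $\chi (T_{\Delta} \cap F_0)=3$ for the second axis, and $\chi (T_{\Delta} \cap F_0)=-6$ for $\Delta =K(\SS)$; together with $\chi (\{0\} \cap F_0)=0$ these values indeed sum to $\chi (F_0)=-1$.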
\begin{remark}\label{rem:3-7}
In Theorem \ref{thm:3-4}, we assumed that $K(\SS)$ is strongly convex and $0 \in X(\SS)={\rm Spec}({\mathbb C}[\SS])$ is the $T$-fixed point. We can remove these assumptions as follows. Let $\SS$ be a finitely generated subsemigroup of $M={\mathbb Z}^n$ such that $0 \in \SS$ and $\d K(\SS)=n$. We shall explain how to calculate the monodromy zeta functions $\zeta_{f,x}(t)$ of polynomials $f$ on $X(\SS)$ at general points $x \in X(\SS)$. For a point $x$ of $X(\SS)$, let $\Delta_0$ be the unique face of $K(\SS)$ such that $x \in T_{\Delta_0}={\rm Spec}({\mathbb C}[M(\SS\cap\Delta_0)])$. Then
\begin{equation}
Y(\SS):={\rm Spec}({\mathbb C}[ \SS + M(\SS \cap \Delta_0)])
\end{equation}
is an open subset of $X(\SS)$ containing $T_{\Delta_0}$ and we can regard $f$ as a function on $Y(\SS)$. Set $K^{\prime}:=K(\SS + M(\SS \cap \Delta_0))$. Then there exists a decomposition $Y(\SS)=\bigsqcup_{\Delta \prec K^{\prime}} T_{\Delta}$ of $Y(\SS)$ into $T$-orbits. Note that $T_{\Delta_0}$ is the smallest $T$-orbit in $Y(\SS)$. For $\Delta \prec K^{\prime}$, let $i_{\Delta}\colon \overline{T_{\Delta}}={\rm Spec}({\mathbb C}[(\SS \cap \Delta)+M(\SS \cap \Delta_0)]) \DOTSB\lhook\joinrel\longrightarrow Y(\SS)$ be the embedding. Then by $Y(\SS)=\bigsqcup_{\Delta \prec K^{\prime}} T_{\Delta}$ and Proposition \ref{prp:2-9} we have
\begin{equation}
\zeta_{f,x}(t)=\prod_{\Delta \prec K^{\prime}}\zeta_{f \circ i_{\Delta}, x}({\mathbb C}_{T_{\Delta}})(t)
\end{equation}
(see also \eqref{eq:4-1}-\eqref{eq:4-5}). Now let us set $\SS_{\Delta}:=(\SS \cap \Delta)+ (M(\SS \cap \Delta) \cap {\mathbb L} (\Delta_0))$ and $\tl{Z} (\Delta):={\rm Spec} ({\mathbb C}[\SS_{\Delta}])$. Then there exists a decomposition $\tl{Z}(\Delta)=\bigsqcup_{\Delta_1 \prec \Delta } T^{\prime}_{\Delta_1}$ of $\tl{Z}(\Delta)$ into $T$-orbits and the natural morphism $\pi_{\Delta}\colon \tl{Z}(\Delta)\longrightarrow \overline{T_{\Delta}}$ induces a finite covering $T^{\prime}_{\Delta_1}\longrightarrow T_{\Delta_1}$ for any $\Delta_1 \prec \Delta$. Since $\pi_{\Delta}$ induces an isomorphism $T^{\prime}_{\Delta} \overset{\sim}{\longrightarrow} T_{\Delta}$, we have $R\pi_{\Delta *}({\mathbb C}_{T^{\prime}_{\Delta}}) \simeq {\mathbb C}_{T_{\Delta}}$. Therefore by Proposition \ref{prp:2-9}, in order to calculate $\zeta_{f \circ i_{\Delta}, x}({\mathbb C}_{T_{\Delta}})(t)$ it suffices to calculate $\zeta_{f \circ i_{\Delta} \circ \pi_{\Delta}}({\mathbb C}_{T^{\prime}_{\Delta}})(t)$ at each point of the finite set $\pi_{\Delta}^{-1}(x)=\{ p_1,p_2, \ldots , p_k\} \subset \tl{Z}(\Delta)$. Let $M^{\prime} \simeq {\mathbb Z}^{n -\d \Delta_0}$ be a sublattice of $M$ such that $M^{\prime} \oplus (M \cap {\mathbb L}(\Delta_0))=M$ and set $\SS^{\prime}_{\Delta}=M^{\prime} \cap \SS_{\Delta}$. Then we have
\begin{equation}
\SS_{\Delta}= \SS^{\prime}_{\Delta}\oplus (M(\SS \cap \Delta) \cap {\mathbb L}(\Delta_0))
\end{equation}
and $K(\SS^{\prime}_{\Delta})\subset M^{\prime}_{{\mathbb R}}$ is a strongly convex cone. Hence we have
\begin{equation}
\tl{Z}(\Delta)\simeq{\rm Spec} ({\mathbb C} [\SS^{\prime}_{\Delta}]) \times ({\mathbb C}^*)^{\d \Delta_0}\supset \{ 0\} \times ({\mathbb C}^*)^{\d \Delta_0}\supset \pi_{\Delta}^{-1}(x)
\end{equation}
and $\zeta_{f \circ i_{\Delta} \circ \pi_{\Delta}}({\mathbb C}_{T^{\prime}_{\Delta}})(t)$ can be calculated at each point of $\pi_{\Delta}^{-1}(x)=\{ p_1,p_2, \ldots , p_k\}$ by Corollary \ref{cor:3-6}. Indeed, we first multiply $f \circ i_{\Delta} \circ \pi_{\Delta} \in {\mathbb C} [\SS_{\Delta}]$ by a monomial in ${\mathbb C}[M(\SS \cap \Delta) \cap {\mathbb L}(\Delta_0)] \subset {\mathbb C} [\SS_{\Delta}]$ and extend it to a function on ${\rm Spec} ({\mathbb C} [\SS^{\prime}_{\Delta}]) \times {\mathbb C}^{\d \Delta_0}$. Then by a suitable translation we may assume that $p_i \in \pi_{\Delta}^{-1}(x)$ is the unique $T$-fixed point of the product toric variety ${\rm Spec} ({\mathbb C} [\SS^{\prime}_{\Delta}]) \times {\mathbb C}^{\d \Delta_0}$ and Corollary \ref{cor:3-6} can be applied.
\end{remark}
In the rest of this section, we extend our results to non-degenerate complete intersection subvarieties in the affine toric variety $X(\SS)$. Let $f_1,f_2,\ldots, f_k \in {\mathbb C}[\SS]$ ($1 \leq k \leq n=\d X(\SS)$) and consider the following subvarieties of $X(\SS)$:
\begin{equation}
V:=\{f_1=\cdots =f_{k-1}=f_k=0\} \subset W:=\{f_1=\cdots =f_{k-1}=0\}.
\end{equation}
Assume that $0 \in V$. Our objective here is to study the Milnor fiber $G_0$ of $g:=f_k|_W \colon W \longrightarrow {\mathbb C}$ at $0 \in V=g^{-1}(0) \subset W$ and its monodromy zeta function $\zeta_{g,0}(t)$. We call $\zeta_{g,0}(t)$ the $k$-th principal monodromy zeta function of $V=\{ f_1=\cdots =f_k=0\}$. For each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta \neq \emptyset$, we set
\begin{equation}
I(\Delta):=\{ j =1,2,\ldots, k-1 \ | \ \Gamma_+(f_j) \cap \Delta \neq \emptyset \} \subset \{ 1,2,\ldots, k-1\}
\end{equation}
and $m(\Delta):=\sharp I(\Delta)+1$. Let ${\mathbb L}(\Delta)$, $M(\SS \cap \Delta)$, $M(\SS \cap \Delta)^*$ be as before and ${\mathbb L}(\Delta)^*$ the dual vector space of ${\mathbb L}(\Delta)$. Then $M(\SS \cap \Delta)^*$ is naturally identified with a subset of ${\mathbb L}(\Delta)^*$ and the polar cone
\begin{equation}
\Delta^{\vee}=\{ u \in {\mathbb L}(\Delta)^* \ | \ \text{$\langle u, v\rangle \geq 0$ for any $v \in \Delta$}\}
\end{equation}
of $\Delta$ in ${\mathbb L}(\Delta)^*$ is a rational polyhedral convex cone with respect to the lattice $M(\SS \cap \Delta)^*$ in ${\mathbb L}(\Delta)^*$.
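For instance, if $\Delta={\mathbb R}_{\geq 0}^2$ in ${\mathbb L}(\Delta)={\mathbb R}^2$ with the standard lattice and pairing, then $\Delta^{\vee}={\mathbb R}_{\geq 0}^2$, whereas for the $2$-dimensional cone generated by $(1,0)$ and $(1,2)$ the polar cone is the cone generated by $(0,1)$ and $(2,-1)$.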
\begin{definition}\label{dfn:3-7}
\begin{enumerate}
\item For a function $f=\sum_{v \in \Gamma_+(f)} a_v \cdot v \in {\mathbb C}[\SS]$ on $X(\SS)$ and $u \in \Delta^{\vee}$, we set $f|_{\Delta}:= \sum_{v \in \Gamma_+(f) \cap \Delta}a_v \cdot v \in {\mathbb C}[\SS \cap \Delta]$ and
\begin{equation}\label{eq:3-12}
\Gamma(f|_{\Delta};u):=\left\{ v\in \Gamma_+(f) \cap \Delta \ \left|\ \langle u, v \rangle =\min_{w \in \Gamma_+(f) \cap \Delta} \langle u,w \rangle \right.\right\}.
\end{equation}
We call $\Gamma(f|_{\Delta};u)$ the supporting face of $u$ in $\Gamma_+(f) \cap \Delta$.
\item For $j \in I(\Delta) \sqcup \{ k\}$ and $u \in \Delta^{\vee}$, we define the $u$-part $f_j^u \in {\mathbb C}[\SS \cap \Delta]$ of $f_j$ by
\begin{equation}
f_j^u:=\sum_{v \in \Gamma(f_j|_{\Delta};u)} a_v \cdot v \in {\mathbb C}[\SS \cap \Delta ],
\end{equation}
where $f_j=\sum_{v \in \Gamma_+(f_j)} a_v \cdot v $ in ${\mathbb C}[\SS]$.
\end{enumerate}
\end{definition}
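To illustrate Definition \ref{dfn:3-7} in a simple case, let $\SS={\mathbb Z}_{\geq 0}^2$, $\Delta=K(\SS)$ and consider $f_j=z_1^2+z_1z_2 \in {\mathbb C}[\SS]$. For $u=(1,1) \in \Delta^{\vee}$ the minimum of $\langle u, \cdot \rangle$ on $\Gamma_+(f_j) \cap \Delta$ is $2$ and it is attained at both $(2,0)$ and $(1,1)$, so that $\Gamma(f_j|_{\Delta};u)$ is the segment joining these two points and $f_j^u=z_1^2+z_1z_2$, whereas for $u=(1,2)$ we have $\Gamma(f_j|_{\Delta};u)=\{(2,0)\}$ and $f_j^u=z_1^2$.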
By taking a ${\mathbb Z}$-basis of $M(\SS)$ and identifying the $u$-parts $f_j^u$ with Laurent polynomials $L(f_j^u)$ on $T=({\mathbb C}^*)^n$ as before, we have the following definition which does not depend on the choice of the ${\mathbb Z}$-basis of $M(\SS)$.
\begin{definition}\label{dfn:3-9}
We say that $(f_1,\ldots, f_k)$ is non-degenerate if for any face $\Delta \prec K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta \neq \emptyset$ (including the case where $\Delta =K(\SS)$) and any $u \in {\rm Int}(\Delta^{\vee}) \cap M(\SS \cap \Delta)^*$ the following two subvarieties of $({\mathbb C}^*)^n$ are non-degenerate complete intersections.
\begin{gather}
\{ x\in ({\mathbb C}^*)^n \ |\ \text{$L(f_j^u)(x)=0$ for any $j \in I(\Delta)$}\},\\
\{ x\in ({\mathbb C}^*)^n \ |\ \text{$L(f_j^u)(x)=0$ for any $j \in I(\Delta) \sqcup \{ k\}$}\}.
\end{gather}
\end{definition}
\begin{remark}\label{rem:3-10}
The above definition is slightly different from the one in \cite{Oka-2} etc., since our result (Theorem \ref{thm:3-12} below) generalizes the ones in \cite{Kirillov}, \cite{Oka-1} and \cite{Oka-2}.
\end{remark}
For each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta \neq \emptyset$, let us set
\begin{equation}
f_{\Delta}:=\left(\prod_{j \in I(\Delta)}f_j\right) \cdot f_k \in {\mathbb C}[\SS]
\end{equation}
and consider its Newton polygon $\Gamma_+(f_{\Delta}) \subset K(\SS)$. Let $\gamma_1^{\Delta}, \gamma_2^{\Delta}, \ldots, \gamma_{n(\Delta)}^{\Delta}$ be the compact faces of $\Gamma_+(f_{\Delta}) \cap \Delta$ ($\neq \emptyset$) such that $\d \gamma_i^{\Delta}=\d \Delta -1$. Then for each $1 \leq i \leq n(\Delta)$ there exists a unique primitive vector $u_i^{\Delta} \in {\rm Int}(\Delta^{\vee}) \cap M(\SS \cap \Delta)^*$ which attains its minimum on $\Gamma_+(f_{\Delta}) \cap \Delta$ exactly on $\gamma_i^{\Delta}$. For $j \in I(\Delta) \sqcup \{k\}$, set
\begin{equation}
\gamma(f_j)_i^{\Delta}:=\Gamma(f_j|_{\Delta}; u_i^{\Delta})
\end{equation}
and
\begin{equation}
d_i^{\Delta} := \min_{w \in \Gamma_+(f_k) \cap \Delta} \langle u_i^{\Delta},w \rangle .
\end{equation}
Note that we have
\begin{equation}
\gamma_i^{\Delta}=\sum_{j \in I(\Delta)\sqcup \{ k\}} \gamma(f_j)_i^{\Delta}
\end{equation}
for any face $\Delta \prec K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta \neq \emptyset$ and $1 \leq i \leq n(\Delta)$. For each face $\Delta \prec K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta \neq \emptyset$, $\d \Delta \geq m(\Delta)$ and $1 \leq i \leq n(\Delta)$, we set
\begin{equation}
K_i^{\Delta}:=\hspace*{-10mm}\sum_{\begin{subarray}{c}\alpha_1+\cdots +\alpha_{m(\Delta)}=\d \Delta-1 \\ \text{$\alpha_q \geq 1$ for $q \leq m(\Delta)-1$}, \ \alpha_{m(\Delta)} \geq 0 \end{subarray}} \hspace*{-10mm}{\rm Vol}_{{\mathbb Z}}(\underbrace{\gamma(f_{j_1})_i^{\Delta},\ldots, \gamma(f_{j_1})_i^{\Delta}}_{\text{$\alpha_1$ times}}, \ldots, \underbrace{\gamma(f_{j_{m(\Delta)}} )_i^{\Delta}, \ldots, \gamma(f_{j_{m(\Delta)}} )_i^{\Delta}}_{\text{$\alpha_{m(\Delta)} $ times}}).
\end{equation}
Here we set $I(\Delta) \sqcup \{k\} =\{j_1, j_2, \ldots , k=j_{m(\Delta)} \}$ and
\begin{equation}
{\rm Vol}_{{\mathbb Z}}(\underbrace{\gamma(f_{j_1})_i^{\Delta},\ldots, \gamma(f_{j_1})_i^{\Delta}}_{\text{$\alpha_1$ times}}, \ldots, \underbrace{\gamma(f_{j_{m(\Delta)}} )_i^{\Delta}, \ldots, \gamma(f_{j_{m(\Delta)}} )_i^{\Delta}}_{\text{$\alpha_{m(\Delta)} $ times}})
\end{equation}
is the normalized $(\d \Delta -1)$-dimensional mixed volume of
\begin{equation}
\underbrace{\gamma(f_{j_1})_i^{\Delta},\ldots, \gamma(f_{j_1})_i^{\Delta}}_{\text{$\alpha_1$ times}}, \ldots, \underbrace{\gamma(f_{j_{m(\Delta)}})_i^{\Delta}, \ldots, \gamma(f_{j_{m(\Delta)}})_i^{\Delta}}_{\text{$\alpha_{m(\Delta)}$ times}}
\end{equation}
(see Remark \ref{rem:2-13}) with respect to the lattice $M(\SS \cap \Delta)\cap {\mathbb L}(\gamma_i^{\Delta})$.
\begin{remark}\label{rem:3-11}
If $\d \Delta -1=0$, we set
\begin{equation}
K_i^{\Delta}={\rm Vol}_{{\mathbb Z}}(\underbrace{\gamma(f_k)_i^{\Delta}, \ldots, \gamma(f_k)_i^{\Delta}}_{\text{$0$ times}}):=1
\end{equation}
(in this case $\gamma(f_k)_i^{\Delta}$ is a point).
\end{remark}
\begin{theorem}\label{thm:3-12} Assume that $(f_1,\ldots, f_k)$ is non-degenerate. Then the $k$-th principal monodromy zeta function $\zeta_{g,0}(t)$ ($g=f_k|_W \colon W \longrightarrow {\mathbb C}$) is given by
\begin{equation}
\zeta_{g,0}(t)= \prod_{\begin{subarray}{c} \Gamma_+(f_k) \cap \Delta \neq \emptyset \\ \d \Delta \geq m(\Delta) \end{subarray}} \zeta_{g,\Delta}(t),
\end{equation}
where for each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta \neq \emptyset$ and $\d \Delta \geq m(\Delta)$ we set
\begin{equation}
\zeta_{g,\Delta}(t) = \prod_{i=1}^{n(\Delta)} \left(1-t^{d_i^{\Delta}} \right)^{(-1)^{\d \Delta -m(\Delta)}K_i^{\Delta}}.
\end{equation}
In particular, the Euler characteristic of the Milnor fiber $G_0$ of $g=f_k|_W \colon W \longrightarrow {\mathbb C}$ at $0 \in V=g^{-1}(0)$ is given by
\begin{equation}\label{eq:3-28}
\chi(G_0)=\sum_{\begin{subarray}{c}
\Gamma_+(f_k) \cap \Delta \neq \emptyset \\ \d \Delta \geq m(\Delta) \end{subarray}} (-1)^{\d \Delta -m(\Delta)} \sum_{i=1}^{n(\Delta)} d_i^{\Delta}\cdot K_i^{\Delta}.
\end{equation}
\end{theorem}
\section{Proof of Theorem \ref{thm:3-12}}\label{sec:4}
Now let us prove Theorem \ref{thm:3-12}. Theorem \ref{thm:3-4} will be proved as a special case of Theorem \ref{thm:3-12}. Our proof is similar to the one in \cite{Varchenko}.
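Indeed, in the case $k=1$ we have $W=X(\SS)$, $I(\Delta)=\emptyset$ and $m(\Delta)=1$ for every face $\Delta$, so that $f_{\Delta}=f_1$ and the sum defining $K_i^{\Delta}$ consists of the single term with $\alpha_1=\d \Delta -1$; hence $K_i^{\Delta}={\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta})$ and the formula of Theorem \ref{thm:3-12} specializes to that of Theorem \ref{thm:3-4}.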
First let $i_{\Delta} \colon \overline{T_{\Delta}} \DOTSB\lhook\joinrel\longrightarrow X(\SS)$ be the closed embedding. Since nearby cycle functors take distinguished triangles to distinguished triangles, by $X(\SS)=\bigsqcup_{\Delta \prec K(\SS)} T_{\Delta}$ and $W=\bigsqcup_{\Delta \prec K(\SS)}(T_{\Delta} \cap W)$ we obtain
\begin{eqnarray}
\label{eq:4-1}\zeta_{g, 0}(t)
&=&\zeta_{f_k,0}\({\mathbb C}_W\)(t)\\
\label{eq:4-2}&=&\zeta_{f_k,0}\({\mathbb C}_{W \setminus\{0\}}\)(t)\\
\label{eq:4-3}&=&\prod_{\{0\} \precneqq \Delta \prec K(\SS)} \zeta_{f_k,0}\({\mathbb C}_{T_{\Delta} \cap W}\)(t)\\
\label{eq:4-4}&=&\prod_{\{0\} \precneqq \Delta \prec K(\SS)} \zeta_{f_k,0}\(i_{\Delta *}{\mathbb C}_{T_{\Delta} \cap W}\)(t)\\
\label{eq:4-5}&=&\prod_{\{0\}\precneqq \Delta \prec K(\SS)} \zeta_{f_k\circ i_{\Delta},0}\({\mathbb C}_{T_{\Delta} \cap W}\)(t).
\end{eqnarray}
Here we used Proposition \ref{prp:2-9} to prove the first and last equalities. We set
\begin{equation}
\zeta_{g,\Delta}(t):=\zeta_{f_k\circ i_{\Delta},0}\({\mathbb C}_{T_{\Delta} \cap W}\)(t) \in {\mathbb C}(t)^*.
\end{equation}
Since the condition $\Gamma_+(f_k) \cap \Delta =\emptyset$ is equivalent to the condition $f_k \circ i_{\Delta}\equiv 0$, for a face $\Delta$ of $K(\SS)$ such that $\Gamma_+(f_k) \cap \Delta =\emptyset$ the nearby cycle $\psi_{f_k \circ i_{\Delta}}\({\mathbb C}_{T_{\Delta} \cap W}\)$ vanishes and hence $\zeta_{g,\Delta}(t) \equiv 1$. Therefore, in order to calculate the monodromy zeta function $\zeta_{g,0}(t)$ of $g=f_k|_W \colon W \longrightarrow {\mathbb C}$ at $0 \in g^{-1}(0)$, it suffices to calculate $\zeta_{g,\Delta}(t)$ only for faces $\Delta$ of $K(\SS)$ such that $\Delta \neq \{0\}$ and $\Gamma_+(f_k) \cap \Delta \neq \emptyset$.
Let us fix such a face $\Delta$ of $K(\SS)$. Set $n^{\prime}:=\d \Delta$ and $f:=f_k \circ i_{\Delta} \colon \overline{T_{\Delta}} \longrightarrow {\mathbb C}$. Note that we have $T_{\Delta}={\rm Spec} ({\mathbb C} [M(\SS\cap\Delta)])$ and $\overline{T_{\Delta}} =X(\SS \cap \Delta):={\rm Spec}({\mathbb C}[\SS\cap\Delta])$. We shall calculate the function
\begin{equation}
\zeta_{g,\Delta}(t)=\zeta_{f,0}({\mathbb C}_{T_{\Delta} \cap W})(t) \in {\mathbb C}(t)^*.
\end{equation}
For this purpose, we divide the polar cone
\begin{equation}
\Delta^{\vee}=\{ u \in {\mathbb L}(\Delta)^* \ |\ \text{$\langle u,v \rangle \geq 0$ for any $v \in \Delta$}\}
\end{equation}
of $\Delta$ in ${\mathbb L}(\Delta)^*$ by the equivalence relation
\begin{equation}
u \sim u^{\prime} \hspace{5mm}\overset{{\rm def}}{\Longleftrightarrow} \hspace{5mm}\Gamma(f_j|_{\Delta};u)=\Gamma(f_j|_{\Delta};u^{\prime}) \hspace{5mm}(\forall j \in I(\Delta) \sqcup \{k\})
\end{equation}
(see \eqref{eq:3-12}). Then we obtain a fan $\tl{\Sigma_{\Delta}}=\{ \sigma_i^{\prime}\}_i$ in $({\mathbb L}(\Delta)^*, M(\SS \cap \Delta)^*)$ such that $\bigsqcup_i {\rm rel.int} (\sigma_i^{\prime}) =\Delta^{\vee}$, where ${\rm rel.int}(\, \cdot\, )$ is the relative interior. Note that for the product $f_{\Delta}=\(\prod_{j \in I(\Delta)} f_j\) \cdot f_k \in {\mathbb C}[\SS]$ and $u, u^{\prime} \in \Delta^{\vee}$ we have
\begin{equation}
u \sim u^{\prime} \hspace{5mm}\Longleftrightarrow \hspace{5mm} \Gamma(f_{\Delta}|_{\Delta};u)=\Gamma(f_{\Delta}|_{\Delta};u^{\prime}).
\end{equation}
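Indeed, since ${\mathbb C}[\SS]$ is an integral domain, we have $\Gamma(f_{\Delta}|_{\Delta};u)=\sum_{j \in I(\Delta) \sqcup \{k\}}\Gamma(f_j|_{\Delta};u)$ for any $u \in \Delta^{\vee}$, and a face of a Minkowski sum of polyhedra supported by $u$ determines uniquely the faces of the summands supported by $u$.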
By applying sufficiently many barycentric subdivisions to $\tl{\Sigma_{\Delta}}$, we obtain a fan $\Sigma:=\Sigma_{\Delta}=\{\sigma_i\}_{i \in I}$ in $({\mathbb L}(\Delta)^*, M(\SS \cap \Delta)^*)$ such that $\bigsqcup_i {\rm rel.int} (\sigma_i ) =\Delta^{\vee}$ and the (normal) toric variety $X_{\Sigma}$ associated with it is smooth. Then the open dense torus $T_{\Delta}^{\prime \prime}$ in $X_{\Sigma}$ is defined by
\begin{equation}
T_{\Delta}^{\prime \prime}={\rm Spec}({\mathbb C}[M(\SS\cap\Delta)]).
\end{equation}
Since the subsemigroup $M(\SS \cap \Delta) \cap \Delta$ of $M(\SS \cap \Delta)$ is saturated, the affine toric variety $Z(\Delta):={\rm Spec}({\mathbb C}[M(\SS\cap\Delta)\cap \Delta])$ is normal and there exists a natural $T_{\Delta}^{\prime \prime}$-equivariant morphism
\begin{equation}
\pi_1 \colon Z(\Delta) \longrightarrow X(\SS \cap \Delta)
\end{equation}
induced by $\SS \cap \Delta \subset M(\SS\cap\Delta)\cap\Delta$. There exists also a natural $T_{\Delta}^{\prime \prime}$-equivariant proper birational morphism
\begin{equation}
\pi_2 \colon X_{\Sigma} \longrightarrow Z(\Delta)
\end{equation}
induced by the morphism $\Sigma \longrightarrow \{\Delta^{\vee}\}$ of fans in $({\mathbb L}(\Delta)^*, M(\SS \cap \Delta)^*)$. Hence we obtain a $T_{\Delta}^{\prime \prime}$-equivariant morphism
\begin{equation}
\pi :=\pi_1 \circ \pi_2 \colon X_{\Sigma} \longrightarrow X(\SS \cap \Delta).
\end{equation}
We shall use this morphism $\pi$ for the calculation of $\zeta_{g,\Delta}(t)$. For a face $\tau \prec \Delta^{\vee}$ of $\Delta^{\vee}$, denote by $\Delta_{\tau}$ the polar face $\Delta \cap \tau^{\perp}$ of $\tau$ in ${\mathbb L}(\Delta)$ and set
\begin{eqnarray}
T_{\tau}^{\prime}&:=&{\rm Spec}({\mathbb C}[M(\SS\cap\Delta)\cap\tau^{\perp}])={\rm Spec}({\mathbb C}[M(\SS\cap\Delta)\cap{\mathbb L}(\Delta_{\tau})]),\\
T_{\tau}&:=& {\rm Spec}({\mathbb C}[M(\SS \cap \Delta_{\tau})]).
\end{eqnarray}
Then $T_{\tau}^{\prime}$ and $T_{\tau}$ are $T_{\Delta}^{\prime \prime}$-orbits in $Z(\Delta)$ and $X(\SS \cap \Delta)$ respectively. Note that we have $T_{\{0\}}=T_{\Delta}$ and $T_{\{0\}}^{\prime}=T_{\Delta}^{\prime \prime}$ under this notation. Since the kernel of the canonical morphism
\begin{equation}
T_{\tau}^{\prime}={\rm Spec}({\mathbb C}[M(\SS \cap \Delta) \cap {\mathbb L}(\Delta_{\tau})]) \longrightarrow T_{\tau}={\rm Spec}({\mathbb C}[M(\SS \cap \Delta_{\tau})])
\end{equation}
is isomorphic to the finite group $M(\SS \cap \Delta_{\tau})^*/( M(\SS \cap \Delta)\cap {\mathbb L}(\Delta_{\tau}))^*$ (see \cite[p.22]{Oda}), the morphism $\pi_1|_{T_{\tau}^{\prime}} \colon T_{\tau}^{\prime} \longrightarrow T_{\tau}$ is a finite covering.
For a cone $\sigma_i \in \Sigma$, denote by $T_{\sigma_i}^{\prime \prime}$ the $T_{\Delta}^{\prime \prime}$-orbit ${\rm Spec}({\mathbb C}[M(\SS\cap\Delta)\cap\sigma_i^{\perp}]) \simeq ({\mathbb C}^*)^{\d \Delta-\d \sigma_i}$ in $X_{\Sigma}$ (in particular we have $T^{\prime \prime}_{\{0\}}=T_{\Delta}^{\prime \prime}$). Then we obtain a decomposition $X_{\Sigma}=\bigsqcup_{\sigma_i\in \Sigma} T_{\sigma_i}^{\prime \prime}$ of $X_{\Sigma}$. Note that the proper morphism $\pi_2\colon X_{\Sigma} \longrightarrow Z(\Delta)$ induces an isomorphism $\pi_2|_{T^{\prime \prime}_{\{0\}}} \colon T^{\prime \prime}_{\{0\}} \overset{\sim}{\longrightarrow} T^{\prime}_{\{0\}}$.
Let us set $\tl{f}:=f \circ \pi \colon X_{\Sigma} \longrightarrow {\mathbb C}$ and apply Proposition \ref{prp:2-9} to the constructible sheaf ${\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}( T_{\Delta} \cap W)}\in {\bf D}_{c}^{b}(X_{\Sigma})$. Since the morphism $\pi \colon X_{\Sigma} \longrightarrow X(\SS\cap\Delta)$ is proper and induces an isomorphism $\pi|_{T_{\Delta}^{\prime \prime}} \colon T_{\Delta}^{\prime \prime}\overset{\sim}{\longrightarrow} T_{\Delta}$, by the above descriptions of $\pi_1$ and $\pi_2$ we have
\begin{equation}
R\pi_*\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)}\)\simeq {\mathbb C}_{T_{\Delta}\cap W}
\end{equation}
in ${\bf D}_{c}^{b}(X(\SS\cap\Delta))$. Then by Proposition \ref{prp:2-9} for the calculation of
\begin{equation}
\zeta_{f,0}\({\mathbb C}_{T_{\Delta}\cap W}\)(t)=\zeta_{f,0}\(R\pi_*\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)}\)\)(t)\in {\mathbb C}(t)^*
\end{equation}
it suffices to calculate the value of $\zeta_{\tl{f}}\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)}\)$ at each point of $\pi^{-1}(0)$.
Let $\sigma_0 \in \Sigma$ be a cone in $\Sigma$ such that ${\rm rel.int}(\sigma_0) \subset {\rm Int}(\Delta^{\vee})$ ($\Longleftrightarrow T_{\sigma_0}^{\prime \prime} \subset \pi^{-1}(0)$). In order to calculate the constructible function $\zeta_{\tl{f}}\({\mathbb C}_{T_{\Delta}^{\prime \prime} \cap \pi^{-1}(T_{\Delta} \cap W)}\)$ on $T_{\sigma_0}^{\prime \prime}$, take an $n^{\prime}$-dimensional cone $\sigma_1 \in \Sigma$ such that $\sigma_0 \prec \sigma_1$ and let $\{a_1,a_2,\ldots, a_{n^{\prime}}\}$ be the $1$-skeleton of $\sigma_1$. In other words, $a_i\neq 0\in M(\SS \cap \Delta)^*$ are the primitive vectors on the $1$-dimensional faces of $\sigma_1$. Set $p:=\d \sigma_0$. Without loss of generality, we may assume that $\{a_1,a_2,\ldots, a_p\}$ is the $1$-skeleton of $\sigma_0$. For $j \in I(\Delta) \sqcup \{ k\}$, we set
\begin{equation}
m(j)_i:=\min_{w \in \Gamma_+(f_j)\cap \Delta} \langle a_i, w \rangle \geq 0 \hspace{5mm}(i=1,2,\ldots, n^{\prime}).
\end{equation}
For simplicity, we set also $m_i:=m(k)_i$ ($i=1,2,\ldots, n^{\prime}$).
Let $U_1 :={\mathbb C}^{n^{\prime}}(\sigma_1) \simeq{\mathbb C}_y^{n^{\prime}}$ be the affine toric variety associated with the fan $\{\sigma^{\prime}\}_{\sigma^{\prime} \prec \sigma_1}$ in $({\mathbb L}(\Delta)^*, M(\SS \cap \Delta)^*)$. Then $U_1$ is an affine open subset of $X_{\Sigma}$ and in $U_1\simeq{\mathbb C}_y^{n^{\prime}}$ the $T_{\Delta}^{\prime \prime}$-orbit $T_{\sigma_0}^{\prime \prime}$ is defined by
\begin{equation}
T_{\sigma_0}^{\prime \prime}=\{ y=(y_1,\ldots, y_{n^{\prime}})\ |\ y_1=\cdots =y_p=0, \ y_{p+1}, \ldots, y_{n^{\prime}}\neq 0\} \simeq ({\mathbb C}^*)^{n^{\prime}-p}.
\end{equation}
Moreover for $j \in I(\Delta) \sqcup \{k\}$, on $U_1\simeq {\mathbb C}_y^{n^{\prime}}$ the function $f_j \circ \pi \colon X_{\Sigma} \longrightarrow {\mathbb C}$ has the form
\begin{equation}
(f_j \circ \pi)(y)=y_1^{m(j)_1}y_2^{m(j)_2} \cdots y_{n^{\prime}}^{m(j)_{n^{\prime}}} \cdot f_j^{\sigma_1}(y),
\end{equation}
where $f_j^{\sigma_1}$ is a polynomial function on $U_1$. Note that by the assumptions $T_{\sigma_0}^{\prime \prime} \subset \pi^{-1}(0)$ and $f_k(0)=0$ we have $T_{\sigma_0}^{\prime \prime} \subset \tl{f}^{-1}(0)$. Since $f_k^{\sigma_1}|_{T_{\sigma_0}^{\prime \prime}} \not\equiv 0$ by the construction of the fan $\Sigma$, at least one of the integers $m_1,m_2,\ldots , m_p \geq 0$ must be positive. Note that by the definition of $I(\Delta)$ we have $f_j \circ i_{\Delta} \equiv 0$ on $X(\SS \cap \Delta)$ for any $j \notin I(\Delta) \sqcup \{k\}$. Hence we obtain our key formula
\begin{equation}
T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)=T_{\Delta}^{\prime \prime} \cap \bigcap_{j \in I(\Delta)} \{ f_j^{\sigma_1} =0\}.
\end{equation}
Moreover by the non-degeneracy of $(f_1,f_2,\ldots,f_k)$ (see Definition \ref{dfn:3-9}) we obtain the following lemma.
\begin{lemma}\label{lem:4-1}
The gradient vectors $\{ {\rm grad}(f_j^{\sigma_1}) \}_{j \in I(\Delta)}$ (resp. $\{ {\rm grad}(f_j^{\sigma_1}) \}_{j \in I(\Delta) \sqcup \{k\}}$) are linearly independent on $\bigcap_{j \in I(\Delta)}\{f_j^{\sigma_1}=0\}$ (resp. $\bigcap_{j \in I(\Delta) \sqcup \{k\}} \{f_j^{\sigma_1}=0\}$) in a neighborhood of $T_{\sigma_0}^{\prime \prime}$.
\end{lemma}
\begin{proof}
First, by the non-degeneracy of $(f_1,f_2,\ldots, f_k)$, for any $u \in {\rm Int}(\Delta^{\vee}) \cap M(\SS \cap \Delta)^*$ the Laurent polynomials $L^{\prime}(f_j^u)$ on ${\rm Spec}({\mathbb C}[M(\SS)\cap{\mathbb L}(\Delta)])\simeq ({\mathbb C}^*)^{\d \Delta}$ defined by the $u$-parts $f_j^u \in {\mathbb C} [\SS \cap \Delta]$ ($j \in I(\Delta) \sqcup \{k\}$) satisfy conditions similar to the ones in Definition \ref{dfn:3-9}. Since the natural morphism
\begin{equation}
{\rm Spec}({\mathbb C} [M(\SS)\cap{\mathbb L}(\Delta)])\simeq ({\mathbb C}^*)^{\d \Delta}\longrightarrow T_{\Delta}={\rm Spec}({\mathbb C} [M(\SS \cap \Delta)])
\end{equation}
is a finite covering, the corresponding Laurent polynomials $L^{\prime \prime}(f_j^u)$ ($j \in I(\Delta) \sqcup \{k\}$) on $T_{\Delta}\simeq ({\mathbb C}^*)^{\d \Delta}$ also satisfy such conditions. Then the result follows from the classical arguments (see \cite{Oka-2} etc. for the details). \hfill $\Box$
\end{proof}
For $j \in I(\Delta) \sqcup \{ k\}$, we set
\begin{equation}
h_j:=f_j^{\sigma_1}|_{T_{\sigma_0}^{\prime \prime}} \colon T_{\sigma_0}^{\prime \prime} \longrightarrow {\mathbb C}.
\end{equation}
\begin{proposition}\label{prp:4-2}
In the situation as above, we have
\begin{enumerate}
\item If $\d \sigma_0=1$, then for $y=(0,y_2,\ldots,y_{n^{\prime}})\in T_{\sigma_0}^{\prime \prime}\simeq ({\mathbb C}^*)^{n^{\prime}-1}$ we have
\begin{equation}\label{eq:4-26}
\zeta_{\tl{f},y}\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}( T_{\Delta} \cap W)}\)(t)=\begin{cases}1-t^{m_1} & \text{if $y\in (\bigcap_{j\in I(\Delta)} \{h_j=0\}) \setminus \{h_k=0\} $},\\
1 &\text{otherwise}.
\end{cases}
\end{equation}
\item If $\d \sigma_0 \geq 2$, we have
\begin{equation}
\zeta_{\tl{f}}\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)}\) \Big|_{T_{\sigma_0}^{\prime \prime}}=\1_{T_{\sigma_0}^{\prime \prime}}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Set $l:=\sharp \{ 1 \leq i \leq p \ |\ m_i>0\}$. If $l\geq 2$, then by Lemma \ref{lem:4-1} we obtain
\begin{equation}
\zeta_{\tl{f}}\({\mathbb C}_{T_{\Delta}^{\prime \prime} \cap \pi^{-1}(T_{\Delta} \cap W)}\)\Big|_{T_{\sigma_0}^{\prime \prime}} = \1_{T_{\sigma_0}^{\prime \prime}}
\end{equation}
(see for example \cite[p.48-49]{Oka-2} etc.). Let us consider the case where $l=1$. If $y \in T_{\sigma_0}^{\prime \prime} \setminus (\bigcap_{j\in I(\Delta)} \{h_j=0\})$ or $y \in (\bigcap_{j\in I(\Delta)} \{h_j=0\}) \cap \{h_k=0\}$, then again by Lemma \ref{lem:4-1} we can show that $\zeta_{\tl{f},y}\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}( T_{\Delta} \cap W )}\)(t)=1$. By $\d T_{\Delta}^{\prime \prime}-\d T_{\sigma_0}^{\prime \prime}=p$, in a neighborhood of each point of $(\bigcap_{j\in I(\Delta)} \{h_j=0\}) \setminus \{h_k=0\}$, we have
\begin{equation}
\{ \tl{f} =\varepsilon \} \cap \( T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)\) \simeq ({\mathbb C}^*)^{p-1} \times A \qquad (0 <|\varepsilon | \ll 1),
\end{equation}
where $A$ is a constructible set. If $p\geq 2$, the Euler characteristic of $({\mathbb C}^*)^{p-1}$ is zero and we can easily prove that the equality
\begin{equation}
\zeta_{\tl{f}}\({\mathbb C}_{T_{\Delta}^{\prime \prime} \cap \pi^{-1}(T_{\Delta} \cap W)}\)\Big|_{T_{\sigma_0}^{\prime \prime}} = \1_{T_{\sigma_0}^{\prime \prime}}
\end{equation}
holds (see \cite[p.48-49]{Oka-2} etc.). Finally, consider the case where $l=1$ and $p=1$. In this case, on $U_1 \simeq {\mathbb C}_y^{n^{\prime}}$ the function $\tl{f}=f_k \circ \pi$ has the form
\begin{equation}
\tl{f}(y)=y_1^{m_1}\cdot (y_{2}^{m_{2}}y_{3}^{m_{3}}\cdots y_{n^{\prime}}^{m_{n^{\prime}}})\cdot f_k^{\sigma_1}(y).
\end{equation}
Then by Lemma \ref{lem:4-1}, for $y \in T_{\sigma_0}^{\prime \prime}$ we can easily prove \eqref{eq:4-26}.
\hfill $\Box$
\end{proof}
Now let us return to the proof of Theorem \ref{thm:3-12}. By the proposition above, in order to calculate $\zeta_{g,\Delta}(t)$, it suffices to consider the values of the ${\mathbb C}(t)^*$-valued constructible function
\begin{equation}
\zeta_{\tl{f}}\({\mathbb C}_{T_{\Delta}^{\prime \prime}\cap \pi^{-1}(T_{\Delta} \cap W)}\)\Big|_{\pi^{-1}(0)} \colon \pi^{-1}(0) \longrightarrow {\mathbb C}(t)^*
\end{equation}
only on $T_{\sigma_0}^{\prime \prime}$ for $\sigma_0 \in \Sigma$ such that ${\rm rel.int}(\sigma_0) \subset {\rm Int}(\Delta^{\vee})$ and $\d \sigma_0=1$. Let us take such a $1$-dimensional cone $\sigma_0 \in \Sigma$. Let $u\neq 0 \in M(\SS \cap \Delta)^*$ be the unique non-zero primitive vector on $\sigma_0$ and for $j \in I(\Delta) \sqcup \{k\}$ set $\gamma(f_j)_u^{\Delta}:=\Gamma(f_j|_{\Delta};u)$. Then $\gamma(f_j)_u^{\Delta}$ is naturally identified with the Newton polytope of the Laurent polynomial $h_j \colon T_{\sigma_0}^{\prime \prime} \longrightarrow {\mathbb C}$ on $T_{\sigma_0}^{\prime \prime} \simeq ({\mathbb C}^*)^{\d \Delta-1}$ and by Theorem \ref{thm:2-14} we have
\begin{eqnarray}
\lefteqn{(-1)^{\d \Delta-m(\Delta)}\chi\( \(\bigcap_{j \in I(\Delta)} \{h_j=0\}\) \setminus \{h_k=0\} \)} \nonumber\\
&=& (-1)^{\d \Delta-m(\Delta)}\left\{\chi\(\bigcap_{j \in I(\Delta)}\{h_j=0\}\)-\chi\(\bigcap_{j \in I(\Delta)\sqcup \{k\}}\{h_j=0\}\)\right\}\\
&=& \hspace*{-12mm}\sum_{\begin{subarray}{c}\alpha_1+\cdots +\alpha_{m(\Delta)}=\d \Delta-1 \\ \text{$\alpha_q \geq 1$ for $q \leq m(\Delta)-1$}, \ \alpha_{m(\Delta)} \geq 0 \end{subarray}} \hspace*{-12mm}{\rm Vol}_{{\mathbb Z}}(\underbrace{\gamma(f_{j_1})_u^{\Delta},\ldots, \gamma(f_{j_1})_u^{\Delta}}_{\text{$\alpha_1$-times}}, \ldots, \underbrace{\gamma(f_{j_{m(\Delta)}} )_u^{\Delta}, \ldots, \gamma(f_{j_{m(\Delta)}} )_u^{\Delta}}_{\text{$\alpha_{m(\Delta)}$-times}}). \hspace*{10mm}\label{eq:4-35}
\end{eqnarray}
Here we set $I(\Delta)\sqcup \{k\}=\{j_1,j_2,\ldots, k=j_{m(\Delta)}\}$. Now recall that we have
\begin{equation}
\Gamma(f_{\Delta}|_{\Delta};u)=\sum_{j \in I(\Delta) \sqcup\{k\}}\gamma(f_j)_u^{\Delta}.
\end{equation}
Hence if $\d \Gamma(f_{\Delta}|_{\Delta};u) <\d \Delta-1$, then all the mixed volumes in \eqref{eq:4-35} vanish and
\begin{equation}
\chi\(\(\bigcap_{j \in I(\Delta)} \{h_j=0\}\)\setminus \{h_k=0\} \)=0.
\end{equation}
This implies that for the calculation of $\zeta_{g,\Delta}(t)=\zeta_{f,0}\({\mathbb C}_{T_{\Delta} \cap W}\)(t)\in {\mathbb C}(t)^*$, we have only to consider the compact faces $\gamma_1^{\Delta}, \gamma_2^{\Delta},\ldots , \gamma_{n(\Delta)}^{\Delta}$ of $\Gamma_+(f_{\Delta}) \cap \Delta$ such that $\d \gamma_i^{\Delta}=\d \Delta-1$ and their normal primitive vectors $u_1^{\Delta}, u_2^{\Delta}, \ldots, u_{n(\Delta)}^{\Delta} \in {\rm Int}(\Delta^{\vee}) \cap M(\SS \cap \Delta)^*$.
Summarizing these arguments, we finally obtain
\begin{eqnarray}
\zeta_{g,\Delta}(t)
&=& \zeta_{f,0}\({\mathbb C}_{T_{\Delta} \cap W}\)(t)\\
&=& \prod_{i=1}^{n(\Delta)} \(1-t^{d_i^{\Delta}}\)^{(-1)^{\d \Delta -m(\Delta)}K_i^{\Delta}}.
\end{eqnarray}
Since $K_i^{\Delta}=0$ for $m(\Delta)>\d \Delta$ by the definition of $K_i^{\Delta}$, we also have
\begin{equation}
\zeta_{g,0}(t)=\prod_{\begin{subarray}{c}\Gamma_+(f_k) \cap \Delta \neq \emptyset\\ \d \Delta \geq m(\Delta) \end{subarray}} \zeta_{g,\Delta}(t).
\end{equation}
This completes the proof of Theorem \ref{thm:3-12}. \hfill $\Box$
\section{Monodromy zeta functions of torus invariant sheaves}\label{sec:5}
In this section, we generalize our Theorem \ref{thm:3-4} to $T$-invariant constructible sheaves on general toric varieties.
First, let $X$ be a (not necessarily normal) toric variety over ${\mathbb C}$ and $T\subset X$ the open dense torus which acts on $X$ itself. Let $X=\bigsqcup_{\alpha}X_{\alpha}$ be the decomposition of $X$ into $T$-orbits.
\begin{definition}\label{dfn:5-1}
\begin{enumerate}
\item We say that a constructible sheaf ${\cal F}$ on $X$ is $T$-invariant if ${\cal F}|_{X_{\alpha}}$ is a locally constant sheaf of finite rank for any $\alpha$.
\item We say that a constructible object ${\cal F} \in {\bf D}_{c}^{b}(X)$ is $T$-invariant if the cohomology sheaf $H^j({\cal F})$ of ${\cal F}$ is $T$-invariant for any $j \in {\mathbb Z}$.
\end{enumerate}
\end{definition}
Note that the so-called $T$-equivariant constructible sheaves on $X$ are $T$-invariant in the above sense.
From now on, we consider the (not necessarily normal) toric variety $X(\SS)$ and the regular function $f \colon X(\SS) \longrightarrow {\mathbb C}$ on it considered in Section \ref{sec:3}. We shall freely use the notations in Section \ref{sec:3}. Let ${\cal F}\in {\bf D}_{c}^{b}(X(\SS))$ be a $T$-invariant object. Our objective here is to calculate the monodromy zeta function
\begin{equation}
\zeta_f({\cal F})(t):=\zeta_{f,0}({\cal F})(t) \in {\mathbb C}(t)^*
\end{equation}
of ${\cal F} \in {\bf D}_{c}^{b}(X(\SS))$ at the $T$-fixed point $0 \in X(\SS)$. Since we have
\begin{equation}
\zeta_f({\cal F})(t)=\prod_{j \in {\mathbb Z}}\zeta_f(H^j({\cal F}))^{(-1)^j},
\end{equation}
we may assume from the outset that ${\cal F}$ is a $T$-invariant constructible sheaf on $X(\SS)$. For each face $\Delta \prec K(\SS)$ of $K(\SS)$, denote by $T_{\Delta} \subset X(\SS)$ the corresponding $T$-orbit in $X(\SS)$ and consider the decomposition
\begin{equation}
X(\SS)=\bigsqcup_{\Delta \prec K(\SS)} T_{\Delta}
\end{equation}
of $X(\SS)$ into $T$-orbits. For $\Delta \prec K(\SS)$, we denote the local system ${\cal F}|_{T_{\Delta}}$ on $T_{\Delta}$ by $\L_{\Delta}$. Let $j_{\Delta} \colon T_{\Delta} \DOTSB\lhook\joinrel\longrightarrow X(\SS)$ be the inclusion. Then by Proposition \ref{prp:2-9} we have
\begin{equation}
\zeta_f({\cal F})(t)=\prod_{\Delta \prec K(\SS)} \zeta_f((j_{\Delta})_!\L_{\Delta})(t).
\end{equation}
In order to calculate the monodromy zeta functions $\zeta_f((j_{\Delta})_!\L_{\Delta})(t)\in {\mathbb C}(t)^*$ as in the proof of Theorem \ref{thm:3-4}, we need the following elementary propositions.
\begin{proposition}\label{prp:5-2}
Let $\L$ be a local system of rank $r>0$ on ${\mathbb C}^*={\mathbb C}\setminus \{0\}$. Denote by $A \in GL_r({\mathbb C})$ the monodromy matrix of $\L$ along the loop $\{e^{\sqrt{-1}\theta}\ |\ 0 \leq \theta \leq 2\pi\}$ in ${\mathbb C}^*$, which is defined up to conjugacy. Let $j \colon {\mathbb C}^* \DOTSB\lhook\joinrel\longrightarrow {\mathbb C}$ be the inclusion.
\begin{enumerate}
\item Set $d:=\d {\rm Ker}({\rm id}-A)$. Then we have
\begin{equation}
H^j({\mathbb C}^*;\L) \simeq \begin{cases} {\mathbb C}^d & (j=0,1),\\ 0 & (\text{otherwise}). \end{cases}
\end{equation}
\item For any $j \in {\mathbb Z}$, we have
\begin{equation}
H^j({\mathbb C};j_!\L)\simeq 0.
\end{equation}
\item Let $h$ be a function on ${\mathbb C}$ defined by $h(z)=z^m$ ($m\in {\mathbb Z}_{>0}$) for $z \in {\mathbb C}$. Then we have
\begin{equation}\label{eq:5-9}
\zeta_{h,0}(j_!\L)(t) =\det ({\rm id} -t^mA) \in {\mathbb C}(t)^*.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
For the proof of (i), see for example \cite[Lemma 3.3]{N-T}. The assertion (ii) is easily obtained from (i). Let us prove (iii). Taking a small $\varepsilon >0$, for $k=0,1, \ldots , m-1$ we set $p_k:=\varepsilon e^{\frac{2\pi \sqrt{-1} k}{m}} \in {\mathbb C}^*$. Then we have an isomorphism
\begin{equation}
\psi_h(j_! \L )_0 \simeq \bigoplus_{k=0}^{m-1}\L_{p_k}.
\end{equation}
We fix an isomorphism $\L_{p_0}\simeq {\mathbb C}^r$ and for each $k=1,2, \ldots , m-1$ construct an isomorphism $\L_{p_k}\simeq \L_{p_0}={\mathbb C}^r$ by the translation of the sections of $\L$ along the path $\gamma_k\colon [0,1] \DOTSB\lhook\joinrel\longrightarrow {\mathbb C}^*$, $\gamma_k(s)=\varepsilon e^{\frac{2\pi \sqrt{-1} k}{m} s}$. Then we obtain an isomorphism
\begin{equation}
\psi_h(j_! \L )_0 \simeq \bigoplus_{k=0}^{m-1}\L_{p_k} \simeq {\mathbb C}^{mr}.
\end{equation}
Since the monodromy automorphism of $\psi_h(j_! \L )_0$ corresponds to the matrix
\begin{equation}
\begin{pmatrix}
O & O & \ldots &O & A\\
{\rm id} & O & \ldots &O & O\\
O & {\rm id} & \ddots &\ddots & O\\
\vdots & \ddots & \ddots & \ddots &\vdots \\
O & \ldots & \ldots & {\rm id} & O
\end{pmatrix}
\in GL_{mr}({\mathbb C})
\end{equation}
via this isomorphism $\psi_h(j_! \L )_0 \simeq {\mathbb C}^{mr}$, and since for this block matrix, say $C$, we have $\det ({\rm id} -tC)=\det ({\rm id} -t^mA)$ (as one checks by induction on $m$ using Schur complements), we obtain \eqref{eq:5-9}. \hfill $\Box$
\end{proof}
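For instance, if $\L={\mathbb C}_{{\mathbb C}^*}$ is the trivial local system of rank one, then $A={\rm id}$ and \eqref{eq:5-9} yields $\zeta_{h,0}(j_!\L)(t)=1-t^m$, the classical monodromy zeta function of $z^m$ at the origin; note that $\psi_h(j_!\L)_0 \simeq \psi_h({\mathbb C}_{{\mathbb C}})_0$ since $j_!\L$ and ${\mathbb C}_{{\mathbb C}}$ coincide on ${\mathbb C} \setminus h^{-1}(0)$.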
\begin{proposition}\label{prp:5-3}
Let $\L$ be a local system on $({\mathbb C}^*)^k$ and $j \colon ({\mathbb C}^*)^k \DOTSB\lhook\joinrel\longrightarrow {\mathbb C}^k$ the inclusion. Let $h \colon {\mathbb C}^k \longrightarrow {\mathbb C}$ be a function on ${\mathbb C}^k$ defined by $h(z)=z_1^{m_1}z_2^{m_2}\cdots z_k^{m_k} (\not\equiv 1)$ ($m_i \in {\mathbb Z}_{\geq 0}$) for $z\in {\mathbb C}^k$. If $k \geq 2$, the monodromy zeta function $\zeta_{h,0}(j_!\L)(t)$ (resp. $\zeta_{h,0}(Rj_*\L)(t)$) of $j_!\L \in {\bf D}_{c}^{b}({\mathbb C}^k)$ (resp. $Rj_*\L \in {\bf D}_{c}^{b}({\mathbb C}^k)$) at $0 \in {\mathbb C}^k$ is $1 \in {\mathbb C}(t)^*$.
\end{proposition}
\begin{proof}
Since the proof of $\zeta_{h,0}(Rj_*\L)(t)\equiv 1$ is similar, we prove only $\zeta_{h,0}(j_!\L)(t)\equiv 1$. Let $F_0$ be the Milnor fiber of $h$ at $0 \in {\mathbb C}^k$. Then there exist $\varepsilon_0, \eta_0 >0$ with $0< \eta_0 \ll \varepsilon_0$ such that the restriction
\begin{equation}
B(0; \varepsilon_0) \cap h^{-1}(D_{\eta_0}^*) \longrightarrow D_{\eta_0}^*
\end{equation}
of $h$ is a fiber bundle over the punctured disk $D_{\eta_0}^*=\{ x \in {\mathbb C} \ | \ 0<|x|<\eta_0\}$ with fiber $F_0$. Furthermore, by using the special form of $h$, we may replace the above constants $\varepsilon_0, \eta_0 >0$ so that there exists also an isomorphism
\begin{equation}\label{isom}
R\varGamma (h^{-1}(x) ; j_!\L ) \overset{\sim}{\longrightarrow} R\varGamma (h^{-1}(x) \cap B(0; \varepsilon_0) ; j_!\L )
\end{equation}
for any $x \in D_{\eta_0}^*$. Indeed, this isomorphism can be obtained by applying Kashiwara's non-characteristic deformation lemma (\cite[Proposition 2.7.2]{K-S}) to the constructible sheaf $(j_!\L)|_{h^{-1}(x)}$ on the complex manifold $h^{-1}(x)$. Set ${\cal F} =(j_!\L)_{ \overline{B(0; \varepsilon_0)}} \in {\bf D}^{b}({\mathbb C}^k)$. Then for any $j \in {\mathbb Z}$ the cohomology sheaf $H^j(Rh_*{\cal F})$ is a local system on $D_{\eta_0}^*$ and via the isomorphism
\begin{equation}
H^j(\psi_h(j_! \L))_0 \simeq H^j(F_0 ;j_!\L) \simeq H^j(Rh_*{\cal F})_x \qquad (x \in D_{\eta_0}^*)
\end{equation}
(obtained by Proposition \ref{prp:2-7-2}) the monodromy automorphism of $H^j(\psi_h(j_!\L))_0$ corresponds to the one $Q_{j,x}\colon H^j(Rh_*{\cal F})_x \overset{\sim}{\longrightarrow} H^j(Rh_*{\cal F})_x$ obtained by the translation of the sections of the local system $H^j(Rh_*{\cal F})_x$ along the path $\gamma_x \colon [0,1] \longrightarrow D_{\eta_0}^*$, $\gamma_x(s)=e^{2\pi \sqrt{-1} s}x$ (see the discussions just after \cite[Proposition 4.2.2]{Dimca}). This automorphism $Q_{j,x}$ can be functorially constructed as follows. First, define a morphism $\tl{\Psi}\colon [0,1] \times D_{\eta_0}^* \longrightarrow D_{\eta_0}^* $ by $\tl{\Psi}(s,x)=e^{2 \pi \sqrt{-1} s}x$ and let $\pi \colon [0,1] \times D_{\eta_0}^* \longrightarrow D_{\eta_0}^*$ be the projection. For $q=0,1$, let $i_q\colon D_{\eta_0}^* \simeq \{q\} \times D_{\eta_0}^* \DOTSB\lhook\joinrel\longrightarrow [0,1] \times D_{\eta_0}^*$ be the inclusion and set $\Psi_q=\tl{\Psi}(q,*) =\tl{\Psi}\circ i_q\colon D_{\eta_0}^*\overset{\sim}{\longrightarrow} D_{\eta_0}^*$. Note that $\Psi_0=\Psi_1={\rm id}_{D_{\eta_0}^*}$ in this case. Then for $q=0,1$ we obtain an isomorphism
\begin{equation}
R\pi_*\tl{\Psi}^{-1}Rh_*{\cal F} \overset{\sim}{\longrightarrow} R\pi_* (i_q)_*(i_q)^{-1} \tl{\Psi}^{-1}Rh_*{\cal F} \simeq \Psi_q^{-1}Rh_*{\cal F}
\end{equation}
in ${\bf D}_{c}^{b} (D_{\eta_0}^*)$. Hence by setting $\Psi :=\Psi_1$ we obtain an automorphism of $Rh_*{\cal F}$
\begin{equation}
Rh_*{\cal F} \overset{\sim}{\longrightarrow} R\Psi_*\Psi^{-1}Rh_*{\cal F} \overset{\sim}{\longrightarrow} R\Psi_*\Psi_0^{-1}Rh_*{\cal F} =Rh_*{\cal F}
\end{equation}
which induces $Q_{j,x}$. Similarly, define a morphism $\tl{\Phi}\colon [0,1] \times {\mathbb C}^k \longrightarrow {\mathbb C}^k$ (where the integers $l$, $m$ and $d$ are as introduced in the next paragraph) by
\begin{equation}
\tl{\Phi}(s,z)=\left(e^{\frac{2\pi \sqrt{-1} }{md}s}z_1, e^{\frac{2\pi \sqrt{-1}}{md}s}z_2, \ldots , e^{\frac{2\pi \sqrt{-1}}{md}s}z_l, z_{l+1}, \ldots , z_k\right)
\end{equation}
and let $\varpi \colon [0,1] \times {\mathbb C}^k \longrightarrow {\mathbb C}^k$ be the projection. For $q=0,1$, set $\Phi_q=\tl{\Phi}(q,*)\colon {\mathbb C}^k \longrightarrow {\mathbb C}^k$. In this case, we have $\Phi_0={\rm id}_{{\mathbb C}^k}$, and $\Phi_1$ induces the monodromy automorphisms of the global Milnor fiber $h^{-1}(x)$ and the local one $F_0 =h^{-1}(x)\cap B(0; \varepsilon_0)$ for any $x \in D_{\eta_0}^*$. Then by setting $\Phi :=\Phi_1\colon {\mathbb C}^k \overset{\sim}{\longrightarrow} {\mathbb C}^k$ we obtain also isomorphisms
\begin{equation}
\Phi^{-1}j_!\L \overset{\sim}{\longleftarrow} R\varpi_* \tl{\Phi}^{-1}j_!\L \overset{\sim}{\longrightarrow} \Phi_0^{-1}j_!\L =j_!\L
\end{equation}
and hence an automorphism $R_{j,x}$ of $H^j(F_0;j_!\L)$ defined by
\begin{equation}
H^j(F_0;j_!\L) \longrightarrow H^j(F_0; R\Phi_*\Phi^{-1}j_!\L)\simeq H^j(F_0; R\Phi_*\Phi_0^{-1}j_!\L)=H^j(F_0;j_!\L).
\end{equation}
By the functorial constructions of $Q_{j,x}$ and $R_{j,x}$, we can easily check that via the isomorphism $H^j(Rh_* {\cal F})_x \simeq H^j(F_0;j_!\L)$ the automorphism $Q_{j,x}$ corresponds to $R_{j,x}$. Therefore, in order to calculate the monodromy zeta function $\zeta_{h,0}(j_!\L)(t)$ it suffices to calculate the one for the automorphisms $R_{j,x}\colon H^j(F_0;j_!\L) \overset{\sim}{\longrightarrow} H^j(F_0;j_!\L)$ induced by the isomorphism $\Phi^{-1}j_!\L \overset{\sim}{\longrightarrow} j_!\L $. Moreover, by the isomorphism \eqref{isom}, we have only to calculate the zeta function for the automorphisms of $H^j(h^{-1}(x) ;j_!\L)$ induced also by $\Phi^{-1}j_!\L \overset{\sim}{\longrightarrow} j_!\L$.
Now, without loss of generality, we may assume that there exists $1 \leq l \leq k$ such that $m_1, m_2, \ldots , m_l >0$ and $m_{l+1}= \cdots =m_k=0$. Moreover, by factoring out the greatest common divisor of the exponents, we may assume also that $h(z)=(z_1^{m_1}\cdots z_l^{m_l})^m$ with $m\geq 1$ and ${\rm gcd}(m_1,\ldots,m_l)=1$. Set $d:=m_1+\cdots +m_l$. Let $M=(m_{i,j})\in {\rm SL}(l;{\mathbb Z})$ be a unimodular matrix such that $m_{1,j}=m_j$ for $j=1,\ldots,l$. We define an isomorphism $\Lambda_M \colon ({\mathbb C}^*)^l\times {\mathbb C}^{k-l} \overset{\sim}{\longrightarrow} ({\mathbb C}^*)^l\times {\mathbb C}^{k-l}$ by
\begin{equation}
w=\Lambda_M(z)=(z_1^{m_{1,1}}\cdots z_l^{m_{1,l}}, \ldots, z_1^{m_{l,1}}\cdots z_l^{m_{l,l}}, z_{l+1}, \ldots, z_k)
\end{equation}
and an isomorphism $\Phi^{\prime} \colon {\mathbb C}^k \overset{\sim}{\longrightarrow} {\mathbb C}^k$ by
\begin{equation}
\Phi^{\prime}(w_1,\ldots,w_k)=\(e^{\frac{2\pi \sqrt{-1}}{m}}w_1,e^{\frac{2\pi \sqrt{-1} d_2}{md}}w_2,\ldots, e^{\frac{2\pi \sqrt{-1} d_l}{md}}w_l, w_{l+1},\ldots, w_k\),
\end{equation}
where we set $d_i:=m_{i,1}+\cdots+m_{i,l}$ for $i=2,\ldots ,l$. Moreover we define a function $h^{\prime} \colon ({\mathbb C}^*)^l \times {\mathbb C}^{k-l} \longrightarrow {\mathbb C}$ by $h^{\prime}(w)=w_1^m$. Then we have
\begin{gather}
\Lambda_M(h^{-1}(x))=h^{\prime -1}(x)\qquad (x \in D_{\eta_0}^*), \\
\Lambda_M\circ (\Phi|_{({\mathbb C}^*)^l\times {\mathbb C}^{k-l}})=(\Phi^{\prime}|_{({\mathbb C}^*)^l\times {\mathbb C}^{k-l}}) \circ \Lambda_M.
\end{gather}
Let $j^{\prime} \colon ({\mathbb C}^*)^k \DOTSB\lhook\joinrel\longrightarrow ({\mathbb C}^*)^l \times {\mathbb C}^{k-l}$ be the inclusion and consider the local system $\L^{\prime}:=(\Lambda_{M}|_{({\mathbb C}^*)^k})_*\L$ on $({\mathbb C}^*)^k$. Similarly to $R_{j,x}$, we can construct an automorphism $R^{\prime}_{j,x}$ of $H^j(h^{\prime -1}(x);j^{\prime}_!\L^{\prime})$ by constructing an isomorphism $\Phi^{\prime -1}j^{\prime}_!\L^{\prime} \overset{\sim}{\longrightarrow} j^{\prime}_!\L^{\prime}$. Then via the natural isomorphism $H^j(F_0 ;j_!\L) \simeq H^j(h^{-1}(x);j_!\L) \simeq H^j(h^{\prime -1}(x);j^{\prime}_!\L^{\prime})$ induced by $\Lambda_M$ the automorphism $R_{j,x}$ corresponds to $R^{\prime}_{j,x}$. Define an automorphism $\Phi^{\prime \prime} \colon {\mathbb C}^k \overset{\sim}{\longrightarrow} {\mathbb C}^k$ by
\begin{equation}
\Phi^{\prime \prime}(w_1,\ldots,w_k)=\(e^{\frac{2\pi \sqrt{-1}}{m}}w_1,w_2,\ldots, w_k\).
\end{equation}
Then we can construct also an automorphism $R^{\prime \prime}_{j,x}$ of $H^j(h^{\prime -1}(x);j^{\prime}_!\L^{\prime})$ by constructing an isomorphism $\Phi^{\prime \prime -1}j^{\prime}_!\L^{\prime} \overset{\sim}{\longrightarrow} j^{\prime}_!\L^{\prime}$. We define a homotopy morphism $\Theta \colon h^{\prime -1}(x) \times [0,1] \longrightarrow h^{\prime -1}(x)$ such that $\Theta(\cdot,0)= \Phi^{\prime \prime}|_{h^{\prime -1}(x)}$ and $\Theta(\cdot,1)=\Phi^{\prime}|_{h^{\prime -1}(x)}$ by
\begin{equation}
\Theta(w,s)=\(e^{\frac{2\pi \sqrt{-1}}{m}}w_1,e^{\frac{2\pi \sqrt{-1}d_2}{md}s}w_2,\ldots, e^{\frac{2\pi \sqrt{-1} d_l}{md}s}w_l,w_{l+1}, \ldots, w_k\).
\end{equation}
Let $p\colon h^{\prime -1}(x) \times [0,1] \longrightarrow h^{\prime -1}(x)$ be the projection. Then similarly to $\Phi^{-1}j_!\L \overset{\sim}{\longrightarrow} j_!\L$, we can construct a natural isomorphism $\Theta^{-1}(j^{\prime}_!\L^{\prime}|_{h^{\prime -1}(x)}) \overset{\sim}{\longrightarrow} p^{-1}(j^{\prime}_!\L^{\prime}|_{h^{\prime -1}(x)})$. By applying Lemma \ref{lem:5-3-2} below to $Y=h^{\prime -1}(x)$ and $\Theta$, we get $R^{\prime}_{j,x}=R^{\prime \prime}_{j,x}$ and
\begin{equation}
\zeta_{h,0}(j_!\L)(t)=\prod_{j=0}^{\infty} \det({\rm id} -tR^{\prime \prime}_{j,x})^{(-1)^j}.
\end{equation}
Note that each connected component $K$ of $h^{\prime -1}(x)$ is isomorphic to $({\mathbb C}^*)^{l-1} \times {\mathbb C}^{k-l}$. Moreover by our assumption $k \geq 2$ and Proposition \ref{prp:5-2} the Euler-Poincar{\'e} index $\chi(R\varGamma(K;j^{\prime}_!\L^{\prime}))$ of $R\varGamma(K;j^{\prime}_!\L^{\prime})$ is zero. Then the result follows from the classical arguments as in \cite[Example (3.7)]{Oka-2} and the K{\" u}nneth formula. This completes the proof. \hfill $\Box$
\end{proof}
\begin{lemma}\label{lem:5-3-2}
Let $f_0,f_1 \colon Y \longrightarrow X$ be two morphisms of topological spaces. Set $I=[0,1]$ and let $p_Y \colon Y \times I \longrightarrow Y$ be the projection. Assume that there exists a homotopy morphism $\Theta \colon Y \times I \longrightarrow X$ between $f_0$ and $f_1$ such that $\Theta(\cdot ,q)=f_q$ for $q=0,1$. For ${\cal F}\in {\bf D}^{b}(Y)$ and ${\cal G} \in {\bf D}^{b}(X)$, assume that there exists an isomorphism $\Phi \colon \Theta^{-1}{\cal G} \overset{\sim}{\longrightarrow} p_Y^{-1}{\cal F}$. For $q=0,1$, let $f_q^{\sharp} \colon R\varGamma(X;{\cal G}) \longrightarrow R\varGamma(Y;{\cal F})$ be the morphism obtained by
\begin{equation}
\xymatrix@C=12mm{
R\varGamma(X;{\cal G}) \ar[r] &R\varGamma(X; Rf_{q *} f_q^{-1}{\cal G}) \simeq R\varGamma(Y;f_q^{-1}{\cal G}) \ar[r]^{\hspace*{20mm}\Phi |_{Y \times \{ q\} } } & R\varGamma(Y;{\cal F})}.
\end{equation}
Then we have $f_0^{\sharp}=f_1^{\sharp}$.
\end{lemma}
\begin{proof}
For $q=0,1$, let $i_q \colon Y \simeq Y \times \{ q\} \DOTSB\lhook\joinrel\longrightarrow Y \times I$ be the embedding. Then we obtain the following commutative diagram:
\begin{equation}
\xymatrix@C=11mm@R=6mm{
R\varGamma(X;{\cal G}) \ar[r] \ar[rd] & R\varGamma(Y\times I; \Theta^{-1}{\cal G}) \ar[r]^{\Phi} \ar[d] & R\varGamma(Y\times I; p_Y^{-1}{\cal F}) \ar[d] & R\varGamma(Y;{\cal F}) \ar[l]_{\hspace{5mm}\sim}\\
& R\varGamma(Y;i_q^{-1}\Theta^{-1}{\cal G}) \ar[r]^{\Phi |_{Y \times \{ q\} } } & R\varGamma(Y;i_q^{-1}p_Y^{-1}{\cal F}). \ar@{=}[ru]}
\end{equation}
This proves the lemma.\hfill $\Box$
\end{proof}
With these propositions at hand, we can prove the following explicit formula for $\zeta_f({\cal F})(t)\in {\mathbb C}(t)^*$ in the same way as in the proof of Theorem \ref{thm:3-4}. For each $\Delta \prec K(\SS)$, by fixing an isomorphism $M(\SS \cap \Delta) \simeq {\mathbb Z}^{\d \Delta}$ we obtain an isomorphism $T_{\Delta}={\rm Spec}({\mathbb C}[M(\SS \cap \Delta)]) \simeq ({\mathbb C}^*)^{\d \Delta}$. We regard $\L_{\Delta}$ as a local system of rank $r_{\Delta}$ on $({\mathbb C}^*)^{\d \Delta}$ via this isomorphism and denote by $A_j^{\Delta} \in GL_{r_{\Delta}}({\mathbb C})$ ($j=1,2,\ldots, \d \Delta$) the monodromy matrices of $\L_{\Delta}$ along the loops
\begin{equation}
\{(1,1,\ldots, 1, \overset{j}{\check{e^{\sqrt{-1}\theta}}}, 1,\ldots, 1)\in ({\mathbb C}^*)^{\d \Delta}\ |\ 0 \leq \theta \leq 2\pi\}
\end{equation}
in $({\mathbb C}^*)^{\d \Delta} \simeq T_{\Delta}$, which are determined up to conjugacy. Note that the matrices $A_1^{\Delta}, A_2^{\Delta}, \ldots, A_{\d \Delta}^{\Delta}$ mutually commute. Finally by using the inner conormal vectors $u_1^{\Delta}, u_2^{\Delta},\ldots, u_{n(\Delta)}^{\Delta} \in M(\SS \cap \Delta)^*$ of the compact faces $\gamma_1^{\Delta}, \gamma_2^{\Delta}, \ldots, \gamma_{n(\Delta)}^{\Delta}$ of $\Gamma_+(f) \cap \Delta$ introduced in Section \ref{sec:3} we set
\begin{equation}
B_i^{\Delta}:=\prod_{j=1}^{\d \Delta} (A_j^{\Delta})^{u_{i,j}^{\Delta}} \in GL_{r_{\Delta}}({\mathbb C})
\end{equation}
for $1 \leq i \leq n(\Delta)$, where $(u_{i,1}^{\Delta}, u_{i,2}^{\Delta}, \ldots, u_{i,\d \Delta}^{\Delta})\in {\mathbb Z}^{\d \Delta}$ is the image of $u_i^{\Delta}$ by the isomorphism $M(\SS \cap \Delta)^*\simeq {\mathbb Z}^{\d \Delta}$.
\begin{theorem}\label{thm:5-4}
Assume that $f=\sum_{v \in \SS} a_v \cdot v \in {\mathbb C}[\SS]$ is non-degenerate. Then the monodromy zeta function $\zeta_f({\cal F})(t)=\zeta_{f,0}({\cal F})(t)\in {\mathbb C}(t)^*$ of the $T$-invariant constructible sheaf ${\cal F}$ at $0 \in X(\SS)$ is given by
\begin{equation}
\zeta_f({\cal F})(t)=\prod_{\Gamma_+(f) \cap \Delta \neq \emptyset} \zeta_{f,\Delta}({\cal F})(t),
\end{equation}
where for each face $\Delta \prec K(\SS)$ of $K(\SS)$ such that $\Gamma_+(f) \cap \Delta\neq \emptyset$ we set
\begin{equation}
\zeta_{f,\Delta}({\cal F})(t)=\prod_{i=1}^{n(\Delta)}\det ({\rm id}-t^{d_i^{\Delta}}B_i^{\Delta})^{(-1)^{\d \Delta-1}{\rm Vol}_{{\mathbb Z}}(\gamma_i^{\Delta})}.
\end{equation}
\end{theorem}
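In the special case where ${\cal F}={\mathbb C}_{X(\SS)}$ is the constant sheaf we have $r_{\Delta}=1$ and $A_j^{\Delta}={\rm id}$, hence $B_i^{\Delta}={\rm id}$, for any $\Delta$ and $i$, and Theorem \ref{thm:5-4} recovers the formula of Theorem \ref{thm:3-4}.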
By the same methods, for non-degenerate complete intersection subvarieties $W=\{f_1=\cdots =f_{k-1}=0\}\supset V=\{f_1=\cdots =f_{k-1}=f_k=0\}$ in $X(\SS)$ and $T$-invariant constructible sheaves ${\cal F}$ on $X(\SS)$, we can also give a formula for the monodromy zeta function
\begin{equation}
\zeta_{f_k}({\cal F}_W)(t):=\zeta_{f_k,0}({\cal F}_W)(t) \in {\mathbb C}(t)^*
\end{equation}
of ${\cal F}_W ={\cal F} \otimes_{{\mathbb C}_{X(\SS)}} {\mathbb C}_W \in {\bf D}_{c}^{b}(X(\SS))$ at $0 \in X(\SS)$. The precise formulation is now easy and left to the reader.
\numberwithin{equation}{section}
\newtheorem{dfn}{Definition}[section]
\newtheorem{thm}[dfn]{Theorem}
\newtheorem{lma}[dfn]{Lemma}
\newtheorem{hypo}[dfn]{Hypothesis}
\newtheorem{ppsn}[dfn]{Proposition}
\newtheorem{cor}[dfn]{Corollary}
\newtheorem{xmpl}[dfn]{Example}
\newtheorem{rmrk}[dfn]{Remark}
\DeclarePairedDelimiterX{\norm}[1]{\lVert}{\rVert}{#1}
\DeclarePairedDelimiterX{\bnorm}[1]{\big\lVert}{\big\rVert}{#1}
\DeclarePairedDelimiterX{\Bnorm}[1]{\Big\lVert}{\Big\rVert}{#1}
\newcommand\at[2]{\left.#1\right|_{#2}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Rplus}{\R_+}
\newcommand{\Nat}{\mathbb{N}}
\newcommand{\Comp}{\mathbb{C}}
\newcommand{\Int}{\mathbb{Z}}
\newcommand{\D}{\mathbb{D}}
\newcommand{\sfd}{\mathsf{D}}
\newcommand{\nrm}[2]{\left\|#1\right\|_{#2}}
\newcommand{\Dts}{\sfd_{T^*}}
\newcommand{\Dt}{\sfd_{T}}
\newcommand{\Dtone}{\sfd_{T_1}}
\newcommand{\Dttwo}{\sfd_{T_2}}
\newcommand{\hil}{\mathcal{H}}
\newcommand{\hils}{\mathcal{B}_2(\hil)}
\newcommand{\boh}{\mathcal{B}_1(\hil)}
\newcommand{\bh}{\mathcal{B}(\hil)}
\newcommand{\bthrh}{\mathcal{B}_3(\hil)}
\newcommand{\bfrh}{\mathcal{B}_4(\hil)}
\newcommand{\bnh}{\mathcal{B}_n(\hil)}
\newcommand{\balphah}{\mathcal{B}_\alpha(\hil)}
\newcommand{\bsa}{\mathcal{B}_{sa}(\hil)}
\newcommand{\Tr}{\operatorname{Tr}}
\newcommand{\dds}{\dfrac{d}{ds}}
\newcommand{\ddt}{\dfrac{d}{dt}}
\newcommand{\ipi}{\int_{0}^{2\pi}}
\newcommand{\cir}{\mathbb{T}}
\newcommand{\la}{\langle}
\newcommand{\ra}{\rangle}
\newcommand{\tl}{\tau_{_l}}
\newcommand{\fl}{f_{_l}}
\newcommand{\gl}{g_{_l}}
\newcommand{\fkl}{f_{_{kl}}}
\newcommand{\gkl}{g_{_{kl}}}
\newcommand{\Mcal}{\mathcal{M}}
\newcommand{\Hcal}{\mathcal{H}}
\newcommand{\Bcal}{\mathcal{B}}
\newcommand{\mR}{\mathcal{R}}
\newcommand{\Cr}{C_\mR}
\newcommand{\Sr}{\S_\mR}
\newcommand{\mB}{\mathcal{B}}
\newcommand{\W}{\mathcal{W}}
\newcommand{\Wr}{\mathcal{W}^0_\mR}
\newcommand{\mC}{\mathfrak{C}}
\newcommand{\mL}{\mathfrak{L}}
\newcommand{\mW}{\mathfrak{W}}
\newcommand{\Dhr}{\hat{\D}_\mR}
\newcommand{\mN}{\mathcal{N}}
\newcommand{\M}{\mathcal{M}}
\renewcommand{\P}{\mathcal{P}}
\newcommand{\mK}{\mathcal{K}}
\renewcommand{\O}{O}
\newcommand{\res}{\mathcal{R}}
\def\Re{{\mathrm{Re}\,}}
\def\Im{{\mathrm{Im}\,}}
\newcommand{\vect}[2]{\begin{pmatrix}#1\\#2\end{pmatrix}}
\begin{document}
\title[Approximation of the spectral action functional]
{Approximation of the spectral action functional in the case of $\tau$-compact resolvents}
\author[Chattopadhyay] {Arup Chattopadhyay}
\address{Department of Mathematics, Indian Institute of Technology Guwahati, Guwahati, 781039, India}
\email{arupchatt@iitg.ac.in, 2003arupchattopadhyay@gmail.com}
\author[Pradhan]{Chandan Pradhan}
\address{Department of Mathematics, Indian Institute of Technology Guwahati, Guwahati, 781039, India}
\email{chandan.math@iitg.ac.in, chandan.pradhan2108@gmail.com}
\author[Skripka] {Anna Skripka}
\address{Department of Mathematics and Statistics, University of New Mexico, 311 Terrace Street NE, Albuquerque, NM 87106, USA}
\email{skripka@math.unm.edu}
\subjclass[2010]{47A55}
\keywords{Spectral action functional, multiple operator integral}
\begin{abstract}
We establish estimates and representations for the remainders of Taylor approximations of the spectral action functional $V\mapsto\tau(f(H_0+V))$ on bounded self-adjoint perturbations, where $H_0$ is a self-adjoint operator with $\tau$-compact resolvent in a semifinite von Neumann algebra and $f$ belongs to a broad set of compactly supported functions including $n$-times differentiable functions with bounded $n$-th derivative. Our results significantly extend analogous results in \cite{SkAnJOT}, where $f$ was assumed to be compactly supported and $(n+1)$-times continuously differentiable. If, in addition, the resolvent of $H_0$ belongs to the noncommutative $L^n$-space, stronger estimates are derived and extended to noncompactly supported functions with suitable decay at infinity.
\end{abstract}
\maketitle
\section{Introduction}
Let $\Mcal$ be a semifinite von Neumann algebra acting on a separable Hilbert space $\Hcal$ equipped with a normal faithful semifinite trace $\tau$. Let $H_0$ be a closed densely defined self-adjoint operator affiliated with $\Mcal$ and assume that its resolvent is $\tau$-compact.
Examples of such operators include differential operators on compact Riemannian manifolds
(see, e.g., \cite[Chapter 3, Section B]{BE86} or \cite[Chapter 3, Section 6]{Kato})
and generalized Dirac operators of unital spectral triples (see, e.g., \cite{NuSkJST}).
For a sufficiently nice function $f$ and a self-adjoint element $V$ in $\Mcal$, which models a gauge potential, we consider a spectral action functional $V\mapsto\tau(f(H_0+V))$. The latter was introduced in \cite{CC97} to encompass different field actions in noncommutative geometry and recently received considerable attention in the literature. Counterparts of the spectral action functional also arise in problems of mathematical physics on deterministic and random Dirac and Schr\"{o}dinger operators (see, e.g., \cite{S21,ST19}).
A perturbation approach to the spectral action functional was taken in \cite{vNvS21a}, where a noncommutative analog of the Taylor series expansion served as a starting point to understanding the structure of gauge fluctuations. Analytical aspects of Taylor approximations of the spectral action functional were also studied in \cite{SA,SkAnJOT}.
In this paper we significantly relax assumptions imposed on admissible functions $f$ in \cite{SA,SkAnJOT}. We note that it is important to relax assumptions on $f$ appearing in the spectral action since that function might be prescribed by the model (see, e.g., \cite{CCS}).
Given a natural number $n\in\mathbb{N}$ and suitable $f$ and $H_0$, we denote the $n$-th order Taylor remainder of the spectral action functional $V\mapsto\tau\big(f(H_0+V)\big)$ by
\begin{align}\label{a5}
\tau\left(\mathcal{R}_{H_0,f,n}(V)\right)=\tau\Big(f(H_0+V)-f(H_0)
-\sum_{k=1}^{n-1}\,\frac{1}{k!}\frac{d^k}{ds^k}\,f(H_0+sV)\big\lvert_{s=0}\Big).
\end{align}
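In particular, for $n=1$ the sum in \eqref{a5} is empty and for $n=2$ it consists of the single term with $k=1$, so that
\begin{align*}
\tau\left(\mathcal{R}_{H_0,f,1}(V)\right)&=\tau\big(f(H_0+V)-f(H_0)\big),\\
\tau\left(\mathcal{R}_{H_0,f,2}(V)\right)&=\tau\Big(f(H_0+V)-f(H_0)-\frac{d}{ds}\,f(H_0+sV)\big\lvert_{s=0}\Big).
\end{align*}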
In Theorem \ref{mainthm} we establish that
\begin{align}\label{intro1}
\left|\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)\right|\leq D_{a,b,n,H_0,V} \|f^{(n)}\|_\infty
\end{align} for every self-adjoint $H_0$ with $\tau$-compact resolvent
and every $n$-times differentiable function $f$ compactly supported in $(a,b)$ with bounded $f^{(n)}$, and we derive an upper bound on the constant $D_{a,b,n,H_0,V}$ revealing its explicit dependence on $H_0,V,n,a,b$.
We also establish the trace formula
\begin{align}
\label{tf}
\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)
=\int_\R f^{(n)}(\lambda)\eta_{n,H_0,V}(\lambda)d\lambda
\end{align}
for every function $f\in C_c^{n-1}((a,b))$ such that $f^{(n)}$ exists almost everywhere and $f^{(n)}\in L^2(\R)$, that is, for every $f$ in the class $F_c^n((a,b))$ defined in Section~2. The real-valued function $\eta_{n,H_0,V}\in L^1((a,b))$ is determined by \eqref{tf} uniquely up to a polynomial summand of degree at most $n-1$.
In Theorem~\ref{resolvent trace thm} we relax the differentiability assumption and remove the support restriction on functions $f$ satisfying \eqref{tf} under the stronger assumption on the operator $H_0$. Namely, in the case when $(H_0-iI)^{-1}$ belongs to the $\tau$-Schatten-von Neumann ideal associated with $(\Mcal,\tau)$ (see the definition \eqref{tausvn}), we establish \eqref{tf} for $(n-1)$-times continuously differentiable functions with suitable decay at infinity.
The locally integrable real-valued function $\eta_{n,H_0,V}$ is determined by \eqref{tf} uniquely up to a polynomial summand of degree at most $n-1$ and it satisfies the estimate
\begin{align*}
|\eta_n(x)|\leq \mathrm{const}_n\,(2+\|V\|)\, \|V\|^{n-1}\,\|(H_0-iI)^{-1}\|_n^n\,(1+|x|)^n
\end{align*}
for every $x\in\R$, where $\|\cdot\|$ denotes the operator norm and $\|\cdot\|_n$ the noncommutative $L^n$-norm.
Our bound \eqref{intro1} extends the analogous bound of \cite{SkAnJOT}, where the additional assumption $f\in C_c^{n+1}((a,b))$ was imposed.
The formula \eqref{tf} was earlier established under the additional restriction $f\in C_c^3((a,b))$ in the case $n=1$ in \cite[Theorem 2.5]{ACD} and under the restriction $f\in C_c^{n+1}((a,b))$ in the case $n\ge 2$ in \cite{SkAnJOT}. Other results in this direction were obtained in \cite[Theorem 18 and Corollary 19]{SJ} and \cite[Theorems 3.4 and 3.10]{SA} in the particular case when $\Mcal$ is the algebra of bounded linear operators $\Bcal(\Hcal)$.
The result of Theorem~\ref{resolvent trace thm} relaxes the differentiability assumption made in \cite[Theorem 4.1]{NuSkJST} and extends the trace formula from the setting of $(\Bcal(\Hcal),\Tr)$ to the setting of a general semifinite pair $(\Mcal,\tau)$.
The paper is organized as follows: preliminary results are discussed in Section~2; our main results are proved in Section~3 and Section~4 under the assumptions that $H_0$ has $\tau$-compact resolvent and that the resolvent of $H_0$ belongs to the noncommutative $L^n$-space, respectively.
\section{Preliminaries and notation}
We denote positive constants by letters $c,d,C,D$ with subscripts indicating dependence on their parameters. For instance, the symbol $c_\alpha$ denotes a constant depending only on the parameter $\alpha$.
\paragraph{\bf Function spaces.}
Let $n\in\mathbb N$ and let $X$ be an interval in $\mathbb{R}$ possibly coinciding with $\mathbb{R}$. Let $B(X)$ denote the space of all bounded functions on $X$, $C(X)$ the space of all continuous functions on $X$, $C_0(\R)$ the space of continuous functions on $\R$ decaying to $0$ at infinity, $C^n(X)$ the space of $n$-times continuously differentiable functions on $X$. Let $C_b^n(X)$ denote the subset of $C^n(X)$ of such $f$ for which $f^{(n)}$ is bounded. Let $C_c^n(\mathbb{R})$ denote the subspace of $C^n(\mathbb{R})$ consisting of compactly supported functions.
We also use the notation $C^0(\mathbb{R})=C(\mathbb{R})$. Let $a,b\in\mathbb{R}$. Let $C_c^n((a,b))$ denote the subspace of $C_c^n(\R)$ consisting of the functions whose closed support is contained in $(a,b)$, let $D_c^n((a,b))$ denote the space of $n$-times differentiable functions in $C_c((a,b))$, and let $F_c^n((a,b))$ denote the subspace of $C_c^{n-1}((a,b))$ consisting of $f$ such that $f^{(n)}$ exists almost everywhere in $(a,b)$ and $f^{(n)}\in L^2((a,b))$. We note that for every $f\in F_c^n((a,b))$ the function $f^{(n-1)}$ is absolutely continuous. We write $f(x)=o(g(x))$ if for all $\epsilon>0$, we have $|f(x)|\leq\epsilon g(x)$ for all $x$ outside a compact set depending on $\epsilon$.
Given $f\in L^1(\R)$, let $\hat{f}$ denote the Fourier transform of $f$.
We will use the well-known property that every $f\in C_c^n(\R)$ satisfies $\widehat{f^{(n-1)}}\in L^1(\R)$.
We will need the following elementary lemma.
\begin{lma}\label{l3}
Let $a,b\in\R,~a<b$, $k\in\mathbb{N}$, and $f\in D_c^k((a,b))$ be such that $f^{(k)}\in B([a,b])$. Then,
\begin{align*}
\|f^{(j)}\|_{\infty}\leq (b-a)^{k-j} \|f^{(k)}\|_{\infty}, \quad j=0,\dots,k.
\end{align*}
\end{lma}
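A one-line justification, under the lemma's assumptions: since $f$ is compactly supported in $(a,b)$, each $f^{(j)}$ with $j<k$ vanishes at $a$, so the mean value theorem gives
\begin{align*}
|f^{(j)}(x)|=|f^{(j)}(x)-f^{(j)}(a)|\leq (b-a)\,\|f^{(j+1)}\|_{\infty},\quad x\in[a,b],
\end{align*}
and iterating this bound $k-j$ times yields the claim.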
\smallskip
\paragraph{\bf Operators with $\tau$-compact resolvent.}
In the sequel we fix a semifinite von Neumann algebra $\Mcal$ endowed with a normal faithful semifinite trace $\tau$ and acting on a separable Hilbert space $\Hcal$. An example of such pair $(\Mcal,\tau)$ is the algebra of bounded linear operators $\Bcal(\Hcal)$ on $\Hcal$ equipped with the canonical trace $\Tr$. A projection $P\in\Mcal$ is called $\tau$-finite if $\tau(P)<\infty$. We note that a $\Tr$-finite projection has finite rank.
Let $\mu_t(A)$ denote the $t$-th generalized $s$-number \cite[Definition 2.1]{FK} of a $\tau$-measurable \cite[Definition 1.2]{FK} operator $A$ affiliated with $\Mcal$. An operator $A\in\Mcal$ is said to be {\it $\tau$-compact} if and only if $\lim_{t\rightarrow\infty}\mu_t(A)=0$. If $(\Mcal,\tau)=(\Bcal(\Hcal),\Tr)$, then the concept of $\tau$-compactness coincides with the concept of compactness.
When $H$ is a closed densely defined self-adjoint operator affiliated with $\Mcal$, we briefly write that {\it $H$ is a self-adjoint operator affiliated with $\Mcal$.}
We say that a self-adjoint operator $H$ {\it has $\tau$-compact resolvent} if $H$ is affiliated with $\Mcal$ and $(H-zI)^{-1}$ is $\tau$-compact for some and, hence, for each resolvent point $z$ of $H$.
The following useful property is a consequence of the second resolvent identity (see, e.g., \cite[Lemma 1.3]{ACD}).
\begin{lma
\label{l1}
Let $H_0$ be a self-adjoint operator with $\tau$-compact resolvent and let $V$ be a self-adjoint operator in $\Mcal$. Then, $H=H_0+V$ also has $\tau$-compact resolvent.
\end{lma}
Let $E_H(\cdot)$ denote the spectral measure of a self-adjoint operator $H$.
The following result is a consequence of Lemma \ref{l1} and \cite[Lemma 1.4]{ACD}.
\begin{lma}\label{l2}
Let $H_0$ be a self-adjoint operator with $\tau$-compact resolvent and let $V$ be a self-adjoint operator in $\Mcal$.
Then, for every compact subset $\Delta\subset\mathbb{R}$, the spectral projections $E_{H_0}(\Delta)$ and $E_{H_0+V}(\Delta)$ are $\tau$-finite.
\end{lma}
Let $f\in C^n(\R)$. Recall that the divided difference of order $n$ is an operation on the function $f$ of one
(real) variable, and is defined recursively as follows:
\begin{align*}
&f^{[0]}(\lambda)=f(\lambda),\\
&f^{[n]}(\lambda_0,\lambda_1,\ldots,\lambda_n)=\begin{cases*}
\frac{f^{[n-1]}(\lambda_0,\lambda_1,\ldots,\lambda_{n-2},\lambda_n)-f^{[n-1]}(\lambda_0,\lambda_1,\ldots,\lambda_{n-2},\lambda_{n-1})}{\lambda_n-\lambda_{n-1}} \quad \text{if}\quad \lambda_n\neq\lambda_{n-1},\\
\frac{\partial}{\partial \lambda}f^{[n-1]}(\lambda_0,\lambda_1,\ldots,\lambda_{n-2},\lambda)\big|_{\lambda=\lambda_{n-1}}\quad \text{if}\quad \lambda_n=\lambda_{n-1}.
\end{cases*}
\end{align*}
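For instance, $f^{[1]}(\lambda_0,\lambda_1)=\frac{f(\lambda_1)-f(\lambda_0)}{\lambda_1-\lambda_0}$ for $\lambda_0\neq\lambda_1$, and for $f(\lambda)=\lambda^2$ one computes
\begin{align*}
f^{[1]}(\lambda_0,\lambda_1)=\lambda_0+\lambda_1,\qquad f^{[2]}(\lambda_0,\lambda_1,\lambda_2)=1,
\end{align*}
in agreement with the general facts that $f^{[n]}$ is symmetric in its arguments and that $f^{[n]}(\lambda,\ldots,\lambda)=f^{(n)}(\lambda)/n!$.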
Let $L^p$, $1\leq p<\infty$, denote the noncommutative $L^p$-space associated with $(\mathcal{M},\tau)$, that is,
$$L^p=\big\{A \text{ affiliated with } \mathcal{M}:\;\|A\|_p:=(\tau(|A|^p))^{1/p}<\infty\big\}$$
and let $\mathcal{L}^p$ denote the $\tau$-Schatten-von Neumann ideal associated with $(\Mcal,\tau)$, that is,
\begin{align}
\label{tausvn}
\mathcal{L}^p=L^p\cap\Mcal.
\end{align}
When $p=\infty$ we use the convention $\mathcal{L}^\infty=\Mcal$, $\|\cdot\|_\infty=\|\cdot\|$.
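For example, in the case $(\Mcal,\tau)=(\Bcal(\Hcal),\Tr)$ the ideal $\mathcal{L}^p$ is the classical Schatten-von Neumann ideal of compact operators $A$ with $\Tr(|A|^p)<\infty$.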
\smallskip
\paragraph{\bf Multilinear operator integrals.}
Let $H$ be a self-adjoint operator affiliated with $\mathcal{M}$ and let $f\in C^{n}(\R)$ be such that $\widehat{f^{(n)}}\in L^1(\R)$. Let $p_k\in[1,\infty], 1\leq k\leq n$, and $E_{l,m}= E_H\big(\big[\frac{l}{m},\frac{l+1}{m}\big)\big)$ for $m\in \mathbb{N}$ and $l\in \mathbb{Z}$.
Define a multilinear map on $\mathcal{L}^{p_1}\times\cdots\times\mathcal{L}^{p_n}$ by
\begin{align}\label{a1}
&T^{H,\ldots,H}_{f^{[n]}}(V_1,V_2,\ldots,V_n)\\
\nonumber
&=\lim_{m\to\infty}\lim_{N\to\infty}\sum_{|l_0|,|l_1|,\ldots,|l_n|\leq N}f^{[n]}\Big(\frac{l_0}{m},\frac{l_1}{m},\ldots,\frac{l_n}{m}\Big)E_{l_0,m}V_1E_{l_1,m}V_2E_{l_2,m}\cdots V_n E_{l_n,m},
\end{align}
where the limits are evaluated in the norm $\|\cdot\|_p$, $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}+\cdots+\frac{1}{p_n}$. The existence of the limits in \eqref{a1} is justified in \cite[Lemma 3.5]{PSS}. We call $T^{H,\ldots,H}_{f^{[n]}}$ defined in \eqref{a1} a {\it multilinear operator integral with symbol $f^{[n]}$} and write $T_{f^{[n]}}$ when there is no ambiguity which element $H$ is used.
Discussion of multiple operator integrals, including those with more general symbols, and their applications can be found in \cite{ST19}.
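For orientation, in the simplest case $n=1$ the map \eqref{a1} reduces to a double operator integral, and for sufficiently regular $f$ (e.g., $f'\in C_0(\R)$ with $\widehat{f'}\in L^1(\R)$) and $V\in\mathcal{L}^1$, collapsing the symbol on the diagonal and taking the trace (cf. the proof of Theorem \ref{est-thm} below) gives the familiar identity
\begin{align*}
\tau\big(T^{H,H}_{f^{[1]}}(V)\big)=\tau\big(f'(H)V\big).
\end{align*}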
It was shown in \cite[Lemma 3.1, Theorem 3.2]{SA} that the multilinear operator integral given by \eqref{a1} is bounded for all $f\in C^{n+1}_c(\R)$ when $H$ has compact resolvent.
The following estimate is a consequence of \cite[Theorem 5.3 and Remark 5.4]{PSS} and \cite[Theorem 4.4.7]{ST19}.
\begin{thm}\label{inv-thm}
Let $k\in\mathbb{N}$ and let $\alpha,\alpha_1,\ldots,\alpha_k\in(1,\infty)$ satisfy $\tfrac{1}{\alpha_1}+\cdots+\tfrac{1}{\alpha_k}=\tfrac{1}{\alpha}$. Let $H$ and $\tilde{H}$ be two self-adjoint operators affiliated with $\mathcal{M}$. Assume that $V_\ell\in\mathcal{L}^{\alpha_\ell},\, 1\leq \ell\leq k$. Then there exists $c_{\alpha,k}>0$ such that
\begin{align}\label{est}
\|T^{\tilde{H},H,\ldots,H}_{f^{[k]}}(V_1,V_2,\ldots,V_k)\|_{\alpha}\leq c_{\alpha,k}\, \|f^{(k)}\|_\infty \prod\limits_{1\leq \ell \leq k}\|V_\ell\|_{\alpha_\ell}
\end{align}
for every $f\in C_b^k(\R).$
\end{thm}
We also have the following bound for the seminorm $|\tau(\cdot)|$ of a multilinear operator integral.
\begin{thm}\label{est-thm}
Let $k\in\mathbb{N}$ and let $\alpha_1,\ldots,\alpha_k\in(1,\infty)$ satisfy $\tfrac{1}{\alpha_1}+\cdots+\tfrac{1}{\alpha_k}=1$. Let $H$ be a self-adjoint operator affiliated with $\mathcal{M}$. Assume that $V_\ell\in\mathcal{L}^{\alpha_\ell},\, 1\leq \ell\leq k$. Then, for $c_{1,k}>0$ satisfying \eqref{est},
\begin{align}\label{ss}
|\tau(T^{H,H,\ldots,H}_{f^{[k]}}(V_1,\ldots,V_k))|&\leq c_{1,k}\|f^{(k)}\|_\infty\prod\limits_{1\leq \ell \leq k}\|V_\ell\|_{\alpha_\ell}
\end{align}
for every $f$ with $f^{(k)}\in C_0(\R)$ satisfying $\widehat{f^{(k)}}\in L^1(\R)$.
\end{thm}
\begin{proof}
By the definition of the multiple operator integral \eqref{a1} and cyclicity of the trace,
\begin{align}\label{sss}
\tau\big(T^{H,H,\ldots,H}_{f^{[k]}}(V_1,\ldots,V_k)\big)
=\tau\big(T^{H,H,\ldots,H}_{{\tilde{f}}^{[k]}}(V_1,\ldots,V_{k-1})V_k\big),
\end{align}
where ${\tilde{f}}^{[k]}(\lambda_0,\lambda_1,\ldots,\lambda_{k-1})=f^{[k]}(\lambda_0,\lambda_1,\ldots,\lambda_{k-1},\lambda_{k-1})$. Therefore, by H\"{o}lder's inequality, Theorem \ref{inv-thm}, and \cite[Remark 5.4]{PSS} applied to \eqref{sss}, we obtain \eqref{ss}.
\end{proof}
Let $a,b\in \R$, $a<b$, and $\epsilon>0$. Let $a_\epsilon=a-\epsilon,\, b_\epsilon=b+\epsilon$. Define the function $\Phi_\epsilon$ on $\R$ by
\begin{align}\label{indicator}
\Phi_\epsilon(x)=(h_1(x)- h_2(x))^4,
\end{align} where
\begin{align*}
&h_1(x)=\dfrac{\int_{a_\epsilon}^{x}\Phi(t-a_\epsilon)\Phi(a-t) \,dt}{\int_{a_\epsilon}^{a}\Phi(t-a_\epsilon)\Phi(a-t) \,dt},\quad h_2(x)=\dfrac{\int_{b}^{x}\Phi(t-b)\Phi(b_\epsilon-t) \,dt}{\int_{b}^{b_\epsilon}\Phi(t-b)\Phi(b_\epsilon-t) \,dt},\\
& \Phi(x)=\begin{cases}
e^{-\frac{1}{x}} & \text{ if } x>0,\\
0 & \text{ if } x\leq 0.
\end{cases}
\end{align*}
Note that $\Phi_\epsilon\lvert_{(a,b)}=1$, $\Phi_\epsilon^{1/4}\in C_c^\infty((a_\epsilon,b_\epsilon))$, and $\|\Phi_\epsilon\|_\infty=1$. Moreover, if $H$ has $\tau$-compact resolvent, then by the spectral theorem, $\Phi_\epsilon(H)\in\mathcal{L}^1$ and $\|\Phi_\epsilon(H)\|_1\leq \tau(E_H((a_\epsilon,b_\epsilon)))\|\Phi_\epsilon\|_\infty$ (see \cite[Proposition 2.3]{SkAnJOT}).
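The construction of $\Phi_\epsilon$ is completely explicit and can be checked numerically. The following Python sketch (an illustration only; all names and the use of \texttt{scipy} quadrature are our choices, not part of the construction) evaluates $\Phi_\epsilon$ and its defining properties:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def Phi(x):
    # smooth cutoff: exp(-1/x) for x > 0 and 0 otherwise
    return np.exp(-1.0 / x) if x > 0 else 0.0

def bump(a, b, eps):
    a_e, b_e = a - eps, b + eps
    den1 = quad(lambda t: Phi(t - a_e) * Phi(a - t), a_e, a)[0]
    den2 = quad(lambda t: Phi(t - b) * Phi(b_e - t), b, b_e)[0]
    def Phi_eps(x):
        h1 = quad(lambda t: Phi(t - a_e) * Phi(a - t), a_e, min(x, a))[0] / den1
        h2 = quad(lambda t: Phi(t - b) * Phi(b_e - t), b,
                  max(min(x, b_e), b))[0] / den2
        return (h1 - h2) ** 4
    return Phi_eps

f = bump(0.0, 1.0, 0.5)
print(f(-0.4), f(0.5), f(1.4))  # approximately 0, 1, 0
\end{verbatim}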
\begin{thm}
Let $H_0$ be a self-adjoint operator with $\tau$-compact resolvent, $k\in\mathbb{N}$, and $V_1,V_2,\ldots,V_k\in\Mcal$. Let $a,b\in\R$, $a<b$, and $\epsilon>0$. Then
\begin{align}\label{a55}
\big|\tau\big(T^{H_0,\ldots,H_0}_{f^{[k]}}(V_1,V_2,\ldots,V_k)\big)\big|\leq C_{a,b,k,\epsilon,H_0}~\|f^{(k)}\|_\infty\prod_{\ell=1}^k \|V_\ell\|
\end{align} for every $f\in F_c^{k+1}((a,b))$, where
\begin{align}
\label{Clabel}
C_{a,b,k,\epsilon,H_0}=\left((k+1)2^k+ c_{2,k} \right)(b-a+1)^k\,d_{k,\epsilon,H_0}\,\big(1+\tau(E_{H_0}(a,b))\big),
\end{align}
where $c_{2,k}$ satisfies \eqref{est} and
\begin{align*}
&d_{k,\epsilon,H_0}=\max\Big\{\|\Phi_\epsilon(H_0)\|_1,\,\max\limits_{1\leq \ell \leq k}\frac{1}{\ell!}\big\|\,\widehat{\Phi_\epsilon^{(\ell)}}\,\big\|_{1}~\Big\},
\end{align*}
where $\Phi_\epsilon$ is given by \eqref{indicator}.
\end{thm}
\begin{proof}
The proof follows along the lines of the proof of \cite[Theorem 3.1]{SkAnJOT}.
\end{proof}
\begin{rmrk}
Note that the upper bound for $ \big|\tau\big(T^{H_0,\ldots,H_0}_{f^{[k]}}(V_1,V_2,\ldots,V_k) \big)\big|$ in \eqref{a55} is stronger than the upper bound stated in \cite[Theorem 3.1]{SkAnJOT}.
\end{rmrk}
The following result is a consequence of \cite[Theorems 4.4.6 and 5.3.5]{ST19}.
\begin{thm}\label{th1}
Let $n\in\N$ and let $f\in C^n(\R)$ be such that $f^{(k)},\,\widehat{f^{(k)}}\in L^1(\R)$, $k=0,1,\dots,n$. Let $H$ be a self-adjoint operator affiliated with $\mathcal{M}$ and $V$ a self-adjoint operator in $\mathcal{M}$. Then, the G\^{a}teaux derivatives $\frac{d^k}{ds^k}f(H+sV)\big|_{s=t}$, $1\leq k\leq n$, exist in the uniform operator topology and admit the multiple operator integral representation
\begin{align}
\label{a4}
\frac{1}{k!}\frac{d^k}{ds^k}f(H+sV)\big|_{s=t}=T_{f^{[k]}}^{H+tV,\dots,H+tV}(V,V,\dots,V),\quad 1\leq k\leq n.
\end{align}
Moreover, if $V\in\mathcal{L}^{k}$, then the above $k$-th derivative is an element in $\mathcal{L}^{1}$.
\end{thm}
\section{Trace formulas for operators with $\tau$-compact resolvents}
In this section we establish our first main result.
\begin{thm}\label{mainthm}
Let $H_0$ be a self-adjoint operator with $\tau$-compact resolvent, $V$ a self-adjoint operator in $\Mcal$, $n\in \mathbb{N}$, and let $-\infty<a<b<\infty$, $\epsilon>0$. Then,
\begin{align}\label{a6}
\left|\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)\right|\leq D_{a,b,n,\epsilon,H_0,V} \|f^{(n)}\|_\infty
\end{align}
for every $f\in D_c^n((a,b))$ with $f^{(n)}\in B([a,b])$, where
\begin{align}\label{const}
&D_{a,b,n,\epsilon,H_0,V}\\
\nonumber
&=(b-a)^{n}\max\big\{\tau(E_{H_0}([a,b])),\tau(E_{H_0+V}([a,b]))\big\}+\sum_{k=1}^{n-1} (b-a)^{n-k}C_{a,b,k,\epsilon,H_0}\|V\|^k,
\end{align}
and $C_{a,b,k,\epsilon,H_0}$ satisfies \eqref{a55}. Furthermore, there exists a real-valued function $\eta_{a,b,n,\epsilon,H_0,V}\in L^1((a,b))$ such that
\begin{align}
\label{aa7}
\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)=\int_{a}^{b}f^{(n)}(\lambda)\eta_{a,b,n,\epsilon,H_0,V}(\lambda)d\lambda
\end{align}
for every $f\in F_c^n((a,b))$ and
\begin{align}
\label{etabound}
\int_a^b\big|\eta_{a,b,n,\epsilon,H_0,V}(\lambda)\big|d\lambda\leq D_{a,b,n,\epsilon,H_0,V}.
\end{align}
The function $\eta_{a,b,n,\epsilon,H_0,V}\in L^1((a,b))$ is determined by \eqref{aa7} uniquely up to a polynomial summand of degree at most $n-1$.
\end{thm}
\begin{proof}
By Lemma \ref{l2}, $E_{H_0}([a,b])$ and $E_{H_0+V}([a,b])$ are $\tau$-finite projections. Let $f\in F_c^n((a,b))$.
Case 1: $n=1$. \\
By the spectral theorem for a self-adjoint operator $H$ in $\mathcal{H}$ with $\tau$-compact resolvent, we have
$$f(H)=f(H)E_{H}([a,b])=\int_{a}^{b}f(\lambda)\,dE_{H}(\lambda),$$
where the integral converges in $\mathcal{L}^1$. Hence, by continuity of the trace $\tau$,
\begin{align}\label{z}
\tau\big(\mathcal{R}_{H_0,f,1}(V)\big)=\int_a^b f(\lambda)d\big(\tau(E_{H_0+V}(\lambda))-\tau(E_{H_0}(\lambda))\big).
\end{align}
Integrating by parts on the right-hand side of \eqref{z} and applying the support property
and absolute continuity of $f$ yield
\begin{align}
\label{rem1}
\nonumber
\tau\big(\mathcal{R}_{H_0,f,1}(V)\big)
&=-\int_a^b \big(\tau(E_{H_0+V}([a,\lambda)))-\tau(E_{H_0}([a,\lambda)))\big) df(\lambda)\\
&=\int_a^b f'(\lambda)\big(\tau(E_{H_0}([a,\lambda)))-\tau(E_{H_0+V}([a,\lambda)))\big)d\lambda.
\end{align}
Thus, \eqref{aa7} holds for $n=1$ with
\begin{align}
\label{etavianu1}
\eta_{a,b,1,\epsilon,H_0,V}(\lambda)=\tau(E_{H_0}([a,\lambda)))-\tau(E_{H_0+V}([a,\lambda)))
\end{align}
and \eqref{etabound} holds with
\begin{align*}
D_{a,b,1,\epsilon,H_0,V}=(b-a)\max\big\{\tau(E_{H_0}([a,b])),\tau(E_{H_0+V}([a,b]))\big\}.
\end{align*}
Case 2: $n\ge 2$. \\
By \eqref{rem1},
\begin{align*}
\big|\tau(f(H_0+V))-\tau(f(H_0))\big|\le\|f'\|_\infty(b-a)
\max\big\{\tau(E_{H_0}([a,b])),\tau(E_{H_0+V}([a,b]))\big\}.
\end{align*}
The latter along with Lemma \ref{l3} implies
\begin{align}
\label{33}
&\big|\tau(f(H_0+V))-\tau(f(H_0))\big|\\
\nonumber
&\le\|f^{(n-1)}\|_\infty(b-a)^{n-1}\max\big\{\tau(E_{H_0}([a,b])),\tau(E_{H_0+V}([a,b]))\big\}.
\end{align}
Combining \eqref{a55} and \eqref{a4} and then applying Lemma \ref{l3} yield
\begin{align}
\label{34}
\nonumber
\left|\tau\left(\frac{1}{k!}\frac{d^k}{ds^k}f(H_0+sV)\big|_{s=0}\right)\right|&\leq ~C_{a,b,k,\epsilon,H_0}~\|f^{(k)}\|_\infty\|V\|^k\\
&\leq(b-a)^{n-k-1}C_{a,b,k,\epsilon,H_0}\|f^{(n-1)}\|_\infty\|V\|^k
\end{align}
for every $k=1,\dots,n-1$, where $C_{a,b,k,\epsilon,H_0}$ satisfies \eqref{Clabel}.
Combining \eqref{a5}, \eqref{33}, and \eqref{34} implies
\begin{align}\label{a9}
\left|\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)\right|\le\widetilde{D}_{a,b,n,\epsilon,H_0,V}\|f^{(n-1)}\|_\infty,
\end{align}
where
\begin{align*}
&\widetilde{D}_{a,b,n,\epsilon,H_0,V}\\
&\quad=(b-a)^{n-1}\max\big\{\tau(E_{H_0}([a,b])),\tau(E_{H_0+V}([a,b]))\big\}+\sum_{k=1}^{n-1} (b-a)^{n-k-1}C_{a,b,k,\epsilon,H_0}\|V\|^k.
\end{align*}
By the Riesz representation theorem for elements in $\big(C_0(\mathbb{R})\big)^*$, Hahn-Banach theorem, and estimate \eqref{a9}, there exists a finite (complex) measure $\nu_{a,b,n,\epsilon,H_0,V}$ such that
\begin{align}\label{a8}
\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)=\int_{a}^{b}f^{(n-1)}(\lambda)d\nu_{a,b,n,\epsilon,H_0,V}(\lambda)
\end{align}
for every $f\in F_c^n((a,b))$ and the total variation of $\nu_{a,b,n,\epsilon,H_0,V}$ is bounded by
\begin{align}
\label{nubound}
\|\nu_{a,b,n,\epsilon,H_0,V}\|\leq \widetilde{D}_{a,b,n,\epsilon,H_0,V}.
\end{align}
Integrating by parts on the right-hand side of \eqref{a8} and applying the support property of $f$
and absolute continuity of $f^{(n-1)}$ yield
\begin{align}
\label{a10}
\tau\big(\mathcal{R}_{H_0,f,n}(V)\big)
=\int_{a}^{b}f^{(n)}(\lambda)\big(-\nu_{a,b,n,\epsilon,H_0,V}([a,\lambda))\big)d\lambda.
\end{align}
Thus, \eqref{a10} implies \eqref{aa7} with
\begin{align}
\label{etavianu}
\eta_{a,b,n,\epsilon,H_0,V}(\lambda)=-\nu_{a,b,n,\epsilon,H_0,V}([a,\lambda)).
\end{align}
Combining \eqref{nubound} and \eqref{etavianu} ensures \eqref{etabound}, where
\begin{align}
\label{D=tD}
D_{a,b,n,\epsilon,H_0,V}=(b-a)\widetilde{D}_{a,b,n,\epsilon,H_0,V}.
\end{align}
Let $\tilde{\eta}_{a,b,n,\epsilon,H_0,V}:=\Re( \eta_{a,b,n,\epsilon,H_0,V})$. Since the left-hand side of \eqref{aa7} is real-valued whenever $f$ is real-valued, we obtain that
$\tilde{\eta}_{a,b,n,\epsilon,H_0,V}$ satisfies \eqref{aa7} and \eqref{etabound} for real-valued $f\in F_c^n((a,b))$ and, consequently, for all $f\in F_c^n((a,b))$. Therefore, without loss of generality we may consider $\eta_{a,b,n,\epsilon,H_0,V}$ to be real-valued satisfying \eqref{aa7} and \eqref{etabound}. Next,
suppose there exists another real-valued function $\xi_{a,b,n,H_0,V}\in L^1((a,b))$ satisfying \eqref{aa7}. Let $h_n=\eta_{a,b,n,\epsilon,H_0,V}-\xi_{a,b,n,H_0,V}$. Then, it follows from \eqref{aa7} that
\begin{align}\label{a11}
\int_{a}^{b}f^{(n)}(\lambda)h_n(\lambda)d\lambda=0\quad \text{for all }f\in C_c^\infty((a,b)).
\end{align}
Consider the distribution $T_{h_n}$ defined by
$$T_{h_n}(\phi)=\int_{a}^{b} \phi\, h_n\,d\lambda$$ for every $\phi\in C^\infty_c((a,b))$.
By \eqref{a11} and the definition of the derivative of a distribution, $T_{h_n}^{(n)}=0$. Hence by \cite[Theorem 3.10 and Example 2.21]{gwaiz}, $h_n$ is a polynomial of degree at most $n-1$. Consequently, $\eta_{a,b,n,\epsilon,H_0,V}\in L^1((a,b))$ satisfying \eqref{aa7} is unique up to an additive polynomial of degree at most $n-1$.
Assume now that $n\in\mathbb{N}$ and $f\in D_c^n((a,b))$ with $f^{(n)}\in B([a,b])$.
Applying Lemma \ref{l3} in \eqref{a9} yields \eqref{a6}, completing the proof of the theorem.
\end{proof}
\begin{rmrk}
It follows from the proof of Theorem \ref{mainthm} that the representation \eqref{aa7} with $n=1$ holds for a larger class of functions $f$, namely, for every $f$ absolutely continuous on $[a,b]$ and compactly supported in $(a,b)$.
\end{rmrk}
\begin{rmrk}
The uniqueness of the spectral shift function $\eta_{a,b,n,\epsilon,H_0,V}$ up to an additive polynomial of degree at most $n-1$ was not addressed in \cite[Theorem 4.3]{SkAnJOT}.
\end{rmrk}
\section{The case of resolvents in the $\tau$-Schatten-von Neumann ideal}
In this section, we obtain an upper bound for $|\tau(\mathcal{R}_{H_0,f,n}(V))|$ that is independent of the support of $f$ under the additional assumption that the resolvent of $H_0$ belongs to $\mathcal{L}^n$, $n\in \mathbb{N}$. As an application, we extend the trace formula \eqref{aa7} to a larger class of scalar functions $f$ and obtain a pointwise bound on the spectral shift function.
The trace formula \eqref{tf} was obtained in \cite{NuSkJST} for the class of scalar functions $\mathfrak{W}_n$ defined below under the assumption that $V(H_0-iI)^{-1}$ belongs to the Schatten-von Neumann ideal associated with $(\Bcal(\Hcal),\Tr)$.
\begin{dfn}
\label{fracw}
Let $n\in\mathbb{N}$. Let $\mathfrak{W}_n$ denote the set of functions $f\in C^n(\R)$ such that
\begin{enumerate}[(i)]
\item\label{fracwi} $\widehat{f^{(k)}u^{k}}\in L^{1}(\R)$, $k=0,1,\ldots, n$,
\item\label{fracwii} $f^{(k)}\in L^1\big(\R,(1+|x|)^{k-1}\,dx\big)$, $k=1,\ldots,n$.
\end{enumerate}
\end{dfn}
As noted in Remark \ref{rk} below, $F_c^{n}(\R)\not\subset \mathfrak{W}_n$. Our principal goal in this section is to establish the trace formula \eqref{tf} for a larger set of functions containing $F_c^{n}(\R)$. In this context, we introduce the following class of functions.
\begin{dfn}
Let $n\in \mathbb{N}$. Let $\mathfrak{H}_n$ denote the set of functions $f\in C^{n-1}(\R)$ such that
\begin{enumerate}[(i)]
\item $\widehat{f^{(k)}u^{k}}, \widehat{f^{(k)}u^{k+1}}\in L^{1}(\R)$, $k=0,1,\ldots, n-1$,
\item $f^{(n)}$ exists almost everywhere,
\item $f^{(k)}\in L^1\big(\R,(1+|x|)^{k}\,dx\big)$, $k=0,1,\ldots,n$.
\end{enumerate}
\end{dfn}
\begin{rmrk}\label{rk}
We have
\begin{align}
\label{inclusion}
F_c^{n}(\R)\subset\mathfrak{H}_n\subset \mathfrak{W}_{n-1},
\end{align}
but
\begin{align}
\label{notinclusion}
F_c^{n}(\R)\not\subset \mathfrak{W}_{n},\quad \mathfrak{H}_n\not\subset\mathfrak{W}_n,\quad \mathfrak{W}_n\not\subset\mathfrak{H}_n.
\end{align}
The inclusions \eqref{inclusion} follow directly from the definitions of the sets.
Note that the function
\[f(x)=\begin{cases}
0 &\text{ if } x<0,\\
x^n(x-1)^n &\text{ if } 0\leq x\leq 1,\\
0 &\text{ if } x>1,
\end{cases}\]
satisfies $f\in F_c^n(\R)$ but $f\not\in C^n(\R)$. Hence, $f\not\in\mathfrak{W}_n$, so the first two properties in \eqref{notinclusion} hold. Note that $g(x)=(x-i)^{-1}\in \mathfrak{W}_{n}$ but $g\notin \mathfrak{H}_{n}$. Therefore, the third property in \eqref{notinclusion} holds.
\end{rmrk}
Here, as above, we use the notations $u(\lambda):=\lambda-i$ and $u^{k}(\lambda)=(u(\lambda))^k,\, k\in\mathbb{Z},\, \lambda\in\R.$
\begin{lma}\label{convl}
Let $f\in \mathfrak{H}_n$. Then, $\widehat{f^{(k)}u^l}\in L^1({\R})$ for $0\leq l\leq k\leq n-1$.
\end{lma}
\begin{proof}
For $k=l$, the result follows from the definition of $\mathfrak{H}_n$.
Let $l<k$. Since $u^{-k},u^{-k-1}\in L^2(\R)$, we obtain $\widehat{u^{-k}}\in L^1(\R)$ for all $k\in\N$ (see, e.g., \cite[Lemma 7]{PoSu09}).
By the convolution theorem,
\[\widehat{f^{(k)}u^l}=\widehat{\big(f^{(k)}u^{k}u^{l-k}\big)}=\widehat{f^{(k)}u^{k}}*\widehat{u^{l-k}}.\]
Since $L^1(\R)$ is closed under the convolution product,
$\widehat{f^{(k)}u^l}\in L^1(\R)$.
\end{proof}
\begin{lma}\label{exp-lem}
Let $n\in\mathbb{N}$. Let $H_0$ be a self-adjoint operator affiliated with $\mathcal{M}$ and let $V$ be a self-adjoint operator in $\mathcal{M}$. Let $H_t=H_0+tV,\, t\in\R$, and $\tilde{V}=V(H_0-iI)^{-1}$. Then
\begin{align}\label{ddd1}
\nonumber&T^{H_t,H_0,\ldots,H_0}_{f^{[n-1]}}(V,\ldots,V)
=(-1)^{n-1}f(H_t)\,\tilde{V}^{n-1}\\
\nonumber&+\sum_{p=1}^{n-1}\sum_{\substack{j_1,\ldots,j_{p}\geq1,j_{p+1}\geq0\\j_1+\cdots+j_{p+1}=n-1}}(-1)^{n-p-1}\, \Big(T^{H_t,H_0,\ldots,H_0}_{(fu^{p+1})^{[p]}}(\tilde{V}^{j_1},\ldots,
\tilde{V}^{j_{p}}(H_0-iI)^{-1})\tilde{V}^{j_{p+1}}\\
&\hspace{2cm}-T^{H_t,H_0,\ldots,H_0}_{(fu^{p})^{[p-1]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{ j_{p-1}})\tilde{V}^{j_p}(H_0-iI)^{-1}\tilde{V}^{j_{p+1}}\Big)
\end{align}
for all $f\in\mathfrak{H}_n$.
\end{lma}
\begin{proof}
The identity \eqref{ddd1} is trivial in the case $n=1$.
Let $n\geq 2$. It was noted in \cite[Equation (24)]{NuSkJST} that
\begin{align}
\label{dd1}
&f^{[n-1]}(\lambda_0,\ldots,\lambda_{n-1})\\
\nonumber
&=\sum_{p=0}^{n-1}\sum_{0<j_1<\cdots<j_{p}\leq n-1}(-1)^{n-1-p}
(fu^p)^{[p]}(\lambda_0,\lambda_{j_1},\ldots,\lambda_{j_p})
u^{-1}(\lambda_1)\cdots u^{-1}(\lambda_{n-1}).
\end{align}
By Lemma \ref{convl}, $\widehat{f^{(n-1)}},\,\widehat{(fu^p)^{(p)}}\in L^1(\R)$ for $0\leq p\leq n-1$. Therefore, applying \cite[Lemmas 3.5, 5.1, 5.2]{PSS} yields
\begin{align}\label{dd2}
&T^{H_t,H_0,\ldots,H_0}_{f^{[n-1]}}(V,\ldots,V)\\
\nonumber
&=(-1)^{n-1}f(H_t)\tilde{V}^{n-1}+\sum_{p=1}^{n-1}\sum_{\substack{j_1,\ldots,j_{p}\geq1,j_{p+1}\geq0\\j_1+\cdots+j_{p+1}=n-1}}\!(-1)^{n-1-p}\, T^{H_t,H_0,H_0,\ldots,H_{0}}_{(fu^p)^{[p]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{j_p})\, \tilde{V}^{j_{p+1}}.
\end{align}
By Lemma \ref{convl} $\widehat{(fu^p)^{(p)}}, \widehat{(fu^{p+1})^{(p)}},$ and $\widehat{(fu^{p})^{(p-1)}}\in L^1(\R)$ for $1\leq p\leq n-1$. Therefore, applying \cite[Theorem 3.10(i)]{NuSkJST} to \eqref{dd2} yields \eqref{ddd1}.
\end{proof}
\begin{thm}\label{resolvent trace thm}
Let $n\in\N$, let $H_0$ be a self-adjoint operator affiliated with $\mathcal{M}$ such that $(H_0-iI)^{-1}\in\mathcal{L}^{n}$, and let $V$ be a self-adjoint operator in $\mathcal{M}$. Then, there exist a constant $K_n>0$ and a real-valued function $\eta_n$ such that
\begin{align}\label{rr0}
|\eta_n(x)|\leq K_n\,(2+\|V\|)\, \|V\|^{n-1}\,\|(H_0-iI)^{-1}\|_n^n\,(1+|x|)^n,\quad x\in\R,
\end{align} and
\begin{align}
\label{rr1}
\tau(\mathcal{R}_{H_0,f,n}(V))=\int_\R f^{(n)}(x)\eta_n(x)\,dx\,
\end{align}
for every $f\in \mathfrak{H}_{n}\cup \mathfrak{W}_n$.
The locally integrable function $\eta_n$ is determined by \eqref{rr1} uniquely up to a polynomial summand of degree at most $n-1$.
\end{thm}
\begin{proof}
The result for $\mathcal{M}=\mathcal{B}(\mathcal{H})$ and $f\in\mathfrak{W}_n$ is established in \cite[Theorem 4.1]{NuSkJST}. It extends to the case of a general semifinite $(\mathcal{M},\tau)$ by replacing results for $\mathcal{B}(\mathcal{H})$ with completely analogous results for $\mathcal{M}$ (see Theorems \ref{inv-thm}, \ref{est-thm}, \ref{th1} and \cite[Lemma 5.1]{PaWiSu02}).
Therefore, there exists a real-valued function $\eta_{_{\mathfrak{W}_n}}$, unique up to a polynomial summand of degree at most $n-1$, satisfying \eqref{rr1} for all $f\in\mathfrak{W}_n$.
Now we assume that $f\in\mathfrak{H}_n$. If $n=1$, then
\begin{align}\label{dddd1}
\nonumber\lvert\tau(\mathcal{R}_{H_0,f,n}(V))\rvert=&\,\lvert\tau(f(H_0+V)-f(H_0))\rvert\\
\nonumber=&\,\lvert\tau((fu)(H_0+V)(H_0+V-iI)^{-1}-(fu)(H_0)(H_0-iI)^{-1})\rvert\\
\leq&\, \|fu\|_\infty\, (2+\|V\|)\,\|(H_0-iI)^{-1}\|_1.
\end{align}
Let $n\geq 2$. By Theorem \ref{th1} and \cite[Theorem 4.3.14]{ST19},
\begin{align}\label{r1}
\nonumber
\mathcal{R}_{H_0,f,n}(V)=&\mathcal{R}_{H_0,f,n-1}(V)-\frac{1}{(n-1)!}\frac{d^{n-1}}{ds^{n-1}}f(H_0+sV)\big|_{s=0}\\
=&T_{f^{[n-1]}}^{H_0+V,H_0,\dots,H_0}(V,\ldots,V)-T_{f^{[n-1]}}^{H_0,\dots,H_0}(V,\ldots,V).
\end{align}
Let $H_t=H_0+tV,\, t\in\R$, and $\tilde{V}=V(H_0-iI)^{-1}$.
By Lemma \ref{exp-lem} applied to \eqref{r1},
\begin{align}\label{r3}
&\mathcal{R}_{H_0,f,n}(V)\\
\nonumber &=(-1)^{n-1}T_{f^{[1]}}^{H_0+V,H_0}(V)\tilde{V}^{n-1}+\sum_{p=1}^{n-1}\sum_{\substack{j_1,\ldots,j_{p}\geq1,j_{p+1}\geq0\\j_1+ \cdots+j_{p+1}=n-1}}(-1)^{n-p-1}\\
\nonumber&\times\Big[\Big(T^{H_0+V,H_0,\ldots,H_0}_{(fu^{p+1})^{[p]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{j_{p}}(H_0-iI)^{-1})-T^{H_0,H_0,\ldots,H_0}_{(fu^{p+1})^{[p]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{j_{p}}(H_0-iI)^{-1})\Big)\tilde{V}^{j_{p+1}}\\
\nonumber &\qquad-\Big(T^{H_0+V,H_0,\ldots,H_0}_{(fu^{p})^{[p-1]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{j_{ p-1}})-T^{H_0,H_0,\ldots,H_0}_{(fu^{p})^{[p-1]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{ j_{p-1}})\Big){\tilde{V}^{j_p}(H_0-iI)^{-1}}\tilde{V}^{j_{p+1}}\Big].
\end{align}
Note that, if $j_{p+1}=0$, then by definition of the multiple operator integral we have
\begin{align}\label{new}
T^{H_0+V,H_0,\ldots,H_0}_{(fu^{p+1})^{[p]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{j_{p}}(H_0-iI)^{-1})=T^{H_0+V,H_0,\ldots,H_0}_{(fu^{p+1})^{[p]}}(\tilde{V}^{j_1},\ldots,\tilde{V}^{j_{p}})(H_0-iI)^{-1}.
\end{align}
By Lemma \ref{convl}, $\widehat{f},\,\widehat{f'},\,\widehat{(fu)'}\in L^1(\R)$. Applying \cite[Theorem 3.10(i)]{NuSkJST} yields
\begin{align}\label{dd}
T_{f^{[1]}}^{H_0+V,H_0}(V)=T_{(fu)^{[1]}}^{H_0+V,H_0}(\tilde{V})-f(H_0+V)\tilde{V}.
\end{align}
Combining \eqref{r3}, \eqref{new}, \eqref{dd}, Theorems \ref{inv-thm} and \ref{est-thm}, and applying H\"{o}lder's inequality yields
\begin{align}\label{dddd2}
\nonumber&\left|\tau(\mathcal{R}_{H_0,f,n}(V))\right|\\
\nonumber&\leq\Big[\left(\|f\|_\infty+\|(fu)'\|_\infty\right)\|V\|^{n}
+\sum_{p=1}^{n-1}d_{p,n}\left(\|(fu^{p+1})^{(p)}\|_\infty
+\|(fu^{p})^{(p)}\|_\infty\right)\|V\|^{n-1}\Big]\|(H_0-iI)^{-1}\|_n^n\\
&\leq C_n\Big(\sum_{p=1}^{n-1}\|(fu^{p+1})^{(p)}\|_\infty
+\sum_{p=0}^{n-1}\|(fu^{p})^{(p)}\|_\infty\Big)(1+\|V\|)\|V\|^{n-1}\|(H_0-iI)^{-1}\|_n^n,
\end{align}
where $C_n$ is some constant depending only on $n$.
Arguing similarly to the proof of the existence of the spectral shift function in \cite[Proposition 4.2]{NuSkJST} we obtain from \eqref{dddd1} and \eqref{dddd2} that for each $n\in\mathbb{N}$,
\begin{align}\label{r6}
\tau\left(\mathcal{R}_{H_0,f,n}(V)\right)=\int_{\R} f^{(n)}(x)\acute\eta_n(x)dx,
\end{align}
where $\acute\eta_n$ is a continuous function on $\R$ such that
\begin{align}\label{r7}
|\acute\eta_n(x)|\leq D_n\,(2+\|V\|)\|V\|^{n-1}\|(H_0-iI)^{-1}\|_n^n(1+|x|)^n,\quad x\in\R,
\end{align}
for some constant $D_n>0$. We define $$\eta_{_{\mathfrak{H}_n}}:=\Re(\acute\eta_n).$$ Then it is clear that $\eta_{_{\mathfrak{H}_n}}$ satisfies \eqref{r7} as $|\eta_{_{\mathfrak{H}_n}}|\leq|\acute\eta_{n}|$.
Since the left-hand side of \eqref{r6} is real-valued whenever $f$ is real-valued, we obtain that
$\eta_{_{\mathfrak{H}_n}}$ satisfies \eqref{rr1} for real-valued $f\in \mathfrak{H}_{n}$ and, consequently, for all $f\in \mathfrak{H}_{n}$.
The uniqueness of $\eta_{_{\mathfrak{H}_n}}$ satisfying \eqref{rr1} up to a polynomial summand of degree at most $n-1$ can be established completely analogously to the uniqueness of the function $\eta_{a,b,n,\epsilon,H_0,V}$ established in Theorem \ref{mainthm}. Since both $\eta_{_{\mathfrak{H}_n}}$ and $\eta_{_{\mathfrak{W}_n}}$ satisfy \eqref{rr1} for all $f\in C_c^{\infty}(\R)$, by using properties of distributions as in the proof of Theorem \ref{mainthm}, we conclude that
$\eta_{_{\mathfrak{H}_n}}-\eta_{_{\mathfrak{W}_n}}=Q_n$, where $Q_n$ is a polynomial of degree at most $n-1$.
By Definition \ref{fracw} and integration by parts,
$\int_{\R} f^{(n)}(x)Q_n(x)dx=0$ for each $f\in \mathfrak{W}_n$. Therefore,
\begin{align*}
\int_{\R} f^{(n)}(x)\eta_{_{\mathfrak{W}_n}}dx=\int_{\R} f^{(n)}(x)(\eta_{_{\mathfrak{H}_n}}-Q_n)dx
=\int_{\R} f^{(n)}(x)\eta_{_{\mathfrak{H}_n}}dx,
\end{align*}
for each $f\in \mathfrak{W}_n$. Hence $\eta_n:=\eta_{_{\mathfrak{H}_n}}$ satisfies \eqref{rr0} and \eqref{rr1} for all $f\in \mathfrak{H}_n\cup\mathfrak{W}_n$.
\end{proof}
\section*{Declarations}
{\textbf{Conflicts of interest:}} The authors declare that they have no conflict of interest. No datasets were generated or analyzed during the current study.
\section*{Acknowledgements}
\textit{The research of the first named author is supported by the Mathematical Research Impact Centric Support (MATRICS) grant, File No: MTR/2019/000640, by the Science and Engineering Research Board (SERB), Department of Science $\&$ Technology (DST), Government of India. The second named author gratefully acknowledges the support provided by IIT Guwahati, Government of India. The research of the third named author is supported in part by NSF grant DMS-1554456.}
\label{sec:intro}
The clustering of galaxies is a fundamental measure of the statistical
properties of the cosmic density field through cosmic time. In the
last decade, it became possible to determine the clustering strength
of galaxy populations at spatial scales out to tens of Mpc and beyond
with reasonable accuracy by means of massive galaxy surveys such as
the Two-Degree Field Galaxy Redshift Survey
\citep[e.g.,][]{2001MNRAS.328.1039C} and Sloan Digital Sky Survey
\citep[SDSS-I/II; e.g.,][]{Gunn1998, 2000AJ....120.1579Y, Gunn2006}. These and previous
studies have shown that the correlation function is not a simple power-law
and that the correlation length of luminous and massive galaxies
is larger than that of less luminous ones \citep[see][ and references therein]{Zehavi2011}.
Furthermore, it has been also shown that the clustering strength
of Luminous Red Galaxies (LRGs) is an excellent tracer of the Baryon Acoustic
Oscillation (BAO) signal, which can be used to constrain the expansion
history of the Universe \citep[e.g.,][]{Eisenstein2005}.
The Baryon Oscillation Spectroscopic Survey (BOSS), a branch of the ongoing
SDSS-III \citep[][]{Eisenstein2011}, is considerably increasing the size
of available galaxy samples.
BOSS consists of galaxy and quasar spectroscopic surveys over
a sky area of 10,000\,deg$^2$ and its main goal is to measure the BAO
feature at high precision. Specifically, BOSS aims at measuring the redshifts of
about 1.5 million galaxies out to $z=0.7$. It will also acquire
about 150,000 Ly$\alpha$ forest spectra of quasars in the range $2.2<z<4$, to map
the large-scale distribution of galaxies at these earlier epochs
\citep[see][]{2011arXiv1104.5244S}. The effective volume of the galaxy
survey is expected to be about 7 times higher than that of the
SDSS-I/II LRG sample which consisted of $\sim100,000$ galaxies out to
$z=0.45$. The selection criteria of the BOSS targets results in a sample of
massive, and hence highly clustered systems, which are suitable
candidates for a reliable detection of the acoustic peak. Additionally, the
project also provides a wealth of other information on clustering and
physical properties of galaxies.
Requirements for theoretical predictions of galaxy
clustering in BOSS are extreme: one needs accurate predictions for very large volumes
in order to compare with observations. Therefore, the combination of large-volume
cosmological $N$-body simulations with prescriptions to associate galaxies
with dark matter haloes turns out to be the most efficient way to generate
the required model galaxy samples. Recently, \citet{White2011}
presented clustering results for scales in the range $\sim0.5$--$20\,\mbox{$h^{-1}$\,Mpc}$ based on $\sim44,000$
galaxies in the redshift range $0.4<z<0.7$ obtained during the first semester of BOSS operation.
To compare these observational results with theory, the authors combined
large, albeit low-resolution, $N$-body simulations with the Halo Occupation Distribution (HOD)
model \citep[e.g.,][]{Berlind02,KravtsovHOD04,Zentner2005,Skibba2009,RossBrunner2009,RPB2010}.
Their results suggest that the majority of
BOSS galaxies are central systems living in haloes with a mass of
$\sim10^{13}~h^{-1}$ M$_{\sun}$, while about 10\% of them are
satellites typically residing in haloes $\sim10$ times more massive.
The HOD approach is the most often used framework to make predictions for
the large-scale distribution of galaxies. Alternatively, HODs can also be measured in observations
\citep{Zehavi05,Abazajian2005,Brown08,Zheng09}. The
main component of classical HOD models is the probability, $P(N|M)$, that a
halo of virial mass $M$ hosts $N$ galaxies with some specified properties.
In general, theoretical HODs require the fitting of a function with several parameters
\citep[e.g.,][]{KravtsovHOD04,Zheng2005}, which gives some freedom to
match the observed clustering of galaxies. These models also depend on
the theoretical approach adopted to predict the galaxy number $N$ inside
haloes of mass $M$. For example, \citet{Zheng2005} used SPH
simulations and semi-analytical models to measure the number of
galaxies as a function of hosting halo mass, which is definitely a
challenging theoretical exercise. \citet{White2011} tuned five HOD free parameters
to fit the observed clustering of galaxies. In this case a random
fraction of dark matter particles is selected from the simulations such
that the halo occupation follows the optimized HOD. By construction, this
prescription yields the best match to observations, hence producing good-quality
mock catalogs. However, this is not the best way of testing a
cosmological model. \citet{KravtsovHOD04} used a different
approach: they identify subhaloes in high-resolution $N$-body simulations
in order to associate them with satellite galaxies. This is a
more attractive path, which can be further perfected by more accurate
simulations and more elaborate prescriptions for ``galaxies'' in dark
matter-only simulations \citep[e.g.,][]{Trujillo-Gomez}.
Halo Abundance Matching (HAM) has recently emerged as an attractive
alternative to HOD in order to bridge the gap between dark matter haloes and
galaxies \citep{KravtsovHOD04,Tasitsiomi04,Vale04,conroy06,
Guo10,Wetzel10,Trujillo-Gomez}. Abundance-matching resolves the issue of
connecting observed galaxies to simulated dark matter
haloes and subhaloes by setting a one-to-one correspondence between the red-band
luminosity or stellar and dynamical masses: more luminous galaxies are assigned
to more massive (sub)haloes. By construction, the method reproduces the observed
luminosity function (or stellar mass function). It also reproduces the
scale dependence of galaxy clustering over a large range of epochs
\citep{conroy06,Guo10}. When abundance matching is used for the
observed stellar mass function \citep{li09}, it gives also a reasonable
fit to lensing results \citep{Mandelbaum06} and to
the relation between stellar and virial mass
\citep{Guo10}. \citet{Guo10} also attempted to reproduce the observed
relation between the stellar mass and the maximum circular velocity with
partial success, finding deviations both in shape and
amplitude between predictions and observations. At circular velocities
in the range $100$--$150$~km~s$^{-1}$ the
predicted circular velocity was $\sim 25$\% lower than the observed
one. They proposed that this disagreement is likely due to the fact that
they did not include the effect of baryons. Indeed,
\citet{Trujillo-Gomez} show that accounting for baryons drastically
improves the situation.
Just as with HODs, there are different flavours of HAMs. Generally, one does
not expect a pure monotonic relation between stellar and dynamical
masses. There should be some degree of stochasticity in this relation
due to deviations in the merger history, angular momentum, and
halo concentration. Even for haloes (or subhaloes) with the same mass, these
properties should be different for different systems, which would lead to
deviations in stellar mass. Observational errors are also responsible in part
for the non-monotonic relation between halo and stellar masses.
Most of modern HAM models already implement prescriptions to account
for the stochasticity
\citep{Tasitsiomi04,Behroozi10,Trujillo-Gomez,Leauthaud11}. The
difference between monotonic and stochastic models depends on the
magnitude of the scatter and on the stellar mass. The typical value of
the scatter in the $r$-band is expected to be $\Delta M_r =0.3$--$0.5$ mag
\citep[e.g.,][]{Trujillo-Gomez}. For the Milky-Way-size galaxies the
differences are practically negligible \citep{Behroozi10}, but they
increase for very massive galaxies such as those targeted with BOSS due
to the strong dependence of the bias with mass.
\begin{figure}
\includegraphics[width=91mm]{Figs/fig1.ps}
\caption{Sky area covered by the DR9 BOSS-CMASS sample
shown in Aitoff projection colour-coded by completeness (see text).
The upper and lower maps display the northern and southern galactic
caps respectively.
}
\label{fig:map}
\end{figure}
Almost two years after the start of the project, BOSS has obtained the
spectra of about 487,000 galaxies and 61,000 quasars.
Using the SDSS-III Data Release 9 (DR9) BOSS data
we present results on the two-dimensional, projected and redshift-space correlation
functions on scales from $\sim500\,\mbox{$h^{-1}$\,kpc}$ to $\sim90\,\mbox{$h^{-1}$\,Mpc}$ including fibre collision
corrections. In order to make predictions for the $\Lambda$CDM cosmological model
we use a large high-resolution $N$-body simulation with a resolution
high enough to resolve subhaloes, which is very important for the
HAM prescription. When connecting haloes with galaxies we use a
stochastic HAM model.
This paper is organized as follows. In Section~\ref{sec:cmass} we
present the BOSS galaxy sample studied here, dubbed ``CMASS'', and the
measurements of the two-dimensional, projected and redshift-space
galaxy clustering in observations. In Section~\ref{MultiDark_clustering} we
present the details of the MultiDark simulation, the halo catalogs
and the HAM technique adopted here.
In Sections~\ref{results} and~\ref{sec:satellites} we compare the
clustering measures with observations and study the occupation distribution
given by our halo catalog. We also discuss
the comparison between our halo occupation distribution with that
obtained by \cite{White2011} using an HOD model. In
Section~\ref{bias} we study the scale-dependent bias of galaxy
clustering of the CMASS sample as inferred from our HAM model
both in real and Fourier space. Finally, in
Section~\ref{conclusions} we close the paper with the summary and
conclusions.
In Appendix~\ref{app:a} we discuss several effects that
can affect the clustering power.
\section{Observations}
\label{sec:cmass}
\subsection{The CMASS sample}
\label{sec:cmass_sample}
\begin{figure}
\includegraphics[width=0.45\textwidth]{Figs/n_of_z.ps}
\caption{The comoving number density of galaxies in the DR9
BOSS-CMASS sample both for the north and south subsamples in the
redshift range $0.4<z<0.7$. Dashed lines show the smoothed
distributions used to create the Poisson distribution of particles
when computing the correlation functions (see text). }
\label{fig:CMASS}
\end{figure}
In this section we introduce the BOSS sample of massive galaxies
analyzed in this work. The target galaxies are selected in such a way
that the stellar mass of the systems is approximately constant over
the entire redshift range of interest. As a consequence, the resulting
galaxy sample is usually dubbed `constant mass' (CMASS) sample. These
galaxies are characterized by high-luminosities which translate in a
rather low comoving space density of about $3\times10^{-4}\,h^{3}\,{\rm Mpc}^{-3}$.
The sample can be obtained by applying the following colour
cuts to the observations \cite[see e.g.][]{Eisenstein2011}:
\begin{eqnarray}
17.5 \,\,\,\, < \,\,\,\, i_{\rm cmod} &<& 19.9{\rm,} \nonumber \\
r_{\rm mod}-i_{\rm mod} &<& 2{\rm,} \nonumber \\
i_{\rm fiber2} &<& 21.5{\rm,} \nonumber \\
d_{\perp} &>& 0.55{\rm,} \nonumber \\
i_{\rm cmod} &<& 19.86+1.6\times\left(d_{\perp}-0.8\right)
\end{eqnarray}
\noindent where $d_{\perp} = \left(r_{\rm mod}-i_{\rm mod}\right) -
\left(g_{\rm mod}-r_{\rm mod}\right)/8$ and $i_{\rm fiber2}$ is the
$i$ magnitude measured with the 2$''$ BOSS fiber within the SDSS
$ugriz$ photometric system \citep[][]{1996AJ....111.1748F}. The
subscripts $_{\rm cmod}$ and $_{\rm mod}$ denote ``cmodel'' and ``model''
magnitudes respectively. These cuts are chosen to pick out massive red
galaxies at $z\gtrsim0.4$. In particular, the condition
$d_{\perp}>0.55$ selects systems with observed red $r-i$ colours,
whereas the conditions imposed on the $i$-magnitude is designed to
identify an approximately complete galaxy sample down to a limiting
stellar mass. Most of these galaxies ($\sim75\%$) show an early-type morphology
with a characteristic stellar mass of $M_*\sim10^{11}$ $h^{-1}$
M$_{\sun}$ and an absolute $r$-band magnitude
of $M_{r}-5\log h\lesssim-20.7$ \citep[][]{Masters2011}.
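For reference, the selection above translates directly into code. The following Python sketch (with hypothetical array names for the SDSS model, cmodel and fibre magnitudes) applies the CMASS cuts:
\begin{verbatim}
import numpy as np

def cmass_mask(g_mod, r_mod, i_mod, i_cmod, i_fib2):
    # CMASS target-selection cuts of Section 2.1
    d_perp = (r_mod - i_mod) - (g_mod - r_mod) / 8.0
    return ((17.5 < i_cmod) & (i_cmod < 19.9) &
            (r_mod - i_mod < 2.0) &
            (i_fib2 < 21.5) &
            (d_perp > 0.55) &
            (i_cmod < 19.86 + 1.6 * (d_perp - 0.8)))
\end{verbatim}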
\citet{Schlafly2010} and \citet{Schlafly2011} found systematic offsets
between the colours of SDSS objects in the southern and northern
Galactic hemispheres which might reflect a combination of percent
calibration errors in the SDSS photometry and errors in the
corrections for Galactic extinction. The \citet{Schlafly2011} results
suggest a systematic offset in the value of $d_{\perp}$ of 0.0064
between the north and south. As the CMASS selection criteria depends
on $d_{\perp}$, this offset leads, in principle, to a difference in the galaxy
samples selected for spectroscopic observations in the two
hemispheres. \citet{Ross2011} found a 2\% difference in the number density of
CMASS targets between the northern and southern hemispheres, which
reduces to 0.3\% when this offset is applied to the
galaxies in the south before applying the CMASS selection criteria.
However, \citet{Ho2012} found no appreciable north/south colour offsets in their sample.
In this work we do not apply a colour offset to the selection
of CMASS galaxies in the south. Although we present results obtained from
the combined (north$+$south) CMASS sample, we also analyse the data from the northern and
southern hemispheres separately in order to avoid potential systematics
that could be associated with the use of slightly different selection
criteria.
For a number of reasons it is not possible to obtain reliable redshifts
for all the galaxies satisfying the CMASS selection criteria (see
Section \ref{sec:clustering}). We estimate the completeness
$c=n_z/n_{\rm t}$, where $n_{\rm t}$ is the number of galaxy targets and
$n_z$ the number of these with reliable redshift estimates (weighted
as described in Section \ref{sec:clustering}) for each sector of the
survey mask, that is, the areas of the sky covered by a unique set of
spectroscopic tiles \citep[][]{Blanton2003,Tegmark2004} which
we characterize using the {\sc Mangle} software
\citep[][]{HamiltonTegmark2004, Swanson2008}. The average completeness of
the combined CMASS sample is 98.2\%. We trim the final area of our sample to all
sectors with completeness $c \geq 0.75$, producing our final sample of
282,068 galaxies, of which 219,773 and 62,295 are located in the northern
and southern galactic caps respectively. Fig.~\ref{fig:map} shows an
Aitoff projection of the resulting survey mask in the northern (upper
panel) and southern (lower panel) regions, with effective areas
$A_{\rm eff}=\sum_i c_i \Omega_i$, where the sum extends over all
sectors contained in the mask and $\Omega_i$ corresponds to their
solid angles, 2502 deg$^2$ and 688 deg$^2$ respectively. The redshift
distribution of the CMASS sample can be seen in Fig.~\ref{fig:CMASS}
both for the northern and southern subsamples. The dashed lines show
the smoothed distributions used to create the random samples of points
for our clustering analysis (see Section~\ref{sec:clustering}). As
shown in this figure, the galaxy number density peaks at $z\simeq0.52$,
where it reaches $\bar{n}_{\rm g}\simeq3.6\times10^{-4}$ $h^3$
Mpc$^{-3}$; the mean redshift of the sample is $z=0.55$.
\subsection{Clustering measures}
\label{sec:clustering}
We characterize the clustering of the CMASS galaxy sample by means of
two-point statistics in configuration space. We measure the
angle-averaged redshift-space correlation function $\xi(s)$ and the
full two-dimensional $\xi(\sigma,\pi)$, where $\sigma$ and $\pi$ are
the components in the direction perpendicular and parallel to the line
of sight of the total separation vector ${\bf s}$. These measurements
are affected by redshift-space distortions. In order to obtain a
clustering measure that is less sensitive to these effects
we also compute the projected correlation function \citep[][]{DavisPeebles83}
\begin{equation}
\Xi(\sigma)=2\int_0^{\infty}\xi(\sigma,\pi)\,{\rm d}\pi.
\label{int_xi}
\end{equation}
In practice, we sum all pairs with $\pi<\pi_{\rm max}=100\,h^{-1}\,{\rm Mpc}$.
We compute the full correlation functions $\xi(\sigma,\pi)$
using the \citet{LandySzalay93} estimator
\begin{equation}
\xi(\sigma,\pi) = \frac{DD- 2 DR + RR}{RR}
\end{equation}
where $DD$, $DR$, and $RR$ are the suitably normalized numbers of
data-data, data-random, and random-random pair counts in each bin of
$(\sigma,\pi)$.
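As a concrete sketch, assuming for simplicity uniform weights (the weighting schemes discussed below enter simply as multiplicative factors in the pair counts), the estimator and the projected correlation function of equation~(\ref{int_xi}) can be computed from binned pair counts as follows:
\begin{verbatim}
import numpy as np

def landy_szalay(DD, DR, RR, n_d, n_r):
    # DD, DR, RR: raw pair counts on a (sigma, pi) grid
    dd = DD / (0.5 * n_d * (n_d - 1))   # normalized data-data pairs
    dr = DR / (n_d * n_r)               # normalized data-random pairs
    rr = RR / (0.5 * n_r * (n_r - 1))   # normalized random-random pairs
    return (dd - 2.0 * dr + rr) / rr

def w_p(xi_sp, dpi):
    # projected correlation: 2 * sum over the pi direction (axis 1),
    # truncated at pi_max = xi_sp.shape[1] * dpi
    return 2.0 * np.sum(xi_sp, axis=1) * dpi
\end{verbatim}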
In order to measure these quantities without introducing systematic effects,
a few important corrections must be taken into account. Here we give a brief
description of the main issues that should be considered while a more detailed
discussion will be presented in \citet{Ross2012}.
As described in the previous section, the spectroscopic CMASS sample
is constructed from a target list drawn from the SDSS photometric
observations. Even though the overall completeness of the CMASS sample
is high, it is not possible to obtain reliable redshifts for all
galaxies satisfying the selection criteria specified in
Section~\ref{sec:cmass_sample}. Which galaxies are observed
spectroscopically is determined by an adaptive tiling algorithm, based
on that of \citet{Blanton2003}, which attempts to maximize the number
of measured spectra over the survey area. As a result of this
algorithm, not all galaxies satisfying the CMASS criteria are selected
as targets for spectroscopy. Even when a fibre is assigned to a galaxy
and a spectrum is observed, it might not be possible to obtain a
reliable estimation of the redshift of the object, leading to what is
called a {\it redshift failure}. These tend to occur for fibres
located near the edges of the observed plates. This implies that it
is not possible to simply consider these redshift failures as an extra
component affecting the overall completeness of the sector since their
probability is not uniform across the field.
In order to correct for this effect we define a set of weights, $w_{\rm zf}$,
whose default value is one for all galaxies in the sample.
For every galaxy with a redshift failure, we increase by one the value of
$w_{\rm zf}$ of the nearest galaxy with a good redshift measurement.
The application of these weights effectively corrects for
the non-uniformity effects produced by redshift failures.
The main cause for the loss of objects is, however, fibre collisions
\citep[][]{Zehavi2002, Masjedi2006}. The BOSS spectrographs are fed by
optical fibres plugged on plates, which must be separated by at least
$62''$ (in the concordance cosmology this corresponds to a distance of $\sim0.27\,\mbox{$h^{-1}$\,Mpc}$
at $z\sim0.5$). It is then impossible, in any given observation, to obtain
spectra of all galaxies with neighbours closer than this angular
distance. The problem is alleviated in regions covered by multiple
exposures, but it is in general not possible to observe all objects in
crowded regions.
In this work we correct for the impact of fibre collisions on our clustering measurements
by applying the correction presented in \citet{Guo2011}. Using this method the total galaxy sample $D$
is divided into two subsamples, dubbed $D_1$ and $D_2$. These are constructed following the targeting
algorithm of the catalogue in a way that guarantees that group $D_1$ is not affected by fibre collisions,
while $D_2$ contains all collided galaxies.
Any clustering measurement of the combined sample can be obtained as a combination of the contributions
from these two groups. Based on tests on mock galaxy catalogues, \citet{Guo2011} showed that the
application of this method can accurately recover the projected and redshift-space correlation functions on
scales both below and above the fibre collision scale, providing a substantial improvement over the commonly
used nearest neighbour and angular correction methods.
We constructed random catalogues for subsamples $D_1$ and $D_2$ for the northern and southern
hemispheres with 40 times more objects than the real data
following their respective angular completenesses. The redshifts of
these random points were generated in order to follow the
distributions of the real samples, which were obtained by a smoothing
spline interpolation of the observed redshift distributions.
With the increasing size of current galaxy surveys, and the corresponding
improvement on the statistical uncertainties, the contribution of systematic errors
to the total error budget of any clustering statistic becomes increasingly important.
Due to its large volume and high number density BOSS is perhaps one of the best
examples of this. \citet{Ross2012} present a detailed analysis of the
systematic effects that could potentially affect any clustering measurement
based on the CMASS sample and show that, besides redshift failures and fibre
collisions, other important systematics must be considered in order to obtain
accurate clustering measurements.
The main result from this analysis is that these systematics can be corrected for
by applying a set of weights, $w_{\rm sys}$, which depend on both, the galaxy properties and their
positions in the sky. We consider these weights in all our clustering measurements.
Finally, we also include a set of weights to reduce the variance of the
estimator that are given by
\begin{equation}
w = (1 + n(z)J_{w})^{-1},
\end{equation}
where $n(z)$ is the mean galaxy density at redshift $z$ and $J_w$ is a free parameter.
\citet{Hamilton93} showed that setting $J_{w}=4\pi J_3(s)$,
where $J_3(s)=\int_0^s\xi(s')s'^2{\rm d}s'$, minimizes the
variance on the measured correlation function for the given scale $s$.
Here we follow the standard practice and use a scale-independent value of $J_w=2\times 10^4$.
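In code, this weighting is a one-liner; the sketch below assumes $n(z)$ is given in $h^3\,{\rm Mpc}^{-3}$ and $J_w$ in $h^{-3}\,{\rm Mpc}^{3}$:
\begin{verbatim}
def pair_weight(n_z, J_w=2.0e4):
    # variance-reducing weight of Hamilton (1993)
    return 1.0 / (1.0 + n_z * J_w)
\end{verbatim}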
\begin{figure}
\includegraphics[width=0.50\textwidth]{Figs/wp_boss.ps}
\caption{Projected correlation function times the projected distance
for the DR9 BOSS-CMASS galaxy sample in the redshift range
$0.43<z<0.7$. The blue and red shaded areas correspond to
the north and south subsamples and give an estimate of their standard
deviation. The dot-dashed lines display their mean value.
The result of combining both subsamples is shown as filled circles.
Standard deviations for the projected correlations of all samples are
estimated using an ensemble of 600 mock catalogs (see Section~\ref{sec:clustering}).
For comparison the projected correlation inferred from the first
semester of the BOSS-CMASS data is also shown \citep[open circles;][]{White2011}.
}
\label{fig:proj_CMASS_comp}
\end{figure}
Fig.~\ref{fig:proj_CMASS_comp} shows the projected correlation functions $\Xi(\sigma)$
times the projected distance of the north, south and combined CMASS samples.
The combined sample gives a similar outcome to that of the north as a result of the higher
statistics in the latter.
For comparison the projected correlation inferred from a CMASS sample
corresponding to the first semester of the BOSS observations is also
shown \citep[open circles;][]{White2011}. Besides the increase in the sample
size and the volume probed, there are differences at small and large
scales which are probably due to the different corrections for fibre
collisions and the use of the weights to correct for the systematics
affecting the galaxy density field.
Although the projected correlation functions of the northern and
southern subsamples agree within their respective uncertainties, they
show some intriguing differences. At scales in the range $\sim20$--$50\,\mbox{$h^{-1}$\,Mpc}$
the amplitude of $\Xi(\sigma)$ in the south is higher than that of the
north. Similarly, the measurements of $\xi(s)$ show the same behaviour.
However, in this case, the agreement of the mean values is somewhat
closer (see section~\ref{results}).
To estimate covariance matrices for these clustering measures, we use
a set of 600 mock catalogs designed to follow the same geometry and
redshift distribution of the CMASS sample while mimicking their
clustering properties at large scales \citep{Manera2012}.
These mocks are inspired by the {\it PTHalos} method of \citet{Scoccimarro2002},
although there are some important differences. The resulting covariances are compatible with
the results of $N$-body simulations. For a detailed description about these
mocks and their comparison with $N$-body results
\citep[see ][]{Manera2012}\footnote{Mocks will be available in http://www.marcmanera.net/mocks/}.
\section{Clustering in the $\Lambda$CDM model}
\label{MultiDark_clustering}
\subsection{The MultiDark simulation}
\label{MultiDark}
The MultiDark run (MDR1) is an $N$-body cosmological simulation of the
$\Lambda$CDM model that was done using the Adaptive-Refinement-Tree
(ART) code \citep{ART1997,ART2008}. The simulation has
2048$^{3}\approx 8.6\times 10^9$ dark matter particles in a box of
1\,$h^{-1}$\,Gpc on a side. The mass of the dark matter particle
is $8.72\times10^9$ $h^{-1}$ M$_{\sun}$. The cosmological parameters
adopted in the simulation are consistent with the latest WMAP7 results
\citep{2011ApJS..192...14J} and with other cosmological probes
\citep[see Table 1 of][]{Bolshoi}. Hence, we adopt a matter
density parameter $\Omega_{\rm M}=0.27$ and a dimensionless Hubble
parameter $h=0.7$. Initial conditions were set at the
redshift $z_{\rm init}=65$ using a power spectrum characterized by a scalar
spectral index $n_s=0.95$ and normalized to $\sigma_8=0.82$ in the same way as
done for the Bolshoi simulation \citep[see][ for a detailed description of this
simulation]{Bolshoi}. The ART code is designed in such a way that the
physical resolution is nearly preserved over time with a value of
$\sim7\,h^{-1}$\,kpc for the redshift range between $z=0$--$8$. For
further details on the ART code and MultiDark simulation
see \citet{2011arXiv1104.5130P} and references therein.
\subsubsection{Halo finding}
Dark matter haloes are identified in the simulation with a parallel
version of the Bound-Density-Maxima (BDM) algorithm
\citep{1997astro.ph.12217K,Riebe11}. The BDM is a Spherical
Overdensity (SO) code. It finds all density maxima in the
distribution of particles using a top-hat filter with 20
particles. For each maximum the code estimates the radius within which
the overdensity has a specified value. Among all overlapping density
maxima the code finds the one having the deepest gravitational
potential. The position of this maximum is the centre of a
``distinct'' halo, which is a halo whose centre is not inside the virial radius
of a bigger one. Distinct haloes are also tracers of central galaxies.
Self-bound haloes with more than 20 particles lying inside the virial
radius of a distinct halo are classified as subhaloes. Subhalo
identification is more subtle since it requires the removal of unbound
particles and identification of fake satellites. See \citet{Riebe11}
for a more detailed description of the algorithm. The BDM halo finder
was extensively tested and compared with other halo finders
\citep[][]{Knebe11,RockStar}. In Appendix~\ref{app:a} we show a comparison
between the real-space correlation function for halo catalogs selected
both with BDM and RockStar halo finders (see Fig~\ref{app:A1}).
The BDM halo catalogs for the MDR1 simulation
are publicly available at the MultiDark Database: http://www.multidark.org.
The size of a distinct halo can be defined by means of the spherical
radius within which the average density is $\Delta$ times the
critical density of the Universe, $\rho_{\rm cr}(z)$. As a
consequence, the corresponding enclosed mass is given by
\begin{equation}
M_{\Delta} =\frac{4\pi}{3}\Delta\rho_{\rm cr}(z)R_{\Delta}^3.
\label{eq:Delta}
\end{equation}
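For reference, Eq.~(\ref{eq:Delta}) can be inverted to obtain the halo
radius corresponding to a given mass. A minimal Python sketch, assuming
physical units (M$_{\sun}$, Mpc, km s$^{-1}$) and the flat cosmology
adopted above:
\begin{verbatim}
import numpy as np

G = 4.302e-9  # Mpc (km/s)^2 / Msun

def rho_crit(z, H0=70.0, Om=0.27):
    # critical density in Msun/Mpc^3 for flat LCDM
    Hz2 = H0**2 * (Om * (1.0 + z)**3 + (1.0 - Om))
    return 3.0 * Hz2 / (8.0 * np.pi * G)

def R_Delta(M_Delta, z, Delta=200.0):
    # radius enclosing a mean density Delta * rho_crit(z)
    return (3.0 * M_Delta /
            (4.0 * np.pi * Delta * rho_crit(z)))**(1.0 / 3.0)
\end{verbatim}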
\begin{figure}
\includegraphics[width=85mm]{Figs/Nsat_Ndistinct_Vmax.ps}
\caption{{\it Bottom panel:} The cumulative number density of
distinct haloes (dashed line) and subhaloes (dotted line) in
the MultiDark simulation at $z=0.53$ as a function of maximum
circular velocity. The cumulative number for all haloes is
also shown as a solid line. {\it Top panel:} The cumulative
subhalo fraction as a function of halo maximum circular
velocity. As a reference we indicate in both panels the mean number density
of the BOSS-CMASS galaxy sample and as vertical lines the
corresponding maximum circular velocity threshold ($V_{\rm cut}$)
used in the HAM procedure.}
\label{fig:cum_sat}
\end{figure}
\noindent We use a threshold overdensity of
$\Delta=200$ that corresponds to values for halo mass and radius of
$\mbox{$M_{\rm 200}$}$ and $\mbox{$R_{\rm 200}$}$. BDM catalogs also provide virial masses and
radii ($M_{\rm vir}$ and $R_{\rm vir}$) defined using the standard
overdensity $360 \, \rho_{\rm back}(z)$ (background mean density).
One of the most important characteristics of a (sub)halo is its
maximum circular velocity:
\begin{equation}
V^2_{\rm max}=\max\left[\frac{GM(<r)}{r}\right].
\end{equation}
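In the simulation $\mbox{$V_{\rm max}$}$ is measured directly from the particle data.
Purely as an illustration of the definition, for an analytic NFW profile
(an assumption made only for this sketch) one can write:
\begin{verbatim}
import numpy as np

G = 4.302e-9  # Mpc (km/s)^2 / Msun

def vcirc_nfw(r, Mvir, Rvir, c):
    # circular velocity profile of an NFW halo;
    # r, Rvir in Mpc, Mvir in Msun, c = Rvir/rs
    mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    M_enc = Mvir * mu(r * c / Rvir) / mu(c)
    return np.sqrt(G * M_enc / r)

# V_max is the maximum of the sampled profile:
r = np.linspace(1e-3, 0.5, 2000)                # Mpc
vmax = vcirc_nfw(r, 1e13, 0.5, 7.0).max()       # km/s
\end{verbatim}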
There are several advantages of using $\mbox{$V_{\rm max}$}$ to characterize a halo
as opposed to the ``virial mass''. First, $\mbox{$V_{\rm max}$}$ does not have the
ambiguity related with the definition of mass. Virial mass and radius
vary depending on the overdensity threshold used. For the often-employed
overdensity 200 and ``virial'' overdensity thresholds, the differences
in definitions result in changes in the halo radius from one
definition to another and, thus, in concentration, by a factor of
$1.2$--$1.3$, with the exact value dependent on the halo
concentration. Second and more important, the maximum circular
velocity $\mbox{$V_{\rm max}$}$ is a better quantity to characterize haloes when we
relate them to the galaxies inside these haloes. For galaxy-size
haloes the maximum circular velocity is defined at a radius of $\sim
40$~kpc, i.e., closer to the sizes of luminous parts of galaxies than
the much larger virial radius, which for the Milky-Way halo is $\sim
250~\mbox{kpc}$ \citep{Klypin2002}.
\subsection{Bridging the gap between galaxies and haloes}
\label{HAM}
Once we have the maximum circular velocities for distinct haloes and
subhaloes the implementation of the HAM prescription is simple. We start with
a monotonic assignment. We count all haloes and subhaloes that have
maximum circular velocities $V_{\rm max}$ larger than $V_{\rm cut}$, and gradually
decrease the value of $V_{\rm cut}$ until the number density of
(sub)haloes is equal to that of BOSS galaxies at $z\approx 0.5$.
The bottom panel of Fig.~\ref{fig:cum_sat} shows the number
density of (sub)haloes in the MultiDark simulation at $z=0.53$. A number density close to that
of the BOSS-CMASS sample corresponds to haloes and subhaloes with a
maximum circular velocity above 362\,\mbox{km~s$^{-1}$}, which is larger than the
completeness limit of the MultiDark simulation, i.e., $\sim180\,\mbox{km~s$^{-1}$}$.
This means that haloes and subhaloes hosting BOSS-CMASS galaxies are well
resolved. The top panel of Fig.~\ref{fig:cum_sat} shows the cumulative
subhalo fraction as a function of maximum circular velocity. For
values of $V_{\rm max}>350\,\mbox{km~s$^{-1}$}$ the subhalo fractions are
typically less than $10\%$. We will return to this point again in
Section \ref{sec:satellites}.
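The monotonic step just described amounts to sorting and thresholding.
A minimal sketch (hypothetical variable names; the catalog of
$V_{\rm max}$ values is assumed to be loaded):
\begin{verbatim}
import numpy as np

def find_vcut(vmax_all, n_target, Lbox):
    # vmax_all: V_max of all distinct haloes and subhaloes [km/s]
    # n_target: target galaxy number density [h^3 Mpc^-3]
    # Lbox:     box size [h^-1 Mpc]
    n_objects = int(round(n_target * Lbox**3))
    v_sorted = np.sort(vmax_all)[::-1]   # descending order
    return v_sorted[n_objects - 1]       # V_max of last object kept

# For the CMASS density in the MultiDark box this gives ~362 km/s:
# vcut = find_vcut(vmax_all, 3.6e-4, 1000.0)
\end{verbatim}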
\subsubsection{Halo stochasticity}
There are a number of arguments why there should be some degree of
stochasticity in the stellar mass -- circular velocity relation
\citep[e.g.,][]{Tasitsiomi04,Behroozi10,Trujillo-Gomez}. In our case
the stochasticity means that some haloes above the velocity cut host
galaxies with stellar masses smaller than the corresponding stellar mass
cut of the BOSS sample and should not be included in the sample. Simultaneously,
some smaller haloes may host galaxies with a larger stellar
mass, and should be considered. Because the number density of galaxies is
fixed by observations, the numbers of included and excluded haloes must be
equal. Following \citet{Trujillo-Gomez} we implement this process
using a Gaussian spread with an offset. If $V_{\rm cut}$ is the
velocity cut in the monotonic assignment, then a (sub)halo is taken if
its maximum circular velocity $V_{\rm max}$ satisfies the condition
\begin{equation}
V_{\rm max}\left[1+ \mathcal{N}(0,\sigma)\right]-\Delta V > V_{\rm cut},
\end{equation}
where $\mathcal{N}(0,\sigma)$ is a Gaussian random number with mean zero and
{\it rms} $\sigma$. The offset $\Delta V$ is needed to compensate for the larger
influx of smaller haloes. We use $\sigma =0.2$ and $\Delta V
=18\,\mbox{km~s$^{-1}$}$, which is similar to the values adopted by \citet{Trujillo-Gomez}.
Note that the offset $\Delta V$ and the spread $\sigma$ are not free
parameters. The offset is just a normalization. The value of $\sigma$ is
defined by the spread of the observational Baryonic Tully-Fisher
relation (or its equivalent for early-type galaxies), which has
uncertainties \citep[e.g.,][]{Trujillo-Gomez}. The stochastic
assignment has a very small effect on clustering for scales larger than
$0.5\;\mbox{$h^{-1}$\,Mpc}$ decreasing the correlation functions no more than $\sim 8\%$.
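The stochastic selection implements the condition above directly.
A minimal sketch (again with hypothetical variable names):
\begin{verbatim}
import numpy as np

def stochastic_ham(vmax_all, vcut, sigma=0.2, dv=18.0, seed=0):
    # keep a (sub)halo if V_max*(1 + N(0,sigma)) - dv > vcut
    rng = np.random.default_rng(seed)
    scatter = rng.normal(0.0, sigma, size=vmax_all.size)
    return vmax_all * (1.0 + scatter) - dv > vcut

# selected = stochastic_ham(vmax_all, vcut=362.0)
\end{verbatim}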
\subsubsection{Subhalo tidal stripping}
In order to apply our HAM technique we use the maximum circular velocity at
$z=0$ as a proxy, which is a quantity that can be easily measured for haloes in our simulation.
However, it is generally accepted that, for subhaloes, a better characteristic would
be the peak value of the maximum circular velocity, $V_{\rm peak}$, during subhalo
evolution \citep[e.g.,][]{conroy06,Trujillo-Gomez}.
The latter is motivated by the tidal stripping effect: once a halo falls into the
potential well of a larger one some of its material can be stripped away, thus
lowering the value of $V_{\rm max}$. Since in real galaxies stars occupy the inner regions
of subhaloes, where tidal forces are much weaker, their circular velocities should
be, in general, less influenced by this effect.
We expect the tidal stripping of BOSS-CMASS satellites, though present, not to be
dominant, thus allowing us to use $V_{\rm max}$ for subhaloes as a reliable proxy for the HAM technique.
In this case, satellites with masses of $\sim 10^{13}\,h^{-1}$ M$_{\sun}$ are typically
located at large distances from their central hosts, whose masses can reach even larger values
of $\sim 10^{14}$--$10^{15}\,h^{-1}$ M$_{\sun}$ (see Section~\ref{sec:satellites}).
To estimate the magnitude of the potential stripping effect in these systems we run a series
of simple simulations. Using a direct-summation $N$-body code, we study the idealized
case of a satellite orbiting its central host, where the latter is modeled as a static
Navarro-Frenk-White (NFW) potential \citep[][]{NFW96}. Initially, the satellite was set as a distribution of
particles following an equilibrium NFW distribution with isotropic velocities. The mass per particle
and force softening were set to $8\times10^7\,h^{-1}$ M$_{\sun}$ and $0.1\,\mbox{$h^{-1}$\,kpc}$ respectively.
Particle mass decreases with decreasing distance to the central halo as a way to achieve a better
mass resolution in the central regions. In order to check for equilibrium stability and numerical effects
we did a test run for an isolated satellite, i.e. without considering an external tidal field,
finding that its maximum circular velocity was well preserved during the entire evolution of the system,
which was set to 5~Gyr.
We study two different cases for a satellite of mass $M_{\rm sat}=10^{13}\,h^{-1}$ M$_{\sun}$, alternatively
assuming either $M_{\rm host}=10^{14}\,h^{-1}$ M$_{\sun}$ or $M_{\rm host}=10^{15}\,h^{-1}$ M$_{\sun}$ for the
mass of the central host. Halo concentrations were selected to follow the results of \citet{2011arXiv1104.5130P}, thus they
were taken to be $c_{\rm vir}=8.2, 6.9, 5.8$, in order of increasing halo mass. Stripping severely depends on the distance
to the centre. For instance, for a central system with mass $M_{\rm host}=10^{14}\,h^{-1}$ M$_{\sun}$, we find that
a satellite with a pericentre (apocentre) of $100\,(500)\,\mbox{$h^{-1}$\,kpc}$ loses around half
of its maximum circular velocity in 5~Gyr. However, this is not typical of satellites in large galaxy clusters.
We find that, for both host halo masses, for satellites falling from the virial radius with
apocentre-to-pericentre ratios of $\sim 4:1$ -- $3:1$ the tidal stripping is much less efficient,
changing their maximum circular velocities by only $15$--$20$\% after 5~Gyr. The effect is much smaller after the first $\sim2$~Gyr
of evolution, producing a variation of less than 5\%.
Since, in this work, the minimum studied physical scale is $\gtrsim 0.5$ Mpc, it is expected
that most of the BOSS-CMASS satellites have spent most of their time at larger distances from their central haloes, where
the impact of tidal forces is small. Thus, considering the relatively small change of the subhalo
maximum circular velocities due to tidal stripping, we use $V_{\rm max}$ as a proxy for our HAM instead of $V_{\rm peak}$.
Interestingly, \citet{Watson2012}, based on a subhalo evolution model applied to clustering measurements in the SDSS,
suggest that tidal stripping of stars in luminous galaxies is much less efficient than in less luminous systems, which
provides additional support to our choice.
\begin{figure}
\includegraphics[width=85mm]{Figs/2D_cf.ps}
\caption{
Contours of the two-dimensional correlation function
$\xi(\sigma,\pi)$ estimated from the DR9 BOSS-CMASS north
galaxy sample (dashed contours) at $0.4<z<0.7$ and for our MultiDark
halo catalog constructed using the HAM technique at $z=0.53$
(solid contours).}
\label{fig:sig_pi_MD}
\end{figure}
\subsection{Modeling BOSS-CMASS clustering}
\label{Modeling}
We use the MultiDark BDM catalogs constructed for the overdensity $360
\, \rho_{\rm back}(z)$ to facilitate the comparison with the HOD
modeling presented in \citet{White2011}. However, as stated before, our results do not
depend on halo mass definition since halo matching is done using the
maximum circular velocity $V_{\rm max}$ of either distinct haloes or
subhaloes. We use redshift $z=0.53$, which is close to the peak value
of the BOSS-CMASS sample (see Fig.~\ref{fig:CMASS}).
To model the effect of galaxy peculiar velocities in the redshift
measurements, we transform the coordinates of our (sub)haloes to
redshift-space using ${\bf s}={\bf x}$ + ${\bf v\cdot\hat{r}}/(aH)$,
where ${\bf x}$ and ${\bf v}$ are their position and peculiar velocity
vectors respectively, $a$ is the scale factor and $H$ is the Hubble constant.
We compute the two-dimensional correlation function $\xi(\sigma,\pi)$ of
our catalog counting the number of ``galaxy'' tracers in bins parallel
($\pi$) and perpendicular ($\sigma$) to the line-of-sight. When
estimating the projected correlation function, we count all pairs
along the parallel direction out to $\pi_{\rm max}\sim100\,\mbox{$h^{-1}$\,Mpc}$.
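In a periodic box this mapping is usually applied along one Cartesian
axis. A minimal sketch under the plane-parallel approximation (an
assumption of this illustration, not of the actual analysis):
\begin{verbatim}
import numpy as np

def to_redshift_space(x, v, a, H, los=2, Lbox=1000.0):
    # x, v: (N, 3) arrays of comoving positions [h^-1 Mpc] and
    # peculiar velocities [km/s]; H in km/s per h^-1 Mpc
    s = x.copy()
    s[:, los] += v[:, los] / (a * H)   # s = x + (v.rhat)/(aH)
    s[:, los] %= Lbox                  # periodic wrapping
    return s
\end{verbatim}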
\begin{figure*}
\includegraphics[width=86mm]{Figs/wp_mdark_scatter.ps}
\includegraphics[width=86mm]{Figs/wp_por_sigma_mdark_scatter.ps}
\caption{{\it Left panel:}
Projected correlation function for the $0.4<z<0.7$ DR9
BOSS-CMASS north, south and Full galaxy samples (open blue triangles,
open red circles and filled black circles respectively) and the MultiDark catalog selected
with the HAM procedure at $z=0.53$ (solid line). The shaded area for MultiDark gives an
estimate of the cosmic variance. BOSS-CMASS error bars were estimated using an
ensemble of 600 mock galaxy catalogs (see Section~\ref{sec:clustering}). For clarity, only error bars for the
combined sample are shown. The corresponding ones for the north and
south are a factor of about 1.13 and 2.15
times larger respectively. The transition between the one-halo and two-halo terms can be
seen at $\sim1\,\mbox{$h^{-1}$\,Mpc}$. Flattening of the signal at intermediate scales and bending
at large scales are also evident features. {\it Right panel:}
Detailed differences between the $\Lambda$CDM model and BOSS
clustering are better seen when plotting the quantity
$\Xi(\sigma)\,\sigma$.}
\label{fig:wp_MD}
\end{figure*}
\begin{figure*}
\includegraphics[width=88mm]{Figs/xi_s_carmen_mdark_scatter.ps}
\includegraphics[width=88mm]{Figs/xi_por_s_carmen_mdark_scatter.ps}
\caption{{\it Left panel:}
Redshift-space correlation function for the $0.4<z<0.7$ DR9
BOSS-CMASS north, south and Full galaxy samples (open blue triangles,
open red circles and filled black circles respectively) and the MultiDark catalog selected
with the HAM procedure at $z=0.53$ (solid line). Standard deviations for the model and observations
are shown in the same way as in Fig.~\ref{fig:wp_MD}. {\it Right panel:} Shown is the quantity
$\xi(s)\,s^2$ which better reflects the differences between our $\Lambda$CDM model and
BOSS clustering measures.}
\label{fig:z_MD}
\end{figure*}
To estimate the cosmic variance we use a set of simulations
from the Large Suite of Dark Matter Simulations
({\it LasDamas}; see http://lss.phy.vanderbilt.edu/lasdamas/). We use
mock galaxy catalogs extracted from the {\it Carmen} boxes, which are
40 dark matter-only low-resolution runs done with $1120^3$ particles
in a periodic cube with $1\,h^{-1}$\,Gpc on a side. In this way, we can
get a simple estimate of the expected {\it rms} deviations from our
fiducial MultiDark result due to random fluctuations in the
initial conditions of the universe. The dark matter density and scalar spectral
index of the {\it Carmen} simulations display a difference of
about $8\%$ in comparison to the corresponding
values of MultiDark. However, since here we only want to obtain an
estimate for the magnitude of the cosmic variance, we consider this
approach as good enough for this purpose.
As already mentioned in Section~\ref{sec:clustering}, to estimate
the covariance matrices of observed correlation functions we use a set of 600
galaxy mocks designed to follow the same geometry and redshift distribution of
the CMASS sample, while mimicking its clustering properties.
\cite{Manera2012} show that the covariances for the correlation functions of $N$-body
simulations are consistent with those resulting from the mocks.
This means that it is safe to compare the cosmic variance of MultiDark
(estimated from the {\it Carmen} set of simulations) with that resulting from the
mock galaxy catalogs.
\section{Clustering of galaxies in the BOSS-CMASS sample}
\label{results}
The two-dimensional correlation function $\xi(\sigma,\pi)$ for
the north subsample of BOSS-CMASS is presented in Fig.~\ref{fig:sig_pi_MD} for
distances up to $\sim20\,\mbox{$h^{-1}$\,Mpc}$ (dashed contours). The Finger-Of-God elongation along
the line-of-sight direction at small perpendicular separations, which
is due to galaxy small-scale random velocities, is clearly seen. The
flattening of contours at larger projected scales is due to the Kaiser
effect caused by large-scale infall velocities
\citep{Kaiser1987}. Predictions for the clustering of galaxies obtained
from the MultiDark cosmological simulation (solid contours)
produce a fair representation of the measured clustering in the
CMASS sample. Nevertheless, there are some deviations. At small
separations, $\sigma\lesssim 2\;\mbox{$h^{-1}$\,Mpc}$, observations show more clustering
as compared with results from the simulation. The situation reverses at large scales,
where our cosmological simulation predicts slightly stronger clustering.
These tendencies are clearly seen in the correlation functions presented
in Figs.~\ref{fig:wp_MD} and~\ref{fig:z_MD}. The north, south and combined
CMASS samples are shown together with the result of our simple HAM model.
The shaded area for MultiDark gives an estimate of the cosmic variance
as computed from {\it LasDamas} suite of simulations.
Again, the overall agreement at all scales is quite good showing a remarkable
match with observations. However, as noted before, there are some noticeable discrepancies at
small and intermediate scales. The detailed differences between the projected
correlation function and MultiDark
can be better seen in the right panel of Fig.~\ref{fig:wp_MD}, where differences in the
correlations are amplified after multiplying by the corresponding projected distance.
The disagreement at scales $\lesssim1\,\mbox{$h^{-1}$\,Mpc}$ is perhaps related to the simple
stochastic HAM adopted here. At large scales, starting from $\sim 20\,\mbox{$h^{-1}$\,Mpc}$, the
theoretical estimates lie slightly above the observational estimates
of the northern galaxy subsample (at $\sim 1\sigma$ level), which has about
four times larger statistics than the corresponding southern sample.
\begin{figure}
\includegraphics[width=90mm]{Figs/mean_abund_sat.ps}
\caption{
The mean occupancy of all haloes in our MultiDark sample used to
match the BOSS-CMASS observations as a function of halo mass
(open squares). Open circles and dashed line correspond to
satellites and central haloes respectively. Error bars
are calculated assuming Poisson statistics in the counting.
The fit given by Eq.~(\ref{eq:fsat}) is shown as a
dot-dashed line.}
\label{fig:sat2}
\end{figure}
The redshift-space clustering results, both for the CMASS sample and
the $\Lambda$CDM model given by the MultiDark simulation,
are shown in Fig.~\ref{fig:z_MD}. As before,
the shaded area represents cosmic variance estimates and differences
between model and observations are better seen in the right panel.
Peculiar velocities of galaxies inside virialized systems reduce the
clustering signal thus lowering the slope of the correlation function
at scales of $1$--$2\,\mbox{$h^{-1}$\,Mpc}$.
For scales in the range $\sim0.6$--$1\,\mbox{$h^{-1}$\,Mpc}$ our model underpredicts the observed values,
as already shown for the case of the projected correlation function.
The agreement between the simulation measurement and observed redshift-space correlation
function is quite remarkable at scales $\gtrsim 1\,\mbox{$h^{-1}$\,Mpc}$.
Differences are less than 2--3\% over a wide range of distances, from
$2\,\mbox{$h^{-1}$\,Mpc}$ to $20\,\mbox{$h^{-1}$\,Mpc}$. At $20$--$40\,\mbox{$h^{-1}$\,Mpc}$ the MultiDark results overpredict
the observed clustering by about $\sim 10\%$. Statistically the
differences are significant: the effect is about $3\sigma$ at $\sim30\,\mbox{$h^{-1}$\,Mpc}$
(e.g., at $s=33.5\,\mbox{$h^{-1}$\,Mpc}$ the redshift-space correlation function for the combined CMASS sample
and MultiDark give $\xi_{\rm N+S}(s)=0.077\pm0.004$ and $\xi_{\rm MD}(s)=0.091\pm0.003$, respectively).
The small differences between the $N$-body results and observations may
be alleviated if we use a more sophisticated HAM procedure
including, for instance, light-cone effects and a match to
the stellar mass distribution at these redshifts. Nevertheless,
the high level of agreement found between the data and the
model using the simple HAM procedure adopted here is a striking
result.
\section{The mean halo occupancy of BOSS-CMASS galaxies}
\label{sec:satellites}
Our analysis allows us also to study the halo occupation distribution and
the satellite fraction of BOSS-CMASS galaxies at $z\sim0.5$. The main
advantage of the MultiDark simulation is that it has sufficient
resolution to resolve
satellites around central distinct haloes. The satellite distribution
around massive haloes can be directly studied from the resulting halo
catalogs. As shown previously in the top panel of
Fig.~\ref{fig:cum_sat}, the fraction of satellites for haloes with a
number density close to that of the CMASS sample is less than
$10\%$. In particular, for haloes having $V_{\rm max} \geq 362$ km
s$^{-1}$, which corresponds to a number density of $3.6\times10^{-4}$
$h^{3}$ Mpc$^{-3}$, the resulting satellite fraction is $6.8\%$. The
HOD modeling by \citet{White2011}, using the first semester of
BOSS data, reported a satellite fraction $(10 \pm 2)\%$ which is
reduced to $(7 \pm 2)\%$ when they ignore in their fit to the
correlation function the very small scales affected by fibre
collisions. Note that our HAM procedure is non-parametric and provides
satellite fractions consistent with our $\Lambda$CDM cosmological
simulation. By contrast, the halo-occupancy distribution and satellite fractions
from HOD modeling are obtained from a fit to the empirical correlation
function.
\begin{figure}
\includegraphics[width=90mm]{Figs/M1_Mcut_ndens_MDARKV.62.ps}
\caption{
MultiDark HOD parameters, $M_{\rm cut}$ and $M_{1}$, as a
function of number density (solid line) using the simple HAM
prescription at $z=0.53$. We compare our results with a variety
of intermediate redshift massive galaxy samples. The data are
taken from \citet{Phleps06}, \citet{Mandelbaum06}, \citet{Kulkarni07},
\citet{Blake08}, \citet{Brown08}, \citet{Padmanabhan09}, \citet{Wake08}
and \citet{Zheng09}. Filled circles show results from the HOD
analysis of early BOSS data by \citet{White2011} (see text).
}
\label{fig:M1_Mcut}
\end{figure}
Fig.~\ref{fig:sat2} shows the mean occupancy of haloes for the
BOSS-CMASS sample as obtained from the MultiDark halo abundance-matching
scheme. The dashed line and open circles are the contributions of distinct haloes
and subhaloes respectively. Open squares correspond to the total occupancy
of haloes, including both central and satellite galaxies from our halo
catalog. Distinct haloes display a sharp transition around
$M_{\rm vir}\gtrsim10^{13}$ $h^{-1}$ M$_{\sun}$. The mean number of satellite
galaxies as a function of halo mass can be modeled with the following
expression \citep[e.g.,][]{Wetzel10}
\begin{equation}
\bar{N}_{\rm sat}(M)=\left(\frac{M}{M_1}\right)^{\alpha} e^{-M_{\rm cut}/M},
\label{eq:fsat}
\end{equation}
\noindent where $\log M_{\rm cut}=13.07\pm0.40$,
$\log M_{1}=14.25\pm0.17$ and $\alpha=0.94\pm0.42$ (dot-dashed line). Here, $M_{1}$ is the halo mass
which hosts, approximately, one satellite and $M_{\rm cut}$ governs the
strength of the transition between haloes with and without
satellites. For high halo masses, fluctuations in the determination of
the satellite occupancy arise because we are dealing with small number
statistics as a result of the fixed volume of the simulation.
The solid line in Fig.~\ref{fig:sat2} shows the total mean halo occupancy but using in this
case the best fit model for the satellite distribution in order to
extrapolate the result towards higher masses.
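With the best-fitting parameters quoted above, Eq.~(\ref{eq:fsat}) can be
evaluated directly; a minimal sketch:
\begin{verbatim}
import numpy as np

def nsat_mean(M, logM1=14.25, logMcut=13.07, alpha=0.94):
    # mean satellite occupancy of Eq. (fsat); M in h^-1 Msun
    M1, Mcut = 10.0**logM1, 10.0**logMcut
    return (M / M1)**alpha * np.exp(-Mcut / M)

# e.g. a 10^14.5 h^-1 Msun halo hosts on average
# nsat_mean(10**14.5) ~ 1.7 satellites
\end{verbatim}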
In Fig.~\ref{fig:M1_Mcut} we compare the HOD parameters, $M_{\rm cut}$
and $M_{1}$, obtained from MultiDark at $z=0.53$ as a function of
number density (solid lines) following our HAM scheme. We also show
estimates for a variety of intermediate redshift massive galaxy
samples from the literature, including the HOD results from
\citet{White2011} for the early BOSS data sample. This compilation of
different datasets has been kindly provided by M. White.
Error bars on the individual points are typically $\sim0.1$ dex, as represented by the
size of the symbols. The agreement between the MultiDark HAM
predictions and data from different surveys is remarkable if one
considers the differences in sample selection, redshift range and HOD
methods. Our estimates for the HOD parameters of the
BOSS-CMASS sample yield values consistent with those of
White et al.'s HOD analysis, which are contained within our error bars.
Nevertheless, it is worth noting here that \citet{White2011} did not consider
weights in the estimation of the correlation function used, which could have
an impact on the derived parameters, and that our approach relies completely
on our halo catalog.
\begin{figure}
\includegraphics[width=85mm]{Figs/bias.ps}
\caption{Scale-dependent galaxy bias at $z=0.53$ predicted for
BOSS-CMASS galaxies using the MultiDark simulation. The solid
curve shows the bias relative to the dark matter (Eq.~(\ref{eq:bias_xi})).
The bias relative to the linear-theory predictions is shown as a dashed line.
}
\label{fig:bias_xi}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{Figs/powerRaw.ps}
\includegraphics[width=0.45\textwidth]{Figs/powerRawBolsh2.ps}
\caption{{\it Left panel:} Recovering the power spectrum: shot-noise and
density assignment corrections. The top solid thin curve shows the
``raw'' estimate of the power spectrum at $z=0.53$ for haloes and
subhaloes with circular velocities larger than $\mbox{$V_{\rm max}$}>362~\mbox{km~s$^{-1}$}$
corresponding to a number density close to that of galaxies in the
BOSS sample $n=3.6\times10^{-4}\,h^3\,{\rm Mpc}^{-3}$. The dot-dashed line
is the combined correction in Eq.~(\ref{eq:noise}) due to the
shot-noise and the density assignment. The vertical line shows the
Nyquist frequency. The thick solid line is the recovered power
spectrum. The dashed line shows the linear power spectrum of dark
matter density perturbations scaled up to match the amplitude of the
recovered power spectrum at long waves. {\it Right panel:} Comparison
between the recovered power spectra for haloes+subhaloes
with $\mbox{$V_{\rm max}$}>200~\mbox{km~s$^{-1}$}$ in the MultiDark (solid line) and the
Bolshoi (dashed line) simulations at $z=0$. Deviations at $k<0.1\,h\,{\rm Mpc}^{-1}$ are due
to cosmic variance. The deviations at $k>5\,h\,{\rm Mpc}^{-1}$ are due to
density assignment effects in the MultiDark simulation. However, for wave-numbers
in the range $0.2\,h\,{\rm Mpc}^{-1}<k<5\,h\,{\rm Mpc}^{-1}$ the resulting
power spectra are not affected by cosmic variance and resolution and the agreement
between simulations is excellent, with deviations less than just few percent. }
\label{fig:powerStart}
\end{figure*}
\section{Power spectrum and biases}
\label{bias}
In this section we focus on the abundance-matched halo catalog to the
BOSS-CMASS galaxy sample, presenting further predictions from the MultiDark
simulation that can be tested with future observations.
Using the resulting halo sample and dark matter particles from the simulation
we can estimate the bias of the halo population with respect to the underlying
mass distribution as follows
\begin{equation}
b(r) \equiv \sqrt{\frac{\xi_{\rm h}(r)}{\xi_{\rm m}(r)}}
\label{eq:bias_xi}
\end{equation}
\noindent where $\xi_{\rm h}(r)$ and $\xi_{\rm m}(r)$ are the real space correlation
functions for the MultiDark haloes and dark matter in the volume at the redshift of interest.
This bias is shown in Fig.~\ref{fig:bias_xi} as a function of spatial scale (solid line).
The resulting bias is $b\sim2$ at the transition scale of $\sim1$ $h^{-1}$ Mpc and,
as expected, increases strongly for smaller scales where galaxies are more strongly clustered
with respect to the dark matter. For the remaining scales we can constrain the bias factor
to be in the range $b\approx1.8$--$2.2$. Interestingly, this result is in very good agreement with
the findings of \citet{Ho2012}. These authors found a galaxy bias of $b=1.98\pm0.05$ in the redshift
range $z=0.50$--$0.55$ by studying the angular clustering of the photometric CMASS sample.
The bump-like feature between $\sim 1$--$10$ $h^{-1}$ Mpc is related to the transition
between the one- and two-halo terms in the correlation function,
while for larger scales the bias factor tends to decrease. The linear
bias estimation is shown as a dashed line, where the linear matter
correlation function is used instead. As expected, the linear bias at
small scales differs strongly from the non-linear result while
approaching more similar values at larger scales.
We have a number of goals with the analysis of the power spectrum and
biases: (1) We want to present accurate approximations for the
numerical results, which can be used for comparison of observational
results with predictions of the cosmological model used in our
simulations. It is more convenient to use these approximations instead
of having to deal with raw simulations. (2) The high quality of our
results allows us to study effects which are difficult to measure
with low-resolution simulations.
One should clearly understand the role of the standard $\Lambda$CDM model with
the particular set of cosmological parameters used for our simulations.
Our results show that, once we match the abundance of haloes, the
model reasonably reproduces a wide range of scales of the observed
projected and redshift-space correlation
functions. In principle, one can invert the correlation function to obtain
the power spectrum. However, in practice, a model-independent inversion is a technically complicated
process. This is why we chose a different approach: we use
the power spectrum of haloes in the model as a proxy of the power
spectrum of galaxies in BOSS.
We use two other sets of simulations in addition to MultiDark.
The first one is the already mentioned {\it Carmen} series of 40 simulations
of the {\it LasDamas} suite of simulations that allow us to estimate
the effect of cosmic variance.
These mock galaxy catalogs are produced with an HOD model with parameters aimed at fitting the
respective SDSS galaxy samples \citep[][]{2009AAS...21342506M}. Note that, as before,
we use only relative model-to-model deviations in the {\it Carmen}
simulations: error bars in our results are obtained in this way. Secondly, we
also use results of the Bolshoi simulation \citep{Bolshoi}. This simulation has a
factor of $\sim 5$ better mass and force resolution, but it was performed
for a smaller simulation box ($250~\mbox{$h^{-1}$\,Mpc}$ on a side). There is an overlap
between the MultiDark and Bolshoi simulations: the simulation volume of
Bolshoi is large enough to study haloes (and subhaloes) with circular
velocities of $\sim 200~\mbox{km~s$^{-1}$}$. At the same time, these (sub)haloes are
reasonably well resolved in the MultiDark simulation having more than 100
particles. Comparison of MultiDark and Bolshoi power spectra for these
haloes allows us to look for biases at scales $k>0.1\,h\,{\rm Mpc}^{-1}$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figs/powerRawK2.ps}
\caption{Power spectra (multiplied by $k^{1.5}$) of dark matter haloes in real space (open circles with
error bars) for haloes with $V_{\rm max}>362~\mbox{km~s$^{-1}$}$ (top) and
$V_{\rm max}>180~\mbox{km~s$^{-1}$}$ (bottom). Solid curves show the linear power spectra
scaled to match the amplitude of fluctuations at long waves. The
four vertical lines indicate the positions of maxima due to BAO. The BAO
peaks in the linear spectrum give rise to peaks in the power
spectrum of haloes.}
\label{fig:powerRaw}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figs/powerBias.ps}
\caption{{\it Bottom panel}: Real-space bias factor
$b(k)=(P_{\rm gg}/P_{\rm linear})^{1/2}$ for haloes with circular velocities
$V_{\rm max}=180,200,220,250,300$ and $362~\mbox{km~s$^{-1}$}$ (from bottom to top).
{\it Top panel}: Bias factor for different haloes normalized to unity at
long-waves. The bias factor $b(k)$ depends on the circular velocity
$V_{\rm max}$ and on wave-number $k$ in a rather complicated way. There
are small depressions in the bias factor at peaks of BAOs.
When normalized to the long-wave value
$b_0$, the bias factor is slightly smaller for less massive
haloes. However, the main effect is the overall shift $b_0$.}
\label{fig:biasVc}
\end{figure}
To estimate power spectra, we use a large density mesh
of $4096^3$ cells and then we apply the standard FFT method.
The Cloud-In-Cell density assignment scheme is used to
calculate the density fields from the coordinates of haloes in the
simulations. However, before the power spectra can be reliably used
two corrections should be applied: a correction due to the density
assignment \citep[][]{2005ApJ...620..559J}
and the usual shot-noise correction. If the number density of objects
is $n=N/L^3$ and the Nyquist wave-number
is $k_{\rm Ny}=\pi N_{\rm grid}/L$, then
the corrected power spectrum is given by
\begin{equation}
P(k) = P_{\rm raw}(k) -\frac{1}{n}\left[1-\frac{2}{3}
\sin^2\left(\frac{\pi k}{2k_{\rm Ny}}\right) \right],
\label{eq:noise}
\end{equation}
\noindent where $L$ is the length of the computational box and
$N_{\rm grid}=4096$. This approximation is known to work well for
$k<0.7k_{\rm Ny}$ \citep[][]{2005ApJ...620..559J,2008ApJ...687..738C}.
However, to remain on safe ground we decided to limit our
analysis to $k < 0.4k_{\rm Ny}=5\,h\,{\rm Mpc}^{-1}$.
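Eq.~(\ref{eq:noise}) amounts to a one-line correction of the raw
spectrum. A minimal sketch with the MultiDark mesh parameters:
\begin{verbatim}
import numpy as np

def correct_power(k, P_raw, n, k_ny):
    # combined shot-noise and CIC assignment correction
    noise = (1.0 - (2.0 / 3.0) *
             np.sin(np.pi * k / (2.0 * k_ny))**2) / n
    return P_raw - noise

k_ny = np.pi * 4096 / 1000.0   # ~12.9 h/Mpc for the MultiDark mesh
# P = correct_power(k, P_raw, n=3.6e-4, k_ny=k_ny)
# and keep only k < 0.4 * k_ny
\end{verbatim}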
The left panel of Fig.~\ref{fig:powerStart} illustrates the procedure
of shot-noise and density corrections using a halo
sample with $V_{\rm max}>362$ km s$^{-1}$ extracted
from the MultiDark simulation at $z=0.53$.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figs/powerApprox.ps}
\includegraphics[width=0.47\textwidth]{Figs/powerApproxBAO.ps}
\caption{Real-space bias factor for haloes with circular velocities
larger than $V_{\rm max}=362~\mbox{km~s$^{-1}$}$. {\it Top panel:} The bias factor
normalized to the long-wave value $b_0$ (bottom panel, solid line) is compared
with the analytical approximation given by
Eq.~(\ref{eq:bk2}) (dashed line). The top panel displays the relative error in
percentages of the analytical approximation (filled circles). Error bars show
the {\it rms} fluctuations due to the cosmic variance. {\it Bottom panel:}
Deviations of the bias from the ``de-wiggled'' component of the
bias factor given by Eq.~(\ref{eq:bk2}). Open circles show the
relative deviations $b(k)/b_{\rm no-wiggle}-1$ for each wave-number. The
solid line is an analytical model for the residuals: the sum of
exponential terms in Eq.~(\ref{eq:bk2}). Error bars show
the {\it rms} fluctuations due to cosmic variance.}
\label{fig:bias362}
\end{figure}
In the right panel of Fig.~\ref{fig:powerStart} we compare results
of the MultiDark and Bolshoi simulations. Just as one may expect,
there are some deviations at long waves due to the cosmic variance:
the Bolshoi box of $250~\mbox{$h^{-1}$\,Mpc}$ is too small to accurately probe these
waves. There are also deviations at short waves that correspond
to $k>7\,h\,{\rm Mpc}^{-1}$ that are mainly due to the difference in
density assignment between both simulations. For the Bolshoi simulation,
the adopted mesh corresponds to a Nyquist frequency four times higher
than that of MultiDark. However, for wave-numbers in the
range $0.2\,h\,{\rm Mpc}^{-1}<k<5\,h\,{\rm Mpc}^{-1}$
the agreement between the simulations is remarkably good.
This agreement is especially important for short waves, where
both resolution and shot-noise could have corrupted the results.
However, since this has not happened, it indicates that the obtained
power spectrum for MultiDark can be trusted
up to, at least, $k=5\,h\,{\rm Mpc}^{-1}$.
Fig.~\ref{fig:powerRaw} shows power spectra of haloes with circular
velocity cuts $\mbox{$V_{\rm max}$}>362~\mbox{km~s$^{-1}$}$ (top curves) and $\mbox{$V_{\rm max}$}>180~\mbox{km~s$^{-1}$}$
(bottom curves). To highlight BAO features, we actually plot the power
spectra of the halo distribution multiplied by $k^{1.5}$. As a result, the first four
peaks in the spectra are clearly seen in the plot. However, they are somewhat
smeared out by the non-linear evolution. As expected, the smearing increases for
larger wave-numbers where the non-linearity is more important.
In what follows, we define the bias factor by
\begin{equation}
b(k,\mbox{$V_{\rm max}$}) \equiv \left[ \frac{P_{\rm gg}(k,\mbox{$V_{\rm max}$})}{P_{\rm linear}(k)}\right]^{1/2},
\label{eq:biasdef}
\end{equation}
\noindent where $P_{\rm linear}(k)$ is the linear power spectrum of the dark
matter and $P_{\rm gg}(k,\mbox{$V_{\rm max}$})$ is the power spectrum of haloes and
subhaloes with circular velocities larger than $\mbox{$V_{\rm max}$}$. In order to
distinguish the latter from the often used non-linear
dark matter power spectrum or from the power spectrum of distinct haloes,
we use subscript ``gg'' to indicate that our results mimic galaxies.
We start our analysis with the long-wave normalization of the bias parameter
for different velocity cuts and, thus, for different number densities
of our ``galaxies''. The bottom panel
in Fig.~\ref{fig:biasVc} shows $b(k,\mbox{$V_{\rm max}$})$ for different
velocities. At all wave-numbers the bias $b(k,\mbox{$V_{\rm max}$})$ increases
with increasing $\mbox{$V_{\rm max}$}$. The top panel shows that when normalized to
the long-wave value, $b_0(\mbox{$V_{\rm max}$})$, the bias factor is nearly the same.
However, there is some residual dependence on \mbox{$V_{\rm max}$}, i.e., the deviations
of the bias from one velocity cut to another can be as large as 15\% and this
should be taken into account if an accurate fit is needed. An approximation
for the real-space long-wave bias factor as a function of the average
number density of dark matter haloes $n(>\mbox{$V_{\rm max}$})$ or $\mbox{$V_{\rm max}$}$ is
presented below:
\begin{eqnarray}
\centering
b_0(n)&=&1+0.57\log_{10}\left(\frac{2.05\times 10^{-2}\,h^3\,\mbox{Mpc}^{-3}}{n}\right), \\
b_0(\mbox{$V_{\rm max}$})&=&1+\left(\frac{\mbox{$V_{\rm max}$}}{361~\mbox{km~s$^{-1}$}} \right)^{4/3}.
\label{eq:b0}
\end{eqnarray}
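Both fits are straightforward to evaluate; a minimal sketch:
\begin{verbatim}
import numpy as np

def b0_from_density(n):
    # long-wave bias vs. number density [h^3 Mpc^-3]
    return 1.0 + 0.57 * np.log10(2.05e-2 / n)

def b0_from_vmax(vmax):
    # long-wave bias vs. V_max [km/s]
    return 1.0 + (vmax / 361.0)**(4.0 / 3.0)

# both give b0 ~ 2.0 for the CMASS-like sample:
# b0_from_density(3.6e-4) and b0_from_vmax(362.0)
\end{verbatim}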
We now focus our analysis on the bias factor of haloes with $\mbox{$V_{\rm max}$}>362~\mbox{km~s$^{-1}$}$ at $z=0.53$, whose abundance
$n=3.6\times10^{-4}\,h^{3}\;{\rm Mpc}^{-3}$ matches that of BOSS galaxies.
The top panel of Fig.~\ref{fig:bias362} shows the bias factor of these haloes
normalized to the value $b_0(\mbox{$V_{\rm max}$})=2.01$ found at long waves.
Overall, the bias factor is nearly flat at long waves and increases
monotonically towards short waves. The following approximation for the smooth
component of the real-space bias factor gives an accuracy better than $4\%$:
\begin{equation}
b(k,\mbox{$V_{\rm max}$}) = b_0(\mbox{$V_{\rm max}$})\left[1+\log_{10}(1+3k^{1.8}+5.8k^3) \right],
\label{eq:bk}
\end{equation}
\noindent where the wave-number $k$ is in units of $h\,\mbox{Mpc}^{-1}$.
However, this approximation misses an important effect of non-linearities, the damping of the BAO.
The coupling between different Fourier modes washes out the acoustic oscillations, erasing the higher
harmonic peaks \citep{meiksin1999,eisenstein07,angulo05,angulo08,sanchez08,montesano10}.
In recent years, there has been substantial progress in the theoretical understanding of non-linear distortions
in the BAO signal, which can now be accurately modelled
\citep[see e.g.,][]{crocce06,Crocce08,matsubara08a,matsubara08b,taruya09},
and even partially corrected for \citep{eisenstein07b, seo10}.
As the bias factor in Eq.~(\ref{eq:biasdef}) is defined with respect to the extrapolated linear theory power spectrum,
this damping leads to small wiggles in $b(k)$ with an amplitude at the 2--4\% level, which can be better seen in the bottom
panel of Fig.~\ref{fig:bias362}.
The accuracy of the fitting of $b(k,\mbox{$V_{\rm max}$})$ for $\mbox{$V_{\rm max}$}>362~\mbox{km~s$^{-1}$}$ can be
improved by including extra terms in the expansion and by adding the
four main BAO peaks as follows
\begin{eqnarray}
b(k,\mbox{$V_{\rm max}$}) &=& b_0(\mbox{$V_{\rm max}$})\times \nonumber\\
&& \left[1+\log_{10}(1+4.0k^{1.8}+3.1k^3+1.0k^{4.5})\right]\times \nonumber \\
&&\prod_i\left[1-\alpha_i\exp\left(-\frac{(k-k_i)^2}{\sigma_i^2}\right)\right]\label{eq:bk2}.
\end{eqnarray}
\noindent Here each BAO peak is approximated as a small suppression of
the bias factor given by the last term of the equation, $k_i$ is the wave-number of
the peak and $\alpha_i\approx 0.01$--$0.05$ and $\sigma_i\approx
0.01$--$0.02$ are free parameters. The typical errors given by this
approximation are smaller than 2\% (see the top panel of
Fig.~\ref{fig:bias362}). The values of the parameters used in the
approximation can be seen in Table~\ref{tab:baos}.
Using Eq.~(\ref{eq:b0}) and the bias factor $b(k)$ for the velocity
cut $\mbox{$V_{\rm max}$}=362~\mbox{km~s$^{-1}$}$, we develop corrections to the bias factor for
different values of $\mbox{$V_{\rm max}$}$. In this way, we find the following set
of equations that yield an accuracy better than 4\% for the range
of velocities within $\mbox{$V_{\rm max}$}=180$--$370\,\mbox{km~s$^{-1}$}$:
\begin{eqnarray}
b(k,\mbox{$V_{\rm max}$}) & = & b_0(\mbox{$V_{\rm max}$})\times\nonumber \\
&& \left[1+\log_{10}(1+4.0k^{1.8}+3.1k^3+1.0k^{4.5})\right]\times\nonumber \\
&& \prod_i\left[1-\alpha_i\exp\left(-\frac{(k-k_i)^2}{\sigma_i^2}\right)\right]\times\label{eq:bk6} \\
&& \left[1-\beta_0\left(1-e^{-k^2/0.22^2}\right)+\beta_1k-\beta_2k^3\right]\nonumber
\end{eqnarray}
where the parameters $\beta_{0,1,2}$ depend only on \mbox{$V_{\rm max}$}:
\begin{eqnarray}
\beta_0 &=& \left(\frac{66.6~\mbox{km~s$^{-1}$}}{\mbox{$V_{\rm max}$}} \right)^3,\nonumber\\
\beta_1 &=& 2.18\times 10^{-2}\left[1-\left(\frac{205.8~\mbox{km~s$^{-1}$}}{\mbox{$V_{\rm max}$}} \right)^{103/14}\right],\label{eq:bk7}\\
\beta_2 &=& 1.64\times 10^{-2}\left[1-\left(\frac{266.5~\mbox{km~s$^{-1}$}}{\mbox{$V_{\rm max}$}} \right)^{1/6}\right].\nonumber
\end{eqnarray}
\begin{table}
\centering
\caption{Parameters for the approximation of the real-space bias factor given by Eq.~(\ref{eq:bk2}).}
\begin{tabular}{@{}lccc@{}}
\hline
BAO peak& $k\;(h\;{\rm Mpc}^{-1})$ &$\alpha_i$& $\sigma_i$ \\
\hline
1& 0.071 &0.010& 0.017 \\
2&0.130 &0.043& 0.017\\
3&0.191 &0.022& 0.017\\
4&0.251 &0.013& 0.012 \\
\hline
\end{tabular}
\label{tab:baos}
\end{table}
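A minimal sketch that evaluates Eq.~(\ref{eq:bk2}) with the parameters of
Table~\ref{tab:baos}:
\begin{verbatim}
import numpy as np

# BAO dip parameters (k_i, alpha_i, sigma_i) from Table (tab:baos)
BAO = [(0.071, 0.010, 0.017), (0.130, 0.043, 0.017),
       (0.191, 0.022, 0.017), (0.251, 0.013, 0.012)]

def bias_k(k, b0=2.01):
    # real-space bias factor for V_max > 362 km/s; k in h/Mpc
    k = np.asarray(k, dtype=float)
    smooth = 1.0 + np.log10(1.0 + 4.0 * k**1.8
                            + 3.1 * k**3 + 1.0 * k**4.5)
    wiggles = np.ones_like(k)
    for ki, ai, si in BAO:
        wiggles *= 1.0 - ai * np.exp(-((k - ki) / si)**2)
    return b0 * smooth * wiggles
\end{verbatim}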
\section{Conclusions}
\label{conclusions}
We presented an analysis of the clustering of $282,068$ galaxies in
the DR9 sample of BOSS data for a wide range of scales, ranging from
$\sim500\,\mbox{$h^{-1}$\,kpc}$ to $\sim90\,\mbox{$h^{-1}$\,Mpc}$. We separately studied the clustering
in the northern and southern hemispheres, as well as for the full sky
sample. We measured the two-dimensional, projected and redshift-space
correlation functions and compared the results with
those obtained from a large cosmological simulation with $1\,h^{-1}\,{\rm Gpc}$
on a side at a redshift of $z=0.53$. The cosmological parameters adopted in
the simulation are consistent with the latest WMAP7 results and several other
probes. Our simulation, also known as MultiDark, is able to resolve the relevant subhalo
masses needed to compare with the observed satellite population.
To bridge the gap between galaxies and dark matter haloes we use a
simple HAM technique applied to the BOSS-CMASS sample.
Our main results can be summarized as follows:
\begin{itemize}
\item
There is a 10--20\% asymmetry in the projected and redshift-space
correlation functions between the north and south subsamples at $\gtrsim20\,\mbox{$h^{-1}$\,Mpc}$ scales,
which is better seen in the case of the projected correlation function.
However, for both subsamples, the mean values agree with each other
within a $\sim1\sigma$ level of uncertainty.
\\
\item As compared with the first-semester BOSS results presented by
\citet{White2011}, we find a small increase in power in the projected correlation
function at scales smaller than $\sim1\,\mbox{$h^{-1}$\,Mpc}$ due to the improved treatment of fibre collisions
and new corrections for systematics. However, the correlation functions
(projected and redshift-space) decline by 10--20\% at $10$--$30\,\mbox{$h^{-1}$\,Mpc}$
scales in comparison with our HAM model. This is most noticeable for the north
subsample, which has about four times larger statistics than its southern
counterpart. The comparison with the south subsample yields more consistent
results with MultiDark at all scales, both in the projected and redshift-space correlations.\\
\item Our $N$-body predictions for the clustering of galaxies give
a fair representation of the measured clustering in the CMASS sample for
a wide range of scales. The more consistent results between the north
and south subsamples for the redshift-space correlation function show
a remarkable agreement with theory: the differences are
of the order of $\sim 2\%$ on scales ranging from $2\,\mbox{$h^{-1}$\,Mpc}$ up to $20\,\mbox{$h^{-1}$\,Mpc}$.
This result is more impressive when considering the fact that our
simple HAM scheme does not include any free parameter. At larger
distances, however, we find some deviations when comparing with
the north subsample. For scales in the range of $20$--$40\,\mbox{$h^{-1}$\,Mpc}$ the
theoretical redshift-space correlation function is above the observations by
$\sim 10\%$. Statistically, this difference is important -- e.g., it
represents a $\sim3\,\sigma$ deviation at $\sim30\,\mbox{$h^{-1}$\,Mpc}$. Future
data and a more sophisticated theoretical modeling may help to
clarify the situation.\\
\item The distribution of (sub)haloes as a function of
halo mass, as measured from our abundance-matched halo catalog, points
towards a BOSS-CMASS galaxy population inhabiting
haloes of mass $M\gtrsim10^{13}\,h^{-1}$ M$_{\sun}$, with
$\sim7\%$ of them being satellites orbiting centrals with
$M\gtrsim10^{14}\,h^{-1}$ M$_{\sun}$. We also derived values
for the HOD parameters of the sample using our simulation: $\log M_{\rm cut}=13.07\pm0.40$
and $\log M_{1}=14.25\pm0.17$.
\\
\item The scale-dependent galaxy bias of BOSS galaxies is likely to
be $b\simeq2$ at scales $\gtrsim10\,h^{-1}$ Mpc (see Eq.~(\ref{eq:bias_xi})).
Furthermore, using our simulation, we also compute a large-scale bias
(defined as the square root of the ratio between the power spectra of the
abundance-matched galaxy catalog and the extrapolated linear matter
density field; see Eq.~(\ref{eq:biasdef}))
and found that it depends on the galaxy maximum circular velocity
as $b(V_{\rm max})=1+ (V_{\rm max}/361\;{\mbox{km~s$^{-1}$}})^{4/3}$, or on the galaxy number density as
$b(n_{\rm g})=0.0377-0.57\log_{10}\left(n_{\rm g}/h^{3}\;{\rm Mpc}^{-3}\right)$.
These approximations can be used to compare observational results with predictions
for the cosmology adopted in our simulation.
\\
\item The large-scale galaxy bias, defined using Eq.~(\ref{eq:biasdef}), has $\sim2$--4\% dips
at the positions of BAO peaks in the spectrum of fluctuations that are due to shifts caused
by non-linear effects. In this case, we also provide very accurate fits of the bias as a
function of maximum circular velocity of galaxies that can also be used to recover the
non-linear galaxy power spectrum in terms of the extrapolated linear density field of matter.
\end{itemize}
\section*{Acknowledgments}
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions,
the National Science Foundation, and the U.S. Department of Energy.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the
SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven
National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the
German Participation Group, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA
Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute
for Astrophysics, New Mexico State University, New York University, Ohio State University,
Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group,
University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington,
and Yale University.
The MultiDark Database used in this paper and the web application providing online access to it were constructed
as part of the activities of the German Astrophysical Virtual Observatory as a result of the collaboration between
the Leibniz-Institute for Astrophysics Potsdam (AIP) and the Spanish MultiDark Consolider Project CSD2009-00064.
The Bolshoi and MultiDark simulations were run on the NASA's Pleiades supercomputer at the NASA Ames Research Center.
S.E.N. and F.P. acknowledges support from the Spanish MICINN's Consolider grant MultiDark CSD2009-00064.
S.E.N. also acknowledges support by the Deutsche Forschungsgemeinschaft under the grant MU1020 16-1.
F.P. also thanks the support of the MICINN Spanish grant AYA2010-21231-C02-01 and the Campus of
International Excellence UAM$+$CSIC. A.K. acknowledges support from the NSF under a grant to NMSU.
Lie algebras are vector spaces
equipped with an antisymmetric bracket satisfying the Jacobi identity. Many interesting physical theories can be cast in this language, but others require a suitable generalization of this program. This is the case of Double Field Theory (DFT) \footnote{For reviews check \cite{ReviewDFT} and references therein.} \cite{Siegel} \cite{DFT}, a proposal to incorporate T-duality as a symmetry of a field theory, since it contains a non-trivial Jacobiator and therefore satisfies the Jacobi identity up to homotopy. For this reason its algebraic structure requires a set of brackets defined on a graded vector space satisfying a generalized notion of the Jacobi identities. Such structures are known as $L_{\infty}$ algebras and were initially described in the context of closed string field theory \cite{Zwiebach:1992ie} and, in the mathematics literature, in topology \cite{Topology}.
One way of organizing the algebraic structure of DFT in an $L_\infty$ structure stems from noticing that the Courant algebroids can be cast in this language \cite{Roytenberg:1998vn}, as well as their duality covariant counterparts \cite{Hull:2009zb} \cite{Deser:2016qkw}. Moreover, when dynamics is taken into account the full DFT, written in the generalized metric approach, also fits into an $L_{\infty}$ structure, as described in \cite{Hohm:2017pnh} \cite{Otros}.
In this work we are interested in Gauged Double Field Theory (GDFT) \cite{GDFT} with $O(D,D+n)$ as global duality group, where $n$ is the dimension of a gauge group. This formalism is a generalization of DFT and requires a frame \cite{frame} or flux formalism \cite{flux} in order to introduce the generalized version of the structure constants $f_{M N P}$. Additionally, the generalized Lie derivative acting on a generic vector $V_{M}$ with $M=0,\dots,2D-1+n$ is consistently deformed,
\begin{eqnarray}
\widehat{\cal L}_\xi V_M = {\cal L}_{\xi} V_{M} + f_{M N P} \xi^{N} V^{P} \, ,
\end{eqnarray}
and the closure is given by a deformed bracket
\begin{eqnarray}
[\xi_1, \xi_{2} ]^{ M}_{(C_{f})}=2\xi^{ P}_{[1}\partial_{ P}\xi_{2]}^{ M}-\xi_{[1}^{ N}\partial^{ M}\xi_{2] N}+f_{{ PQ}}{}^{ M} \xi_{1}^{ P} \xi_2^{ Q}\, ,
\end{eqnarray}
which reduces to the C-bracket when the structure constants vanish. As expected, the Jacobiator is also deformed
\begin{equation}
J(\xi_{1},\xi_{2},\xi_3)^{M} = \frac{3}{2}\partial^{M}(\xi_{[1}^{N} \xi_{2}^{P} \partial_{N} \xi_{3]P} + \frac13 f_{N P Q} \xi_{1}^{N} \xi_{2}^{P} \xi_{3}^{Q}) \, .
\end{equation}
The inclusion of the generalized frame/fluxes introduces a double Lorentz symmetry given by $O(D-1,1)_{L} \times O(1,D-1+n)_{R}$. From a $L_{\infty}$ point of view all these new ingredients enrich the algebraic structure of DFT or, in other words, define the algebraic structure of GDFT.
The products related to the dynamics of the theory can be cast in a closed form if we restrict our study to a family of theories given by the generalized Kerr-Schild ansatz. This ansatz was introduced in the context of DFT in \cite{KL}, extended to heterotic DFT in \cite{KL2} and \cite{alpha}, and further explored in the context of duality covariant theories in \cite{GKSA}. In this ansatz the perturbation of the generalized exact frame is given by
\begin{eqnarray}
{\cal E}_{M}{}^{\overline A} = E_{M}{}^{\overline A} + \frac12 \kappa E_{M}{}^{\underline B} K_{\underline B} {\bar K}^{\overline A} \, , \nonumber \\ {\cal E}_{M}{}^{\underline A} = E_{M}{}^{\underline A} - \frac12 \kappa E_{M}{}^{\overline B} {\bar K}_{\overline B} K^{\underline A}\, ,
\label{introan}
\end{eqnarray}
where $K_{\underline A}$ and $\bar K_{\overline A}$ are a pair of generalized null vectors,
\begin{eqnarray}
K_{\underline A} K^{\underline A} & = & \bar K_{\overline A} \bar K^{\overline A} = 0 \, ,
\end{eqnarray}
and $\kappa$ in (\ref{introan}) is an order parameter. We use $\underline A$, $\overline A$ as the flat left and right projections of the $M,N$ indices. The vectors $K_{\underline A}$ and $\bar K_{\overline A}$ satisfy the equivalent of a geodesic condition in the context of DFT,
\begin{eqnarray}
K^{\underline A} D_{\underline A} {\bar K}^{\overline B} =
{\bar K}^{\overline A} D_{\overline A} {K}^{\underline B} = 0 \, ,
\end{eqnarray}
where $D_{A}$ is a generalized covariant derivative. The ansatz (\ref{introan}) plus a linear expansion for the generalized dilaton,
\begin{eqnarray}
d = d_{o} + \kappa f
\end{eqnarray}
with $K^{\underline A} E_{\underline A} f = K^{\overline A} E_{\overline A} f = 0$ provide a family of exact solutions in a perturbative framework. In this sense, all the non-trivial products of the $L_{\infty}$ structure of GDFT can be explicitly/exactly computed, as we show. Considering an $L^{\rm{gauge+fields}}_{\infty}$ structure, the theory can be cast in an $L_{3}$ algebra, where new brackets related to the generalized structure constants and the double Lorentz transformations are computed. When one also considers the equations of motion of the fields, the algebraic structure is exactly promoted to an $L_{4}$ algebra.
This work is organized as follows: in Section \ref{GKSAA} we introduce GDFT in the generalized metric/flux formulation. Here we present the generalized Kerr-Schild ansatz (GKSA) for flat backgrounds. In Section \ref{Linf} we start by reviewing the way to obtain the products for a generic $L_{\infty}$ algebra. Then we cast the algebraic structure of both DFT and GDFT when the GKSA is considered. The present computations show the algebraic structure of the fundamental charged heterotic string and (Bosonic) Enhanced Double Field Theory, as we show in Section \ref{App}. Finally in Section \ref{Con} we summarize our work.
\section{The generalized Kerr-Schild ansatz}
\label{GKSAA}
\subsection{The DFT approach in metric formalism}
The GKSA is given by an exact and linear perturbation of the generalized background metric $H_{M N}$ ($M,N=0, \dots, 2D-1$) and an exact perturbation of the generalized background dilaton $d_{o}$. In this work we will consider linear perturbations in both fields.
The perturbation of the generalized background metric $H_{M N}$ is given by a pair of generalized vectors, $K_{M}$ and $\bar{K}_{M}$, and an order parameter $\kappa$, such that
\begin{eqnarray}
{\cal H}_{MN} = H_{MN} + \kappa (\bar{K}_{M} K_{N} + {K}_{M} \bar{K}_{N} ) \, ,
\label{DFTKS}
\end{eqnarray}
while the vectors satisfy
\begin{eqnarray}
\bar{K}_{M} & = & \frac12({\eta}_{M N} + {H}_{M N}) \bar{K}^{N} = \bar{P}_{M N} \bar{K}^{N} \, , \nonumber \\
K_{M} & = & \frac12({\eta}_{M N} - {H}_{M N}) K^{N} = {P}_{M N} {K}^{N} \, ,
\end{eqnarray}
and the generalized null conditions,
\begin{eqnarray}
\eta^{MN} \bar{K}_{M} \bar{K}_{N} & = & \eta^{MN} K_{M} K_{N} = \eta^{MN} \bar{K}_{M} K_{N} = 0 \, ,
\label{nulldft}
\end{eqnarray}
or, equivalently,
\begin{eqnarray}
H^{MN} \bar{K}_{M} \bar{K}_{N} & = & H^{MN} K_{M} K_{N} = H^{MN} \bar{K}_{M} K_{N} = 0 \, .
\label{nullHdft}
\end{eqnarray}
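Although the perturbation is linear in $\kappa$, it is in fact exact: using $H_{M}{}^{P}H_{P}{}^{N}=\delta_{M}{}^{N}$ together with $H_{M}{}^{N}\bar{K}_{N}=\bar{K}_{M}$ and $H_{M}{}^{N}K_{N}=-K_{M}$, which follow from the projection conditions above, one can verify that ${\cal H}_{M N}$ remains an $O(D,D)$ element to all orders in $\kappa$,
\begin{eqnarray}
{\cal H}_{M}{}^{P} {\cal H}_{P}{}^{N} & = & \delta_{M}{}^{N} + \kappa \left( \bar{K}_{M} K^{N} - K_{M} \bar{K}^{N} - \bar{K}_{M} K^{N} + K_{M} \bar{K}^{N} \right) + {\cal O}(\kappa^{2}) = \delta_{M}{}^{N} \, ,
\end{eqnarray}
since the ${\cal O}(\kappa)$ contributions cancel pairwise and every ${\cal O}(\kappa^{2})$ term contains one of the null contractions (\ref{nulldft}).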
The generalized background dilaton $d_{o}$ is perturbed in a similar way\footnote{We work with a linear perturbation for simplicity, \textit{i.e.}, $f$ does not depend on $\kappa$. In the general case $f = \sum_{n=0}^{\infty}\kappa^{n}f_{n}$.},
\begin{equation}
d = d_{o} + \kappa f\, .
\label{dilaton}
\end{equation}
The perturbations of the GKSA satisfy the following extra conditions
\begin{eqnarray}
{\bar K}^{P} \partial_{P} K^{M} + K_{P} \partial^{M}{\bar K}^{P} - K^{P} \partial_{P}{\bar K}^{M} & = & 0 \nonumber \, , \\
K^{M} \partial_{M}f = {\bar K}^{M} \partial_{M}f & = & 0 \, ,
\label{geodesic2}
\end{eqnarray}
which play the role of generalized geodesic equations.
In addition to the global $O(D,D)$ symmetry, the action principle of DFT is invariant under generalized diffeomorphisms generated infinitesimally by $\xi^{M}$ through the generalized Lie derivative. Acting on an arbitrary vector it reads,
\begin{eqnarray}
{\cal L}_\xi V_M = \xi^{N} \partial_N V_M + (\partial_M \xi^N - \partial^N \xi_{M}) V_N + \omega (\partial_{N} \xi^{N})V_{M} \, ,
\label{glie}
\end{eqnarray}
where $\omega$ is a weight constant. The generalized metric ${\cal H}_{M N}$ and the generalized background metric $H_{M N}$ are tensors with $\omega=0$ with respect to generalized diffeomorphisms, and $\omega(e^{-2d})= \omega(e^{-2d_{o}})=1$. It is straightforward to check that conditions (\ref{geodesic2}) are covariant under generalized diffeomorphism transformations.
The Lagrangian of DFT is defined as,
\begin{eqnarray}
{\cal L}_{DFT} & = & e^{-2d}(\frac18 {\cal H}^{MN} \partial_{M}{\cal H}^{KL}\partial_{N}{\cal H}_{KL} - \frac12 {\cal H}^{MN}\partial_{N}{\cal H}^{KL}\partial_{L}{\cal H}_{MK} \nonumber \\ && + 4 {\cal H}^{MN} \partial_{M}d\partial_{N}d - 2 \partial_{M}{\cal H}^{MN} \partial_{N}d) \, ,
\label{scalarDFT}
\end{eqnarray}
while the equations of motion can be written in terms of generalized curvatures,
\begin{eqnarray}
{\cal R}_{\underline P \overline Q} = {\cal P}_{P}{}^{M} \bar {\cal P}_{Q}{}^{N} \Big(\frac 18 \partial_M {\cal H}^{KL} \partial_N {\cal H}_{KL} -
\frac 1 4 (\partial_L - 2 (\partial_L d)) ({\cal H}^{LK} \partial_K {\cal
H}_{MN}) + 2 \partial_M \partial_N d \nonumber \\
- \frac 1 2 \partial_{(M|} {\cal H}^{KL} \partial_L {\cal H}_{|N)K} + \frac
1 2 (\partial_L - 2 (\partial_L d)) ({\cal H}^{KL} \partial_{(M} {\cal H}_{N)K}
+ {\cal H}^K{}_{(M|}\partial_K {\cal H}^L{}_{|N)})\Big) = 0 \, ,
\label{Ricci}
\end{eqnarray}
and
\begin{eqnarray}
{\cal R} & = & \frac18 {\cal H}^{MN} \partial_{M}{\cal H}^{KL}\partial_{N}{\cal H}_{KL} - \frac12 {\cal H}^{MN}\partial_{N}{\cal H}^{KL}\partial_{L}{\cal H}_{MK} + 4 {\cal H}^{MN} \partial_{M}\partial_{N}d \nonumber \\ && + 4 \partial_{M}{\cal H}^{MN} \partial_{N}d - 4 {\cal H}^{MN} \partial_{M}d \partial_{N}d - \partial_{M} \partial_{N} {\cal H}^{MN} = 0 \, .
\label{scalar}
\end{eqnarray}
\subsection{Extension to GDFT in flux formalism}
The ansatz (\ref{DFTKS}) and (\ref{dilaton}) are powerful tools to work perturbatively, since the generalized null and geodesic conditions provide finite contributions to the action principle and the equations of motion. Interestingly enough, (\ref{DFTKS}) admits an extension to the flux formulation of DFT, which is a mandatory step to consider a GDFT. In this case we consider perturbations of the form,
\begin{eqnarray}
{\cal E}_{M}{}^{\overline A} = E_{M}{}^{\overline A} + \frac12 \kappa E_{M}{}^{\underline B} K_{\underline B} {\bar K}^{\overline A} \, , \nonumber \\ {\cal E}_{M}{}^{\underline A} = E_{M}{}^{\underline A} - \frac12 \kappa E_{M}{}^{\overline B} {\bar K}_{\overline B} K^{\underline A}\, ,
\label{GKSA}
\end{eqnarray}
where $K_{\underline A} = {\cal E}^{M}{}_{\underline A} K_{M}={E}^{M}{}_{\underline A} K_{M}$ and $\bar{K}_{\overline A} = {\cal E}^{M}{}_{\overline A} \bar{K}_{M}=E^{M}{}_{\overline A} \bar{K}_{M}$ and ${\cal E}_{MA}$ is an $O(D,D+n)/O(D-1,1)_{L} \times O(1,D-1+n)_{R}$ frame. Here $\underline A= 0, \dots, D-1$ and $\overline A=0, \dots, D-1+n$ are $O(D-1,1)_{L}$ and $O(1,D-1+n)_{R}$ indices, respectively. In agreement with the previous section, we are going to consider a constant generalized frame background, \textit{i.e.}, $\partial_{M}E_{N A}=0$, and a constant generalized dilaton background, $\partial_{M}{d_{o}}=0$.
Defining $\eta_{A B}$ and $H_{A B}$ as the invariant metrics of $O(D-1,1)_{L} \times O(1,D-1+n)_{R}$, we have,
\begin{eqnarray}
\eta_{AB} = {\cal E}_{M A}\eta^{MN} {\cal E}_{N B} = E_{M A}\eta^{MN} E_{N B} \, , \\
{H}_{AB} = {\cal E}_{MA} {\cal H}^{MN} {\cal E}_{N B} = E_{MA} {H}^{MN}E_{N B} \, .
\end{eqnarray}
The generalized fluxes take the form
\begin{eqnarray}
{\cal F}_{ABC} & = & 3 {\cal E}_{[A}({\cal E}^{M}{}_{B}){\cal E}_{M C]} + \sqrt{2} f_{M N P} {\cal E}^{M}{}_{A} {\cal E}^{N}{}_{B} {\cal E}^{P}{}_{C} \, , \nonumber \\ {\cal F}_{A} & = & \sqrt{2}e^{2d}\partial_{M}\left({\cal E}^{M}{}_{A}e^{-2d}\right) \, ,
\end{eqnarray}
where $f_{M N P}$ plays the role of generalized structure constants and therefore satisfies
\begin{eqnarray}
f_{MNP}=f_{[MNP]}\, , \qquad f_{[MN}{}^{ R}f_{{P}]R}{}^{Q}=0\, , \label{consf}
\end{eqnarray}
and
\begin{eqnarray}
f_{ {MN}}{}^{ P}\partial_{P}\cdots =0 \, .
\label{fcond}
\end{eqnarray}
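As a quick illustration of these constraints, consider a toy example (not tied to any particular GDFT): taking $f_{MNP}$ to be the $su(2)$ epsilon tensor embedded in a six-dimensional doubled space, with indices raised by the identity purely for simplicity, a short numerical sketch verifies total antisymmetry and the Jacobi identity,
\begin{verbatim}
import itertools
import numpy as np

dim = 6
f = np.zeros((dim, dim, dim))
for m, n, p in itertools.permutations(range(3)):
    # epsilon_{mnp} = sign of the permutation (m, n, p)
    f[m, n, p] = np.linalg.det(np.eye(3)[[m, n, p]])

# f_{MNP} = f_{[MNP]}: check every permutation of the three indices
for sigma in itertools.permutations(range(3)):
    sign = np.linalg.det(np.eye(3)[list(sigma)])
    assert np.allclose(np.transpose(f, sigma), sign * f)

# f_{[MN}^R f_{P]R}^Q = 0: as f is antisymmetric in its first two
# indices, the cyclic sum over (M, N, P) is enough
jac = np.einsum('mnr,prq->mnpq', f, f)
cyc = jac + np.einsum('mnpq->npmq', jac) + np.einsum('mnpq->pmnq', jac)
assert np.allclose(cyc, 0)
print('constraints (consf) satisfied')
\end{verbatim}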
The generalized Lie derivative is deformed as,
\begin{eqnarray}
\widehat{\cal L}_\xi V_M = {\cal L}_{\xi} V_{M} + f_{M N P} \xi^{N} V^{P} \, ,
\label{glie2}
\end{eqnarray}
and, in addition, the theory is invariant under $O(D-1,1)_{L} \times O(1,D-1+n)_{R}$ or double Lorentz transformations,
\begin{eqnarray}
\delta_{\Gamma} V^{A} = V^{B} \Gamma_{B}{}^{A} \, ,
\end{eqnarray}
where $V^{A}$ is a generic vector and $\Gamma_{A B} = - \Gamma_{B A}$ an arbitrary parameter. The previous transformations close with the following parameters
\begin{eqnarray}\label{par0}
\xi^{ M}_{12} & = & [\xi_1, \xi_{2} ]^{ M}_{(C_f)} \, ,\\
\Gamma_{12 { A} { B}} & = & 2 \xi_{[1}^{ P} \partial_{ P} \Gamma_{2] { A} { B}} - 2 \Gamma_{[1 A}{}^{ C} \Gamma_{2] {C B}}
\, , \label{Lorentzbrack}
\end{eqnarray}
where the $C_f$-bracket is a deformation of the $C$-bracket given by
\begin{eqnarray}
[\xi_1, \xi_{2} ]^{ M}_{(C_{f})}=2\xi^{ P}_{[1}\partial_{ P}\xi_{2]}^{ M}-\xi_{[1}^{ N}\partial^{ M}\xi_{2] N}+f_{{ PQ}}{}^{ M} \xi_{1}^{ P} \xi_2^{ Q}\, , \label{Cbrack}
\end{eqnarray}
where (\ref{fcond}) is required for consistency.
A flat covariant derivative acting on a generic vector is given by
\begin{eqnarray}
{\cal D}_{A} V_{B} = {\cal E}_{A} V_{B} + {\cal W}_{AB}{}^{C} V_{C} \, ,
\label{covder}
\end{eqnarray}
where ${\cal E}_{A} = \sqrt{2} {\cal E}^{M}{}_{A} \partial_{M}$. The covariant derivative as well as the flat derivative can also be defined for background fields in a similar fashion. In (\ref{covder}) ${\cal W}_{AB}{}^{C}$ is the generalized spin connection, which is partially identified with the generalized fluxes according to
\begin{eqnarray}
{\cal W}_{[ABC]} & = & -\frac13{\cal F}_{ABC}\, , \\
{\cal W}_{BA}{}^{B} & = & - {\cal F}_{A}\, ,
\end{eqnarray}
in order to have, on the one hand, frame compatibility under covariant differentiation and, on the other, partial integration with respect to the dilaton density, \textit{i.e.},
\begin{equation}
\int e^{-2d}V {\cal D}_{ A}V^{ A}=-\int e^{-2d}V^{ A} {\cal D}_{ A}V\, .
\end{equation}
Consider the flat projectors $P_{A B}=\frac12 \eta_{A B} - \frac12 H_{A B}$ and $\bar P_{A B}=\frac12 \eta_{A B} + \frac12 H_{A B}$, together with the notation
\begin{eqnarray}
V_{A}=V_{\underline A} + V_{\overline A}= P_{\underline A}{}^{\underline B} V_{\underline B} + \bar P_{\overline A}{}^{\overline B} V_{\overline B} \, ,
\end{eqnarray}
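Since $H_{A}{}^{C} H_{C}{}^{B} = \delta_{A}{}^{B}$, these operators obey the standard projector algebra,
\begin{eqnarray}
P_{A}{}^{C} P_{C}{}^{B} = P_{A}{}^{B} \, , \quad \bar P_{A}{}^{C} \bar P_{C}{}^{B} = \bar P_{A}{}^{B} \, , \quad P_{A}{}^{C} \bar P_{C}{}^{B} = 0 \, , \quad P_{A}{}^{B} + \bar P_{A}{}^{B} = \delta_{A}{}^{B} \, ,
\end{eqnarray}
so the split of a generic vector into its left and right projections is unambiguous.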
With these definitions, the generalized curvatures (\ref{Ricci}) and (\ref{scalar}) are rewritten as
\begin{eqnarray}
\label{GRicci_scalar}
{\cal R} & = & 2{\cal E}_{\underline{A}}{\cal F}^{\underline{A}} + {\cal F}_{\underline{A}}{\cal F}^{\underline{A}} - \frac16 {\cal F}_{\underline{ABC}} {\cal F}^{\underline{ABC}} - \frac12{\cal F}_{\overline{A}\underline{BC}}{\cal F}^{\overline{A}\underline{BC}} \, , \\
{\cal R}_{\overline{A}\underline{B}} & = & {\cal E}_{\overline{A}}{\cal F}_{\underline{B}} - {\cal E}_{\underline{C}}{\cal F}_{\overline{A}\underline{B}}{}^{\underline{C}} + {\cal F}_{\underline{C}\overline{DA}}{\cal F}^{\overline{D}}{}_{\underline{B}}{}^{\underline{C}} - {\cal F}_{\underline{C}}{\cal F}_{\overline{A}\underline{B}}{}^{\underline{C}} \, ,
\label{GRicci_tensor}
\end{eqnarray}
while the relevant projections of the fluxes are written in terms of $K_{\underline A}$ and $\bar K_{\overline A}$ in the following way,
{\footnotesize
\begin{eqnarray}
{\cal F}_{\underline{ABC}} & = & \sqrt{2} f_{M N P} \Bigg( { E}^{M}{}_{{\underline A}} { E}^{N}{}_{{\underline B}} {E}^{P}{}_{\underline C} -\frac{1}{2} \kappa K_{{\underline A}} {\overline K}_{{\overline B}} E^{M{\overline B}} E^{N}{}_{{\underline B}} E^{P}{}_{{\underline C}} -\kappa E^{M}{}_{{\underline A}}K_{[{\underline B}|} {\overline K}_{{\overline C}} E^{N{\overline C}} E^{P}{}_{|{\underline C}]} \Bigg), \nonumber \\
{\cal F}_{\underline{A}\overline{BC}} & = & \kappa\left(\bar K{}_{[\overline{C}}D{}_{\overline{B}]}K_{\underline{A}} + K_{\underline{A}}E_{[\overline{B}}\bar K_{\overline{C}]} \right) + \sqrt{2} f_{M N P} \Bigg( { E}^{M}{}_{{\underline A}} { E}^{N}{}_{{\overline B}} {E}^{P}{}_{\overline C} - \frac12 \kappa K_{\underline A} \bar K_{\overline D} E^{M \overline D} E^{N}{}_{\overline B} E^{P}{}_{\overline C} \Bigg)\, , \nonumber \\
{\cal F}_{\overline{A}\underline{BC}} & = & - \kappa\left(K_{[\underline{C}}D_{\underline{B}]}\bar K_{\overline{A}} + \bar K_{\overline{A}}E_{[\underline{B}}K_{\underline{C}]} \right) + \sqrt{2} f_{M N P} \Bigg( { E}^{M}{}_{{\overline A}} { E}^{N}{}_{{\underline B}} {E}^{P}{}_{\underline C} + \frac12 \kappa \bar K_{\overline A} K_{\underline D} E^{M \underline D} E^{N}{}_{\underline B} E^{P}{}_{\underline C} \Bigg) \, , \nonumber \\
{\cal F}^{\underline{A}} & = & - \frac{1}{2}\kappa\left( (E_{\overline B}\bar K^{\overline B})K^{\underline A} + (E_{\overline B} K^{\underline A}) \bar K^{\overline B} + 4E^{\underline{A}}f\right)\, .
\label{constrained_fluxes}
\end{eqnarray}}
The flat version of the null conditions reads
\begin{eqnarray}
K_{\underline A} K^{\underline A} & = & \bar K_{\overline A} \bar K^{\overline A} = 0 \, , \label{flatnull}
\end{eqnarray}
and the flat geodesic conditions now contain a contribution related to the generalized structure constants,
\begin{eqnarray}
K^{\underline A} E_{\underline A} {\bar K}^{\overline C} + \sqrt{2} {K}^{\underline A} \bar K^{\overline B} f_{M P Q} E^{M}{}_{\underline A} E^{P}{}_{\overline B} E^{Q \overline C} & = & 0 \, , \\
{\bar K}^{\overline A} E_{\overline A} {K}^{\underline C} + \sqrt{2} {\bar K}^{\overline A} K^{\underline B} f_{M P Q} E^{M}{}_{\overline A} E^{P}{}_{\underline B} E^{Q \underline C} & = & 0 \, , \\
K^{\underline A} E_{\underline A} f = {\bar K}^{\overline A} E_{\overline A} f & = & 0 \, .
\label{flatgeo}
\end{eqnarray}
\section{$L_{\infty}$ algebras}
\label{Linf}
In this section we start by reviewing how to fit DFT in an $L_{\infty}$ algebra and then we show the extension to GDFT. We always consider the GKSA in order to obtain closed expressions when dynamics is taken into account, and we dedicate the next section to discussing the family of theories that can be described within this approach.
\subsection{Basics}
\label{Basics}
Let us consider a graded vector space $X$ which is the direct sum of vector spaces $X_n$, each of which has degree $n$
\begin{equation}
X = \bigoplus_{n} X_n \,, \quad n \in \mathbb{Z} \ .
\end{equation}
We will denote by $x$ an element of $X$ with definite degree, \textit{i.e.}, $x\in X_p$ for some fixed $p$.
We consider multilinear products $\ell_k$
\begin{equation}
\ell_k: X^{\otimes k} \rightarrow X \ ,
\end{equation}
with degree given by
\begin{equation}
\hbox{deg}(\ell_k(x_1,x_2,...,x_k))= k-2 + \sum_{i=1}^k \hbox{deg} (x_i) \ .
\end{equation}
For a permutation $\sigma$ of $k$ labels we have
\begin{equation}
\ell_k ( x_{\sigma(1)} , \ldots , x_{\sigma(k)} ) \ = \ (-1)^\sigma \epsilon(\sigma;x ) \,
\ell_k (x_1 , \ldots \,, x_k) \ .
\end{equation}
The $(-1)^\sigma$ factor gives a plus or minus sign if the permutation is even or odd, respectively. The $\epsilon(\sigma;x )$ factor is the Koszul sign. For a graded commutative algebra $\Lambda (x_1, x_2, \cdots )$ with
\begin{equation}
x_i \wedge x_j \ = \ (-1)^{{\rm deg}(x_i) {\rm deg}(x_j)} \, x_j \wedge x_i \,, \quad \forall i, j\ ,
\end{equation}
the Koszul sign for a general permutation is given by
\begin{equation}
x_1\wedge \ldots \wedge x_k = \epsilon (\sigma; x) \ x_{\sigma(1)} \wedge \ldots \wedge \, x_{\sigma(k)} \ .
\end{equation}
It is convenient to abuse notation in the following way
\begin{equation}
\ (-1)^{{\rm deg}(x_i) {\rm deg}(x_j)}\equiv(-1)^{x_i x_j}\ .
\end{equation}
The $L_\infty$ relations are labeled by a positive integer $n$ given by the number of inputs. Explicitly they are
\begin{equation}
\label{main-Linty-identity}
\sum_{i+j= n+1} (-1)^{i(j-1)} \sum_\sigma (-1)^\sigma \epsilon (\sigma; x) \, \ell_j \, \bigl( \, \ell_i ( x_{\sigma(1)} \,, \, \ldots\,, x_{\sigma(i)} ) \,, \, x_{\sigma(i+1)}, \, \ldots \, x_{\sigma (n)} \bigr) \ = \ 0\ .
\end{equation}
The sum over $\sigma$ is a sum over ``unshuffles''; it includes only the terms which satisfy
\begin{equation}
\sigma(1) < \, \cdots \, < \, \sigma(i) \,, \qquad
\sigma(i+1) < \, \cdots \, < \, \sigma(n) \ .
\end{equation}
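To make this sum concrete, one can enumerate the $(i,\,n-i)$ unshuffles and the associated signs directly. The following sketch (degrees are plain integers; the Koszul sign is accumulated over the inversions of $\sigma$, one factor per swap) illustrates the bookkeeping,
\begin{verbatim}
from itertools import combinations

def unshuffles(n, i):
    labels = list(range(n))
    for first in combinations(labels, i):
        yield list(first) + [l for l in labels if l not in first]

def signs(sigma, degrees):
    perm_sign, koszul = 1, 1
    s = list(sigma)
    for a in range(len(s)):        # bubble sort: one factor per inversion
        for b in range(len(s) - 1 - a):
            if s[b] > s[b + 1]:
                perm_sign *= -1
                koszul *= (-1) ** ((degrees[s[b]] * degrees[s[b + 1]]) % 2)
                s[b], s[b + 1] = s[b + 1], s[b]
    return perm_sign, koszul

# the three (2,1) unshuffles of x_1, x_2, x_3 with degrees 0, 0, -1
for sigma in unshuffles(3, 2):
    print(sigma, signs(sigma, degrees=[0, 0, -1]))
\end{verbatim}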
It is common to write these relations as
\begin{equation}
\label{main-Linty-identity-schem}
\sum_{i+j= n+1} (-1)^{i(j-1)} \ell_j \, \ell_i \ = \ 0\ ,
\end{equation}
such that
\begin{eqnarray}
n = 1 \ \ \ \ \ \ \ \ 0 &=& \ell_1 \ell_1 \\
n = 2 \ \ \ \ \ \ \ \ 0 &=& \ell_1 \ell_2 - \ell_2 \ell_1 \\
n= 3 \ \ \ \ \ \ \ \ 0 &=& \ell_1 \ell_3 + \ell_2 \ell_2 + \ell_3 \ell_1 \\
n = 4 \ \ \ \ \ \ \ \ 0 &=& \ell_1 \ell_4 - \ell_2 \ell_3 + \ell_3 \ell_2 - \ell_4 \ell_1 \ , \ \dots
\end{eqnarray}
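The pattern of signs follows directly from the prefactor $(-1)^{i(j-1)}$ and can be reproduced with a short loop,
\begin{verbatim}
# print the schematic L-infinity relations, one per number of inputs n
for n in range(1, 5):
    terms = []
    for i in range(n, 0, -1):
        j = n + 1 - i
        sign = '+' if (-1) ** (i * (j - 1)) > 0 else '-'
        terms.append('{} l{} l{}'.format(sign, j, i))
    print('n =', n, ':', ' '.join(terms))
\end{verbatim}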
For instance, the $n=3$ case is given by
\begin{eqnarray}
0 & = & \ell_2(\ell_2(x_1,x_2),x_3) + (-1)^{(x_1+ x_2) x_3}\ell_2(\ell_2(x_3,x_1),x_2)
+(-1)^{(x_2+ x_3) x_1 }\ell_2(\ell_2(x_2,x_3),x_1) \nonumber \\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \ell_1(\ell_3 (x_1,x_2, x_3)) + \ell_3(\ell_1 (x_1) ,x_2, x_3)
+ (-1)^{x_1} \ell_3( x_1 ,\ell_1(x_2), x_3)
+ (-1)^{x_1+ x_2} \ell_3( x_1 ,x_2, \ell_1(x_3)) \ .
\label{n=3}
\end{eqnarray}
One must assign a given degree $p$ to gauge parameters, fields, EOM's, etc., and so specify to what vector subspace $X_p$ they belong. In this work we consider that the space of degree two contains the constants ($c$), the space of degree one contains
functions ($\chi$), the space of degree zero contains the gauge parameters ($\zeta$), the space of degree minus one contains the fields ($\Psi$) and, finally, the space of degree minus two the dynamics (${\cal F}$). In general the products can be read from the symmetries and dynamics of a given field theory. The symmetry transformations define the brackets $\ell_{n+1}(\zeta, \Psi^n)$ as follows
\begin{equation}
\delta_{\xi}\Psi =\sum_{n\ge 0} \frac{1}{n!}
(-1)^{{n(n-1)}/{2}}\,
\ell_{n+1}(\xi, \Psi^n)
\ , \label{gaugel}
\end{equation}
where $
\Psi^k = \underbrace{\Psi,...,\Psi}_{k\;\text{times}}$. The equations of motion define the $\ell_n (\Psi^n)$ brackets as follows
\begin{equation}
{\cal F}(\Psi) = \sum_{n=1}^\infty
\frac{(-1)^{n(n-1)/ 2}}{n!} \ell_n(\Psi^n) \, .
\label{eoml}
\end{equation}
Both (\ref{gaugel}) and (\ref{eoml}) are fundamental relations that can be used to read non-trivial products, and then extra products can appear upon checking the $L_{\infty}$ relations (\ref{main-Linty-identity}).
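For instance, expanded to the orders needed below, these relations read
\begin{eqnarray}
\delta_{\xi}\Psi & = & \ell_{1}(\xi) + \ell_{2}(\xi,\Psi) - \frac12 \ell_{3}(\xi,\Psi^{2}) - \frac{1}{3!} \ell_{4}(\xi,\Psi^{3}) + \dots \, , \nonumber \\
{\cal F}(\Psi) & = & \ell_{1}(\Psi) - \frac12 \ell_{2}(\Psi^{2}) - \frac{1}{3!} \ell_{3}(\Psi^{3}) + \frac{1}{4!} \ell_{4}(\Psi^{4}) + \dots \, ,
\end{eqnarray}
which is the form used to read off the products in the following subsections.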
\subsection{GKSA-DFT as an $L_{3}$ algebra}
\label{GKSA-DFT}
Here we follow the construction presented in \cite{Hohm:2017pnh}. In that work the authors show that when the arguments of $\ell_2$ are the DFT gauge parameters, this product is related to the $C$-bracket. Moreover, the first line in (\ref{n=3}) coincides with the Jacobiator and the last line characterizes the non-trivial Jacobiator of DFT given by
\begin{equation}
J(\xi_{1},\xi_{2},\xi_3)^{M} = \frac{3}{2}\partial^{M}(\xi_{[1}^{N} \xi_{2}^{P} \partial_{N} \xi_{3]P}) = N^M \, .
\end{equation}
Considering the following relation derived from (\ref{gaugel}), (\ref{eoml}) and (\ref{main-Linty-identity}),
\begin{eqnarray}
\label{commurel}
[\delta_{\zeta_1},\delta_{\zeta_2}] \Psi=\delta_{-\mathbf C(\zeta_1,\zeta_2)} \Psi \, ,
\end{eqnarray}
with $\mathbf C(\zeta_1,\zeta_2) \equiv \ell_{2}(\zeta_1,\zeta_2)$,
the non-trivial products are
\begin{eqnarray}
\ell_{1}(\chi) = \partial \chi \in X_{0}, \\
\ell_{1}(c) = \iota c \in X_{1},\\
\ell_{2}(\xi_{1},\xi_{2}) = \big[ \xi_{1},\xi_{2} \big]_{C} \in X_{0},\\
\ell_{2}(\xi , \chi) = \frac{1}{2} \xi^{K}\partial_{K}\chi \in X_{1},\\
\ell_{3}(\xi_{1},\xi_{2},\xi_{3}) = - N(\xi_{1},\xi_{2},\xi_{3}) \in X_{1}.
\end{eqnarray}
On the other hand, considering
\begin{eqnarray}
\delta_{\xi} {\cal H}_{M N} & = & \xi^{P} \partial_{P} {\cal H}_{M N} + 2 (\partial_{(M} \xi^{P} - \partial^{P} \xi_{(M}) {\cal H}_{N) P} \nonumber \\
\delta_{\xi} d & = & \xi^{P} \partial_{P}d - \frac12 \partial_{P}\xi^{P} \, ,
\end{eqnarray}
and (\ref{gaugel}), and invoking the GKSA it is straightforward to find
\begin{eqnarray}
\ell_{1}(\xi)_{t} & = & 2(\partial_{\underline M} \xi_{\overline N} - \partial_{\overline N} \xi_{\underline M}) \, , \\
\ell_{1}(\xi)_{s} & = & - \frac12 \partial_{P} \xi^{P} \, , \\
\ell_{2}(\xi,K)_{v} & = & \delta_{\xi} K_{M} \, , \\
\ell_{2}(\xi,\bar K)_{v} & = & \delta_{\xi} \bar K_{M} \, , \\
\ell_{2}(\xi,f)_{s} & = & \xi^{P}\partial_{P}f \, ,
\end{eqnarray}
where the subscripts $s$, $v$, $t$ mean that we are considering the scalar, vector or tensor part of the product, respectively.
\subsubsection{Perturbative DFT as an exact $L_{3}$ algebra}
The closed expressions for the dynamics can be easily obtained from (\ref{eoml}). Considering the equation of motion for the generalized dilaton we identify,
\begin{eqnarray}
\ell_{1}(f)_s & = & 4\kappa{H}^{K L} \partial_{K}\partial_{L}{f} \\
\ell_{2}(f,f)_s & = & 8\kappa^{2} {H}^{K L} \partial_{K}{f} \partial_{L}{f} \\
\ell_{2}(\bar K,K)_s & = & 4 \kappa \partial_{K} \partial_{L} ( K^{K} \bar{K}^{L}) \, ,
\end{eqnarray}
and, analogously, from the generalized metric equation we obtain,
\begin{eqnarray}
\ell_{1}(f)_t & = & 4 \kappa P_{K}{}^{M} \bar{P}_{L}{}^{N}\partial_{M N}{f} \\
\ell_{2}(\bar K,K)_t & = & \kappa \Big[ {H}^{M N} \partial_{M N}\big(K_{K} \bar{K}_{L}\big)-2\partial_{M N} \big( K^{N} \bar{K}_{L} P_{K}{}^{M} - K_{K} \bar{K}^{N} \bar{P}_{L}{}^{M}\big) \Big] \\
\ell_{3}(f,\bar K, K)_{t} & = & -6 \kappa^{2} \Big[\ {H}^{MN}\partial_{M}f \partial_{N}\big( K_{K} \bar{K}_{L}\big) -2 P_{K}{}^{M} \partial_{M}\big( K^{N}\bar{K}_{L}\partial_{N}f\big) \nonumber \\ && + 2\bar{P}_{L}{}^{M}\partial_{M}\big( K_{K}\bar{K}^{N}\partial_{N}f\big) \ \Big] \, ,
\end{eqnarray}
where $\partial_{M N}= \partial_{M} \partial_{N}$.
In order to verify the $L_{\infty}$ relations given by (\ref{main-Linty-identity})
it is only necessary to include extra products related to the gauge transformation of the equations of motion, $\ell_{2}(\xi,{\cal R})_s=\delta_{\xi}{\cal R}$ and $\ell_{2}(\xi,{\cal R}_{\underline M \overline N})_t=\delta_{\xi}{\cal R}_{\underline M \overline N}$, while the remaining products are null.
\subsection{GKSA-GDFT as an $L_{4}$ algebra}
\label{GKSA-GDFT}
The extension to GDFT will be performed in several steps. We start by considering only the subspaces related to the brackets algebra ($X_2$,$X_1$,$X_0$), then we include the subspace of fields $X_{-1}$ and finally we include both fields and their dynamics $X_{-2}$.
\subsubsection{GDFT bracket algebra as an $L_{3}$ algebra}
We start by discussing the
subalgebra corresponding to the pure gauge structure, given by the $C_f$-bracket algebra (\ref{Cbrack}) and the double Lorentz bracket (\ref{Lorentzbrack}). The graded vector space is taken to contain three spaces of fixed degree,
\begin{eqnarray}
0 & \rightarrow X_2 & \rightarrow X_1 \rightarrow X_0 \nonumber \\ & \quad c & \quad \, \, \, \chi \, \, \, \quad \quad \, \zeta
\end{eqnarray}
where $\zeta=(\xi,\Gamma)$ is a generic parameter and the above arrows define the $\ell_1$ action. From $X_2$ to $X_1$ the action is given by the inclusion map, while from $X_1$ to $X_0$ the action is given by the partial derivative. Acting on $X_0$ the map $\ell_1$ is null since we are not considering the fields yet. At this level the non-trivial products are
\begin{eqnarray} \label{prod1}
\ell_{1}(\chi) = \partial \chi \in X_{0}, \\
\ell_{1}(c) = \iota c \in X_{1},\\
\ell_{2}(\xi_{1},\xi_{2}) = \big[ \xi_{1},\xi_{2} \big]_{C_{f}} \in X_{0},\\
\ell_{2}(\xi , \chi) = \frac{1}{2} \xi^{K}\partial_{K}\chi \in X_{1},\\
\ell_{3}(\xi_{1},\xi_{2},\xi_{3}) = - N_{o}(\xi_{1},\xi_{2},\xi_{3}) - N_{f}(\xi_{1},\xi_{2},\xi_{3}) \in X_{1} \, , \\
\ell_{2}(\xi,\Gamma) = \xi^{ P} \partial_{ P} \Gamma_{{ A} { B}} \in X_{0}, \label{extra}\\
\ell_{2}(\Gamma_{1},\Gamma_{2}) = - \Gamma_{1 A}{}^{ C} \Gamma_{2 {C B}} \in X_{0} \, , \label{prod7}
\end{eqnarray}
where $N_o$ and $N_f$ can be computed from the Jacobiator of GDFT,
\begin{equation}
J(\xi_{1},\xi_{2},\xi_3)^{M} = \frac{3}{2}\partial^{M}(\xi_{[1}^{N} \xi_{2}^{P} \partial_{N} \xi_{3]P} + \frac13 f_{N P Q} \xi_{1}^{N} \xi_{2}^{P} \xi_{3}^{Q}) = N_{o}^M + N_{f}^M \, .
\end{equation}
The bracket $\Gamma_{[1A}{}^{C} \Gamma_{2]C B}$ encodes the algebra of matrix multiplication and therefore the analog of the Jacobiator for the double Lorentz symmetry is trivially null. Moreover, from (\ref{extra}) it is straightforward to show the following relation,
\begin{eqnarray}
\ell_{2}(\Gamma,\partial \chi)=\partial \ell_{2}(\Gamma,\chi) \, .
\end{eqnarray}
Using the previous relation and the products (\ref{prod1})-(\ref{prod7}) it is straightforward to show that the relations with $n\geq4$ are trivial.
\subsubsection{Off-shell GDFT as extended $L_{3}$ algebra}
Now we extend the $L_{3}$ algebra describing the $C_{f}$ and the double Lorentz brackets to include the fields and the symmetry
transformations. We recall at this point that the generalized metric formalism has to be abandoned in order to describe the GDFT structure. The graded vector space now contains four spaces,
\begin{eqnarray}
0 \rightarrow & X_2 & \rightarrow X_1 \rightarrow X_0 \rightarrow X_{-1} \nonumber \\ & c & \quad \, \, \, \chi \, \, \, \quad \quad \zeta \, \, \, \quad \, \, \, \Psi
\end{eqnarray}
where $\Psi=(K,\bar K , f)$ and $\ell_{n}(\Psi^{n})=0$ for $n\geq1$ since there is no dynamics at this point. From the symmetry transformations we read the following products,
\begin{eqnarray}
\ell_{1}(\xi)_{t} & = & 2(\partial_{\underline M} \xi_{\overline N} - \partial_{\overline N} \xi_{\underline M}) \, , \\
\ell_{1}(\xi)_{s} & = & - \frac12 \partial_{P} \xi^{P} \, , \\
\ell_{2}(\xi,\bar K)_{\overline v} & = & \widehat{\mathcal L}_{\xi} \bar K_{\overline A} \, , \\
\ell_{2}(\Gamma,\bar K)_{\overline v} & = & \delta_{\Gamma} \bar K_{\overline A} \, , \\
\ell_{2}(\xi, K)_{\underline v} & = & \widehat{\mathcal L}_{\xi} K_{\underline A} \, , \\
\ell_{2}(\Gamma, K)_{\underline v} & = & \delta_{\Gamma} K_{\underline A} \, , \\
\ell_{2}(\xi,f)_{s} & = & \xi^{P}\partial_{P}f \, .
\end{eqnarray}
The $L_{\infty}$ relations can be verified considering the previous list together with the one from the previous section. The relations $n=1$ and $n=2$ are trivial. The relation $n=3$ is not trivial for the case $x_1=\Psi$, $x_2=\zeta_2$, $x_3=\zeta_3$,
\begin{eqnarray}
0 & = & \ell_2(\ell_2(\Psi,\zeta_2),\zeta_3) + \ell_2(\ell_2(\zeta_3,\Psi),\zeta_2)
+\ell_2(\ell_2(\zeta_2,\zeta_3),\Psi)
\ . \nonumber
\end{eqnarray}
The previous expression can be rewritten in the following form,
\begin{eqnarray}
[\delta_{\zeta_2},\delta_{\zeta_3}]\Psi = \delta_{\zeta_{23}} \Psi
\end{eqnarray}
and therefore it is satisfied using the closure condition for the deformed generalized diffeomorphisms and double Lorentz transformations. Relations with $n \geq 4$ are trivial.
\subsubsection{Perturbative GDFT as an exact $L_{4}$ algebra}
Finally we extend the $L_{3}$ algebra describing the $C_{f}$ and the double Lorentz brackets algebra to include the dynamics. The graded vector space now contains five spaces,
\begin{eqnarray}
0 \rightarrow & X_2 & \rightarrow X_1 \rightarrow X_0 \rightarrow X_{-1} \rightarrow X_{-2} \nonumber \\ & c & \quad \, \, \, \chi \, \, \, \quad \quad \zeta \, \, \, \quad \, \, \, \Psi \quad \quad \, \, {\cal F}
\end{eqnarray}
where the perturbative equations of motion are related to the equations of the generalized dilaton and the generalized metric, ${\cal F}=({\cal R}, {\cal R}_{\overline A \underline B})$, but considering the GKSA and the linear perturbation for the generalized dilaton. From (\ref{eoml}) we have
\begin{eqnarray}
\ell_{1}(f)_s & = & - 4 \kappa E^{\underline A}\left(E_{\underline{A}}f\right) \\ \ell_{2}(f,f)_s & = & - 8 \kappa^2 E_{\underline{A}}f E^{\underline{A}}f \\ \ell_{2}(K,\bar K)_s & = & 2 \kappa E^{\underline A}\left(K_{\underline A} E_{\overline B}\bar K^{\overline B} + \bar K^{\overline B} E_{\overline B} K_{\underline A}\right) - 4 \kappa f_{\overline A \underline B \underline C} f^{\underline D \underline B \underline C} \bar K^{\overline A} K_{\underline D} \\ \ell_{4}(K,K,\bar K,\bar K)_s & = &
- 6 \kappa^2 \bar K_{\overline B} \bar K_{\overline C} \left[ (E^{\overline C} K^{\underline A}) E^{\overline B} K_{\underline A} - 2 f^{\overline B}{}_{\underline B \underline C} K_{{\underline A}} f^{\overline C \underline A \underline C} K^{{\underline B}} \right]
\end{eqnarray}
where we use the compact notation $f_{A B C} = E^{M}{}_{A} E^{N}{}_{B} E^{P}{}_{C} f_{M N P}$. The previous contributions come from the GDFT Lagrangian up to a cosmological term that requires a field redefinition. Analogously, from the generalized flat Ricci scalar we read
\begin{eqnarray}
\ell_{1}(f)_t & = & 2\kappa \left[ \sqrt{2} f_{\overline A \underline B \underline C} E^{\underline{C}}f - E_{\overline A}[E_{\underline{B}}f] \right] \\ \ell_{2}(K,\bar K)_t & = & \kappa\left[ 2 f^{\overline D}{}_{\underline B \underline C} K^{\underline C} \bar K_{\overline E} f^{\overline E}{}_{\overline D \overline A} -2 f^{\underline C}{}_{\overline D \overline A} \bar K^{\overline D} K_{\underline D} f^{\underline D}{}_{\underline B \underline C} \right. \nonumber \\ &&
- 2 \sqrt{2} f^{\underline C}{}_{\overline D \overline A} \left(- \frac12 K_{\underline{C}}D_{\underline{B}}\bar K^{\overline{D}} - \frac12 \bar K^{\overline{D}}E_{\underline{B}}K_{\underline{C}} + \frac12 K_{\underline{B}}D_{\underline{C}}\bar K^{\overline{D}} \right. \nonumber \\&& \left. + \frac12 \bar K^{\overline{D}}E_{\underline{C}}K_{\underline{B}} \right) - \sqrt{2} \left((E_{\bar B}\bar K^{\overline B})K^{\underline C} + (E_{\overline B} K^{\underline C}) \bar K^{\overline B} \right) f_{\overline A \underline B \underline C} \nonumber \\ && +2 E^{\underline C}\left[ - K_{[\underline{C}}D_{\underline{B}]}\bar K_{\overline{A}} - \bar K_{\overline{A}}E_{[\underline{B}}K_{\underline{C}]} + \frac{1}{\sqrt{2}} \bar K_{\overline A} K_{\underline D} f^{\underline D}{}_{\underline B \underline C} \right] \nonumber \\
&&
+ E_{\overline A}\left[(E_{\bar C}\bar K^{\overline C})K_{\underline B} + (E_{\overline C} K_{\underline B}) \bar K^{\overline C} \right]
\nonumber \\&& \left. - 2 \sqrt{2} f^{\overline D}{}_{\underline B \underline C}\left(\bar K{}_{[\overline{A}}D{}_{\overline{D}]}K^{\underline{C}} + K^{\underline{C}}E_{[\overline{D}}\bar K_{\overline{A}]} \right) \right] \\ \ell_{3}(f,K,\bar K)_t & = & \kappa^2 \left[-\frac{3}{2} \left(4E^{\underline{C}}f\right) \left( K_{\underline{B}}D_{\underline{C}}\overline{K}_{\overline{A}} \right) + 6 K_{\underline C}\bar{K}_{\overline A} E^{\underline C}\left[E_{\underline{B}}f\right] \right. \nonumber \\ &&
- 6 \left( E^{\underline{C}}f\right) K_{\underline{B}}D_{\underline{C}}\overline{K}_{\overline{A}} + 6 \left( E^{\underline{C}}f\right) \overline{K}_{\overline{A}}E_{\underline{B}}K_{\underline{C}} \nonumber \\ && \left.
- 6 \left( E^{\underline{C}}f\right) \overline{K}_{\overline{A}}E_{\underline{C}}K_{\underline{B}} - 6 \sqrt{2} \left( E^{\underline{C}}f\right) \bar K_{\overline A} K_{\underline D} f^{\underline D}{}_{\underline B \underline C} \right] \\ \ell_{4}(K,K, \bar K,\bar K)_t & = & 6 \kappa^2 \left[ \left( (E_{\overline B} K^{\underline C}) \bar K^{\overline B} \right) \left( K_{\underline{B}}D_{\underline{C}}\overline{K}_{\overline{A}} \right) - K_{\underline C}\bar{K}_{\overline A} E^{\underline C}\left[(E_{\bar C}\bar K^{\overline C})K_{\underline B} \right. \right. \nonumber \\ && \left. + (E_{\overline C} K_{\underline B}) \bar K^{\overline C} \right] - K^{\underline C} \bar K_{\overline D} \bar K_{\overline{A}}E^{\overline D}\left[E_{\underline{B}}K_{\underline{C}}\right]
+ K^{\underline C} \bar K_{\overline D} K_{\underline{B}}E^{\overline D}\left[D_{\underline{C}}\bar K_{\overline{A}}\right] \nonumber \\ &&
+ K^{\underline C} \bar K_{\overline D} E^{\overline D} \left[\bar K_{\overline{A}}E_{\underline{C}}K_{\underline{B}} \right]
+ \sqrt{2} K^{\underline C} \bar K_{\overline D} K_{\overline A} E^{\overline D} \bar K_{\underline D} f^{\underline D}{}_{\underline B \underline C} \nonumber \\ &&
+ K_{\underline{B}}D_{\underline{C}}\bar K^{\overline{D}} \bar K{}_{\overline{A}}D{}_{\overline{D}}K^{\underline{C}} + \bar K^{\overline{D}}E_{\underline{C}}K_{\underline{B}} K^{\underline{C}}E_{\overline{D}}\bar K_{\overline{A}} \nonumber \\
&&
+ \left( (E_{\overline B} K^{\underline C}) \bar K^{\overline B} \right) K_{\underline{B}}D_{\underline{C}}\overline{K}_{\overline{A}} - \left((E_{\overline B} K^{\underline C}) \bar K^{\overline B} \right) \overline{K}_{\overline{A}}E_{\underline{B}}K_{\underline{C}} \nonumber \\ &&
+ \left((E_{\bar B}\bar K^{\overline B})K^{\underline C} + (E_{\overline B} K^{\underline C}) \bar K^{\overline B} \right) \overline{K}_{\overline{A}}E_{\underline{C}}K_{\underline{B}} \nonumber \\ && \left. + \sqrt{2} \left(\bar K^{\overline B} E_{\overline B} K^{\underline C} \right) \bar K_{\overline A} K_{\underline D} f^{\underline D}{}_{\underline B \underline C} \, \right].
\end{eqnarray}
At this point we include products related to the gauge transformation of the equations of motion, $\ell_{2}(\xi,{\cal R})_s=\delta_{\xi}{\cal R}$, $\ell_{2}(\xi,{\cal R}_{\overline A \underline B})_t=\delta_{\xi}{\cal R}_{\overline A \underline B}$ and $\ell_{2}(\Gamma,{\cal R}_{\overline A \underline B})_t=\delta_{\Gamma}{\cal R}_{\overline A \underline B}$, where the last contribution can be easily computed considering that each index of ${\cal R}_{\overline A \underline B}$ transforms as a projected double Lorentz vector. The products related to the transformation of the equations of motion are required to check the $n=2$ relation. In this context, the absence of an $\ell_{3}(\zeta_{1},\zeta_2,{\cal F})$ product implies that the closure of the gauge algebra holds off-shell. The remaining products are also null, as can be easily verified.
\section{Applications}
\label{App}
\subsection{Fundamental charged heterotic string}
The simplest theory within the family of low energy effective field theories that can be reproduced with the GKSA is the fundamental charged heterotic string solution in $D=10$ \cite{Sen}. The duality approach can be easily constructed considering the following parametrization,
\begin{eqnarray}
{H}_{M N} = \left(\begin{matrix} g_{o}^{\mu \nu} & - g_{o}^{\mu \rho} C_{o\rho \nu} & - g_{o}^{\mu \rho} A_{o \rho i} \\
- g_{o}^{\nu \rho} C_{o\rho \mu} & g_{o\mu \nu} + C_{o\rho \mu} C_{o\sigma \nu} g_{o}^{\rho \sigma} + A_{o\mu}{}^i \kappa_{ij} A_{o\nu}{}^j &
C_{o\rho \mu} g_{o}^{\rho \sigma} A_{o\sigma i} + A_{o\mu}{}^j \kappa_{ji} \\
- g_{o}^{\nu \rho} A_{o\rho i} & C_{o\rho \nu} g_{o}^{\rho \sigma} A_{o\sigma i} + A_{o\nu}{}^j \kappa_{ij} & \kappa_{ij} + A_{o\rho i} g_{o}^{\rho \sigma} A_{o\sigma j}\end{matrix}\right) \ ,
\label{Gmetric}
\end{eqnarray}
with $\kappa_{ij}$ a Cartan-Killing metric. Since the generalized structure constants force us to describe the theory with the generalized frame/flux formalism, compatibility with the ansatz forces the gauge field to remain unperturbed as in \cite{KL}. Similarly, the generalized null vectors $K_{M}$ and ${\bar K}_{M}$ can be parametrized in terms of a pair of null vectors $l$ and $\bar l$ in the following way,
\begin{eqnarray}
K_{M} = \, \frac{1}{\sqrt{2}} \left( \begin{matrix} l^{\mu} \\ - l_{\mu} - C_{o \rho \mu} l^{\rho } \\ - A_{oi \rho} {l}^{\rho} \end{matrix} \right) \, , \quad
\bar{K}_{M} = \, \frac{1}{\sqrt{2}} \left( \begin{matrix} {\bar l}^{\mu} \\ {\bar l}_{\mu} - C_{o \rho \mu} {\bar l}^{\rho} \\- A_{oi \rho} {\bar l}^{\rho} \end{matrix} \right) \, ,
\end{eqnarray}
where $C_{o\mu \nu}=b_{o\mu \nu} + \frac12 A_{o\mu}{}^{i} A_{o\nu i}$ and the null vectors are constrained by (\ref{flatgeo}). The parametrization of the dilaton is $
e^{-2d_{o}} = \sqrt{g_{o}} e^{-2 \phi}$. In this case the perturbed solution is given by
\begin{eqnarray}
ds^{2} = \frac{1}{1+NH(r)}(-dt^{2}+ (dx^{9})^2) + \frac{q^2 H(r)}{4N(1+NH(r))^2}(dt+dx^{9})^2 + \sum_{i=1}^{8} dx^{i} dx^{i} \, ,
\end{eqnarray}
with $H(r)$ a Green function and $N$ a constant. The non-vanishing components of the two form and gauge field are
\begin{eqnarray}
b_{9t} & = &\frac{NH(r)}{1+NH(r)} \,, \\
A^1_{0} & = & A^{1}_{9} = \frac{qH(r)}{1+NH(r)} \, ,
\end{eqnarray}
with $q$ a charge and $\phi=-\frac12 \textrm{ln}(1+N H(r))$.
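As a sanity check of this parametrization, the null condition $\eta^{MN}K_{M}K_{N}=0$ can be verified numerically for random backgrounds. The following sketch assumes an index splitting $K_{M}=(K^{\mu}, K_{\mu}, K_{i})$ with the off-diagonal pairing $\eta^{MN}K_{M}K_{N} = 2\,K^{(1)}\cdot K^{(2)} + \kappa^{ij}K^{(3)}_{i}K^{(3)}_{j}$ (a convention choice made here purely for illustration), a flat $g_{o}$ and a null vector $l$,
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, n = 10, 3
g = np.diag([-1.0] + [1.0] * (D - 1))        # flat background metric g_o
b = rng.normal(size=(D, D)); b = b - b.T     # antisymmetric b_o
A = rng.normal(size=(D, n))                  # gauge field A_{o mu}^i
kappa = np.eye(n)                            # Cartan-Killing metric
C = b + 0.5 * A @ kappa @ A.T                # C_o = b_o + A.A/2

l_up = np.zeros(D); l_up[0] = l_up[-1] = 1.0 # null: l.g.l = 0
l_dn = g @ l_up

K1 = l_up / np.sqrt(2)                       # upper block   l^mu
K2 = (-l_dn - C.T @ l_up) / np.sqrt(2)       # -l_mu - C_{o rho mu} l^rho
K3 = -(A.T @ l_up) / np.sqrt(2)              # -A_{o i rho} l^rho

print(abs(2 * K1 @ K2 + K3 @ kappa @ K3) < 1e-12)   # True
\end{verbatim}
The $b_{o}$ contribution drops out by antisymmetry and the $A_{o}$ contributions cancel between the second and third blocks, so the condition reduces to $l^{\mu}l_{\mu}=0$, as expected.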
At the level of the symmetry transformations, the algebraic structure of the duality covariant approach of this theory is an $L_{3}$ algebra, given by the transformations of $K_{M}$, $\bar K_{M}$ and $f$ under generalized diffeomorphisms and double Lorentz transformations. While the former encodes ordinary diffeomorphisms and abelian/non-abelian gauge transformations for $b_{o \mu \nu}/A_{o\mu i}$, the latter transforms the flat version of the null vectors with a 10-dimensional Lorentz parameter $\Lambda_{ab}$ such that,
\begin{equation}
\delta_{\Lambda} l_{a} = \Lambda_{a}{}^{b} l_{b} \, , \quad \delta_{\Lambda} \bar l_{a} = \Lambda_{a}{}^{b} \bar l_{b} \, ,
\end{equation}
where $l_{a} = e_{o}^{\mu}{}_{a} l_{\mu}$ and $\bar l_{a} = e_{o}^{\mu}{}_{a} \bar l_{\mu}$. The full perturbative GDFT for this solution can be cast in an $L_{4}$ algebra with $\kappa^2$ contributions in the equations of motion as we showed in the previous section.
\subsection{(Bosonic) Enhanced DFT}
In compactifications of the bosonic string on $k$-dimensional tori,
the $U(1)_{L} \times U(1)_R$ symmetry of the Kaluza-Klein reduction gets enhanced at special points in moduli space and there are new massless scalars transforming in the adjoint representation of the enhanced symmetry groups. The new gauge group corresponds to $G_{L} \times G_{R}$ where $G_{L}=G_{R}$, and reduces to the standard $U(1)_{L} \times U(1)_R$ outside the points that provide the enhancement. As we mentioned, DFT incorporates T-duality
as a global symmetry group and therefore it is expected that there exists a formulation that captures these new states in a duality covariant way \cite{Enh}. When $G_{L}\times G_{R}$ has non-simple roots, it was shown in \cite{Mariana} that the $C$-bracket can be deformed in a way that preserves the duality covariance. The deformation accounts for
the cocycle factors that are necessary in the vertex representation of the current algebra. In terms of the generalized Lie derivative this deformation is
\begin{eqnarray}
({\cal \widehat L}_{E_A}E_B)^M= ({\cal L}_{E_A}E_B)^M + \widehat \Omega_{AB}{}^C E^{M}{}_{C} \, ,
\end{eqnarray}
where $\widehat\Omega_{ABC}$ vanishes if one or more indices correspond to Cartan generators, while if $A,B,C$\footnote{We use the same notation, but these indices $A,B,C \dots$ must be thought of as double internal indices of a generalized parallelizable manifold.} are associated with roots of the enhancement algebra, say $\alpha,\beta,\gamma$, respectively, it is given by
\begin{eqnarray}
\widehat\Omega_{ABC}=\left\{\begin{matrix} (-1)^{\alpha * \beta}\;\delta_{\alpha+\beta+\gamma} &\ \ {\rm if \ two \ roots \ are \ positive,}\\
-(-1)^{\alpha * \beta}\;\delta_{\alpha+\beta+\gamma} &\ \ {\rm if \ two \ roots \ are \ negative.} \\
\end{matrix}\right.
\label{deform}
\end{eqnarray}
The tensor $\widehat\Omega_{ABC}$ satisfies
\begin{eqnarray}
\widehat\Omega_{ABC}=\widehat\Omega_{[ABC]}\, , \qquad \widehat\Omega_{[AB}{}^D\widehat\Omega_{C]D}{}^E=0\, , \qquad \widehat\Omega_{ABC}\partial^C\cdots =0\, ,
\end{eqnarray}
and therefore (\ref{deform}) can be easily identified with the generalized structure constants $f_{ABC}$ upon trivially extending $O(D,D+n)\rightarrow O(D+n,D+n)$ in (\ref{glie2}). In this sense, the algebraic structure of enhanced DFT can be cast in the $L_{3}$ framework at the level of the $C_{\Omega}$-bracket algebra, according to the results of this work.
\section{Summary}
\label{Con}
In this work we show that GDFT can be cast in an $L_{\infty}$ structure. The presence of a deformed generalized Lie derivative and the double Lorentz transformations enriches the algebraic structure, adding non-trivial products to the well known $L_{\infty}$ structure of DFT. The frame formalism is needed to compute the generalized fluxes, which are deformed with a generalized version of the structure constants. At the level of the symmetry transformations the algebraic structure of GDFT is given by an $L_{3}$ algebra. We also show that the study of the dynamics can be performed in a closed form considering a GKSA for the generalized frame and a linear perturbation for the generalized dilaton. When dynamics is taken into account, the structure is promoted to an $L_{4}$ algebra with $\kappa^2$ corrections. The present computation has direct implications for the low energy effective action principle of the fundamental heterotic charged string, and for the bosonic string compactified on a specific internal $k$-dimensional torus with enhanced gauge symmetry. The latter is described by an enhanced DFT, which can be understood as a particular case of GDFT and therefore, at the level of the deformed bracket, the algebraic structure is an $L_{3}$ algebra.
\subsection*{Acknowledgements}
We thank D. Marqués for useful discussions. M.M. thanks the Max-Planck institute for kind hospitality during the final stages of this project. This work is partially supported by CONICET grant PIP-11220110100005 and PICT-2016-1358.
\section{Introduction}
Twitter is increasingly used as a source of news and latest trends. Being open to all, Twitter emerged as an excellent means to disseminate information to a large user community in the shortest time. On the negative side, this very open uncontrolled nature makes microblogging vulnerable to false information from malicious or credulous users~\cite{nytimestwitter,spamTwitterEconomist}. The recent trend of web search engines and online retailers considering the real-time trends in tweets for ranking products, news and recommendations aggravates this problem~\cite{abel2011analyzing,dong2010time}, making microblog spamming more lucrative. Consequently, it is important to formulate sophisticated methods for analysis of relevance and trustworthiness for ranking tweets.
Current Twitter ranking is understood to consider the presence of query keywords and the recency of the tweets~\cite{rankingTwitter}. The increase in the number of queries on a topic is generally associated with an increase in the number of tweets. For example, when Apple releases a new model of iPhone, Twitter searches, as well as the tweets about the new model, are likely to soar. Considering this correlation between tweets and searches, the popularity of a fact in tweets is a strong indicator of tweets' relevance. Twitter recognizes the importance of popularity, and assesses the popularity by the number of retweets. While the number of retweets is an indication of popularity, this does not consider content-based popularity, i.e., though two tweets are not retweets of each other, they may be semantically similar. Secondly, considering the trustworthiness, retweeting need not indicate trust, as many users retweet without verifying the content. To get trustworthy tweets, Twitter tries to filter out spam tweets~\cite{spamTwitter}. While spam tweets are a form of untrustworthy tweets, providing correct information is more than just removing spam~\cite{nytimestwitter}. Even if the information is not deliberately manipulative, tweets may be incorrect.
To overcome these problems, we need a ranking sensitive to the content-based popularity and trustworthiness of microblogs. Ranking should place the most credible and popular tweets in the top slots while searching with a keyword or \emph{hashtag}. To achieve this, we need methods to analyze the content-based popularity and trustworthiness of individual tweets. Further, since the ranking is an online operation, the computational time should be acceptable. We believe that these problems are relevant not only to Twitter, but also to the search engines and retailers exploiting the Twitter trends for their rankings.
The main stumbling block in analyzing popularity and trustworthiness of tweets is that there is no authoritative source against which the information can be compared. Approaches like certifying user profiles have limitations, since it is hard to verify millions of unknown and new users. Thus the very charm of open microblogging---\emph{anyone may say anything}---makes the problem harder. Further, many users hardly verify the veracity of information before retweeting, making propagation of false information easier. To deal with similar problems, web search engines use link analysis like PageRank~\cite{brin1998anatomy} to estimate the trustworthiness and importance of pages. Link analysis is not directly applicable to tweets since there are no hyperlinks between the tweets.
To surmount these hurdles, we propose to assess trustworthiness and popularity of tweets based on the analysis of the entire tweet ecosystem spanning across tweets, users and the web. In the tweet space, we assess the popularity of tweets based on the pair-wise content based agreement. On the web page space, we consider the page rank of the pages referred by the tweets. In the user space, we consider the implicit links between the users based on the follower-followee relationships. We propagate scores from all three layers based on the inter-layer relationships to compute a single tweet score.
We compare the credibility and relevance of the ranking by our method with the baselines. We show that the proposed method improves both the relevance and the trustworthiness of the top tweets compared to the baselines. Further timing experiments show that the computation time for the ranking is acceptable.
The rest of the paper is organized as follows. The next section describes related work, and the following section presents our model of the tweet space. Subsequently we describe our ranking methods, followed by a section on experiments and results. Finally we present our conclusions and the planned future work.
\section{Related Work}
Ranking of tweets considering only relevance is researched extensively~\cite{TRECTwitter,duan2010empirical,nagmoti2010ranking}. Unlike our paper, these ranking approaches do not consider the trustworthiness.
Credibility analysis of Twitter stories have been attempted by Castillo \emph{et al.}~\cite{infocredibility}. The work tries to classify Twitter story threads as credible or incredible. Our problem is different, since we try to assess the credibility of individual tweets. As the feature space is much smaller for an individual tweet---compared the Twitter story threads---the problem becomes harder.
Finding relevant and trustworthy results based on implicit and explicit network structures have been considered previously~\cite{gupta2011heterogeneous,sourcerank}. Real time web search considering tweet ranking has been attempted~\cite{abel2011analyzing,dong2010time}. We consider the inverse approach of considering the web page prestige to improve the ranking of the tweets. To the best of our knowledge, ranking of tweets considering trust and content popularity has not been attempted.
\section{ Modeling Twitter Ecosystem}
\label{sec:model}
We model the entire tweet ecosystem as a three layer graph, as shown in Figure~\ref{fig:tweetModel}. In the model, the three layers are a user layer composed of Twitter users, a tweets layer composed of tweets, and a web layer composed of pages. We exploit implicit and explicit links within the layers and across the layers for our ranking. The Twitter users are linked by \emph{who is following whom} relations. In the tweets layer, we build implicit links based on the content agreement, in addition to the directed retweet links. These agreement links provide evidence about many more tweets compared to the very sparse retweet links. The web layer has explicit hyperlinks between pages. Though we considered only the relationships relevant to our ranking, other types of relations may be derived in the space.
The proposed ranking is performed in the tweets layer. But we exploit all the three layers---user, web and tweets---to compute ranking scores. Within the tweets layer, we compute the content agreement between the tweets. Two tweets are in agreement if they have the same semantic sense. We will describe the details of the agreement computation in Section~\ref{subsec:agreementComputation}. In the user layer, we compute the scores of the users based on the follower-followee relationships. These scores are propagated to the tweets by the \emph{Tweeted by} relationship. Similarly, we get the PageRank of the pages (which is believed to be derived partially from the hyperlink structure of the web) referred by the tweets and propagate it back to derive ranking scores of the tweets.
\begin{figure}[t]
\centering
\includegraphics[scale= 0.5, trim=30mm 105mm 24mm 80mm, clip=true]{triLayer.pdf}
\caption{Three layer ecosystem of Twitter space composed of user layer, tweets layer and the web layer. The inter and intra layer edges are the implicit and explicit relations considered for the proposed ranking. }
\label{fig:tweetModel}
\end{figure}
\section{Ranking}
\label{sec:ranking}
In this paper we specifically focus on the ranking of tweets considering agreement. With respect to our model in Figure~\ref{fig:tweetModel}, this corresponds to ranking based on the agreement links in the tweets layer. The complete composite ranking exploiting all three layers are left for the future research.
\subsection{Agreement as a Basis of Ranking}
We explain the intuitions behind the agreement-based ranking in this section. We compute the pair-wise agreement of tweets. A tweet which is agreed upon by a large number of other tweets is likely to be popular. Since popularity indicates relevance, as we describe in the introduction, tweets with high agreement by other tweets are likely to be relevant. Alternatively, relevance assessment based on agreement may be viewed as an extension of relevance assessment exploiting the retweet-based popularity.
With respect to the trustworthiness, if two independent tweeters agree on the same fact, tweets are likely to be trustworthy. The retweets are most likely not independent from the original tweets. Consequently, agreement is more indicative of trustworthiness than retweets. Please refer to Balakrishnan and Kambhampati~\cite{sourcerank} for a more general explanation of why agreement is likely to indicate trustworthiness and relevance.
\begin{figure*}[t]
\begin{minipage}[b]{.47\textwidth}
\includegraphics[scale=.4,trim= 100 70 100 80,clip=true]{Chart-rel.pdf}
\caption{Top-K Results vs Relevance Measure}
\label{fig:Relevance Measurement}
\end{minipage}\qquad
\begin{minipage}[b]{.47\textwidth}
\includegraphics[scale=.4,trim= 100 70 100 80,clip=true ]{Chart-trust.pdf}
\caption{Top-K Results vs Trust Measure}
\label{fig:Trust Measurement}
\end{minipage}
\end{figure*}
\subsection{Agreement Computation}
\label{subsec:agreementComputation}
Computing semantic agreement between the tweets while satisfying the query-time constraints is challenging. We compute the pair-wise agreement between the tweets retrieved for a query based on Soft-TFIDF, and calculate the ranking scores based on voting.
Soft-TFIDF is similar to the normal TFIDF, but considers similar tokens in two compared document vectors in addition to the exactly same tokens. We use Soft-TFIDF with Jaro-Winkler similarity; which is found to perform well for named entity matching~\cite{cohen2003comparison} and computing semantic similarity between the web database entities~\cite{sourcerank}.
Let $\mathcal{C}(\theta, v_i, v_j)$ be the set of words $w \in v_i$ such that there is some $u \in v_j$ with $sim(w,u) > \theta$, and let $D(w,v_j)=\max_{u\in v_j}sim(w,u)$. The
$\mathcal{V}(w,v_i)$ are the normal TF values weighted by $\log(IDF)$
used in the basic TF-IDF. Soft-TFIDF is calculated as,
\begin{equation}\mathcal{SIM}(v_i,v_j)=\sum_{w\in \mathcal{C}(\theta,v_i ,v_j)}\mathcal{V}(w,v_i)\mathcal{V}(u,v_j)D(w,v_j)\, ,\end{equation}
where $u \in v_j$ is the token maximizing $sim(w,u)$. We used Jaro-Winkler as the secondary distance function $sim$ above. Parameter $\theta$ is set to $0.6$, as this value was found to be performing well based on cross-validation.
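A minimal sketch of this computation is given below (assuming the third-party \texttt{jellyfish} package for Jaro-Winkler; tweets are pre-tokenized by whitespace, and the corpus is the retrieved tweet set),
\begin{verbatim}
import math
from collections import Counter

import jellyfish  # third-party; provides jaro_winkler_similarity

THETA = 0.6

def tfidf_weights(doc, corpus):
    """Normalized TF values weighted by log(IDF)."""
    tf = Counter(doc)
    weights = {}
    for w, freq in tf.items():
        df = sum(1 for d in corpus if w in d)
        weights[w] = (freq / len(doc)) * math.log(len(corpus) / df)
    norm = math.sqrt(sum(v * v for v in weights.values())) or 1.0
    return {w: v / norm for w, v in weights.items()}

def soft_tfidf(v_i, v_j, corpus):
    wi, wj = tfidf_weights(v_i, corpus), tfidf_weights(v_j, corpus)
    sim = 0.0
    for w in wi:
        # closest token u in v_j under the secondary distance
        u, d = max(((u, jellyfish.jaro_winkler_similarity(w, u))
                    for u in wj), key=lambda t: t[1])
        if d > THETA:              # w belongs to C(theta, v_i, v_j)
            sim += wi[w] * wj[u] * d
    return sim
\end{verbatim}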
To formulate the final ranking combining agreement, keyword-based similarity and recency of tweets, we send queries to Twitter and retrieve the top-$N$ (we used $N=200$) tweets. After computing the pair-wise similarity between the tweets as described above, we represent the tweets as a weighted graph with tweets as vertices and edge weights as similarities (this graph-based representation makes some of our future research easier). In this weighted graph, we compute the score of a tweet as the sum of its edge weights. Finally we rank the tweets based on this edge weight score and present the top-$k$ to the user. Since the top-$N$ tweets are returned by Twitter considering keyword relevance and recency of the tweets, these two factors are implicitly accounted for in the proposed ranking.
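Putting the pieces together, a hypothetical end-to-end sketch of this procedure (reusing \texttt{soft\_tfidf} from above; $N=200$ as above, and $k$ is the number of results presented to the user),
\begin{verbatim}
def rank_tweets(tweets, k=10):
    """Score each tweet by the sum of its agreement-graph edge weights."""
    docs = [t.lower().split() for t in tweets]
    n = len(docs)
    scores = [sum(soft_tfidf(docs[i], docs[j], docs)
                  for j in range(n) if j != i)
              for i in range(n)]
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return [tweets[i] for i in order[:k]]

# tweets: the top-N = 200 results returned by Twitter for the query
# top_k = rank_tweets(tweets)
\end{verbatim}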
\section{Evaluation}
We conducted a preliminary evaluation of the proposed ranking method against popular ranking of TF-IDF based on query similarity. We compared the top-$k$ precision and Normalized Discounted Cumulative Gain (NDCG) of the proposed method with the TF-IDF. Subsequently we compared trustworthiness of the top-$k$ tweets by the proposed method with the baselines. Further, we evaluated the variation of computation timings with the size of the ranked tweet set.
\subsection{Test Tweet Set}
We used Twitter's trending topics spanning current news, sports and celebrity gossip for our evaluations. Trending topics are used to get enough tweets with varying degrees of trustworthiness and relevance. For each topic, the top 1500 tweets are retrieved using the Twitter API (1500 is the maximum number of tweets returned by the Twitter API). The tweets marked as retweets are removed. We randomly sampled 200 tweets from these 1500 tweets to get our test set. We used a random sample of 200 tweets instead of the top-200 results from Twitter, as often the top-$k$ tweets contain repetitions of a few tweets, since many users copy-paste the same information without explicitly retweeting. Thus randomly sampling 200 tweets from the top 1500 tweets increases the variance in the tweet quality in the test set so that different ranking methods can be better distinguished. We used enough queries to distinguish the proposed method from the TF-IDF with a statistical significance of 0.8 or above in every experiment below.\footnote{We will improve the significance level to 0.9 in our future experiments.}
\subsection{Relevance Evaluations}
To assess the relevance, we manually labeled the tweets with a relevance value of 0, $\frac{1}{3}$, $\frac{2}{3}$ or 1. The test data for 6 search queries contained 187 tweets of zero relevance to the query, 473 tweets of relevance $\frac{1}{3}$, 249 tweets of relevance $\frac{2}{3}$ and 39 tweets of relevance 1. The classification was done based on the relevance of the tweet to the current news matching that trending topic. For example, if the topic is ``\textit{britney spears}" and the current news during the tweet generation was about Britney Spears' engagement, the tweets which were not related to the trending topic or were spam were given a score of 0 (e.g. \textit{I liked a @YouTube video Britney Spears}), the tweets which were remotely relevant were given a score of $\frac{1}{3}$ (e.g. \textit{Britney Spears Is Engaged}), tweets which had some information on the engagement were given a score of $\frac{2}{3}$ (e.g. \textit{Britney Spears engaged to marry Jason Trawick (AP)}), and the tweets which had a good amount of information were given a perfect score of 1 (e.g. \textit{@BritneySpears engaged to marry her longtime boyfriend and former agent Jason Trawick}).
The comparison of top-$k$ precision of the proposed method with the TF-IDF is shown in Figure~\ref{fig:Relevance Measurement}. The proposed method improves both NDCG and top-$k$ precision for all values of $k$. Note that the apparently low value of mean relevance (less than 0.5) is due to the fact that only a very small fraction of tweets have high relevance values. Though a direct comparison is not possible with TREC 2011 microblog track results as the data is not publicly available yet, top precisions in TREC are in comparable ranges~\cite{TRECTwitter}.
\subsection{Trust Evaluations}
Similar to the relevance evaluations, we labeled the tweets as trustworthy or untrustworthy manually. Tweets were given a score of -1, 0 or 1, where -1 is for untrustworthy tweets such as spam or wrong facts (e.g. \textit{Britney Spears engaged to a Sachem alum.}), 0 for tweets which are opinions (e.g. \textit{We can all rest now \#Britney}) and 1 for the tweets which contain correct facts (e.g. \textit{Britney Spears is engaged to marry Jason Trawick}). Our dataset for the 6 queries contained 29 tweets with score -1, 157 tweets of score 0 and 742 tweets of score 1. Note that the returned tweets are after the spam filtering by Twitter, which itself eliminates many spam tweets.
Figure \ref{fig:Trust Measurement} shows the comparison of the proposed method with TF-IDF based ranking. The top-$k$ tweets returned by the proposed method are almost always trustworthy, whereas the TF-IDF returns many of the untrustworthy tweets in the top. This shows that the proposed method effectively removes the untrustworthy tweets and returns trustworthy ones in the top slots, even for $k=20$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.27,trim= 65 34 49 16,clip=true ]{Chart-timing.pdf}
\caption{ Number of Tweets vs computation time.}
\label{fig:Time Measurement}
\end{figure}
\subsection{Timing Evaluation}
As the ranking is performed at query time, the computation time must be within acceptable limits. We evaluated the time taken for ranking against the number of ranked tweets. The experiments are performed on a dual-core 3 GHz machine with a memory of 8 GB. In Figure~\ref{fig:Time Measurement}, ranking up to 300 tweets takes less than 1.2 seconds. The proposed approach of selecting top tweets based on the recency and further ranking the selected set of tweets, of the order of hundreds, is feasible (note that our experiments used only 200 tweets). The time increases quadratically in the number of tweets as expected. Further, notice that computation of the pairwise agreement---the time-consuming part of the ranking---can be easily parallelized (e.g. using MapReduce) since each agreement computation can be performed in isolation without inter-process communication.
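For instance, a minimal sketch of such a parallelization with a process pool (a shared-memory analogue of the MapReduce map step; each row of the pairwise-agreement matrix is computed independently, and \texttt{docs} denotes the tokenized tweet list from the previous section),
\begin{verbatim}
from multiprocessing import Pool

def row_score(i):
    return sum(soft_tfidf(docs[i], docs[j], docs)
               for j in range(len(docs)) if j != i)

if __name__ == '__main__':
    with Pool() as pool:
        scores = pool.map(row_score, range(len(docs)))
\end{verbatim}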
\section{Conclusions and Future Work}
In order to rank the tweets, consideration of content-based popularity and trustworthiness is essential. Towards this end, we model the Twitter ecosystem as a tri-layer---user, tweets and web layers---graph and propose a ranking exploiting explicit and implicit links in the three layers. As the first step towards a complete ranking, we formulate a ranking based on agreement of tweets. Our initial evaluations show improvement of precision and trustworthiness by the proposed ranking and acceptable computation timings.
We plan to extend the method in several directions. In addition to the currently considered agreement, recency and keyword similarity, we propose to exploit web and user layers to formulate a composite ranking. In the user layer, we plan to consider the credibility of the users based on the follower relationships and past tweets. Subsequently, author credibility will be propagated to the tweets for ranking. In the web layer, we plan to consider the reputation of the pages referred by the tweets. Further, we plan to have enhanced agreement computations and extensive user evaluations.
\bibliographystyle{abbrv}
\section{Introduction}
In many communication scenarios, users have access to some side information about the messages that are requested by other users. For example, this scenario can arise in caching networks in which caches opportunistically store content that may be requested in the future. It can also arise in wireless networks in which nodes can overhear the signals intended for other nodes over the shared wireless medium~\cite{KMA2014:isit}. However, since there are many possibilities for what each cache can store at a particular time (or for what signals each node can overhear in a wireless network), tracking the exact content of side information at the users can be very challenging. Therefore, it is more suitable to require that the sender track only the ``amount'' of side information at each user, and not its exact ``content''. Consequently, a natural question is: how can a sender take advantage of knowledge of only the amount of side information to efficiently deliver messages to users?
To understand this problem, and evaluate and isolate the ultimate gain of such side information, we introduce a basic network communication problem with one sender and several users, depicted in Figure~\ref{fig:model}.
The sender communicates a distinct message, $\vec{w}_i$, to each of $K$ users (labeled $i=1,\ldots,K$) over a broadcast communication channel, while each user, $i$, has some prior side information ($\vec{\phi}_{ij}$) about other users' desired messages ($\vec{w}_j$ where $j\neq i$) that it may use to assist in decoding its own desired message. However, the sender does not know the precise side information given to each user (i.e., the sender is \emph{blind}), and it must employ a transmission strategy that only uses knowledge of the probability distributions of $\vec{\phi}_{ij}$, for all $i\neq j$.
We refer to this new formulation as the \emph{blind index coding} (BIC) problem.
Our formulation is a generalization of the classic index coding problem~\cite{BK2006,ByBJK2011,ALSWH2008:focs}, which is a canonical problem in network communication theory and, despite its simple formulation, remains a powerful tool for analyzing many network communication settings (see e.g.,~\cite{NA2015,Jafar2014,MCJ2014,MaN2014,ErSG2010}). The key difference in BIC problems lies in the sender's uncertainty regarding side information: In classic index coding, precise knowledge of side information is used by the sender to create transmission strategies that treat message bits differently depending on whether they are within or not within side information at each particular user~\cite{ABKSW2013:isit,Ong2014:isit}. However, in BIC the sender is unable to distinguish between such message bits, and thus transmission must ``blindly'' exploit knowledge of only the amount of side information. As we will see, this minor difference significantly changes the technical challenges of the problem.
The main question that we investigate in this paper is \emph{``To what extent and using what techniques can we blindly exploit such side information?''} To that end, after formally introducing the BIC problem, our first contribution is the development of
a class of \emph{hybrid coding} schemes, which XOR random linear combinations of bits from one subset of messages with uncoded bits from a disjoint subset of messages. In these hybrid codes, the sender XORs uncoded bits in order to probabilistically exploit side information already available at users. We first provide an example to show that this approach can outperform random coding and in fact sometimes achieve capacity. We then construct a general achievable scheme for three users based on this approach and determine the achievable symmetric rate.
\begin{figure}\centering
\begin{tikzpicture}[font=\footnotesize]
\node (M) at (-4.5,9.5) []{Messages};
\node (C) at (-4.5,8.3) []{Shared Link};
\node (U) at (-4.5,7.6) []{$K$ Users};
\node (SI) at (-4.5,7) []{Side Information};
\node (SI) at (-4.5,6.6) []{(e.g., Caches)};
\node (S) at (0,9) [thick, rounded corners,draw]{Sender};
\filldraw[thick,fill=blue!50!white] (0.75,9.4) rectangle (2.25,9.8);
\filldraw[thick,fill=red!50!white] (-2.25,9.4) rectangle (-0.75,9.8);
\filldraw[thick,fill=green!60!black!50!white] (-0.75,9.4) rectangle (0.75,9.8);
\draw (-1.5,9.6) node {$\vec{w}_1$};
\draw (1.5,9.6) node {$\vec{w}_3$};
\draw (0,9.6) node {$\vec{w}_2$};
\node (U1) at (-2.5,7.6) [thick, rounded corners,draw]{$\widehat{\vec{w}_1}$};
\node (U2) at (0,7.6) [thick, rounded corners,draw]{$\widehat{\vec{w}_2}$};
\node (U3) at (2.5,7.6) [thick, rounded corners,draw]{$\widehat{\vec{w}_3}$};
\draw[thick,fill=green!60!black!50!white] (-3.25,6.8) rectangle (-1.75,7.2);
\draw[thick,fill=blue!50!white] (-3,6.4) rectangle (-2,6.8);
\draw (-2.5,7) node {$\vec{\phi}_{12}$};
\draw (-2.5,6.6) node {$\vec{\phi}_{13}$};
\draw[thick,fill=red!50!white] (-0.3,6.8) rectangle (0.3,7.2);
\draw[thick,fill=blue!50!white] (-0.4,6.4) rectangle (0.4,6.8);
\draw (0,7) node {$\vec{\phi}_{21}$};
\draw (0,6.6) node {$\vec{\phi}_{23}$};
\draw[thick,fill=red!50!white] (2.2,6.8) rectangle (2.8,7.2);
\draw[thick,fill=green!60!black!50!white] (2.2,6.4) rectangle (2.8,6.8);
\draw (2.5,7) node {$\vec{\phi}_{31}$};
\draw (2.5,6.6) node {$\vec{\phi}_{32}$};
\draw[very thick,-latex] (S) -- (U2);
\draw[very thick,-latex] (0,8.4) -- (U1);
\draw[very thick,-latex] (0,8.4) -- (U3);
\end{tikzpicture}\vspace{-2ex}
\caption{A $K$-user blind index coding problem (e.g., $K=3$) depicted as a sender-user network with user caches. User~$i$, for $i=1,2,3$, desires message $\vec{w}_i$ and may use side information about other messages to facilitate decoding; $\vec{\phi}_{ij}$ denotes the side information that User~$i$ has about Message $\vec{w}_j$. The sender only has knowledge of the distribution of $\vec{\phi}_{ij}$, and not its precise realization. Notice the \emph{amount} of side information available may vary across users and messages.\vspace{-5ex}}\label{fig:model}\end{figure}
In order to evaluate the efficacy of our scheme, as well as to gain further intuition beyond three users, our second contribution is the development of a new outer bound on the capacity region. An essential aspect of our outer bound is its utilization of a strong data processing inequality~\cite{AGKN2014:isit} which captures the inability of the sender to distinguish between bits of a message known or unknown to a given user prior to transmission. We demonstrate that our converse is tight in two special cases: namely, the two-user and symmetric $K$-user BIC (where all users have the same amount of knowledge about undesired messages). In both cases a simple achievable scheme based on random coding suffices to achieve the entire capacity region. As we move beyond these special cases to the general BIC setting, we confirm that, at least for some problem settings, our three-user hybrid coding scheme can meet the symmetric capacity upper bound. Finally, we numerically evaluate our new hybrid coding scheme and outer bounds relative to existing achievable schemes.
In our final contribution, we further consider the BIC problem in a wireless setting, specifically studying how lossy sender-to-user links can affect schemes to blindly exploit side information and the resulting achievable rates. Interestingly, we demonstrate that in addition to hybrid coding (where XORing uncoded bits of a subset of messages with random combinations of the others played a key role), quite surprisingly, XORing the same uncoded bits more than once (i.e., \emph{repetition of uncoded bits}) can increase the achievable rate.
Equipped with this observation, we then proceed to construct a coding scheme that leverages both hybrid codes and repetition of uncoded message bits in order to establish an achievable rate region, and we demonstrate numerically that such a scheme offers a strict improvement in achievable rate over conventional approaches.
To summarize, the main contributions are as follows:
\begin{enumerate}
\item We introduce the \emph{Blind Index Coding} problem, which generalizes classic Index Coding by considering uncertainty (blindness) at the sender about side information given to users.
\item We propose a class of \emph{hybrid coding} schemes, which XOR random linear combinations from one subset of messages with uncoded bits of another subset.
\item We derive a novel outer bound on the capacity region of BIC problems which leverages a strong data processing inequality to account for the blindness of the sender.
\item We further generalize the problem to better model wireless settings by studying how lossy sender-to-user links affect the efficacy of the hybrid coding schemes, and we find that repetition coding can enhance the performance of hybrid codes.
\end{enumerate}
The remainder of the paper is organized in the following way.
In Section~\ref{sec:problem} we formally state the BIC problem first for error-free broadcast and then for broadcast over lossy channels.
In Section~\ref{sec:example} we motivate both the ideas behind hybrid coding and our outer bound using a simple example, for which the inner and outer bounds meet.
In Section~\ref{sec:achieve}, we define a hybrid coding scheme and study the achievable symmetric rate for the three-user BIC, in Section~\ref{sec:converse}, we state and prove the general outer bound for BIC problems, and in Section~\ref{sec:num} we numerically compare achieved rates to the derived outer bounds.
In Section~\ref{sec:BICW} we consider blind index coding when the sender-to-user links occur over wireless channels.
Concluding remarks and open questions are presented in Section~\ref{sec:concl}.
\section{The Blind Index Coding Problem} \label{sec:problem}
In this section, we formally define the Blind Index Coding problem by stating the network and side information models, and formalizing the notion of capacity.
\subsubsection*{Network model}
In a BIC problem, as shown in Figure~\ref{fig:model}, $K$ users each request a message from a sender; i.e., User~$i$, for $i=1,\ldots,K$, desires the $m_i$-bit message $\vec{w}_i$, which is drawn uniformly from a space $\{0,1\}^{m_i}$. Each user, $i$, has access to side information, $\vec{\phi}_{ij}$, (whose form is described later) about each message $\vec{w}_j$ except the one it desires (i.e., for all $j\neq i$). The sender aims to communicate all messages to the respective users via a common error-free channel. The goal of the problem is to design a scheme that maps messages to a channel input vector, $\vec{x}$, of minimum length, such that each user can decode its desired message.
\subsubsection*{Side information model}
In a blind index coding problem, each side information signal, $\vec{\phi}_{ij}$, is a random fraction of the bits that make up the message, $\vec{w}_j$. We assume that the sender is ``blind'' in the sense that it is only aware of the \emph{average number of bits} in each side information signal.
More specifically, we can model the side information in the following way. Let $\vec{g}_{ij}$ be a length-$m_j$ binary vector drawn i.i.d from a Bernoulli$(1-\mu_{ij})$ distribution. Side information $\vec{\phi}_{ij} = (\phi_{ij}[1],\phi_{ij}[2],\ldots,\phi_{ij}[m_{j}])$ is such that, for $\ell=1,\ldots,m_j$,
\begin{align}
\phi_{ij}[\ell] = g_{ij}[\ell]{w}_j[\ell].
\end{align}
User~$i$ knows $\vec{g}_{ij}$ for all $j\neq i$, however the sender is only aware of parameters, $\{\mu_{ij}\}$, which govern the probabilistic behavior of the side information. Note that the side information model is equivalent to either 1) randomly sampling bits of a message, or 2) passing a message through a side information channel which is an erasure channel.
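For concreteness, the side information model can be sampled as in the following minimal sketch (illustrative only; the message length, parameter value, and random seed are arbitrary choices):
\begin{verbatim}
# Sketch of the side information model: g_ij is i.i.d. Bernoulli(1 - mu_ij)
# and phi_ij[l] = g_ij[l] * w_j[l], i.e., each bit of w_j is revealed to
# user i independently with probability 1 - mu_ij.
import numpy as np

rng = np.random.default_rng(1)

def side_information(w_j, mu_ij):
    g_ij = (rng.random(w_j.size) < 1.0 - mu_ij).astype(int)
    return g_ij * w_j, g_ij        # (phi_ij, g_ij); user i also knows g_ij

w2 = rng.integers(0, 2, size=12)   # a length-12 message, for illustration
phi_12, g_12 = side_information(w2, mu_ij=2/3)
print(w2, phi_12, g_12, sep="\n")  # on average a third of w2 is revealed
\end{verbatim}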
\begin{remark}
The key difference between the BIC problem formulation and a classic index coding problem of~\cite{BK2006,ALSWH2008:focs} lies in the uncertainty in message bits given as side information. Notably, if we consider the scenario where $\mu_{ij}\in\{0,1\}$ for all $i,j$, then side information availability is deterministic and known to the sender, and our formulation is identical to~\cite{ALSWH2008:focs}. Thus, BIC generalizes classic index coding.
\end{remark}
\begin{remark}
Index coding problems with transformed and random side information were considered in~\cite{LDH2015} and~\cite{HL2012:isit} respectively, but in both cases it was assumed that the side information is known to the sender. Another related problem is described in~\cite{BF2013:isit} where pliable users are considered: Users express no specificity in messages demanded and thus which messages to send are uncertain. Interestingly, in the cases of pliable index coding with known solutions, either canonical random coding or uncoded transmission strategies were sufficient.
\end{remark}
\subsubsection*{Capacity Region}
We now consider a BIC problem with $K$ users and side information parameters $\{\mu_{ij}\}$ as defined above. For this problem, a $(r_1,r_2,\ldots,r_K)$ scheme with block length $n$ consists of an encoding function and $K$ decoding functions.
The encoding function,
\mbox{$f_{\mathsf{enc}}^{(n)}:\prod_{j=1}^K\{0,1\}^{m_j} \rightarrow \{0,1\}^n$},
uses the knowledge of $\{\mu_{ij}\}$ for all $j\neq i$ to map each of $K$ messages (with message $\vec{w}_j$ consisting of $m_j$ bits such that $\lim_{n\rightarrow\infty}\frac{m_{j}}{n} = r_j$) onto a length-$n$ binary vector, $\vec{x}$, that is broadcast to all $K$ users using $n$ channel uses. We reemphasize that the encoding function relies only on the side information parameters, $\{\mu_{ij}\}$, and not the side information signals, $\{\vec{\phi}_{ij}\}$.
The decoding function applied by User~$i$,
\mbox{$f_{\mathsf{dec},i}^{(n)}: \{0,1\}^{n}\times\prod_{j\neq i}\left(\{0,1\}^{m_j}\times\{0,1\}^{m_j}\right) \rightarrow \{0,1\}^{m_i}$},
maps the broadcast signal, $\vec{x}$, as well as $K-1$ side information vector pairs, $(\vec{\phi}_{ij},\vec{g}_{ij})$ for all $j\neq i$, to an estimate of its desired message, $\widehat{\vec{w}}_i$.
We say that a rate tuple $(r_1,\ldots,r_K)$ is \emph{achievable} if there exists a sequence of $(r_1,\ldots,r_K)$ coding schemes with increasing block length, $n$, such that for every $i\in\{1,\ldots,K\}$
\begin{align}
\lim_{n\rightarrow\infty} \Pr\left[\widehat{\vec{w}}_i\neq \vec{w}_i\right] = 0.
\end{align}
The capacity region is defined as the closure of the set of all rate tuples $(r_1,\ldots,r_K)$ that are achievable.
The goal of this paper is to study the capacity region of the BIC problem. As we show later in Proposition~\ref{prop:2u}, the capacity region of a 2-user BIC problem is easy to characterize. Thus, in order to gain a better intuition on BIC problems beyond 2 users, we focus in particular on the 3-user BIC problem.
\section{Motivating Example} \label{sec:example}
In this section we motivate both the proposed coding schemes and outer bound using a simple, concrete example.
Consider a BIC with three users (i.e., $K=3$) and where Users~2 and 3 have \emph{full side information} about other users' messages, while User~1 only knows a third of each of $\vec{w}_2$ and $\vec{w}_3$ (i.e., $\mu_{12}=\mu_{13}=\frac{2}{3}$ and $\mu_{21}=\mu_{23}=\mu_{31}=\mu_{32}=0$). We focus on this specific BIC problem because in this scenario the sender is blind \emph{only about side information at User~1} and therefore we can focus on the impact of blindness regarding just one user.
For this particular BIC problem, we will determine the symmetric capacity (i.e., the maximum rate $r$ such that $r_1=r_2=r_3=r$ is achievable) by assuming the lengths of all messages are the same (i.e., $m_1=m_2=m_3=m$ where $m$ is large), proposing a scheme, and introducing a method to bound the capacity region.
For the sake of comparison, we will first establish a baseline achievable symmetric rate by considering random coding, an often-used approach to coding in the presence of uncertain side information. For example, one natural scheme would be to send random linear combinations (RLC) of all message bits (i.e., parity bits to supplement side information) over the shared channel, until each user has a sufficient number of linearly independent equations (including side information) to decode all of the messages.
For this example, by sending $m(1+\mu_{12}+\mu_{13}) +o(m) = \frac{7m}{3}+o(m)$ random parities, each user has at least $3m+o(m)$ equations for $3m$ unknowns, meaning that with high probability each user can linearly decode all three messages: conventional random coding achieves $r_{sym}=\frac{3}{7}$.\footnote{In the subsequent explanation, we omit the $o(m)$ to simplify the exposition.}
Notice first that we can do better by \emph{grouping} messages 2 and 3 and sending each bit of $\vec{w}_2$ XORed with a distinct bit of $\vec{w}_3$ (this requires exactly $m$ transmissions). From these transmissions, Users~2 and 3 can use side information to remove the other message and decode their desired message. Then, by sending $\vec{w}_1$ orthogonally in time (requiring another $m$ transmissions), User~1 receives its desired message. In other words, by treating subsets of messages differently, we achieve $r_{sym}=\frac{1}{2}$.
We now demonstrate how to further improve the transmission strategy by constructing a ``hybrid coding scheme'' using a combination of uncoded bits and randomly coded parities to go beyond the rate of $\frac{1}{2}$. In these hybrid schemes, during each phase of transmission a subset of messages are randomly coded, and then these are XORed with uncoded bits from another disjoint subset of the messages.
For this example, we only require two such phases.
In the first phase, each channel input is generated by XORing a random combination of $\vec{w}_1$ bits, a single uncoded $\vec{w}_2$ bit, and a single uncoded $\vec{w}_3$ bit. Each uncoded bit from both $\vec{w}_2$ and $\vec{w}_3$ is used only once to generate an input, and thus the first phase consists of exactly $m$ channel inputs generated in this manner. Formally, for each $\ell=1,\ldots,m$, the sender broadcasts ${\vec{c}[\ell]}^\top\vec{w}_1 \oplus w_2[\ell] \oplus w_3[\ell]$, where $\vec{c}[\ell]$ is a length-$m$ i.i.d. random binary vector.
In the second phase, we send $\frac{8m}{9}$ RLCs of only $\vec{w}_1$ bits.
Notice that with this scheme, if each user decodes its desired message with error probability vanishing as $m$ grows large, we achieve rate of $r_{sym}=\frac{9}{17}$ which is higher than the $\frac{3}{7}$ achieved through conventional random coding and $\frac{1}{2}$ achieved through grouped random coding. We now explain why with this scheme such a rate is achievable by explaining how each user decodes its desired message:
\begin{description}[]
\item[User~1:]\ \
Notice that during the first phase, for each channel input $\ell\in\{1,\ldots,m\}$, there is a probability of $(1-\mu_{12})(1-\mu_{13}) = \frac{1}{9}$ that User 1 knew both $w_2[\ell]$ and $w_3[\ell]$. In such an event, User~1 can cancel $w_2[\ell]\oplus w_3[\ell]$ and received a ``clean'' RLC of $\vec{w}_1$ bits. Therefore, during the first phase User~1 receives (approximately) $\frac{m}{9}$ such RLCs. In the second phase we supplemented this with an additional $\frac{8m}{9}$ RLCs of only $\vec{w}_1$. When combined, at the end of transmission User~1 will be able to identify in total $m$ linearly independent equations describing the $m$ desired bits of $\vec{w}_1$.
\item[User~2:]\ \
User~2 already knows all of $\vec{w}_1$ and $\vec{w}_3$ and therefore can remove their contributions from each channel input of the first phase. Thus, after canceling the undesired message contributions, User~2 receives exactly the $m$ bits of $\vec{w}_2$.
\item[User~3:]\ \
User~3 already knows all of $\vec{w}_1$ and $\vec{w}_2$ and therefore can remove their contributions from each channel input of the first phase. Thus, after canceling the undesired message contributions, User~3 receives exactly the $m$ bits of $\vec{w}_3$.
\end{description}
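The decoding argument above can also be checked numerically. The following minimal sketch (ours, not from the paper; the message length $m=90$ and the 20 extra Phase-2 parities stand in for the $o(m)$ slack) simulates both phases and verifies that User~1's collected clean equations have full rank over GF(2):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(mat):
    # Gaussian elimination over GF(2) on a copy of the 0/1 matrix.
    m = np.array(mat, dtype=np.int64) % 2
    rank = 0
    for c in range(m.shape[1]):
        piv = np.nonzero(m[rank:, c])[0]
        if piv.size == 0:
            continue
        m[[rank, rank + piv[0]]] = m[[rank + piv[0], rank]]
        for r in np.nonzero(m[:, c])[0]:
            if r != rank:
                m[r] ^= m[rank]
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

m, extra = 90, 20            # message length (divisible by 9), o(m) slack
mu = 2.0 / 3.0               # mu_12 = mu_13 = 2/3
g12 = rng.random(m) > mu     # True where User 1 knows w2[l]
g13 = rng.random(m) > mu     # True where User 1 knows w3[l]

# Phase 1 (m uses): RLC of w1 XOR one uncoded w2 bit XOR one uncoded w3 bit.
C1 = rng.integers(0, 2, size=(m, m))   # coefficient vectors c[l] acting on w1
clean = g12 & g13                      # User 1 can cancel w2[l] + w3[l]

# Phase 2 (8m/9 + slack uses): RLCs of w1 only.
C2 = rng.integers(0, 2, size=(8 * m // 9 + extra, m))

A = np.vstack([C1[clean], C2])         # User 1's equations about w1
print("rank", gf2_rank(A), "of", m)    # full rank -> User 1 decodes w1
\end{verbatim}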
The key intuition on why we XOR uncoded bits of some messages with RLCs of others is as follows.
Assume our objective is to create an input signal such that User~1 can use side information to cancel ``interference'' from undesired messages,\footnote{We focus on User~1's ability to cancel contributions of other messages, since by assumption User~2 and 3 have full knowledge of undesired messages and can cancel any such interference perfectly.} $\vec{w}_2$ and $\vec{w}_3$. As $m$ grows large the probability that User~1 can cancel a random combination of $\vec{w}_2$ or $\vec{w}_3$ vanishes, and thus RLCs of $\vec{w}_2$ and $\vec{w}_3$ almost surely add to interference that cannot be canceled. However, by XORing uncoded $\vec{w}_2$ and $\vec{w}_3$ bits, the probability that User~1 can exploit side information to cancel interference remains constant regardless of $m$.
It is worth noting that while Users~2 and 3 eventually know all three messages (through side information and decoding their desired messages), User~1 ends up knowing only parts of messages $\vec{w}_2$ and $\vec{w}_3$.
\begin{remark}
To obtain further intuition, we can also interpret the proposed scheme as a form of interference alignment.
Let $\vec{w}_i^+$ and $\vec{w}_i^-$ for $i=2,3$ denote subvectors of $\vec{w}_i$ known and unknown respectively to User~1 via side information.
Note that $\vec{w}_i^-$ may be thought of as the part of $\vec{w}_i$ that interferes with User~1 getting $\vec{w}_1$, and that it cannot be canceled. Additionally, notice that the lengths of $\vec{w}_i^+$ and $\vec{w}_i^-$ are approximately $\frac{m}{3}$ and $\frac{2m}{3}$, respectively.
First, consider what strategy the sender could use if it was not blind and could identify these subvectors. It could first send RLCs of $\vec{w}_2^-$ and $\vec{w}_3^-$ bits knowing that all such content cannot be cancelled by User~1. Then it could send RLCs of $\vec{w}_1$, $\vec{w}_2^+$, and $\vec{w}_3^+$ bits, knowing that User~1 can cancel the $\vec{w}_2$ and $\vec{w}_3$ contribution for \emph{every} such input. Specifically, the non-blind sender \emph{aligns} the bits User~1 can cancel ($\vec{w}_2^+$ and $\vec{w}_3^+$), as well as the bits it cannot cancel ($\vec{w}_2^-$ and $\vec{w}_3^-$). Via an equation counting argument, it is easy to verify that such a scheme achieves a higher rate of $\frac{3}{5}$.
When the sender is blind, it is unable to distinguish $\vec{w}_2^+$ from $\vec{w}_2^-$ and $\vec{w}_3^+$ from $\vec{w}_3^-$. However, we would still like to efficiently send both $\vec{w}_2$ and $\vec{w}_3$ simultaneously, and therefore our scheme achieves such alignment \emph{probabilistically} by using uncoded bits from $\vec{w}_2$ ($\vec{w}_3$) to preserve the separation between $\vec{w}_2^-$ and $\vec{w}_2^+$ ($\vec{w}_3^-$ and $\vec{w}_3^+$). Figure~\ref{fig:XORexample2} highlights the two desired interference alignment cases, as well as the transmissions where alignment fails due to the sender being blind. \label{rem:alignment}
\end{remark}
\begin{figure}\centering
\begin{tikzpicture}[yscale=0.66]
\draw [thick,blue] (0,0) rectangle (1.778,0.5);
\draw [thick,blue] (1.778,0) rectangle (2.67,0.5);
\draw [thick,blue] (2.67,0) rectangle (3.556,0.5);
\draw [thick,blue] (3.556,0) rectangle (4,0.5);
\draw [thick,green!50!black] (0,1) rectangle (2.67,1.5);
\draw [thick,green!50!black] (2.67,1) rectangle (4,1.5);
\draw [thick,red] (0,2) rectangle (4,2.5);
\draw (0,0.25) node[left] {$\vec{w}_3$};
\draw (0,1.25) node[left] {$\vec{w}_2$};
\draw (0,2.25) node[left] {$\vec{w}_1$};
\draw (2,0.75) node {$\oplus$};
\draw (2,1.75) node {$\oplus$};
\draw[latex-latex] (0.889,-0.05) |- (0.889,-0.5) -- node[below,pos=0.2,inner sep=1pt] {\scriptsize unc. $\vec{w}_3^-$} (3.111,-0.5) -| (3.111,-0.05);
\draw[latex-latex] (2.222,-0.05) |- (2.222,-0.7) -- node[below,pos=0.2,inner sep=1pt] {\scriptsize unc. $\vec{w}_3^+$} (3.778,-0.7) -| (3.778,-0.05);
\draw (1.33,1.25) node {\scriptsize unc. $\vec{w}_2^-$};
\draw (3.33,1.25) node {\scriptsize unc. $\vec{w}_2^+$};
\filldraw [draw=red,fill=red!50,thick] (0,2) rectangle (7.56,2.5);
\draw (3.81,2.25) node[white] {\scriptsize random combinations};
\draw[latex-latex] (0,2.8) --node[fill=white,inner sep=0.5pt]{$m$} (4,2.8);
\draw[latex-latex] (4,2.8) --node[fill=white,inner sep=0.5pt]{\tiny $\frac{8m}{9}$} (7.56,2.8);
\draw [ thick,dotted] (3.556,-1.5) rectangle (4,3.25);
\draw [ thick,dotted] (0,-1.5) rectangle (1.778,3.25);
\draw (3.778,3.25) node[above]{\scriptsize \parbox[c]{10em}{\centering Useful to all users: Alignment of ``known'' interference}};
\draw (0.889,3.25) node[above]{\scriptsize \parbox[c]{10em}{\centering Alignment of ``unknown'' interference}};
\end{tikzpicture}\vspace{-2ex}
\caption{Illustration of the symmetric-capacity-achieving scheme of the example. The horizontal axis provides scale representation of the number of channel uses dedicated to each phase. Transmission type is illustrated using outlined (uncoded) or shaded (randomly coded) blocks. Notice that, because the sender is blind, parts of messages $\vec{w}_2$ and $\vec{w}_3$ that are known to User~1 cannot be explicitly \emph{aligned} as discussed in Remark~\ref{rem:alignment} and thus some parts of $\vec{w}_3^-$ is XORed with $\vec{w}_2^+$ as well as some of $\vec{w}_2^-$ is XORed with $\vec{w}_3^+$. These are displayed as contiguous blocks in the figure for clarity, but in reality would be interleaved throughout the first $m$ channel uses. \vspace{-5ex}}\label{fig:XORexample2}
\end{figure}
\begin{remark}
Our scheme's probabilistic alignment is obviously less effective than the explicit alignment that is possible when the sender knows the side information. This loss of effectiveness is particularly well captured by the amount of additional interference incurred by our scheme, and thus any attempt at a converse must capture the amount of interference incurred \emph{as a result of blindness}.
Notice that in both schemes, because we must send all of $\vec{w}_2$ to User~2, we incur \emph{at least} $\frac{2m}{3}$ bits of interference from $\vec{w}_2^-$. In the case of the non-blind sender, one can show that this is all the interference that is incurred at User~1, because the sender can fully align the interference caused from the messages $\vec{w}_2^-$ and $\vec{w}_3^-$ at User~1.
On the other hand, because the sender is blind, our scheme incurs an additional $\frac{1}{3}\times\frac{2}{3}\times m$ bits of interference. In Section~\ref{sec:converse}, we present a general outer bound that shows that this additional interference is indeed unavoidable and hence the scheme presented above is indeed the best possible in this example. More specifically, we will prove a generalization of the following inequality:
\begin{align}
H(\vec{\mathbf{x}}|\vec{w}_1,\vec{w}_2^+,\vec{w}_3^+) \geq{}& \frac{2}{3}H(\vec{\mathbf{x}}|\vec{w}_1,\vec{w}_3^+) + \frac{1}{3}H(\vec{\mathbf{x}} |\vec{w}_1,\vec{w}_2,\vec{w}_3^+).
\label{eq:motexconv}
\end{align}
The above inequality lower bounds the interference at User~1 (i.e., the unknown parts of $\vec{w}_2$ and $\vec{w}_3$) with a convex combination of terms that either represent providing none of $\vec{w}_2$ as side information ($H(\vec{\mathbf{x}}|\vec{w}_1,\vec{w}_3^+)$) or
all of $\vec{w}_2$ as side information ($H(\vec{\mathbf{x}} |\vec{w}_1,\vec{w}_2,\vec{w}_3^+)$). The coefficient weights that describe the combination are a function of the side information parameter $\mu = \frac{2}{3}$. A more general form of this inequality is the key lemma used to construct the outer bound.\label{rem:exconv}
\end{remark}
Before concluding the section, we point out that this inequality is valid \emph{only when the sender is blind}.
Indeed, if we consider the non-blind sender like in Remark~\ref{rem:alignment} the correct inequality would be
\begin{align}
H(\vec{\mathbf{x}}|\vec{w}_1,\vec{w}_2^+,\vec{w}_3^+) \geq{}& \frac{2}{3}H(\vec{\mathbf{x}}|\vec{w}_1,\vec{w}_3^+),\label{eq:motexconv2}
\end{align}
which is clearly looser than (\ref{eq:motexconv}). Note that the additional term that appears in (\ref{eq:motexconv}) but does not appear in (\ref{eq:motexconv2}) captures additional interference due to blindness of the server.
\section{Achievability}\label{sec:achieve}
In this section, we study achievable rates in the BIC problem. As alluded to in the motivating example, one possible approach to dealing with blind side information at users is random coding. For example, in a standard random linear code applied to binary message sequences, each channel input is created by XORing random linear combinations of \emph{all} bits from \emph{all} messages. In the rest of the paper, we refer to this approach as conventional random coding.
Using conventional random coding requires that all users decode all messages, thus rate tuples are achievable if and only if they satisfy, for every $i\in\{1,\ldots,K\}$,
\begin{align}
r_i + \sum_{j\neq i} \mu_{ij}r_j \leq 1.\label{eq:randomnetworkrate}
\end{align}
In some cases, this suffices to achieve the full capacity region. For example, we will see that the rate region achievable by conventional random coding in the following two scenarios exactly matches the outer bounds derived in the next section:\footnote{Theorem~\ref{thm:KuOB} states the outer bound while the capacity regions for the two scenarios are formally stated as Propositions~\ref{prop:2u} and~\ref{prop:KuSym}, respectively.}
\begin{itemize}
\item 2-user BICs, for any value of $\mu_{12}$ and $\mu_{21}$.
\item Symmetric $K$-user BICs, where $\mu_{ij}=\mu$ for all $i\neq j$.
\end{itemize}
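For reference, the largest symmetric rate permitted by (\ref{eq:randomnetworkrate}) is $\min_i \big(1+\sum_{j\neq i}\mu_{ij}\big)^{-1}$, which the following snippet (illustrative) evaluates; for the motivating example of Section~\ref{sec:example} it recovers $\frac{3}{7}$:
\begin{verbatim}
def random_coding_sym_rate(mu):
    # Feasibility of conventional random coding: for every user i,
    # r_i + sum_{j != i} mu_ij * r_j <= 1 (each user decodes all messages).
    K = len(mu)
    return min(1.0 / (1.0 + sum(mu[i][j] for j in range(K) if j != i))
               for i in range(K))

# Motivating example: mu_12 = mu_13 = 2/3, all others 0  ->  3/7.
print(random_coding_sym_rate([[0, 2/3, 2/3], [0, 0, 0], [0, 0, 0]]))
# Symmetric K-user BIC (mu_ij = mu for i != j)  ->  1 / (1 + (K-1)*mu).
print(random_coding_sym_rate([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]]))
\end{verbatim}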
However, as demonstrated in the previous example, conventional random coding is not optimal in general, and in the rest of this section we propose a new hybrid encoding strategy that XORs random combinations of all bits from \emph{some} messages with \emph{uncoded bits} from others. This hybrid between random coding and uncoded transmission is the key mechanism to blindly exploit side information. For simplicity, we focus on symmetric rates achievable in an arbitrary 3-user BIC. We will first state the achievable symmetric rate as a theorem, then describe the encoding and decoding strategies before finally proving that the symmetric rate claimed in the theorem is indeed achievable.
\subsection{3-user BIC Hybrid Coding}
We now state the 3-user symmetric rate achievable using hybrid coding. We then define a hybrid encoding scheme for 3-user BIC problems, using key points from the motivating example.
\begin{theorem}\label{thm:3uACH}
Consider a 3-user BIC problem, defined by parameters $\{\mu_{ij}\}$, where WLOG\footnote{For any three users, such a condition must hold for at least one permutation of indices.} user indices are such that
\begin{align}
\mu_{32}\leq{}&\mu_{23}\leq \max\{\mu_{1i},\mu_{i1}\},\label{eq:choosing}
\end{align}
for either $i\in\{2,3\}$. Any $r_{sym}$ satisfying the following is achievable:
\begin{align}
r_{sym}^{}\leq{}&\min\left\{\frac{1}{1+\mu_{21}+\mu_{23}},\frac{1}{1+\mu_{31}+\mu_{32}}\right\},\label{eq:3uACH1}\\
r_{sym}^{}\leq{}&\max\bigg\{\frac{1}{1+\mu_{23}+\mu_{12} +\mu_{13}(1-\mu_{23}+\mu_{32})(1-\mu_{12})},
\frac{1}{1+\mu_{12} +\mu_{13}}\bigg\}.\label{eq:3uACH2}
\end{align}
\end{theorem}
\begin{remark}
Consider $r_{sym}$ satisfying (\ref{eq:3uACH1}) and (\ref{eq:3uACH2}). In the right hand side of (\ref{eq:3uACH2}), if the second term within the maximization is larger, then (\ref{eq:3uACH1}) and (\ref{eq:3uACH2}) simplify to
$r_{sym}\leq \min\bigg\{\frac{1}{1+\mu_{21}+\mu_{23}},\frac{1}{1+\mu_{31}+\mu_{32}},
\frac{1}{1+\mu_{12}+\mu_{13}}\bigg\}$.
In this case, from (\ref{eq:randomnetworkrate}) it is clear that conventional random coding suffices to achieve the desired rate.
Hence, our hybrid coding scheme increases the symmetric rate whenever the first term in the max of (\ref{eq:3uACH2}) is larger. Additionally, since conventional random coding suffices when the second term is larger, to prove Theorem~\ref{thm:3uACH}, we need only to describe a scheme and prove achievability for $r_{sym}$ satisfying
\begin{align}
r_{sym}\leq& \min\bigg\{\frac{1}{1+\mu_{21}+\mu_{23}},\frac{1}{1+\mu_{31}+\mu_{32}},\frac{1}{1+\mu_{23}+\mu_{12} +\mu_{13}(1-\mu_{23}+\mu_{32})(1-\mu_{12})}\bigg\}.\label{eq:3uACH2x}
\end{align}
\end{remark}
We now define our hybrid coding scheme where, for any $r_{sym}$ satisfying (\ref{eq:3uACH2x}), the sender will communicate $m = nr_{sym}-\delta_n$ bits (where $\delta_n$ is chosen such that $\delta_n=o(n)$) to each user in $n$ channel uses, such that the probability of error vanishes as $n$ goes to infinity.\footnote{The $o(n)$ term 1) accounts for the fact that $m$ must be an integer, and 2) as we shall see, ensures that the decoding error will vanish as $n$ grows large.}
\subsubsection*{Encoding}
The hybrid coding scheme is characterized by three parameters, $N_1$, $N_2$, and $N_3$.
For each $i\in\{1,2,3\}$, we generate $N_i$ random linear combinations (RLC) of the bits only in $\vec{w}_i$, denoted by vector $\vec{J}_i$.
The precise values of $N_1$, $N_2$, and $N_3$ are specified later; however, we point out, as depicted in Figure~\ref{fig:3coding}, that $N_1-m\geq N_2\geq N_3$.
As shown in the figure, the sender combines RLCs and uncoded bits of messages in five phases. During Phase~1, each input is the XOR of one bit from each of $\vec{J}_1$, $\vec{J}_2$, and $\vec{J}_3$, where we take bits from each vector sequentially.
Phase~1 ends and Phase~2 begins when the bits in $\vec{J}_3$ are exhausted (i.e., after $N_3$ channel uses).
Similarly, the number of channel uses allocated to each phase of transmission are dictated by when we exhaust the bits of a certain type:
Phase~2 inputs consist of an XOR of $\vec{J}_1$, $\vec{J}_2$, and $\vec{w}_3$ bits, and ends when we have no more bits from $\vec{J}_2$;
Phase~3 inputs consist of an XOR of $\vec{J}_1$, $\vec{w}_2$, and $\vec{w}_3$ bits, and ends when we have no more bits from $\vec{w}_3$;
Phase~4 inputs consist of an XOR of $\vec{J}_1$ and $\vec{w}_2$ bits, and ends when we have no more bits from $\vec{w}_2$;
and Phase~5 inputs consist of only a $\vec{J}_1$ bit.
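The five phases can be realized compactly, as in the following sketch of the encoder (the toy message and code lengths are illustrative; the stream lengths must satisfy $N_1-m\geq N_2\geq N_3$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def rlc(w, N):
    # N random linear combinations (over GF(2)) of the bits of w.
    return rng.integers(0, 2, size=(N, w.size)) @ w % 2

def hybrid_encode(w1, w2, w3, N1, N2, N3):
    m = w1.size
    assert N1 - m >= N2 >= N3          # as required by the phase structure
    s2 = np.concatenate([rlc(w2, N2), w2])  # J_2 bits, then uncoded w_2 bits
    s3 = np.concatenate([rlc(w3, N3), w3])  # J_3 bits, then uncoded w_3 bits
    x = rlc(w1, N1)                    # J_1 spans all N1 channel uses
    x[:s2.size] ^= s2                  # Phases 1-4 carry w_2 content
    x[:s3.size] ^= s3                  # Phases 1-3 carry w_3 content
    return x                           # the length-n channel input

w1, w2, w3 = (rng.integers(0, 2, size=10) for _ in range(3))
print(hybrid_encode(w1, w2, w3, N1=24, N2=4, N3=2))
\end{verbatim}
Sequentially XORing the two streams $s_2$ and $s_3$ onto $\vec{J}_1$ reproduces exactly the phase boundaries of Figure~\ref{fig:3coding}: the streams run out at channel uses $N_3$, $N_2$, $N_3+m$, and $N_2+m$, after which only $\vec{J}_1$ remains.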
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1.7,yscale=0.66,font=\footnotesize]
\draw [thick,blue] (0.4,0) rectangle (2.4,0.5);
\draw (1.3,0.25) node{\footnotesize $\vec{w}_3$};
\filldraw [draw=blue,fill=blue!50,thick] (0,0) rectangle (0.4,0.5);
\draw[] (0.2,0.25) node[]{\footnotesize $\vec{J}_3$};
\draw [thick,green!50!black] (0.8,1) rectangle (2.8,1.5);
\draw (1.7,1.25) node{\footnotesize $\vec{w}_2$};
\filldraw [draw=green!50!black,fill=green!60!black!50,thick] (0,1) rectangle (0.8,1.5);
\draw (0.4,1.25) node[]{\footnotesize $\vec{J}_2$};
\filldraw [draw=red,fill=red!50,thick] (0,2) rectangle (4.624,2.5);
\draw (2.312,2.25) node[]{\footnotesize $\vec{J}_1$};
\draw[latex-latex,thin] (0,2.75) --node [fill=white,inner sep=1pt]{$N_1$} (4.624,2.75);
\draw[latex-latex,thin] (0,1.75) --node [fill=white,inner sep=1pt]{$N_2$} (0.8,1.75);
\draw[latex-latex,thin] (0,0.75) --node [fill=white,inner sep=1pt]{$N_3$} (0.4,0.75);
\draw[latex-latex,thin] (0.8,-0.2) --node [fill=white,inner sep=1pt]{$m$} (2.8,-0.2);
\draw[dotted,thin] (0,3.2) -- (0,-0.2);
\draw (0.05,3.2) node[left] {Phase};
\draw (0.2,3.2) node {1};
\draw[dotted,thin] (0.4,3.2) -- (0.4,-0.2);
\draw (0.6,3.2) node {2};
\draw[dotted,thin] (0.8,3.2) -- (0.8,-0.2);
\draw (1.25,1.75) node {$\oplus$};
\draw (1.25,0.75) node {$\oplus$};
\draw (1.6,3.2) node {3};
\draw[dotted,thin] (2.4,3.2) -- (2.4,-0.2);
\draw (2.6,3.2) node {4};
\draw[dotted,thin] (2.8,3.2) -- (2.8,-0.2);
\draw (3.624,3.2) node {5};
\draw[dotted,thin] (4.624,3.2) -- (4.624,-0.2);
\end{tikzpicture}\vspace{-2.5ex}
\caption{Hybrid coding scheme for 3-user BIC, where $\mu_{12}=\mu_{13} = \frac{4}{5}$, $\mu_{21}=\mu_{23}=\frac{2}{5}$, $\mu_{31}=\mu_{32}=\frac{1}{5}$.
Outlined boxes represent uncoded bits, shaded boxes represent RLCs of a single message.
\vspace{-4.5ex}}\label{fig:3coding}
\end{figure}
\subsubsection*{Decoding}
We now describe the decoding scheme of each user. Users~2 and 3 each decode all 3 messages. As in conventional random coding, this requires that Users~2 and 3 each receive a sufficient number of independent linear combinations of message bits, either via side information or the shared channel.
A key point in our coding scheme lies in how User~1 exploits the hybrid coding structure to decode $\vec{w}_1$.
As in the example, User~1 uses side information to cancel out the combinations of known $\vec{w}_2$ and $\vec{w}_3$ bits from symbols received in Phases~3 and 4. It uses these ``clean'' RLC of only $\vec{w}_1$ bits along with those RLC received during Phase~5 to linearly decode only $\vec{w}_1$.
For the scheme to achieve $r_{sym}$ (i.e., in order for decoding error probability to vanish as $n$ grows large), we claim that choosing $N_1$, $N_2$, and $N_3$ as
\begin{align}
N_1 = n\quad\text{ and }\quad
N_2 = nr_{sym}\mu_{23}\quad\text{ and }\quad
N_3 = nr_{sym}\mu_{32}\label{eq:codelength},
\end{align}
results in a probability of decoding error that vanishes as $n\rightarrow\infty$. We prove this formally in the following subsection.
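As a sanity check (not part of the proof), one can numerically verify for the parameters of Figure~\ref{fig:3coding} that the choice (\ref{eq:codelength}) yields nonnegative phase durations and that User~1's expected number of clean RLCs matches $m$ when $r_{sym}$ is at the bound (\ref{eq:3uACH2x}):
\begin{verbatim}
# Numerical check of (9) for mu_12 = mu_13 = 4/5, mu_21 = mu_23 = 2/5,
# mu_31 = mu_32 = 1/5, with the block length normalized to n = 1.
mu12 = mu13 = 4/5; mu21 = mu23 = 2/5; mu31 = mu32 = 1/5
r = min(1/(1+mu21+mu23), 1/(1+mu31+mu32),
        1/(1+mu23+mu12+mu13*(1-mu23+mu32)*(1-mu12)))
n = 1.0
m = n*r
N1, N2, N3 = n, n*r*mu23, n*r*mu32
D3, D4, D5 = m - N2 + N3, N2 - N3, N1 - N2 - m     # phase durations
clean = D3*(1-mu12)*(1-mu13) + D4*(1-mu12) + D5    # E[clean RLCs] at User 1
print(f"r_sym = {r:.4f}; D3,D4,D5 = {D3:.4f},{D4:.4f},{D5:.4f}; "
      f"E[clean] = {clean:.4f} vs m = {m:.4f}")
\end{verbatim}
For these parameters $r_{sym}\approx 0.4296$, which exceeds the conventional random coding rate $1/(1+\mu_{12}+\mu_{13})\approx 0.3846$, and the expected clean-equation count equals $m$ exactly, confirming that the third constraint in (\ref{eq:3uACH2x}) is the binding one.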
\begin{remark}
Recall from the illustrative example, we wanted to maximize the chance that User~1 can clean $\vec{w}_2$ and $\vec{w}_3$ content from a transmission, and we assumed that both User~2 and 3 decode all three messages. Thus the phases of transmission in Figure~\ref{fig:3coding}, have the following roles:
Phase~5 provides RLCs about $\vec{w}_1$ to User~1. Phase~5 also provides enough $\vec{w}_1$ RLCs for each of User~2 and User~3 to decode $\vec{w}_1$ (with the help of their side information).
A fraction, $(1-\mu_{12})$, of Phase~4 is useful to User~1 after using side information to clean the $\vec{w}_2$ component, to obtain a clean RLC of only $\vec{w}_1$. Similarly, a smaller fraction, $(1-\mu_{12})(1-\mu_{13})$, of Phase~3 is useful to User~1 by cleaning both the $\vec{w}_2$ and $\vec{w}_3$ components, to obtain a clean RLC of $\vec{w}_1$. Note that User~1 only uses clean RLCs from Phases~3-5 to decode $\vec{w}_1$. Because Users~2 and 3 each decoded $\vec{w}_1$ from Phase 5, each cancels out the $\vec{w}_1$ component from Phases~1-4, and then each uses the remaining residual symbols to decode both messages $\vec{w}_2$ and $\vec{w}_3$.
\end{remark}
\subsection{Proof of Achievability}\label{sec:BIC:achieve}
We now address the achievability of rate $r_{sym}$ satisfying (\ref{eq:3uACH2x}), using the hybrid network codes we just defined. Before proceeding we recall that the message size $m$ is such that $m=nr-\delta_n$, where $\delta_n$ is positive and $\delta_n=o(n)$.
To prove that the rate is achievable, we must show that the probability that any user does not decode its desired message vanishes as $n\rightarrow \infty$ (i.e., $\Pr[ \widehat{\vec{w}}_i \neq \vec{w}_i] \rightarrow 0$). Moreover, since our decoding strategy requires that User~2 and User~3 decode all three messages, we also show that the probability of decoding error of all messages at Users~2 and 3 vanishes as $n\rightarrow\infty$. Specifically, we have the following possible error events, each of which must approach 0 as $n\rightarrow\infty$:
\begin{description}
\item[$\mathcal{E}_1$:] User~1 fails to decode $\vec{w}_1$.
\item[$\mathcal{E}_2$:] User~2 fails to decode $\{\vec{w}_1,\vec{w}_2,\vec{w}_3\}$.
\item[$\mathcal{E}_3$:] User~3 fails to decode $\{\vec{w}_1,\vec{w}_2,\vec{w}_3\}$.
\end{description}
For each event we will separate the error analysis into different sources of error, and for each source of error we will use one of two analysis techniques to prove that the probability of such an event occurring vanishes with large $n$. In order to provide clarity, and since User~1's decoding strategy was the primary difference between hybrid coding and conventional random coding, we will revisit these two techniques after first applying them in the context of analyzing the probability of event $\mathcal{E}_1$.
Recall that User~1 first uses its side information to ``clean'' transmissions from Phases~3 and 4 resulting in random linear combinations (RLCs) of only bits from $\vec{w}_1$. It then combines clean RLCs with those received during Phase~5 (recall from Figure~\ref{fig:3coding} that Phase~5 only has $\vec{w}_1$ content) and attempts to linearly decode $\vec{w}_1$. Therefore, we express User~1's decoding error as the union of two events, $\mathcal{E}_1 = \mathcal{E}_{1a}\cup\mathcal{E}_{1b}$, defined as:
\begin{description}
\item[$\mathcal{E}_{1a}$: ] The total number of random linear combinations (RLCs) cleaned from Phases~3 and 4 and received in Phase~5 is less than $m+\underline{\delta}_n$, where $\underline{\delta}_n$ grows with $n$ and $0 < \underline{\delta}_n < \delta_n$.
\item[$\mathcal{E}_{1b}$: ] The random matrix that describes the transformation of $\vec{w}_1$ to received (clean) RLCs is rank deficient.
\end{description}
We will now proceed to show
\begin{align*}
\Pr[\mathcal{E}_{1}] =\Pr[\mathcal{E}_{1a}\cup\mathcal{E}_{1b}] =\Pr[\mathcal{E}_{1a}] + \Pr[\mathcal{E}_{1a}^c\cap\mathcal{E}_{1b}] = o(1).
\end{align*}
We first address $\Pr[\mathcal{E}_{1a}]$. By the scheme's construction:
\begin{itemize}
\item Phase~3 has duration $m-N_2+N_3$, and the probability of cleaning each transmission is $(1-\mu_{12})(1-\mu_{13})$.
\item Phase~4 has duration $N_2-N_3$, and the probability of cleaning each transmission is $(1-\mu_{12})$.
\item Phase~5 has duration $N_1-N_2-m$, and each transmission is a clean RLC of $\vec{w}_1$.
\end{itemize}
We may thus represent receiving a clean RLC in the $\ell$-th channel use of Phase 3 as a Bernoulli($1-\mu_{12}-\mu_{13}+\mu_{12}\mu_{13}$) random variable $\lambda_3[\ell]$, i.i.d. across $\ell=1,\ldots,m-N_2+N_3$, and receiving a clean equation in the $\ell^\prime$-th channel use of Phase 4 as a Bernoulli($1-\mu_{12}$) random variable $\lambda_4[\ell^\prime]$, i.i.d. across $\ell^\prime=1,\ldots,N_2-N_3$.
We now note that the duration of Phases~3 and 4 ($D_3$ and $D_4$) are by construction,
\begin{align*}
D_3 ={}& m-N_2+N_3 ={} nr_{sym}(1-\mu_{23}+\mu_{32}) -\delta_n,\\
D_4={}& N_2-N_3 ={} nr_{sym}(\mu_{23}-\mu_{32}),
\end{align*}
and that the duration of Phase~5 may be bounded as
\begin{align}
D_5={}&N_1-N_2-m \nonumber\\
={}& n-nr_{sym}(1+\mu_{23}) +\delta_n \nonumber\\
\stackrel{(a)}{\geq}{}& nr_{sym}\left(1+\mu_{23}+\mu_{12} +\mu_{13}(1-\mu_{23}+\mu_{32})(1-\mu_{12})\right)
-nr_{sym}(1+\mu_{23}) +\delta_n \nonumber\\
={}& nr_{sym}\left(\mu_{12} +\mu_{13}(1-\mu_{23}+\mu_{32})(1-\mu_{12})\right)
+\delta_n \nonumber\\
={}& nr_{sym}\left(1-(1-\mu_{23}+\mu_{32})(1-\mu_{12})(1-\mu_{13})
-(\mu_{23}-\mu_{32})(1-\mu_{12})\right)
+\delta_n \nonumber\\
\geq{}& nr_{sym}-D_3(1-\mu_{12})(1-\mu_{13})-D_4(1-\mu_{12}),\label{eq:achproofbnd}
\end{align}
where step (a) results directly from (\ref{eq:3uACH2x}). From this, the probability of $\mathcal{E}_{1a}$ occurring is, in the limit,
\begin{align*}
\lim_{n\rightarrow\infty}\Pr[\mathcal{E}_{1a}]
={}& \lim_{n\rightarrow\infty}
\Pr\Bigg[\Bigg(\sum_{\ell=1}^{D_3}\lambda_3[\ell] +\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime] + D_5\Bigg)<m+\underline{\delta}_n\Bigg]\\
\stackrel{(b)}{\leq}{}& \lim_{n\rightarrow\infty}
\Pr\Bigg[\Bigg(\sum_{\ell=1}^{D_3}\lambda_3[\ell] +\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime] + nr_{sym}
-D_3(1-\mu_{12})(1-\mu_{13}) -D_4(1-\mu_{12})\Bigg)<m+\underline{\delta}_n\Bigg]\\
={}& \lim_{n\rightarrow\infty}
\Pr\Bigg[\Bigg(\sum_{\ell=1}^{D_3}\lambda_3[\ell]-\E\left[\sum_{\ell=1}^{D_3}\lambda_3[\ell]\right]
+\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime]-\E\left[\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime]\right]
+nr_{sym}
\Bigg)<m+\underline{\delta}_n\Bigg]\\
={}& \lim_{n\rightarrow\infty}
\Pr\Bigg[\Bigg(\sum_{\ell=1}^{D_3}\lambda_3[\ell]-\E\left[\sum_{\ell=1}^{D_3}\lambda_3[\ell]\right]
+ \sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime]-\E\left[\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime]\right] + nr_{sym}
\Bigg)<nr_{sym}-\delta_n+\underline{\delta}_n\Bigg]\\
\leq{}& \lim_{n\rightarrow\infty}
\Pr\left[\left(\frac{\left|\sum_{\ell=1}^{D_3}\lambda_3[\ell]-\E\left[\sum_{\ell=1}^{D_3}\lambda_3[\ell]\right]\right|}{n}
+ \frac{\left|\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime]-\E\left[\sum_{\ell^\prime=1}^{D_4}\lambda_4[\ell^\prime]\right]\right|}{n} \right)>\frac{\delta_n-\underline{\delta}_n}{n}\right]\\
\stackrel{(c)}{=}{}& 0,
\end{align*}
where in (b) we applied the bound (\ref{eq:achproofbnd}) while noting that if $a\geq b$ then $\Pr[a<c] \leq \Pr[b<c]$,
and in (c) we invoked the law of large numbers while noting that $\delta_n-\underline{\delta}_n$ is positive by construction.
Now consider the event $\mathcal{E}_{1a}^c\cap\mathcal{E}_{1b}$. This describes the case where User~1 receives enough (i.e., $m+\underline{\delta}_n$) clean equations but the randomly generated matrix that maps $\vec{w}_1$ to clean equations has rank less than $m$. This type of error is well studied throughout the network coding literature. For instance, from expression~(3) of~\cite{MacKay2005}, the probability of an $m\times (m+\underline{\delta}_n)$ random binary matrix having rank less than $m$ can be bounded as
\begin{align*}
\Pr(\mathcal{E}_{1a}^c\cap\mathcal{E}_{1b}) \leq 2^{-\underline{\delta}_n},
\end{align*}
which implies, as desired,
\begin{align*}
\lim_{n\rightarrow\infty}\Pr(\mathcal{E}_{1a}^c\cap\mathcal{E}_{1b}) = 0.
\end{align*}
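The rank-deficiency bound can also be checked empirically; the following Monte Carlo sketch (with illustrative matrix sizes and trial counts) compares the empirical failure probability of random $m\times(m+d)$ binary matrices against $2^{-d}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def gf2_rank(mat):
    # Gaussian elimination over GF(2) on a copy of the 0/1 matrix.
    m = np.array(mat, dtype=np.int64) % 2
    rank = 0
    for c in range(m.shape[1]):
        piv = np.nonzero(m[rank:, c])[0]
        if piv.size == 0:
            continue
        m[[rank, rank + piv[0]]] = m[[rank + piv[0], rank]]
        for r in np.nonzero(m[:, c])[0]:
            if r != rank:
                m[r] ^= m[rank]
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

m, trials = 40, 1000
for d in (0, 2, 4):
    fails = sum(gf2_rank(rng.integers(0, 2, size=(m, m + d))) < m
                for _ in range(trials))
    print(f"d={d}: empirical {fails / trials:.3f}  vs  bound {2.0 ** -d:.3f}")
\end{verbatim}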
We now revisit the analysis and note that $\mathcal{E}_{1a}$ may be thought of as the event where the actual amount of randomly available side information was ``not enough'' because it deviated significantly from the mean. On the other hand $\mathcal{E}_{1a}^c\cap\mathcal{E}_{1b}$ describes the case where there was a sufficient amount of side information, but the randomly generated coding scheme failed to communicate the remaining desired message bits. For each type of error we applied a different analysis technique. To address the first, we applied a concentration inequality to show that the probability that the amount of randomly available side information deviates significantly from the mean vanishes as $n$ grows large. To address the second, we applied existing analysis on the properties of randomly generated matrices to show that the probability of a rank-deficient encoding matrix vanishes as $n$ grows large.
To prove that the probabilities of error events $\mathcal{E}_2$ and $\mathcal{E}_3$ also vanish as $n$ grows large, we must systematically break down these error events into subevents of these two types. Since these two users apply the same decoding process, we now focus on User~2, and we identify such subevents.
Recall that User~2 first uses its side information and Phase~5 transmissions (i.e., RLCs with only $\vec{w}_1$ content) to decode $\vec{w}_1$. Let $\mathcal{E}_{2,1}$ be the event where User~2 fails to decode $\vec{w}_1$, which we further break down into the following subevents, $\mathcal{E}_{2,1a}$ and $\mathcal{E}_{2,1b}$:
\begin{description}
\item[$\mathcal{E}_{2,1a}$: ] User~2 does not receive enough side information, i.e., $\mathbf{1}^\top\vec{g}_{21} < (1-\mu_{21})nr_{sym} -\underline{\delta}_n$, where $\underline{\delta}_n$ grows with $n$ and $0 < \underline{\delta}_n < \delta_n$.
\item[$\mathcal{E}_{2,1b}$: ] The random matrix that describes the transformation of $\vec{w}_1$ to Phase~5 RLCs is rank deficient,
\end{description}
One can verify using the same methods as in the analysis of $\mathcal{E}_1$ that the probability of either $\mathcal{E}_{2,1a}$ or $\mathcal{E}_{2,1a}^c\cap\mathcal{E}_{2,1b}$ occurring vanishes with large $n$ as long as the rate $r_{sym}$ satisfies (\ref{eq:3uACH2x}).
Next, recall that after decoding $\vec{w}_1$, User~2 removes $\vec{w}_1$ content from Phases~1--4, and proceeds to decode both $\vec{w}_2$ and $\vec{w}_3$. Let $\mathcal{E}_{2,\{2,3\}}$ denote the event where User~2 fails to decode $\{\vec{w}_2,\vec{w}_3\}$, and we now study specifically $\mathcal{E}_{2,1}^c \cap \mathcal{E}_{2,\{2,3\}}$.
Notice that by construction, after the $\vec{w}_1$ content has been removed, User~2 will receive some uncoded bits of $\vec{w}_2$ from Phase~4. Furthermore, User~2 can also use its side information to clean the $\vec{w}_3$ component from some transmissions during Phase~3 to receive more (independent) uncoded bits from $\vec{w}_2$.
Noting this observation, we can now specify the final two error events for $\mathcal{E}_2$ analysis:
\begin{description}
\item[$\mathcal{E}_{2,1}^c \cap \mathcal{E}_{2,\{23\}a}$: ] \quad \quad \quad \quad After decoding $\vec{w}_1$ and removing its content from Phases~1--4, the number of bits about $\vec{w}_3$ User~2 learns from side information and the number of clean uncoded bits about $\vec{w}_2$ User~2 learns from Phases~3 and 4 is significantly less than the mean.
\item[$\mathcal{E}_{2,1}^c \cap \mathcal{E}_{2,\{23\}b}$: ] \quad \quad \quad \quad After decoding $\vec{w}_1$ and removing its content from Phases~1--4, the random matrix that describes the transformation of $\vec{w}_2$ and $\vec{w}_3$ to transmissions in Phases~1 and 2 is rank deficient.
\end{description}
Again, one can verify using the same methods as in the analyses of $\mathcal{E}_{1a}$ and $\mathcal{E}_{1b}$ that the probabilities of these events occurring vanish with large $n$ as long as the rate $r_{sym}$ satisfies (\ref{eq:3uACH2x}). Using the analyses of these subevents, we have
\begin{align*}
\lim_{n\rightarrow\infty}\Pr\left[\mathcal{E}_2\right]
={}& \lim_{n\rightarrow\infty}\Pr\left[\mathcal{E}_{2,1}\right] + \Pr\left[\mathcal{E}_{2,1}^c \cap \mathcal{E}_{2,\{23\}}\right] \\
={}& \lim_{n\rightarrow\infty}\Pr\left[\mathcal{E}_{2,1a}\right] + \Pr\left[\mathcal{E}_{2,1a}^c\cap\mathcal{E}_{2,1b}\right]
+ \Pr\left[\mathcal{E}_{2,1}^c \cap \mathcal{E}_{2,\{23\}a}\right]
+ \Pr\left[\mathcal{E}_{2,1}^c \cap \mathcal{E}_{2,\{23\}a}^c\cap \mathcal{E}_{2,\{23\}b}\right] \\
={}& 0.
\end{align*}
Through similarly identifying subevents of $\mathcal{E}_3$ we can also establish that $\Pr[\mathcal{E}_3]\rightarrow 0$ as $n\rightarrow\infty$. Therefore, the probability of decoding error at each user vanishes as $n$ grows large.\hfill\qed
\section{Outer Bound}\label{sec:converse}
In this section, we present an outer bound on the capacity region of the BIC problem. We will first state and prove the bound for the 3-user setting and remark on its implications. We then introduce a key lemma and prove the 3-user outer bound. Finally we state a general expression for an outer bound on the general $K$-user BIC capacity region. Its proof is relegated to Appendix~\ref{app:KuOB}.
\subsection{3-user Outer Bound}
We begin by stating the following result:
\begin{theorem}\label{thm:3uOB}
Consider a 3-user BIC problem. Rates $(r_1,r_2,r_3)$ are achievable only if,
\begin{align}
r_i +
\mu_{ij}r_j +
\left(\mu_{ik}-\frac{\plus{\mu_{ij}-\mu_{kj}}\plus{\mu_{ik}-\mu_{jk}}}{1-\mu_{kj}}\right)r_k
\leq{}& 1\label{eq:thm:3uOB},
\end{align}
for any $i\neq j\neq k\in\{1,2,3\}$ and $\plus{a} \triangleq \max\{a,0\}$.
\end{theorem}
\begin{remark}
If the sender is not blind (i.e., the side information is known), our BIC problem can be converted to an analogous classic index coding problem with each user, $i$, desiring four different messages whose rates sum to $r_i$ and whose proportions are determined by $\mu_{ji}$ and $\mu_{ki}$ for $i\neq j\neq k$.
For this resulting classic index coding problem, using the coding techniques of~\cite{ABKSW2013:isit}, it can be shown that rate tuples $(r_1,r_2,r_3)$ satisfying, for all $i\neq j\neq k \in\{1,2,3\}$,
\begin{align*}
r_i +\mu_{ij}r_j +\mu_{ik}\mu_{jk}r_k\leq 1
\end{align*}
are achievable. Notice that for some side information parameters (e.g., when $\mu_{23}=\mu_{32}=0$ and $\mu_{1j}>0$ for $j=2,3$), the rates achieved by a non-blind sender can be greater than the BIC outer bound (\ref{eq:thm:3uOB}). The key difference in expressions is the third term on the left side of (\ref{eq:thm:3uOB}), which captures (at least partially) the capacity loss due to sender blindness.
\end{remark}
\begin{remark}
By evaluating Theorem~\ref{thm:3uOB} and comparing with the condition for achievability using conventional random coding (\ref{eq:randomnetworkrate}) we arrive at the following result:
\begin{proposition}\label{prop:2u}
Consider a 2-user BIC defined by parameters $\mu_{12}$ and $\mu_{21}$. The capacity region is the set of all rate pairs $(r_1,r_2)$ satisfying
\begin{align}
r_1 + \mu_{12}r_2 \leq{}& 1,\label{eq:2u1}\\
\mu_{21}r_1 + r_2 \leq{}& 1.\label{eq:2u2}
\end{align}
\end{proposition}
\begin{proof}
The converse results from Theorem~\ref{thm:3uOB} by letting $i,j\in\{1,2\}$, $k=3$ and fixing $r_3=0$, while achievability is a result of evaluation of (\ref{eq:randomnetworkrate}).
\end{proof}
\end{remark}
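As a concrete consequence (a direct calculation, obtained by solving both constraints with equality), the sum-rate-optimal corner point of this region is
\begin{align*}
(r_1, r_2) = \left(\frac{1-\mu_{12}}{1-\mu_{12}\mu_{21}},\ \frac{1-\mu_{21}}{1-\mu_{12}\mu_{21}}\right),
\end{align*}
which reduces to the rate pair $(1,1)$ when each user has full side information about the other's message ($\mu_{12}=\mu_{21}=0$).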
\subsection{Proof of Theorem~\ref{thm:3uOB}}\label{sec:3uOB}
To prove Theorem~\ref{thm:3uOB}, we start by stating and proving a key lemma:
\begin{lemma}\label{lem:split}
Consider a BIC problem with side information parameters $\{\mu_{ij}\}$.
Then, for any $(r_1,\ldots,r_K)$ scheme with block length $n$ and any random variable $V$ that is independent of $\vec{w}_j$ and $\vec{g}_{ij}$ with $i\neq j$ (but may depend on other messages and channel parameters), we have
\begin{align}
H\left(\vec{\mathbf{x}}\middle|\vec{\phi}_{ij},\vec{g}_{ij},V\right) \geq{}& \mu_{ij}H\left(\vec{\mathbf{x}}\middle|V\right) + \left(1-\mu_{ij}\right)H\left(\vec{\mathbf{x}} \middle|\vec{w}_j,V\right).\label{eq:lem:split:1}
\end{align}
Additionally, if $\mu_{kj}\leq\mu_{ij}$ where $i\neq j \neq k$, then
\begin{align}
H\left(\vec{\mathbf{x}}\middle|\vec{\phi}_{ij},\vec{g}_{ij},V\right) \geq{}& \frac{\mu_{ij} - \mu_{kj}}{1-\mu_{kj}}H\left(\vec{\mathbf{x}}\middle|V\right)
+ \frac{1-\mu_{ij}}{1-\mu_{kj}}H\left(\vec{\mathbf{x}} \middle|\vec{\phi}_{kj},\vec{g}_{kj},V\right).\label{eq:lem:split:2}
\end{align}
\end{lemma}
\begin{remark}
Inequality (\ref{eq:lem:split:1}) captures an intuition that can be illustrated through the following toy problem. Consider a scenario where the sender has 4 bits $b_1,b_2,c_1,c_2$. It knows that User~2 knows $c_1$ and $c_2$ already and User~3 knows $b_1$ and $b_2$. On the other hand, the sender only knows that User~1 knows \emph{either} $b_1$ or $b_2$ (but not both) and \emph{either} $c_1$ or $c_2$ (but not both) and that both of these uncertainties are the result of a (fair) coin flip. If the sender sends a single transmission such that both User~2 and User~3 learn something new about $b_1,b_2,c_1,c_2$, what is the minimum probability that User~1 also learns something new?
One possible transmission would be to send $b_1\oplus c_1$. In this case, User~2 learns $c_1$ and User~3 learns $b_1$, and there is a 75\% chance that User~1 learns either $b_1$, $c_1$, or $b_1\oplus c_1$. In comparison, we can evaluate (\ref{eq:lem:split:1}) for the proposed transmission by letting $i=1$, $j=2$, $k=3$, $\mu_{12}=\mu_{13} = \frac{1}{2}$, and assuming $\vec{w}_{2}=[b_1\quad b_2]$, $\vec{w}_3 = [c_1\quad c_2]$, and $V = (\vec{\phi}_{13},\vec{g}_{13})$.
In doing so, we see that the right hand side of (\ref{eq:lem:split:1}) evaluates to $ \mu_{12}(1) + (1-\mu_{12})(\mu_{13})= \frac{3}{4}$, signifying that the 75\% chance of ``leaking'' information to User~1 is the lowest possible.
Notice that, as stated, Lemma~\ref{lem:split} does not assume decodability of any message. Moreover, it applies regardless of the number of channel uses, whereas the toy example assumed only a single channel use. Consequently, Lemma~\ref{lem:split} can be viewed as a powerful extension of the intution from the toy example to vector (i.e., coded) representations of message bits.
\label{rem:toyexample}\end{remark}
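The 75\% figure in Remark~\ref{rem:toyexample} can be verified by exhaustive enumeration, as in the following sketch (ours): User~1 ``learns something new'' exactly when the transmitted parity is linearly independent of its side information over GF(2).
\begin{verbatim}
import numpy as np
from itertools import product

def gf2_rank(mat):
    # Gaussian elimination over GF(2) on a copy of the 0/1 matrix.
    m = np.array(mat, dtype=np.int64) % 2
    rank = 0
    for c in range(m.shape[1]):
        piv = np.nonzero(m[rank:, c])[0]
        if piv.size == 0:
            continue
        m[[rank, rank + piv[0]]] = m[[rank + piv[0], rank]]
        for r in np.nonzero(m[:, c])[0]:
            if r != rank:
                m[r] ^= m[rank]
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

tx = np.array([1, 0, 1, 0])        # b1 + c1 in coordinates (b1, b2, c1, c2)
leaks = 0
for b, c in product((0, 1), repeat=2):  # which b bit / c bit User 1 knows
    side = np.zeros((2, 4), dtype=np.int64)
    side[0, b] = 1                      # knows b1 (b=0) or b2 (b=1)
    side[1, 2 + c] = 1                  # knows c1 (c=0) or c2 (c=1)
    leaks += gf2_rank(np.vstack([side, tx])) > gf2_rank(side)
print(leaks / 4)                        # -> 0.75
\end{verbatim}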
\begin{remark}\label{rem:alignment2}
One can note that if the sender is not blind, it can construct a signal that invalidates (\ref{eq:lem:split:1}). For example, consider the scenario in Remark~\ref{rem:toyexample}, but now assume that the sender is aware that User~1 knows $b_1$ and $c_1$. The sender can now use this knowledge to send a single transmission $b_1\oplus c_1$. One can easily verify that, for this one transmission, the left hand side now evaluates to $0$, while the right hand side evaluates to $\frac{1}{2}$, which violates the claim. Therefore, the inequality specifically captures the impact of a blind sender.
\end{remark}
\begin{remark}
Inequality (\ref{eq:lem:split:1}) can more generally be interpreted as follows. Note that $H(\vec{x}|V)$ corresponds to the case that there is no side information about $\vec{w}_j$ provided in the conditioning, and $H(\vec{x}|V,\vec{w}_j)$ corresponds to the case that all of $\vec{w}_j$ is provided as the side information in the conditioning. Therefore inequality (\ref{eq:lem:split:1}) lower bounds $H(\vec{\mathbf{x}}|\vec{\phi}_{ij},\vec{g}_{ij},V)$ with a weighted average of two extreme cases, where either none or all of $\vec{w}_j$ is provided as side information. A similar interpretation holds for (\ref{eq:lem:split:2}), where $\vec{w}_j$ is replaced with $(\vec{\phi}_{kj},\vec{g}_{kj})$.
\end{remark}
\begin{proof}
To prove Lemma~\ref{lem:split}, we first define a virtual side information signal, $\vec{\phi}^\prime$, such that $\vec{\phi}_{ij}$ is a physically degraded version of $\vec{\phi}^\prime$. To do so, we also specify two channel state sequences, $\vec{g}^\prime$ and $\vec{g}^{\ddagger}$, drawn i.i.d.\ from two different Bernoulli distributions that take the value zero with probabilities $\mu^\prime$ and $\delta = \frac{\mu_{ij}-\mu^\prime}{1-\mu^\prime}$, respectively. The side information signals are constructed such that for $\ell\in\{1,\ldots,m_j\}$,
\begin{align}
\phi^\prime[\ell] ={}& g^\prime[\ell]{w}_j[\ell] \qquad \text{ and }\qquad
g_{ij}[\ell] = g^\prime[\ell]g^\ddagger[\ell],
\end{align}
which necessarily implies $\mu^\prime \leq\mu_{ij}$.
We now establish a relationship between the virtual side information signal, $\vec{\phi}^\prime$, and the degraded side information, $\vec{\phi}_{ij}$, using a strong data processing inequality proven in~\cite{AGKN2014:isit}, which states that for random variables $U \leftrightarrow X \leftrightarrow Y$ that form a Markov chain,
\begin{align*}
I(Y;U) \leq s^*(X;Y)I(X;U),
\end{align*}
where
\begin{align*}
s^*(X;Y) \triangleq{}& \sup_{Q_X\neq P_X}\frac{D(Q_Y||P_Y)}{D(Q_X||P_X)},
\end{align*}
and $Q_Y$ is the marginal distribution of $Y$ from the joint distribution $Q_{XY} = P_{Y|X}Q_X$.
We apply the strong data processing inequality by letting $U = (\vec{\mathbf{x}},V)$, $X = (\vec{\phi}^\prime,\vec{g}^\prime)$, and $Y = (\vec{\phi}_{ij},\vec{g}_{ij})$, to show
\begin{align}
H\left(\vec{\mathbf{x}}\middle|\vec{\phi}_{ij},\vec{g}_{ij},V\right)
={}& -I\left(\vec{\phi}_{ij},\vec{g}_{ij};\vec{\mathbf{x}} \middle| V\right)+H\left(\vec{\mathbf{x}} \middle| V \right)\nonumber\\
={}& -I\left(\vec{\phi}_{ij},\vec{g}_{ij};\vec{\mathbf{x}},V\right)+H\left(\vec{\mathbf{x}} \middle|V\right)\nonumber\\
\geq{}& -s^*\left(\left(\vec{\phi}^\prime,\vec{g}^\prime\right);\left(\vec{\phi}_{ij},\vec{g}_{ij}\right)\right)
I\left(\vec{\phi}^\prime,\vec{g}^\prime;\vec{\mathbf{x}},V\right) + H\left(\vec{\mathbf{x}}\middle|V\right)\nonumber\\
\stackrel{(a)}{=}{}& -\frac{1-\mu_{ij}}{1-\mu^\prime}
\left(I\left(\vec{\phi}^\prime,\vec{g}^\prime;\vec{\mathbf{x}}\middle|V\right) + I\left(\vec{\phi}^\prime,\vec{g}^\prime;V\right)\right)+H\left(\vec{\mathbf{x}}\middle|V\right)\nonumber\\
={}& -\frac{1-\mu_{ij}}{1-\mu^\prime}
\left(H\left(\vec{\mathbf{x}}\middle|V\right)-H\left(\vec{\mathbf{x}}\middle|V,\vec{\phi}^\prime,\vec{g}^\prime\right)\right)+H\left(\vec{\mathbf{x}}\middle|V\right)\nonumber\\
={}& \frac{\mu_{ij}-\mu^\prime}{1-\mu^\prime} H\left(\vec{\mathbf{x}} \middle|V\right) + \frac{1-\mu_{ij}}{1-\mu^\prime}H\left(\vec{\mathbf{x}}\middle|V,\vec{\phi}^\prime,\vec{g}^\prime\right).\label{eq:lemproof1}
\end{align}
Step (a), where we evaluated $s^*\left(\left(\vec{\phi}^\prime,\vec{g}^\prime\right);\left(\vec{\phi}_{ij},\vec{g}_{ij}\right)\right)$, is proven in Appendix~\ref{app:sstar}.
Recall that we only require $\mu^\prime<\mu_{ij}$ in order for the virtual signal to be properly defined, and we notice the following to complete the proof:
\begin{itemize}
\item If $\mu^\prime=0$, then $(\vec{\phi}^\prime,\vec{g}^\prime) = (\vec{w}_j,\vec{1})$ and we prove (\ref{eq:lem:split:1}),
\item If $\mu^\prime=\mu_{kj}<\mu_{ij}$, then $(\vec{\phi}^\prime,\vec{g}^\prime)$ is statistically equivalent to $(\vec{\phi}_{kj},\vec{g}_{kj})$ and we prove (\ref{eq:lem:split:2}).\vspace{-3ex}
\end{itemize}\end{proof}
We now use Lemma~\ref{lem:split} to prove Theorem~\ref{thm:3uOB}. First, we note that two side information parameter relationships affect the form of (\ref{eq:thm:3uOB}): the term $\frac{\plus{\mu_{ij}-\mu_{kj}}\plus{\mu_{ik}-\mu_{jk}}}{1-\mu_{kj}}$ is nonzero only if both $\mu_{kj} < \mu_{ij}$ and $\mu_{jk} < \mu_{ik}$. In this case,
\begin{align}
r_i +
\mu_{ij}r_j +
\left(\mu_{ik}-\frac{(\mu_{ij}-\mu_{kj})(\mu_{ik}-\mu_{jk})}{1-\mu_{kj}}\right)r_k
\leq{}& 1.\label{eq:3u2}
\end{align}
Otherwise, if either $\mu_{kj} \geq \mu_{ij}$ or $\mu_{jk} \geq \mu_{ik}$, then
\begin{align}
r_i +
\mu_{ij}r_j +
\mu_{ik}r_k
\leq{}& 1.\label{eq:3u1}
\end{align}
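For a concrete instance (numbers chosen purely for illustration), take $\mu_{ij}=0.9$, $\mu_{kj}=0.3$, $\mu_{ik}=0.8$, and $\mu_{jk}=0.4$: both conditions of the first case hold, and (\ref{eq:3u2}) reads
\begin{align*}
r_i + 0.9\,r_j + \left(0.8 - \tfrac{0.6\cdot 0.4}{0.7}\right)r_k \approx r_i + 0.9\,r_j + 0.457\,r_k \leq{}& 1.
\end{align*}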
We prove these two cases separately, and only address the first case, (\ref{eq:3u2}), here. The proof of (\ref{eq:3u1}) will use similar techniques, and may be found in Appendix~\ref{app:3u2}. We therefore assume $\mu_{kj}< \mu_{ij}$ and $\mu_{jk}< \mu_{ik}$, and start with Fano's inequality at User~$i$:
\begin{align}
nr_i
\leq{}& I\left(\vec{x},\vec{\phi}_{ij},\vec{g}_{ij},\vec{\phi}_{ik},\vec{g}_{ik};\vec{w}_i\right) + o(n)\nonumber\\
={}& H\left(\vec{x}\middle|\vec{\phi}_{ij},\vec{g}_{ij},\vec{\phi}_{ik},\vec{g}_{ik}\right) - H\left(\vec{x}\middle|\vec{w}_i,\vec{\phi}_{ij},\vec{g}_{ij},\vec{\phi}_{ik},\vec{g}_{ik}\right) + o(n)\nonumber\\
\leq{}& n - H\left(\vec{x}\middle|\vec{w}_i,\vec{\phi}_{ij},\vec{g}_{ij},\vec{\phi}_{ik},\vec{g}_{ik}\right) + o(n)\label{eq:3uprooffano}\\
\stackrel{(a)}{\leq}{}& n - \frac{\mu_{ij} - \mu_{kj}}{1-\mu_{kj}}\overbrace{H\left(\vec{\mathbf{x}}\middle|\vec{w}_i,\vec{\phi}_{ik},\vec{g}_{ik}\right)}^A
- \frac{1-\mu_{ij}}{1-\mu_{kj}}\overbrace{H\left(\vec{\mathbf{x}} \middle|\vec{w}_i,\vec{\phi}_{kj},\vec{g}_{kj},\vec{\phi}_{ik},\vec{g}_{ik}\right)}^B + o(n),\label{eq:3uprooffano2}
\end{align}
where in step (a) we applied (\ref{eq:lem:split:2}) from Lemma~\ref{lem:split} by letting $V=\left(\vec{w}_i,\vec{\phi}_{ik},\vec{g}_{ik}\right)$. Notice there are two negative entropy terms, $A$ and $B$, to account for. To address the quantity $A$, we enhance side information at User~$j$ from $\left(\vec{\phi}_{ji},\vec{g}_{ji}\right)$ to $\vec{w}_i$, and observe:
\begin{align}
nr_j
\leq{}& I\left(\vec{x},\vec{\phi}_{ji},\vec{g}_{ji},\vec{\phi}_{jk},\vec{g}_{jk};\vec{w}_j\right) +o(n)\nonumber\\
\leq{}& I\left(\vec{x},\vec{w}_{i},\vec{\phi}_{jk},\vec{g}_{jk};\vec{w}_j\right) +o(n)\nonumber\\
={}& H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{jk},\vec{g}_{jk}\right) - H\left(\vec{x}\middle|\vec{w}_{i},\vec{w}_j,\vec{\phi}_{jk},\vec{g}_{jk}\right) + o(n)\nonumber\\
\stackrel{(b)}{\leq}{}& H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{jk},\vec{g}_{jk}\right) - \mu_{jk}H\left(\vec{x}\middle|\vec{w}_{i},\vec{w}_j\right) + o(n)\nonumber\\
\stackrel{(c)}{\leq}{}& \overbrace{H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{ik},\vec{g}_{ik}\right)}^{A} - \mu_{jk}H\left(\vec{x}\middle|\vec{w}_{i},\vec{w}_j\right) + o(n),\label{eq:3uproof2A1}
\end{align}
where in (b) we used (\ref{eq:lem:split:1}) from Lemma~\ref{lem:split} while letting $V=(\vec{w}_i,\vec{w}_j)$, and
in (c) we observe that, since $\mu_{jk} < \mu_{ik}$ and since $g_{jk}[\ell]=0$ implies that $(\phi_{jk}[\ell]=0,g_{jk}[\ell]=0)$ is independent of $\vec{w}_i$ and $\vec{x}$, replacing $\left(\vec{\phi}_{jk},\vec{g}_{jk}\right)$ with $\left(\vec{\phi}_{ik},\vec{g}_{ik}\right)$ reduces the effective conditioning (see Claim~\ref{cl:staten} in Appendix~\ref{app:KuOB}). At User~$k$ we enhance side information from $\left(\vec{\phi}_{ki},\vec{g}_{ki},\vec{\phi}_{kj},\vec{g}_{kj}\right)$ to $(\vec{w}_i,\vec{w}_j)$ to find:
\begin{align}
nr_k
\leq{}& I\left(\vec{x},\vec{\phi}_{ki},\vec{g}_{ki},\vec{\phi}_{kj},\vec{g}_{kj};\vec{w}_k\right) +o(n)\nonumber\\
\leq{}& I\left(\vec{x},\vec{w}_{i},\vec{w}_j;\vec{w}_k\right) +o(n)\nonumber\\
\leq{}& H\left(\vec{x}\middle|\vec{w}_{i},\vec{w}_j\right) + o(n).\label{eq:3uproof2A2}
\end{align}
To account for the quantity $B$, we observe
\begin{align}
n\mu_{ik}r_k
={}& nr_k - n(1-\mu_{ik})r_k \nonumber\\
\leq{}& I\left(\vec{x},\vec{\phi}_{ki},\vec{g}_{ki},\vec{\phi}_{kj},\vec{g}_{kj};\vec{w}_k\right) - n(1-\mu_{ik})r_k + o(n)\nonumber\\
\leq{}& I\left(\vec{x},\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{\phi}_{ik},\vec{g}_{ik};\vec{w}_k\right) - n(1-\mu_{ik})r_k +o(n)\nonumber\\
={}& H\left(\vec{x}|\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{\phi}_{ik},\vec{g}_{ik}\right) - H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{w}_k\right)
+ I\left(\vec{\phi}_{ik},\vec{g}_{ik};\vec{w}_k\right) - n(1-\mu_{ik})r_k + o(n)\nonumber\\
={}& H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{\phi}_{ik},\vec{g}_{ik}\right) - H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{w}_k\right) + o(n)\nonumber\\
\stackrel{(d)}{\leq}{}& H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{\phi}_{ik},\vec{g}_{ik}\right) - \mu_{ij}H\left(\vec{x}\middle|\vec{w}_{i},\vec{w}_k\right)+ o(n),\nonumber\\
\leq{}& \overbrace{H\left(\vec{x}\middle|\vec{w}_{i},\vec{\phi}_{kj},\vec{g}_{kj},\vec{\phi}_{ik},\vec{g}_{ik}\right)}^{B} - \mu_{kj}H\left(\vec{x}\middle|\vec{w}_{i},\vec{w}_k\right) + o(n).\label{eq:3uproof2B1}
\end{align}
In step (d) we used (\ref{eq:lem:split:2}). Also, like (\ref{eq:3uproof2A2}), we find
\begin{align}
nr_j
\leq{}& H(\vec{x}|\vec{w}_{i},\vec{w}_j) + o(n).\label{eq:3uproof2B2}
\end{align}
By appropriately scaling (\ref{eq:3uproof2A1}), (\ref{eq:3uproof2A2}), (\ref{eq:3uproof2B1}), and (\ref{eq:3uproof2B2}), and then summing with (\ref{eq:3uprooffano2}) we arrive at (\ref{eq:3u2}), as desired.\hfill\qed
\subsection{$K$-user Outer Bound}
The construction of the bound is governed by a recursion specified using a tree data structure which we refer to as an \emph{outer bound tree}:
\begin{definition}[Outer Bound Tree (OBT)]\label{def:obt}
A $K$-user OBT is a directed labeled tree with $K$ levels, where each node in the first $K-2$ levels has 2 children and each node in the $(K-1)$-th level has one child. The label of the $i$-th node in level $\ell$ is denoted as $v[\ell,i]\in\{1,\ldots,K\}$, where if $\ell<K$ then $i\in \{1,\ldots,2^{\ell-1}\}$ and if $\ell=K$ then $i\in \{1,\ldots,2^{\ell-2}\}$. The index $i$ specifies the precise location in the level: nodes $i=2j-1$ and $i=2j$ in level $\ell<K$ are the left and right children, respectively, of node $j$ in level $\ell-1$. Node $i$ in level $K$ is the sole child of node $i$ in level $K-1$. Finally, the labels of an OBT must satisfy the following:
\begin{enumerate}
\item For any path from the root node of the tree to any leaf node, no labels are repeated.
\item Any two nodes with the same parent cannot have the same label.
\end{enumerate}
\end{definition}
The first requirement is equivalent to saying that the sequence of labels along any path from root to leaf is a permutation of $\{1,\ldots,K\}$. This is demonstrated in Figure~\ref{fig:tree4}, where we provide an example of a 4-user OBT.
\begin{figure}
\centering
\begin{tikzpicture}[yscale=1.35]
\node (a) at (0,0) [draw,thick,rounded corners] {$v[1,1] = 1$};
\node (b) at (-2,-1) [draw,thick,rounded corners] {$v[2,1] = 2$};
\node (c) at (2,-1) [draw,thick,rounded corners] {$v[2,2] = 3$};
\node (d) at (-3,-2) [draw,thick,rounded corners] {$v[3,1] = 3$};
\node (e) at (-1,-2) [draw,thick,rounded corners] {$v[3,2] = 4$};
\node (f) at (1,-2) [draw,thick,rounded corners] {$v[3,3] = 2$};
\node (g) at (3,-2) [draw,thick,rounded corners] {$v[3,4] = 4$};
\node (h) at (-3,-3) [draw,thick,rounded corners] {$v[4,1] = 4$};
\node (i) at (-1,-3) [draw,thick,rounded corners] {$v[4,2] = 3$};
\node (j) at (1,-3) [draw,thick,rounded corners] {$v[4,3] = 4$};
\node (k) at (3,-3) [draw,thick,rounded corners] {$v[4,4] = 2$};
\draw[thick,-latex] (a)--(b);
\draw[thick,-latex] (a)--(c);
\draw[thick,-latex] (b)--(d);
\draw[thick,-latex] (b)--(e);
\draw[thick,-latex] (c)--(f);
\draw[thick,-latex] (c)--(g);
\draw[thick,-latex] (d)--(h);
\draw[thick,-latex] (e)--(i);
\draw[thick,-latex] (f)--(j);
\draw[thick,-latex] (g)--(k);
\end{tikzpicture}
\caption{Possible OBT for $K=4$ users. Notice that the sequence of labels along each root-to-leaf path is a permutation of the user indices.}\label{fig:tree4}\end{figure}
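To make Definition~\ref{def:obt} concrete, the following sketch (a hypothetical helper of ours, which encodes the tree as a dictionary) checks both labeling requirements for the OBT of Figure~\ref{fig:tree4}:
\begin{verbatim}
K = 4
# OBT of Figure fig:tree4: v[(level, index)] -> user label
v = {(1, 1): 1,
     (2, 1): 2, (2, 2): 3,
     (3, 1): 3, (3, 2): 4, (3, 3): 2, (3, 4): 4,
     (4, 1): 4, (4, 2): 3, (4, 3): 4, (4, 4): 2}

def children(level, idx):
    if level < K - 1:
        return [(level + 1, 2 * idx - 1), (level + 1, 2 * idx)]
    if level == K - 1:
        return [(level + 1, idx)]  # sole child at level K
    return []                      # leaf

def paths(node=(1, 1), prefix=()):
    prefix = prefix + (v[node],)
    kids = children(*node)
    if not kids:
        yield prefix
    for kid in kids:
        yield from paths(kid, prefix)

# Requirement 1: labels along any root-to-leaf path never repeat,
# hence form a permutation of {1, ..., K}.
assert all(sorted(p) == list(range(1, K + 1)) for p in paths())
# Requirement 2: two nodes with the same parent have distinct labels.
for level in range(1, K - 1):
    for idx in range(1, 2 ** (level - 1) + 1):
        kids = children(level, idx)
        assert len({v[c] for c in kids}) == len(kids)
\end{verbatim}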
We now state the following outer bound for the $K$-user BIC:
\begin{theorem}[]\label{thm:KuOB} Consider a $K$-user BIC with $K\geq 3$, defined by parameters $\{\mu_{ij}\}$. The rate tuple $(r_1,\ldots,r_K)$ is achievable only if it satisfies,
\begin{align}
\Gamma_A[1,1] \leq 1,
\end{align}
for any $K$-user OBT, where
\begin{align}
\Gamma_A[\ell,i]
&=
\begin{cases}
r_{v[\ell,i]} + \zeta[\ell,i]\Gamma_A[\ell+1,2i-1] + (1-\zeta[\ell,i])\Gamma_B[\ell+1,2i] &\text{ if }\ell<K-1 \cr
r_{v[\ell,i]} + \zeta[\ell,i]r_{v[\ell+1,i]} &\text{ if }\ell=K-1 \cr
0 &\text{ otherwise } \cr
\end{cases},\label{eq:KuOBrec1}\\
\Gamma_B[\ell,i]
&=
\begin{cases}
\eta_{v[\ell,i]}\left[\ell-1,\left\lceil\frac{i}{2}\right\rceil\right]r_{v[\ell,i]}
+ \zeta[\ell,i]\Gamma_A[\ell+1,2i-1]
+ (1-\zeta[\ell,i])\Gamma_B[\ell+1,2i] &\text{ if }\ell<K-1 \cr
\eta_{v[\ell,i]}\left[\ell-1,\left\lceil\frac{i}{2}\right\rceil\right]r_{v[\ell,i]}
+ \zeta[\ell,i]r_{v[\ell+1,i]} &\text{ if }\ell=K-1 \cr
0 &\text{ otherwise } \cr
\end{cases},\label{eq:KuOBrec2}\\
\zeta[\ell,i]
&=
\begin{cases}
\frac{\displaystyle \plus{\eta_{v[\ell+1,2i-1]}[\ell,i]-\mu_{v[\ell+1,2i],v[\ell+1,2i-1]}}}
{\displaystyle 1-\mu_{v[\ell+1,2i],v[\ell+1,2i-1]}} &\text{ if }\ell<K-1 \cr
\eta_{v[\ell+1,i]}[\ell,i]&\text{ if }\ell=K-1 \cr
0 &\text{ otherwise } \cr
\end{cases},\label{eq:KuOBrec3}
\end{align}
where we have,
if $\ell>1$,
\begin{align}
\eta_j[\ell,i]
&=
\begin{cases}
1 &\text{ if } j=v[\ell,i],\ i\text{ is odd}\cr
0 &\text{ if } j=v\left[\ell-1,\left\lceil\frac{i}{2}\right\rceil\right]\cr
\min\left\{\mu_{v[\ell,i],j},\eta_j\left[\ell-1,\left\lceil\frac{i}{2}\right\rceil\right]\right\} & \text{ otherwise }
\end{cases},\label{eq:KuOBrec4a}
\end{align}
and if $\ell=1$
\begin{align}
\eta_j[1,1] =
\begin{cases}
1 & \text{ if } j=v[1,1]\cr
\mu_{v[1,1],j} & \text{ otherwise }
\end{cases}. \label{eq:KuOBrec4}
\end{align}
\end{theorem}
The proof of Theorem~\ref{thm:KuOB} may be found in Appendix~\ref{app:KuOB}. Here we remark on how the intuitions from Theorem~\ref{thm:3uOB} are extended to $K$ users.
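For readers who wish to evaluate the bound numerically, the recursion (\ref{eq:KuOBrec1})--(\ref{eq:KuOBrec4}) can be transcribed almost verbatim. The sketch below is ours; it adopts the convention, consistent with Definition~\ref{def:obt}, that the sole child of node $i$ at level $K-1$ is node $i$ at level $K$. As a sanity check, applying it to the 3-user OBT discussed in the remark below reproduces the coefficients of (\ref{eq:3u2}).
\begin{verbatim}
from math import ceil

def eval_obt(K, v, mu, r):
    # Direct transcription of Gamma_A, Gamma_B, zeta, and eta.
    def eta(j, l, i):
        if l == 1:
            return 1.0 if j == v[(1, 1)] else mu[(v[(1, 1)], j)]
        p = (l - 1, ceil(i / 2))
        if j == v[(l, i)] and i % 2 == 1:
            return 1.0
        if j == v[p]:
            return 0.0
        return min(mu[(v[(l, i)], j)], eta(j, *p))

    def zeta(l, i):
        if l < K - 1:
            a, b = v[(l + 1, 2 * i - 1)], v[(l + 1, 2 * i)]
            return max(eta(a, l, i) - mu[(b, a)], 0.0) / (1 - mu[(b, a)])
        return eta(v[(l + 1, i)], l, i)  # sole child at level K

    def gamma(l, i, first):
        # first=True gives Gamma_A; first=False gives Gamma_B.
        lead = (r[v[(l, i)]] if first else
                eta(v[(l, i)], l - 1, ceil(i / 2)) * r[v[(l, i)]])
        if l < K - 1:
            return (lead + zeta(l, i) * gamma(l + 1, 2 * i - 1, True)
                    + (1 - zeta(l, i)) * gamma(l + 1, 2 * i, False))
        return lead + zeta(l, i) * r[v[(l + 1, i)]]

    return gamma(1, 1, True)  # the bound asserts this is <= 1

# Example: the 3-user OBT of Figure fig:tree3 with (i,j,k) = (1,2,3)
v3 = {(1, 1): 1, (2, 1): 2, (2, 2): 3, (3, 1): 3, (3, 2): 2}
mu = {(1, 2): 0.9, (1, 3): 0.8, (2, 1): 0.5,
      (2, 3): 0.4, (3, 1): 0.5, (3, 2): 0.3}
r = {1: 0.2, 2: 0.3, 3: 0.4}
# Matches r_1 + 0.9 r_2 + (0.8 - 0.6*0.4/0.7) r_3 from (eq:3u2)
print(eval_obt(3, v3, mu, r))
\end{verbatim}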
\begin{remark}
Consider the 3-user bound with respect to the more general statement of Theorem~\ref{thm:KuOB}. Figure~\ref{fig:tree3} depicts the exact assignment of labels for a 3-user OBT that results in Theorem~\ref{thm:3uOB}.
\begin{figure}
\centering
\begin{tikzpicture}[yscale=1.35]
\node (a) at (0,0) [draw,thick,rounded corners] {$v[1,1] = i$};
\node (b) at (-1,-1) [draw,thick,rounded corners] {$v[2,1] = j$};
\node (c) at (1,-1) [draw,thick,rounded corners] {$v[2,2] = k$};
\node (d) at (-1,-2) [draw,thick,rounded corners] {$v[3,1] = k$};
\node (e) at (1,-2) [draw,thick,rounded corners] {$v[3,2] = j$};
\draw[thick,-latex] (a)--(b);
\draw[thick,-latex] (a)--(c);
\draw[thick,-latex] (b)--(d);
\draw[thick,-latex] (c)--(e);
\end{tikzpicture}
\caption{The OBT for Theorem~\ref{thm:3uOB}.}\label{fig:tree3}\end{figure}
Recall that the construction of the outer bound in Theorem~\ref{thm:3uOB} began with applying Fano's inequality at User $i$ (i.e., the root node label of the OBT), and then applying (\ref{eq:lem:split:2}) of Lemma~\ref{lem:split}. Applying (\ref{eq:lem:split:2}) resulted in two terms $A$ and $B$ in (\ref{eq:3uprooffano2}), each of which was canceled by analysis of a different user with enhanced side information. This is reflected in the first case of (\ref{eq:KuOBrec1}), where in addition to the rate of the user associated with the node label, we have the quantities $\Gamma_A[\cdot]$ and $\Gamma_B[\cdot]$ associated with the expressions that will cancel $A$ and $B$, respectively. The scaling terms $\zeta[\ell,i]$ reflect the appropriate scaling terms needed for the cancellation; e.g., consider the final step in the proof of Theorem~\ref{thm:3uOB} where we took a weighted sum of (\ref{eq:3uprooffano2})--(\ref{eq:3uproof2B2}). The last quantity, $\eta_j[\ell,i]$, tracks the side information enhancement through each level of recursion.
\end{remark}
\begin{remark}
It is worth noting that the terms associated with the $(K-1)$-th layer of the OBT are special: this layer represents the ``base case'' of the recursion, and in the 3-user scenario, we reached this base case after only one application of (\ref{eq:lem:split:2}). At the $(K-1)$-th layer, instead of (\ref{eq:lem:split:2}) we apply (\ref{eq:lem:split:1}), which is reflected by the associated value of $\zeta[\ell,i]$ in (\ref{eq:KuOBrec3}).
\end{remark}
\begin{remark}
By evaluating Theorem~\ref{thm:KuOB} and comparing with the condition for achievability using conventional random coding (\ref{eq:randomnetworkrate}) we arrive at the following result:
\begin{proposition}\label{prop:KuSym}
Consider a $K$-user BIC where $\mu_{ij}=\mu$ for all $i\neq j$. The capacity region is the set of all rate tuples $(r_1,\ldots,r_K)$ satisfying for every $i\in\{1,\ldots,K\}$
\begin{align}
r_i + \mu\sum_{j\neq i} r_j \leq{}& 1\label{eq:KuSym}.
\end{align}
\end{proposition}
\begin{proof}
Achievability results directly from evaluation of (\ref{eq:randomnetworkrate}). To prove the converse, we observe that when $\mu_{ij}=\mu$ for all $j\neq i$, we have
\begin{align}
\zeta[\ell,i] ={}& 0 \quad \text{ for all } \ell<K-1, \qquad \zeta[K-1,i] = \mu,
\end{align}
and if $\ell>1$,
\begin{align*}
\eta_j[\ell,i]
=
\begin{cases}
1 &\text{ if } j=v[\ell,i]\text{ and $i$ is odd}\cr
0 &\text{ if } j=v[\ell-1,\left\lceil\frac{i}{2}\right\rceil]\cr
\mu & \text{ otherwise }
\end{cases}.
\end{align*}
Evaluating recursively through the OBT (since $\zeta[\ell,i]=0$ for $\ell<K-1$, only the rightmost root-to-leaf path survives) yields
\begin{align*}
\Gamma_A[1,1] = r_{v[1,1]} + \mu\left(r_{v[2,2]} + r_{v[3,4]} + \ldots + r_{v[K,2^{K-2}]}\right) \leq{} 1.
\end{align*}
Since the labels along this root-to-leaf path are a permutation of the user indices (i.e., all user indices are represented and there exist no repeats), this is exactly (\ref{eq:KuSym}) with $i=v[1,1]$; ranging over OBTs with each possible root label yields all $K$ inequalities.
\end{proof}
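In particular, for the fully symmetric BIC the symmetric capacity implied by (\ref{eq:KuSym}) is $\frac{1}{1+(K-1)\mu}$ bits per channel use.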
\end{remark}
\section{Numerical Results}\label{sec:num}
In this section we perform numerical analysis of inner and outer bounds to illustrate 1) the gain in achievable rate of hybrid coding over conventional random coding, and 2) the gap between our derived inner and outer bounds.
To limit the scope of possible configurations (parameterized by the $\mu_{ij}$ terms), we focus on two symmetric scenarios for a representative set of parameters. In the first scenario, we consider side information that is ``one-sided symmetric'' (i.e., network parameters such that $\mu_{ij}=\mu_{ik}$ for all $i\neq j \neq k$), while in the second, we consider side information that is ``pairwise symmetric'' (i.e., network parameters such that $\mu_{ij}=\mu_{ji}$ for all $i\neq j$). For each scenario, we will assume that the size of side information is smallest at User~1 and largest at User~3, and we plot the following:
\begin{enumerate}
\item The 3-user outer bound of Theorem~\ref{thm:3uOB}, applied to the symmetric rate,
\item The achieved symmetric rate of the hybrid coding scheme described in Section~\ref{sec:achieve},
\item The \emph{grouped} random coding strategy (described at the beginning of Section~\ref{sec:example}), wherein first a sufficient number of random equations are sent such that Users~2 and 3 can decode $\vec{w}_2$ and $\vec{w}_3$, and then $\vec{w}_1$ is sent,
\item The conventional random coding strategy (also described at the beginning of Section~\ref{sec:example}), wherein a sufficient number of random equations are sent such that all users can decode all messages.
\end{enumerate}
In Figures~\ref{fig:q1a} and~\ref{fig:q2b}, we demonstrate the gap between our BIC inner and outer bounds while focusing on varying the amount of information at the user with the \emph{least} side information. Figure~\ref{fig:q1a} demonstrates the gap between inner and outer bounds on symmetric capacity for a one-sided symmetric BIC problem. In particular, we fix $\mu_{21}=\mu_{23}=\frac{1}{2}$ and $\mu_{31}=\mu_{32}=\frac{1}{3}$ and consider the impact of varying $\mu_{12}=\mu_{13}=a$ across the range from $\frac{1}{2}$ to 1.
Figure~\ref{fig:q2b} demonstrates the gap between inner and outer bounds on symmetric capacity for a pairwise symmetric BIC problem. In particular, we fix $\mu_{23}=\mu_{32}=\frac{1}{3}$ and $\mu_{13}=\mu_{31}=\frac{1}{2}$
and consider the impact of varying $\mu_{12}=\mu_{21}=b$ across the range from $\frac{1}{2}$ to 1.
In Figures~\ref{fig:q1c} and~\ref{fig:q2d}, we demonstrate the gap between our BIC inner and outer bounds while focusing on varying the amount of information at the user with the \emph{most} side information. Specifically, in Figure~\ref{fig:q1c} we look at a one-sided symmetric scenario and fix $\mu_{12}=\mu_{13}=\frac{2}{3}$ and $\mu_{21}=\mu_{23}=\frac{1}{2}$, while varying $\mu_{31}=\mu_{32}=c$ across a range from 0 to $\frac{1}{2}$; in Figure~\ref{fig:q2d} we look at the pairwise symmetric scenario and fix $\mu_{12}=\mu_{21}=\frac{2}{3}$ and $\mu_{13}=\mu_{31}=\frac{1}{2}$, while varying $\mu_{23}=\mu_{32}=d$ across a range from 0 to $\frac{1}{2}$.
In the two BIC problems depicted in Figures~\ref{fig:q1a} and~\ref{fig:q2b}, we point out that as the user with the least amount of side information loses even more side information (increasing $a$ or $b$), the rate achievable by conventional random codes decreases. At some point in each of Figures~\ref{fig:q1a} and~\ref{fig:q2b}, it is in fact better to apply a grouped random coding strategy and assume that User~1 will not attempt to decode $\vec{w}_2$ and $\vec{w}_3$. On the other hand, in the BIC problems depicted in Figures~\ref{fig:q1c} and~\ref{fig:q2d}, since the amount of side information of the least knowledgeable user remains constant (i.e., $\mu_{12}$ and $\mu_{13}$ are fixed), the rate achieved by conventional random coding is constant across the range.
\begin{figure}[ht]
\centering
\subfigure[$\mu_{12}=\mu_{13}=a$, $\mu_{21}=\mu_{23}=\frac{1}{2}$, and $\mu_{31}=\mu_{32}=\frac{1}{3}$ ]{
\begin{tikzpicture}[font=\scriptsize,%
every axis/.style={
ymax=0.55,%
ymin=0.3,%
ytick={0.3,0.35,...,0.55}}]
\begin{axis}[
width=9cm,
height=6cm,
xlabel=$a$,
ylabel=$r_{sym}$ (bits),
every axis y label/.style=
{at={(ticklabel cs:0.5)},rotate=90,anchor=near ticklabel},
legend style={at={(0.005,0.01)},anchor=south west}]
\addplot[thick,color=red,mark=o,mark repeat=4]
plot file {1a1213-UB.data};
\addlegendentry{Upper Bound};
\addplot[thick,color=blue,mark=x,mark repeat=4]
plot file {1a1213-HC.data};
\addlegendentry{Hybrid Coding};
\addplot[color=black!50!black,mark=+,thick,dashed,mark repeat=4]
plot file {1a1213-GRC.data};
\addlegendentry{Grouped Random Coding};
\addplot[color=black!50!black,mark=+,mark repeat=4]
plot file {1a1213-RC.data};
\addlegendentry{Conventional Random Coding};
\end{axis}
\end{tikzpicture}\label{fig:q1a}
}
\subfigure[$\mu_{12}=\mu_{21}=b$, $\mu_{13}=\mu_{31}=\frac{1}{2}$, and $\mu_{23}=\mu_{32}=\frac{1}{3}$ ]{
\begin{tikzpicture}[font=\scriptsize,%
every axis/.style={
ymax=0.55,%
ymin=0.3,%
ytick={0.3,0.35,...,0.55}}]
\begin{axis}[
width=9cm,
height=6cm,
xlabel=$b$,
ylabel=$r_{sym}$ (bits),
every axis y label/.style=
{at={(ticklabel cs:0.5)},rotate=90,anchor=near ticklabel},
legend style={at={(0.005,0.01)},anchor=south west}]
\addplot[thick,color=red,mark=o,mark repeat=4]
plot file {2a1213-UB.data};
\addlegendentry{Upper Bound};
\addplot[thick,color=blue,mark=x,mark repeat=4]
plot file {2a1213-HC.data};
\addlegendentry{Hybrid Coding};
\addplot[color=black!50!black,mark=+,thick,dashed,mark repeat=4]
plot file {2a1213-GRC.data};
\addlegendentry{Grouped Random Coding};
\addplot[color=black!50!black,mark=+,mark repeat=4]
plot file {2a1213-RC.data};
\addlegendentry{Conventional Random Coding};
\end{axis}
\end{tikzpicture} \label{fig:q2b}
}\\
\subfigure[$\mu_{12}=\mu_{13}=\frac{2}{3}$, $\mu_{21}=\mu_{23}=\frac{1}{2}$, and $\mu_{31}=\mu_{32}=c$ ]{
\begin{tikzpicture}[font=\scriptsize,%
every axis/.style={
ymax=0.55,%
ymin=0.3,%
ytick={0.3,0.35,...,0.55}}]
\begin{axis}[
width=9cm,
height=6cm,
xlabel=$c$,
ylabel=$r_{sym}$ (bits),
every axis y label/.style=
{at={(ticklabel cs:0.5)},rotate=90,anchor=near ticklabel},
legend style={at={(0.005,0.01)},anchor=south west}]
\addplot[thick,color=red,mark=o,mark repeat=4]
plot file {1c1213-UB.data};
\addlegendentry{Upper Bound};
\addplot[thick,color=blue,mark=x,mark repeat=4]
plot file {1c1213-HC.data};
\addlegendentry{Hybrid Coding};
\addplot[color=black!50!black,mark=+,thick,dashed,mark repeat=4]
plot file {1c1213-GRC.data};
\addlegendentry{Grouped Random Coding};
\addplot[color=black!50!black,mark=+,mark repeat=4]
plot file {1c1213-RC.data};
\addlegendentry{Conventional Random Coding};
\end{axis}
\end{tikzpicture}
\label{fig:q1c}
}
\subfigure[$\mu_{12}=\mu_{21}=\frac{2}{3}$, $\mu_{13}=\mu_{31}=\frac{1}{2}$, and $\mu_{23}=\mu_{32}=d$ ]{
\begin{tikzpicture}[font=\scriptsize,%
every axis/.style={
ymax=0.55,%
ymin=0.3,%
ytick={0.3,0.35,...,0.55}}]
\begin{axis}[
width=9cm,
height=6cm,
xlabel=$d$,
ylabel=$r_{sym}$ (bits),
every axis y label/.style=
{at={(ticklabel cs:0.5)},rotate=90,anchor=near ticklabel},
legend style={at={(0.005,0.01)},anchor=south west}]
\addplot[thick,color=red,mark=o,mark repeat=4]
plot file {2c1213-UB.data};
\addlegendentry{Upper Bound};
\addplot[thick,color=blue,mark=x,mark repeat=4]
plot file {2c1213-HC.data};
\addlegendentry{Hybrid Coding};
\addplot[color=black!50!black,mark=+,thick,dashed,mark repeat=4]
plot file {2c1213-GRC.data};
\addlegendentry{Grouped Random Coding};
\addplot[color=black!50!black,mark=+,mark repeat=4]
plot file {2c1213-RC.data};
\addlegendentry{Conventional Random Coding};
\end{axis}
\end{tikzpicture}
\label{fig:q2d}
}
\caption{Inner and outer bounds on the symmetric capacity of example 3-user BIC problems: (a) one-sided side information symmetry and (b) pairwise side information symmetry, while varying the least knowledgeable user's side information; and
(c) one-sided side information symmetry and (d) pairwise side information symmetry, while varying the most knowledgeable user's side information.}\label{fig:q}
\end{figure}
With the figures, we highlight the following observations about our inner and outer bounds:
\begin{enumerate}
\item There exists a threshold on the side information parameters below which, in the best hybrid coding strategy, all three users decode all messages and thus the achieved rate is the same as with conventional random codes. In particular, this is true for small $a$ and $b$ in Figures~\ref{fig:q1a} and \ref{fig:q2b} and larger $c$ and $d$ in Figures~\ref{fig:q1c} and \ref{fig:q2d}, respectively. However, beyond this threshold (larger $a$ and $b$, and smaller $c$ and $d$), we observe a clear potential for increased rate from hybrid codes. It is worth noting that the regimes where hybrid codes offer a rate increase are those further from the fully symmetric BIC problem (where all network parameters, $\mu_{ij}$, are the same). Recall that for the fully symmetric BIC problem the entire capacity region is achievable using conventional random coding (see Proposition~\ref{prop:KuSym}).
\item In Figures~\ref{fig:q1a} and~\ref{fig:q2b}, when $a=1$ or $b=1$ there is no side information at User~1 to exploit. Hence, both hybrid coding and grouped random coding achieve the genie upper bound.
\item Although there exists a gap between our inner and outer bounds, we highlight a specific case where our new hybrid coding scheme both provides strictly positive rate gain over conventional random coding \emph{and meets the new upper bound}: in Figure~\ref{fig:q2d} when $d=0$. This scenario is related to the one considered in the motivating example of Section~\ref{sec:example}, in the sense that Users~2 and 3 know each other's complete message as side information.
\end{enumerate}
\section{Blind Index Coding over Wireless Channels}\label{sec:BICW}
In this section, we generalize the BIC problem model further to consider the impact of uncertainty not only within the side information given to users, but also in the sender-to-user broadcast channel (recall that in the BIC problem this channel was error free). In particular, we emulate loss of packetized transmissions due to fading in wireless channels using a binary fading model for the sender-to-user broadcast. Consequently, the problem considered here will be referred to as blind index coding over wireless channels (BICW).
As we will see, considering wireless transmissions adds new challenges to the problem, and, surprisingly, repetition of uncoded bits (within the hybrid coding framework) becomes a powerful technique for increasing achievable rate. Unlike the BIC problem considered in the previous sections, even the 2-user BICW problem is nontrivial. Hence, in this section we focus on a 2-user problem representative of general BICW problems. After formally defining the representative problem, we define a hybrid coding scheme that not only XORs random combinations of some messages with uncoded bits of others, but also uses repetition of uncoded bits. We derive the achievable rate regions of these hybrid codes with repetitions, and then demonstrate numerically the resulting gain in achievable rate that our scheme provides over conventional methods.
\subsection{Wireless Broadcast Channel Model}
\label{sec:BICWmod}
In the BICW scenario the channel output received by User~$i$, $\vec{y}_i$, is governed by a binary fading process. Specifically, let $\vec{\gamma}_{i}$ be a binary vector with the same length as the channel input vector $\vec{x}$, drawn i.i.d.\ from a Bernoulli$(1-\epsilon_{i})$ distribution. The channel output for User~$i$ is given by the input-output relationship
\begin{align}
y_{i}[\ell] = \gamma_{i}[\ell]{x}[\ell].
\end{align}
User~$i$ knows $\vec{\gamma}_{i}$; however, the sender is only aware of the parameters $\{\epsilon_{i}\}$, which govern the probabilistic behavior of the sender-to-user broadcast channel.
In this section, we assume the model depicted in Figure~\ref{fig:model2}, containing only two users where $\epsilon_1 < \epsilon_2$, $\mu_{12}=1$, and $\mu_{21}=\mu$ (i.e., User~1 has a better channel than User~2 but no side information).
\begin{figure}[ht]
\centering
\begin{tikzpicture}[yscale=0.28,font=\footnotesize]
\node (m) at (0,0) [] {$\vec{w}_1, \vec{w}_2$};
\node (s) at (1.5,0) [draw,thick] {$\mathsf{S}$};
\node (e1) at (4,1) [draw,rounded corners,inner sep=2pt] {$P_\mathrm{erase} = \epsilon_1$};
\node (e2) at (4,-1) [draw,rounded corners,inner sep=2pt] {$P_\mathrm{erase} = \epsilon_2$};
\node (eSI) at (4,-3) [draw,rounded corners,inner sep=2pt] {$P_\mathrm{erase} = \mu$};
\node (d1) at (6,1) [draw,thick] {$\mathsf{U}_1$};
\node (d2) at (6,-1) [draw,thick] {$\mathsf{U}_2$};
\draw[thick,-latex] (m) -- (s);
\draw[thick] (s) -- node[above] {$\vec{\mathbf{x}}^n$} (2.75,0);
\draw[thick] (2.75,0) |- (e1);
\draw[thick] (2.75,0) |- (e2);
\draw[thick,-latex] (e1) -- node[above] {$\vec{\mathbf{y}}_1^n$}(d1);
\draw[thick,-latex] (e2) -- node[above] {$\vec{\mathbf{y}}_2^n$}(d2);
\draw[decorate,decoration={brace},thick] (-0.1,-0.5) -- (-0.6,-0.5);
\draw[thick] (-0.35,-0.6) |- (eSI);
\draw[thick,-latex] (eSI) -| node[right] {$\vec{\mathbf{\psi}}$}(d2);
\end{tikzpicture}
\caption{2-user instance of the BICW problem.}\label{fig:model2}
\end{figure}
\begin{remark}
We assume that $\epsilon_1 < \epsilon_2$ and that side information is only given to User~2 (i.e., $\mu_{12}=1$) for ease of exposition. In all other 2-user settings (i.e., arbitrary $\epsilon_1$ and $\epsilon_2$ and side information at either user), either there is no index coding gain even if the sender knows the side information, or the natural generalization of our proposed scheme recovers some index coding gain and outperforms conventional approaches.
\end{remark}
Our main result for this setting is as follows.
\begin{theorem}\label{thm:HRC}
For the 2-user BICW problem defined above, the rate region $\mathcal{R}$ is achievable, where $\mathcal{R}$ is the set of all non-negative rate pairs $(r_1,r_2)$ satisfying,
\begin{align}
r_1+r_2 \leq{}& 1-\epsilon_1,\label{eq:ratereg_A}\\
\omega_1(L)r_1 +\omega_2(L) r_2 \leq{}& 1-\epsilon_2,\quad L=1,\ldots,L_{max}\label{eq:ratereg_B}
\end{align}
where
\begin{align}
\omega_1(L) ={}& \frac{1-\epsilon_2}{1-\epsilon_1}\epsilon_1^{L} + \mu(1-\epsilon_2^{L})\omega_2(L)
+ L(1-\epsilon_2)\left(1 -\omega_2(L)\right),\label{eq:weight1}\\
\omega_2(L) ={}& \min\left\{\frac{1-\epsilon_1^{L}}{1-\mu\epsilon_2^{L}},1\right\},\label{eq:weight2}\\
L_{max} \triangleq{}& 1+\left\lfloor\frac{\log(\mu)}{\log(\epsilon_1/\epsilon_2)}\right\rfloor.\label{eq:Lmax}
\end{align}
\end{theorem}
\begin{remark}
Notice that as $\epsilon_2\rightarrow 0$ (and by the assumption $\epsilon_2>\epsilon_1$, as $\epsilon_1\rightarrow 0$), the BICW problem reverts to a BIC problem. Moreover as $\epsilon_2\rightarrow 0$, $\omega_1(L)\rightarrow\mu$
and $\omega_2(L)\rightarrow 1$, resulting in the achievable region of rate pairs satisfying:
\begin{align*}
r_1+r_2\leq{}& 1,\\
\mu r_1+r_2\leq{}& 1,
\end{align*}
which is equivalent (given assumptions on $\mu_{12}$ and $\mu_{21}$) to the 2-user BIC capacity region (formally stated in Proposition~\ref{prop:2u}).
\end{remark}
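The boundary of $\mathcal{R}$ is also easy to evaluate numerically. The sketch below (our own helper, useful for reproducing plots such as those in Figure~\ref{fig:BICW}) computes, for a given $r_1$, the largest $r_2$ permitted by (\ref{eq:ratereg_A}) and (\ref{eq:ratereg_B}):
\begin{verbatim}
import math

def region_boundary(r1, eps1, eps2, mu):
    """Largest r2 with (r1, r2) in the region R of Theorem thm:HRC."""
    L_max = 1 + math.floor(math.log(mu) / math.log(eps1 / eps2))
    bounds = [1 - eps1 - r1]                      # (eq:ratereg_A)
    for L in range(1, L_max + 1):
        w2 = min((1 - eps1 ** L) / (1 - mu * eps2 ** L), 1.0)
        w1 = ((1 - eps2) / (1 - eps1) * eps1 ** L
              + mu * (1 - eps2 ** L) * w2
              + L * (1 - eps2) * (1 - w2))
        bounds.append((1 - eps2 - w1 * r1) / w2)  # (eq:ratereg_B)
    return max(0.0, min(bounds))

# Scenario of Figure fig:p1 (eps1 = 1/2, eps2 = 3/4, mu = 1/2):
print(region_boundary(0.39, 0.5, 0.75, 0.5))  # 0.11, the kink point
\end{verbatim}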
\subsection{Proof of Theorem~\ref{thm:HRC}}
\label{sec:ach}
This section is organized as follows. We first define the hybrid coding scheme by specifying a class of generator matrices which map length-$m$ message vectors to length-$n$ codewords, and which are parametrized by three quantities: $\rho$, $L$, and $\alpha$. For each $n$, the transmitter maps the two messages, $\vec{w}_1$ and $\vec{w}_2$, to codewords using corresponding generator matrices (with different parameters), and XORs the two codewords to produce the channel input vector.
We then specify the method of decoding and establish the achievable rate region for our coding scheme when fixing the generator matrix parameters for all $n$. By doing so, we show that for any $(r_1,r_2)\in \mathcal{R}$ (as defined in Theorem~\ref{thm:HRC}) there exists a choice of parameters such that $(r_1,r_2)$ is achievable, thus proving Theorem~\ref{thm:HRC}.
\subsubsection{Encoding}
Our hybrid coding scheme encodes $\vec{w}_1$ and $\vec{w}_2$ separately and linearly, before combining the resulting codewords through bit-wise XOR.
The codeword for each message is constructed in a manner similar to the single-message components of the BIC hybrid codes from the previous section: uncoded repetitions of message bits are supplemented by random linear combinations. The specific mapping from message to codeword is formalized in the following definition, parametrized for a given $n$ by three quantities $\rho$, $L$, and $\alpha$:
\begin{definition}[Repetition plus Random Parity (RRP) Matrix]
\label{def:RRP}
An $n\times m$ RRP matrix with parameters $\rho\in [0,1]$, $L\in\mathbb{N}$, and $\alpha\in[0,1]$ is a binary matrix, $\mathbf{U}$, with the form:
\begin{align}
\mathbf{U} ={}& \begin{bmatrix}
\mathbf{B}^\top &
\mathbf{A}_{1}^\top &
\ldots &
\mathbf{A}_{L+1}^\top &
\mathbf{0}
\end{bmatrix}^\top,\label{eq:U1}
\end{align}
where\vspace{-0.25cm}
\begin{align*}
\mathbf{A}_\ell = \begin{cases}
\mathbf{I}_{m} & \text{ if } \ell\leq L\cr
[\mathbf{I}_{\alpha m}\quad \mathbf{0}] & \text{ else}
\end{cases},
\end{align*}
and $\mathbf{B}$ is a $\rho n\times m$ matrix with entries drawn i.i.d. from $\mathrm{Bernoulli}\left(\frac{1}{2}\right)$. For feasibility, we require that $\alpha m$ is an integer, and
\begin{align}
(L+\alpha)\frac{m}{n}+\rho \leq 1. \label{eq:feasible}
\end{align}
\end{definition}
\begin{remark}
Simply stated, an RRP matrix maps a length-$m$ message vector to a length-$n$ codeword by repeating each uncoded message bit either $L$ or $L+1$ times. The parameter $\alpha$ specifies the fraction of bits repeated $L+1$ times, while $\rho$ specifies the proportion of the length-$n$ codeword reserved for random linear coded parity. Inequality (\ref{eq:feasible}) ensures that $\mathbf{U}$ is an $n\times m$ matrix.
It is worth noting that in the hybrid encoding scheme described for the 3-user (non-wireless) BIC, the mappings of messages $\vec{w}_1$, $\vec{w}_2$, and $\vec{w}_3$ to sequences before the XOR (i.e., the individually colored bars in Figure~\ref{fig:3coding}) could be interpreted as RRP matrices: for $\vec{w}_1$ we chose $L=\alpha=0$, and for messages $\vec{w}_2$ and $\vec{w}_3$ we chose $L=1$ and $\alpha=0$.
The use of RRP matrices with $L>1$ and $\alpha>0$ (i.e., the \emph{repetition} of uncoded message bits) is the key innovation to hybrid coding that enables higher rate in the wireless setting.
\end{remark}
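A direct construction of an RRP matrix, written as a sketch under the assumption that $\rho n$ and $\alpha m$ are integers, may help fix the block structure of (\ref{eq:U1}):
\begin{verbatim}
import numpy as np

def rrp_matrix(n, m, rho, L, alpha, rng=np.random.default_rng(0)):
    # Assumes rho*n and alpha*m are integers and (eq:feasible) holds.
    assert (L + alpha) * m / n + rho <= 1
    B = rng.integers(0, 2, size=(int(rho * n), m))  # random parities
    blocks = [B] + [np.eye(m, dtype=int)] * L       # L full repetitions
    am = int(alpha * m)                             # A_{L+1} = [I 0]
    blocks.append(np.hstack([np.eye(am, dtype=int),
                             np.zeros((am, m - am), dtype=int)]))
    U = np.vstack(blocks)
    pad = np.zeros((n - U.shape[0], m), dtype=int)  # trailing zero rows
    return np.vstack([U, pad])
\end{verbatim}
The channel input of the scheme described next is then simply \verb|(U1 @ w1 + U2 @ w2) % 2| for RRP matrices \verb|U1| and \verb|U2| with the appropriate parameters.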
Using the defined RRP matrices, we now describe the encoding scheme that maps messages $\vec{w}_1$ and $\vec{w}_2$ to a length-$n$ channel input vector.
Let $n$, $m_1^{(n)}$, and $m_2^{(n)}$ be given. For each $n$, let $\mathbf{U}_1$ be an $n\times m_1^{(n)}$ RRP matrix with parameters $(\rho_1,L_1,\alpha_1)$ and $\mathbf{U}_2$ be an $n\times m_2^{(n)}$ RRP matrix with parameters $(\rho_2,L_2,\alpha_2)$.
The channel input vector $\vec{\mathbf{x}}^n$ is given by (assuming modulo-2 addition):
\begin{align*}
\vec{\mathbf{x}}^n
={}& \begin{bmatrix}\mathbf{U}_1 & \mathbf{U}_2\end{bmatrix}\begin{bmatrix}\vec{w}_1 \\ \vec{w}_2\end{bmatrix}\\
={}& \mathbf{U}_1\vec{w}_1 + \mathbf{U}_2\vec{w}_2.
\end{align*}
Figure~\ref{fig:BICWcoding} depicts an example hybrid encoding with repetitions for the 2-user BICW setting. In this particular example, $L_1=2$ and $\alpha_1=0.5$.
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1.7,yscale=0.66,font=\footnotesize]
\filldraw [draw=green!50!black,fill=green!60!black!50,thick] (0,1) rectangle (5,1.5);
\draw (-0.45,1.25) node[] {$\mathbf{U}_2\vec{w}_2$:\ };
\draw (-0.45,1.75) node[] {$\oplus$\ };
\draw (3.75,2.312) node[]{\footnotesize $\vec{w}_1$};
\fill[white] (3.75,2) rectangle (4.75,2.5);
\filldraw [draw=red,fill=red!50,thick] (0,2) rectangle (1.25,2.5);
\draw [draw=red,thick] (1.25,2) rectangle (2.25,2.5);
\draw [draw=red,thick] (2.25,2) rectangle (3.25,2.5);
\draw [draw=red,thick] (3.25,2) rectangle (3.75,2.5);
\draw (1.75,2.312) node[]{\footnotesize $\vec{w}_1$};
\draw (2.75,2.312) node[]{\footnotesize $\vec{w}_1$};
\draw (-0.45,2.25) node[] {$\mathbf{U}_1\vec{w}_1$:\ };
\draw[latex-latex,thin] (0,3) --node [fill=white,inner sep=1pt]{$\rho_1=0.25$} (1.25,3);
\draw[latex-latex,thin] (1.25,3) --node [fill=white,inner sep=1pt]{$L_1=2$} (3.25,3);
\draw[latex-latex,thin] (3.25,3) -- (3.75,3);
\draw[thin] (3.5,3) -- (3.5,3.5) node [fill=white,inner sep=1pt]{$\alpha_1=0.5$};
\draw[latex-latex,thin] (2.25,1.75) --node [fill=white,inner sep=1pt]{$m_1$} (3.25,1.75);
\draw[latex-latex,thin] (0,0.625) --node [fill=white,inner sep=1pt]{$\rho_2=1$} (5,0.625);
\draw[latex-latex,thin] (0,0.125) --node [fill=white,inner sep=1pt]{$n$} (5,0.125);
\draw[dotted,thin] (0,3.25) -- (0,0);
\draw (1.875,1.75) node {$\oplus$};
\draw[dotted,thin] (5,3.25) -- (5,0);
\end{tikzpicture}\vspace{-2.5ex}
\caption{An example hybrid coding scheme for the 2-user BICW setting, where $(\rho_1,L_1,\alpha_1)=(0.25,2,0.5)$, and $(\rho_2,L_2,\alpha_2)=(1,0,0)$. Outlined boxes represent uncoded bits, shaded boxes represent RLCs of a single message.
\vspace{-4.5ex}}\label{fig:BICWcoding}
\end{figure}
\subsubsection{Decoding}
We now specify the decoding strategy and then characterize the achievable rates for our scheme with fixed parameters $\rho_i$, $L_i$ and $\alpha_i$, $i=1,2$. In what follows, we choose $(\rho_2,L_2,\alpha_2) =(1,0,0)$ (i.e., User~2's generator matrix, $\mathbf{U}_2$, is a random matrix). Choosing parameters $(\rho_1,L_1,\alpha_1)$ is more nuanced and will be addressed within the analysis. For brevity, we will not explicitly analyze the error rates of our scheme for given $n$, but instead provide a sketch of the achievability proof using existing results for random linear codes over point-to-point erasure channels.
In our decoding strategy, User~1 first decodes $\vec{w}_2$ and peels its interfering contribution from its received signal, and then decodes its desired message, $\vec{w}_1$. User~2 only decodes $\vec{w}_2$. We first describe decoding $\vec{w}_2$ at each user.
Recall that the channel input at any time, $t$, is given by
\mbox{$\mathbf{x}[t]=\mathbf{U}_1(t,:)\vec{w}_1 + \mathbf{U}_2(t,:)\vec{w}_2$},
where $\mathbf{U}_i(t,:)$ is the $t$-th row of generator matrix $\mathbf{U}_i$.
The decoding strategy for $\vec{w}_2$ used by both users is based on the following observation. If $t$ and $t^\prime\neq t$ both correspond to a repetition of the same message bit from $\vec{w}_1$, then the modulo-2 sum of these yields
\mbox{$\mathbf{x}[t]+\mathbf{x}[t^\prime] = (\mathbf{U}_2(t,:)+\mathbf{U}_2(t^\prime,:))\vec{w}_2$},
which is a random linear combination of only $\vec{w}_2$ bits (since $\rho_2=1$). By this method we ``clean'' equations of $\vec{w}_1$. User~2 has the additional option of using its side information to clean equations, to the same effect.
The cleaned random linear equations are used by each user in conjunction with those that by construction were only functions of $\vec{w}_2$ (i.e., for those $t$ where in (\ref{eq:U1}) $\mathbf{U}_1(t,:)=0$) to decode $\vec{w}_2$.
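A short numerical check of this cleaning step (with hypothetical values of our own choosing) confirms that the XOR of two repetitions of the same $\vec{w}_1$ bit depends on $\vec{w}_2$ alone:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
w1 = rng.integers(0, 2, 4)
w2 = rng.integers(0, 2, 6)
U2 = rng.integers(0, 2, (2, 6))      # rho_2 = 1: fully random rows
x_t  = (w1[0] + U2[0] @ w2) % 2      # time t repeats w1[0]
x_tp = (w1[0] + U2[1] @ w2) % 2      # time t' repeats w1[0] again
# XOR removes w1[0]; what remains is a random equation in w2 only:
assert (x_t + x_tp) % 2 == ((U2[0] + U2[1]) % 2) @ w2 % 2
\end{verbatim}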
After decoding $\vec{w}_2$, User~1 removes the contribution of $\vec{w}_2$ from its received signal before decoding $\vec{w}_1$.
If any of these decodings fail, then an error occurs. We now claim that the decoding scheme yields the following achievable rates, proven in Appendix~\ref{app:lem:achieveBICW}:
\begin{lemma}\label{lem:achieveBICW}
Consider the 2-user BICW problem defined by network parameters $\epsilon_1$, $\epsilon_2$, and $\mu$, and let $\rho_1\in[0,1]$, $L_1\in\mathbb{N}$, and $\alpha_1\in[0,1)$ be fixed. A rate pair $(r_1,r_2)$ is achievable if it satisfies,%
\begin{align}
r_1 \leq{}& \frac{1-\rho_1}{L_1+\alpha_1},\label{eq:achJLA0}\\
r_1 \leq{}& \rho_1\frac{1-\epsilon_1}{\epsilon_1^{L_1}-\alpha_1(\epsilon_1^{L_1}-\epsilon_1^{L_1+1})},\label{eq:achJLA1}\\
[1-\epsilon_1^{L_1}+\alpha_1(\epsilon_1^{L_1}-\epsilon_1^{L_1+1})]r_1+r_2 {}\leq{}& (1-\epsilon_1)(1-\rho_1),\label{eq:achJLA2}\\
\mu[1-\epsilon_2^{L_1}+\alpha_1(\epsilon_2^{L_1}-\epsilon_2^{L_1+1})]r_1+r_2 {}\leq{}& (1-\epsilon_2)(1-\rho_1).\label{eq:achJLA3}
\end{align}
\end{lemma}
From Lemma~\ref{lem:achieveBICW}, it is clear that by taking the union of achievable rate pairs over all $(\rho_1,L_1,\alpha_1)$ we arrive at the rate region achievable by our schemes. Specifically, let $\mathcal{R}(\rho_1,L_1,\alpha_1)$ for $\rho_1\in[0,1]$, $L_1\in\mathbb{N}$, and $\alpha_1\in[0,1]$ be defined as the set of all pairs $(r_1,r_2)$ satisfying (\ref{eq:achJLA0})--(\ref{eq:achJLA3}), and define the rate region:
\begin{align}
\overline{\mathcal{R}}\triangleq \bigcup_{\rho_1,L_1,\alpha_1} \mathcal{R}(\rho_1,L_1,\alpha_1).\label{eq:BICW_ratereg_bar}
\end{align}
To complete the proof of Theorem~\ref{thm:HRC}, we now demonstrate that the region $\mathcal{R}$ (as defined in Theorem~\ref{thm:HRC}) is contained within $\overline{\mathcal{R}}$ (given in (\ref{eq:BICW_ratereg_bar})), and thus is achievable. To do so, we need only show that for every rate pair $(r_1,r_2)\in\mathcal{R}$, there exist parameters $(\rho_1,L_1,\alpha_1)$ such that (\ref{eq:achJLA0})--(\ref{eq:achJLA3}) are satisfied. We therefore fix $r_1$ to any value in the interval $[0,1-\epsilon_1]$, and choose parameters $\rho_1^*$, $L_1^*$, and $\alpha_1^*$ as
\begin{align}
L_1^* ={}&
\begin{array}[t]{cl}
\maximize & \min\{L,L_{max}\}\\
\subjectto & L\in\mathbb{N} \\
& \frac{\epsilon_1^{L}}{1-\epsilon_1} r_1 \leq 1 - L r_1\end{array},\label{eq:opar2}\\
\alpha_1^* ={}& \begin{cases}
0 & \text{ if }L_1^*=L_{max}\cr
\frac{1-r_1\left(\frac{\epsilon_1^{L_1^*}}{1-\epsilon_1}+L_1^*\right)}{r_1(1-\epsilon_1^{L_1^*})} & \text{ if }L_1^*<L_{max}
\end{cases},\label{eq:opar3}\\
\rho_1^* ={}& \frac{\epsilon_1^{L_1^*}-\alpha_1^*(\epsilon_1^{L_1^*}-\epsilon_1^{L_1^*+1})}{1-\epsilon_1}r_1,\label{eq:opar1}
\end{align}
where $L_{max}$ is as defined in (\ref{eq:Lmax}).
Notice that given $r_1$, we first determine the appropriate $L_1^*$, then $\alpha_1^*$, and finally $\rho_1^*$, and that both (\ref{eq:achJLA0}) and (\ref{eq:achJLA1}) are satisfied by the chosen parameters.
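Concretely, for any $r_1\in(0,1-\epsilon_1]$ the selection (\ref{eq:opar2})--(\ref{eq:opar1}) can be carried out by a short search; the sketch below is ours and relies on the fact that the constraint in (\ref{eq:opar2}) is monotone in $L$, so the feasible set of $L$ is an interval starting at $L=1$:
\begin{verbatim}
import math

def choose_params(r1, eps1, eps2, mu):
    L_max = 1 + math.floor(math.log(mu) / math.log(eps1 / eps2))
    # Largest L (capped at L_max) with eps1^L/(1-eps1)*r1 <= 1 - L*r1;
    # the slack is nonincreasing in L, and L = 1 is always feasible
    # for r1 <= 1 - eps1.
    L = 1
    while (L < L_max and
           eps1 ** (L + 1) / (1 - eps1) * r1 <= 1 - (L + 1) * r1):
        L += 1
    if L == L_max:
        alpha = 0.0
    else:
        alpha = ((1 - r1 * (eps1 ** L / (1 - eps1) + L))
                 / (r1 * (1 - eps1 ** L)))
    rho = (eps1 ** L - alpha * (eps1 ** L - eps1 ** (L + 1))) \
          / (1 - eps1) * r1
    return rho, L, alpha
\end{verbatim}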
Substituting these into (\ref{eq:achJLA2}) and (\ref{eq:achJLA3}), we see that $r_2$ is achievable if it satisfies both of the following inequalities:
\begin{align}
r_2
\leq{}& (1-\epsilon_1)-\rho_1^*(1-\epsilon_1)
- r_1\left[1-\epsilon_1^{L_1^*}+\alpha_1^* (\epsilon_1^{L_1^*}-\epsilon_1^{L_1^*+1}) \right]\nonumber\\
={}& 1-\epsilon_1-r_1,\label{eq:thmproof0a}\\
r_2 \leq{}& (1-\epsilon_2)-\rho_1^*(1-\epsilon_2) - r_1\mu\left[1-\epsilon_2^{L_1^*}+\alpha_1^* (\epsilon_2^{L_1^*}-\epsilon_2^{L_1^*+1}) \right]
\nonumber\\
={}& (1-\epsilon_2)\left(1 - r_1\left[\frac{\epsilon_1^{L_1^*}}{1-\epsilon_1}+\mu\frac{1-\epsilon_2^{L_1^*}}{1-\epsilon_2} - \alpha_1^*\left(\epsilon_1^{L_1^*}-\mu\epsilon_2^{L_1^*}\right)\right]\right)
\nonumber\\
\stackrel{(a)}{=}{}& \frac{1-\epsilon_2-\omega_1(L_1^*)r_1}{\omega_2(L_1^*)},\label{eq:thmproof0b}
\end{align}
where in (a) we compared the evaluated expression with $\omega_1(L)$ and $\omega_2(L)$ as defined in (\ref{eq:weight1}) and (\ref{eq:weight2}) evaluated at $L_1=L_1^*$. We now point out that (\ref{eq:thmproof0a}) is equivalent to (\ref{eq:ratereg_A}) and (\ref{eq:thmproof0b}) is equivalent to (\ref{eq:ratereg_B}) evaluated at $L=L_1^*$. Moreover, since
\begin{align}
\frac{1-\epsilon_2-\omega_1(L_1^*)r_1}{\omega_2(L_1^*)}
\geq{}& \min_L\ \frac{1-\epsilon_2-\omega_1(L)r_1}{\omega_2(L)},\label{eq:thmproof0}
\end{align}
and the right hand side of (\ref{eq:thmproof0}) represents the tightest version of (\ref{eq:ratereg_B}) for fixed $r_1$, we observe that any $(r_1,r_2)$ satisfying (\ref{eq:ratereg_A}) and (\ref{eq:ratereg_B}) for all $L\leq L_{max}$ (i.e., any $(r_1,r_2) \in \mathcal{R}$) is indeed achievable, thus completing the proof of Theorem~\ref{thm:HRC}.
\subsection{Numerical Results}
For blind index coding over wireless channels, we recall that the key difference was the usefulness of \emph{repeating} uncoded bits within the hybrid coding scheme. Therefore, we now provide numerical results for three BICW scenarios, characterized by $\epsilon_1$, $\epsilon_2$, and $\mu$. In each, we plot $\mathcal{R}$ and highlight regimes (along the x-axes) wherein the number of repetitions used in our scheme increases. For each scenario, we point out the gain in $r_2$ offered by repetition-based hybrid codes over conventional schemes, and for further comparison we also depict rate regions achieved by: 1) conventional random codes as defined at the beginning of Section~\ref{sec:BIC:achieve}, 2) time-division between separate random encodings of $\vec{w}_1$ and $\vec{w}_2$, and 3) the following genie-aided upper bound:
\begin{proposition}
For the 2-user BICW problem setting considered in Theorem~\ref{thm:HRC}, an achievable rate pair $(r_1,r_2)$ must satisfy
\begin{align}
\max\left\{r_1 + r_2,\mu r_1 + \frac{1-\epsilon_1}{1-\epsilon_2} r_2\right\} \leq{}& 1-\epsilon_1.
\end{align}
\end{proposition}
\begin{proof}
The bound may be separated into two outer bounds that correspond to the first and second terms within the $\max$, respectively:
\begin{itemize}
\item $r_1+r_2 \leq 1-\epsilon_1$,
\item $\mu \frac{1-\epsilon_2}{1-\epsilon_1} r_1 + r_2 \leq 1-\epsilon_2$.
\end{itemize}
Denote the subvector of $\vec{w}_1$ given as side information as $\vec{w}_1^+$ and the complementary subvector as $\vec{w}_1^-$. We prove the first bound by applying Fano's inequality at each user to observe:
\begin{align}
nr_1 \leq{}& I\left(\vec{y}_1,\vec{\gamma}_1;\vec{w}_1\right) + o(n)\nonumber\\
={}& H\left(\vec{y}_1|\vec{\gamma}_1\right) - H\left(\vec{y}_1|\vec{\gamma}_1,\vec{w}_1\right) + o(n)\nonumber\\
\stackrel{(a)}{\leq}{}& H\left(\vec{y}_1|\vec{\gamma}_1\right) - H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1\right) + o(n)\nonumber\\
\leq{}& H\left(\vec{y}_1|\vec{\gamma}_1\right) - H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1^+\right) + o(n)\nonumber\\
\leq{}& n(1-\epsilon_1) - H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1^+\right) + o(n), \label{eq:genieprooffano1}\\
nr_2 \leq{}& I\left(\vec{y}_2,\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21};\vec{w}_2\right) + o(n)\nonumber\\
={}&H\left(\vec{y}_2|\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21}\right)- H\left(\vec{y}_2|\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21},\vec{w}_2\right) + o(n)\nonumber\\
\leq{}&H\left(\vec{y}_2|\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21}\right) + o(n)\nonumber\\
={}&H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1^+\right) + o(n), \label{eq:genieprooffano2}
\end{align}
where in step (a) we used the fact that, because the sender does not know the fading states of the sender-to-user channels (and the states are independent of the input), $\vec{y}_2$ is statistically a more-degraded observation of the channel input than $\vec{y}_1$, so that $H\left(\vec{y}_1|\vec{\gamma}_1,\vec{w}_1\right)\geq H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1\right)$. We complete the proof of the first bound by combining (\ref{eq:genieprooffano1}) and (\ref{eq:genieprooffano2}) and normalizing by $n$ as $n$ grows large.
To prove the second bound, we consider a genie which provides the sender with knowledge of which bits of $\vec{w}_1$ are given as side information to User~2. We again apply Fano's inequality at each user, but in a different way, to observe
\begin{align}
n\mu r_1
\leq{}& I\left(\vec{y}_1,\vec{\gamma}_1;\vec{w}_1^- \right) + o(n)\nonumber\\
\leq{}& I\left(\vec{y}_1,\vec{\gamma}_1,\vec{w}_1^+,\vec{w}_2;\vec{w}_1^- \right) + o(n)\nonumber\\
\leq{}& H\left(\vec{y}_1\middle| \vec{\gamma}_1,\vec{w}_1^+,\vec{w}_2\right) + o(n),\label{eq:genieprooffano3}\\
nr_2 \leq{}& I\left(\vec{y}_2,\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21};\vec{w}_2\right) + o(n)\nonumber\\
={}&H\left(\vec{y}_2|\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21}\right)- H\left(\vec{y}_2|\vec{\gamma}_2,\vec{\phi}_{21},\vec{g}_{21},\vec{w}_2\right) + o(n)\nonumber\\
\leq{}& n(1-\epsilon_2) - H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1^+,\vec{w}_2\right) + o(n),\nonumber\\
\stackrel{(b)}{\leq}{}& n(1-\epsilon_2) - \frac{1-\epsilon_1}{1-\epsilon_2}H\left(\vec{y}_1|\vec{\gamma}_1,\vec{w}_1^+,\vec{w}_2\right) + o(n),\label{eq:genieprooffano4}
\end{align}
where in step (b) we applied Lemma~1 of~\cite{VMA2014} which when applied to our problem states that (because the sender does not know the binary fading channel states $\{\vec{\gamma}_i\}$),
\begin{align*}
H\left(\vec{y}_2|\vec{\gamma}_2,\vec{w}_1^+,\vec{w}_2\right)\geq{}&\frac{1-\epsilon_1}{1-\epsilon_2}H\left(\vec{y}_1|\vec{\gamma}_1,\vec{w}_1^+,\vec{w}_2\right).
\end{align*}
To complete the proof of the second outer bound, we scale (\ref{eq:genieprooffano3}) by $\frac{1-\epsilon_2}{1-\epsilon_1}$ and combine with (\ref{eq:genieprooffano4}).
\end{proof}
In Figure~\ref{fig:p1}, where $\epsilon_1=\frac{1}{2}$, $\epsilon_2=\frac{3}{4}$, and $\mu=\frac{1}{2}$, notice that when $r_1$ is near the point-to-point capacity of 0.5, hybrid coding recovers all of the available index coding gain. This is because when $r_1$ is near 0.5, the primary challenge is not blindly exploiting side information, but rather accounting for interference incurred at User~1. For this set of network parameters, we point out that for any fixed value of $r_1$, hybrid coding offers at least 62\% of the available index coding gain.
In Figure~\ref{fig:p2}, where $\epsilon_1=\frac{1}{2}$, $\epsilon_2=\frac{9}{10}$, and $\mu=\frac{1}{10}$, we consider a BICW setting where side information is plentiful (User~2 knows 90\% of $\vec{w}_1$). In this case, $L_{max}=4$ and the piece-wise linear boundary of the hybrid coding achievable rate region has more linear segments, with segments corresponding to the number of repetitions used. For this setting and for any fixed $r_1$, HRC always achieves at least 68\% of the available index coding gain.
Finally, in Figure~\ref{fig:p3}, where $\epsilon_1=\frac{1}{2}$, $\epsilon_2=\frac{3}{4}$, and $\mu=\frac{9}{10}$, we consider a BICW setting with very little side information (User~2 knows 10\% of $\vec{w}_1$). In this case, $L_{max}=1$ and, from the figure, it is apparent that although the index coding gain is modest, it is still strictly positive for all $r_1\notin\{0,1-\epsilon_1\}$.
\begin{figure}[ht]
\centering
\subfigure[ $\epsilon_1=\frac{1}{2}$, $\epsilon_2=\frac{3}{4}$, $\mu=\frac{1}{2}$]{
\begin{tikzpicture}[xscale=13,yscale=13,font=\scriptsize]
\filldraw[draw=black,fill=white] (0,0.25) -- node [above,pos=0.7,sloped]{Genie-Aided Upper Bound} (0.3333,0.1667) -- (0.5,0) -- (0,0) -- cycle;
\filldraw[draw=red,fill=red!4!white] (0,0.25) -- (0.39,0.11)node [anchor=north east,red,inner sep=1pt]{Hybrid} -- (0.5,0) -- (0,0) -- cycle;
\filldraw[draw=green!70!black,dashed,fill=green!70!black!8!white] (0,0.25) -- (0.5,0) -- (0,0) -- cycle;
\node at(0.255,0.075)[above,green!70!black,inner sep=2pt] {Random Code};
\node at(0.255,0.075)[green!70!black,inner sep=2pt] {+};
\node at(0.255,0.075)[below,green!70!black,inner sep=2pt] {Time Division};
\filldraw[draw=blue,dotted,fill=blue!10!white] (0,0.25) -- (0.25,0) -- (0.5,0) -- (0,0) -- cycle;
\node at(0.1,0.02)[above,blue,inner sep=2pt] {Conventional Random Code};
\draw[] (0,0) rectangle (0.55,0.3);
\node at (0.275,-0.05)[below] {$r_1$};
\node at (-0.05,0.15)[rotate=90,above] {$r_2$};
\draw[] (0.1,0.01) -- (0.1,-0.01) node [below] {0.1};
\draw[] (0.2,0.01) -- (0.2,-0.01) node [below] {0.2};
\draw[] (0.3,0.01) -- (0.3,-0.01) node [below] {0.3};
\draw[] (0.4,0.01) -- (0.4,-0.01) node [below] {0.4};
\draw[] (0.5,0.01) -- (0.5,-0.01) node [below] {0.5};
\draw[] (0.01,0.1) -- (-0.01,0.1) node [left] {0.1};
\draw[] (0.01,0.2) -- (-0.01,0.2) node [left] {0.2};
\draw[] (0.01,0.3) -- (-0.01,0.3) node [left] {0.3};
\draw[thick,dotted] (0,0.325) -- (0,0);
\draw[thick,dotted] (0.4,0.325) -- (0.4,0);
\draw[thick,dotted] (0.5,0.325) -- (0.5,0);
\node at (0.45,0.32)[]{$L_1^*=1$};
\draw[latex-latex] (0,0.32) -- node [fill=white,inner sep=1pt]{$L_1^*=2$} (0.4,0.32);
\end{tikzpicture}
\label{fig:p1}
}\\
\subfigure[$\epsilon_1=\frac{1}{2}$, $\epsilon_2=\frac{9}{10}$, $\mu=\frac{1}{10}$]{
\begin{tikzpicture}[xscale=13,yscale=26,font=\scriptsize]
\draw[] (0,0.1) -- (0.4082,0.0918) -- (0.5,0) -- (0,0) -- cycle;
\filldraw [draw=red,fill=red!4!white] (0,0) -- (0,0.1) -- (0.246154,0.088) -- (0.32,0.082426) -- (0.4,0.0724) -- (0.438,0.062) -- (0.5,0);
\filldraw [dashed,draw=green!70!black!80!white,fill=green!70!black!8!white] (0,0) -- (0,0.1) -- (0.5,0) -- cycle;
\filldraw [dotted,draw=blue!80!white,fill=blue!10!white] (0,0) -- (0,0.1) -- (0.1,0) -- cycle;
\draw[dotted,thick] (0.5,0.1325) -- (0.5,0);
\draw[dotted,thick] (0.4,0.1325) -- (0.4,0);
\draw[dotted,thick] (0.32,0.1325) -- (0.32,0);
\draw[dotted,thick] (0.246154,0.1325) -- (0.246154,0);
\draw[dotted,thick] (0,0.1325) -- (0,0);
\node at (0.45,0.13)[]{$L_1^*=1$};
\node at (0.36,0.13)[]{$L_1^*=2$};
\node at (0.282,0.13)[]{$L_1^*=3$};
\draw[latex-latex] (0,0.13) -- node [fill=white,inner sep=1pt]{$L_1^*=4$} (0.246154,0.13);
\draw[] (0,0) rectangle (0.55,0.12);
\node at (0.275,-0.025)[below] {$r_1$};
\node at (-0.05,0.06)[rotate=90,above] {$r_2$};
\draw[] (0.1,0.005) -- (0.1,-0.005) node [below] {0.1};
\draw[] (0.2,0.005) -- (0.2,-0.005) node [below] {0.2};
\draw[] (0.3,0.005) -- (0.3,-0.005) node [below] {0.3};
\draw[] (0.4,0.005) -- (0.4,-0.005) node [below] {0.4};
\draw[] (0.5,0.005) -- (0.5,-0.005) node [below] {0.5};
\draw[] (0.01,0.05) -- (-0.01,0.05) node [left] {0.05};
\draw[] (0.01,0.1) -- (-0.01,0.1) node [left] {0.10};
\end{tikzpicture}
\label{fig:p2}
}\\
\subfigure[$\epsilon_1=\frac{1}{2}$, $\epsilon_2=\frac{3}{4}$, $\mu=\frac{9}{10}$]{
\begin{tikzpicture}[xscale=13,yscale=13,font=\scriptsize]
\filldraw[draw=black,fill=white] (0,0.25) -- (0.45455,0.04545) -- (0.5,0) -- (0,0) -- cycle;
\filldraw[draw=red,fill=red!4!white] (0,0.25) -- (0.47619,0.02381) -- (0.5,0) -- (0,0) -- cycle;
\filldraw[draw=green!70!black,dashed,fill=green!70!black!8!white] (0,0.25) -- (0.5,0) -- (0,0) -- cycle;
\filldraw[draw=blue,dotted,fill=blue!10!white] (0,0.25) -- (0.25,0) -- (0.5,0) -- (0,0) -- cycle;
\draw[] (0,0) rectangle (0.55,0.3);
\node at (0.275,-0.05)[below] {$r_1$};
\node at (-0.05,0.15)[rotate=90,above] {$r_2$};
\draw[] (0.1,0.01) -- (0.1,-0.01) node [below] {0.1};
\draw[] (0.2,0.01) -- (0.2,-0.01) node [below] {0.2};
\draw[] (0.3,0.01) -- (0.3,-0.01) node [below] {0.3};
\draw[] (0.4,0.01) -- (0.4,-0.01) node [below] {0.4};
\draw[] (0.5,0.01) -- (0.5,-0.01) node [below] {0.5};
\draw[] (0.01,0.1) -- (-0.01,0.1) node [left] {0.1};
\draw[] (0.01,0.2) -- (-0.01,0.2) node [left] {0.2};
\draw[] (0.01,0.3) -- (-0.01,0.3) node [left] {0.3};
\draw[thick,dotted] (0,0.325) -- (0,0);
\draw[thick,dotted] (0.5,0.325) -- (0.5,0);
\draw[latex-latex] (0,0.32) -- node [fill=white,inner sep=1pt]{$L_1^*=1$} (0.5,0.32);
\end{tikzpicture}
\label{fig:p3}
}
\caption{Rate regions achieved by different schemes --- conventional random codes (blue), time-division between separate random codes (green), hybrid coding (red), and genie-aided (non-blind) index coding (white) --- for three different 2-user BICW problems. The number of repetitions used in the hybrid coding scheme is stated along the $x$-axis. (a) For this setting, $L_{max}=2$; (b) for this setting, $L_{max}=4$, and we have emphasized with dashed lines the bounds (\ref{eq:ratereg_A}) and (\ref{eq:ratereg_B}), for all $L$, that comprise the boundary of $\mathcal{R}$; (c) for this setting, $L_{max}=1$, and notice that even with very little side information, our hybrid coding scheme strictly outperforms conventional schemes.}
\label{fig:BICW}
\end{figure}
From the scenarios depicted in Figure~\ref{fig:BICW}, we make the following unifying conclusions:
\begin{enumerate}
\item Regardless of the network parameters ($\epsilon_1$, $\epsilon_2$, and $\mu$), hybrid coding always enlarges the achievable rate region.
\item If we consider a fixed $r_1$, the number of repetitions used in the hybrid encoding scheme increases when User~2 has a weaker channel and more side information (i.e., $\epsilon_2$ grows larger and $\mu$ grows smaller).
\item Hybrid coding can be capacity achieving, as seen on the boundary of the rate regions in all three figures when $r_1$ is close to its maximum.
\end{enumerate}
\section{Concluding Remarks}\label{sec:concl}
In this paper, we introduced a generalization of index coding called \emph{blind index coding}, which captures key issues in distributed caching and wireless settings. We demonstrated that the BIC problem introduces novel and interesting challenges that require new analytical tools through three main contributions: 1) we proposed a class of hybrid coding schemes which mix uncoded bits of a subset of messages with random linear combinations of other messages, 2) we presented new outer bounds that leverage a lemma based on a strong data processing inequality to capture the lack of knowledge at the sender, and 3) we demonstrated that in scenarios where the sender-to-user channel is not error-free (specifically, a wireless binary fading channel), repetition of uncoded bits within hybrid codes can further increase the achievable rate.
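To make the first contribution concrete, the following minimal Python sketch (our illustration of the idea only; the message length and combining matrix are hypothetical, not the notation or code of this paper) builds one hybrid transmission over $GF(2)$: each output bit XORs an \emph{uncoded} bit of $\vec{w}_1$ with a \emph{random linear combination} of the bits of $\vec{w}_2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K = 8                                # illustrative message length in bits
w1 = rng.integers(0, 2, K)           # message kept partially uncoded
w2 = rng.integers(0, 2, K)           # message sent only in coded form

# Hybrid codeword: uncoded bits of w1 XORed with random GF(2)
# linear combinations of w2 (rows of G select the combinations).
G = rng.integers(0, 2, size=(K, K))  # random binary combining matrix
x = (w1 + G @ w2) % 2                # transmitted hybrid bits
\end{verbatim}
A user that already knows (enough of) $\vec{w}_2$ can strip off the coded part and read the uncoded bits of $\vec{w}_1$ directly, which is what makes such mixtures useful despite the sender's blindness.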
To further emphasize the importance of analyzing BIC problems, we refer the reader to Figure~\ref{fig:2hEBC}, which depicts the setting considered in~\cite{KMA2014:isit}, itself a special case of the broader class of multiple unicast and multiple multicast problems in wireless erasure networks~\cite{DGPHE2006}. Such problems consider the communication of multiple distinct messages to different users in a wireless network over probabilistic lossy links.
\begin{figure}[ht]
\centering\vspace{-0.25cm}
\begin{tikzpicture}[scale=2,font=\footnotesize]
\node (w1) at (-0.3,0) [anchor=south east] {$\vec{w}_1$};
\node (w2) at (-0.3,0) [anchor=north east] {$\vec{w}_2$};
\node (s) at (0,0) [draw,thick] {$\mathsf{S}$};
\node (r1a) at (1.82,0.7) [draw,circle,inner sep=1pt] {};
\node (r2a) at (1.82,-0.7) [draw,circle,inner sep=1pt] {};
\node (r1) at (2,0.7) [draw,thick] {$\mathsf{R}_1$};
\node (r2) at (2,-0.7) [draw,thick] {$\mathsf{R}_2$};
\node (d1) at (4.58,0.6) [draw,thick] {$\mathsf{D}_1$};
\node (d1a) at (4.4,0.7) [draw,circle,inner sep=1pt] {};
\node (d1b) at (4.4,0.5) [draw,circle,inner sep=1pt] {};
\node (d2) at (4.58,-0.6) [draw,thick] {$\mathsf{D}_2$};
\node (d2a) at (4.4,-0.7) [draw,circle,inner sep=1pt] {};
\node (d2b) at (4.4,-0.5) [draw,circle,inner sep=1pt] {};
\draw[dotted,thick] (s) -- node[above,pos=0.4]{$\vec{u}$} (0.82,0);
\draw[-latex,dotted,thick] (0.82,0) --node[sloped,draw,solid,fill=white,inner sep=2pt,rounded corners]{$\epsilon_1$} (r1a) node[anchor=south east]{$\vec{v}_1$};
\draw[-latex,dotted,thick] (0.82,0) --node[sloped,draw,solid,fill=white,inner sep=2pt,rounded corners]{$\epsilon_1$} (r2a) node[anchor=north east]{$\vec{v}_2$};
\draw[thick] (r1) -- (2.6,0.7) node[above]{$\vec{x}_1$};
\draw[-latex,thick] (2.6,0.7) --node[sloped,draw,solid,fill=white,pos=0.4,inner sep=2pt,rounded corners]{$\epsilon_2$} (d1a) node[anchor=south east]{$\vec{y}_{1}$};
\draw[-latex,thick] (2.6,0.7) --node[sloped,draw,solid,fill=white,pos=0.3,inner sep=2pt,rounded corners]{$\epsilon_3$} (d2b) node[anchor=east]{$\vec{y}_{2}$\ };
\draw[dotted,thick] (r2) -- (2.6,-0.7);
\draw[-latex,dotted,thick] (2.6,-0.7) --node[sloped,draw,solid,fill=white,pos=0.3,inner sep=2pt,rounded corners]{$\epsilon_3$} (d1b) node[anchor=north]{\normalsize\textcolor{red}{?}\ \ };
\draw[-latex,dotted,thick] (2.6,-0.7) --node[sloped,draw,solid,fill=white,pos=0.4,inner sep=2pt,rounded corners]{$\epsilon_2$} (d2a) node[anchor=north]{\normalsize\textcolor{red}{?}\ \ };
\node (w1h) at (4.8,0.6) [right] {$\widehat{\vec{w}_1}$};
\node (w2h) at (4.8,-0.6) [right] {$\widehat{\vec{w}_2}$};
\fill [white!95!red!95!black,opacity=0.475] (1.55,1) -- (1.55,0) -- (3.3,-1) -- (-0.7,-1) -- (-0.7,1) -- cycle;
\draw [red,very thick,rounded corners] (1.55,1) -- (1.55,0) -- (3.3,-1) -- (5.2,-1) -- (5.2,1) -- cycle;
\end{tikzpicture}\vspace{-0.25cm}
\caption{The symmetric two-hop erasure broadcast channel from~\cite{KMA2014:isit} with focus on the embedded BICW problem seen by Relay~1. The network consists of two hops of communication. The first hop is an erasure broadcast channel, whereas the second consists of two parallel, non-interfering erasure broadcasts. Destination~1 wants message $\vec{w}_1$ and Destination~2 wants $\vec{w}_2$, but with no knowledge of erasures, Relay~1 is unaware of the (side) information provided by Relay~2.}\label{fig:2hEBC}
\end{figure}
A key contribution of~\cite{KMA2014:isit} was the revelation that it was \emph{strictly suboptimal} for relays within such a network to apply conventional random network coding. Instead, relays imparted structure into their network coded transmissions by XORing \emph{unmixed received bits} of one message with random combinations of another; i.e., relays applied a version of hybrid coding to their received signals in order to outperform conventional random network codes.
From results presented in this work, one arrives at such a relaying strategy naturally. From the point of view of either relay, the transmissions of the other relay are \emph{side information}, and more importantly, due to the lossy nature of links the relay is \emph{blind} as to what side information was provided. Additionally, the transmission model from relays to destinations matches precisely the lossy sender-to-user broadcast considered in Section~\ref{sec:BICW}.
It is important to point out that the BIC and BICW problems in the general setting remain open. Therefore, to conclude the paper we revisit one class of interesting symmetric side information BIC problems (from Section~\ref{sec:num}) that remains unsolved and yet offers a simple and concrete enough case for progress to be made, potentially revealing new insights.
Consider the following 3-user BIC scenario when side information parameters are pairwise symmetric:
$\mu_{12}=\mu_{21}=a$, $\mu_{13}=\mu_{31}=b$, $\mu_{23}=\mu_{32}=c$ with $a\geq b\geq c$ (for a concrete example we refer the reader to Figure~\ref{fig:q2d}).
From Theorem~\ref{thm:3uACH}, we find the symmetric achievable rate:
\begin{align}
r_{sym}={}&\max\left\{\frac{1}{1+a+b+c-ab},\frac{1}{1+a+b}\right\},\label{eq:abc}
\end{align}
and from Theorem~\ref{thm:3uOB}, we have the capacity bound:
\begin{align}
r_{sym}\leq{}&\frac{1}{1+a+b-\frac{(a-c)(b-c)}{1-c}}.
\end{align}
Notice first that, as in the numerical example of Figure~\ref{fig:q2d}, if $c=0$ or $c=b$ the upper bound is tight and capacity is achieved. However, within the interval $c\in(0,b)$ there exists a gap between achievability and converse.
Additionally, recall that the first quantity in the max of (\ref{eq:abc}) is the rate achieved by hybrid coding and the second is by conventional random coding. Clearly, hybrid coding provides a rate gain when $c < ab$. This regime is one where the side information Users~2 and 3 have about each other's messages is large and thus Phases~1 and~2 in Figure~\ref{fig:3coding} are small. Our hybrid coding assumes that User~1 ignores these phases, but when they are larger (i.e., as $c$ grows) these transmissions may be used by User~1 to decode messages $\vec{w}_2$ and $\vec{w}_3$. In particular, the case where $c=ab$ (a point notably within the interval $(0,b)$) represents a threshold where the structure of our hybrid code can no longer expect to hide linear subspaces of $\vec{w}_2$ and $\vec{w}_3$ from User~1.
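As a concrete check of this threshold (an illustrative example of ours, not taken from the paper's numerical section), take $a=0.6$ and $b=0.5$, so that $ab=0.3$. For $c=0.1\in(0,ab)$, the hybrid term in (\ref{eq:abc}) gives $r_{sym}\geq 1/(1+a+b+c-ab)=1/1.9\approx 0.526$, strictly above the conventional rate $1/(1+a+b)=1/2.1\approx 0.476$, while the outer bound evaluates to $1/\big(2.1-\tfrac{(0.5)(0.4)}{0.9}\big)\approx 0.533$, leaving a small gap. At $c=ab=0.3$, the two terms in the max of (\ref{eq:abc}) coincide at $1/2.1$, i.e. hybrid coding no longer helps, consistent with the threshold interpretation above.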
We conjecture that at this threshold, any method of encoding $\vec{w}_2$ and $\vec{w}_3$ that satisfies the decodability condition at Users~2 and 3 also allows User~1 to decode $\vec{w}_2$ and $\vec{w}_3$ (i.e., at this threshold it is the converse and not achievable scheme that may be tightened).
\bibliographystyle{IEEEtran}
\section{Introduction}
The evolution of non-equilibrium systems involves energy exchange through the system boundary with the surroundings. It is of broad interest to understand how such evolution can be triggered and what the function of external perturbation is. A famous example is the laminar-turbulent transition of the pipe flow first reported by Reynolds from his pioneering experiment~\cite{Reynolds-1883}. The continuous and devoted efforts since then~\cite{Barkley-Nature-2015, Avila2011, Wu-PNAS-2015} have greatly enriched our understanding. For example, by measuring the puff decay and splitting times, the critical Reynolds number of the 3D pipe flow can be numerically estimated (around 2040)~\cite{Avila2011,Orszag1970, Blackburn2004}. Besides, with the help of direct numerical simulation (DNS) at very fine resolution, both spatially and temporally, Wu et al.~\cite{Wu-PNAS-2015} demonstrated the sensitivity of the transition to the pipe entrance condition. Physically, the laminar-turbulent transition is closely related to disturbances, which can be both external and internal. It is widely believed that randomness is an intrinsic property of turbulence. However, our understanding of the origin and evolution mechanism of such intrinsic randomness remains unclear.
Numerically, the Navier-Stokes (NS) equations can be solved by DNS with exactly the same initial/boundary conditions so as to exclude external disturbances. Unfortunately, the sensitivity of nonlinear systems to numerical inaccuracy leads to severe deficiency of the solutions. As discovered by Lorenz, dynamic systems governed by the NS equations are essentially chaotic, i.e. due to the butterfly effect~\cite{Lorenz1963} the solutions have sensitive dependence not only on the initial conditions (SDIC)~\cite{Lorenz1963} but also on numerical algorithms (SDNA)~\cite{Lorenz2006}. Because of the inevitable numerical noises, e.g. round-off error and truncation error, the reliability of numerical solutions of chaotic systems is very controversial~\cite{Yao2008}. Some spurious turbulence evolution cases from DNS have been reported in the literature~\cite{Wang2009, Pugachev2015}. In this sense, DNS results are strongly contaminated by numerical noise, although they remain meaningful from the statistical point of view.
On the other hand, Wolfram \cite{Wolfram2002} mentioned that the Lorenz equations with the famous butterfly-effect are highly simplified and thus do not contain terms that represent viscous effects; he therefore believed that such terms would tend to damp out small perturbations.
Fortunately, such kind of man-made uncertainty of numerical experiments can be well controlled by means of the clean numerical simulation (CNS)~\cite{Liao2009-Tellus,Wang2012,Liao2013-CSF,Liao2014,Li2014,Liao2015-IJBC}, which is based on an arbitrary-order Taylor series method (TSM) \cite{Barrio2005} and arbitrary multiple-precision (MP) data \cite{MP}, together with a solution verification check. For chaotic dynamic systems such as the well known three-body problem, the round-off error and truncation error can be largely reduced by CNS, even to well below the microscopic physical uncertainty due to wave-particle duality~\cite{Liao2013-CSF, Liao2014,Li2014, Liao2015-IJBC}, which is extremely small but inevitable. The obtained results~\cite{Liao2013-CSF, Liao2014, Li2014, Liao2015-IJBC} indicate that macroscopic randomness in the three-body system can be self-excited from the intrinsic microscopic physical uncertainty, in the absence of any {\em external} disturbances. These convincing results are inspiring; however, more of the physics needs to be explored to understand whether such a scenario can be generally valid in other more complicated systems (such as turbulence), although there are some tentative discussions on self-randomization~\cite{Tsinober} and pattern formation~\cite{Cross}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=10cm]{FIGURE1.eps}
\end{tabular}
\caption{{\bf Schematic representation of the Rayleigh-B{\'e}nard convective flow}. The two-dimensional incompressible fluid between two parallel free surfaces separated by $H$ obtains heat from the bottom boundary because of the prescribed constant temperature difference $\Delta T > 0$, where $g$ is the gravity acceleration.} \label{Schematic}
\end{center}
\end{figure}
\section{Methods}
\subsection{The governing equations and the spectral representation}
The numerical model considered here is a two-dimensional Rayleigh-B{\'e}nard (RB) system. As shown in Fig.~\ref{Schematic}, the incompressible fluid between two parallel free surfaces separated by $H$ obtains heat from the bottom boundary because of the prescribed constant temperature difference $\Delta T$, from which a reference velocity can be constructed as $\sqrt{g \alpha H\Delta T}$, where $g$ is the gravity acceleration and $\alpha$ is the thermal expansion coefficient of the fluid, respectively. This well-defined classic system has been extensively studied~\cite{Rayleigh1916, Saltzman1962, Getling1998, Malkus1954A, Malkus1954B, Roche2002, Niemela2006, Ahlers2012, Zhou2013} either at its critical~\cite{Ahlers} or turbulent state~\cite{Grossman}. As described by Saltzman~\cite{Saltzman1962}, the corresponding non-dimensional governing equations in the form of stream function $\psi$ with the Boussinesq approximation read
\begin{eqnarray}
\frac{\partial}{\partial t}\nabla^2\psi+\frac{\partial\left(\psi,
\nabla^2\psi\right)}{\partial(x, z)}-\frac{\partial
\theta}{\partial x} - {\cal C}_a\nabla^4\psi=0,\label{GEq01}\\
\frac{\partial\theta}{\partial t}+\frac{\partial\left(\psi,
\theta\right)}{\partial(x, z)} - \frac{\partial \psi}{\partial x}
- {\cal C}_b\nabla^2\theta =0,\label{GEq02}
\end{eqnarray}
where $\theta$ is the temperature departure from a linear variation background, $(x,z)$ are the horizontal and vertical spatial coordinates, $t$ denotes time,
$\nabla^2$ is the Laplace operator and $\nabla^4 = \nabla^2 \nabla^2$ is the biharmonic operator,
\[ \cfrac{\partial(a,b)}{\partial(x, z)}= \cfrac{\partial a}{\partial x}\cfrac{\partial b}{\partial z}-\cfrac{\partial b}{\partial x}\cfrac{\partial a}{\partial z} \]
is the Jacobian operator, ${\cal C}_a = \sqrt{Pr/Ra}$ and ${\cal C}_b=1/\sqrt{Pr Ra}$ with the Rayleigh number $Ra=g\alpha H^3\Delta T/(\nu\kappa)$ and the Prandtl number $Pr=\nu/\kappa$, in which $\nu$ is the kinematic viscosity and $\kappa$ is the thermal diffusivity, respectively.
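For example, for the case studied below ($Ra=10^7$, $Pr=6.8$), these coefficients evaluate (our arithmetic, given as an illustration of their smallness) to
\[
{\cal C}_a=\sqrt{Pr/Ra}=\sqrt{6.8\times 10^{-7}}\approx 8.25\times 10^{-4},
\qquad
{\cal C}_b=\frac{1}{\sqrt{Pr\,Ra}}\approx 1.21\times 10^{-4},
\]
so that both the viscous and the diffusive terms are weak and the nonlinear advection terms dominate the dynamics.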
The free-slip boundary conditions at the upper and lower free surfaces read
\begin{eqnarray}
\frac{\partial\left(\psi, \nabla^2\psi\right)}{\partial(x,z)}=\frac{\partial\left(\psi, \theta\right)}{\partial(x, z)}=0.
\end{eqnarray}
Following Saltzman~\cite{Saltzman1962}, we express the stream function $\psi$ and temperature departure $\theta$ in the double Fourier expansion modes as
\begin{eqnarray}
\psi(x,z,t)=\sum_{m=-\infty}^{+\infty}\sum_{n=-\infty}^{+\infty}\Psi_{m,n}(t)
\exp\left[2\pi H
i\left(\frac{m}{L}x+\frac{n}{2H}z\right)\right],\label{PsiFExp}\\
\theta(x,z,t)=\sum_{m=-\infty}^{+\infty}\sum_{n=-\infty}^{+\infty}\Theta_{m,n}(t)
\exp\left[2\pi H
i\left(\frac{m}{L}x+\frac{n}{2H}z\right)\right],\label{ThetaFExp}
\end{eqnarray}
where $m,n$ are the wave numbers in the $x$ and $z$ directions, $\Psi_{m,n}(t)$ and $\Theta_{m,n}(t)$ denote the amplitudes of the stream function and temperature components with the wave numbers $m$ and $n$, respectively. Substituting the above Fourier series into the original equations, we have the nonlinear dynamic system
\begin{eqnarray}
\dot\Psi_{m,n}(t)\!\!\!&=&\!\!\!\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}
\frac{C_{m,n,p,q}\alpha_{p,q}^2}{\alpha_{m,n}^2}\Psi_{p,q}\Psi_{m-p,n-q}
-\frac{l^*m}{\alpha_{m,n}^2}\,i\,\Theta_{m,n}\nonumber\\
&-&{\cal C}_{a}\,\alpha_{m,n}^2\Psi_{m,n},\label{DotPsimn}\\
\dot\Theta_{m,n}(t)\!\!\!&=&\!\!\!-\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}
C_{m,n,p,q}\Psi_{p,q}\Theta_{m-p,n-q} +l^*m\,i\,\Psi_{m,n}\nonumber\\
&-&{\cal C}_{b}\,\alpha_{m,n}^2\Theta_{m,n},\label{DotThetamn}
\end{eqnarray}
where $C_{m,n,p,q}=l^*h^*(mq-np)$, $l^*=2\pi H/L$, $h^*=\pi$ and $\alpha_{m,n}^2=(l^{*2}m^2+h^{*2}n^2)$.
Write
\begin{eqnarray}
\Psi_{m,n}=\Psi_{1,m,n} -
i\,\Psi_{2,m,n},\;\;\;\;\Theta_{m,n}=\Theta_{1,m,n} -
i\,\Theta_{2,m,n}, \label{CmplxType}
\end{eqnarray}
with the definitions
\[ \Psi_{1,m,n} = \Psi_{1,-m,-n}, \Psi_{2,m,n} = -\Psi_{2,-m,-n},\]
\[ \Theta_{1,m,n} = \Theta_{1,-m,-n}, \Theta_{2,m,n}=-\Theta_{2,-m,-n}.\]
It thus yields the following set of coupled nonlinear differential equations
\begin{small}
\begin{eqnarray}
\dot\Psi_{1,m,n}\!\!\!&=&\!\!\!\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}
\frac{C_{m,n,p,q}\alpha_{p,q}^2}{\alpha_{m,n}^2}\Big(\Psi_{1,p,q}\Psi_{1,m-p,n-q}
-\Psi_{2,p,q}\Psi_{2,m-p,n-q}\Big)\nonumber\\
\!\!\!&-&\!\!\!\frac{l^*m}{\alpha_{m,n}^2}\Theta_{2,m,n}-{\cal
C}_{a}\,\alpha_{m,n}^2\Psi_{1,m,n},\label{DotPsi01mn}\\
\dot\Psi_{2,m,n}\!\!\!&=&\!\!\!\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}
\frac{C_{m,n,p,q}\alpha_{p,q}^2}{\alpha_{m,n}^2}\Big(\Psi_{1,p,q}\Psi_{2,m-p,n-q}
+\Psi_{2,p,q}\Psi_{1,m-p,n-q}\Big)\nonumber\\
\!\!\!&+&\!\!\!\frac{l^*m}{\alpha_{m,n}^2}\Theta_{1,m,n}-{\cal
C}_{a}\,\alpha_{m,n}^2\Psi_{2,m,n},\label{DotPsi02mn}\\
\dot\Theta_{1,m,n}\!\!\!&=&\!\!\!-\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}
C_{m,n,p,q}\Big(\Psi_{1,p,q}\Theta_{1,m-p,n-q}
-\Psi_{2,p,q}\Theta_{2,m-p,n-q}\Big)\nonumber\\
\!\!\!&+&\!\!\!l^*m\Psi_{2,m,n}-{\cal
C}_{b}\,\alpha_{m,n}^2\Theta_{1,m,n},\label{DotTheta01mn}\\
\dot\Theta_{2,m,n}\!\!\!&=&\!\!\!-\sum_{p=-\infty}^{+\infty}\sum_{q=-\infty}^{+\infty}
C_{m,n,p,q}\Big(\Psi_{1,p,q}\Theta_{2,m-p,n-q}
+\Psi_{2,p,q}\Theta_{1,m-p,n-q}\Big)\nonumber\\
\!\!\!&-&\!\!\!l^*m\Psi_{1,m,n}-{\cal
C}_{b}\,\alpha_{m,n}^2\Theta_{2,m,n}. \label{DotTheta02mn}
\end{eqnarray}
\end{small}
The free-slip boundary condition implies
\begin{eqnarray}
\Psi_{1,m,n} = -\Psi_{1,m,-n}=-\Psi_{1,-m,n}, \\
\Psi_{2,m,n} = -\Psi_{2,m,-n} = \Psi_{2,-m,n},\\
\Theta_{1,m,n} =-\Theta_{1,m,-n}=-\Theta_{1,-m,n}, \\
\Theta_{2,m,n}=-\Theta_{2,m,-n}=\Theta_{2,-m,n}, \label{ThetaPro04}
\end{eqnarray}
with
\begin{eqnarray}
\Psi_{1,0,n}=\Theta_{1,0,n}=\Psi_{1,m,0}=\Psi_{2,m,0}
=\Theta_{1,m,0}=\Theta_{2,m,0}=0
\end{eqnarray}
at $z=0$ and $z=1$. For more details, please refer to Saltzman \cite{Saltzman1962}.
Numerically, only a finite number of wave numbers can be considered, i.e. $|m|\leq M, |p|\leq M$ and $|n|\leq N$, $|q|\leq N$. In principle, the turbulence physics can be well described if the mode numbers $M$ and $N$ are large enough; the same requirement applies to DNS as well~\cite{Orszag1970, Blackburn2004, Ashley2009}. For the present Rayleigh-B{\'e}nard flow with $Ra = 10^7$, $M=N=127$ is large enough to investigate the laminar-turbulent transition.
It should be emphasized that the above nonlinear dynamic system might evolve to be chaotic, and its numerical behavior might be influenced by the butterfly-effect, i.e. the sensitive dependence on the initial conditions (SDIC), in which a small change in this deterministic nonlinear system results in a large difference in a later state~\cite{Lorenz1963, Lorenz2006}. Therefore this dynamic system might be very sensitive to numerical noises, which could grow exponentially with time~\cite{Lorenz1963, Lorenz2006}. To avoid the loss of accuracy, the conventional fast Fourier transform method is {\it not} used here for the nonlinear terms. This is very different from DNS~\cite{Orszag1970, Blackburn2004, Ashley2009}. However, the computational cost increases considerably as a result.\\
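To make the cost argument explicit, the following minimal Python sketch (our illustration with a hypothetical truncation and random field values; double precision is used here only for brevity, whereas the actual CNS uses multiple-precision arithmetic) evaluates one quadratic coupling sum of the $\dot\Psi$ equation by direct summation over $(p,q)$, i.e. without any FFT and hence without transform-induced noise. The cost is $O(MN)$ per mode and per term, hence $O(M^2N^2)$ per evaluation of the time derivatives.
\begin{verbatim}
import numpy as np

M, N = 7, 7                                   # small truncation for illustration
lstar, hstar = 2*np.pi/(2*np.sqrt(2)), np.pi  # l* = 2*pi*H/L for Gamma = 2*sqrt(2)

def alpha2(m, n):
    # alpha_{m,n}^2 = l*^2 m^2 + h*^2 n^2
    return (lstar*m)**2 + (hstar*n)**2

rng = np.random.default_rng(0)
Psi = rng.normal(scale=1e-9, size=(2*M+1, 2*N+1))  # Psi[p+M, q+N] ~ Psi_{1,p,q}

def psi_psi_coupling(m, n):
    """Direct double sum for one (m, n) mode; no FFT, no aliasing."""
    s = 0.0
    for p in range(-M, M+1):
        for q in range(-N, N+1):
            if abs(m-p) > M or abs(n-q) > N:
                continue                       # partner mode outside truncation
            C = lstar*hstar*(m*q - n*p)        # C_{m,n,p,q}
            s += C*alpha2(p, q)/alpha2(m, n)*Psi[p+M, q+N]*Psi[m-p+M, n-q+N]
    return s

print(psi_psi_coupling(1, 1))
\end{verbatim}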
\subsection{Thermal fluctuation as the initial random condition}
The thermal fluctuation plays an important role in hydrodynamic instability \cite{Wu1995, Ahlers2003}. Recently, Wang et al. \cite{Wang-PNAS-2015} investigated the instability of the two-dimensional Poiseuille flow via DNS by considering the evolution from the laminar state under the action of different initial Gaussian white noise at the macroscopic level. In the present work, we use Gaussian white noise as the initial condition of the laminar flow as well. However, unlike Wang et al. \cite{Wang-PNAS-2015}, the Gaussian white noise is set here as the thermal fluctuation at the micro-level, which is physically inevitable and has a clear physical meaning. For the studied cases, the fluid is water at the room temperature of 20$^{o}$C; the standard deviations for the temperature and velocity fields can be estimated from statistical mechanics~\cite{Khinchin, Gorodetsky2004, Landau} as $\sigma_T=10^{-10}$ and $\sigma_u=10^{-9}$, respectively.
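A minimal sketch of how such initial micro-level random fields can be drawn (our illustration; the collocation grid, seed and field layout are hypothetical, and the spectral amplitudes are then obtained by projecting these fields onto the double Fourier modes):
\begin{verbatim}
import numpy as np

sigma_T, sigma_u = 1e-10, 1e-9       # micro-level standard deviations (water, 20 C)
nx, nz = 256, 128                    # hypothetical collocation grid
rng = np.random.default_rng(seed=1)  # different seeds give different cases (A, B)

theta0 = rng.normal(0.0, sigma_T, size=(nz, nx))  # temperature departure field
u0     = rng.normal(0.0, sigma_u, size=(nz, nx))  # horizontal velocity
w0     = rng.normal(0.0, sigma_u, size=(nz, nx))  # vertical velocity
\end{verbatim}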
To ensure the solution accuracy, it requires that the numerical noises must be even less than the thermal fluctuation in a long enough time interval for the onset of turbulence, which is rather difficult to achieve for the chaotic system under consideration. Fortunately, the clean numerical simulation (CNS) makes it possible to attack this numerical challenge~\cite{Liao2009-Tellus, Liao2013-CSF, Liao2014, Liao2015-IJBC} in the way described below.
\subsection{The clean numerical simulation (CNS)}
Due to the famous butterfly-effect, chaotic dynamic systems have sensitive dependence not only on the initial conditions (SDIC)~\cite{Lorenz1963} but also on numerical algorithms (SDNA)~\cite{Lorenz2006}. Unfortunately, numerical noises such as round-off error and truncation error are inevitable in practice, which makes convergent numerical simulations of chaotic systems rather difficult to obtain in a desired (finite but long enough) time interval. This challenge leads to intense arguments on the reliability and feasibility of numerical simulations of chaos. It is even believed that ``all chaotic responses are simply numerical noise and have nothing to do with the solutions of differential equations''~\cite{Yao2008, Lorenz2008}. The Lorenz equations \cite{Lorenz1963} are a much simplified model of the Navier-Stokes equations, which suggests that dynamic systems related to the Navier-Stokes equations should be sensitive to numerical noises as well. Indeed, some {\em spurious} and non-physical evolutions of turbulence from DNS have recently been reported~\cite{Wang2009, Pugachev2015}, which originate either from round-off error or from dependence upon the time step size. Recently, Hoover \cite{Hoover2015} applied two symplectic and five Runge-Kutta integrators to investigate a chaotic Hamiltonian system and found that none of these schemes can produce convergent trajectories. Therefore it is necessary to develop a numerical technique to obtain convergent and reliable simulation results of chaotic dynamic systems in a finite but long enough time interval.
The so-called clean numerical simulation (CNS)~\cite{Liao2009-Tellus, Liao2013-CSF, Liao2014,Li2014,Liao2015-IJBC} was developed recently for this purpose. CNS is based on the Taylor series method~\cite{Corliss1982,Barrio2005} at {\em arbitrary} order (in time) and data in {\em arbitrary} precision~\cite{MP}, together with a solution verification in the temporal domain. The Taylor series method has the advantage that its formula at an arbitrarily high order can be easily expressed and analyzed so as to reduce the truncation error to a required level. Moreover, the multiple-precision (MP) data \cite{MP} is used here to control the round-off error to a required level in CNS. A remarkable example of the MP data application is the calculation of the value of $\pi$ to millions of digits.
In 2009, Liao~\cite{Liao2009-Tellus} first successfully implemented CNS to obtain a convergent chaotic solution of the Lorenz equation in the time interval $[0,1000]$, with 400th-order Taylor series and 800-digit MP data. The reliability of this CNS result has been confirmed~\cite{Wang2012} by CNS with 1000th-order Taylor series and 2100-digit MP data in a longer time interval [0,2500]. Recently, using 1200 CPUs at the National Supercomputer TH-A1 (in Tianjin, China) and a parallel CNS algorithm with a 3500th-order Taylor expansion and 4180-digit MP data, Liao and Wang~\cite{Liao2014} successfully obtained, for the first time, a convergent and reliable solution of the Lorenz equation in a rather long interval [0,10000], which is several hundred times longer than those from the traditional numerical algorithms (such as the Runge-Kutta method). This brand-new simulation result, never reported in the open literature before, provides us with a numerical benchmark for mathematically reliable long-term prediction of chaos. The instability of some recently reported periodic solutions of the three-body system was also investigated by means of the CNS \cite{Li2014}. In addition, the evolution of a chaotic three-body system with inherent uncertainty of the initial positions at the micro-level has been reliably simulated by CNS in a long enough time interval~\cite{Liao2015-IJBC}. Besides, it is found that, unlike the symplectic integrators, the CNS can give accurate trajectories of chaotic Hamiltonian systems in a long interval \cite{Li2016-A}. Furthermore, it has recently been reported that numerical noises even have a significant influence on the statistics of chaotic dynamic systems in non-equilibrium \cite{Li2016-B}.
Similarly, a convergent and reliable solution of the dynamic system (9)-(12) can be obtained numerically by means of CNS. Let $\Delta t$ denote the time increment and $f^{(j)}$ the value of $f(t)$ at $t=j\Delta t$. The $P$th-order Taylor series of $\Psi_{i,m,n}$ and $\Theta_{i,m,n}$ are expressed as
\begin{eqnarray}
\Psi_{i,m,n}^{(j+1)}=\Psi_{i,m,n}(t_j+\Delta
t)=\Psi_{i,m,n}^{(j)}+\sum_{k=1}^P\beta_{i,m,n}^{j,k}\,(\Delta
t)^k,\label{TaySerOfPsi}\\
\Theta_{i,m,n}^{(j+1)}=\Theta_{i,m,n}(t_j+\Delta
t)=\Theta_{i,m,n}^{(j)}+\sum_{k=1}^P\gamma_{i,m,n}^{j,k}\,(\Delta
t)^k,\label{TaySerOfTht}
\end{eqnarray}
where
\begin{small}
\begin{eqnarray}
&& \beta_{1,m,n}^{j,k+1} \nonumber \\
&=& \left(\sum_{p=-M}^{M}\sum_{q=-N}^{N}C_{m,n,p,q}
\frac{\alpha_{p,q}^2}{\alpha_{m,n}^2}\sum_{l=0}^{k}\left[\beta_{1,p,q}^{j,l}\beta_{1,m-p,n-q}^{j,k-l}
-\beta_{2,p,q}^{j,l}\beta_{2,m-p,n-q}^{j,k-l}\right]\right. \nonumber\\
&& \left.-\frac{l^*
m}{\alpha_{m,n}^2}\gamma_{2,m,n}^{j,k} -{\cal
C}_{a}\,\alpha_{m,n}^2\beta_{1,m,n}^{j,k}\right)/(1+k),\\
&& \beta_{2,m,n}^{j,k+1} \nonumber \\
& = &\left(\sum_{p=-M}^{M}\sum_{q=-N}^{N}C_{m,n,p,q}
\frac{\alpha_{p,q}^2}{\alpha_{m,n}^2}\sum_{l=0}^{k}\left[\beta_{1,p,q}^{j,l}\beta_{2,m-p,n-q}^{j,k-l}
+\beta_{2,p,q}^{j,l} \beta_{1,m-p,n-q}^{j,k-l} \right] \right. \nonumber\\
&& \left.+\frac{l^* m}{\alpha_{m,n}^2}\gamma_{1,m,n}^{j,k} - {\cal C}_{a}\, \alpha_{m,n}^2\beta_{2,m,n}^{j,k} \right)/(1+k), \\
&& \gamma_{1,m,n}^{j,k+1} \nonumber\\
&=&\left(-\sum_{p=-M}^{M}\sum_{q=-N}^{+N}C_{m,n,p,q}
\sum_{l=0}^{k}\left[\beta_{1,p,q}^{j,l}\gamma_{1,m-p,n-q}^{j,k-l}-\beta_{2,p,q}^{j,l}\gamma_{2,m-p,n-q}^{j,k-l}\right]\right.\;\;\;\;\;\;\nonumber\\
&& \left.+ l^* m\beta_{2,m,n}^{j,k} -{\cal C}_{b}\,
\alpha_{m,n}^2 \gamma_{1,m,n}^{j,k}\right)/(1+k),\\
&& \gamma_{2,m,n}^{j,k+1} \nonumber \\
&=& \left(-\sum_{p=-M}^{M}\sum_{q=-N}^{N}C_{m,n,p,q}
\sum_{l=0}^{k}\left[\beta_{1,p,q}^{j,l}\gamma_{2,m-p,n-q}^{j,k-l}+\beta_{2,p,q}^{j,l}\gamma_{1,m-p,n-q}^{j,k-l}\right]\right.\;\;\;\;\;\;\nonumber\\
&& \left. - l^* m\beta_{1,m,n}^{j,k} - {\cal C}_{b}\, \alpha_{m,n}^2\gamma_{2,m,n}^{j,k}\right)/(1+k).
\end{eqnarray}
\end{small}
Here
\begin{eqnarray}
\beta_{1,m,n}^{j,0}=\Psi_{1,m,n}^{(j)}=\Psi_{1,m,n}(t_j), \\
\beta_{2,m,n}^{j,0}=\Psi_{2,m,n}^{(j)}=\Psi_{2,m,n}(t_j),\\
\gamma_{1,m,n}^{j,0}=\Theta_{1,m,n}^{(j)}=\Theta_{1,m,n}(t_j),\\
\gamma_{2,m,n}^{j,0}=\Theta_{2,m,n}^{(j)}=\Theta_{2,m,n}(t_j).
\end{eqnarray}
\section{Results}
\subsection{Modeling and numerical simulation}
To investigate the physics of randomness in turbulence, we focus on a two-dimensional Rayleigh-B\'{e}nard (RB) model system\cite{Rayleigh1916, Saltzman1962, Getling1998}, as shown in Fig.~1. Without loss of generality, we consider the case with the aspect ratio $\Gamma = L/H = 2\sqrt{2}$, Prandtl number $Pr = 6.8$ (water) and Rayleigh number $Ra =10^{7}$, corresponding to a linearly unstable case.
As in the 2D Poiseuille flow instability analysis by Wang et al.~\cite{Wang-PNAS-2015}, we model the inevitable thermal fluctuation as Gaussian white noise. It should be mentioned that such Gaussian white noise is more than a mere random input: it physically represents the thermal fluctuation in the nonlinear fluid system. To ensure numerical accuracy, CNS is adopted to simulate the evolution of the micro-level thermal fluctuation, which is much less than the numerical noises of DNS. We set here the double Fourier expansion modes as $M = N = 127$, multiple-precision data with 100 digits, and the 10th-order ($P=10$) truncated Taylor series in time with the step size $\Delta t = 5 \times 10^{-3}$.
To verify the correctness of our CNS algorithm, we first calculated the Nusselt number for several Rayleigh numbers slightly larger than $Ra_c$ with $M=N=31$. The results can be fitted as
\begin{eqnarray}
Ra &=& 17.934 (Nu-1)^4 +52.599(Nu-1)^3\nonumber\\
&+& 131.01(Nu-1)^2 +330.66(Nu-1) \nonumber\\
&+& 657.46,
\end{eqnarray}
as shown in Fig.~\ref{VerifCNS}. As $Nu\to 1$, the critical Rayleigh number can be estimated as $Ra_c \approx 657.46$, which agrees very well with the theoretical value $Ra_c = 657.5$.
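As a quick consistency check (our arithmetic), the theoretical value for free-slip boundaries is
\[ Ra_c=\frac{27\pi^4}{4}\approx 657.51, \]
so the fitted constant term $657.46$ reproduces it to within about $0.01\%$.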
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.75\textwidth]{FIGURE2.eps}
\end{tabular}
\caption{{\bf Verification of CNS algorithm}. The Nusselt number $Nu$ is calculated by CNS (in symbol) with the double Fourier expansion modes $M = N = 31$ at different Rayleigh numbers above the critical value $Ra_c = 27\pi^4/4$. From the curve fitting (line) the estimated critical Rayleigh number agrees well with the theoretical value.} \label{VerifCNS}
\end{center}
\end{figure}
Furthermore, to check the reliability of the CNS, the results from different orders of the Taylor series in time, e.g. $P=10$ and $P=12$, are compared at three probe points ($3L/4, H/10$), ($3L/4, 2H/5$) and ($3L/4, H/2$). Considering the butterfly-effect of the nonlinear dynamic system, reliable results for the temperature $\theta$ (departure from a linear variation background) field and the velocity field require that the deviations using the {\em same} initial condition must be much less than their respective spatial root mean squares, i.e. $\theta_{RMS}(t)$ and $\sqrt{E_{RMS}(t)}$. As shown in Fig.~\ref{CNS}, at all probe points the nondimensionalized deviations are 10 orders of magnitude less than unity, while the corresponding deviations from DNS are about 15 orders of magnitude larger, far too large for DNS to work adequately. Therefore, the CNS results obtained from the 10th-order Taylor series are reliable in the time interval $t\in[0,50]$.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.8\textwidth]{FIGURE3.eps}
\end{tabular}
\caption{{\bf Reliability check of the CNS results}. The results are for the case A (of the initial thermal fluctuation) with the Rayleigh number $Ra = 10^{7}$ and the double Fourier expansion modes $M = N = 127$ at three probe points: ({\bf a}) ($3L/4, H/2$), ({\bf b}) ($3L/4, 2H/5$) and ({\bf c}) ($3L/4, H/10$). The curves denote the dimensionless deviations of $\Delta^\theta_{10} = |\theta_{P=12}-\theta_{P=10}|/\theta_{RMS}$ ({\bf left}) and $\Delta^V_{10} = |V_{P=12}-V_{P=10}|/\sqrt{E_{RMS}}$ ({\bf right}), which are much less than the micro-level thermal fluctuation. Here $P$ is the order of the Taylor series in time; $\theta_{RMS}$ is the spatial root mean square of $\theta$ (the temperature departure from a linear variation background); $E_{RMS}$ is the spatial root mean square of the kinetic energy $(u^2+w^2)/2$, respectively.} \label{CNS}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.7\textwidth]{FIGURE4.eps}
\end{tabular}
\caption{{\bf Evolution of the $\theta$ (temperature departure from a linear variation background) field}. The results are for ({\bf a}) $t=0$; ({\bf b}) $t=2$; ({\bf c}) $t=8$; ({\bf d}) $t=28$ and ({\bf e}) $t=31$, with the Rayleigh number $Ra = 10^{7}$ and the double Fourier expansion modes $M = N = 127$. Case A and case B have different initial micro-level randomness, generated by the same variance of temperature $\sigma_T=10^{-10}$ and velocity $\sigma_u=10^{-9}$.} \label{structure}
\end{center}
\end{figure}
\subsection{Evolution of the flow structure}
Although the RB convection is modeled theoretically as isolated from external disturbances, randomness at the microscopic level still exists because of the molecular thermal fluctuation. In different CNS cases the initial temperature and velocity fields are randomly generated as Gaussian white noise, with the same temperature variance $\sigma_T=10^{-10}$ and velocity variance $\sigma_u=10^{-9}$, respectively. As shown in Fig.~\ref{structure} (a), such a tiny difference in the initial condition is negligibly small with respect to the background fields at the macroscopic level, and thus the initial states can be regarded as the {\em same} from the physical viewpoint. As time increases, the field structures and scales evolve rapidly. Clear large-scale patterns appear even at the very early stage, as shown in Fig.~\ref{structure} (b) for $t=2$, although their magnitudes are still imperceptibly small. In the following stage, e.g. at $t=8$ as in Fig.~\ref{structure} (c), the large-scale structures become more and more distinct. Interestingly, these intermediate structures remain stable over a long interval up to $t=28$, as shown in Fig.~\ref{structure} (d), while the field energy increases continuously. At a critical point, once the field is too energetic to remain stable, these large-scale structures disintegrate abruptly, leading to the turbulent state shown in Fig.~\ref{structure} (e) at $t=31$. Note that the two flow structures in Fig.~\ref{structure} (e) are sharply different, which must originate from the different initial microscopic randomness due to thermal fluctuation.
We emphasize that CNS can achieve reliable results in a prescribed time interval with the numerical inaccuracy much less than the physical uncertainty, while DNS fails because of the butterfly effect. Therefore the evidence provided by CNS indicates that turbulence in the Rayleigh-B\'{e}nard convection problem can be self-excited or arise `out of nothing'~\cite{Tsinober2009}, i.e. the origin of randomness in fluid turbulence is intrinsic.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.6\textwidth]{FIGURE5.eps}
\end{tabular}
\caption{{\bf Evolution of the kinetic and thermal energy at different scales}. The results are for case A with Rayleigh number $Ra = 10^{7}$ and the double Fourier expansion modes $M = N = 127$, where $E$ and $E_\theta$ denote the total kinetic and thermal energy, respectively, and $k$ is the wave number. Lines in black: kinetic energy; lines in red: thermal energy. The points $A$ (black dot) and $B$ (red dot) mark the transition at which the nonlinear interaction (with the large-scale components) becomes strong enough to dominate the evolution process.} \label{energy}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{FIGURE6.eps}
\end{tabular}
\caption{{\bf Evolution of the normalized kinetic energy spectrum}. The results are for case A with Rayleigh number $Ra = 10^{7}$ and the double Fourier expansion modes $M = N = 127$. ({\bf a}) Initially, energy shifts from small scales to larger ones, where the spatial structure remains stable during most of the evolution process. ({\bf b}) When the turbulence transition occurs, the large scales disintegrate and energy shifts inversely from large scales to small ones.} \label{spectrum}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.7\textwidth]{FIGURE7.eps}
\end{tabular}
\caption{{\bf Correlation between the $\theta$ field and $w$ field}. Here $\theta$ denotes the temperature departure from a linear variation background and $w$ is the velocity component along the opposite gravity direction, respectively. The results are for ({\bf a}) $t=0$; ({\bf b}) $t=2$; ({\bf c}) $t=8$; ({\bf d}) $t=28$ and ({\bf e}) $t=31$, with the Rayleigh number $Ra = 10^{7}$ and the double Fourier expansion modes $M = N = 127$.} \label{Correlation}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.6\textwidth]{FIGURE8.eps}
\end{tabular}
\caption{{\bf Evolution of the correlation coefficient and its PDF}. ({\bf a}) Evolution of the correlation coefficient $C(t)$ between $w$ and $\theta$ for the case A with $Ra = 10^{7}$ and the double Fourier expansion modes $M = N = 127$. From the initial state $C(t)$ increases rapidly (for $t<1$) from $C\sim 0$ because of the initial random independence to $C\sim 1$, corresponding to a strong correlation, which then remains till the transition to turbulence. ({\bf b}) Change of the PDF of the normalized kinetic energy source (for the case A with $Ra = 10^{7}$). The PDF is initially close to be symmetric and evolve rapidly to be positively skewed. At the turbulence state, the PDF returns to be symmetric, but broadens largely.} \label{Evolution}
\end{center}
\end{figure}
\subsection{Evolution of energy}
Considering the energy evolution, as shown in Fig.~\ref{energy}, both the total kinetic energy ($E$) and thermal energy ($E_\theta$) increase exponentially with identical slopes from the very beginning till the onset of turbulence at about $t = 31$, where a balance between energy absorption and dissipation is reached. For the individual components, the energy evolution is strongly dependent upon the wave number $k$. Most of the energy is contained in the large-scale modes ($k<3$). As the wave number increases, energy first increases exponentially with a smaller growth rate, but then increases superexponentially after a critical point, e.g. $A$ (or $B$) at about $t=22$ for $k=15$, till the onset of turbulence. For modes with even larger wave numbers such as $k=25$, energy decays initially, and then increases after its critical point.
The energy change process can also be studied from the normalized kinetic energy spectrum. As shown in Fig.~\ref{spectrum}~(a), initially, because of the strong influence of thermal fluctuation, the kinetic energy of the higher wave number components decays rapidly, so that the spectrum recedes toward the large-scale side. Such a large-scale dominant state remains till about $t=26$, when the spectrum begins to expand because more small-scale modes are excited, as shown in Fig.~\ref{spectrum}~(b). During the unstable evolution process, the system gains energy from the background potential and stores most of the energy at the small wave number end.
Unstable evolution processes, including the RB convection, involve the following two interactions:
\begin{itemize}
\item Interaction between different scales (modes) due to nonlinearity.
\item Interaction between individual scales with the potential background, e.g. the mean temperature gradient in the RB convection.
\end{itemize}
\subsection{Channel of energy and randomness information transport}
Generally, let $\dot{E}_{bg}(k)$, $\dot{E}_{nl}(k)$ and $\dot{E}_{dissip}(k)$ denote, for the mode with wave number $k$, the growth rate of energy absorbed from the potential background, the energy growth rate due to the nonlinear interaction, and the energy dissipation rate, respectively. As commented by Landau~\cite{Landau}, the essence of transition to turbulence is an increase of the number of the excited modes (degrees of freedom), i.e.
\begin{equation}
\dot{E}_{bg}(k)+\dot{E}_{nl}(k) > \dot{E}_{dissip}(k) \label{criterion}
\end{equation}
holds for large enough wave number $k$.
According to the instability theory \cite{Lin, Orszag1980}, {\em all} modes absorb energy exponentially from the potential background. Starting from the initial microscopic thermal randomness, the system nonlinearity disperses the disturbance into smaller and larger scale components. At this stage, $\dot{E}_{nl}(k)$ is insignificant for all scales because of the tiny thermal fluctuation, i.e. $\dot{E}_{nl}(k) \approx 0$. For the large-scale modes ($k<3$), energy dissipation is negligible, i.e. $\dot{E}_{dissip}(k) \approx 0$, so Eq.~\eqref{criterion} always holds, which explains the exponential growth in Fig.~\ref{energy}. As $k$ becomes larger, the energy dissipation is stronger and decreases the energy growth rate. If $k$ is even larger (such as $k=25$), the energy dissipation is so strong that $\dot{E}_{bg}(k)+\dot{E}_{nl}(k)< \dot{E}_{dissip}(k)$, which explains the initial decay of the component energy, as shown in Fig.~\ref{energy} and Fig.~\ref{spectrum} (a).
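The $k$-dependence of the dissipation can be made explicit from the linear damping terms in the mode equations: for an isolated mode, a rough estimate (our sketch, neglecting the nonlinear coupling) gives
\[
\dot{E}_{dissip}(m,n)\simeq 2\,{\cal C}_a\,\alpha_{m,n}^2\,E(m,n),
\qquad
\dot{E}_{\theta,dissip}(m,n)\simeq 2\,{\cal C}_b\,\alpha_{m,n}^2\,E_\theta(m,n),
\]
i.e. the damping rate grows quadratically with the wave number, which is why a crossover wave number separates the initially growing modes from the initially decaying ones.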
As the system evolves to be more energetic, the nonlinear interaction part $\dot{E}_{nl}(k)$ becomes more important to transport energy from larger scales to smaller ones. The turning point $A$ (or $B$) in Fig.~\ref{energy} for $k=15$ indicates the dominance of $\dot{E}_{nl}(k)$. When the strong nonlinear interaction propagates toward the larger wave number components, more small-scale modes are excited, as shown in Fig.~\ref{spectrum} (b), which justifies Landau's picture~\cite{Landau}.
Such a background interaction $\mapsto$ nonlinear scale interaction $\mapsto$ scale dispersion scenario explains the structure and randomness evolution as well. As shown in Fig.~\ref{structure}, at the early stage the field changes rapidly to form a large-scale skeletal structure, which remains geometrically stable with continuous growth of the total kinetic and thermal energy till the transition to turbulence. Although initially the nonlinear part is negligibly small, it is still vital for information exchange in the following sense. Initial randomness at the microscopic level is transferred to the large-scale modes via the nonlinear interaction. Consequently such randomness information is inherited by the large-scale modes, and survives with the evolution of these energy-containing modes. When the nonlinear interaction is strong enough, the structure information of the large-scale modes can be transferred back to small-scale modes. This randomness transfer mechanism may be important for understanding ergodicity in turbulence at different scales. In summary, the system nonlinearity in the unstable evolution process functions not only as an instability excitation, but more as a channel to transport randomness information, and energy as well, from the microscopic to the macroscopic level.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.7\textwidth]{FIGURE9.eps}
\end{tabular}
\caption{{\bf Evolution of the $\theta$ (temperature departure from a linear variation background) field}. Here the Rayleigh number is $Ra = 2000$ and the double Fourier expansion modes $M = N = 31$. The results are for ({\bf a}) $t=0$; ({\bf b}) $t=0.5$; ({\bf c}) $t=1$; ({\bf d}) $t=5$; ({\bf e}) $t=50$; ({\bf f}) $t=400$. Case~1 and case~2 have different initial micro-level randomness due to thermal fluctuation, generated by the same variance of temperature $\sigma_T=10^{-10}$ and velocity $\sigma_u=10^{-9}$.} \label{RaEq2000}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.7\textwidth]{FIGURE10.eps}
\end{tabular}
\caption{{\bf Comparison of the evolution of $\theta$ (temperature departure from a linear variation background) given by the CNS and DNS}. Here, the Rayleigh number is $Ra = 10^{7}$. Left: CNS results, obtained using the same parameters as in Figure 4 (Case A); Right: DNS results, obtained by means of the code DEDALUS using the resolution grid $M=N=127$, the initial time step $dt = 0.005$, $cfl = 0.2$ and the same initial guess as that of the CNS (Case A).} \label{comparison-CNS-DNS}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.7\textwidth]{FIGURE11.eps}
\end{tabular}
\caption{{\bf Reliability check of the numerical results given by the CNS and DNS}. 1) for DNS (dash line)
$\Delta^\theta = |\theta_{cfl=0.2}-\theta_{cfl=0.1}|/\theta_{RMS}$
and $\Delta^V = |V_{cfl=0.2}-V_{cfl=0.1}|/\sqrt{E_{RMS}}$; 2) for CNS (solid line) $\Delta^\theta = |\theta_{P=12}-\theta_{P=10}|/\theta_{RMS}$
and $\Delta^V = |V_{P=12}-V_{P=10}|/\sqrt{E_{RMS}}$. $\theta_{RMS}$ and $E_{RMS}$ are the root mean squares of the temperature $\theta$ and the kinetic energy $E=(u^2+w^2)/2$ in the domain $x\in[0,L], z\in[0,H]$. (a) At probe point ($3L/4, H/10$);
(b) At probe point ($3L/4, 2H/5$); (c) At probe point ($3L/4, H/2$).} \label{error-DNS}
\end{center}
\end{figure}
\subsection{Some additional results}
Physically, the correlation between the velocity and temperature fields accounts for the source of the unstable evolution. Considering the kinetic energy budget, the source term is proportional to $w\theta$, where $w$ is the velocity component opposite to the gravity direction. Fig.~\ref{Correlation} shows the individual fields of $w$ and $\theta$ at different instants. Starting from the initial random states, these two fields rapidly evolve to be highly similar till the occurrence of turbulence.
The relation between these two field structures can be further quantified by the correlation coefficient $C(t)$ over the entire domain, as shown in Fig.~\ref{Evolution} ({\bf a}). Initially $C\sim 0$ because of the random independence. Very rapidly, $C(t)$ increases almost to $1$ and remains invariant till turbulence occurs. The large value $C(t)\sim1$ indicates that the $\theta$ field is almost perfectly correlated with the $w$ field. After the transition to turbulence, $C(t)$ plunges and then fluctuates, but on average still remains above zero to balance the kinetic energy dissipation. Interestingly, the two $C(t)$ curves for different initial settings almost collapse till the onset of turbulence, as the inherent micro-level uncertainty evolves into the macroscopic randomness.
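The quantity plotted in Fig.~\ref{Evolution}~({\bf a}) is the standard domain-wide (Pearson) correlation coefficient; a minimal sketch (our illustration with random stand-in fields, not the production code) is:
\begin{verbatim}
import numpy as np

def correlation(w, theta):
    """Pearson correlation of two 2D fields over the whole domain."""
    wf, tf = w - w.mean(), theta - theta.mean()
    return (wf*tf).mean()/np.sqrt((wf**2).mean()*(tf**2).mean())

rng = np.random.default_rng(2)
w     = rng.normal(size=(128, 256))  # stand-in for the vertical velocity field
theta = rng.normal(size=(128, 256))  # stand-in for the temperature departure
print(correlation(w, theta))         # ~ 0 for independent fields, as at t = 0
\end{verbatim}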
More details can be viewed from the probability density function (PDF) of $w\theta$, which is shown in Fig.~\ref{Evolution} ({\bf b}), normalized by the overall mean of the instantaneous kinetic energy, i.e. $E_{RMS}(t)$. Initially, the parts with negative and positive $w\theta$ are almost equal in size, which indicates that the net contribution is close to zero. However, in the evolution process the negative part shrinks rapidly and the PDF becomes strongly skewed toward the positive side, corresponding to a significant net contribution from the source term. Once the flow transitions to turbulence, the PDF becomes symmetric again, but broadens greatly, corresponding to the strong macroscopic fluctuation inside the flow.
Moreover, it is also found from the present CNS results that the initial micro-level randomness cannot be amplified when the Rayleigh number $Ra$ is below the critical value $Ra_c = 27\pi^4/4$.
Even at $Ra = 2000 > Ra_c$, instability triggers the transition to a steady large-scale laminar flow {\em without} macroscopic randomness, as shown in Fig.~\ref{RaEq2000}. Therefore, instability due to the system nonlinearity seems to be a necessary, but not sufficient, condition for the evolution of intrinsic macroscopic randomness.
\section{Conclusion and discussions}
Following Lorenz~\cite{Lorenz1963}, we propose here a so-called `thermal fluctuation effect' to summarize the origin of intrinsic randomness in the Rayleigh-B{\'e}nard convection system: a tornado can be created and its path can be ultimately altered by the intrinsic thermal fluctuation, {\em without} any external disturbances, even the wing flap of a butterfly. Methodologically, we also emphasize the reliability of CNS, since the numerical noise can be controlled to a level even much lower than the microscopic thermal fluctuation. Although more expensive than DNS, CNS may open a new direction to understand the behaviors of turbulence.
Wolfram \cite{Wolfram2002} mentioned that the Lorenz equations with the famous butterfly-effect are highly simplified and thus do not contain terms that represent viscous effects. So, he believed that these terms would tend to damp out small perturbations. However, our CNS results indicate that the viscous effects of the NS equations cannot remove their sensitive dependence on the initial conditions (SDIC). Thus, not only the Lorenz equations but also the full NS equations possess the property of sensitive dependence on initial conditions, i.e. the butterfly-effect, which implies that numerical noises should have a significant influence on numerical simulations of turbulence.
The open DNS code DEDALUS \cite{lecoanet2016}, which is available via http://dedalus-project.org/, is used to solve the same case (with Rayleigh number $Ra = 10^{7}$). The DNS solution behaves very differently: the transition to turbulence begins at about $t=19$, which is much earlier than the onset of turbulence given by the CNS, i.e. $t \approx 28$, as shown in Fig.~\ref{comparison-CNS-DNS}. Besides, the DNS result is strongly dependent upon control parameters (e.g. the time step). It is easy to understand this numerical phenomenon, since the numerical noises of the DNS are much larger than those of the CNS, and quickly grow to the same order of magnitude as the background temperature field, as shown in Fig.~\ref{error-DNS}. In other words, due to the butterfly effect, the numerical noises of the DNS themselves become a large source of uncertainty. In this sense, the numerical uncertainty of DNS might be large enough to overwhelm the physical fidelity. Strictly speaking, the feasibility of DNS for the non-equilibrium turbulence evolution still remains an open question: current CNS results suggest that the numerical noises might have a significant influence even on statistics of chaotic dynamic systems in non-equilibrium \cite{Li2016-B}.
In addition, we also use the CNS to solve the Landau-Lifshitz Navier-Stokes (LLNS) equations \cite{Landau1959, Graham1974, Swift1977, Bell2010} for the same case (with Rayleigh number $Ra = 10^{7}$), in which additional white-noise fluxes are integrated into the NS equations. For the sake of brevity, mathematical details are omitted here. The numerical results are qualitatively the same as those based on the NS equations. Even quantitatively, they are close as well: the onset of turbulence for the LLNS equations occurs at about $t=27$, a little earlier than $t=28$ for the NS equations. Physically, this is reasonable, since the thermal fluctuation always exists in the LLNS equations, but only exists at the beginning in the NS solution. This suggests that the NS equations plus a random initial condition due to thermal fluctuation could be a good approximation of the LLNS equations for the RB convection under consideration.
Note also that the thermal fluctuation propagates much more slowly than the numerical noises. So, it should be rather difficult to accurately simulate the evolution of thermal fluctuation by means of the DNS.
All of these suggest that the CNS could provide us with a new, more precise tool to investigate complicated nonlinear dynamic systems with sensitive dependence on initial conditions and numerical noises/algorithms, in which significant interactions occur at different scales ranging from microscopic to macroscopic, although its wide application might require a new generation of computers in the future.
\section*{Acknowledgment}
This work is partly supported by the National Natural Science Foundation of China (approval numbers 11272209 and 11432009). The parallel algorithms were performed on TH-1A at the National Supercomputer Centre in Tianjin, China. Thanks to Jing Li for producing the DNS results with the open-source code DEDALUS.
\section*{References}
\section{Introduction}
Novel beam-intercepting materials and targetry concepts are essential to improve the performance, reliability and operation lifetimes of next generation multi-megawatt (multi-MW) accelerator target facilities. The beam-intercepting materials and components must sustain an order-of-magnitude increase in particle beam intensities, which is beyond the current state-of-the-art. With conventional materials and targetry technologies already limiting the scope of experiments \cite{Hylen2017,Kramer2014,Hasegawa2017}, it is crucial to investigate novel materials and concepts that will satisfy the requirements and maximize the physics benefits of future energy and intensity frontier experiments. High-power target innovation and novel materials R\&D are necessary to enable and ensure reliable operation of future target facilities.
In addition to the U.S. planned program for High Energy Physics (HEP) target facilities and upgrades such as the 2.4 MW Long-Baseline Neutrino Facility (LBNF) and muon-to-electron-conversion II experiment (Mu2e-II), international HEP program plans also feature challenging accelerator target facilities and related beam-intercepting devices. These include the Beam Dump Facility (BDF), HiLumi-LHC and Future Circular Collider (FCC) collimators at CERN, the T2K, COMET, and hadron experimental facilities at J-PARC, and beam-intercepting devices for the proposed International Linear Collider (ILC). Non-HEP accelerator target facilities, involving neutron sources, waste transmutation and nuclear physics applications, also face similar challenges and include the 5-MW European Spallation Source, the Spallation Neutron Source (SNS) proton power upgrade project at Oak Ridge National Laboratory, the Facility for Rare Isotope Beams (FRIB) at Michigan State University, and the Material and Life Science Experimental Facility (MLF) at J-PARC. Meeting the demands of these accelerator target facilities and their planned beam power upgrades is of great concern to these institutions. Significant R\&D of novel targetry materials and concepts beyond the current state-of-the-art is therefore essential to enable these next-generation accelerator facilities.
\section{Challenges of beam-intercepting devices}
Beam-intercepting devices such as beam windows, beam dumps, collimators and particle production targets are designed to absorb the energy and power of the particle beam in order to produce and deliver the particles of interest to particular experiments. These devices are engineered to withstand the challenging beam-induced thermomechanical loads and optimized for physics performance. The continuous bombardment of these components by high-energy, high-intensity pulsed beams poses serious challenges to the operation and maintenance of target facilities. Beam-induced thermal shock and radiation damage effects in materials have been identified as the leading cross-cutting challenges facing high-power target facilities \cite{Hurh2012}.
Thermal shock phenomena arise in beam-intercepting materials as a result of localized energy deposition by very short beam pulses (1-10 $\mu$s). The rapidly heated core volume of the material in the beam spot region expands but is constrained by the surrounding cooler target material. This condition creates a sudden localized region of compressive stress that propagates as stress waves through the material at sonic velocities after each beam pulse. If the initial dynamic stress exceeds the yield strength of the material, it will permanently deform and eventually fail. In addition, the cyclic loading environment from the pulsed beam progressively damages the material's microstructure such that it can ultimately fail at stress levels that are actually lower than its failure strength (fatigue failure).
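A common first estimate of the severity of this loading (our addition, not from this whitepaper) is the fully constrained quasi-static thermal stress
\[
\sigma_{th}\sim\frac{E\,\alpha\,\Delta T}{1-\nu},
\]
where $E$ is the elastic modulus, $\alpha$ the coefficient of thermal expansion, $\Delta T$ the pulse-induced temperature rise and $\nu$ Poisson's ratio; comparing $\sigma_{th}$ (and the dynamic stress waves it seeds) with the yield and fatigue strengths of a candidate material provides a first screening criterion.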
The bulk material properties change as a result of radiation damage in highly irradiated materials. Radiation damage disrupts the lattice structure of the material through the displacement of atoms, transmutation, and gas production after sustained particle beam bombardment. Irradiation-induced defects such as dislocation loops, point-defect clusters, fine-scale precipitates and voids that accumulate at the microstructural level ultimately affect the bulk properties of the material. Typical bulk property effects include embrittlement, hardening, swelling, reduction of thermal conductivity, and an increase in diffusion-dependent phenomena such as segregation of impurities and phase transformation \cite{Kiselev2016}, all of which are critical for the reliable and safe operation of beam-intercepting devices. As beam power and intensity increase, there is a pressing need to explore novel radiation-damage and thermal-shock tolerant beam-intercepting materials.
The removal of heat deposited into the beam-intercepting material upon interaction with the beam is another key challenge facing multi-MW beam-intercepting devices. Heat deposition in the material will increase as beam power increases, and therefore more effective cooling systems will be needed to safely operate these devices and to avoid compromising the physics performance. Liquid or gas cooling media are typically used to remove heat via forced convection from the boundary material. However, cooling efficiency from forced convection is largely dependent on the available cooling surface area, which is usually driven by the desired physics performance (target size). Therefore, the limitation of forced convection cooling may impose significant constraints on the physics performance of future multi-MW devices. As a result, there is a real need to explore alternative advanced cooling technologies for next-generation beam-intercepting devices.
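The area limitation can be made concrete with Newton's law of cooling, $Q = hA\Delta T$. The sketch below, with illustrative values that are not tied to any specific facility, estimates the surface area needed to reject a given heat load by forced convection:

\begin{verbatim}
# Forced-convection sizing estimate via Newton's law of cooling, Q = h*A*dT.
# All values are illustrative assumptions.
Q_load  = 100e3   # heat load to remove [W]
h       = 1.0e4   # assumed convective coefficient for water [W/(m^2 K)]
dT_film = 50.0    # allowed surface-to-coolant temperature difference [K]

A_required = Q_load / (h * dT_film)   # [m^2], ~0.2 m^2 here
print(f"required cooling area ~ {A_required:.2f} m^2")
\end{verbatim}

When the physics-driven target dimensions are only a few centimeters, an area of this size may simply not be available, which is precisely the constraint described above.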
In addition to developing novel materials and understanding their behavior under high-energy high-intensity beam irradiation conditions, the development of novel targetry concepts and technologies is required to advance the design, fabrication and reliable operation of future multi-MW target facilities. Forward-looking concepts and technologies will push the current state-of-the-art and fully harness the benefits of novel materials to maximize the physics benefits of future experiments. The novel targetry concepts and technologies include pebble-bed, flowing and granular targets, rotating targets, liquid targets, specifically shaped targets to diffuse or damp stress waves, variable density targets taking advantage of additive manufacturing techniques, advanced cladding technologies, high heat-flux cooling and novel material coatings.
The following sections of this whitepaper discuss the current and novel targetry materials, concepts and technologies that will need to be explored and developed further over the next few years in order to address the challenges facing high-power beam-intercepting devices. Significant coordinated R\&D in these areas will be necessary to enable the safe and reliable operation of several future multi-MW target accelerator facilities.
\section{Novel targetry materials}
This section describes candidate novel materials that need to be explored and developed for specific beam-intercepting devices, capable of sustaining the increased beam power and intensity of future accelerator facilities.
\subsection{High-Entropy Alloys}
As opposed to the majority of relevant alloys used today (steels, aluminum alloys, titanium alloys, etc.), high-entropy alloys (HEAs) represent a fundamental departure from conventional metallurgy methodologies. Whereas conventional alloys are typically composed of one or two principal elements with their properties tuned by small alloying additions, HEAs consist of several principal elements, often present in near-equimolar quantities. The result is a material with a structure and properties that are not dictated primarily by a single element, but rather behave as an average of each principal constituent element. Beginning with two seminal papers in 2004, researchers Brian Cantor and Jien-Wei Yeh independently initiated a line of research that would grow exponentially over the next 15 years \cite{Cantor2004, Yeh2004}. Motivating their alloy design was the pursuit of maximizing configurational entropy in expanded alloy systems, in the hope that this increase in entropy would be sufficient to suppress the formation of intermetallic compounds, which are often deleterious to the mechanical properties of materials. Yeh is credited with coining the term “high-entropy alloy”, which he originally defined as equimolar alloys of five or more principal elements.
Over the years, HEAs have been demonstrated to exhibit a broad range of promising properties for both structural and functional materials, including markedly higher yield strengths than Ni-based superalloys at high temperatures, fracture toughness comparable to the best cryogenic steels at low temperatures, high-temperature oxidation resistance, and superplastic behavior, as well as efficient catalysis of H$_2$ and CO$_2$ and the largest magnetocaloric effect experimentally observed in a material. By far the most relevant and intriguing property of HEAs for accelerator target applications is their microstructural response to irradiation damage. Many experimental studies have shown that certain HEA compositions outperform their less compositionally complex counterparts under irradiation \cite{Ullah2016, Yang2019, Tong2018, Atwani2019, Lu2016, Jin2016}, especially when comparing void swelling behavior. To explain the apparent irradiation tolerance exhibited by HEAs, many mechanisms have been proposed in the literature, the most commonly accepted being (i) higher recombination during the damage cascade, (ii) sluggish diffusion of point defects, (iii) broadening/overlapping defect migration energies, and (iv) sluggish diffusion of interstitial loops/clusters. While each of these proposed mechanisms has been supported by some form of modeling and simulation, it remains unclear which mechanism plays the dominant role in determining the performance of HEAs under irradiation.
HEAs therefore offer a unique opportunity to explore a broader range of novel radiation-damage resistant alloy systems with functional properties specific to accelerator beam-intercepting applications.
\subsection{Electrospun nanofiber materials}
Nanofiber materials offer promising applications as future multi-MW targets as they are intrinsically tolerant to both thermal shock and radiation damage. Recently Fermilab has engaged in the design and development of target materials with a sinuous microstructure consisting of ceramic nanofibers produced with a unique electrospinning set-up. Since the continuum is physically discretized at the microscale, issues such as thermal shock, thermal stress cycles and local heat accumulation can be mitigated \cite{Bidhar2021}. The microstructure consists of a large number of randomly oriented one-dimensional nanofibers less than a hundred nanometers in diameter. Since the diameter of an individual nanofiber is many orders of magnitude smaller than the beam spot size, there is essentially no temperature gradient across the nanofiber cross-section. Moreover, the many gaps between individual nanofibers prevent compressive stress waves from propagating. The large surface area of individual nanofibers and the porosity in the microstructure of the bulk nanofiber mat would also offer better heat removal from the beam center. In-beam tests of select nanofiber material specimens at CERN's HiRadMat facility revealed promising evidence of enhanced thermal shock resistance.
Owing to the nanopolycrystalline grains in individual nanofibers, it is hypothesized that they would also offer better resistance to radiation damage. The large number of grain boundaries and free surfaces would act as sinks for irradiation-induced defects. The submicron diameter and ubiquitous grain boundaries of nanofibers reduce the mean free path for low-solubility transmutation products. Helium gas that forms due to high-energy proton beam interactions with the material can thus escape, avoiding the swelling that can cause the bulk material to crack. In order to evaluate their resistance against displacement damage, some of these nanofibers were irradiated under a 1 MeV Kr++ beam at Argonne National Laboratory's IVEM-Tandem facility. The samples received a dose equivalent to 5 DPA in stainless steel. Selected area diffraction patterns (SADP) in TEM before and after irradiation show no new peaks, indicating phase stability of these nanofibers under irradiation. Comparison of d-spacing plots before and after irradiation also shows no change in d-spacing or peak location, implying no change in lattice parameters and no amorphization. The dark- and bright-field TEM images likewise show no dislocation loops or clusters. However, more systematic studies are needed on the nanofiber material in order to correlate radiation damage and fluence effects caused by high-energy proton beams and low-energy heavy-ion beams.
\subsection{SiC-coated graphite and SiC-SiC composites}
Graphite shows extremely high performance when used in proton beam target applications due to its thermal and mechanical properties and chemical stability. However, graphite is also easily oxidized at high temperatures. If air is unexpectedly introduced into the primary beam line during high-power beam operation, the graphite target can rapidly oxidize, and the resulting oxidation contaminants complicate recovery procedures and extend downtimes. As an alternative to graphite, it is therefore important to develop a material that is more resistant to oxidation.
Recently, work to investigate Silicon Carbide (SiC) coated graphite, an excellent candidate because of its good heat resistance and high oxidation resistance, has begun. Under the Radiation Damage In Accelerator Target Environments (RaDIATE) collaboration \cite{RaDIATE}, a high-intensity proton beam exposure at 181 MeV energy was conducted at the Brookhaven Linac Isotope Producer facility on various material specimens for accelerator target and beam window applications. The experiment included SiC-coated graphite as a future target material for US and Japanese high-intensity proton accelerator facilities. The radiation damage levels in the SiC and graphite reached 0.24 and 0.05 DPA, respectively. Post-Irradiation Examination (PIE) testing of the irradiated specimens has been conducted at Pacific Northwest National Laboratory.
Nano-powder Infiltration and Transient Eutectoid (NITE) SiC/SiC composite, developed for fusion and fission reactors at Muroran Institute of Technology, is another excellent candidate target material because it is significantly denser than graphite \cite{Kohyama2011}. Monte Carlo simulations estimate higher secondary-particle transport efficiency for some experiments because the spatial volume of the source is reduced \cite{Makimura2020_1}. The NITE SiC/SiC exhibits much higher oxidation resistance than graphite \cite{Park2018} and shows a pseudo-ductile behavior, which enables it to withstand considerably higher stresses up to the fracture strength. The NITE SiC/SiC was also irradiated and examined at CERN's HiRadMat facility \cite{Maestre2022}.
\subsection{Toughened Fine-Grained Recrystallized (TFGR) tungsten}
Tungsten (W) is a principal candidate target material because of its high density and extremely high melting point. The use of W can provide ten times higher muon/neutron brightness than current target materials \cite{Makimura2020_2}. While promising as a target material for proton accelerators, W inherently has a critical disadvantage due to its brittleness at around room temperature. The low-temperature brittleness can be avoided by heavy plastic working, although such working can be sufficiently applied only to filaments or hot-rolled thin plates and its effect also depends on the working direction. Even if this brittleness is alleviated, however, W exhibits significant embrittlement due to recrystallization, which occurs when W is heated at or above the recrystallization temperature, roughly one-third of the melting point. Moreover, it exhibits significant embrittlement under proton irradiation as well \cite{Linsmeier2017}. TFGR (Toughened, Fine-Grained, Recrystallized) tungsten alloy, which was originally developed at Tohoku University and whose technology has been transferred to KEK and Metal Technology Co., LTD, has grain-boundary-reinforced nanostructures to overcome this embrittlement \cite{Kurishita2013,Makimura2021}.
\subsection{Dual-phase titanium alloys}
Titanium alloys are among the most suitable materials for accelerator beam windows due to their low density and high strength \cite{Ishida2018}. In the J-PARC neutrino facility, thin domes of titanium 64 alloy (Ti-6Al-4V, Ti-64) cooled by helium are used as the primary beam window between the target station (helium) and the accelerator (vacuum) \cite{Ishida2019}, and a thin Ti-64 tube serves as an airtight container covering the entire graphite target \cite{Densham2009, Fishenden2019}. They have maintained stable operation so far without any serious failures. The same design concept will be applied to the LBNF target \cite{Papa2018, Wilcox2019}. In the FRIB rare-isotope beam facility, a rotating, water-cooled Ti-64 thin drum is used as the beam dump, and a water-cooled Ti-64 sheet is also being considered for the beam window of the ILC main beam dump.
Ti-64 is a titanium alloy that achieves a superior balance between strength and ductility through a fine equiaxed two-phase microstructure of $\alpha$ (HCP) and $\beta$ (BCC), obtained by elemental addition and thermomechanical treatment. It can be used in high-temperature environments up to about 300 $^{\circ}$C and has excellent corrosion resistance. However, under neutron or proton beam irradiation, the alloy hardens significantly and loses almost all of its ductility after only 0.1 DPA \cite{Mansur2008}. It has been suggested that this may be due to the embrittlement caused by the irradiation-induced $\omega$-phase in the $\beta$-phase matrix, in addition to the hardening caused by dense dislocation loops in the $\alpha$-phase matrix \cite{Ishida2020}. Therefore, under pulsed beam injection with the higher flux and higher repetition rate expected in the future, the possibility of fatigue failure due to thermal shock cannot be ruled out and should be addressed.
The requirements for high-strength titanium alloys as next-generation beam window materials are to retain sufficient strength and ductility even when irradiated to a few DPA at an operating temperature of about 200-300 $^{\circ}$C, and to achieve a service life of a few to several operational years. Based on the results of past research, the R\&D issues described below need to be addressed.
\begin{itemize}
\item So far, the beam window has been procured from bulk Ti-64 materials available on the market, subjected to simple heat treatment such as stress relief, and machined without further microstructural controls. In terms of availability and machinability, Ti-64 has so far been the most suitable titanium alloy for beam windows. Meanwhile, two-phase titanium alloys are generally characterized by the possibility of rich microstructural control by thermomechanical treatment to achieve the desired mechanical properties, such as equiaxed, bi-modal, and needle-like microstructures. The improvement of irradiation resistance by microstructure control is worth investigating. Microstructure control in combination with near-net-shape manufacturing by 3D printing, which has advanced remarkably in recent years, is also worth considering.
\item A wide variety of titanium alloys have been developed, mainly in response to the requirements of the aerospace industry, and it is worthwhile to compare their radiation damage resistance with that of Ti-64. The single metastable $\beta$ phase alloy Ti-15-333 has the potential to be a material with high radiation damage tolerance that does not undergo irradiation hardening at room temperature \cite{Ishida2018_1}. This is attributed to the nano-sized dense precursors of the $\omega$ phase in the $\beta$ phase matrix, which act as very effective sink sites that absorb irradiation defects. Heat treatment to maintain the irradiation resistance up to high temperatures is being investigated. A similar effect would be expected for other metastable $\beta$ alloys that have been precipitation-strengthened, e.g. TIMET's beta21S. Precipitation strengthening has also been applied to some high-temperature near-$\alpha$ titanium alloys used in aircraft engines, which may have high resistance to radiation damage for the same reason as metastable $\beta$ alloys, e.g. DAIDO's DAT-54.
\item In the J-PARC beam window, chemical corrosion marks, which may be caused by impurities in the helium atmosphere and the beam current, have been observed on the target station side. As one method to improve the corrosion resistance of titanium alloys, coating with TiN or TiAlN by Physical Vapor Deposition (rather than CVD) may be effective, and evaluation of the irradiation resistance and thermal shock resistance of such coatings is necessary.
\item The effect of embrittlement and swelling due to hydrogen and helium, which are spallation products of proton beam irradiation, on mechanical properties should be evaluated. This is necessary not only for titanium alloys but also for all target and beam window materials.
\end{itemize}
\subsection{Advanced graphitic materials}
Novel advanced graphitic materials are also essential for targetry and in general for beam-intercepting devices, owing to their high temperature resistance, relatively low coefficient of thermal expansion and low density. Different grades of graphitic-based absorbers exist, with a range of properties. Isostatically-pressed graphite is a widely employed and cost-effective solution, which has been validated with beam impacts exceeding 10 kJ/cm$^3$. This material also presents excellent performance under radiation damage and, due to its crystal lattice configuration, good annealing capabilities. 2D carbon/carbon composite is another class of high-performance graphitic-based material, owing to its fiber-reinforced structure. These materials have been regularly employed since the start of the Large Hadron Collider for highly demanding applications, and their use has been further expanded in the framework of the LHC Injectors Upgrade (LIU) project, where a new class of 3D carbon/carbon materials is employed. 2D CC composites have large dimensions in the plane (directions 1 and 2) compared with their thickness (direction 3), and their mechanical properties are generally weaker in direction 3. In comparison, 3D CC composites can have relatively large dimensions in the third or Z direction, together with interesting mechanical properties \cite{Nuiry2019}. In specific cases, electrical conductivity plays an important part in reducing the overall machine impedance and increasing beam stability. For this purpose, a family of novel graphite-based composites reinforced with a dispersion of molybdenum carbide particles \cite{Valenzuela2018}, with very high thermal and electrical conductivity, has been developed at CERN in collaboration with European industry, and further R\&D is necessary.
All the graphitic materials mentioned above have been tested at the HiRadMat facility at CERN to demonstrate their capability to withstand the extremely challenging operational conditions encountered in the accelerator chain.
\section{Novel targetry concepts and technologies}
This section covers some of the key novel targetry concepts and technologies that need to be explored and optimized to enable and support future multi-MW accelerator target facilities.
\subsection{Rotating, flowing and circulating targets}
The Facility for Rare Isotope Beams (FRIB) at Michigan State University will use projectile fragmentation and induced in-flight fission of heavy-ion primary beams at energies of 200 MeV/u and up to 400 kW beam power for experiments in nuclear physics, nuclear astrophysics, and fundamental science \cite{Wei2012}. One of the major challenges of the FRIB project is the design and integration of the high-power target systems. To achieve the required high resolution of the fragment separator, the beam spot on the production target needs to be on the order of 1 mm in size with about 100 kW of beam power deposited in the target, leading to a power density of 20-60 MW/cm$^3$ and heavy-ion induced radiation damage of the material. A rotating solid carbon disk was selected as the technical baseline concept for all primary beams up to uranium.
The FRIB target uses a multi-slice design with a diameter of about 30 cm and a rotational speed of about 5000 RPM to provide efficient heat dissipation by thermal radiation and to minimize temperature variations as well as the associated thermomechanical stress and fatigue. This keeps the maximum beam-spot temperature in the solid carbon material at about 1900 $^{\circ}$C. To match the expected duration of typical experiments at FRIB, a lifetime of the order of two weeks is required. Recently, the FRIB team successfully commissioned a krypton-86 primary beam using the FRIB rotating target \cite{Wei_soon}.
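A quick kinematic check, using the wheel diameter, rotational speed, and beam spot size quoted above, illustrates how rotation spreads the deposited beam power around the circumference. The sketch assumes the beam strikes near the rim:

\begin{verbatim}
import math

# Kinematics of the rotating target wheel (figures quoted in the text;
# the beam is assumed to strike near the rim).
diameter = 0.30     # wheel diameter [m]
rpm      = 5000.0   # rotational speed [rev/min]
spot     = 1.0e-3   # beam spot size [m]

rev_per_s = rpm / 60.0
rim_speed = math.pi * diameter * rev_per_s   # ~79 m/s
t_rev     = 1.0 / rev_per_s                  # ~12 ms per revolution
t_dwell   = spot / rim_speed                 # ~13 us in the beam per pass

print(f"rim speed ~ {rim_speed:.0f} m/s, "
      f"revolution ~ {t_rev * 1e3:.0f} ms, "
      f"dwell ~ {t_dwell * 1e6:.0f} us")
\end{verbatim}

Each rim element therefore spends only microseconds in the beam per revolution, which is what keeps the peak temperature and cyclic thermal stress manageable.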
As FRIB beam power ramps up to 400 kW, the target disk module will be replaced by single-slice and multi-slice targets. The motor drives the target wheel at up to 5000 RPM, and the power deposited into the solid target can be radiatively cooled by the surrounding water-cooled heat exchanger and cover plates. A better understanding of the radiation damage of graphite under heavy ions is critical for the FRIB beam power ramp-up. Since the energy loss of heavy ions in the material is much higher than that of protons, the radiation damage of the FRIB target, which could lead to a short lifetime, is of great concern. Permanent damage has been observed in carbon stripper foils exposed to heavy-ion beams \cite{Marti2010}. With thicker graphite material and elevated annealing temperatures, it is expected that graphite targets can satisfy FRIB target requirements, but R\&D activities on graphite material behavior under heavy-ion beam irradiation are needed for future developments.
With rotational speeds up to 5000 RPM and high beam intensity, the bearings used in the system need to be vacuum compatible and highly radiation resistant, and the lubricant needs to have high temperature tolerance. An induction heating system is being developed to investigate bearings made of various materials and the temperature tolerance and radiation resistance of different vacuum-compatible lubricants. In addition, collaboration with industrial providers on the design of 3D-printed targetry bearings made of new filament materials is ongoing. \\
The European Spallation Source is designed to deliver a high neutron flux, produced by 5 MW, 3 ms proton pulses repeating at 14 Hz, which will enable unprecedented neutron-based science \cite{Garoby2017}. One of the many challenges brought by the high power density is the design of the spallation target. The spallation material not only has to withstand the extremely high power density, but also has to last long enough to support reliable operation with close to 100$\%$ availability. Even the materials not directly exposed to the proton beam are exposed to extremely high doses of hadrons, neutrons and gamma rays.
The conceptual design for the operation of the target aims at controlling and reducing the power density of the proton beam, together with reducing the heat and radiation damage to the spallation material. The first aspect is achieved by sweeping the beam position across the target area during the passage of the pulse. The second aspect, lowering the average dose on the spallation material, is achieved with the rotating target concept.
Under 5 MW operation, the tungsten spallation material is clearly subjected to very high stress. Beyond the fact that an uncontrolled beam would invariably damage the material, the high radiation dose combined with high temperature will alter material properties and can lead to severe damage and rupture. The spallation process produces about 56 neutrons for each of the roughly 10$^{15}$ protons per pulse. The average current density on target is 53 $\mu$A/cm$^2$, so the proton fluence on the spallation material per year is 6 $\times$ 10$^{21}$ protons per cm$^2$. For each pulse, the energy carried by the 2-GeV beam is 357 kJ. Any material under these conditions experiences extremely high stresses. Thermal effects heat the material by hundreds of degrees within milliseconds, generating stresses in the 50 to 100 MPa range. Material on the target wheel is exposed to the beam once every 2.57 s, which leaves time for cooling but also leads to cyclic stresses.
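The quoted figures are mutually consistent, as the short check below shows; the only added assumption is roughly 5000 hours of beam delivery per operational year.

\begin{verbatim}
# Consistency check of the quoted ESS beam figures.
e   = 1.602e-19            # elementary charge [C]
P   = 5e6                  # average beam power [W]
f   = 14.0                 # pulse repetition rate [Hz]
E_p = 2e9 * e              # 2 GeV proton energy [J]

E_pulse = P / f            # ~357 kJ per pulse
N_pulse = E_pulse / E_p    # ~1.1e15 protons per pulse

j_avg   = 53e-6            # average current density on target [A/cm^2]
t_year  = 5000.0 * 3600.0  # assumed beam seconds per operational year
fluence = (j_avg / e) * t_year   # ~6e21 protons/cm^2 per year

print(f"E_pulse ~ {E_pulse / 1e3:.0f} kJ, "
      f"protons/pulse ~ {N_pulse:.1e}, "
      f"fluence ~ {fluence:.1e} p/cm^2/yr")
\end{verbatim}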
In the tungsten bricks, the stress is close to 50 MPa, and the material will go through close to 40 $\times$ 10$^6$ stress cycles. The total dose in the tungsten over the 5-year lifetime of the target is over 10 DPA. Studies on tungsten showed that within a month of 5 MW operation, the tungsten will be totally brittle \cite{Habainy2019}. However, the stress level remains far from the rupture point. Therefore, the material is expected to survive 5 years in the ESS beam, but some uncertainty remains due to the lack of data for this material under these conditions of temperature and irradiation dose. The target wheel is one of the main components of the spallation source, but there are other components highly critical to ensuring high availability for neutron science: the proton beam window, the moderator-reflector system, the target wheel driving system, the containment and shielding components, and the beam diagnostics components. All these components are exposed to the high flux of particles coming from the spallation target. In addition, the proton beam window, which separates the target environment from the accelerator environment, and the beam diagnostics components intercept the beam and are subjected to the same constraints and stresses as the target wheel and spallation material. R\&D activities on the design and on the behavior of materials under irradiation for these components are key to enabling the full potential of ESS. The rotating target concept proposed for positron production at the proposed ILC faces very similar challenges. \\
There are three possible target designs being studied at Fermilab for the Mu2e-II pion-production target. A rotating target design consists of a set of tungsten rods revolving around an axis, alternately switching the rods exposed to the beam. The advantage of this design would be to distribute the deposited heat and radiation damage uniformly over several rods. A constraint, however, is to fit the target system into the Mu2e baseline Heat and Radiation Shield (HRS) 25-cm radius inner bore. A fixed granular target design consists of a matrix of granular tungsten cooled by a flow of gaseous helium. Such a design would fit the existing HRS dimensions, but its high radiation damage would likely require frequent replacement of the target. A third design under consideration is the conveyor target. Spherical tungsten or carbon elements would be supplied to a pipe, moved into the beam interaction area, and then removed from the HRS for cooling and replacement (when necessary). This design would occupy a relatively small space (consistent with the HRS). Helium gas could be used both for cooling and for moving elements inside the conveyor's pipe. Radiation damage can also be distributed among a large number of replaceable elements. The design is, however, technically complex and will require prototyping and extensive testing.
Preliminary MARS15 Monte Carlo and ANSYS Finite Element Analysis (FEA) simulations indicated that the conveyor design will require about 285 spherical target elements to be situated in the pipe inside the HRS inner bore, while 28 elements (carbon) or 11 elements (tungsten) will be located in the beam interaction region at any particular time. The elements will be moving inside the pipe with a velocity of $\sim$10 elements/s (0.1 cm/s). Further analyses and R\&D are ongoing and will be required to realize this proposed target design and optimize the target performance.\\
Flowing liquid metals have also been successfully used as high-power neutron-production targets \cite{Bauer01,Mason06,Arai09}. The technology allows higher time-averaged beam power on target without the degradation of neutron brightness incurred with increasing coolant volume in solid, stationary targets. However, pulsed-beam liquid-metal targets can suffer from pressure waves that drive cavitation in the liquid and fatigue of the vessel that contains the liquid metal. Cavitation-induced erosion damage has been a severe issue for the target vessel container \cite{Haines05, Riemer2008}. The cavitation erosion and fatigue can gradually degrade the structural integrity of the target vessel and lead to premature failure that affects the operational reliability of the facility. Significant and sustained R\&D has been carried out to mitigate the effects of pressure waves by injecting helium gas bubbles into the liquid metal. The approach has been highly successful in allowing the targets to operate at high beam powers for more extended periods \cite{Jiang22, Clintock, Kogawa2017, Naoe2021, Naoe2020}. This success demonstrates the payoff of dedicated target R\&D investment. Nevertheless, as beam power continues to increase, the issues of cavitation erosion and fatigue may appear again, and continuing the R\&D work in this area is therefore essential. \\
A muon collider has a combination of requirements that are well beyond the limit of any existing target technology. A high-Z target is required to be suspended within the bore of a high-field solenoid and subjected to the high pulsed power density of a multi-MW proton beam. Flowing granular tungsten pneumatically conveyed within a pipe is being proposed and explored as an alternative to the current baseline technology proposal of an open mercury jet. The High Power Targets group at RAL has developed a fluidized tungsten powder target technology which combines some of the advantages of a liquid metal with those of a solid. The granular material flowing within a pipe is expected to be able to withstand extremely intense pulsed beam powers without the cracking or radiation damage limitations of solid targets, and without the cavitation issues associated with liquid targets \cite{Riemer2008}. Moreover, its disruption speed inside a gaseous helium atmosphere has been measured to be of the order of a few m/s \cite{Chari}. However, a fluidized powder target introduces new challenges, such as achieving reliable circulation and continuous stable horizontal dense-phase flow, managing heat dissipation, mitigating radiation damage and erosion of the containing pipework and beam windows, as well as ensuring reliable diagnostics and controls for the powder handling processes.
An offline test rig was built at the Rutherford Appleton Laboratory (RAL) in order to demonstrate the feasibility of pneumatic fluidization and conveyance of powdered tungsten \cite{Caretta2008,Densham2009_1}. The rig can fluidize and lift sub-250 micron powder using suction and eject it in solid dense phase as a coherent open jet or contained pipe flow with a bulk fraction of about 50$\%$. Air was used for the test rig, but helium is proposed for actual target applications due to its favorable heat transfer properties and to minimize radiological issues. Contained flow is proposed as being most suitable for use as a particle-production target \cite{Davies2010}. Subsequent developments have enabled the rig to continuously recirculate the powder, providing an uninterrupted stream of target material.
The response of a static open trough of tungsten powder to a high-energy proton beam was investigated in 2012 and 2015 at the HiRadMat facility at CERN \cite{Eft2011}. Eruption velocities from the free surface, due to the ionization of the grains by the proton beam, were much lower than for liquid mercury subjected to the same energy density \cite{Caretta2014,Caretta2018,Davenne2018}, although for the latter this effect was significantly reduced by the capture solenoid magnetic field. It is anticipated that the beam-induced disruption should be considerably lower for a powder contained inside a tube, although this would need to be demonstrated in a future HiRadMat experiment. Overall, a contained powder target system is expected to be considerably less damaging to its surroundings than an open liquid mercury jet, and also less problematic from a radiological point of view.
The fluidized tungsten powder concept shows promise as a target for a muon collider or future high-intensity CLFV experiments \cite{Aoki2020}, but further development will be required to demonstrate its suitability for use in an operating facility. The concept currently operates entirely by timed “batch” processes involving a number of pneumatically operated sliding gate valves; an operating facility would ideally eliminate such moving parts. Careful selection of the pipework and beam window material will be required (e.g. SiC-SiC composite). Bespoke designs for high-erosion regions such as bends may be required, and long-term erosion measurements are essential to demonstrate that the required target lifetimes of months or years can be achieved. Measurements of the heat transfer between the flowing tungsten powder and the surrounding containment tube would also be desirable. A future on-line experiment at HiRadMat is intended to investigate the effect of an intense pulsed proton beam on tungsten powder contained within a tube, using laser Doppler velocimetry to measure any stresses transmitted from the granular material to the pipe wall. In addition to this practical work, an engineering feasibility study will investigate how to integrate the complete tungsten powder system within a capture solenoid for a muon collider target station. This will require input from a comprehensive physics design study, which will use simulation codes to calculate the predicted particle production rates and deposited energy densities in order to select the geometry layout and beam parameters that optimize the performance of the muon collider within the expected engineering constraints.
\subsection{Advanced cladding materials and technologies}
The European Laboratory for Particle Physics (CERN) has expanded the use of diffusion bonding assisted by Hot Isostatic Pressing (HIP) for beam-intercepting device systems, applying it to the LHC Injectors Upgrade (LIU) Project \cite{Damerau2014} as well as to the Physics Beyond Colliders initiative \cite{Jaeckel2018}, in a similar fashion to developments at existing spallation sources where Ta-clad pure W is regularly employed for neutron production. Bonding dissimilar materials with the HIP method enhances the heat transfer coefficient between the different materials and therefore increases the capability to cope with power dissipation.
Cuprous materials (such as Cu-OFE, CuCr1Zr and ODS alloys such as Glidcop or Discup) have been bonded via HIP with stainless steel. The technology was already developed for fusion reactors \cite{Marois1996}, but CERN has further expanded the technique to very large components (up to 2.5 meters), required for the fabrication of the LIU-SPS internal beam dump (the so-called TIDVG5 \cite{Pianese2018,Pianese2021}), which has been operational since 2021. The technology has large potential for dumps and absorbers in a variety of different facilities, not only proton-driven, but also electron- or photon-driven (synchrotron light) facilities, where the dissipated power requirements could be in excess of 100 kW.
Refractory metals are also widely employed in laboratories worldwide for secondary beam production, such as for neutron production \cite{Thomason2018} or for proposed beam dump experiments \cite{Ahdida2019}, owing to their reduced nuclear interaction length and relatively low neutron inelastic cross-section. Nevertheless, direct cooling with water is not possible due to their relatively high susceptibility to hydrogen embrittlement. Usually tungsten is clad via HIP with pure Ta, with good results \cite{Nelson2012}. For the proposed CERN Beam Dump Facility Project \cite{Ahdida2020}, CERN has developed advanced techniques to clad pure W as well as Mo-alloys (such as TZM) with pure Ta and, for the first time, Ta2.5W \cite{Lopez2019,Busom2019}. Beam irradiation of a prototype target was successfully executed during 2018 \cite{Lopez2019_1}, with post-irradiation examination (PIE) to be further expanded during 2021 and 2022. For high-power facilities, decay heat in Ta and Ta-alloys may pose safety concerns: for this reason, within the framework of BDF, other cladding materials are being studied, including Zircaloy and Nb-alloys such as C103, which still possess excellent corrosion resistance, formability and resistance to high temperatures. C103 is a complex refractory metal, consisting mainly of Nb with additions of 10 wt$\%$ hafnium and 1 wt$\%$ titanium. These R\&D techniques will be beneficial for future Hidden Sector experiments \cite{Ahdida2019} as well as for other neutron production facilities, such as the ORNL SNS Second Target Station project \cite{ORNL}. The R\&D should be complemented with instrumented in-beam tests \cite{Lopez2019_1}, both at fast and slow extraction facilities, as well as with post-irradiation examination (PIE) techniques, in order to validate the simulation packages and ensure the bonding quality (mechanical and thermal) after irradiation. \\
For neutron production at the ISIS Neutron and Muon Facility, run by UKRI STFC at the Rutherford Appleton Laboratory, solid-plate water-cooled targets have been used for 38 years and monolithic water-cooled targets for 14 years. Initially the plates in the solid-plate targets were made of depleted uranium clad in Zircaloy, but these uranium targets suffered from premature failure due to radiation-induced swelling. Consequently, in 1995, the plates were switched entirely to tantalum, and in 2001 the solid-plate targets were redesigned with tungsten plates clad in 1-2 mm of tantalum. Since 2001 the TS1 (Target Station 1) tantalum-clad tungsten solid-plate targets have run successfully to the present day with no apparent cladding issues (four targets in total). All of these targets were taken out of service simply because a few of the many thermocouples measuring plate temperatures eventually failed. TS1 is currently being upgraded and will include the next evolution of this tantalum-clad tungsten solid-plate target. In contrast, the TS2 (Target Station 2) tantalum-clad monolithic tungsten targets, which have run from 2008 to the present, have had a much shorter operational life than originally planned due to activation of the target water-cooling circuit. The activation by spallation products of tungsten and tantalum becomes very evident after approximately 18 months of operation of a target, and clearly indicates a breach in the tantalum cladding which exposes the tungsten to the cooling water. Neutronically these targets still perform as required, but they have to be replaced every two years or so to keep radiation dose rates in the water plant at a sensible level for maintenance operations.
The current focus is to understand the mechanism of the cladding breach in the TS2 targets and, of course, to find a solution to this problem. A number of projects are looking at stresses and fatigue processes in the tantalum during operation and at residual stresses produced during the process of HIP-ing the tantalum cladding onto the tungsten core. Offline research is ongoing to investigate the potential for erosion and corrosion of the tantalum cladding, as well as to carry out manufacturing process and QA improvement programs on materials supply, particularly on the EB welds used to join the tantalum cladding components together prior to the HIP-ing process.
In addition, new collaborative studies into radiation damage effects in the tungsten cores have prompted a re-visit of the engineering design parameters for the target materials. In particular, apparent reductions in thermal conductivity and in tensile strength may need to be considered in greater detail in the designs, especially for the higher-power, higher-intensity targets planned for future ISIS upgrades. There is also a continuing concern about the tantalum contribution to decay heat, especially in an unexpected loss-of-coolant scenario. Thinning of the HIP-ed tantalum cladding is not a preferred option, particularly for the TS2 targets, where there are potential concerns not only about water erosion and corrosion but also about grain growth after EB welding, which could lead to worryingly low numbers of grains across a thinner cladding layer and perhaps to accelerated cracking of the cladding.
For the future, efforts are underway to continue investigations into a better understanding of the current target materials (tungsten and tantalum), but also to explore the possibility of using TZM as a replacement cladding material. The immediate benefit of the latter material would be a lower decay heat contribution, which would considerably help loss-of-coolant mitigation planning. The development of advanced cladding materials and technologies is fundamental to designing reliable and robust beam-intercepting devices for planned upgrades and future Intensity and Energy Frontier facilities.
\subsection{Target geometry and composition optimization}
Designing a target system is a tedious process because the optimization proceeds through many iterations to maximize the physics output for the experiment and to minimize operational risks such as target failure. Usually, a Monte Carlo (MC) numerical simulation and a Finite Element Analysis (FEA) are used to evaluate the physics performance and to examine the mechanical stress on the target system over several iterations. The number of iterations grows rapidly with the number of design variables. To simplify the optimization, the designed target shape is usually monolithic and the target material a single substance. However, fine dimensional tuning of the target shape and finding the best mix of substances for the target material are required to increase the performance efficiency of multi-MW target systems.
The use of machine learning based on Bayesian optimization is proposed for optimizing the target system. Bayesian optimization evaluates the previous results via Gaussian process regression and proposes new values of the design variables. This method is widely used in materials science to find new materials and alloys. Combining an additional simulator to design the novel target material at the atomic level may be considered as well. This simulation effort will build a database server and utilize available High-Performance Computing (HPC) resources for the target system optimization.
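A minimal sketch of such an optimization loop is given below, assuming a single design variable and a cheap analytic stand-in for the expensive Monte Carlo/FEA evaluation; the variable name, bounds, and objective are purely hypothetical placeholders.

\begin{verbatim}
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulated_yield(length_cm):
    # Stand-in for one expensive MC + FEA evaluation of a target design.
    return -(length_cm - 80.0) ** 2 / 400.0 + np.random.normal(0.0, 0.02)

lo, hi = 40.0, 120.0                              # hypothetical range [cm]
X = np.random.uniform(lo, hi, 4).reshape(-1, 1)   # initial designs
y = np.array([simulated_yield(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                               # simulator-call budget
    gp.fit(X, y)
    cand = np.linspace(lo, hi, 400).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    # Expected-improvement acquisition: balances exploitation/exploration.
    z = (mu - y.max()) / np.maximum(sd, 1e-9)
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, simulated_yield(x_next[0]))

print(f"best design ~ {X[np.argmax(y)][0]:.1f} cm")
\end{verbatim}

In practice each evaluation would dispatch an MC/FEA job to an HPC cluster and the design space would be multi-dimensional, but the surrogate-plus-acquisition structure remains the same.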
\subsection{High-heat flux cooling}
Alternative advanced cooling technologies will need to be explored in order to address the challenges of heat removal from future multi-MW beam-intercepting devices. Unconventional heat transfer techniques, such as controlled boiling or even flowing (liquid or granular) targets, where the heat-flux cooling capacity can be significantly increased, are an area that needs to be explored and developed.
High heat-flux cooling techniques have been investigated in the past and continue to be researched within the fusion/fission communities, as well as in some accelerator target facilities. Methods to utilize boiling in a stable fashion (hyper-vapotron) have been developed for plasma-facing components in ITER and can be applied to accelerator targets. Radiative cooling is another technique that will be used to cool the Mu2e tungsten target. Exploring further alternatives to forced-convection cooling is an area of research that should be addressed and is critical to enabling future multi-MW target facilities.
\section{Conclusion}
R\&D of novel targetry materials, concepts and technologies is necessary to enable and optimize the operation of ambitious future accelerator target facilities. The high power targetry community, primarily through the RaDIATE collaboration \cite{RaDIATE}, has been working on several R\&D projects to address the material challenges of high power targets since 2012. This globally coordinated R\&D effort is essential to advance the current state-of-the-art in targetry. Increased funding and support for this research over the next several years is pivotal in achieving the objectives of the future research programs and experiments.
\newpage
\section{INTRODUCTION}
The emergence of Connected and Automated Vehicles (CAVs), along with new traffic infrastructure technologies \cite{li2013survey},\cite{9625017}, over the past decade has brought the promise of resolving long-lasting problems in transportation networks such as accidents, congestion, and unsustainable energy consumption along with environmental pollution \cite{deWaard09},\cite{Schrank20152015UM},\cite{kavalchuk2020performance}. Meeting this goal heavily depends on effective traffic management, specifically at the bottleneck points of a transportation network such as intersections, roundabouts, and merging roadways \cite{VANDENBERG201643}.
To date, both centralized and decentralized methods have been proposed to tackle the control and coordination problem of CAVs in conflict areas; an overview of such methods may be found in \cite{7562449}. Platoon formation \cite{xu2019grouping}, \cite{wu2013mathematical}, \cite{rajamani2000demonstration} and reservation-based methods \cite{1373519},\cite{au2010motion},\cite{zhang2013analysis} are among the centralized approaches, which are limited by the need for powerful central computation resources and are typically prone to disturbances and security threats.
In contrast, in decentralized methods each CAV is responsible for its own on-board computation with information from other vehicles limited to a set of neighbors \cite{7313484}. Constrained optimal control problems can then be formulated with objectives usually involving minimizing acceleration or maximizing passenger comfort (measured as the acceleration derivative or jerk), or jointly minimizing travel time through conflict areas and energy consumption. These problems can be analytically solved in some cases, e.g., for optimal merging \cite{XIAO2021109333} or crossing
a signal-free intersection \cite{Zhang2018}.
However, obtaining such solutions becomes computationally prohibitive for real-time applications when an optimal trajectory involves multiple constraints becoming active. Thus, on-line control methods such as Model Predictive Control (MPC) techniques or Control Barrier Functions (CBFs) are often adopted to handle the additional constraints.
In the MPC approach proposed in \cite{garcia1989model}, time is normally discretized and an optimization problem is solved at each time instant with the addition of appropriate inequality constraints; then, the system dynamics are updated. Since both control and state are considered as decision variables in the optimization problem, MPC is very effective for problems with simple (usually linear or linearized) dynamics, objectives, and constraints \cite{cao2015cooperative}. Alternatively, CBFs \cite{Xiao2019}, \cite{CBF_QP(2017)} can overcome some shortcomings of the MPC method \cite{mukai2017model} as they do not need states as decision variables, instead mapping state constraints onto new ones that involve the decision variables only in a linear fashion. Moreover, CBFs can be used with nonlinear (affine in control) system dynamics and they
have a crucial forward invariance property which guarantees the satisfaction of safety constraints over all time as long as these constraints are initially satisfied.
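In the standard formulation (see, e.g., \cite{CBF_QP(2017)}), given control-affine dynamics $\dot{x}=f(x)+g(x)u$ and a constraint $b(x)\geq 0$, the CBF condition reads
\[
L_{f}b(x)+L_{g}b(x)\,u+\gamma\big(b(x)\big)\geq 0,
\]
where $L_{f}b$ and $L_{g}b$ denote Lie derivatives and $\gamma(\cdot)$ is a class-$\mathcal{K}$ function. Since $u$ enters this condition linearly, enforcing it at all times renders the set $\{x: b(x)\geq 0\}$ forward invariant.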
An approach combining optimal control solutions with CBFs was recently presented in \cite{XIAO2021109592}. In this combined approach (termed OCBF), the solution of an \emph{unconstrained} optimal control problem is first derived and used as a reference control. Then, the resulting control reference trajectory is optimally tracked subject to
a set of CBF constraints which ensure the satisfaction of all constraints of the original optimal control problem. Finally, this optimal tracking problem is efficiently solved by discretizing time and solving a simple Quadratic Program (QP) at each discrete time step over which the control input is held constant \cite{CBF_QP(2017)}. The use of CBFs in this approach exploits their forward invariance property to guarantee that all constraints they enforce are satisfied at all times if they are initially satisfied. In addition, CBFs are designed to impose \emph{linear} constraints on the control, which is what enables the efficient solution of the tracking problem through a sequence of QPs. This approach can also be shown to provide additional flexibility in terms of using nonlinear vehicle dynamics (as long as they are affine in the control) and complex objective functions, and it can tolerate process and measurement noise \cite{XIAO2021109592}.
However, in solving a sequence of QPs, the control update interval in the time discretization process must be sufficiently small in order to always guarantee that every QP is feasible. In practice, this feasibility is often violated because it is extremely difficult to pick a discretization time that can be guaranteed to always work. In this paper, an \emph{event-triggered} and a \emph{self-triggered} approach are considered as two solutions to remedy this issue. We note that the idea of synthesizing event-triggered controllers and BFs or Lyapunov functions has been used in \cite{ong2018event} with the goal of improving stability, while
a unified event-driven scheme is proposed in \cite{taylor2020safety} with an Input-to-State barrier function to impose safety under an input disturbance.
The contribution of this paper is to replace the \emph{time-driven} nature of the discretization process used in the OCBF approach which involves a sequence of QPs by an \emph{event-driven} mechanism, hence achieving QP feasibility independent of a time step choice. In the event-triggering scheme, given the system state at the start of a given QP instance, we extend the approach introduced in \cite{Xiao2021EventTriggeredSC} for a multi-agent system to define events associated with the states of CAVs reaching a certain bound, at which point the next QP instance is triggered.
On the other hand, in the self-triggering scheme, we provide a minimum inter-event time guarantee by predicting the first time instant at which any of the CBF constraints in the QP would be violated; this determines the triggering time for the next QP instance.
Both methods provide a guarantee for the forward invariance property of CBFs and eliminate infeasible cases due to time-driven inter-sampling effects (additional infeasibilities are still possible due to potentially conflicting constraints within a QP; this separate issue has been addressed in \cite{XIAO2022inf}).
The advantages of these event-driven schemes can be summarized as follows:
$(i)$ Infeasible QP instances due to inter-sampling effects are eliminated,
$(ii)$ There is no longer a need to determine a proper time step size required in the time-driven methods,
$(iii)$ The number of control updates under event-driven schemes is generally reduced, thereby reducing the overall computational cost, and
$(iv)$ Since the number of QPs that need to be solved is reduced, this also reduces
the need for unnecessary communication among CAVs. This reduced need for communication, combined with the unpredictability of event-triggering relative to a fixed time discretization approach, results in the system being less susceptible to malicious attacks.
The paper is organized as follows. In Section II, we provide an overview of the decentralized constrained optimal control for CAVs in any conflict area setting, along with a brief review of CBFs to set the stage for the OCBF approach. We also review the time-driven approach for solving such optimal control problems, motivating the proposed solutions to the problem. In Section III, both methods are separately presented, including the formulation and solution of QPs in both frameworks. In Section V, simulation results compare time-driven, event-triggered, and self-triggered schemes, in terms of their performance metrics, computational load, and infeasible cases to show how constraint violations can be reduced through the proposed approaches.
\section{Problem Formulation and Time-Driven Control Solutions}
\label{sec:problem}
In this section, we review the setting for CAVs whose motion is cooperatively controlled at conflict areas of a traffic network. This includes merging roads, signal-free intersections, roundabouts, and highway segments where lane change maneuvers take place. We define a Control Zone (CZ) to be an area within which CAVs can communicate with each other or with a coordinator (e.g., a Road-Side Unit (RSU)) which is responsible for facilitating the exchange of information (but does not control individual vehicles) within this CZ. As an example, Fig. \ref{fig:merging} shows a conflict area due to vehicles merging from two single-lane roads, where there is a single Merging Point (MP) which vehicles must cross from either road \cite{XIAO2021109333}.
In such a setting, assuming all traffic consists of CAVs, a finite horizon constrained optimal control problem can be formulated aiming to determine trajectories that jointly minimize travel time and energy consumption through the CZ while also ensuring passenger comfort (by minimizing jerk or centrifugal forces) and guaranteeing safety constraints are always satisfied.
Let $F(t)$ be the set of indices of all CAVs located in the
CZ at time $t$. A CAV enters the CZ at one of several origins (e.g., $O$ and $O'$ in Fig. \ref{fig:merging}) and leaves at one of possibly several exit points (e.g., $M$ in Fig. \ref{fig:merging}).
The index $0$ is used to denote a CAV that has just left the CZ. Let $N(t)$ be the
cardinality of $F(t)$. Thus, if a CAV arrives at time $t$, it is assigned the
index $N(t)+1$. All CAV indices in $F(t)$ decrease by one when a CAV passes over
the MP and the vehicle whose index is $-1$ is dropped.
The vehicle dynamics for each CAV $i\in F(t)$ along the lane to which it
belongs in a given CZ are assumed to be of the form
\begin{equation} \label{VehicleDynamics}
\left[
\begin{array}
[c]{c}%
\dot{x}_{i}(t)\\
\dot{v}_{i}(t)
\end{array}
\right] =\left[
\begin{array}
[c]{c}%
v_{i}(t)\\
u_{i}(t)
\end{array}
\right],
\end{equation}
where $x_{i}(t)$ denotes the distance from the origin at which CAV $i$ arrives, $v_{i}(t)$ denotes the velocity, and $u_{i}(t)$ denotes the control input (acceleration).
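As a simple illustration of these dynamics and of the safety constraint introduced below, the following sketch propagates two CAVs with forward-Euler updates and monitors their spacing margin; all parameter values ($\varphi=1.8$ s, $\delta=10$ m, and the initial conditions) are illustrative placeholders.

\begin{verbatim}
def cav_step(x, v, u, dt):
    # One forward-Euler update of the double-integrator dynamics (1).
    return x + v * dt, v + u * dt

phi, delta, dt = 1.8, 10.0, 0.1   # assumed illustrative values
x_ip, v_ip = 100.0, 13.0          # preceding CAV i_p (cruising, u = 0)
x_i,  v_i  = 20.0, 15.0           # CAV i (faster follower, u = 0)

for _ in range(50):               # 5 s rollout
    x_ip, v_ip = cav_step(x_ip, v_ip, 0.0, dt)
    x_i,  v_i  = cav_step(x_i,  v_i,  0.0, dt)
    margin = (x_ip - x_i) - (phi * v_i + delta)   # must stay >= 0

print(f"safety margin after 5 s: {margin:.1f} m")  # ~33 m, shrinking
\end{verbatim}

The monitored margin is exactly the quantity that the safety constraint (3) below requires to remain nonnegative; once it approaches zero, the controller must intervene.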
There are two objectives for each CAV, as detailed next.\\
\begin{figure}[H]
\vspace{-5mm}
\centering
$\hspace{-4mm}$\includegraphics[scale=0.8]{Model.pdf} \caption{The merging problem}%
\label{fig:merging}%
\end{figure}
\noindent {\bf Objective 1} (Minimize travel time): Let $t_{i}^{0}$ and $t_{i}^{f}$
denote the time that CAV $i\in F(t)$ arrives at its origin
and leaves the CZ at its exit point, respectively. We wish to minimize the travel time
$t_{i}^{f}-t_{i}^{0}$ for CAV $i$.\\
{\bf Objective 2} (Minimize energy consumption): We also wish to minimize
the energy consumption for each CAV $i$:
\begin{equation}
J_{i}(u_{i}(t),t_{i}^{f})=\int_{t_{i}^{0}}^{t_{i}^{f}}\mathcal{L}_i(|u_{i}(t)|)dt,
\end{equation}
where $\mathcal{L}_i(\cdot)$ is a strictly increasing function of its argument.\\
{\bf Constraint 1} (Safety constraints): Let $i_{p}$ denote the index of
the CAV which physically immediately precedes $i$ in the CZ (if one is
present). We require that the distance $z_{i,i_{p}}(t):=x_{i_{p}}(t)-x_{i}(t)$
be constrained by:
\begin{equation}
z_{i,i_{p}}(t)\geq\varphi v_{i}(t)+\delta,\text{ \ }\forall t\in\lbrack
t_{i}^{0},t_{i}^{f}], \label{Safety}%
\end{equation}
where $\varphi$ denotes the reaction time (as a rule, $\varphi=1.8\,$s is used,
e.g., \cite{Vogel2003}) and $\delta$ is a given minimum safe distance.
If we define $z_{i,i_{p}}$ to be the distance from
the center of CAV $i$ to the center of CAV $i_{p}$, then $\delta$ depends on the length of these two CAVs (generally dependent on
$i$ and $i_{p}$ but taken to be a constant over all CAVs for simplicity).\\
{\bf Constraint 2} (Safe merging): Whenever a CAV crosses a MP, a lateral collision is possible and there must be adequate safe space for the CAV at this MP to avoid such collision, i.e.,
\begin{equation}
\label{SafeMerging}
z_{i,i_c}(t_{i}^{m})\geq\varphi v_{i}(t_{i}^{m})+\delta,
\end{equation}
where $i_c$ is the index of the CAV that may collide with CAV $i$ at merging point $m\in\lbrace 1,\ldots,n_i \rbrace$, where $n_i$ is the total number of MPs that CAV $i$ passes in the CZ. The determination of CAV $i_c$ depends on the policy adopted for sequencing CAVs through the CZ, such as First-In-First-Out (FIFO) based on the arrival times of CAVs, or any other desired policy. It is worth noting that this constraint only applies at a certain time $t_{i}^{m}$ which obviously depends on how the CAVs are controlled. As an example, in Fig. \ref{fig:merging} under FIFO, we have $i_c=i-1$ and $t_i^m=t_i^f$ since the MP defines the exit from the CZ.\\
{\bf Constraint 3} (Vehicle limitations): Finally, there are constraints
on the speed and acceleration for each $i\in F(t)$:
\begin{equation}
\begin{aligned} v_{\min} \leq v_i(t)\leq v_{\max}, \forall t\in[t_i^0,t_i^f]\end{aligned} \label{VehicleConstraints1}%
\end{equation}
\begin{equation}
\begin{aligned} u_{\min}\leq u_i(t)\leq u_{\max}, \forall t\in[t_i^0,t_i^f],\end{aligned} \label{VehicleConstraints2}%
\end{equation}
where $v_{\max}> 0$ and $v_{\min} \geq 0$ denote the maximum and minimum speed allowed
in the CZ for CAV $i$, and $u_{{\min}}<0$ and $u_{\max}>0$ denote the minimum and maximum
control input for CAV $i$, respectively.\\
\textbf{Optimal Control Problem formulation.} Our goal is to determine a control law achieving objectives 1-2 subject to constraints 1-3 for each $i \in F(t)$ governed by the dynamics (\ref{VehicleDynamics}). Choosing $\mathcal{L}_i(u_i(t))=\frac{1}{2}u_i^2(t)$ and normalizing travel time and $\frac{1}{2}u_{i}^{2}(t)$, we use the weight $\alpha\in[0,1]$ to construct
a convex combination as follows:
\begin{equation}\label{eqn:energyobja_m}
\begin{aligned}\min_{u_{i}(t),t_i^f} J_i(u_i(t),t_i^f)= \int_{t_i^0}^{t_i^f}\left(\alpha + \frac{(1-\alpha)\frac{1}{2}u_i^2(t)}{\frac{1}{2}\max \{u_{\max}^2, u_{\min}^2\}}\right)dt \end{aligned}.
\end{equation}
Letting $\beta:=\frac{\alpha\max\{u_{\max}^{2},u_{\min}^{2}\}}{2(1-\alpha)}$, we obtain a simplified form:
\begin{equation}\label{eqn:energyobja}
\min_{u_{i}(t),t_i^f}J_{i}(u_{i}(t),t_i^f):=\beta(t_{i}^{f}-t_{i}^{0})+\int_{t_{i}^{0}%
}^{t_{i}^{f}}\frac{1}{2}u_{i}^{2}(t)dt,
\end{equation}
where $\beta\geq0$ is an adjustable weight to penalize
travel time relative to the energy cost. Note that the solution is \emph{decentralized} in the sense that CAV $i$ requires information only from the CAVs $i_p$ and $i_c$ appearing in (\ref{Safety}) and (\ref{SafeMerging}).
Problem (\ref{eqn:energyobja}) subject to (\ref{VehicleDynamics}), (\ref{Safety}), (\ref{SafeMerging}), (\ref{VehicleConstraints1})
and (\ref{VehicleConstraints2}) can be analytically solved in some cases, e.g., the merging problem in Fig. \ref{fig:merging}
\cite{XIAO2021109333}
and a signal-free intersection
\cite{Zhang2018}.
However, obtaining solutions for real-time applications becomes prohibitive when an optimal trajectory involves multiple constraints becoming active. This has motivated an approach which combines a solution of the unconstrained problem (\ref{eqn:energyobja}), which can be obtained very fast, with the use of Control Barrier Functions (CBFs) which provide guarantees that (\ref{Safety}), (\ref{SafeMerging}), (\ref{VehicleConstraints1}) and (\ref{VehicleConstraints2}) are always satisfied through constraints that are linear in the control, thus rendering solutions to this alternative problem obtainable by solving a sequence of computationally efficient QPs. This approach is termed Optimal Control with Control Barrier Functions (OCBF) \cite{XIAO2021109592}.
\textbf{The OCBF approach.} The OCBF approach consists of three steps: $(i)$ the solution of the \emph{unconstrained} optimal control problem (\ref{eqn:energyobja}) is used as a reference control; $(ii)$ the resulting control reference trajectory is optimally tracked subject to the constraint \eqref{VehicleConstraints2}, as well as a set of CBF constraints enforcing (\ref{Safety}), (\ref{SafeMerging}) and \eqref{VehicleConstraints1}; $(iii)$ this optimal tracking problem is efficiently solved by discretizing time and solving a simple QP at each discrete time step. The significance of CBFs in this approach is twofold: first, their forward invariance property \cite{XIAO2021109592} guarantees that all constraints they enforce are satisfied at all times if they are initially satisfied; second, CBFs impose \emph{linear} constraints on the control, which is what enables the efficient solution of the tracking problem through the sequence of QPs in $(iii)$ above.
The reference control in step $(i)$ above is denoted by $u_{i}^{\textrm{ref}}(t)$. The unconstrained solution to (\ref{eqn:energyobja}) is denoted by $u_i^*(t)$, thus we usually set $u_{i}^{\textrm{ref}}(t)=u_i^*(t)$. However, $u_{i}^{\textrm{ref}}(t)$ may be chosen to be any desired control trajectory and, in general, we use $u_{i}^{\textrm{ref}}(t)=h(u_i^*(t),x_i^*(t), \textbf{x}_i(t))$ where $\textbf{x}_i(t)\equiv(x_i(t),v_i(t)), ~\textbf{x}_i \in \textbf{X}$ ($ \mathbf{X} \subset \mathbb{R}^2$ is the state space). Thus, in addition to the unconstrained optimal control and position $u_i^*(t),x_i^*(t)$, observations of the actual CAV state $\textbf{x}_i(t)$ provide direct feedback as well.
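For instance, since the unconstrained optimal acceleration varies linearly in time (as noted in Section \ref{sec:event-triggered} and \cite{XIAO2021109592}), a reference control can be stored per CAV as two coefficients; a hedged Python sketch (the coefficient names $a_i,b_i$ are ours):
\begin{verbatim}
def u_ref(t, a_i, b_i):
    # Unconstrained optimal control of the form a_i * t + b_i,
    # with (a_i, b_i) fixed by the boundary conditions of the
    # unconstrained problem.
    return a_i * t + b_i
\end{verbatim}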
To derive the CBFs that ensure the constraints (\ref{Safety}), (\ref{SafeMerging}), and (\ref{VehicleConstraints1}) are always satisfied, we use the vehicle dynamics (\ref{VehicleDynamics}) to define $f(\textbf{x}_i(t))=[v_i(t),0]^T$ and $g(\textbf{x}_i(t))=[0,1]^T$. Each of these constraints can be easily written in the form of $b_q(\textbf{x}(t)) \geq 0$, $q \in \lbrace 1,...,n \rbrace$ where $n$ stands for the number of constraints and $\mathbf{x}(t)=[\mathbf{x}_1(t),\mathbf{x}_2(t),...,\mathbf{x}_{N(t)}(t)]$. The CBF method (details provided in \cite{XIAO2021109592}) maps a constraint $b_q(\textbf{x}(t)) \geq 0$ onto a new constraint which is linear in the control input $u_i(t)$ and takes the general form
\begin{equation} \label{CBF general constraint}
L_fb_q(\textbf{x}(t))+L_gb_q(\textbf{x}(t))u_i(t)+\gamma( b_q(\textbf{x}(t))) \geq 0,
\end{equation}
where $L_f,L_g$ denote the Lie derivatives of $b_q(\textbf{x}(t))$ along $f$ and $g$, respectively, and $\gamma(\cdot)$ stands for any class-$\mathcal{K}$ function \cite{XIAO2021109592}. It has been established
\cite{XIAO2021109592}
that satisfaction of (\ref{CBF general constraint}) implies the satisfaction of the original problem constraint $b_q(\textbf{x}(t)) \geq 0$ because of the forward invariance property. It is worth observing that the newly obtained constraints are \emph{sufficient} conditions for the original problem constraints, therefore, potentially conservative.
We now apply (\ref{CBF general constraint}) to obtain the CBF constraint associated with the safety constraint (\ref{Safety}). By setting
\begin{align} \label{b1}
b_1(\textbf{x}_i(t),\textbf{x}_{i_p}(t))&=z_{i,i_{p}}(t)-\varphi v_{i}(t)-\delta \nonumber\\
&=x_{i_p}(t)-x_i(t)-\varphi v_i(t)-\delta,
\end{align} and since $b_1(\textbf{x}_i(t),\textbf{x}_{i_p}(t))$ is differentiable,
the CBF constraint for (\ref{Safety}) is
\begin{equation}\label{CBF1}\small
\underbrace{v_{i_p}(t)-v_i(t)}_{L_fb_1(\textbf{x}_i(t),\textbf{x}_{i_p}(t))}+\underbrace{-\varphi}_{L_gb_1(\textbf{x}_i(t))} u_i(t)+\underbrace{k_1(z_{i,i_p}(t)-\varphi v_i(t)-\delta)}_{\gamma_1(b_1(\textbf{x}_i(t),\textbf{x}_{i_p}(t)))} \geq 0,
\end{equation}
where the class-$\mathcal{K}$ function $\gamma_1(x)=k_1x$ is chosen here to be linear.
Deriving the CBF constraint for the safe merging constraint (\ref{SafeMerging}) poses a technical challenge due to the fact that it applies only at a certain time $t_i^{m}$, whereas a CBF is required to be in a continuously differentiable form. To tackle this problem, we apply a technique used in \cite{XIAO2021109592} to convert (\ref{SafeMerging}) to a continuously differentiable form as follows:
\begin{equation}
z_{i,i_c}(t)-\Phi(x_i(t)) v_{i}(t)-\delta \geq 0, \quad \forall t\in[t_i^0,t_i^{m}],
\end{equation}
where $\Phi : \mathbb{R} \rightarrow \mathbb{R}$ may be any continuously differentiable function as long as it is strictly increasing and satisfies the boundary conditions $\Phi(x_i(t_i^0))=0$ and $\Phi(x_i(t_i^{m}))=\varphi$. In this case, a linear function can satisfy both conditions:
\begin{equation}
\Phi(x_i(t))=\varphi \frac{x_i(t)}{L},
\end{equation}
where $L$ is the length of road traveled by the CAV from its entry to the CZ to the MP of interest in (\ref{SafeMerging}).
Then by setting
\begin{align} \label{b2}
b_2(\textbf{x}_i(t),\textbf{x}_{i_c}(t))&=z_{i,i_c}(t)-\Phi(x_i(t)) v_{i}(t)-\delta \nonumber\\
&=x_{i_c}(t)-x_i(t)-\Phi(x_i(t)) v_i(t)-\delta,
\end{align}
proceeding as in the derivation of (\ref{CBF1}), we obtain:
\begin{align}\small \label{CBF2}
&\underbrace{v_{i_c}(t)-v_i(t)-\frac{\varphi}{L}v_i^2(t)}_{L_fb_2(\textbf{x}_i(t),\textbf{x}_{i_c}(t))}+\underbrace{-\varphi \frac{x_i(t)}{L}}_{L_gb_2(\textbf{x}_i(t))}u_i(t)+\nonumber \\ &\underbrace{k_2(z_{i,i_c}(t)-\varphi \frac{x_i(t)}{L} v_i(t)-\delta)}_{\gamma_2(b_2(\textbf{x}_i(t),\textbf{x}_{i_c}(t)))} \geq 0.
\end{align}
The speed constraints in \eqref{VehicleConstraints1} are also easily transformed into CBF constraints using (\ref{CBF general constraint}) by defining
\begin{equation} \label{b3}
b_3(\textbf{x}_i(t))=v_{\max}-v_i(t),
\end{equation}
\begin{equation}\label{b4}
b_4(\textbf{x}_i(t))=v_i(t)-v_{\min}.
\end{equation}
This yields:
\begin{equation} \label{CBF3}
\underbrace{-1}_{L_gb_3(\textbf{x}_i(t))}u_i(t)+\underbrace{k_3(v_{\max}-v_i(t))}_{\gamma_3(b_3(\textbf{x}_i(t)))} \geq 0
\end{equation}
\begin{equation}\label{CBF4}
\underbrace{1}_{L_gb_4(\textbf{x}_i(t))}u_i(t)+\underbrace{k_4(v_i(t)-v_{\min})}_{\gamma_4(b_4(\textbf{x}_i(t)))} \geq 0,
\end{equation}
for the maximum and minimum velocity constraints, respectively.
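Since each of (\ref{CBF1}), (\ref{CBF2}), \eqref{CBF3} and \eqref{CBF4} is affine in $u_i$, they can be assembled as rows $(a_q,c_q)$ of a constraint set $a_q u_i + c_q \geq 0$. A Python sketch of this assembly follows (illustrative only; it assumes both neighbors $i_p$ and $i_c$ exist and uses names of our choosing):
\begin{verbatim}
def cbf_rows(xi, vi, xip, vip, xic, vic, phi, delta, L, k,
             vmin, vmax):
    # Each row (a, c) encodes one CBF constraint a * u_i + c >= 0.
    k1, k2, k3, k4 = k
    rows = []
    # Rear-end safety (CBF1): L_g b_1 = -phi.
    b1 = (xip - xi) - phi * vi - delta
    rows.append((-phi, (vip - vi) + k1 * b1))
    # Safe merging (CBF2), with Phi(x) = phi * x / L.
    b2 = (xic - xi) - (phi * xi / L) * vi - delta
    rows.append((-phi * xi / L,
                 (vic - vi) - (phi / L) * vi ** 2 + k2 * b2))
    # Speed limits (CBF3) and (CBF4).
    rows.append((-1.0, k3 * (vmax - vi)))
    rows.append((1.0, k4 * (vi - vmin)))
    return rows
\end{verbatim}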
\textbf{Inclusion of soft constraints in (\ref{eqn:energyobja}).} As a last step in the OCBF approach, we can exploit the versatility of the CBF method to include soft constraints expressed as terminal state costs in (\ref{eqn:energyobja}), e.g., the CAV achieving a desired terminal speed. This is accomplished
by using a Control Lyapunov Function (CLF) to track specific state variables in the reference trajectory if desired. A CLF $V(\textbf{x}_i(t))$ is similar to a CBF (see \cite{XIAO2021109592}). In our problem, letting $V(\textbf{x}_i(t))=(v_i(t)-v_{i}^\textrm{ref}(t))^2$ we can express the CLF constraint associated with tracking the CAV speed to a desired value $v_{i}^\textrm{ref}(t)$ (if one is provided) as follows:
\begin{equation}\label{CLF}
L_fV(\textbf{x}_i(t))+L_gV(\textbf{x}_i(t))u_i(t)+\epsilon V(\textbf{x}_i(t))\leq e_i(t),
\end{equation}
where $\epsilon >0$ and the relaxation variable $e_i(t)$ makes this a soft constraint.
Now that all the original problem constraints have been transformed into CBF constraints, we can formulate the OCBF problem as follows:
\begin{equation}\label{QP-OCBF}\small
\min_{u_i(t),e_i(t)}J_i(u_i(t),e_i(t)):=\int_{t_i^0}^{t_i^f}\big[\frac{1}{2}(u_i(t)-u_{i}^{\textrm{ref}}(t))^2+\lambda e^2_i(t)\big]dt
\end{equation}
subject to vehicle dynamics (\ref{VehicleDynamics}), the CBF constraints (\ref{CBF1}), (\ref{CBF2}), \eqref{CBF3}, \eqref{CBF4}, the control constraint \eqref{VehicleConstraints2}, and CLF constraint (\ref{CLF}).
Note that this is a decentralized optimization problem, as it only requires information sharing with a small number of \enquote{neighbor} CAVs, i.e. CAV $i_p$ and $i_c$ (if they exist). We denote this set of CAV neighbors by $\mathcal{R}_i(t)$ at time $t$:
\begin{equation} \label{eq:neighborset}
\mathcal{R}_i(t)=\{i_p(t),i_c(t)\}.
\end{equation}
Note that $\mathcal{R}_i(t)$ in general can change over time, i.e., $i_c(t)$ changes when dynamic \enquote{resequencing} (discussed in \cite{2020Weidynreseq}) is carried out and $i_p(t)$ changes in the case of lane changing maneuvers. It is worth mentioning that in the single lane merging example in Fig. \ref{fig:merging} $i_p$ cannot change.
A common way to solve this dynamic optimization problem is to discretize $[t_i^0,t_i^f]$ into intervals $[t_i^0,t_i^0+\Delta),...,[t_i^0+k\Delta,t_i^0+(k+1)\Delta),...$
of equal length $\Delta$ and solve (\ref{QP-OCBF}) over each interval. The decision variables $u_{i,k}=u_i(t_{i,k})$ and $e_{i,k}=e_i(t_{i,k})$ are assumed to be constant on each interval and can be easily calculated at time $t_{i,k}=t_i^0+k\Delta$ by solving a QP at each time step:
\begin{align} \label{QP}
\min_{u_{i,k},e_{i,k}}&[ \frac{1}{2}(u_{i,k}-u_{i}^{\textrm{ref}}(t_{i,k}))^2+\lambda e_{i,k}^{2}],
\end{align}
subject to the CBF constraints (\ref{CBF1}), (\ref{CBF2}), \eqref{CBF3}, \eqref{CBF4}, and control input bounds \eqref{VehicleConstraints2} and CLF constraint (\ref{CLF}) where all constraints are linear in the decision variables. We refer to this as the \emph{time-driven} approach, which is fast and can be readily used in real time.
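For concreteness, one iteration of this time-driven loop reduces to a QP in the scalar pair $(u_{i,k},e_{i,k})$. A minimal sketch using SciPy's SLSQP solver is given below (any QP solver would do; \texttt{rows} is the output of the constraint assembly sketched earlier, \texttt{clf} encodes (\ref{CLF}) as a pair $(p,q)$ with $pu+q\leq e$, and all names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def solve_step_qp(u_ref, rows, umin, umax, lam, clf=None):
    # Decision vector z = [u, e]; cost 0.5*(u - u_ref)^2 + lam*e^2.
    cost = lambda z: 0.5 * (z[0] - u_ref) ** 2 + lam * z[1] ** 2
    cons = [{'type': 'ineq',
             'fun': (lambda z, a=a, c=c: a * z[0] + c)}
            for (a, c) in rows]               # CBF rows: a*u + c >= 0
    if clf is not None:                       # CLF row: p*u + q <= e
        p, q = clf
        cons.append({'type': 'ineq',
                     'fun': lambda z: z[1] - (p * z[0] + q)})
    res = minimize(cost, np.array([u_ref, 0.0]), method='SLSQP',
                   bounds=[(umin, umax), (None, None)],
                   constraints=cons)
    return res.x[0], res.x[1], res.success   # success=False: infeasible
\end{verbatim}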
The main problem with this approach is that a QP may become infeasible at any time instant because the decision variable $u_{i,k}$ is held constant over a given time period $\Delta$. Since this is externally defined, there is no guarantee that it is small enough to ensure the forward invariance property of a CBF, thereby also failing to ensure the satisfaction of the safety constraints. In other words, in this time-driven approach, there is a critical (and often restrictive) assumption that the control update rate is high enough to avoid such a problem. There are several additional issues worth mentioning: $(i)$ imposing a high update rate makes the solution of multiple QPs inefficient since it increases the computational burden, $(ii)$ using a common update rate across all CAVs renders their synchronization difficult, and $(iii)$ the predictability of a time-driven communication mechanism across CAVs makes the whole system susceptible to malicious attacks.
As we will show next, the two event-driven solutions proposed in this paper alleviate these problems by eliminating the need to select a time step $\Delta$.
\section{EVENT-DRIVEN SOLUTIONS}
\label{sec:event-triggered}
There are several possible event-driven mechanisms one can adopt to invoke the solution of the QPs in (\ref{QP}) subject to the CBF constraints (\ref{CBF1}), (\ref{CBF2}), \eqref{CBF3}, \eqref{CBF4} along with control input bounds \eqref{VehicleConstraints2}. One approach is to adopt an \emph{event-triggering} scheme
such that we only need to solve a QP (with its associated CBF constraints) when one of two possible events (as defined next) is detected. We will show that this provides a guarantee for the satisfaction of the safety constraints which cannot be offered by the time-driven approach described earlier. The key idea is to ensure that the safety constraints are satisfied while the state remains within some bounds and define events which coincide with the state reaching these bounds, at which point the next instance of the QP in (\ref{QP}) is triggered.
Another idea is to create a \emph{self-triggering} framework with a minimum inter-event time guarantee by
predicting at $t_{i,k}$ the first time instant that any of the CBF constraints in the QP problem (\ref{QP}) is subsequently violated. We then select that as the next
time instant $t_{i,k+1}$ when CAV $i$ communicates with the coordinator and updates the control.
\subsection{Event-triggered Control}
Let $t_{i,k}$, $k=1,2,...$, be the time instants when the QP in (\ref{QP}) is solved by CAV $i$. Our goal is to guarantee that the state trajectory does not violate any safety constraints within any time interval $[t_{i,k},t_{i,k+1})$ where $t_{i,k+1}$ is the next time instant when the QP is solved.
Define a subset of the state space of CAV $i$ at time $t_{i,k}$ such that:
\begin{equation} \label{bound}
\textbf{x}_i(t_{i,k})-\textbf{s}_i \leq \textbf{x}_i(t) \leq \textbf{x}_i(t_{i,k})+\textbf{s}_i,
\end{equation}
where $\textbf{s}_i =\left[s_{i_x} \ \ s_{i_v} \right]^T \in \mathbb{R}_{>0}^2$ is a parameter vector whose choice will be discussed later. Intuitively, this choice reflects a trade-off between \emph{computational efficiency} (when the $\textbf{s}_i$ values are large and there are fewer instances of QPs to be solved) and \emph{conservativeness} (when the values are small). We denote the set of states of CAV $i$ that satisfy \eqref{bound} at time $t_{i,k}$ by
\begin{equation} \label{event bound}
S_i(t_{i,k}) = \Bigl\{ \textbf{y}_i \in \textbf{X}: ~\textbf{x}_i(t_{i,k})-\textbf{s}_i \leq \textbf{y}_i \leq \textbf{x}_i(t_{i,k})+\textbf{s}_i\Bigr\}.
\end{equation}
In addition, let $C_{i,1}$ be the feasible set of our original constraints \eqref{Safety}, \eqref{SafeMerging} and \eqref{VehicleConstraints1} defined as
\begin{equation} \label{event:ci1}
C_{i,1}:=\Bigl\{ \mathbf{x}_i\in \mathbf{X}: ~b_q(\mathbf{x}_i)\geq 0, \ q \in \lbrace 1,2,3,4 \rbrace \Bigr\}.
\end{equation}
Next, we seek a bound and a control law that satisfies the safety constraints within this bound. This can be accomplished by considering the minimum value of each component in \eqref{CBF general constraint} for every $q \in \lbrace 1,2,3,4 \rbrace $ as shown next.
Let us start with the first of the three terms in \eqref{CBF general constraint}, $L_fb_q(\textbf{x}(t))$. Observing that not all state variables are generally involved in a constraint $b_q(\textbf{x}(t)) \ge 0$, we can rewrite this term as
$L_fb_q(\textbf{y}_i(t),\textbf{y}_r(t))$ with $\mathbf{y}_i(t)$ as in (\ref{event bound}) and where $r$ stands for \enquote{relevant} CAVs affecting the specific constraint of $i$, i.e., $r \in \mathcal{R}_i(t)$ in (\ref{eq:neighborset}).
Let $b^{\min}_{q,f_i}(t_{i,k})$ be the minimum possible value of the term $L_fb_q(\textbf{y}_i(t),\textbf{y}_r(t))$
over the time interval $[t_{i,k},t_{i,k+1})$, taken over the set $\Bar{S}_i({t_{i,k}}) \times \Bar{S}_r({t_{i,k}})$, for each $q \in \lbrace 1,2,3,4 \rbrace$:
\begin{equation}\label{minfi}
b^{\min}_{q,f_i}(t_{i,k})=\displaystyle\min_{\textbf{y}_i \in \Bar{S}_i({t_{i,k}}) \atop \textbf{y}_r \in \Bar{S}_r({t_{i,k}})}L_fb_q(\textbf{y}_i(t),\textbf{y}_r(t)),
\end{equation}
where $\Bar{S}_i({t_{i,k}})$ is defined as follows:
\begin{equation}
\Bar{S}_i({t_{i,k}}):=\lbrace\mathbf{y}_i \in C_{i,1} \cap S_i(t_{i,k}) \rbrace.
\end{equation}
Similarly, we can define the minimum value of the third term in \eqref{CBF general constraint}:
\begin{equation}\label{mingammai}
b^{\min}_{\gamma_q}(t_{i,k})=\displaystyle\min_{\textbf{y}_i \in \Bar{S}_i({t_{i,k}}) \atop \textbf{y}_r \in \Bar{S}_r({t_{i,k}})} \gamma_q(b_q(\textbf{y}_i(t),\textbf{y}_r(t))).
\end{equation}
For the second term in \eqref{CBF general constraint}, note that $L_gb_q(\mathbf{x}_i)$ is a constant for $ q=\{1,3,4\} $, as seen in
(\ref{CBF1}), (\ref{CBF3}) and (\ref{CBF4}),
therefore there is no need for any minimization. However, $L_gb_2(\mathbf{x}_i)=-\varphi \frac{x_i(t)}{L}$ in (\ref{CBF2}) is state-dependent and needs to be included in the minimization. Since $x_i(t) \ge 0$, $L_gb_2(\mathbf{x}_i)$ is always negative; therefore, we can determine the limit value $b^{\min}_{2,g_i}(t_{i,k}) \in \mathbb{R}$ as follows:
\begin{eqnarray}\label{mingi} \small
b^{\min}_{2,g_i}(t_{i,k})=\begin{cases}
\displaystyle\min_{\textbf{y}_i \in \Bar{S}_i({t_{i,k}})}L_gb_2(\textbf{y}_i), \ \ \textnormal{if}\ u_{i,k} \geq 0\\
\\
\displaystyle\max_{\textbf{y}_i \in \Bar{S}_i({t_{i,k}})}L_gb_2(\textbf{y}_i), \ \ \ \textnormal{otherwise},
\end{cases}
\end{eqnarray}
where the sign of $u_{i,k}, \ i \in F(t_{i,k})$ can be determined by simply solving the CBF-based QP \eqref{QP} at time $t_{i,k}$.
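Note that the minimizations \eqref{minfi}--\eqref{mingi} are inexpensive: the terms involved are affine (or low-order polynomial) in the states, so over the box-shaped sets they reduce to linear programs or, at worst, small nonlinear programs (cf. the use of \textsc{linprog} and \textsc{fmincon} in Section \ref{sec:simulation}). For a term that is affine in the states, evaluating the box vertices suffices; a hedged Python sketch (the clipping to $C_{i,1}$ is omitted for brevity, and all names are ours):
\begin{verbatim}
import itertools

def box_min(term, centers, s):
    # Minimize term(y) over the box [centers - s, centers + s] by
    # enumerating its 2^n vertices; exact when term is affine in y.
    corners = itertools.product(*[(c - si, c + si)
                                  for c, si in zip(centers, s)])
    return min(term(list(y)) for y in corners)

# Example: L_f b_1 = v_ip - v_i, with y = [x_i, v_i, x_ip, v_ip]
# and per-CAV bounds s = [s_x, s_v, s_x, s_v].
b_min_1f = box_min(lambda y: y[3] - y[1],
                   centers=[100.0, 18.0, 130.0, 17.0],
                   s=[1.5, 0.5, 1.5, 0.5])
\end{verbatim}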
Thus, the condition that can guarantee the satisfaction of \eqref{CBF1}, \eqref{CBF2} and \eqref{CBF3}, \eqref{CBF4} in the time interval $\left[t_{i,k},t_{i,k+1}\right)$ is given by
\begin{equation} \label{minCBF}
b^{\min}_{q,f_i}(t_{i,k})+b^{\min}_{q,g_i}(t_{i,k})u_{i,k}+b^{\min}_{\gamma_q}(t_{i,k})\geq 0,
\end{equation}
for $q=\lbrace1,2,3,4\rbrace$. In order to apply this condition to the QP \eqref{QP}, we just replace \eqref{CBF general constraint} by \eqref{minCBF} as follows:
\begin{align} \label{eq:QPtk}
\min_{u_{i,k},e_{i,k}}& \Bigl[ \frac{1}{2}(u_{i,k}-u_i^{\textmd{ref}}(t_{i,k}))^2+\lambda e_{i,k}^{2}\Bigl]\nonumber\\
&\textnormal{s.t.} \ \ \eqref{CLF},\eqref{minCBF},\eqref{VehicleConstraints2}
\end{align}
It is important to note that each instance of the QP \eqref{eq:QPtk} is now triggered by one of the following two events where $k =1,2,\ldots$ is a local event (rather than time step) counter:
\begin{itemize}
\item \textbf{Event 1:} the state of CAV $i$ reaches the boundary of $S_i(t_{i,k-1})$.
\item \textbf{Event 2:} the state of CAV $r \in \mathcal{R}_i(t_{i,k-1})$ reaches the boundary of $S_{r}(t_{i,k-1})$, if $\mathcal{R}_i(t_{i,k-1})$ is nonempty. In this case either $r=i_p$ or $r=i_c$ (e.g., in the merging problem $i_c=i-1 \neq i_p$ if such a CAV exists). Thus, Event 2 is further identified by the CAV which triggers it and denoted accordingly by \textbf{Event 2}($r$), $r \in \mathcal{R}_i(t_{i,k-1})$.
\end{itemize}
As a result, the time instants $t_{i,k}$, $k=1,2,\ldots$, are unknown in advance but can be determined by CAV $i$ through:
\begin{align} \label{events}
t_{i,k}=\min \Big\{ t>t_{i,k-1}:\vert\textbf{x}_i(t)-\textbf{x}_i(t_{i,k-1})\vert=\textbf{s}_i \\ \nonumber
\text{or} \ \ \vert\textbf{x}_{i_p}(t)-\textbf{x}_{i_p}(t_{i,k-1})\vert=\textbf{s}_{i_p} \\ \nonumber
\text{or} \ \ \vert\textbf{x}_{i_c}(t)-\textbf{x}_{i_c}(t_{i,k-1})\vert=\textbf{s}_{i_c}\Big\},
\end{align}
where $t_{i,0}=t_i^0$.
Note that $k$ is a \emph{local} event counter for each $i$ so, strictly speaking, we should use $k_i$. Instead, the index $k$ can be dropped and we can write $\textbf{x}_i(t_{i,\textrm{last}})$ rather than $\textbf{x}_i(t_{i,k-1})$. However, when there is no ambiguity, we will simply write $\textbf{x}_i(t_{i})$ to indicate that $t_i$ is the ``last event'' occurring at $i$.
The definition above is based on events which directly affect CAV $i$ (leading to $i$ solving a QP) whether they are triggered by $i$ or $r \neq i$. Alternatively, we may think of any CAV $i$ as generating Event 1 leading to a new QP solution by CAV $i$ itself and Event 2($i$) which affects some $j \in \{l |i \in \mathcal{R}_l(t)\}$, i.e., $i$ is relevant to some $j \neq i$. In this case, a violation of the bound of $S_i(t_{i,k-1})$ or $S_i(t_{j,k-1})$ by the evolving state of CAV $i$ triggers events relevant to CAV $i$ or $j$, respectively.
Events 1,2($r$) can be detected through the dynamics in \eqref{VehicleDynamics} or from on-board state measurements, if available, along with state information from relevant other CAVs (e.g., CAVs $i_p$ and $i_c$ in Fig. \ref{fig:merging}) through the coordinator. Finally, note that because of the Lipschitz continuity of the dynamics in \eqref{VehicleDynamics} and the fact that the control is constant within an inter-event interval, Zeno behavior does not occur in this framework.
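Operationally, the event checks reduce to componentwise comparisons against the state stored at the last event; a minimal sketch (interface ours):
\begin{verbatim}
def event_fired(x_now, v_now, x_last, v_last, s_x, s_v):
    # Event 1 (or Event 2(r), when applied to a neighbor r) fires
    # when either state component reaches its bound around the
    # value stored at the last event time.
    return abs(x_now - x_last) >= s_x or abs(v_now - v_last) >= s_v
\end{verbatim}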
The following theorem formalizes our analysis by showing that if new constraints of the general form \eqref{minCBF} hold, then our original CBF constraints \eqref{CBF1}, \eqref{CBF2} and \eqref{CBF3}, \eqref{CBF4} also hold.
\begin{theorem}\label{as:1} Given a CBF $b_q(\mathbf{x(t)})$ with relative degree one, let $t_{i,k}$, $k=1,2,\ldots$ be determined by \eqref{events} with $t_{i,0}=t_i^0$ and $b^{\min}_{q,f_i}(t_{i,k})$, $b^{\min}_{\gamma_q}(t_{i,k})$, $b^{\min}_{q,g_i}(t_{i,k})$ for $q=\{1,2,3,4\}$ obtained through \eqref{minfi}, \eqref{mingammai}, and \eqref{mingi}. Then, any control input $u_{i,k}$ that satisfies \eqref{minCBF} for all $q \in \lbrace 1,2,3,4 \rbrace$ within the time interval $[t_{i,k},t_{i,k+1})$ renders the set $C_{i,1}$ forward invariant for the dynamic system defined in (\ref{VehicleDynamics}).
\end{theorem}
\begin{proof}
The proof follows along similar lines as Theorem 2 in \cite{Xiao2021EventTriggeredSC}.
By \eqref{event bound}, we can write:
\begin{equation}
\textbf{y}_i(t) \in S_i(t_{i,k}), ~\textbf{y}_{r}(t) \in S_{r}(t_{i,k}), ~\textbf{y}_i(t) \in C_{i,1}
\end{equation}
for all $t \in [t_{i,k},t_{i,k+1})$, $k=1,2,\ldots$. Hence, by the definitions \eqref{minfi}, \eqref{mingammai} and \eqref{mingi},
\begin{equation}
L_fb_q(\textbf{x}_i(t)) \geq b^{\min}_{q,f_i}(t_{i,k}),
\end{equation}
\begin{equation}
\gamma_q(b_q(\textbf{x}_i(t))) \geq b^{\min}_{\gamma_q}(t_{i,k}),
\end{equation}
\begin{equation}
L_gb_q(\textbf{x}_i(t))u_i(t_{i,k}) \geq b^{\min}_{q,g_i}(t_{i,k})u_i(t_{i,k}),
\end{equation}
for $q \in \lbrace1,2,3,4\rbrace$. By adding these inequalities, which have the same direction, it follows that
\begin{align}
L_fb_q(\textbf{x}_i(t))+L_gb_q(\textbf{x}_i(t))u_i(t_{i,k})+\gamma_q(b_q(\textbf{x}_i(t)))\\ \nonumber
\geq b^{\min}_{q,f_i}(t_{i,k})+b^{\min}_{q,g_i}(t_{i,k})u_i(t_{i,k}) +b^{\min}_{\gamma_q}(t_{i,k}) \geq 0,
\end{align}
i.e., \eqref{CBF general constraint} is satisfied.
By Theorem 1 of \cite{XIAO2021109592} applied to \eqref{CBF general constraint}, if $\mathbf{x}_i(t_i^0) \in
C_{i,1}$, then any Lipschitz continuous controller $u_i(t)$
that satisfies \eqref{CBF general constraint} for all $t \geq t_i^0$ renders $C_{i,1}$ forward
invariant for system \eqref{VehicleDynamics}. Therefore, $C_{i,1}$ is forward invariant for the
dynamic system defined in (\ref{VehicleDynamics}).
\end{proof}
\textbf{Remark} 1:
Expressing \eqref{minCBF} in terms of the minimum value of each component separately may become overly conservative if the individual minima are attained at different points in the decision variable space. An alternative approach is therefore to minimize the entire left-hand side of \eqref{CBF general constraint} jointly.
\textbf{Selection of parameters $\mathbf{s}_i$.} The importance of properly selecting the parameters $\mathbf{s}_i$ is twofold. First, it is necessary to choose them such that all events are observed, i.e., given the sensing capabilities and limitations of a CAV $i$, the value of $\mathbf{s}_i$ must be large enough to ensure that no events will go undetected.
In particular, the variation of the states of CAV $i$ within the sensor sampling time must not be greater than bounds $\mathbf{s}_i$.
Therefore, letting $T_s$ be a given sensor sampling time, the maximum state (position and speed) variation during this sampling time must satisfy:
\begin{equation}
\begin{aligned} \label{min_s_x}
x_i(t+T_s)-x_i(t) \leq v_{\max}T_s
\end{aligned}
\end{equation}
\begin{equation}\label{min_s_v}
\begin{aligned}
v_i(t+T_s)-v_i(t) \leq \max(u_{\max}T_s,|u_{\min}|T_s)
\end{aligned}
\end{equation}
where $v_{\max}$, $u_{\max}$, and $u_{\min}$ are given CAV $i$ specifications. Therefore, we need to pick lower bounds given by the maximum state variations in \eqref{min_s_x} and \eqref{min_s_v} as follows:
\begin{equation}
\mathbf{s}_i=\left[
\begin{array}
[c]{c}%
s_{i_x} \\
s_{i_v}
\end{array}\right]\geq \left[\begin{array}
[c]{c}%
v_{\max}T_s\\
\max(u_{\max}T_s,|u_{\min}|T_s)
\end{array}
\right].
\end{equation}
Second, the choice of $\mathbf{s}_i$ captures the trade-off between computational cost and conservativeness: the larger the value of each component of $\mathbf{s}_i$ is, the smaller the number of events that trigger instances of the QPs becomes, thus reducing the total computational cost. At the same time, the control law must satisfy the safety constraints over a longer time interval as we take the minimum values in \eqref{minfi}-\eqref{mingi}, hence rendering the approach more conservative.
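For example, the lower bound above is a one-line computation from the CAV specifications (a sketch; names ours):
\begin{verbatim}
def s_lower_bound(Ts, v_max, u_min, u_max):
    # Minimum admissible (s_ix, s_iv) so that no event can be
    # missed between two consecutive sensor samples.
    return v_max * Ts, max(u_max, abs(u_min)) * Ts
\end{verbatim}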
{\bf Communication Scheme}. As mentioned earlier, a coordinator is responsible for exchanging information among CAVs (but does not exert any control). To accommodate event-triggered communication, the coordinator table in Fig. \ref{fig:merging} is extended as shown in Table \ref{Table c} so that it includes ``relevant CAV info'' data for each CAV $i$: in addition to the state of CAV $i$ in column 2, denoted by $\textbf{x}_i(t_{i})$, the states of the CAVs $r \in \mathcal{R}_i(t_i)$ that affect the constraints of CAV $i$ are included in column 3, denoted by $\textbf{x}_r(t_{i})$. In an event-driven scheme,
frequent communication is generally not needed, since it occurs only when an event is triggered. CAV $i$ updates its state in the coordinator table and re-solves a QP in two cases, depending on which event occurs:
$(i)$ Event 1 triggered by $i$. The first step is state synchronization: CAV $i$ requests current states from all relevant CAVs and the coordinator updates these (column 3), as well as the state $\textbf{x}_i(t_i)$ (column 2).
CAV $i$ then solves its QP while the coordinator notifies all CAVs $r \in \mathcal{R}_i(t_i)$ of the new CAV $i$ state
so they can update their respective boundary set $S_r(t_{i})$. This may trigger an Event 2($r$) to occur at some future event time as in \eqref{events}; such an event cannot be triggered instantaneously, as it takes some finite time for a bound in $S_r(t_{i})$ to be reached because of Lipschitz continuity in the dynamics. In addition, the coordinator notifies all CAVs $j$ such that $i \in \mathcal{R}_j(t_i)$ (i.e. $i$ is relevant to $j$) so that they can update their bounds $S_j(t_i)$ respectively.
$(ii)$ Event 2($r$) is triggered by $r \in \mathcal{R}_i(t_i)$. When CAV $r$ reaches the boundary set $S_r(t_{i})$ it notifies the coordinator to update its state (column 2). The coordinator passes on this information to all CAVs $j$ where $r \in \mathcal{R}_j(t_i)$, which includes $i$ since $r \in \mathcal{R}_i(t_i)$, and the corresponding state of $r$ is updated (column 3). Then, CAV $i$ re-solves its QP and the coordinator updates $t_i$ to the current time and the state $\textbf{x}_i(t_i)$ (column 2) and the state $\textbf{x}_r(t_i)$(column 3). The rest of the process is the same as in case $(i)$.
Note that any update in CAV $i$'s state due to a triggered event can immediately affect only CAVs $l>i$ such that $i$ is relevant to $l$. If an \enquote{event chain} ensues, the number of events is bounded by $N(t_i)$.
\textbf{Remark} 2:
It is possible to simplify the communication scheme by assuming that each CAV can measure (through local sensors) the states of its relevant CAVs (i.e., for CAV $i$, the states of the CAVs $r \in \mathcal{R}_i(t_i)$). Thus, CAVs can check for violations not only of their own state boundaries $S_i(t_i)$ but also of their relevant CAVs' state boundaries $S_r(t_i)$. The same applies to the case where CAVs have a direct vehicle-to-vehicle (V2V) communication capability.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Extended Coordinator Table} \\
\hline
Index & CAV Info & Relevant CAV info & Lane\\
\hline
0 & $\textbf{x}_\textnormal{0}(t_{0})$ & - & Main\\
\hline
1 & $\textbf{x}_\textnormal{1}(t_{1})$ & - & Main\\
\hline
2 & $\textbf{x}_\textnormal{2}(t_{2})$ & $\textbf{x}_\textnormal{1}(t_{2})$ & Merging\\
\hline
3 & $\textbf{x}_\textnormal{3}(t_{3})$ & $\textbf{x}_\textnormal{1}(t_{3})$ ,$\textbf{x}_\textnormal{2}(t_{3})$ & Merging\\
\hline
4 & $\textbf{x}_\textnormal{4}(t_{4})$ & $\textbf{x}_\textnormal{1}(t_{4})$ ,$\textbf{x}_3(t_{4})$ & Main\\
\hline
5 & $\textbf{x}_\textnormal{5}(t_{5})$ & $\textbf{x}_\textnormal{4}(t_{5})$ & Main\\
\hline
\end{tabular}
\caption{Extended coordinator table from Fig.~\ref{fig:merging} for event-triggered control.}
\label{Table c}
\end{center}
\end{table}
\subsection{Self-Triggered Control}
As an alternative to event-triggered control, a self-triggered asynchronous control scheme can be used where each CAV $i$ communicates with the coordinator at specified time instants $\{t_{i,k}\},{k\in \mathbb{Z}^+}$. At each such instant $t_{i,k}$, CAV $i$ uploads its own state information $\textbf{x}_i(t_{i,k})$, the calculated control input $u_i(t_{i,k})$ that is going to be applied over the time interval $[t_{i,k},t_{i,k+1})$, and the next time when CAV $i$ will communicate with the coordinator and re-solve its QP, denoted as $t_{i,{\textmd{next}}}$. The data stored at the coordinator for all vehicles are shown in Table \ref{tab:1}. We denote the most recent stored information of the $i$-th CAV at the coordinator as $\mathcal{I}_i=[t_{i,{\textmd{last}}},t_{i,{\textmd{next}}},x_i(t_{i,{\textmd{last}}}),v_i(t_{i,{\textmd{last}}}),u_i(t_{i,{\textmd{last}}})]
$.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
$t_{i,{\textmd{last}}}$ & Last time CAV $i$ communicated.\\
$t_{i,{\textmd{next}}}$ & Next time CAV $i$ will communicate.\\
$x_i(t_{i,{\textmd{last}}})$ & Last updated position of CAV $i$.\\
$v_i(t_{i,{\textmd{last}}})$ & Last updated velocity of CAV $i$.\\
$u_i(t_{i,{\textmd{last}}})$ & Last control input of CAV $i$.\\
\hline
\end{tabular}
\caption{Data stored on the coordinator for self-triggered control}\label{tab:1}
\end{center}
\end{table}
The goal is to develop a self-triggered asynchronous algorithm to determine the sequence of time instants $t_{i,k}$ and the control input $u_i(t), t\in [t_{i,k},t_{i,k+1})$, for each CAV so as to solve the problem formulated in \eqref{QP-OCBF}. Providing a lower bound for the inter-event time interval is an imperative feature of a self-triggered scheme. It is worth mentioning that since Zeno behavior never occurs under Lipschitz continuity, such a guarantee is not necessary for the event-triggered control algorithm. To provide such a guarantee for the generated time instants $t_{i,k}$, there should exist some $T_d>0$ such that $|t_{i,k+1}-t_{i,k}|\geq T_d$. This is a design parameter which depends on the sensor sampling rate, as well as the clock of the on-board embedded system on each CAV. For the same reason, the time instants $t_{i,k}$ are calculated such that $(t_{i,k}~\textrm{mod}~T_d)=0$, where $\textrm{mod}$ denotes the modulo operator.
In contrast to the time-driven scheme with a fixed sampling time $\Delta$, each CAV $i \in F(t_{i,k})$ calculates the time instant $t_{i,k}$ at which the QP problem must be solved in a self-triggered fashion. As in the event-triggered scheme, at each time instant $t_{i,k}$, CAV $i$ solves its QP problem to obtain $u_i(t_{i,k})$. However, unlike the event-triggered scheme, CAV $i$ also calculates the next time instant $t_{i,k+1}$ at which it should re-solve the QP problem. Note that, similar to the time-driven scheme, the newly obtained control input $u_i(t_{i,k})$ is held constant over the time interval $[t_{i,k},t_{i,k+1})$ for CAV $i$.
We address two problems in the following. First, it will be shown how a lower bound $T_d$ on the inter-event time interval can be ensured. Second, we will show how each CAV $i \in F(t_{i,k})$ specifies the time instants $t_{i,k}$.
\subsubsection{Minimum Inter-event Time, $T_d$}
In this subsection, it is shown how the CBF constraints \eqref{CBF1}, \eqref{CBF2}, \eqref{CBF3}, and \eqref{CBF4} for the CAV $i$ should be modified to ensure a minimum inter-event time $T_d$. This is achieved by adding extra positive terms
to the right hand side of these constraints.
First, consider the maximum speed CBF constraint \eqref{CBF3} to be satisfied when solving the QP problem at $t_{i,k}$ with feasible solution $u_i(t_{i,k})$. Thus, we have:
\begin{align} \label{cbf_i1}
\mathcal{C}_{i,1}(t_{i,k},u_i(t_{i,k}))&:=-u_i(t_{i,k})+k_3b_3(\textbf{x}_i(t_{i,k})) \geq 0.
\end{align}
However, the CBF constraint should be satisfied over the entire time interval $[t_{i,k},t_{i,k}+T_d]$ to ensure the minimum inter-event time. Therefore, for all $t\in [t_{i,k},t_{i,k}+T_d]$:
\begin{align}
\mathcal{C}_{i,1}(t,u_i(t_{i,k}))&=-u_i(t_{i,k})+k_3b_3(\textbf{x}_i(t)) \geq 0.\label{eq_c1}
\end{align}
By defining $\tau=t-t_{i,k}$ as the elapsed time after $t_{i,k}$, and recalling that the acceleration is kept constant over the inter-event time, we can derive an expression for the velocity $v_i(t)$ as follows:
\begin{equation} \label{vi}
v_i(\tau)=v_i(t_{i,k})+u_i(t_{i,k})\tau, \ \ \tau \in [0,T_d].
\end{equation}
Now, by using \eqref{CBF3}, \eqref{cbf_i1}, and \eqref{vi}, we can rewrite \eqref{eq_c1} as follows:
\begin{align} \label{cbf_i1-eq_c1}
\mathcal{C}_{i,1}(t,u_i(t_{i,k}))&=\mathcal{C}_{i,1}(t_{i,k},u_i(t_{i,k}))-k_3u_i(t_{i,k})\tau, \quad \tau \in [0,T_d].
\end{align}
In what follows, we show that if $\mathcal{C}_{i,1}(t_{i,k},u_i(t_{i,k}))\geq \sigma_{i,1}(T_d)$ holds, then:
\begin{equation} \label{temp 1}
\mathcal{C}_{i,1}(t,u_i(t_{i,k}))\geq 0, \ \ \ \forall t \in [t_{i,k},t_{i,k}+T_d],
\end{equation}
where $\sigma_{i,1}(T_d):=k_3u_MT_d$ and $u_M=\max(|u_{\min}|,u_{\max}) >0$. To prove \eqref{temp 1}, we can rewrite $\mathcal{C}_{i,1}(t_{i,k},u_i(t_{i,k}))\geq \sigma_{i,1}(T_d)$ as follows:
\begin{align} \label{inequality1}
\mathcal{C}_{i,1}(t_{i,k},u_i(t_{i,k}))&- \mathcal{C}_{i,1}(t,u_i(t_{i,k})) \nonumber \\&+ \mathcal{C}_{i,1}(t,u_i(t_{i,k}))\geq \sigma_{i,1}(T_d)
\end{align}
By combining \eqref{inequality1} with \eqref{cbf_i1-eq_c1}, for all $t \in [t_{i,k},t_{i,k}+T_d]$ and $\tau \in [0,T_d]$ we have:
\begin{align}
\mathcal{C}_{i,1}(t,u_i(t_{i,k})) \geq \sigma_{i,1}(T_d)&-k_3u_i(t_{i,k})\tau \geq 0,
\end{align}
where the non-negativity follows from the definition $\sigma_{i,1}(T_d)=k_3u_MT_d$ and $u_i(t_{i,k})\leq u_M$, i.e., $k_3u_MT_d-k_3u_i(t_{i,k})\tau \geq 0$ for $\tau \in [0,T_d]$. Hence, in order to ensure the minimum inter-event interval $T_d$, the CBF constraint \eqref{CBF3} should be modified to:
\begin{align}\label{cbf_modified_1}
\mathcal{C}_{i,1}(t,u_i(t))\geq \sigma_{i,1}(T_d).
\end{align}
Following a derivation similar to that for the maximum speed constraint \eqref{CBF3}, it follows that the minimum speed constraint \eqref{CBF4} should be modified to:
\begin{align}\label{cbf_modified_2}
\mathcal{C}_{i,2}(t,u_i(t)) \geq \sigma_{i,2}(T_d),
\end{align}
where
\begin{align}
\mathcal{C}_{i,2}(t,u_i(t))&:=u_i(t)+k_4b_4(\textbf{x}_i(t)) \nonumber \\ \sigma_{i,2}(T_d)&:=k_4u_{M}T_d.
\end{align}
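Both speed margins are straightforward to evaluate; a small sketch (names ours):
\begin{verbatim}
def speed_margins(Td, u_min, u_max, k3, k4):
    # sigma_{i,1}(T_d) and sigma_{i,2}(T_d) for the modified
    # maximum/minimum speed CBF constraints.
    u_M = max(abs(u_min), u_max)
    return k3 * u_M * Td, k4 * u_M * Td
\end{verbatim}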
Next, let us consider the safety CBF constraint \eqref{CBF1} to be satisfied when solving the QP problem at $t_{i,k}$ with a feasible solution $u_i(t_{i,k})$. It follows that
\begin{align} \nonumber
\mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k})):= & v_{i_p}(t_{i,k})-v_i(t_{i,k})-{\varphi} u_i(t_{i,k})\\ &+k_1b_1(\textbf{x}_i(t_{i,k}),\textbf{x}_{i_p}(t_{i,k}))\geq 0. \label{eq_c2}
\end{align}
Once again, we need to ensure that the CBF constraint is satisfied over the entire time interval $[t_{i,k},t_{i,k}+T_d]$ as follows:
\begin{align} \nonumber
\mathcal{C}_{i,3}(t,u_i(t_{i,k})) &=v_{i_p}(t)-v_i(t)-{\varphi} u_i(t_{i,k})\\&+k_1b_1(\textbf{x}_i(t),\textbf{x}_{i_p}(t)) \geq 0, \ \ t \in [t_{i,k},t_{i,k}+T_d]. \label{eq_30}
\end{align}
For ease of notation, we use the following definitions:
\begin{equation} \label{viip_tik}
\Delta v_{i,i_p}(t_{i,k})=v_{i_p}(t_{i,k})-v_i(t_{i,k}),
\end{equation}
\begin{equation}\label{uiip_tik}
\Delta u_{i,i_p}(t_{i,k})=u_{i_p}(t_{i,k})-u_i(t_{i,k}).
\end{equation}
Similar to the procedure used to derive the lower bounds for constraints \eqref{CBF3} and \eqref{CBF4}, by using \eqref{CBF1}, \eqref{viip_tik}, and \eqref{uiip_tik}, we rewrite \eqref{eq_30} as follows:
\begin{align}
\mathcal{C}_{i,3}(t,u_i(t_{i,k}))= & \mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k}))+\Delta u_{i,i_p}(t_{i,k})\tau \nonumber \\ &+ k_1 \bigl( 0.5\Delta u_{i,i_p}(t_{i,k})\tau^2 +\Delta v_{i,i_p}(t_{i,k})\tau \nonumber\\&-\varphi u_i(t_{i,k})\tau \bigl) \geq 0, \ \ \ \tau \in [0,T_d] \label{Ci3}.
\end{align}
To further simplify the notation, we define
\begin{equation} \label{M3}
\mathcal{M}_{i,3}(t,t_{i,k},u_i(t_{i,k})):= \mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k}))-\mathcal{C}_{i,3}(t,u_i(t_{i,k})),
\end{equation}
which will be used later on.
Similarly, in the following we show that if $\mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k}))\geq \sigma_{i,3}(t_{i,k},T_d)$ holds, then:
\begin{equation} \label{temp 3}
\mathcal{C}_{i,3}(t,u_i(t_{i,k}))\geq 0, \ \ \ t \in [t_{i,k},t_{i,k}+T_d],
\end{equation}
where
\begin{align}
\sigma_{i,3}(t_{i,k},T_d) := &(|u_{i_p}(t_{i,k})|+u_M)T_d+k_1 \bigl(0.5 T_d^2 (|u_{i_p}(t_{i,k})|+u_M) \nonumber \\
&+(|\Delta v_{i,i_p}(t_{i,k})|+\varphi u_M)T_d\bigr).
\end{align}
To demonstrate \eqref{temp 3}, we follow the same procedure as before by starting with
\begin{equation} \label{temp 2}
\mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k}))\geq \sigma_{i,3}(t_{i,k},T_d),
\end{equation}
and then rewrite \eqref{temp 2} in the following form:
\begin{align} \label{inequality2}
\mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k}))&- \mathcal{C}_{i,3}(t,u_i(t_{i,k})) \nonumber \\&+ \mathcal{C}_{i,3}(t,u_i(t_{i,k}))\geq \sigma_{i,3}(t_{i,k},T_d)
\end{align}
Then, combining \eqref{M3} and \eqref{inequality2}, it follows that for $t \in [t_{i,k},t_{i,k}+T_d]$:
\begin{align}
\mathcal{C}_{i,3}(t,u_i(t_{i,k})) &\geq \sigma_{i,3}(t_{i,k},T_d) - \mathcal{M}_{i,3}(t,t_{i,k},u_i(t_{i,k}))
\end{align}
where $\sigma_{i,3}(t_{i,k},T_d)$, i.e., the upper bound of $\mathcal{M}_{i,3}(t,t_{i,k},u_i(t_{i,k}))$, is chosen such that the right-hand side of the inequality is always nonnegative:
\begin{align}
\sigma_{i,3}(t_{i,k},T_d) - \mathcal{M}_{i,3}(t,t_{i,k},u_i(t_{i,k})) \geq 0,
\end{align}
hence, by modifying the CBF constraint \eqref{CBF1} to:
\begin{align} \label{cbf_modified_3}
\mathcal{C}_{i,3}(t,u_i(t))\geq \sigma_{i,3}(t,T_d),
\end{align}
one can enforce \eqref{eq_30}.
Following a similar approach, to provide a minimum inter-event time $T_d$, the CBF constraint \eqref{CBF2} should be modified to,
\begin{align} \label{cbf_modified_4}
\mathcal{C}_{i,4}(t,u_i(t))\geq \sigma_{i,4}(t,T_d),
\end{align}
where
\begin{align*}
\mathcal{C}_{i,4}(t,u_i(t))=&v_{i_c}(t)-v_i(t)-\frac{\varphi}{L}v_i^2(t)-\varphi \frac{x_i(t)}{L}u_i(t)\nonumber \\ &+k_2b_2(\textbf{x}_i(t),\textbf{x}_{i_c}(t)),
\end{align*}
\begin{align}
\sigma_{i,4}(t,T_d):=&0.5\frac{\varphi}{L}u^2_M T_d^3 \nonumber \\
+&k_2\Big( \frac{3\varphi}{2L}(u_M^2+|v_i(t)| u_M)+0.5 (|u_{i_c}(t)|+u_M) \Big)T^2_d \nonumber \\
+&\Big(|u_{i_c}(t)|+ ( \frac{3\varphi}{L}|v_i(t)|+\frac{\varphi}{L}|x_i(t)|+1)u_M\nonumber \\
+&|v_{i_c}(t)|+|v_i(t)|+\frac{\varphi}{L}v_i^2(t) \Big)k_2 T_d.
\end{align}
Finally, since the CLF constraint \eqref{CLF} is added optionally for an optimal trajectory, it can be relaxed in the presence of safety constraints, and there is generally no need to ensure that it is satisfied over the whole time interval $t \in [t_{i,k},t_{i,k}+T_d]$ with the same relaxation variable value $e_i(t_{i,k})$. Therefore, there is no need to modify it as was necessary for the CBF constraints. In conclusion, to ensure the minimum inter-event time $T_d$, at each time instant $t_{i,k}$, CAV $i$ needs to solve the following QP:
\begin{align} \label{QP2}
\min_{u_{i,k},e_{i,k}}~~\frac{1}{2}(u_{i,k}-u_i^{\textmd{ref}}(t_{i,k}))^2+\lambda e_{i,k}^2
\end{align}
subject to the modified CBF constraints \eqref{cbf_modified_1}, \eqref{cbf_modified_2}, \eqref{cbf_modified_3}, and \eqref{cbf_modified_4}, the control input bounds \eqref{VehicleConstraints2} and the CLF constraint \eqref{CLF}. In the next subsection, it will be shown how the time-instant $t_{i,k}$ should be obtained for CAV $i$.
\subsubsection{Self-Triggered Time Instant Calculation}
The key idea in the self-triggered framework is to predict the first time instant at which any of the CBF constraints \eqref{CBF1}, \eqref{CBF2}, \eqref{CBF3} or \eqref{CBF4} is violated and select that as the next time instant $t_{i,k+1}$. CAV $i$ then communicates with the coordinator, requests the necessary information to solve its next QP, obtains a new control input $u_i(t_{i,k+1})$, and updates its stored data in the coordinator table. Note that it is not required to consider the modified CBF constraints \eqref{cbf_modified_1}, \eqref{cbf_modified_2}, \eqref{cbf_modified_3}, and \eqref{cbf_modified_4} here, since these are obtained purely for ensuring the minimum inter-event time $T_d$, while the original CBF constraints \eqref{CBF1}, \eqref{CBF2}, \eqref{CBF3}, and \eqref{CBF4} are sufficient for satisfying Constraints 1 and 2, and the state limitations in Constraint 3, of the problem in \eqref{eqn:energyobja}.
For the speed constraint \eqref{CBF3}, it is clear that if $u_i(t_{i,k})\leq0$ (decelerating), then this constraint always holds, hence there is no need to check it. However, for $u_i(t_{i,k}) >0$ (accelerating), the constraint \eqref{CBF3} can be violated. To calculate the time instant $t^1_{i,k}$ at which this occurs, we need to solve the following equation:
\begin{equation} \label{temp 4}
-u_i(t_{i,k})+k_3(v_{\max}-v_i(t))=0, \ \ \ \ t > t_{i,k}.
\end{equation}
Recalling that the acceleration is held constant in the inter-event time, \eqref{temp 4} can be rewritten as
\begin{equation}\label{temp 5}
-u_i(t_{i,k})+k_3(v_{\max}-v_i(t_{i,k})-u_i(t_{i,k}) (t-t_{i,k}))=0
\end{equation}
and its solution yields:
\begin{align*}
t^1_{i,k}=t_{i,k}+\frac{-u_i(t_{i,k})+k_3v_{\textmd{max}}-k_3v_i(t_{i,k})}{k_3u_i(t_{i,k})}.
\end{align*}
Observe that at $t_{i,k}$, the QP in \eqref{QP2} is solved, therefore the constraint \eqref{cbf_modified_1} is satisfied at $t=t_{i,k}$ and we have $-u_i(t_{i,k})+k_3(v_{\textmd{max}}-v_i(t_{i,k})) \geq \sigma_{i,1}(T_d) > 0$. It follows that $t^1_{i,k}\geq t_{i,k}+T_d$.
For the second speed constraint \eqref{CBF4}, it is clear that if $u_i(t_{i,k})\geq0$ (accelerating), then this constraint is satisfied, hence there is no need to check it. However, for $u_i(t_{i,k}) <0$ (decelerating), the constraint \eqref{CBF4} can be violated. Similar to the previous case, we can solve the following equation for $t$ to obtain $t^2_{i,k}$ as the first time instant that constraint \eqref{CBF4} is violated:
\begin{equation}\label{temp 6}
u_i(t_{i,k})+k_4(v_i(t_{i,k})+u_i(t_{i,k}) (t-t_{i,k})-v_{\textmd{min}}) = 0 \ \ \ \ t > t_{i,k}.
\end{equation}
Solving \eqref{temp 6} leads to
\begin{align*}
t^2_{i,k}=t_{i,k}+\frac{-u_i(t_{i,k})+k_4v_{\textmd{min}}-k_4v_i(t_{i,k})}{k_4u_i(t_{i,k})},
\end{align*}
and it can be shown, similar to the previous case, that $t^2_{i,k}\geq t_{i,k}+T_d$.
For the rear-end safety constraint \eqref{CBF1}, we need to find the first time instant $t>t_{i,k}$ such that $\mathcal{C}_{i,3}(t,u_i(t_{i,k}))=0$ in
\eqref{Ci3}. This leads to the following quadratic equation:
\begin{align*}
k_1&\big (0.5 \Delta u_{i,i_p}(t_{i,k})\big)\tau^2+\big ( \Delta u_{i,i_p}(t_{i,k})+k_1(\Delta v_{i,i_p}(t_{i,k})\\&-\varphi u_i(t_{i,k}) )\big )\tau+ \mathcal{C}_{i,3}(t_{i,k},u_i(t_{i,k}))=0.
\end{align*}
The least positive root of the above equation is denoted as $\tau_{i,3}$ and we define $t^3_{i,k}=t_{i,k}+\tau_{i,3}$. The case of both roots being negative corresponds to the constraint \eqref{CBF1} not being violated, hence $t^3_{i,k}=\infty$. Moreover, due to the added term in \eqref{cbf_modified_3}, it follows that $t^3_{i,k}\geq t_{i,k}+T_d$.
Similarly for the safe merging constraint \eqref{CBF2}, the first time instant $t>t_{i,k}$ such that $\mathcal{C}_{i,4}(t,u_i(t_{i,k}))=0$ can be obtained by solving the following cubic equation:
\begin{align*}
&-k_2\frac{\varphi}{2L} u^2_i(t_{i,k})\tau^3+ \big (0.5 \Delta u_{i,i_c}(t_{i,k}) -k_2 \frac{3\varphi}{2L}u^2_i(t_{i,k})
\\&-k_2\frac{3\varphi}{2L} v_{i}(t_{i,k}) u_i(t_{i,k}) \big ) \tau^2
+k_2\Big (\Delta u_{i,i_c}(t_{i,k})-\frac{3\varphi}{L}v_i(t_{i,k})u_i(t_{i,k})\\
&+ ( \Delta v_{i,i_c}(t_{i,k})-\frac{\varphi}{L} v^2_i(t_{i,k})-\frac{\varphi}{L} u_i(t_{i,k})x_{i}(t_{i,k})) \Big)\tau\\
&+\mathcal{C}_{i,4}(t_{i,k},u_i(t_{i,k}))=0,
\end{align*}
where
\begin{align*}
\Delta v_{i,i_c}(t_{i,k})=v_{i_c}(t_{i,k})-v_i(t_{i,k}),\\
\Delta u_{i,i_c}(t_{i,k})=u_{i_c}(t_{i,k})-u_i(t_{i,k}).
\end{align*}
The least positive root is denoted as $\tau_{i,4}$ and we define $t^4_{i,k}=t_{i,k}+\tau_{i,4}$. Moreover, due to solving QP in \eqref{QP2} subject to the modified CBF constraint derived in \eqref{cbf_modified_4}, it follows that $t^4_{i,k}\geq t_{i,k}+T_d$. The case of having all roots negative corresponds to the constraint \eqref{CBF2} not being violated, hence $t^4_{i,k}=\infty$.
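In summary, the four candidate violation times are obtained in closed form (speed constraints) or by polynomial root finding (safety and merging constraints); a hedged NumPy sketch, where the quadratic and cubic coefficient vectors (highest order first) are filled in from the expressions above:
\begin{verbatim}
import numpy as np

def least_pos_root(coeffs):
    # Smallest positive real root of a polynomial, or inf if none.
    r = np.roots(coeffs)
    r = r.real[(abs(r.imag) < 1e-9) & (r.real > 0)]
    return r.min() if r.size else np.inf

def next_violation_time(t_k, u_i, v_i, vmin, vmax, k3, k4,
                        quad_coeffs, cubic_coeffs):
    # Closed-form candidates for the two speed constraints.
    t1 = (t_k + (-u_i + k3 * (vmax - v_i)) / (k3 * u_i)
          if u_i > 0 else np.inf)
    t2 = (t_k + (-u_i + k4 * (vmin - v_i)) / (k4 * u_i)
          if u_i < 0 else np.inf)
    t3 = t_k + least_pos_root(quad_coeffs)    # rear-end safety
    t4 = t_k + least_pos_root(cubic_coeffs)   # safe merging
    return min(t1, t2, t3, t4)
\end{verbatim}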
\subsubsection{Self-Triggered Scheme}
First, it should be noted that the time instants $t^q_{i,k}$, $q=1,\dots,4$ are obtained based on the safety constraints \eqref{Safety} and \eqref{SafeMerging}, as well as the vehicle state limitations \eqref{VehicleConstraints1}. However, this choice can compromise the optimal performance of CAVs in the CZ. In particular, it is possible that the acceleration of a CAV stays constant for a long period of time if there are no safety constraints or vehicle state limit violations, whereas, as shown in \cite{XIAO2021109592}, the optimal acceleration trajectory of the CAV in fact changes linearly. Therefore, in order to avoid this issue and minimize deviations from the optimal acceleration trajectory, one can impose a maximum allowable inter-event time, denoted by $T_{\max}$. To accomplish this, we can define
\begin{align} \label{eq:tmin}
t^{\min}_{i,k}=\min \Bigl\{t^1_{i,k},t^2_{i,k},t^3_{i,k},t^4_{i,k},t_{i,k}+T_{\max} \Bigl\}.
\end{align}
The next update time instant for CAV $i$, i.e., $t_{i,k+1}=t_{i,\textmd{next}}$, should now be calculated. Towards this goal, consider the case where $t^{\min}_{i,k} \leq \min(t_{i_p,\textmd{next}},t_{i_c,\textmd{next}})$,
which corresponds to the next update time instant of CAV $i$ occurring no later than the next control update of the preceding CAV $i_p$ or the conflicting CAV $i_c$. Then, we set $t_{i,k+1}=t_{i,\textmd{next}}=t^{\min}_{i,k}$ from (\ref{eq:tmin}).
The only remaining case is when $t^{\min}_{i,k}> \min(t_{i_p,\textmd{next}},t_{i_c,\textmd{next}})$, which corresponds to either CAV $i_p$ or $i_c$ updating its control input sooner than CAV $i$, hence CAV $i$ does not have access to their updated control inputs. Consequently, the violation times predicted for constraints \eqref{CBF1} and \eqref{CBF2} are no longer valid. In this case, we set $t_{i,\textmd{next}}= \min(t_{i_p,\textmd{next}},t_{i_c,\textmd{next}})+T_d$, which implies that CAV $i$'s next update time will be immediately after the update time of CAV $i_p$ or $i_c$ with a minimum inter-event time interval $T_d$.
By setting $t_{r,\textrm{next}}^{\min}=\min(t_{{i_p},\textmd{next}},t_{i_c,\textmd{next}})$, we can summarize the selection of the next self-triggered time instant as follows:
\begin{align}\label{t_next}
t_{i,{\textrm{next}}}=\left \{ \begin{array}{ll}
t_{i,k}^{\min}, \ \ \ \ \ \ \ \ \ t_{i,k}^{\min}\leq t_{r,\textrm{next}}^{\min} \\
t_{r,\textrm{next}}^{\min}+ T_d, \ \ \ \ \textrm{otherwise}.
\end{array} \right.
\end{align}
Finally, in order to have $(t_{i,k}~\textrm{mod}~T_d)=0$, we set $t_{i,\textmd{next}}=\lfloor \frac{t_{i,\textmd{next}}}{T_d} \rfloor \times T_d$.
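Putting \eqref{eq:tmin}, \eqref{t_next} and the grid alignment together, the scheduling rule is only a few lines (a sketch; variable names ours):
\begin{verbatim}
import math

def schedule_next(t_min_i, t_next_ip, t_next_ic, Td):
    # Rule (t_next): defer to just after a neighbor's update if it
    # comes first, then align the result to the T_d grid.
    t_r = min(t_next_ip, t_next_ic)
    t_next = t_min_i if t_min_i <= t_r else t_r + Td
    return math.floor(t_next / Td) * Td
\end{verbatim}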
It should be noted that the case of $t_{i,{\textrm{next}}}=t_{i_c,{\textrm{next}}}$ or $t_{i,{\textrm{next}}}=t_{i_p,{\textrm{next}}}$ corresponds to identical next update times for CAV $i$ and CAV $i_c$ or CAV $i_p$, so that they need to solve their QPs at the same time instant. However, in order for CAV $i$ to solve its QP at the time instant $t_{i,k+1}=t_{i,\textmd{next}}$, it requires the updated control input of CAV $i_c$ or CAV $i_p$, i.e., $u_{i_c}(t_{i,k+1})$ or $u_{i_p}(t_{i,k+1})$; this is practically impossible. To remedy this issue, whenever $t_{i,{\textrm{next}}}=t_{i_c,{\textrm{next}}}$ or $t_{i,{\textrm{next}}}=t_{i_p,{\textrm{next}}}$, CAV $i$ solves its QP at $t_{i,k+1}$ by using $u_M$ instead of $u_{i_c}(t_{i,k+1})$ and $u_{i_p}(t_{i,k+1})$ in \eqref{cbf_modified_3} and \eqref{cbf_modified_4}. This corresponds to considering the worst case in $\sigma_{i,3}(t,T_d)$ and $\sigma_{i,4}(t,T_d)$. Moreover, since calculating the next update time $t_{i,k+2}$ also depends on $u_{i_c}(t_{i,k+1})$ and $u_{i_p}(t_{i,k+1})$, CAV $i$ in this case acts similarly to the time-driven case by setting $t_{i,k+2}=t_{i,k+1}+T_d$. Then, at the next time instant $t_{i,k+2}$, CAV $i$ can obtain the updated control inputs of CAV $i_c$ and CAV $i_p$ from the coordinator and resume the proposed self-triggered scheme.
{\bf Communication Scheme}.
In view of the constraints \eqref{CBF1} and \eqref{CBF2}, CAV $i$ requires knowledge of $t_{i_p,\textmd{last}}$, $v_{i_p}(t_{i,k})$, $x_{i_p}(t_{i,k})$, $t_{i_c,\textmd{last}}$, $v_{i_c}(t_{i,k})$, and $x_{i_c}(t_{i,k})$ at time instant $t_{i,k}$. Hence, at each time instant when it accesses the coordinator, it needs to download the recorded data of CAVs $i_p$ and $i_c$.
Then, the required updated information at $t_{i,k}$ for CAV $i_p$ can be calculated as
\begin{equation}
v_{i_p}(t_{i,k})= v_{i_p}(t_{i_p,\textmd{last}})+(t_{i,k}-t_{i_p,\textmd{last}})u_{i_p}(t_{i_p,\textmd{last}})
\end{equation}
\begin{align}
x_{i_p}(t_{i,k})= x_{i_p}(t_{i_p,\textmd{last}})+&(t_{i,k}-t_{i_p,\textmd{last}})v_{i_p}(t_{i_p,\textmd{last}})\nonumber \\
+&\frac{1}{2}(t_{i,k}-t_{i_p,\textmd{last}})^2u_{i_p}(t_{i_p,\textmd{last}})
\end{align}
with similar information calculated for CAV $i_c$. Note that the information for CAV $i_p$ may also be obtained from on-board sensors at CAV $i$, if available. There are two key differences between the event-triggered and self-triggered communication schemes: $(i)$ In the self-triggered approach, in addition to the CAV states $\mathbf{x}_i(t_{i,\textmd{last}})$, the control input $u_i(t_{i,\textmd{last}})$, the current time instant $t_{i,\textmd{last}}$, and the next QP-solving time instant $t_{i,\textmd{next}}$ have to be shared, whereas in the event-triggered scheme only the states of each CAV and of its relevant CAVs at the QP-solving times are needed. $(ii)$ In this scheme, unlike the event-triggered one, the coordinator does not notify the other relevant CAVs when a particular CAV solves its QP and updates its data, since the next QP-solving time is known and stored in the coordinator table. For example, when CAV $i$ solves its QP, there is no need for the CAVs $j$ with $i \in \mathcal{R}_j(t_i)$ to be notified, as they are already aware. Instead, the coordinator only receives and stores the current time instant, states, control input, and next QP-solving time of CAV $i$. Also, upon a download request from a particular CAV at its QP-solving time, the coordinator grants that CAV access to the data of its relevant CAVs $r$.
\section{SIMULATION RESULTS}
\label{sec:simulation}
All algorithms in this section have been implemented using \textsc{MATLAB}.
We used \textsc{quadprog} for solving QPs of the form
\eqref{QP}, \eqref{eq:QPtk} and \eqref{QP2}, \textsc{linprog} for solving the linear programs in \eqref{minfi}, \eqref{mingammai} and \eqref{mingi}, \textsc{fmincon} for the nonlinear optimization problems arising when \eqref{minfi} and \eqref{mingammai} become nonlinear, and \textsc{ode45} to integrate the vehicle dynamics.
We have considered the merging problem shown in Fig. \ref{fig:merging} where CAVs are simulated according to Poisson arrival processes with an arrival rate which is fixed for the purpose of comparing the time-driven approach and the event-driven schemes (over different bound values in \eqref{events} for the event-triggered scheme and with different $T_{\max}$ for the self-triggered scheme). The initial speed $v_{i}(t_{i,0})$ is randomly generated with a uniform distribution over $[15\,\textnormal{m/s}, 20\,\textnormal{m/s}]$ at each of the origins $O$ and $O^{\prime}$. The
parameters for \eqref{QP-OCBF}, \eqref{eq:QPtk}, and \eqref{QP2}
are: $L = 400\textnormal{m}, \varphi = 1.8\textnormal{s}, \delta = 0\textnormal{m}, u_{\max} = 4.905 \textnormal{m/s}^2, u_{\min} = -5.886\textnormal{m/s}^2, v_{\max} = 30\textnormal{m/s}, v_{\min} = 0\textnormal{m/s}, k_1=k_2=k_3=k_4=1, \lambda= 10$ and $T_d=0.05\textnormal{s}$. The sensor sampling rate is $20$Hz, sufficiently high to avoid missing any triggering event, as discussed earlier.
The control update period for time-driven control is $\Delta t=0.05$s. For the event-triggered scheme, we let the bounds $S=[s_v,s_x]$ be the same for all CAVs in the network and vary them over the values $\lbrace[0.5,1.5],[0.5,2],[0.5,2.5]\rbrace$. For the self-triggered scheme, we set $T_{\max} \in \{0.5,1,1.5,2\}$ to allow a comprehensive comparison.
In our simulations, we included the computation of a more realistic energy consumption model \cite{kamal2012model} to supplement the simple surrogate $L_2$-norm ($u^2$) model in our analysis:
$f_{\textrm{v}}(t)=f_{\textrm{cruise}}(t)+f_{\textrm{accel}}(t)$ with
\begin{align*}
f_{\textrm{cruise}}(t) &= \omega_0+\omega_1v_i(t)+\omega_2v^2_i(t)+\omega_3v^3_i(t),\\
f_{\textrm{accel}}(t) &=(r_0+r_1v_i(t)+r_2v^2_i(t))u_i(t).
\end{align*}
where we used typical values for the parameters $\omega_0,\omega_1,\omega_2,\omega_3,r_0,r_1$, and $r_2$ as reported in \cite{kamal2012model}.
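A hedged sketch of evaluating this fuel model (parameter values must be taken from \cite{kamal2012model}; clamping the acceleration term at zero during braking is our assumption, consistent with the discussion below):
\begin{verbatim}
def fuel_rate(v, u, w, r):
    # w = (w0, w1, w2, w3) and r = (r0, r1, r2) from the model above.
    f_cruise = w[0] + w[1] * v + w[2] * v ** 2 + w[3] * v ** 3
    f_accel = (r[0] + r[1] * v + r[2] * v ** 2) * u
    return f_cruise + max(f_accel, 0.0)   # no fuel for braking
\end{verbatim}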
Our results from several simulations of the three methods (the time-driven method, the event-triggered scheme, and the self-triggered scheme) under the same conditions, with different values of the relative weight of energy vs. time, are shown in Tables \ref{Table event} and \ref{Table self}.
We observe that by using the event-triggered and self-triggered approaches we are able to significantly reduce the number of infeasible QP cases (by up to $95\%$) compared to the time-driven approach. At the same time, the overall number of instances when a QP needs to be solved has also decreased, by up to $68\%$ and $80\%$ in the event-triggered and self-triggered approaches, respectively.
Note that the large majority of infeasibilities are due to holding the acceleration constant over an inappropriate sampling time, which can invalidate the forward invariance property of CBFs over the entire time interval. These infeasible cases were eliminated by the event-triggered and self-triggered schemes. However, another source of infeasibility is due to conflicts that may arise between the CBF constraints and the control bounds in a QP. This cannot be remedied through the proposed event-triggered or self-triggered QPs; it can, however, be dealt with by introducing a sufficient condition that guarantees no such conflict, as described in \cite{XIAO2022inf}.
In Tables \ref{Table event} and \ref{Table self}, we can also observe some loss of performance (i.e., the average travel time increases, hence road throughput decreases) in both approaches as the bound parameters in the event-triggered approach and $T_{\max}$ in the self-triggered approach increase, hence increasing conservativeness. On the other hand, this decreases the computational load expressed in terms of the number of QPs that are solved in both methods, illustrating the trade-off discussed in previous sections.
There is also an apparent discrepancy in the energy consumption results: when the $L_2$-norm of the control input is used as a simple metric for energy consumption, the values are higher under event-triggered and self-triggered control, whereas the detailed fuel consumption model shows lower values compared to time-driven control. This is because $u_i^2$ penalizes CAVs when they decelerate, whereas deceleration incurs no such penalty under a realistic fuel consumption model.
\begin{table*}\scriptsize
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\cline{1-6}
&Item & \multicolumn{3}{|c|}{Event triggered} & Time driven\\
\cline{2-6}
& Bounds & $s_v=0.5, s_x=1.5$ & $s_v=0.5, s_x=2$ & $s_v=0.5, s_x=2.5$ & $\Delta t = 0.05$\\
\hline
\multirow{4}{*}{\makecell{$\alpha=0.1$ }} & Ave. Travel time & 19.61 & 19.73 & 19.65 & 19.42\\
\cline{2-6}
& Ave. $\frac{1}{2} u^2$ & 4.45 & 4.81 & 5.16 & 3.18\\
\cline{2-6}
& Ave. Fuel consumption &31.77 & 31.51 & 31.04 & 31.61\\
\cline{2-6}
&Computation load (Num of QPs solved) & 50\% (17853)& 47\% (16778) & 34\% (12168) & 100\% (35443) \\
\cline{2-6}
& Num of infeasible cases & \textcolor{blue}{42} & \textcolor{blue}{42} & \textcolor{blue}{43} & \textcolor{red}{315}\\
\hline
\multirow{4}{*}{\makecell{$\alpha=0.25$ }} & Ave. Travel time & 15.82 & 15.88 & 15.95 & 15.44\\
\cline{2-6}
& Ave. $\frac{1}{2} u^2$& 13.93 & 14.06 & 14.25 & 13.34\\
\cline{2-6}
& Ave. Fuel consumption & 52.12 & 51.69 & 51.42 & 55.81 \\
\cline{2-6}
&Computation load (Num of QPs solved) &51\% (14465) & 51\% (14403) & 48\% (13707) & 100\% (28200)\\
\cline{2-6}
& Num of infeasible cases & \textcolor{blue}{27} & \textcolor{blue}{27} & \textcolor{blue}{28} & \textcolor{red}{341} \\
\hline
\multirow{4}{*}{\makecell{$\alpha=0.4$ }} & Ave. Travel time & 15.4 & 15.46 & 15.53 & 15.01 \\
\cline{2-6}
& Ave. $\frac{1}{2} u^2$& 18.04 & 18.13 & 18.22 & 17.67\\
\cline{2-6}
& Ave. Fuel consumption & 53.155 & 52.77 & 52.42 & 56.5\\
\cline{2-6}
&Computation load (Num of QPs solved) & 54\% (14089) & 53\% (14072) & 49\% (13573) & 100\% (27412)\\
\cline{2-6}
& Num of infeasible cases & \textcolor{blue}{25} & \textcolor{blue}{25} & \textcolor{blue}{25} & \textcolor{red}{321} \\
\hline
\multirow{4}{*}{\makecell{$\alpha=0.5$ }} & Ave. Travel time & 15.05 & 15.11 & 15.17 & 14.63 \\
\cline{2-6}
& Ave. $\frac{1}{2} u^2$ &24.94 & 24.88 & 24.93 & 25.08\\
\cline{2-6}
& Ave. Fuel consumption & 53.65 & 53.41 & 53.21 & 56.93 \\
\cline{2-6}
&Computation load (Num of QPs solved) & 51\% (13764) & 51\% (13758) & 50\% (13415) & 100\% (26726) \\
\cline{2-6}
& Num of infeasible cases & \textcolor{blue}{20} & \textcolor{blue}{20} & \textcolor{blue}{20} & \textcolor{red}{341}\\
\hline
\end{tabular}
\caption{CAV metrics under event-triggered (see Section III.A) and time-driven control.}
\label{Table event}
\end{table*}
\begin{table*}\scriptsize
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\cline{1-8}
&Item & \multicolumn{4}{|c|}{Self-Triggered} & Time-driven & Time-driven\\
& & \multicolumn{4}{|c|}{} & Modified CBF & \\
\cline{2-8}
& $T_{\max}$ & $0.5$ & $1$ & $1.5$ & $2$ & $T_s=T_d = 0.05$ & $T_s = 0.05$\\
\hline
\multirow{5}{*}{{$\alpha=0.1$ }} & Ave. Travel time & 19.5 & 19.48 & 19.48& 19.49& 19.5 &19.42\\
\cline{2-8}
& Ave. $\frac{1}{2} u^2$ & 4.27 & 5.00 & 5.93 & 7.2 & 3.37 & 3.18\\
\cline{2-8}
& Ave. Fuel consumption & 31.86 & 32.21 & 32.64 & 33.23 & 31.32 &31.61\\
\cline{2-8}
& Computation load (Num of QPs solved)& 20.46\% (7252)& 11.9\% (4218) & 10.87\% (3854) &10.32\%(3658)& 100.5\% (35636) &100\% \\
\cline{2-8}
& Num of infeasible cases & \textcolor{blue}{42} & \textcolor{blue}{42} & \textcolor{blue}{43} & \textcolor{blue}{32}& \textcolor{red}{190} & \textcolor{red}{315}\\
\hline
\multirow{5}{*}{{$\alpha=0.25$ }} & Ave. Travel time & 15.57 & 15.56 & 15.57 &15.62 &15.58&15.44\\
\cline{2-8}
& Ave. $\frac{1}{2} u^2$& 14.33 & 15.10 & 15.68 & 16.68 & 13.38 &13.34\\
\cline{2-8}
& Ave. Fuel consumption & 54.45 & 53.51 & 52.57 & 52.94 & 54.17 &55.81 \\
\cline{2-8}
&Computation load (Num of QPs solved) &19.5\% (5495) & 13.68\% (3857) & 12.34\% (3479) & 12.72\% (3588) & 100.9\% (28461) & 100\%(28200)\\
\cline{2-8}
& Num of infeasible cases & \textcolor{blue}{27} & \textcolor{blue}{27} & \textcolor{blue}{28} &\textcolor{blue}{24} & \textcolor{red}{249} &\textcolor{red}{341} \\
\hline
\multirow{5}{*}{{$\alpha=0.4$ }} & Ave. Travel time & 15.15 & 15.15 & 15.18 &15.2& 15.16 &15.01 \\
\cline{2-8}
& Ave. $\frac{1}{2} u^2$& 18.5 & 19.32 & 19.73 & 20.36 & 17.64 & 17.67\\
\cline{2-8}
& Ave. Fuel consumption & 55.23 & 53.35 & 52.67 & 52.95& 54.93 &56.5\\
\cline{2-8}
& Computation load (Num of QPs solved) & 20.4\% (5591) & 14.85\% (4071) &13.69\% (3754) & 13.60\% (3727) & 101.0 \% (27695) &100\% (27412)\\
\cline{2-8}
& Num of infeasible cases & \textcolor{blue}{25} &\textcolor{blue}{25}& \textcolor{blue}{25} & \textcolor{blue}{20} & \textcolor{red}{220} &\textcolor{red}{321} \\
\hline
\multirow{5}{*}{{$\alpha=0.5$}} & Ave. Travel time & 14.79 & 14.79 & 14.82 & 14.89& 14.8 &14.63 \\
\cline{2-8}
& Ave. $\frac{1}{2} u^2$ &25.5 & 25.84 & 26.43 & 27.5 &24.86 &25.08\\
\cline{2-8}
& Ave. Fuel consumption & 55.5 & 53.15 & 52.9 & 53.45 & 55.5 &56.93 \\
\cline{2-8}
&Computation load (Num of QPs solved) & 21.8\% (5841) &16.7\% (4322) & 15.09\% (4034) & 15.17\% (4054) & 101.1\% (27033) & 100\%(26726) \\
\cline{2-8}
& Num of infeasible cases &\textcolor{blue}{19} &\textcolor{blue}{20} & \textcolor{blue}{20}& \textcolor{blue}{20} & \textcolor{red}{250} & \textcolor{red}{341}\\
\hline
\end{tabular}
\caption{CAV metrics under self-triggered (see Section III.B) and time-driven control.}
\label{Table self}
\end{table*}
We can also visualize the results presented in Tables \ref{Table event} and \ref{Table self} by showing the variation of the average objective function in \eqref{eqn:energyobja_m} with respect to $\alpha$ for different choices of $[s_x,s_v]$ and $T_{\max}$, respectively. As seen in Figs. \ref{fig:Objective function comparison} and \ref{fig:Objective function comparison self}, selecting higher values for the bounds in the event-triggered scheme and for $T_{\max}$ in the self-triggered scheme (i.e., being more conservative) leads to higher objective function values, while the lowest cost (best performance) is attained under time-driven control.
\begin{figure}
\centering
\includegraphics[scale=0.38]{ave_obj_function.pdf} \caption{Average objective function value with respect to $\alpha$ (time weight relative to energy in \eqref{eqn:energyobja_m}) for different selections of the bounds in the event-triggered approach (see Section III.A).}
\label{fig:Objective function comparison}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.38]{ave_obj_function_self_triggered.pdf} \caption{Average objective function value with respect to $\alpha$ (time weight relative to energy in \eqref{eqn:energyobja_m}) for different selections of $T_{\max}$ in the self-triggered approach (see Section III.B).}
\label{fig:Objective function comparison self}
\end{figure}
{\bf Constraint violation}.
It is worth noting that an ``infeasible'' QP does not necessarily imply a constraint violation, since violating a CBF constraint does not always imply the violation of an original constraint in \eqref{Safety}, \eqref{SafeMerging}, and \eqref{VehicleConstraints1}. This is due to the conservative nature of a CBF whose intent is to
\emph{guarantee} the satisfaction of our original constraints.
In order to explicitly show how an infeasible case may lead to a constraint violation and how this can be alleviated by the event-triggered and self-triggered schemes, we simulated 12 CAVs in the merging framework of Fig. \ref{fig:merging} with the exact same parameter settings as before, with $S=[0.5,1.5]$ in the event-triggered scheme, $T_{\max}=1$ in the self-triggered scheme, and $\beta = 5$. Figure \ref{fig:rear_end} shows the values of the rear-end safety constraint over time. One can see that the satisfaction of safety constraints is always guaranteed under the event-triggered and self-triggered approaches, as there is no infeasible case and the value of the constraint $b_1(\textbf{x}(t))$ stays well above zero. In contrast, we see a clear violation of the constraint in the time-driven scheme for CAV 8, depicted by the blue line.
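To illustrate what is being monitored, the sketch below checks the rear-end margin along a trajectory; it assumes the constraint takes the speed-dependent form $x_{i_p}(t)-x_i(t)-\varphi v_i(t)-\delta \geq 0$ consistent with the parameters above (the exact form is given in \eqref{Safety}):
\begin{verbatim}
# Rear-end safety margin b_1(x) = x_ip - x_i - phi*v_i - delta (assumed
# form); phi = 1.8 s and delta = 0 m as in the simulation parameters.
def rear_end_margin(x_ip, x_i, v_i, phi=1.8, delta=0.0):
    return x_ip - x_i - phi * v_i - delta

def has_violation(xs_ip, xs_i, vs_i):
    # Flag any time step at which the margin dips below zero.
    return any(rear_end_margin(xp, xi, vi) < 0.0
               for xp, xi, vi in zip(xs_ip, xs_i, vs_i))
\end{verbatim}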
{\bf Robustness}.
We have investigated the robustness of both schemes with respect to different forms of uncertainty, such as modeling and computational errors, by adding two noise terms to the vehicle dynamics: $\dot{x}_{i}(t) = v_{i}(t)+w_1(t)$, $\dot{v}_{i}(t) = u_{i}(t)+w_2(t)$,
where $w_1(t),w_2(t)$ denote two random processes defined in an appropriate probability space which, in our simulation, are set to be uniformly distributed over $[-2,2]$ and $[-0.2,0.2]$, respectively. We repeated the prior simulation experiment with this added noise, with the results shown in Figs. \ref{fig:rear_end_noisy} and \ref{fig:Lateral_noisy}. We can see that the event-triggered and self-triggered schemes, which exhibit nearly identical performance, keep the constraint functions well away from the unsafe region (below $0$) owing to their conservativeness, in contrast to the time-driven approach, where we observe constraint violations due to noise, e.g., CAV 8 in Fig. \ref{fig:rear_end_noisy} and CAVs 3, 4, and 9 in Fig. \ref{fig:Lateral_noisy}.
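A minimal sketch of one noisy integration step is shown below (a forward-Euler stand-in for the \textsc{ode45} integration actually used):
\begin{verbatim}
import random

# One Euler step of the perturbed dynamics x_dot = v + w1, v_dot = u + w2,
# with w1 ~ U[-2, 2] and w2 ~ U[-0.2, 0.2] as in the robustness test.
def noisy_step(x, v, u, dt=0.05):
    w1 = random.uniform(-2.0, 2.0)
    w2 = random.uniform(-0.2, 0.2)
    return x + (v + w1) * dt, v + (u + w2) * dt
\end{verbatim}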
\begin{figure}[t]
\centering
\includegraphics[scale=0.42]{Fig_rear_end_without_noise.pdf} \caption{The variation of rear-end safety constraints for the time-driven, event-triggered and self-triggered approaches.}
\label{fig:rear_end}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{Fig_rear_end_noisy.pdf} \caption{The variation of rear-end safety constraints for the time-driven, event-triggered, and self-triggered approaches in the presence of noise. Note the constraint violation under time-driven control.}%
\label{fig:rear_end_noisy}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{Fig_lateral_noisy.pdf} \caption{The variation of safe merging constraints for the time-driven, event-triggered and self-triggered approaches in the presence of noise. Note the constraint violations under time-driven control.}%
\label{fig:Lateral_noisy}
\end{figure}
\section{CONCLUSIONS}
The problem of controlling CAVs in conflict areas of a traffic network subject to hard safety constraints can be solved through a combination of tractable optimal control problems and the use of CBFs. These solutions can be derived by discretizing time and solving a sequence of QPs. However, the feasibility of each QP cannot be guaranteed over every time step. When this is due to the lack of a sufficiently high control update rate, we have shown that this problem can be alleviated through either an event-triggered scheme or a self-triggered scheme, while at the same time reducing the need for communication among CAVs, thus lowering computational costs and the chance of security threats.
Ongoing work is targeted at eliminating all possible infeasibilities through the use of sufficient conditions based on the work in \cite{XIAO2022inf} added to the QPs, leading to complete solutions of CAV control problems with full safety constraint guarantees.
\label{sec:conclude}
\section{Introduction}
Grounding natural language in fine-grained image regions is essential for a broad variety of vision-language tasks, such as robotic navigation~\citep{tellex2011understanding,Anderson_2018_CVPR}, visual question answering~\citep{antol2015vqa,anderson2018bottom}, visual dialogue~\citep{das2017visual}, and visual commonsense reasoning~\citep{zellers2019recognition}. Recently, Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in visual grounding. Typically, generic cross-modal representations are first pre-trained on large-scale image-caption data in a self-supervised fashion, and then fine-tuned to adapt to downstream tasks~\citep{lu2019vilbert,su2019vl,li2020oscar,radford2021learning}. This \textit{pre-training-then-fine-tuning} paradigm of VL-PTMs has greatly pushed forward the state-of-the-art of many cross-modal tasks.
Despite the success, we note that there exists a significant gap between the objective forms of pre-training and fine-tuning of VL-PTMs. As illustrated in Figure~\ref{fig:framework}, during pre-training, most VL-PTMs are optimized based on the masked language modeling objective, trying to recover the masked token from the cross-modal context. However, during fine-tuning, downstream tasks are usually conducted by classifying unmasked token representations into semantic labels, where task-specific parameters are typically introduced. The gap hinders the effective adaptation of VL-PTMs to downstream tasks. As a result, a large amount of labeled data is typically required to stimulate the visual grounding capabilities of VL-PTMs for downstream tasks.
In this work, inspired by recent progress in pre-trained language models in natural language processing~\citep{brown2020language,schick-schutze-2021-just,liu2021pre}, we present Cross-modal Prompt Tuning (\colorfulmodelname, alternatively, Colorful Prompt Tuning), a novel paradigm for tuning VL-PTMs. The key insight is that by adding color-based co-referential markers in both image and text, visual grounding can be reformulated into a fill-in-the-blank problem, maximally mitigating the gap between pre-training and fine-tuning. As shown in Figure~\ref{fig:framework}, to ground natural language expressions in image data, \modelname consists of two components: (1) a \textit{visual sub-prompt} that uniquely marks image regions with colored blocks or segmentation masks, and (2) a \textit{textual sub-prompt} that puts the query text into a color-based query template. Explicit grounding to the target image region can then be achieved by recovering the corresponding color text from the masked token in the query template. In addition, we present a principled method to search for high-quality cross-modal prompt configurations (i.e., visual appearances and texts of colors) for CPT.
By mitigating the gap from pre-training, CPT enables strong few-shot and even zero-shot visual grounding capabilities of VL-PTMs. Experimental results show that the prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin. For example, using colored blocks as visual sub-prompts, CPT achieves $17.3\%$ absolute accuracy improvement, and $73.8\%$ relative standard deviation reduction on average with one shot in RefCOCO evaluation. In the same setting, when equipped with colored segmentation masks as visual sub-prompts, CPT further achieves $20.0\%$ absolute accuracy improvement, and $76.2\%$ relative standard deviation reduction compared with the vanilla fine-tuning approach. In addition to position-output tasks such as visual grounding, we show that CPT can also achieve strong zero- and few-shot performance on position-input tasks such as visual relation detection.
Our contributions are summarized as threefold: (1) We present a novel cross-modal prompt tuning paradigm for VL-PTMs. To the best of our knowledge, this is the first attempt in both cross-modal prompt tuning for VL-PTMs, and zero- and few-shot visual grounding independent of object types. (2) We present a principled approach to search for high-quality cross-modal prompt configurations for CPT. (3) We conduct comprehensive experiments which demonstrate the effectiveness of CPT.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{iclr2022/figures/framework.pdf}
\caption{Illustration of (a) pre-training for VL-PTMs with masked language modeling (MLM) head, (b) vanilla fine-tuning with new classification (CLS) head, and (c) our colorful cross-modal prompt tuning (\modelname) framework that reformulates visual grounding into a fill-in-the-blank problem with reused MLM head. Only square parts of relevant image regions are shown for illustration. }
\label{fig:framework}
\end{figure*}
\section{Preliminary}
In the literature, visual grounding is typically formulated as a referring expression comprehension (REC) problem~\citep{plummer2015flickr30k,mao2016generation}. Given an image $I$ and a query text of referring expression $q$, REC aims to locate the target region in $I$ that corresponds to $q$. In this section, we introduce the vanilla fine-tuning approach for VL-PTMs.
A common practice for REC is to first detect a set of region proposals $\{v_1, v_2, \dots, v_n\}$ via object detectors, and then classify or rank the proposals to select the target region~\citep{lu2019vilbert,chen2020uniter}. Specifically, visual and textual inputs are first transformed into a sequence of input tokens $\{[\texttt{IMG}], v_1, v_2, \dots, v_n, [\texttt{CLS}], w_1, w_2, \dots, w_m, [\texttt{SEP}]\}$, where $\{w_1, w_2, \dots, w_m\}$ are textual tokens of $q$, and $[\texttt{IMG}]$, $[\texttt{CLS}]$ and $[\texttt{SEP}]$ are special tokens. To obtain input representations, the features of image regions are extracted by visual encoders, and the embeddings of textual and special tokens are obtained by a lookup table. Then input representations are fed into the pre-trained transformers to produce the hidden representations $\{\mathbf{h}_{\texttt{[IMG]}}, \mathbf{h}_v^1, \mathbf{h}_v^2, \dots, \mathbf{h}_v^n, \mathbf{h}_{\texttt{[CLS]}}, \mathbf{h}_w^1, \mathbf{h}_w^2, \dots, \mathbf{h}_w^m, \mathbf{h}_{\texttt{[SEP]}}\}$. Finally, the hidden representation of the target region is optimized against negative ones via a classification or ranking loss, where new task-specific parameters are introduced. As a result, fine-tuned VL-PTMs need a large amount of labeled instances to stimulate the visual grounding capability.
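Schematically, the newly introduced task-specific head can be sketched as follows in PyTorch (an illustrative sketch, not the exact implementation of any particular VL-PTM):
\begin{verbatim}
import torch.nn as nn

class GroundingHead(nn.Module):
    """Scores each region proposal; all parameters are newly initialized."""
    def __init__(self, hidden_size):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, region_hidden):  # (n_regions, hidden_size)
        # One grounding score per region, trained with a classification
        # or ranking loss against the ground-truth region.
        return self.scorer(region_hidden).squeeze(-1)
\end{verbatim}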
\section{Cross-modal Prompt Tuning (\modelname)}
\label{sec:cpt method}
In this section, we introduce the framework of \modelname, and how to apply \modelname to zero-shot, few-shot and fully supervised visual grounding.
\subsection{Overview}
The key to visual grounding is to establish fine-grained connections between image regions and textual expressions. Therefore, a good cross-modal prompt tuning framework should take full advantage of co-referential signals from both image and text, and maximally mitigate the gap between pre-training and tuning. To this end, \modelname reformulates visual grounding into a fill-in-the-blank problem, as shown in Figure~\ref{fig:framework}.
Specifically, the \modelname framework consists of two components: (1) a \textit{visual sub-prompt} that uniquely marks the image regions with colored blocks or segmentation masks, and (2) a \textit{textual sub-prompt} that puts the query text into a color-based query template. Equipped with \modelname, it is then straightforward for VL-PTMs to ground the query text by filling the masked token with the color text of the target image region, where the objective form is identical to pre-training.
\subsection{Visual Sub-prompt}
\label{sec:visual sub-prompt}
Given an image $I$ and its region proposals $\mathcal{R}=\{v_1, v_2, \dots, v_n\}$, the visual sub-prompt aims to uniquely mark the image regions with natural visual markers. Interestingly, we note that colored bounding boxes are widely used to uniquely mark objects in images \textit{for visualization} in the literature. Inspired by this, we bridge the image regions and query text through a set of colors $\mathcal{C}$, where each color $c_i = (c_{v}^i, c_{w}^i) \in \mathcal{C}$ is defined by its visual appearance $c_{v}^i$ (e.g., RGB ($255$, $0$, $0$)) and color text $c_{w}^i$ (e.g., \textit{red}). Then we mark each region proposal $v_i$ in the image with a unique color $c_v^i$ for grounding, resulting in a set of colored image proposals $\Psi(\mathcal{R}; \mathcal{C})$, where $\Psi(\cdot)$ denotes the visual sub-prompt.
As for the shape of the visual sub-prompt, in principle, there are multiple plausible choices to mark the regions with colors, including colored bounding boxes, solid blocks, or solid object segmentation masks. In our experiments, we find that coloring the object with solid blocks and segmentation masks yields better results than bounding boxes, since solid colors that fit the outlines of objects are more common in real-world images (e.g., \textit{red shirt} and \textit{blue car}). Note that the addition of visual sub-prompt to the raw image does not change the architecture or parameters of VL-PTMs.
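As an illustration, the following is a minimal sketch of marking regions with semi-transparent colored blocks using PIL; the transparency $\alpha$ is the hyperparameter discussed in Section~\ref{sec:training and inference}:
\begin{verbatim}
from PIL import Image, ImageDraw

# Overlay each region proposal with a solid colored block of transparency
# alpha; regions are (x0, y0, x1, y1) boxes and colors are RGB tuples.
def add_visual_subprompt(image, regions, colors, alpha=0.5):
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    for box, rgb in zip(regions, colors):
        draw.rectangle(box, fill=rgb + (int(255 * alpha),))
    return Image.alpha_composite(image.convert("RGBA"), overlay)
\end{verbatim}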
\subsection{Textual Sub-prompt}
Textual sub-prompt aims to prompt VL-PTMs to establish the connections between the query text and image regions marked by visual sub-prompt. Specifically, the query text $q$ (e.g., ``\textit{the horse watched by the woman}'') is transformed into a fill-in-the-blank query using a template $\mathcal{T}_g(\cdot)$ as:
\vspace{0.7em}
\centerline{$\mathcal{T}_g(q) =$ \texttt{[CLS]} $q$ is in \texttt{[MASK]} color \texttt{[SEP]}}
In this way, VL-PTMs are prompted to decide the color of which region is more appropriate to fill in the mask (e.g., \textit{red} or \textit{blue}) as follows:
\begin{equation}
\small
\label{eq:predict}
P(v={v_i}|\mathcal{R}, q) = P(\texttt{[MASK]} = c_w^i| \Psi(\mathcal{R}; \mathcal{C}), \mathcal{T}_g(q))
= \frac{\exp(\mathbf{h}_{\texttt{[MASK]}}^\top \mathbf{c}_{w}^i)}{\sum_{c_j \in \mathcal{C}}\exp(\mathbf{h}_{\texttt{[MASK]}}^\top \mathbf{c}^j_w)},
\end{equation}
where $v$ is the target region, $\mathbf{c}_w^i$ is the embedding of $c_w^i$ in the pre-trained MLM head. Note that the procedure does not introduce any new parameters, and also mitigates the gap between pre-training and tuning, and therefore improves the data efficiency for tuning VL-PTMs.
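The decoding step in Equation~\ref{eq:predict} amounts to a softmax over the color-text rows of the pre-trained MLM head, as the following sketch shows:
\begin{verbatim}
import torch

# h_mask: hidden state at [MASK], shape (hidden,); color_emb: the MLM-head
# embeddings c_w^i of the color texts, shape (n_colors, hidden).
def color_distribution(h_mask, color_emb):
    logits = color_emb @ h_mask           # h_[MASK]^T c_w^i for each color
    return torch.softmax(logits, dim=-1)  # P([MASK] = c_w^i | prompt)
\end{verbatim}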
\subsection{Training and Inference}
\label{sec:training and inference}
Equipped with \modelname, VL-PTMs can readily perform zero-shot visual grounding without any labeled data, since the cross-modal representations of colors and their composition with other concepts (e.g., objects, attributes and relations) have been well learned by VL-PTMs during pre-training. When a few or full labeled instances are available, VL-PTMs can be further tuned by \modelname using the entropy-based objective: $\mathcal{L} = -\sum_{(\mathcal{R}, q, v^\star)\in \mathcal{D}_\text{train}} \log P(v^\star|\mathcal{R}, q)$, where $\mathcal{D}_\text{train}$ is the training set.
Although it is appealing to bridge the image and text through a color-based prompt, we identify two key challenges in its design: (1) how to determine the configurations of the color set $\mathcal{C}$, and (2) how to deal with the large number of image regions with limited pre-trained colors.
\noindent
\textbf{Cross-Modal Prompt Search.} Previous works in textual prompt tuning show that prompt configurations (e.g., textual templates) have a significant influence on the performance~\citep{jiang2020can}. In this work, we make the first investigation in searching the cross-modal prompt configuration (i.e., the color set $\mathcal{C}$). Intuitively, $\mathcal{C}$ should consist of colors to which VL-PTMs are the most sensitive. To obtain a color $c_i = (c_{v}^i, c_{w}^i)$, a naive approach is to adopt the most frequent color text in the pre-training text as $c_{w}^i$, and its standard RGB as $c_{v}^i$ (e.g., $c_i=((255, 0, 0), \textit{red})$). However, this solution is sub-optimal, since it determines the color text without considering its visual appearance, and the visual appearance of a color in real-world images often differs from its standard RGB.
To address the challenge, we present a principled cross-modal prompt search (CPS) algorithm for CPT, which jointly considers visual and textual semantics in real-world cross-modal data. Specifically, we first identify a candidate set of color texts $\hat{\mathcal{C}_w}$ and visual appearances $\hat{\mathcal{C}_v}$. For each visual appearance candidate $\hat{c_v} \in \hat{\mathcal{C}_v}$, we feed into VL-PTMs a pseudo-data instance consisting of a pure colored block of $\hat{c_v}$ and a text: ``\texttt{[CLS]} a photo in \texttt{[MASK]} color \texttt{[SEP]}''. Then we compute the decoding score $s(\hat{c_v}, \hat{c_w})$ for each color text candidate $\hat{c_w} \in \hat{\mathcal{C}_w}$ as in Equation~\ref{eq:predict}, where a larger decoding score indicates higher correlation between $\hat{c_v}$ and $\hat{c_w}$. To select the color texts to which VL-PTMs are most sensitive, we retain the color texts that achieve the largest decoding scores for the visual appearance candidates: $\mathcal{C}_w =\{c_w|c_w=\argmax_{\hat{c}_w^j \in \hat{\mathcal{C}}_w} s(\hat{c}_v^i, \hat{c}_w^j), \hat{c}_v^i \in \hat{\mathcal{C}}_v\}$. Similarly, we can obtain the visual appearances according to the largest decoding scores, resulting in the color set: $\mathcal{C} =\{(c_v, c_w)|c_v= \argmax_{\hat{c}_v^i \in \hat{\mathcal{C}}_v} s(\hat{c}_v^i, c_w^j), c_w^j \in \mathcal{C}_w\}$. We refer readers to Section~\ref{sec:pseudo code} for the pseudo-code of the algorithm. In experiments, we find that the resultant colors yield better results than the naive ones. To make the raw content of the colored image regions available to VL-PTMs, a transparency hyperparameter $\alpha \in (0, 1)$ is further applied to color visual appearances in practice.
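The search procedure can be summarized by the Python sketch below, where \texttt{decode\_score(cv, cw)} is an assumed helper that feeds a pure block of visual appearance \texttt{cv} together with the probe text into the VL-PTM and returns the decoding score of color text \texttt{cw}:
\begin{verbatim}
def cross_modal_prompt_search(cand_visual, cand_text, decode_score):
    # Keep the color texts that win the decoding for some visual candidate.
    texts = {max(cand_text, key=lambda cw: decode_score(cv, cw))
             for cv in cand_visual}
    # For each retained text, pick the visual appearance it decodes best.
    return [(max(cand_visual, key=lambda cv: decode_score(cv, cw)), cw)
            for cw in sorted(texts)]
\end{verbatim}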
\noindent
\textbf{Image Region Batching.} In visual grounding, the number of region proposals in an image usually exceeds the size of $\mathcal{C}$ ($\sim10$). Besides, we observe that heavily overlapped colored blocks can hinder visual grounding. Therefore, we divide the image regions into batches, where each batch contains a handful of moderately overlapping image regions, and mark each batch with a visual sub-prompt respectively. To handle the batches that do not contain the target region, we further introduce a new candidate text \textit{none} in the decoding vocabulary, to indicate that there is no target region in the batch.
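A greedy sketch of this batching step is given below; the batch size and overlap threshold are assumed implementation details rather than values prescribed by the method:
\begin{verbatim}
def batch_regions(regions, iou, max_size=6, iou_thresh=0.5):
    # Greedily pack proposals so that no two regions in a batch
    # overlap too heavily (moderate pairwise IoU).
    batches = []
    for r in regions:
        for batch in batches:
            if len(batch) < max_size and \
               all(iou(r, q) < iou_thresh for q in batch):
                batch.append(r)
                break
        else:
            batches.append([r])
    return batches
\end{verbatim}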
\begin{figure*}[t]
\centering
\vspace{-0.6em}
\includegraphics[width=\textwidth]{iclr2022/figures/VRD_framework.pdf}
\caption{CPT for visual relation detection by filling-in-the-blank with reused MLM head.}
\label{fig:VRD_framework}
\vspace{-0.5em}
\end{figure*}
\subsection{CPT for Visual Relation Detection}
In the previous sections, we introduced CPT for visual grounding. In fact, CPT can also be easily adapted to other cross-modal tasks, such as visual relation detection. Given an object pair (including the categories and bounding boxes) in an image, visual relation detection aims to classify the relation into a relation set $\mathcal{P}$, providing structured image representations that can facilitate many cross-modal tasks~\citep{johnson2015image,DBLP:conf/nips/HudsonM19,shi2019explainable}. In the literature, since the ground-truth relations cannot be exhaustively annotated during evaluation, to avoid false negatives, previous works typically score the triplets and evaluate the recall of top-N triplets~\citep{xu2017scene,zellers2018neural,chen2019knowledge,tang2019learning}.
\textbf{Visual and Textual Sub-prompts.} As shown in Figure~\ref{fig:VRD_framework}, to perform visual relation detection, CPT first marks the image regions with visual sub-prompt as in Section~\ref{sec:visual sub-prompt}, and puts the object pair in the query template as follows:
\vspace{0.7em}
\centerline{$\mathcal{T}_r(s, o) =$ \texttt{[CLS]} The $s_w$ in $c_w^i$ color is \texttt{[MASK]} the $o_w$ in $c_w^j$ color \texttt{[SEP]}}
where $s_w$ is the subject text, $o_w$ is the object text, and $c_w^i$ and $c_w^j$ are the corresponding color texts. Then VL-PTMs are prompted to recover the relation texts from masked tokens in the template. To accommodate the varied number of tokens in relation texts (e.g., \textit{wearing}, \textit{walking on}, typically 1$\sim$3 tokens), we introduce a variable $l$ indicating the number of tokens in a relation text (e.g., $l = 2$ for \textit{walking on}). The template $\mathcal{T}(\cdot;l)$ will have $l$ consecutive masked tokens for relation prediction. For each template $\mathcal{T}(\cdot;l)$, we introduce a special \texttt{NA} relation consisting of $l$ tokens, which indicates that there is no relation between the entity pair under $\mathcal{T}(\cdot;l)$. Specifically, in our experiments, the \texttt{NA} relation is \textit{irrelevant}, \textit{no relation}, \textit{no relation with} for $l=1,2,3$ respectively.
\textbf{Training}. Given a relational triplet $(s, r, o)$, after decorating the input image regions and the object pair with visual and textual sub-prompts, VL-PTMs are optimized with the MLM loss to recover the relational tokens. Specifically, denote the number of tokens in $r$ as $|r|$. (1) For templates where $l = |r|$, models are asked to reconstruct the $i$th masked token in $\mathcal{T}(s, o; l)$ with the $i$th relational token $r_i$ using the MLM head. (2) For templates where $l \neq |r|$, since there is no relation between $(s, o)$ under $\mathcal{T}(s, o; l)$, models are asked to reconstruct the \texttt{NA} relation. For $(s, o)$ that do not have any relation in the image, models are asked to reconstruct the \texttt{NA} relation for all $\mathcal{T}(s, o;l)$.
\textbf{Inference}. During inference, given an object pair $(s, o)$, we score the relations based on their fitness to the prompt context. Specifically, the score of each relation $r \in \mathcal{P} \cup \{\texttt{NA}\}$ is obtained by the aggregated MLM scores of its composing tokens under the corresponding template: $s(r) = \frac{1}{l}\sum_{i=1}^l \log P(\texttt{[MASK]}_i = r_i | \mathcal{T}(s, o; l))$, where $l=|r|$. Intuitively, larger $s(r)$ indicates that the relation $r$ better fits the prompt context. Finally, the triplets $(s, r, o)$ are ranked according to the relation score $s(r)$, where $r \in \mathcal{P}$.
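In code, the inference-time scoring reduces to averaging per-token MLM log-probabilities, as in the sketch below; \texttt{mask\_logprob(i, tok, l)} is an assumed helper returning $\log P(\texttt{[MASK]}_i = \textit{tok})$ under the template with $l$ masks:
\begin{verbatim}
def relation_score(relation_tokens, mask_logprob):
    # s(r) = (1/l) * sum_i log P([MASK]_i = r_i | T(s, o; l)), l = |r|.
    l = len(relation_tokens)
    return sum(mask_logprob(i, tok, l)
               for i, tok in enumerate(relation_tokens)) / l

# Triplets (s, r, o) are then ranked by relation_score over r in P.
\end{verbatim}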
Compared with visual grounding that aims to locate image regions for ungrounded texts, visual relation detection represents a different series of cross-modal tasks that aim to perform semantic recognition based on grounded inputs, such as visual commonsense reasoning~\citep{zellers2019recognition}, object classification~\citep{zhao2017survey} and scene graph classification~\citep{xu2017scene}. In addition to better data efficiency, a crucial advantage of using \modelname is that the semantic labels can be produced from open-world vocabularies, instead of fixed label sets.
\section{Experiments}
In this section, we empirically evaluate \modelname in prompting VL-PTMs for visual grounding in different settings, including zero-shot, few-shot and fully supervised settings. We refer readers to Section~\ref{sec:implementation details} for the implementation details.
\subsection{Experimental Settings}
\label{sec:experiment settings}
We first introduce the experimental settings of the visual grounding task, including datasets, training settings, evaluation protocols and baseline models in our experiments.
\noindent
\textbf{Datasets.} Following previous works~\citep{rohrbach2016grounding,zhang2018grounding}, we adopt three widely used visual grounding datasets collected from MSCOCO images~\citep{lin2014microsoft}, including RefCOCO~\citep{yu2016modeling}, RefCOCO+~\citep{yu2016modeling} and RefCOCOg~\citep{mao2016generation}. We refer readers to Section~\ref{sec:dataset details} for more dataset details.
\begin{table*}[t]
\centering
\caption{Main results. Accuracies (\%) of grounding referring expressions in zero-shot, few-shot and fully supervised settings. We report mean and standard deviation performance over 5 random splits. ZS: zero-shot. Blk: colored block, Seg: colored segmentation mask.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{lcl ccc ccc cc}
\toprule
& \multirow{2}{*}{\textbf{Shot}} & \multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c}{\textbf{RefCOCO}} & \multicolumn{3}{c}{\textbf{RefCOCO+}} & \multicolumn{2}{c}{\textbf{RefCOCOg}} \\
\cmidrule(lr){4-6} \cmidrule(lr){7-9} \cmidrule(lr){10-11} & & & val & testA & testB & val & testA & testB & val & test \\
\midrule
\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{ZS}}} & \multicolumn{1}{|c|}{\multirow{3}{*}{0}} & \multicolumn{1}{l|}{Random}& $15.9 \pm 0.2$ & $19.4 \pm 0.6$ & $13.4 \pm 0.4$ & $16.1 \pm 0.1$ & $13.3 \pm 0.6$ & $20.0 \pm 0.2$ & $18.8 \pm \hspace{1.7mm} 0.4$ & $19.2 \pm \hspace{1.7mm} 0.3$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $26.9$ & $27.5$ & $27.4$ & $25.4$ & $25.0$ & $27.0$ & $32.1$ & $32.3$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{32.2}$ & $\textbf{36.1}$ & $\textbf{30.3}$ & $\textbf{31.9}$ & $\textbf{35.2}$ & $\textbf{28.8}$ & $\textbf{36.7}$ & $\textbf{36.5}$ \\
\midrule
\parbox[t]{2mm}{\multirow{15}{*}{\rotatebox[origin=c]{90}{Few-Shot}}} &\multicolumn{1}{|c|}{\multirow{3}{*}{1}} & \multicolumn{1}{l|}{Fine-tuning}& $16.5 \pm 4.9$ & $12.0 \pm 6.6$ & $23.5 \pm 5.7$ & $22.2 \pm 7.6$ & $20.6 \pm 9.3$ & $25.7 \pm 5.2$ & $26.9 \pm \hspace{1.7mm} 8.4$ & $26.9 \pm \hspace{1.7mm} 8.1$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $34.1 \pm 1.3$ & $37.7 \pm 1.7$ & $32.2 \pm 1.5$ & $35.9 \pm 4.1$ & $40.4 \pm 5.4$ & $32.2 \pm 2.6$ & $39.7 \pm \hspace{1.7mm} 3.4$ & $39.9 \pm \hspace{1.7mm} 3.0$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{37.2} \pm \textbf{0.9}$ & $\textbf{41.5} \pm \textbf{1.5}$ & $\textbf{33.2} \pm \textbf{1.7}$ & $\textbf{37.9} \pm \textbf{4.0}$ & $\textbf{42.3} \pm \textbf{5.9}$ &
$\textbf{33.9} \pm \textbf{2.4}$ & $\textbf{43.1} \pm \hspace{1.7mm} \textbf{2.9}$ & $\textbf{43.4} \pm \hspace{1.7mm} \textbf{3.1}$ \\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{3}{*}{2}} & \multicolumn{1}{l|}{Fine-tuning}& $22.5 \pm 4.5$ & $21.0 \pm 7.2$ & $25.9 \pm 4.7$ & $27.0 \pm 3.1$ & $27.8 \pm 4.2$ & $27.0 \pm 2.6$ & $28.4 \pm 12.0$ & $28.1 \pm 11.3$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $35.3 \pm 3.2$ & $39.6 \pm 3.0$ & $30.9 \pm 1.7$ & $33.3 \pm 3.6$ & $37.5 \pm 4.8$ & $30.3 \pm 2.5$ & $40.1 \pm \hspace{1.7mm} 5.1$ & $40.0 \pm \hspace{1.7mm} 4.7$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{39.8} \pm \textbf{1.7}$ & $\textbf{45.6} \pm \textbf{3.2}$ & $\textbf{33.9} \pm \textbf{0.4}$ & $\textbf{38.6} \pm \textbf{3.6}$ & $\textbf{44.5} \pm \textbf{4.5}$ &
$\textbf{32.8} \pm \textbf{3.8}$ & $\textbf{44.7} \pm \hspace{1.7mm} \textbf{5.1}$ & $\textbf{44.3} \pm \hspace{1.7mm} \textbf{4.8}$ \\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{3}{*}{4}} & \multicolumn{1}{l|}{Fine-tuning}& $29.1 \pm 5.0$ & $29.9 \pm 7.8$ & $29.8 \pm 5.3$ & $34.2 \pm 4.2$ & $37.7 \pm 5.2$ & $30.5 \pm 3.3$ & $34.0 \pm 13.1$ & $33.7 \pm 12.8$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $38.3 \pm 2.1$ & $43.6 \pm 3.3$ & $34.0 \pm 1.6$ & $38.8 \pm 3.8$ & $44.4 \pm 6.4$ & $33.5 \pm 1.5$ & $40.6 \pm \hspace{1.7mm} 7.9$ & $40.9 \pm \hspace{1.7mm} 7.9$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{40.7} \pm \textbf{3.2}$ & $\textbf{47.4} \pm \textbf{4.1}$ & $\textbf{35.3} \pm \textbf{1.8}$ & $\textbf{40.3} \pm \textbf{2.0}$ & $\textbf{46.5} \pm \textbf{3.1}$ &
$\textbf{34.5} \pm \textbf{1.5}$ & $\textbf{44.4} \pm \hspace{1.7mm} \textbf{6.9}$ & $\textbf{44.4} \pm \hspace{1.7mm} \textbf{6.9}$ \\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{3}{*}{8}} & \multicolumn{1}{l|}{Fine-tuning}& $34.6 \pm 4.8$ & $37.8 \pm 5.5$ & $31.4 \pm 5.1$ & $36.2 \pm 3.6$ & $40.1 \pm 4.6$ & $32.7 \pm 2.3$ & $40.6 \pm 11.2$ & $40.4 \pm 11.7$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $41.0 \pm 1.5$ & $43.9 \pm 1.7$ & $\textbf{35.8} \pm \textbf{2.2}$ & $39.3 \pm 1.5$ & $46.1 \pm 1.8$ & $33.2 \pm 1.3$ & $43.4 \pm \hspace{1.7mm} 6.5$ & $43.6 \pm \hspace{1.7mm} 6.4$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{41.3} \pm \textbf{2.6}$ & $\textbf{48.2} \pm \textbf{4.6}$ & $35.7 \pm 2.5$ & $\textbf{42.6} \pm \textbf{2.9}$ & $\textbf{49.3} \pm \textbf{4.7}$ &
$\textbf{35.4} \pm \textbf{1.0}$ & $\textbf{47.4} \pm \hspace{1.7mm} \textbf{3.5}$ & $\textbf{47.4} \pm \hspace{1.7mm} \textbf{3.5}$ \\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{3}{*}{16}} & \multicolumn{1}{l|}{Fine-tuning}& $39.8 \pm 4.2$ & $45.5 \pm 5.0$ & $34.9 \pm 3.0$ & $41.8 \pm 3.0$ & $47.3 \pm 3.1$ & $36.2 \pm 2.3$ & $47.5 \pm \hspace{1.7mm} 4.1$ & $47.8 \pm \hspace{1.7mm} 4.7$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $44.8 \pm 3.3$ & $51.4 \pm 4.1$ & $\textbf{38.2} \pm \textbf{2.3}$ & $41.5 \pm 1.3$ & $48.2 \pm 2.1$ & $34.7 \pm 0.9$ & $47.8 \pm \hspace{1.7mm} 2.1$ & $48.2 \pm \hspace{1.7mm} 2.8$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{45.3} \pm \textbf{1.8}$ & $\textbf{53.3} \pm \textbf{3.0}$ & $37.5 \pm 1.3$ & $\textbf{44.8} \pm \textbf{0.9}$ & $\textbf{52.5} \pm \textbf{1.2}$ &
$\textbf{36.6} \pm \textbf{1.2}$ & $\textbf{51.0} \pm \hspace{1.7mm} \textbf{2.6}$ & $\textbf{51.4} \pm \hspace{1.7mm} \textbf{2.8}$ \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{Fully Supervised}}} & \multicolumn{1}{|c|}{\multirow{9}{*}{$\left| \mathcal{D}_\text{train} \right|$}} & \multicolumn{1}{l|}{MAttNet}& $76.7$ & $81.1$ & $70.0$ & $65.3$ & $71.6$ & $56.0$ & $66.6$ & $67.3$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{VL-T5} & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $71.2$ & $71.3$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{ViLBERT} & $-$ & $-$ & $-$ & $72.3$ & $78.5$ & $62.6$ & $-$ & $-$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{VLBERT} & $-$ & $-$ & $-$ & $71.6$ & $77.7$ & $61.0$ & $-$ & $-$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{ERNIE-ViL} & $-$ & $-$ & $-$ & $74.0$ & $80.3$ & $64.7$ & $-$ & $-$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{UNITER} & $81.2$ & $86.5$ & $73.9$ & $\textbf{75.3}$ & $\textbf{81.3}$ & $\textbf{65.6}$ & $74.3$ & $74.5$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{Fine-tuning} & $81.8$ & $\textbf{87.2}$ & $74.3$ & $74.5$ & $80.8$ & $64.3$ & $74.6$ & $\textbf{75.7}$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $81.9$ & $87.1$ & $73.8$ & $74.4$ & $80.4$ & $64.1$ & $\textbf{75.3}$ & $75.3$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{\modelname-Seg} & $\textbf{81.9}$ & $86.4$ & $\textbf{74.4}$ & $73.9$ & $79.5$ & $64.4$ & $74.4$ & $75.2$ \\
\bottomrule
\end{tabular}}
\label{table:main results}
\end{table*}
\textbf{Training Settings.} We report experimental results of different training settings, including (1) zero-shot setting, where no training data is available, (2) few-shot setting, where $K$ training instances are available ($K=1,2,4,8,16$), and (3) fully supervised setting, where the full training set is available.
\textbf{Evaluation Protocols.} (1) Evaluation metrics. Following~\citet{zhang2018grounding,lu2019vilbert}, we adopt accuracy of the grounding results as the evaluation metrics. An expression is considered correctly grounded if the IoU of the top predicted region and the ground truth is greater than $0.5$. (2) Model validation. To better approximate the few-shot scenario where only a few labeled instances are available, inspired by~\citet{gao2021making}, we use a few-shot validation set (consisting of $16$ instances) for few-shot and zero-shot experiments, and use full validation set for fully supervised experiments. (3) Robust evaluation. Previous works have shown that model training on limited data can suffer from instability~\citep{dodge2020fine,gao2021making}. For a robust and comprehensive evaluation, we report mean results over $5$ random training set splits, as well as the standard deviation. For fair comparisons, the training and validation sets are identical for our baselines and \modelname.
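The grounding accuracy criterion relies on the standard box IoU, sketched below:
\begin{verbatim}
def box_iou(a, b):
    # Boxes are (x0, y0, x1, y1); returns intersection-over-union.
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
\end{verbatim}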
\textbf{Baselines.} We evaluate two variants of CPT, including CPT using colored blocks (CPT-Blk) and colored segmentation masks (CPT-Seg). We adopt the widely used VinVL~\citep{zhang2021vinvl} as the CPT backbone. We compare CPT with a series of strong baselines that utilize detected proposals, including vanilla fine-tuning of VinVL and other VL-PTMs (see Section~\ref{sec: baseline details} for more baseline details). For fair comparisons, we adopt the base size for all VL-PTMs. We refer readers to Section~\ref{sec:large result} for the results of large size VL-PTMs.
\subsection{Main Results}
\label{sec:main_result}
The main results are reported in Table~\ref{table:main results}, from which we observe that: (1) \modelname outperforms the random baseline and the strong fine-tuning baseline by a large margin in zero-shot and few-shot settings. For example, using colored blocks as visual sub-prompts, CPT achieves $17.3\%$ absolute accuracy improvement on average with one shot in RefCOCO evaluation. This indicates that \modelname can effectively improve sample efficiency in tuning VL-PTMs. (2)~Coloring objects with segmentation masks in visual sub-prompts (CPT-Seg) achieves even better results than blocks (CPT-Blk). The reason is that solid colors that fit the outlines of objects are more common in real-world images, making CPT-Seg more natural visual sub-prompts (despite requiring stronger annotation to train the segmentation tools). (3) Notably, \modelname achieves significantly smaller standard deviation than fine-tuning. For example, CPT-Blk achieves $73.8\%$ relative standard deviation reduction on average with one shot in RefCOCO evaluation. This shows that a coherent tuning approach from pre-training can lead to substantially more stable few-shot training, which is a crucial factor for evaluating few-shot learning models~\citep{gao2021making}. (4)~We note that \modelname-Blk slightly underperforms fine-tuning with $16$ shots in RefCOCO+ evaluation. The reason is that RefCOCO+ has more color-based expressions (e.g., \textit{the person in red shirt and blue hat}), which can disturb our color-based \modelname. However, this problem can be alleviated with more tuning instances in the fully supervised scenario, where models can learn to better distinguish colors in the query text and prompt template. (5)~\modelname models achieve comparable performance to strong fine-tuned VL-PTMs in the fully supervised settings. This shows that \modelname is a competitive tuning approach for VL-PTMs even in the fully supervised scenario. We note that \modelname-Blk slightly outperforms \modelname-Seg in the fully supervised setting, and we refer readers to Section~\ref{sec:analysis} for a detailed analysis. In summary, compared to the vanilla fine-tuning approach, \modelname achieves superior/comparable, and more stable performance in zero-shot, few-shot and fully supervised visual grounding.
\subsection{Influence of Colors in CPT's Visual Grounding}
In our analysis, we first investigate the influence of colors---the key ingredients---on the visual grounding performance of \modelname. Specifically, we compare colors obtained from the frequency-based baseline (\textbf{Freq}) (see Section \ref{sec:training and inference}) and our cross-modal prompt search method CPS (\textbf{Ours}) in two dimensions, including an overall evaluation of top-N colors and a zoom-in study of individual colors. Unless otherwise specified, all the following experiments are conducted based on CPT-Blk on the validation set of RefCOCO in the $0$-, $2$-, and $8$-shot settings.
\definecolor{std_red}{RGB}{255,0,0}
\definecolor{std_black}{RGB}{0,0,0}
\definecolor{std_blue}{RGB}{0,0,255}
\definecolor{std_green}{RGB}{0,255,0}
\definecolor{std_yellow}{RGB}{255,255,0}
\definecolor{std_brown}{RGB}{165,42,42}
\definecolor{cpt_red}{RGB}{240, 0, 30}
\definecolor{cpt_purple}{RGB}{155, 50, 210}
\definecolor{cpt_yellow}{RGB}{255, 255, 25}
\definecolor{cpt_blue}{RGB}{0, 10, 255}
\definecolor{cpt_pink}{RGB}{255, 170, 230}
\definecolor{cpt_green}{RGB}{0,255,0}
\newcommand{\sqboxs}{1.6ex}
\newcommand{\sqbox}[1]{\textcolor{#1}{\rule{\sqboxs}{\sqboxs}}}
\begin{table*}[h]
\centering
\scriptsize
\renewcommand\arraystretch{1.2}
\caption{Top-6 colors from the frequency-based baseline and our CPS. Visual appearances and color texts are reported. Best viewed in color.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{l|llllll}
\toprule
Model & Color \#1 & \hspace{-2mm}Color \#2 & \hspace{-2mm}Color \#3 & \hspace{-2mm}Color \#4 & \hspace{-2mm}Color \#5 & \hspace{-2mm}Color \#6 \\
\midrule
Freq & \sqbox{std_red} (255,\hspace{0.1mm}0,\hspace{0.1mm}0), \hspace{-0.2mm}red &
\hspace{-2mm}\sqbox{std_black} (0,\hspace{0.1mm}0,\hspace{0.1mm}0), \hspace{-0.2mm}black &
\hspace{-2mm}\sqbox{std_blue} (0,\hspace{0.1mm}0,\hspace{0.1mm}255), \hspace{-0.2mm}blue &
\hspace{-2mm}\sqbox{std_green} (0,\hspace{0.1mm}255,\hspace{0.1mm}0), \hspace{-0.2mm}green &
\hspace{-2mm}\sqbox{std_yellow} (255,\hspace{0.1mm}255,\hspace{0.1mm}0), \hspace{-0.2mm}yellow &
\hspace{-2mm}\sqbox{std_brown} (165,\hspace{0.1mm}42,\hspace{0.1mm}42), \hspace{-0.2mm}brown \\
Ours & \sqbox{cpt_red} (240,\hspace{0.1mm}0,\hspace{0.1mm}30), \hspace{-0.2mm}red
& \hspace{-2mm}\sqbox{cpt_purple} (155,\hspace{0.1mm}50,\hspace{0.1mm}210), \hspace{-0.2mm}purple
& \hspace{-2mm}\sqbox{cpt_yellow} (255,\hspace{0.1mm}255,\hspace{0.1mm}25), \hspace{-0.2mm}yellow
& \hspace{-2mm}\sqbox{cpt_blue} (0,\hspace{0.1mm}10,\hspace{0.1mm}255), \hspace{-0.2mm}blue
& \hspace{-2mm}\sqbox{cpt_pink} (255,\hspace{0.1mm}170,\hspace{0.1mm}230), \hspace{-0.2mm}pink
& \hspace{-2mm}\sqbox{cpt_green} (0,\hspace{0.1mm}255,\hspace{0.1mm}0), \hspace{-0.2mm}green
\\
\bottomrule
\end{tabular}
}
\label{tab:top colors}
\end{table*}
\textbf{Overall Evaluation of Top-N Colors.} We first show the top-6 colors recommended by each approach in Table~\ref{tab:top colors}. To evaluate the overall performance of the top colors from different models, we evaluate CPT equipped with each color from the top-6 colors respectively, and report the mean accuracy and standard deviation over different colors. From the experimental results in Figure~\ref{fig:overall}, we observe that the top colors produced by CPS achieve both higher mean accuracy and lower standard deviation than the baseline method in different shot-settings. The reason is that CPS jointly considers visual and textual semantics in searching cross-modal prompts, and therefore is able to effectively adjust and rank the colors for more accurate and stable visual grounding.
\begin{figure*}[!htb]
\centering
\subfloat[Overall performance.]{\includegraphics[width=0.24\textwidth]{iclr2022/figures/overall.pdf}\label{fig:overall}}
\subfloat[Zoom-in study of individual colors in different shots.]{\hspace{0.4em}\includegraphics[width=0.24\textwidth]{iclr2022/figures/0-shot.pdf}
\includegraphics[width=0.24\textwidth]{iclr2022/figures/2-shot.pdf}
\includegraphics[width=0.24\textwidth]{iclr2022/figures/8-shot.pdf}
\label{fig:individual}}
\caption{Results of utilizing different colors for visual grounding, including (a) an overall evaluation of top-6 colors from different models, and (b) a zoom-in study of aligned individual colors.}
\end{figure*}
\textbf{Zoom-In Study of Individual Colors.} To investigate the fine-grained influence of specific colors in CPT's visual grounding, we further perform a zoom-in study of individual colors. To align the colors for comparison, we merge the top-6 colors from the baseline and CPS, and remove the colors that are not included in the models' complete color sets (e.g., \textit{black} $\notin \mathcal{C}$ in CPS). We report the accuracies in Figure~\ref{fig:individual}, from which we observe that: (1)~The performance of different colors varies greatly in prompting VL-PTMs in the same shot-settings, and the optimal colors are different in different shot-settings. The results indicate the large influence of cross-modal prompt configurations, consistent with the findings from recent studies in textual prompt tuning~\citep{jiang2020can,gao2021making}. (2)~Colors produced by CPS achieve comparable or superior performance compared to the baseline on individual colors. The results show that given the color texts, CPS can properly adjust the color visual appearance (i.e., RGB) to improve the visual grounding performance. (3)~We note that in some cases, colors produced by CPS slightly underperform the baseline. We hypothesize that this is because CPS uses a single textual template to compute the decoding scores for color adjustment, which can be biased. The problem can potentially be addressed by ensembling templates as in \citet{qin2021learning}, which we leave for future work.
\subsection{Case Study}
\label{sec:case study}
To provide a more intuitive understanding of CPT, we conduct a case study on the validation set of RefCOCO in the 8-shot setting. From the results in Figure~\ref{fig:case}, we have the following observations: (1)~CPT enables VL-PTMs to distinguish target objects from distractors of the same type using only a few training instances, while the fine-tuning method struggles to succeed (Figure~\ref{fig:case1}). (2)~CPT can be distracted by hard candidates (e.g., objects of the same type as the target that require complex reasoning to identify), but will typically produce reasonable predictions. For example, in Figure~\ref{fig:case2}, CPT predicts a nearby \textit{apple} while the fine-tuning baseline predicts a \textit{bowl}. The reason is that CPT maximally reuses the pre-trained parameters of VL-PTMs, which can help prevent outrageous predictions that typically happen in few-shot fine-tuning. (3)~However, we find that CPT can be disturbed by colors in raw image regions and text. For example, it can be difficult for the model to identify a \textit{red bowl} when the candidate regions are colored by red blocks (Figure~\ref{fig:case3}).
\definecolor{proposal}{RGB}{0,170,170}
\definecolor{gt}{RGB}{253,49,253}
\definecolor{cpt}{RGB}{88,225,36}
\definecolor{ft}{RGB}{255,255,0}
\begin{figure*}[t]
\centering
\subfloat[Correctly predicted]{\includegraphics[width=0.3\textwidth]{iclr2022/figures/case1.pdf}\label{fig:case1}}\hspace{5mm}
\subfloat[Disturbed by objects of the same type, but still reasonable]{\includegraphics[width=0.3\textwidth]{iclr2022/figures/case2.pdf}\label{fig:case2}}\hspace{5mm}
\subfloat[Disturbed by colors in raw image regions and text]{\includegraphics[width=0.3\textwidth]{iclr2022/figures/case3.pdf}\label{fig:case3}}\hspace{5mm}
\caption{Case study. The bounding boxes given by image region proposals (olive), ground-truth annotation (pink), CPT (green), and fine-tuning baseline (yellow) are highlighted accordingly.}
\label{fig:case}
\end{figure*}
\begin{table*}[t]
\centering
\caption{Visual relation detection results on Visual Genome. ZS: zero-shot, FS: fully supervised. We report the mean and standard deviation performance over 2 random splits.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{lcl cccc cccc}
\toprule
& \multirow{2}{*}{\textbf{Shot}} & \multirow{2}{*}{\textbf{Model}} & \multicolumn{4}{c}{\textbf{Val}} & \multicolumn{4}{c}{\textbf{Test}} \\
\cmidrule(lr){4-7} \cmidrule(lr){8-11}
& & & R@50 & R@100 & mR@50 & mR@100 & R@50 & R@100 & mR@50 & mR@100 \\
\midrule
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{ZS}}} & \multicolumn{1}{|c|}{\multirow{2}{*}{0}} & \multicolumn{1}{l|}{Random} & \hspace{1.7mm}$1.6 \pm 0.2$ & \hspace{1.7mm}$1.8 \pm 0.2$ & \hspace{1.7mm}$1.1 \pm 0.2$ & \hspace{1.7mm}$1.3 \pm 0.1$ & \hspace{1.7mm}$1.5 \pm 0.0$ & \hspace{1.7mm}$1.8 \pm 0.1$ & \hspace{1.7mm}$1.2 \pm 0.1$ & \hspace{1.7mm}$1.6 \pm 0.1$\\
& \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk}& $\textbf{33.6}$ & $\textbf{34.7}$ & $\textbf{14.8}$ & $\textbf{15.5}$ & $\textbf{29.3}$ & $\textbf{30.5}$ & $\textbf{13.0}$ & $\textbf{14.5}$\\
\midrule
\parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Few-Shot}}} &\multicolumn{1}{|c|}{\multirow{2}{*}{1}} & \multicolumn{1}{l|}{Fine-tuning}& \hspace{1.7mm}$3.8 \pm 0.1$ & \hspace{1.7mm}$4.2 \pm 0.1$ & \hspace{1.7mm}$7.8 \pm 0.9$ & \hspace{1.7mm}$8.7 \pm 1.0$ & \hspace{1.7mm}$4.1 \pm 0.1$ & \hspace{1.7mm}$4.7 \pm 0.0$ & \hspace{1.7mm}$6.7 \pm 0.3$ & \hspace{1.7mm}$7.6 \pm 0.4$\\
& \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $\textbf{16.3} \pm \textbf{2.0}$ & $\textbf{17.5} \pm \textbf{2.3}$ & $\textbf{25.2} \pm \textbf{0.7}$ & $\textbf{27.4} \pm \textbf{0.8}$ & $\textbf{18.0} \pm \textbf{2.8}$ & $\textbf{20.0} \pm \textbf{3.0}$ & $\textbf{23.9} \pm \textbf{0.3}$ & $\textbf{26.3} \pm \textbf{0.3}$\\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{2}{*}{4}} & \multicolumn{1}{l|}{Fine-tuning}& \hspace{1.7mm}$7.1 \pm 1.9$ & \hspace{1.7mm}$7.6 \pm 2.0$ & $10.3 \pm 0.8$ & $11.7 \pm 0.8$ & \hspace{1.7mm}$7.3 \pm 1.5$ & \hspace{1.7mm}$7.9 \pm 1.7$ & $11.8 \pm 1.0$ & $13.2 \pm 0.9$\\
& \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $\textbf{14.4} \pm \textbf{0.4}$ & $\textbf{15.4} \pm \textbf{0.4}$ & $\textbf{30.4} \pm \textbf{1.5}$ & $\textbf{32.8} \pm \textbf{1.6}$ & $\textbf{17.7} \pm \textbf{0.6}$ & $\textbf{19.3} \pm \textbf{0.6}$ & $\textbf{28.5} \pm \textbf{1.5}$ & $\textbf{32.1} \pm \textbf{1.0}$\\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{2}{*}{16}} & \multicolumn{1}{l|}{Fine-tuning}& \hspace{1.7mm}$8.4 \pm 0.3$ & \hspace{1.7mm}$8.9 \pm 0.3$ & $20.7 \pm 0.6$ & $21.7 \pm 0.6$ & $10.4 \pm 0.7$ & $11.2 \pm 0.8$ & $19.7 \pm 0.1$ & $21.7 \pm 0.1$\\
& \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $\textbf{15.0} \pm \textbf{0.6}$ & $\textbf{16.0} \pm \textbf{0.8}$ & $\textbf{33.0} \pm \textbf{0.2}$ & $\textbf{35.4} \pm \textbf{0.6}$ & $\textbf{18.4} \pm \textbf{1.0}$ & $\textbf{20.0} \pm \textbf{1.1}$ & $\textbf{32.5} \pm \textbf{0.5}$ & $\textbf{36.1} \pm \textbf{0.6}$\\
\cmidrule(lr){2-11}
& \multicolumn{1}{|c|}{\multirow{2}{*}{32}} & \multicolumn{1}{l|}{Fine-tuning}& \hspace{1.7mm}$9.7 \pm 1.1$ & $10.2 \pm 1.1$ & $21.9 \pm 0.6$ & $22.9 \pm 0.2$ & $11.7 \pm 0.2$ & $12.4 \pm 0.3$ & $22.0 \pm 0.1$ & $24.1 \pm 0.0$\\
& \multicolumn{1}{|c|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\modelname-Blk} & $\textbf{17.2} \pm \textbf{0.4}$ & $\textbf{18.2} \pm \textbf{0.4}$ & $\textbf{34.6} \pm \textbf{0.2}$ & $\textbf{37.9} \pm \textbf{0.1}$ & $\textbf{20.8} \pm \textbf{0.1}$ & $\textbf{22.3} \pm \textbf{0.1}$ & $\textbf{34.0} \pm \textbf{0.1}$ & $\textbf{37.7} \pm \textbf{0.3}$\\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{FS}}} & \multicolumn{1}{|c|}{\multirow{4}{*}{$\left| \mathcal{D}_\text{train} \right|$}}
&\multicolumn{1}{l|}{Neural Motif} & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & $65.2$ & $67.0$ & $14.8$ & $ 16.1$ \\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{BGNN} & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & $59.2$ & $61.3$ & $30.4$ & $32.9$\\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{PCPL} & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & $50.8$ & $52.6$ & $35.2$ & $37.8$\\
& \multicolumn{1}{|c|}{\multirow{3}{*}{}} & \multicolumn{1}{l|}{DT2-ACBS}& \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & \hspace{1.7mm}- & $23.3$ & $25.6$ & $35.9$ & $39.7$\\
\bottomrule
\end{tabular}}
\label{table:SGG results}
\end{table*}
\subsection{Experiments on Visual Relation Detection}
To show the generalization capability of CPT, we further evaluate CPT on visual relation detection.
\textbf{Experimental Settings.} (1) Datasets. We adopt the popular Visual Genome dataset~\citep{krishna2017visual}, which contains $50$ visual relations. We refer readers to Section~\ref{sec:dataset details} for the dataset details. (2) Evaluation protocols. Following previous works~\citep{xu2017scene,chen2019knowledge}, we use recall@N (R@N) and mean recall@N (mR@N) as the evaluation metrics. During training, K labeled instances are provided for each relation. (3) Baselines. We adopt fine-tuning of VinVL as our most direct baseline model. Specifically, we feed the image regions and their categories into the model, and concatenate the visual hidden representations of the subject and object. Then the object pair representation is fed into a softmax classifier. All VL-PTMs are in base size. We also report the results of strong baselines that are tailored for the task, and are fully supervised with $315,642$ labeled triplets, including Neural Motif~\citep{zellers2018neural}, BGNN~\citep{li2021bipartite}, PCPL~\citep{yan2020pcpl} and DT2-ACBS~\citep{desai2021dt2}.
\textbf{Results.} From the results in Table~\ref{table:SGG results}, we observe that: (1) \modelname significantly outperforms the random baseline and the strong fine-tuning baseline in zero-shot and few-shot settings. For example, using $32$ shots, CPT achieves a strong mR@100 of $37.7\%$, outperforming fine-tuning by $13.6\%$ absolute points, and closely approaching the state-of-the-art fully supervised DT2-ACBS. This indicates that \modelname can improve sample efficiency in tuning VL-PTMs. (2) We note that while the macro performance of CPT monotonically increases as the shot number grows, the micro performance drops first in the 1- and 4-shot settings. This is due to the distribution gap between the balanced training set (i.e., $K$ shots for each relation) and the long-tail test set. Since the relations in the pre-training corpora also follow a long-tail distribution, CPT can achieve a high starting point for micro performance.
\section{Related Work}
\textbf{Pre-trained Vision-language Models.} Existing VL-PTMs can be roughly divided into three categories according to their pre-training objectives and architectures: (1)~\textit{Masked language modeling} based VL-PTMs are mainly pre-trained to recover the masked tokens~\citep{lu2019vilbert,su2019vl,tan2019lxmert,li2020oscar,yu2021ernie}; (2)~\textit{Auto-regressive language modeling} based VL-PTMs model image and text tokens with Transformer decoders auto-regressively~\citep{ramesh2021zero,wang2021simvlm}; (3)~\textit{Contrastive learning} based VL-PTMs are pre-trained to holistically match image-text pairs~\citep{radford2021learning,li-etal-2021-unimo}. Note that our Cross-modal Prompt Tuning (\modelname) framework is orthogonal to VL-PTM design. In this work, without loss of generality, we focus on prompting masked language modeling based VL-PTMs due to their prevalence and superior performance, while \modelname can also be applied to other VL-PTMs.
\textbf{Prompt Tuning for NLP.} Prompt tuning for pre-trained language models is a rapidly emerging field in NLP~\citep{raffel2019exploring,brown2020language,liu2021pre}. Originally designed for probing knowledge in pre-trained language models~\citep{petroni2019language}, prompt tuning has now been extended to handle a variety of NLP tasks, including language understanding~\citep{schick-schutze-2021-just,schick2021exploiting} and generation~\citep{li2021prefix}.
To facilitate prompt engineering, \citet{shin2020eliciting} propose to automatically generate prompt templates via gradient-based search.
Most related to our work are \citet{tsimpoukelli2021multimodal,zhou2021learning,wang2021simvlm} that present textual prompt tuning for VL-PTMs, achieving promising results on some vision-language tasks. However, similar to existing works in NLP, they focus on prompt engineering in text, keeping images untouched, and therefore can only perform holistic implicit visual grounding. In comparison, to the best of our knowledge, \modelname is the first cross-modal prompt tuning framework tailored for both image and text, and is capable of explicitly grounding natural language to fine-grained image regions.
\textbf{Visual Grounding.} There is a general consensus that visual grounding plays an essential role in solving vision-language tasks~\citep{karpathy2015deep,plummer2015flickr30k,goodfellow2016deep,krishna2017visual,lu2019vilbert}. \citet{mao2016generation} propose the referring expression comprehension task to explicitly evaluate the visual grounding capability. To address the task, most models learn to classify or rank image region candidates based on the expressions in a fully supervised fashion~\citep{mao2016generation,zhang2018grounding,lu2019vilbert,chen2020uniter}, requiring large amounts of costly human-annotated data. To alleviate reliance on human annotation, some works have investigated zero-/few-shot grounding of new object types~\citep{sadhu2019zero,blukis2020few}, whereas substantial amounts of training data are still needed for existing object types.
In comparison, we prompt general VL-PTMs for zero- and few-shot visual grounding in a reformulated fill-in-the-blank paradigm independent of specific object types.
\section{Conclusion and Future Work}
In this work, we present the first Cross-modal Prompt Tuning (\modelname) framework for VL-PTMs. To facilitate prompt engineering, we present a principled approach to search for cross-modal prompt configurations. Comprehensive experimental results demonstrate the effectiveness of \modelname on zero-shot, few-shot and fully supervised visual grounding. In the future, we plan to address the color disturbance and improve the computational efficiency of \modelname, and also investigate the effectiveness of \modelname on other vision-language tasks. As the first attempt in cross-modal prompt tuning, we propose a color-based framework as one of the possible prompt tuning solutions. We leave exploring other plausible prompt tuning approaches for VL-PTMs to future work.
\section{Ethics Statement}
In this section, we discuss the main ethical considerations of \modelname: (1)~Intellectual property protection. The codes and data adopted from previous works are granted for research-purpose usage. (2)~Privacy. The data adopted in this work (i.e., the pre-training data and tuning data) is created by human annotators for research purposes, and should not cause privacy issues. (3)~Potential problems. VL-PTMs may be biased towards some objects and attributes. There are increasing efforts to address the problem in the community~\citep{ross2021measuring,zhao2021calibrate}.
\section{Reproducibility Statement}
To maximize the reproducibility, we provide a clear description of the methodology in Section~\ref{sec:cpt method}, the pseudo-code of the model in Section~\ref{sec:pseudo code}, implementation details in Section~\ref{sec:implementation details}, and detailed data characteristics and evaluation protocols in Section~\ref{sec:experiment settings}. All the data and codes will be available to facilitate future research.
\section{Introduction}
In his general theory of relativity, Albert Einstein predicted the
existence of gravitational waves (GWs) \cite{einstein:1916} ---
ripples in the fabric of spacetime caused by asymmetries in
catastrophic and highly accelerated events throughout the cosmos.
Though confirmed indirectly by observations of the binary pulsar PSR
$1913 + 16$ \cite{taylor:1989}, GWs have not been directly detected by
the global network of first generation detectors, such as Initial LIGO
in the United States \cite{abramovici:1992,ligo:2009}, Virgo in Italy
\cite{caron:1997,virgo:2012}, GEO 600 in Germany \cite{grote:2010},
and TAMA 300 in Japan \cite{ando:2005}.
The second generation of LIGO detectors, Advanced
LIGO~\cite{harry:2010}, are currently under construction and will
likely operate as early as 2015~\cite{AdvDet}. Advanced
Virgo~\cite{AdvVirgo} should come on-line in 2016, while the Japanese
KAGRA~\cite{KAGRA} detector will join the world-wide network later in
the decade. The ten-fold improvement in sensitivity of these
detectors \cite{harry:2010, AdvDet, smith:2009}, along with coherent
analysis between observatories, will significantly improve the chances
of detecting GWs from an astrophysical event in the Milky Way and
neighbouring galaxies. It is therefore likely that direct detection
of GWs will occur in the near future.
Potential sources of GWs include the inspiral of compact binary star
systems (of neutron stars or black holes) followed by black hole
formation \cite{thorne:1987}, pulsars \cite{culter:2002}, rotating
core collapse supernovae (CCSN) followed by protoneutron star
formation \cite{ott:2004}, gamma-ray bursts \cite{meszaros:2006}, and
cosmic string cusps \cite{damour:2005}.
Rotating CCSN are of particular interest in this paper. Like
neutrinos, GWs are emitted deep in the core of a progenitor and
propagate through the universe mostly unobscured by astrophysical
objects between the source and a detector on Earth. GWs act like
messengers, providing primary observables about the multi-dimensional
core collapse dynamics and emission mechanisms. It is in this way
that GW astronomy will open a new set of eyes to view the universe,
complementing the conventional electromagnetic-type observations.
Coalescing binary star systems are the most promising source of
detectable GWs \cite{thorne:1987}, with an expected observation rate
that could be as large as a few hundred events per year for Advanced
LIGO \cite{AdvDet,abbott:2005}. In contrast, the expected rate of
CCSN in the Milky Way is around three per century \cite{adams:2013}.
It is of great importance that appropriate data analysis techniques
are in place so we do not miss an opportunity to detect these rare
CCSN events.
The Bayesian statistical framework has proven to be a powerful tool
for parameter estimation in astrophysical and cosmological settings
\cite{loredo:1992}. Bayesian data analysis was first introduced to
the GW community by Christensen and Meyer \cite{christensen:1998}.
Christensen and Meyer \cite{christensen:2001} then demonstrated the
usefulness of the Gibbs sampler \cite{gelman:2013, geman:1984} for
estimating five physical parameters of coalescing binary signals.
Christensen, Meyer, and Libson \cite{christensen:2004a} then went on
to show how a custom-built Metropolis-Hastings algorithm
\cite{gelman:2013, metropolis:1953, hastings:1970}, a generalization
of the Gibbs sampler, was a superior and more suited routine for
eventual implementation into the LIGO Scientific Collaboration (LSC)
algorithm library (LAL). Parameter estimation for compact binary
inspirals has subsequently become more sophisticated in recent years
(see for example \cite{roever:2006, roever:2007a, roever:2007b,
raymon:2009, vdsluys:2008, veitch:2010, CBC-param}). Markov chain
Monte Carlo (MCMC) routines for inferring the physical parameters of
pulsars have also been developed \cite{christensen:2004b,
umstatter:2004, clark:2007}.
Due to the analytical intractability and complex multi-dimensional
nature of rotating core collapse stellar events, a significant amount
of computational time must go into numerically simulating the
gravitational waveforms. Unlike binary inspiral events, one cannot
simply use template search methods for supernova burst events as it is
computationally impossible to cover the entire signal parameter space.
It is therefore important to find alternative parameter estimation
techniques.
Summerscales \etal \cite{summerscales:2008} utilized the maximum
entropy framework to deconvolve noisy data from multiple (coherent)
detectors, with the goal of extracting a CCSN GW signal. Inference on
amplitude and phase parameters was conducted using cross correlation
between the recovered waveform and the set of competing waveforms from
the Ott \etal \cite{ott:2004} catalogue. A match was defined as the
model with the maximum cross correlation to the recovered waveform.
Heng \cite{heng:2009} first proposed a principal component analysis
(PCA) approach to simplify the problem by reducing a given supernova
waveform catalogue space down to a small number of basis vectors.
R\"{o}ver \etal \cite{roever:2009} extended this approach and created
a novel Metropolis-within-Gibbs sampler \cite{gelman:2013} to
reconstruct test signals from the Dimmelmeier \etal catalogue
\cite{dimmelmeier:2008} in noisy data using a principal component
regression (PCR) model with random effects and unknown signal arrival
time. They then attempted to exploit the structure of the posterior
principal component (PC) coefficients with a simple $\chi^2$ measure
of distance to determine which catalogue waveform best matched the
injected test signal. Although the Bayesian reconstruction method
showed much promise, extraction of the underlying physical parameters
had limited success.
Logue \etal \cite{logue:2012} used nested sampling
\cite{skilling:2006} to compute Bayesian evidence for PCR models under
three competing supernova mechanisms --- neutrino, magnetorotational,
and acoustic mechanisms. Each supernova mechanism has a noticeably
distinct gravitational waveform morphology, and the method was
successful at correctly inferring a large majority of injected
signals. They found that for signals embedded in simulated Advanced
LIGO noise, the magnetorotational mechanism could be distinguished to
a distance of up to 10 kpc, and the neutrino and acoustic mechanisms
up to 2 kpc.
Abdikamalov \etal \cite{abdikamalov:2013} generated a new
CCSN waveform catalogue and applied matched
filtering \cite{turin:1960} to infer total angular momentum to within $\pm
20\%$ for rapidly rotating cores. Slowly rotating cores had errors
up to $\pm 35\%$. Along with matched filtering, they employed
the Bayesian model selection method presented in \cite{logue:2012} to
illustrate that under certain assumptions of the rotation law, the
next generation of GW detectors (Advanced LIGO, Advanced Virgo, and
KAGRA), could also extract information about the degree of precollapse
differential rotation. The two methods worked particularly well for
rapidly rotating cores.
In this paper we present an alternative approach to parameter
estimation for rotating CCSN. Using the Abdikamalov \etal catalogue
\cite{abdikamalov:2013}, we fit a Bayesian PCR model to reconstruct a
GW signal in noisy data. Initially, the signal arrival time is
assumed to be known, and PC coefficients are sampled directly from the
posterior distribution. We extend the model to incorporate an unknown
signal arrival time and construct a Metropolis-within-Gibbs MCMC
sampler (as done in \cite{roever:2009}).  We then regress the known
physical parameters on the posterior means of the PC coefficients
(using a linear model), and sample from the posterior predictive
distribution to make probabilistic statements about the ratio of
rotational kinetic energy to gravitational energy of the inner core at
bounce $\beta_{ic,b}$. We apply two supervised learning algorithms
--- na\"{i}ve Bayes classifier (NBC) and $k$-nearest neighbour
($k$-NN) --- to classify the closest level of precollapse differential
rotation $A$. We also introduce a constrained optimization approach
to model selection and attempt to find an optimal number of PCs for
the Bayesian PCR model.
The paper is organized as follows: in section 2 we describe the
simulated GW data catalogue used in our analysis;
section 3 introduces the statistical models and methods applied;
results of our analysis are presented in section 4; and a discussion
of our findings and future directions are provided in section 5.
\section{Gravitational wave data}
The waveforms used in this paper are the two-dimensional numerical
axisymmetric general-relativistic hydrodynamic rotating core collapse
and bounce supernova simulations generated by Abdikamalov \etal
\cite{abdikamalov:2013}. Based on findings that GW signals are
essentially independent of the progenitor zero age main sequence (ZAMS)
mass by Ott \etal \cite{ott:2012}, a single presupernova progenitor
model (the 12-$M_{\odot}$ at ZAMS solar-metallicity progenitor model
from \cite{woosley:2007}) was adopted. The cylindrical rotation law
from \cite{ott:2004} was also assumed.
The GW catalogue was partitioned into a base catalogue (BC), and a
test catalogue (TC). The BC contains $l = 92$ signals with five
levels of precollapse differential rotation $A$ (where higher values
of $A$ represent weaker differential rotation), a grid of values for
initial central angular velocity $\Omega_c$, and a grid of values for
the ratio of rotational kinetic energy to gravitational energy of the
inner core at bounce $\beta_{ic,b}$ (since $\beta_{ic,b}$ is a
function of $\Omega_c$ for a fixed progenitor structure). Each signal
in the BC was generated using the microphysical finite-temperature
Lattimer-Swesty (LS) equation of state (EOS) \cite{lattimer:1991},
parametrized deleptonization scheme from \cite{dimmelmeier:2008}, and
neutrino leakage scheme from \cite{ott:2012}. As well as varying $A$,
$\Omega_c$, and $\beta_{ic,b}$, the TC contains 47 signals with
differing EOS and deleptonization parametrizations $Y_e(\rho)$.
Specifically, some test signals were generated using the Shen \etal
EOS \cite{shen:1998}, or an increase/decrease in $Y_e(\rho)$
parametrization by $\sim 5\%$. The values of $\Omega_c$ and
$\beta_{ic,b}$ in the TC are in the same parameter space as those in
the BC, but with an alternative grid. The object of our analysis is to
predict the physical parameters ($\beta_{ic,b}$ and $A$) of the
signals in the TC using information gleaned about signals in the BC.
The signals were initially sampled at 100 kHz and subsequently
downsampled by a rational factor to 16384 Hz --- the sampling rate of
the Advanced LIGO detectors. Downsampling by a rational factor
essentially involved two steps: upsampling by an integer factor via
interpolation and then applying a low-pass filter to eliminate the
high frequency components necessary to avoid aliasing at lower
sampling rates; and downsampling by an integer factor to achieve the
desired sampling rate \cite{oppenheim:1999}. The resampled data was
zero-buffered to ensure each signal was the same length, $N = 16384$,
which corresponded to 1 s of data at the Advanced LIGO sampling rate.
Each signal was then aligned so that the first negative peak (not
necessarily the global minimum), corresponding to the time of core
bounce, occurred halfway through the time series.
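
As an illustration of this preprocessing chain, the resampling and
alignment could be sketched in Python as follows (a minimal sketch, not
catalogue code: SciPy's polyphase resampler stands in for the
upsample/low-pass/downsample chain, and the peak-finding rule and
buffer placement are assumptions based on the description above):
\begin{verbatim}
import numpy as np
from scipy.signal import resample_poly

N = 16384                      # 1 s of data at the Advanced LIGO rate

def prepare_signal(h):
    """Resample 100 kHz -> 16384 Hz, zero-buffer to length N, and
    centre the first negative peak (core bounce) at sample N/2."""
    # 16384/100000 reduces to 512/3125
    h_rs = resample_poly(h, up=512, down=3125)
    # first local minimum with negative strain, taken as the bounce
    i = np.arange(1, len(h_rs) - 1)
    is_min = (h_rs[i] < h_rs[i - 1]) & (h_rs[i] < h_rs[i + 1]) \
             & (h_rs[i] < 0)
    bounce = i[is_min][0]
    out = np.zeros(N)
    shift = N // 2 - bounce
    src, dst = max(0, -shift), max(0, shift)
    n = min(N - dst, len(h_rs) - src)
    out[dst:dst + n] = h_rs[src:src + n]
    return out
\end{verbatim}
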
In this analysis, the source of a GW emission is assumed to be
optimally oriented (perpendicular) to a single interferometer. Each
signal is linearly polarized with zero cross-polarization.
\begin{center}
\includegraphics[width=1\linewidth]{gwPlot.pdf}
\captionof{figure}{A snapshot of the Abdikamalov \etal
\cite{abdikamalov:2013} catalogue. The top panel shows the GW
strain (scaled by source distance) for five models with different
levels of precollapse differential rotation (from strongest
differential rotation $A1$ to weakest $A5$), each with
$\beta_{ic,b} \sim 0.03$ (i.e., slowly rotating progenitors). The
bottom panel is the same, but for rapidly rotating progenitors
with $\beta_{ic,b} \sim 0.09$.}
\label{fig:gw}
\end{center}
We can see a general waveform morphology in figure~\ref{fig:gw}.
During core collapse, there is a slow increase in GW strain until the
first local maximum is reached (before 0.5 s). This is followed by
core bounce, where the strain rapidly decreases towards a local
minimum (at 0.5 s). This corresponds to the time when the inner core
expands at bounce. After this, there is a period of ring-down
oscillations. For slowly rotating progenitors, we see in the top
panel of figure~\ref{fig:gw} that the GW strain is essentially the
same during collapse and bounce and only differs during ring-down. For
the rapidly rotating progenitors presented in the bottom panel of
figure~\ref{fig:gw}, larger precollapse differential rotation results
in: a smaller local maximum during core collapse; a more negative
local minimum during core bounce; and a larger first ring-down peak.
Because of these patterns, Abdikamalov \etal \cite{abdikamalov:2013}
concluded that inferences about precollapse differential rotation
could in principle be made for rapidly rotating cores.
The data analyzed are CCSN GW signals injected in coloured Gaussian
noise using the Advanced LIGO noise curve with one-sided power
spectral density (PSD), $S_1(f)$. The data is then Tukey windowed to
mitigate spectral leakage. Rather than fixing source distance
to 10 kpc (as done in \cite{abdikamalov:2013}), this analysis assumes
a fixed signal-to-noise ratio (SNR) of $\rho = 20$. SNR is defined as
\begin{equation}
\label{eq:SNR}
  \rho = \sqrt{4 \sum_j \frac{\frac{\Delta_t}{N}|\tilde{y}_j|^2}{S_1(f_j)}} ,
\end{equation}
where $\tilde{y}_j, j = 1, 2, \ldots, N$, are the Fourier transformed
data, $\Delta_t$ is the distance between two consecutive time points,
and $f_j, j = 1, 2, \ldots, N$, are the Fourier frequencies. As done
in \cite{roever:2009}, $S_1(f)$ is estimated \textit{a priori} by
averaging 1000 empirical periodograms from identically simulated
Advanced LIGO noise. This corresponds to a realistic scenario where
the noise spectrum must be estimated as well.
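
For concreteness, the PSD estimate and the SNR sum above could be coded
as follows (a minimal sketch; the periodogram normalisation and the
restriction to the positive-frequency bins of a real FFT are
bookkeeping conventions assumed here, not prescriptions from the
catalogue):
\begin{verbatim}
import numpy as np

def estimate_psd(noise_draws, dt):
    """Average one-sided periodograms over many identically simulated
    noise realisations (one realisation per row)."""
    N = noise_draws.shape[1]
    pgram = 2.0 * dt / N * np.abs(np.fft.rfft(noise_draws, axis=1))**2
    return pgram.mean(axis=0)

def snr(y, S1, dt):
    """SNR sum over the non-redundant positive-frequency bins (the DC
    bin, where the PSD model can be singular, is dropped)."""
    N = len(y)
    ytil = np.fft.rfft(y)
    return np.sqrt(4.0 * np.sum((dt / N) * np.abs(ytil[1:])**2
                                / S1[1:]))
\end{verbatim}
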
Although supernovae from the Milky Way will not produce SNRs as small
as $\rho = 20$, we choose this value to illustrate that our methods
are robust at lower SNRs.
\section{Methods and models}
\subsection{Bayesian inference}
Bayesian inference requires three pivotal quantities.
The \textit{likelihood} function $p(\mathbf{z} | \boldsymbol\theta)$
is the probability density function (PDF) of the data $\mathbf{z}$,
conditional on the random vector of model parameters
$\boldsymbol\theta$. The \textit{prior} $p(\boldsymbol\theta)$ is
the PDF of the model parameters, that takes into account all of the
information known about $\boldsymbol\theta$ before the data is
observed. The \textit{posterior} $p(\boldsymbol\theta |
\mathbf{z})$ is the updated PDF of model parameters after the data is
observed. These quantities are related via Bayes' theorem
\begin{eqnarray}
\label{eq:bayes}
p(\boldsymbol\theta | \mathbf{z})
& = & \frac{p(\boldsymbol\theta)p(\mathbf{z}|\boldsymbol\theta)}{m(\mathbf{z})} \\
& \propto & p(\boldsymbol\theta)p(\mathbf{z} | \boldsymbol\theta) ,
\end{eqnarray}
where $m(\mathbf{z}) = \int p(\boldsymbol\theta)p(\mathbf{z} |
\boldsymbol\theta) \mathrm{d}\boldsymbol\theta$ is the
\textit{marginal likelihood} and is treated as a normalizing constant
since it is independent of $\boldsymbol\theta$. That is, the
posterior is proportional to the prior multiplied by the likelihood.
Posterior sampling can be performed directly if the posterior PDF has
a closed analytical form. Otherwise, MCMC
techniques are a useful work-around. The key building blocks in MCMC
simulations are the Gibbs sampler \cite{geman:1984} and the
Metropolis-Hastings algorithm \cite{metropolis:1953, hastings:1970}.
We use a combination of the two --- the so-called
Metropolis-within-Gibbs sampler --- in this study. For a detailed
account of Bayesian inference and MCMC algorithms, refer to
\cite{gelman:2013}.
\subsection{Model 1: Bayesian PCR with known signal arrival time}
We aim to first reduce the dimension of the BC by a PCA, or
equivalently a singular value decomposition (SVD) as suggested by Heng
\cite{heng:2009}. Each BC waveform is represented as a linear
combination of orthonormal basis vectors, where the projection of the
data onto the first basis vector has maximum variance, the projection
onto the second basis vector has second highest variance, and so on.
By considering only projections on the first $d < l$ basis vectors,
the so-called $d$ PCs, a parsimonious representation of the catalogue
signals in $d$ dimensions is achieved that preserves as much of the
information of the original BC as possible.
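
In practice, the basis extraction amounts to one SVD call, e.g. (a
minimal sketch; the convention of centring on the mean waveform is an
assumption consistent with the mean-centred PCs used below):
\begin{verbatim}
import numpy as np

def catalogue_pcs(W, d):
    """W is the N x l matrix whose columns are the catalogue waveforms;
    return the first d orthonormal principal component vectors."""
    Wc = W - W.mean(axis=1, keepdims=True)   # subtract mean waveform
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    return U[:, :d]
\end{verbatim}
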
Once PCA is conducted, the first $d$ PCs are treated as the
explanatory variables of a linear model. The data analyzed is a time
series vector $\mathbf{y}$ of length $N$ and decomposes into additive
signal and noise components. Let $\tilde{\mathbf{y}}$ be the Fourier
transformed data vector of length $N$ and let $\tilde{\mathbf{X}}$ be
the $N \times d$ design matrix, whose columns are the Fourier
transformed mean-centered PC vectors from the BC. The frequency
domain linear model is
\begin{equation}
\label{eq:freqLM}
\tilde{\mathbf{y}} = \tilde{\mathbf{X}} \boldsymbol\alpha + \tilde{\boldsymbol\epsilon},
\end{equation}
where $\boldsymbol\alpha$ is the vector of PCR
coefficients and $\tilde{\boldsymbol\epsilon}$ is the Fourier
transformed coloured zero-mean Gaussian noise vector whose variance
terms are proportional to the \textit{a priori known} one-sided
power spectral density $S_1(f)$. That is,
\begin{equation}
\label{eq:variances}
\sigma^2_{f_j} = \frac{N}{4 \Delta_t} S_1(f_j).
\end{equation}
Due to Hermitian symmetry, the frequency domain data vector
$\tilde{\mathbf{y}}$ contains only the non-redundant real and
imaginary components and is therefore the same length as the time
domain vector $\mathbf{y}$. Conversion between time and frequency
domains is conducted using a fast Fourier transform (FFT).
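
One way to realise this packing of the non-redundant real and imaginary
components is sketched below (an assumption of this sketch is that $N$
is even, as it is here; helper names are ours):
\begin{verbatim}
import numpy as np

def fft_real_packed(x):
    """Map a real length-N vector to N real numbers: real parts of the
    non-negative-frequency bins, then imaginary parts of the interior
    bins; Hermitian symmetry makes the remaining bins redundant."""
    X = np.fft.rfft(x)                       # N/2 + 1 complex bins
    return np.concatenate([X.real, X.imag[1:-1]])

def design_matrix_freq(pcs_time):
    """Pack each mean-centred PC column in the same way."""
    return np.column_stack([fft_real_packed(pcs_time[:, k])
                            for k in range(pcs_time.shape[1])])
\end{verbatim}
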
The likelihood for the Bayesian PCR model with known signal arrival
time is
\begin{equation}
\label{eq:likelihood1}
p(\tilde{\mathbf{y}} | \boldsymbol\alpha) \propto \exp \left( -2 \sum_{j = 1}^{N} \frac{\frac{\Delta_t}{N}\left( \tilde{y}_j - \left( \tilde{\mathbf{X}} \boldsymbol\alpha \right)_j \right)^2}{S_1 \left(f_j\right)} \right).
\end{equation}
Assuming flat ($\mathrm{Uniform}(-\infty, \infty)$) priors on
$\boldsymbol\alpha$, the posterior distribution for the PC
coefficients is
\begin{equation}
\label{eq:condpost1}
\mathrm{P}(\boldsymbol\alpha | \tilde{\mathbf{y}}) = \mathrm{N}(\boldsymbol\mu, \boldsymbol\Sigma),
\end{equation}
where
\begin{eqnarray}
\boldsymbol\Sigma &=& (\tilde{\mathbf{X}}^{'} \mathbf{D}^{-1} \tilde{\mathbf{X}})^{-1}, \\
\boldsymbol\mu &=& \boldsymbol\Sigma \tilde{\mathbf{X}}^{'} \mathbf{D}^{-1} \tilde{\mathbf{y}},
\end{eqnarray}
and $\mathbf{D} = \mathrm{diag}(\sigma^2_{f_j})$ is the diagonal
covariance matrix of individual variances for the noise component.
This multivariate normal distribution can be sampled directly with no
MCMC required.
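
A direct implementation of this conjugate update is only a few lines (a
minimal sketch; \texttt{sigma2} holds the per-bin variances
$\sigma^2_{f_j}$ defined above):
\begin{verbatim}
import numpy as np

def posterior_alpha(Xf, yf, sigma2):
    """Posterior mean and covariance of the PC coefficients for the
    known-arrival-time model (packed frequency-domain quantities)."""
    Dinv = 1.0 / sigma2
    Sigma = np.linalg.inv((Xf.T * Dinv) @ Xf)   # (X' D^-1 X)^-1
    mu = Sigma @ (Xf.T @ (Dinv * yf))           # Sigma X' D^-1 y
    return mu, Sigma

# direct draws, no MCMC needed:
# alphas = np.random.multivariate_normal(mu, Sigma, size=5000)
\end{verbatim}
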
Noninformative priors were chosen for this model. It was important to
keep the data and prior knowledge separate and distinct, and to avoid
using information from the waveform catalogue for both purposes. As
the only data available for analysis were the generated GWs, we
assumed complete prior ignorance on all model parameters.
\subsection{Model 2: Bayesian PCR with unknown signal arrival time}
The Bayesian PCR model presented in the previous section assumed a
known signal arrival time. The precise arrival time of a GW signal to
an interferometer will generally not be known in practice,
and must therefore be included as an additional unknown parameter in
the statistical model.
Let $T$ be a cyclical time shift representing
the unknown signal arrival time, and let $\tilde{\mathbf{X}}_T$ be the
Fourier transformed design matrix $\tilde{\mathbf{X}}$ shifted by lag
$T$, such that the Fourier transformed PCs are aligned with the
Fourier transformed data vector, $\tilde{\mathbf{y}}$. This
transformation can be done directly in the frequency domain as a phase
shift by multiplying the columns of $\tilde{\mathbf{X}}$ by $\exp(-2
\pi \mathrm{i} f T)$.
We build on the Bayesian signal reconstruction model presented in
\cite{roever:2009}, although our primary goal is inferring the
physical parameters of a supernova progenitor and not signal
reconstruction.
Using the same reasoning described in the previous section, we assume
flat priors on $\boldsymbol\alpha$ and $T$. The likelihood for the
Bayesian PCR model with unknown signal arrival time is
\begin{equation}
\label{eq:likelihood2}
p(\tilde{\mathbf{y}} | \boldsymbol\alpha, T) \propto \exp \left( -2 \sum_{j = 1}^{N} \frac{\frac{\Delta_t}{N}\left( \tilde{y}_j - \left( \tilde{\mathbf{X}}_T \boldsymbol\alpha \right)_j \right)^2}{S_1 \left(f_j\right)} \right).
\end{equation}
For a given time shift $T$, the conditional posterior
distribution for the PC coefficients $\boldsymbol\alpha | T$ is
\begin{equation}
\label{eq:condpost2}
\mathrm{P}(\boldsymbol\alpha | T, \tilde{\mathbf{y}}) = \mathrm{N}(\boldsymbol\mu_T, \boldsymbol\Sigma_T),
\end{equation}
where
\begin{eqnarray}
\boldsymbol\Sigma_T &=& (\tilde{\mathbf{X}}^{'}_T \mathbf{D}^{-1} \tilde{\mathbf{X}}_T)^{-1}, \\
\boldsymbol\mu_T &=& \boldsymbol\Sigma_T \tilde{\mathbf{X}}^{'}_T \mathbf{D}^{-1} \tilde{\mathbf{y}} .
\end{eqnarray}
To estimate $\boldsymbol\alpha$ and $T$, we construct a Markov chain
whose stationary distribution is the posterior distribution of
interest using a Metropolis-within-Gibbs sampler \cite{gelman:2013}.
This is essentially a Gibbs sampler that alternates between the full
set of conditional posterior distributions $P(\boldsymbol\alpha | T,
\tilde{\mathbf{y}})$ and $P(T | \boldsymbol\alpha,
\tilde{\mathbf{y}})$. The former can be sampled directly using
equation~(\ref{eq:condpost2}), and the latter requires a random walk
Metropolis step, hence the name Metropolis-within-Gibbs.
After initialization, step $i + 1$ in the Metropolis-within-Gibbs algorithm is:
\begin{enumerate}
\item Directly sample the conditional posterior of $\boldsymbol\alpha_{i+1} |
T_i$ using equation~(\ref{eq:condpost2});
\item Propose $T_{*}$ from $t_{\nu}(T_i, \zeta^2)$ and accept $T_{i+1}
= T_{*}$ with the Metropolis acceptance probability
\begin{equation}
\label{eq:metropolis}
r = \min\left(1, \frac{p(T_{*} | \boldsymbol\alpha, \tilde{\mathbf{y}})}{p(T_i | \boldsymbol\alpha, \tilde{\mathbf{y}})}\right).
\end{equation}
Otherwise reject and set $T_{i+1} = T_i$.
\end{enumerate}
A $t$-distribution was chosen as the proposal distribution for the
algorithm. It has a similar (symmetrical) shape to the normal
distribution but has heavier tails and an additional
degrees-of-freedom parameter, $\nu$. The heavier tails of the
$t$-distribution result in bolder and more robust proposals than the
normal distribution, ensuring the algorithm does not get stuck in
local modes \cite{gelman:2013}. The degrees-of-freedom parameter was
set to $\nu = 3$, which is the smallest integer that yields a
distribution with finite variance. The proposal for $T_{i+1}$ is
centered on $T_i$, and has scale parameter $\zeta^2$ that is initially
and arbitrarily set to 0.05, and subsequently automatically tuned
during the algorithm to ensure good mixing and acceptance rates.
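
Putting the two steps together, the sampler can be sketched as below (a
minimal sketch that reuses \texttt{posterior\_alpha} from the known-$T$
model; the adaptive tuning of $\zeta^2$ described above is omitted for
brevity, and helper names are ours):
\begin{verbatim}
import numpy as np

def shift_design(pcs_time, dt, T):
    """Phase-shift each PC by exp(-2 pi i f T) and re-pack."""
    N = pcs_time.shape[0]
    f = np.fft.rfftfreq(N, d=dt)
    cols = []
    for k in range(pcs_time.shape[1]):
        X = np.fft.rfft(pcs_time[:, k]) * np.exp(-2j * np.pi * f * T)
        cols.append(np.concatenate([X.real, X.imag[1:-1]]))
    return np.column_stack(cols)

def mwg(yf, pcs_time, dt, sigma2, n_iter=20000, zeta2=0.05, nu=3):
    T, chain = 0.0, []
    for _ in range(n_iter):
        # (i) alpha | T drawn directly from its conditional normal
        Xf = shift_design(pcs_time, dt, T)
        mu, Sigma = posterior_alpha(Xf, yf, sigma2)
        alpha = np.random.multivariate_normal(mu, Sigma)
        # (ii) random-walk Metropolis step for T with a t_3 proposal
        def loglik(tt):
            r = yf - shift_design(pcs_time, dt, tt) @ alpha
            return -0.5 * np.sum(r**2 / sigma2)
        T_star = T + np.sqrt(zeta2) * np.random.standard_t(nu)
        if np.log(np.random.rand()) < loglik(T_star) - loglik(T):
            T = T_star
        chain.append((alpha.copy(), T))
    return chain
\end{verbatim}
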
\subsection{Posterior predictive distribution}
For each of the $l = 92$ signals in the BC and $m = 47$ signals in the
TC, we fit both Bayesian PCR models, with $d$ PCs (where the choice of
$d$ is explained below). We then construct an $l \times (d + 1)$
design matrix $\mathbf{A}$ whose rows are the \textit{posterior means}
of the $d$ PC coefficients, plus an intercept term, for each of the
$l$ signals in the BC. The primary goal is to exploit the posterior
PC coefficient space to make inferences on the physical parameters of
rotating core collapse stellar events in the TC. We accomplish this
by fitting a linear model with the known physical parameters from the
BC as the response variable on the design matrix $\mathbf{A}$ using
\begin{equation}
\label{eq:LM2}
\boldsymbol\xi = \mathbf{A} \boldsymbol\gamma + \boldsymbol\delta,
\end{equation}
where $\boldsymbol\xi$ is the vector of known continuous physical
parameters, $\boldsymbol\gamma$ is the vector of regression
coefficients, and $\boldsymbol\delta$ is an error term. The error
term is assumed to come from an independent and identically
distributed normal distribution with zero mean and variance
$\sigma^2$. Predictions using the \textit{posterior predictive
distribution} are the primary interest in this analysis, and not the
model parameters themselves.
Assuming the convenient noninformative prior distribution that is
uniform on $(\boldsymbol\gamma, \log \sigma)$, the posterior
predictive distribution for a normal linear model is a multivariate
$t$-distribution and can be sampled from directly with no MCMC
\cite{gelman:2013}. The formula is
\begin{equation}
\mathrm{P}(\check{\boldsymbol\xi} | \boldsymbol\xi) = t_{l - d}\left(\check{\mathbf{A}}\hat{\boldsymbol\gamma}, s^2\left(I + \check{\mathbf{A}} \mathbf{V}_{\boldsymbol\xi} \check{\mathbf{A}}^{'}\right)\right) ,
\end{equation}
where $\check{\boldsymbol\xi}$ is the vector of outcomes we wish to
predict (i.e., the physical parameters from signals in the TC),
$\check{\mathbf{A}}$ is the $m \times (d + 1)$ matrix whose rows are
the posterior means of the signals in the TC (and an intercept term)
from the Bayesian PCR step, $I$ is the $m \times m$ identity matrix,
and
\begin{eqnarray}
\mathbf{V}_{\boldsymbol\xi} &=& (\mathbf{A}^{'}\mathbf{A})^{-1}, \\
\hat{\boldsymbol\gamma} &=& \mathbf{V}_{\boldsymbol\xi} \mathbf{A}^{'}\boldsymbol\xi, \\
s^2 &=& \frac{1}{l - d} (\boldsymbol\xi - \mathbf{A}\hat{\boldsymbol\gamma})^{'}(\boldsymbol\xi - \mathbf{A}\hat{\boldsymbol\gamma}).
\end{eqnarray}
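
Sampling from this multivariate $t$ needs no MCMC either; one standard
construction is the normal/scaled-chi-square mixture below (a minimal
sketch; the degrees of freedom follow the $l - d$ count quoted above,
though conventions for counting the intercept vary):
\begin{verbatim}
import numpy as np

def predictive_draws(A, xi, A_check, n_draws=10000):
    """Direct draws from the posterior predictive of the linear model;
    A is l x (d+1) (posterior means plus intercept), xi the known
    physical parameters, A_check the m x (d+1) test-catalogue rows."""
    l, k = A.shape
    dof = l - (k - 1)                       # l - d
    V = np.linalg.inv(A.T @ A)
    gam = V @ A.T @ xi
    s2 = np.sum((xi - A @ gam)**2) / dof
    mean = A_check @ gam
    scale = s2 * (np.eye(len(mean)) + A_check @ V @ A_check.T)
    z = np.random.multivariate_normal(np.zeros(len(mean)), scale,
                                      n_draws)
    g = np.random.chisquare(dof, n_draws) / dof
    return mean + z / np.sqrt(g)[:, None]
\end{verbatim}
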
\subsection{Deviance information criterion and constrained optimization}
The \textit{deviance} is defined as $D = -2 \log p(\mathbf{z} |
\boldsymbol\theta)$ where $p(\mathbf{z} | \boldsymbol\theta)$ is the
likelihood of a statistical model, and $\boldsymbol\theta$ is the
vector of model parameters. The \textit{deviance information
criterion} (DIC) is a Bayesian model comparison technique and a
generalization of Akaike information criterion (AIC) for hierarchical
models \cite{spiegelhalter:2002}. DIC is defined as
\begin{eqnarray}
\mathrm{DIC} & = & \bar{D} + p_D \label{eqn:DIC1} \\
& = & 2\bar{D} - D(\bar{\boldsymbol\theta}) , \label{eqn:DIC2}
\end{eqnarray}
where $\bar{D}$ is the mean deviance from posterior samples, $p_D$ is
the effective number of parameters, and $D(\bar{\boldsymbol\theta})$
is the deviance evaluated at the posterior means of the
parameters. When comparing competing statistical models, the lowest
DIC is preferred. $\bar{D}$ is a measure of fit, and $p_D$ is a
measure of model complexity used to penalize models with too many
parameters. Equation~(\ref{eqn:DIC1}) therefore illustrates how DIC
incorporates Occam's razor, allowing one to select a parsimonious
model, balancing between fit and complexity. Equation~(\ref{eqn:DIC2}),
on the other hand, provides a simple method for computing DIC.
$\bar{D}$ is calculated by evaluating the deviance for each of the
stored model parameters $\boldsymbol\theta$ that have been sampled
from their joint posterior PDF, and then taking the average.
$D(\bar{\boldsymbol\theta})$ is calculated by finding the posterior
mean of each of the model parameters $\bar{\boldsymbol\theta}$ and
then evaluating the deviance.
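
The computation is direct once posterior draws are stored, e.g. (a
minimal sketch; \texttt{deviance} maps a parameter vector to $-2 \log
p(\mathbf{z} | \boldsymbol\theta)$):
\begin{verbatim}
import numpy as np

def dic(deviance, theta_samples):
    """DIC = 2 * Dbar - D(theta_bar) from stored posterior draws."""
    theta_samples = np.asarray(theta_samples)
    Dbar = np.mean([deviance(th) for th in theta_samples])
    return 2.0 * Dbar - deviance(theta_samples.mean(axis=0))
\end{verbatim}
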
DIC is the preferred model comparison technique in this analysis. A
popular alternative, Bayes factors, would require computing the
marginal likelihood from equation~(\ref{eq:bayes}), which involves
multi-dimensional integration over a large number of parameters.
Numerical techniques such as nested sampling \cite{skilling:2006} can
be used to derive the marginal likelihood but these methods require
significant computational time and power. On the other hand, DIC is
easily computed from posterior samples. Another benefit of using DIC
over Bayes factors is that improper priors (which we have assumed in
this analysis) do not violate any conditions of use, whereas Bayes
factors are not well defined under improper priors.
The choice of the number of PCs has been arbitrary in most of the
supernova GW parameter estimation literature and this number has
usually been $d = 10$ (see for example \cite{roever:2009,
abdikamalov:2013}). We propose a method for selecting the optimal
choice of $d$ based on careful analysis of the DIC for competing
models and constrained optimization. Since PCs are ordered by the
total amount of variation they make up in the data set, PCA provides a
convenient ordering system for nested modelling. Let $M_d, d \in \{1,
2, \ldots, 92\}$, represent the set of possible PCR models,
where $d$ is the number of explanatory variables. The models are
nested such that $M_1$ has one explanatory variable (PC1), $M_2$ has
two explanatory variables (PC1 and PC2), and so on.
For each of the $l = 92$ signals in the BC (injected in Advanced LIGO
noise), all of the models $M_d, d \in \{1, 2, \ldots, 92\}$, are
fitted and then compared using DIC. The model with the lowest DIC is
the best fit to the data. However, models with an absolute
difference in DIC of $\lesssim 5$ are generally taken to be
indistinguishable from one another \cite{spiegelhalter:2002} and so to
prevent over-fitting, we propose a constrained optimization routine,
where we select the smallest $d$ such that the difference in DIC
between $M_d$ and the model with the minimum DIC is less than 5. More
specifically, let $M_{\min}$ be the model with the minimum DIC, then
define
\begin{equation}
\label{eq:optimization}
d = \min \bigg\{ d' : \mathrm{DIC}(M_{d'}) - \mathrm{DIC}(M_{\min}) < 5 \bigg\} .
\end{equation}
We employ this routine for each of the $l = 92$ BC signals, and look
at the distribution of $M_d$'s over all signals. The median of this
distribution seems a prudent choice for a general-purpose number of
PCs since these distributions tend to be skewed. It is
important to note here that we cannot choose a different value for $d$
for each signal when implementing these models as this would lead to a
very sparse design matrix $\mathbf{A}$ when sampling from the
posterior predictive distribution.
We refer the reader to figure~\ref{fig:occam} in the results section
of this paper for an example of this method in action.
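
Given the DIC curve of a signal, the constrained optimization reduces
to a one-line search (a minimal sketch of
equation~(\ref{eq:optimization})):
\begin{verbatim}
import numpy as np

def choose_d(dic_values, tol=5.0):
    """Smallest dimension whose DIC lies within tol of the minimum;
    dic_values[i] is the DIC of the model with i + 1 PCs."""
    dic_values = np.asarray(dic_values)
    return int(np.argmax(dic_values - dic_values.min() < tol)) + 1

# general-purpose d: the median over all base-catalogue signals, e.g.
# d = int(np.median([choose_d(v) for v in dic_per_signal]))
\end{verbatim}
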
\subsection{Na\"{i}ve Bayes classifier}
The NBC \cite{ripley:1996} is a common supervised learning
algorithm and discriminant method used to group objects into a
discrete set of classes based on a set of features. The algorithm
requires a \textit{training set} of objects with known groupings and
observed features. Once the algorithm has learnt from the training
set, each object in a \textit{test set} (containing a set of observed
features and potentially unknown classes) is assigned to the group
that it has the highest probability of belonging to.
The ``Bayes'' component of the method refers to Bayes' theorem
\begin{equation}
\label{eq:naive}
p(c|\mathbf{u}) \propto p(c) p(\mathbf{u} | c)
\end{equation}
where $c \in C$ is the class that an object could belong to, and
$\mathbf{u}$ are the features we wish to exploit to classify the
object. That is, given some observed features $\mathbf{u}$, what is
the posterior probability of an object belonging to class $c$?
The ``na\"{i}ve'' component refers to the assumption of conditional
independence of the model features $\mathbf{u} = (u_1, u_2, \ldots,
u_d)$. This assumption implies the joint PDF $p(\mathbf{u} | c)$ can
be factorized as the product of marginal distributions
\begin{equation}
\label{eq:factorize}
p(\mathbf{u} | c) = \prod_{i = 1}^d p(u_i | c),
\end{equation}
and so equation~(\ref{eq:naive}) becomes
\begin{equation}
\label{eq:naive2}
p(c | \mathbf{u}) = p(c) \prod_{i = 1}^d p(u_i | c).
\end{equation}
Given class $c$, each feature $(u_1, u_2, \ldots, u_d)$ is assumed to
be independently normally distributed. The model parameters are
approximated using the relative frequencies from the training set.
The class prior probabilities $p(c)$ are specified as the number of
objects in class $c$ in the training set divided by the total number
of objects. Objects are grouped into the class that yields the
highest posterior probability. This is known as the maximum \textit{a
posteriori} (MAP) decision rule.
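
A Gaussian NBC of this kind is short to write down explicitly (a
minimal sketch; \texttt{U} holds the posterior-mean PC coefficients as
rows and \texttt{c} the class labels as a NumPy array):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def nbc_fit(U, c):
    """Per-class feature means/sds and empirical class priors."""
    return {k: (U[c == k].mean(axis=0), U[c == k].std(axis=0, ddof=1),
                np.mean(c == k)) for k in np.unique(c)}

def nbc_predict(model, u):
    """MAP class under the conditional-independence assumption."""
    scores = {k: np.log(pi) + norm.logpdf(u, m, s).sum()
              for k, (m, s, pi) in model.items()}
    return max(scores, key=scores.get)
\end{verbatim}
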
\subsection{$k$-nearest neighbour}
An alternative machine learning algorithm to the NBC is the $k$-NN
\cite{ripley:1996}, which uses a measure of ``closeness'' between
objects rather than a probabilistic framework. We choose $k = 1$,
meaning that an object in the test set is assigned to the class of its
single nearest neighbour in the training set. Ties in distance are
settled at random.
The definition of closeness in this context depends on the choice
of metric. As commonly used in the literature \cite{ripley:1996}, a
Euclidean distance is assumed. For any object with features
$\mathbf{u} = (u_1, u_2, \ldots, u_d)$ in the test set, the $k$-NN
algorithm finds the object with features $\mathbf{v} = (v_1, v_2,
\ldots, v_d)$ in the training set that minimizes the Euclidean
distance
\begin{equation}
\label{eq:euclidean}
\mathrm{distance}(\mathbf{u}, \mathbf{v}) = \sqrt{\sum_{i = 1}^d (u_i - v_i)^2},
\end{equation}
and then assigns $\mathbf{u}$ to the class of $\mathbf{v}$.
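
The corresponding classifier is a single nearest-neighbour lookup (a
minimal sketch; unlike the random tie-breaking described above,
\texttt{np.argmin} settles exact ties by the first index):
\begin{verbatim}
import numpy as np

def knn1_predict(U_train, c_train, u):
    """Assign u to the class of its Euclidean nearest neighbour."""
    d2 = np.sum((U_train - u)**2, axis=1)
    return c_train[np.argmin(d2)]
\end{verbatim}
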
\section{Results}
\subsection{Model selection}
An important statistical task is to select a prudent number of model
dimensions whilst incorporating Occam's razor into the decision making
process. More specifically, one needs to balance model fit against
complexity to ensure there is no over-fitting. In the context of PCA,
the decision is usually made based on the amount of variation the
first $d$ PCs contribute to the data set (i.e., analyzing Scree
plots). This approach is arbitrary and deals specifically with
dimension reduction, but not Occam's razor. We propose an alternative
approach, involving DIC and constrained optimization.
We analyze the change in DIC as model dimensionality
increases. Figure~\ref{fig:occam} illustrates DIC as a function of
model dimensionality for signal $A1O2.5$ from the Abdikamalov \etal
catalogue \cite{abdikamalov:2013}. This is the typical shape of the
DIC curve for all signals in the BC and a good visual aid of Occam's
razor in action. There tends to be a sharp decrease in DIC as the
model dimension increases at the beginning, where model fit is
improving. DIC flattens out and then reaches a minimum, where there
is the best balance between fit against complexity. After this, there
is a slow rise in DIC as the model dimension increases and becomes too
complex.
\begin{center}
\includegraphics[width=0.9\linewidth]{typicalDICplot.pdf}
\captionof{figure}{DIC as a function of model dimensionality for
model $A1O2.5$ from the Abdikamalov \etal catalogue
\cite{abdikamalov:2013}. The dashed vertical line to the right
represents the model with the minimum DIC ($M_{\min} = M_{22}$).
The dotted vertical line to the left represents the model
dimension after constrained optimization ($M_d = M_{13}$).}
\label{fig:occam}
\end{center}
The flat basin around the global minimum in figure~\ref{fig:occam} is
of particular interest. Since models with an absolute difference in
DIC of less than 5 are essentially indistinguishable, it is sensible
to select the model with the smallest number of dimensions in this
region to prevent over-fitting. For signal $A1O2.5$, we see a
significant decrease in model dimensionality from $M_{\min} = M_{22}$
to $M_d = M_{13}$. The choice of $d$ for this particular signal is $d
= 13$.
It is important to note that $d$ will differ between GW signals but we
must only choose one general-purpose value of this. We therefore
conduct the proposed constrained optimization model selection method
on all of the $l = 92$ BC signals and take the median of the
distribution of $d$'s as the general-purpose $d$. We prefer the
median to the mean as our central measure as it is more robust against
outliers.
\begin{center}
\includegraphics[width=1\linewidth]{dicOptimHistBin1.pdf}
\captionof{figure}{Distribution of model dimensionality for all $l =
92$ signals in the BC under our constrained optimization routine.}
\label{fig:dicHist}
\end{center}
The histogram in figure~\ref{fig:dicHist} shows the distribution of
$d$ for all $l = 92$ signals in the BC. It is highly skewed to the
right, with a median (and mode) of 14 PCs and mean of 17 PCs. We
choose $d = 14$ based on the median of this distribution, and use this
number of explanatory variables in both Bayesian PCR models. We
choose this as the model that minimizes the risks of both
over-fitting and under-fitting.
\subsection{Inferring the ratio of rotational kinetic energy to
gravitational energy of the inner core at bounce, $\beta_{ic,b}$}
We injected each of the $l = 92$ BC and $m = 47$ TC signals in
Advanced LIGO noise (SNR $\rho = 20$) and fitted the two Bayesian PCR
models with $d = 14$ PCs. We then regressed the known value of
$\beta_{ic,b}$ on the posterior means of the BC signals from these
models and sampled from the posterior predictive distribution of the
TC signals.
Figures~\ref{fig:betaKnownT}--\ref{fig:betaUnknownT_Other} show these
predictions of $\beta_{ic,b}$. The true value from the TC (red
triangle) is compared with the predicted value (blue circle) and
uncertainty is measured using 90\% credible intervals (black lines).
Figures~\ref{fig:betaKnownT}~and~\ref{fig:betaKnownT_Other} assume a
\textit{known} signal arrival time. $T$ is \textit{unknown} for
figures~\ref{fig:betaUnknownT}~and~\ref{fig:betaUnknownT_Other}. The
change in background gradient for
figures~\ref{fig:betaKnownT}~and~\ref{fig:betaUnknownT} represents the
varying precollapse differential rotation model $A$ for signals with
LS EOS and standard $Y_e(\rho)$ parametrization. For
figures~\ref{fig:betaKnownT_Other}~and~\ref{fig:betaUnknownT_Other},
the background shade represents GW signals (from a precollapse
differential rotation model $A1$) with a Shen EOS, or
increase/decrease in $Y_e(\rho)$ of $\sim 5\%$. $\beta_{ic,b}$ is
scaled up by a factor of 100 in these plots.
\begin{center}
\includegraphics[width=1\linewidth]{betaKnownT.pdf}
\captionof{figure}{90\% credible intervals of $\beta_{ic,b}$ for the
29 test signals with the LS EOS and standard $Y_e(\rho)$
parametrization. $T$ is \textit{known}.}
\label{fig:betaKnownT}
\end{center}
\begin{center}
\includegraphics[width=1\linewidth]{betaKnownT_Other.pdf}
\captionof{figure}{90\% credible intervals of $\beta_{ic,b}$ for the
18 test signals with varying EOS and $Y_e(\rho)$
parametrization. Note that $m$ refers to an increase in
$Y_e(\rho)$ of 5\%, $p$ refers to a decrease in $Y_e(\rho)$ of
5\%, and $s$ refers to the Shen EOS. $T$ is \textit{known}.}
\label{fig:betaKnownT_Other}
\end{center}
\begin{center}
\includegraphics[width=1\linewidth]{betaUnknownT.pdf}
\captionof{figure}{90\% credible intervals of $\beta_{ic,b}$ for the
29 test signals with the LS EOS and standard $Y_e(\rho)$
parametrization. $T$ is \textit{unknown}.}
\label{fig:betaUnknownT}
\end{center}
\begin{center}
\includegraphics[width=1\linewidth]{betaUnknownT_Other.pdf}
\captionof{figure}{90\% credible intervals of $\beta_{ic,b}$ for the
18 test signals with varying EOS and $Y_e(\rho)$
parametrization. Note that $m$ refers to an increase in
$Y_e(\rho)$ of 5\%, $p$ refers to a decrease in $Y_e(\rho)$ of
5\%, and $s$ refers to the Shen EOS. $T$ is \textit{unknown}.}
\label{fig:betaUnknownT_Other}
\end{center}
We yield accurate predictions of $\beta_{ic,b}$ for most of the test
signals in figure~\ref{fig:betaKnownT}. Signal 27 ($A5O3.25$ from the
catalogue) is an outlier and comes from a slowly rotating core with
uniform rotation. It is likely an outlier due to the strong
stochastic components in the GW signal from prompt postbounce
convection \cite{abdikamalov:2013}. The true values of $\beta_{ic,b}$
are on the boundary of the 90\% credible intervals for signals 3
($A1O10.25$), 9 ($A2O6.25$), 19 ($A3O5.25$), and 23 ($A4O3.25$), but
there is no distinguishable pattern between these signals. The
credible intervals are relatively small, at approximately four units
(times $10^{-2}$) long. This means that it is particularly easy to
distinguish $\beta_{ic,b}$ between GW signals.
The credible intervals widen by a factor of $\sim 1.5$ when
changing from known to unknown signal arrival time. Incorporating an
unknown time shift increases the uncertainty of the PC coefficients
since the MCMC algorithm draws $\boldsymbol\alpha | T$. That is,
conditioning on an uncertain $T$ creates additional uncertainty for
$\boldsymbol\alpha$. However, predictions are still accurate in most
cases. We see in figure~\ref{fig:betaUnknownT}, that signal 27
($A5O3.25$) is an outlier again. Signal 23 ($A4O3.25$) is another
outlier with credible interval on the negative side of the number
line. This is an absurd and physically impossible range for a
strictly positive variable, and is a consequence of the fact that the
priors can only constrain the linear model parameters
$(\boldsymbol\gamma, \sigma^2)$. More specifically, we could not put
priors on the response variable of physical parameters
$\boldsymbol\xi$ to constrain the predicted physical parameters
$\check{\boldsymbol\xi}$. A similarity that this signal has with the
other outlier is that it comes from a slowly rotating core with weak
differential rotation.
Our methods work reasonably well when varying the EOS and
deleptonization parametrization, although we underestimate some
signals with moderate rotation in figure~\ref{fig:betaKnownT_Other}.
Three of these signals come from an increase of $Y_e(\rho)$
parametrization, one from a decrease of $Y_e(\rho)$ parametrization,
and two from the Shen EOS. When incorporating an unknown time shift
in figure~\ref{fig:betaUnknownT_Other}, the added uncertainty from $T$
widens the credible intervals so that they cover the true parameters.
This increase in interval width makes it more difficult to distinguish
$\beta_{ic,b}$ between signals.
We can conclude that the methods employed in this study are moderately
sensitive to uncertainties in $Y_e(\rho)$ and EOS.  It was found in
\cite{dimmelmeier:2008} that a GW signal has relatively weak dependence
on the nuclear EOS.  We showed in an unpublished study
\cite{edwards:2013} that we could correctly distinguish between the LS
and Shen EOS for 50\% of the signals in the Dimmelmeier \etal
\cite{dimmelmeier:2008} waveform catalogue using model comparison
techniques. Note that 21\% were incorrectly identified and 29\%
unidentified. It could therefore be useful to incorporate EOS as an
additional unknown that we wish to infer in future statistical
analyses.
The results presented assume a SNR of $\rho = 20$. To test
robustness, we trialled the analysis on SNRs of $\rho = 50$ and $\rho
= 100$, which are more realistic levels for detecting CCSN events in
the Milky Way. Our predictions and credible intervals of
$\beta_{ic,b}$ were the same, regardless of the SNR. This can be
attributed to using only the posterior means of the PC coefficients in
constructing design matrix $\mathbf{A}$, and not the full spread of
the posterior distributions. This therefore removes uncertainty due
to LIGO noise and signal reconstruction when predicting $\beta_{ic,b}$
from the posterior predictive distribution.
\subsection{Classifying the precollapse differential rotation, $A$}
Precollapse differential rotation is treated as a categorical variable
with five different levels in this analysis. We define the set of
classes $C = \{A1, A2, A3, A4, A5\}$ and apply the NBC and $k$-NN
supervised learning algorithms to extract precollapse differential
rotation from each of the signals in the TC. The model features
$\mathbf{u}$ are the posterior means of the PC coefficients from the
Bayesian PCR models ($\bar{\boldsymbol\alpha}$ for the training set
and $\bar{\check{\boldsymbol\alpha}}$ for the test set). The goal of
this analysis is to let both algorithms learn from the training set to
discriminate GW signals in the test set.
\begin{table}[!h]
\caption{\label{tab:diffRot} Percentage of signals in the TC with correctly identified precollapse differential rotation $A$ using NBC and $k$-NN.}
\begin{indented}
\item[] \begin{tabular}{lcccc}
\br
&\multicolumn{2}{c}{Known $T$ (\%)}&\multicolumn{2}{c}{Unknown $T$ (\%)}\\
\cline{2-5}\noalign{\smallskip}
Differential Rotation, $A$&NBC&$k$-NN&NBC&$k$-NN\\
\mr
$A1$&&&&\\
-- Standard&83&83&83&83\\
-- $\uparrow Y_e(\rho)$&67&50&67&50\\
-- $\downarrow Y_e(\rho)$&67&83&83&100\\
-- Shen EOS&33&17&0&17\\
$A2$&50&75&50&50\\
$A3$&43&57&29&57\\
$A4$&0&80&20&80\\
$A5$&33&33&0&33\\
\br
\end{tabular}
\end{indented}
\end{table}
Table~\ref{tab:diffRot} shows the percentage of signals in the TC that
have a correctly identified level of $A$ using NBC and $k$-NN. We
compare how the methods work when using $\bar{\boldsymbol\alpha}$ and
$\bar{\check{\boldsymbol\alpha}}$ from data with known and unknown
signal arrival times.
The results between models with known and unknown signal arrival times
are quite similar. The standard GWs from class $A1$ are discriminated
well by both algorithms. The decrease (and to some degree, the
increase) in $Y_e(\rho)$ parametrization did not affect the
algorithms' abilities to discriminate. Both algorithms performed
particularly poorly for the Shen EOS test signals, which illustrates
that $A$ is sensitive to the EOS. This is in line with the findings
from \cite{abdikamalov:2013}.
The $k$-NN generally performs better than the NBC for GW signals with
weak to moderate differential rotation ($A3, A4, A5$). This could be
attributed to our choice of class priors for the NBC.  Since
models with stronger differential rotation are more populated in the
BC, they have a higher prior probability than those with weaker
differential rotation.
\section{Discussion}
We have presented a Bayesian framework for inferring the physical
parameters of CCSN from GW data. We have shown
that with a SNR of $\rho = 20$ and optimal orientation of detector to
source, we can extract $\beta_{ic,b}$ with reasonable levels of
uncertainty for the majority of injected test signals. Both of the
Bayesian PCR models presented in this paper worked well. The level
of uncertainty increased when incorporating an unknown signal arrival
time into the model, but this is no surprise as PC coefficients are
conditioned on the signal arrival time for that model. Further, we
found that our methods were moderately sensitive to varying EOS and
$Y_e(\rho)$ parametrizations, and predictions are generally good.
The chosen measure of uncertainty in this analysis was the 90\%
credible interval. A great benefit of the Bayesian framework is the
probabilistic interpretation of credible intervals, enabling one to
make statements such as, ``with probability 0.9, $\beta_{ic,b}$ is
between $2.5 \times 10^{-2}$ and $6.5 \times 10^{-2}$.''
A true strength of the methods presented in this paper is their
generality. We initially applied these techniques to the Dimmelmeier
\etal catalogue \cite{dimmelmeier:2008} as a proof of concept and then
easily transferred to the Abdikamalov \etal catalogue
\cite{abdikamalov:2013}. In this study we sampled $\beta_{ic,b}$ from
its posterior predictive distribution. This method could however be
conducted on any continuous variable of physical interest. Although
not presented here, predictions of the initial central angular
velocity $\Omega_c$ were comparable to what we found with
$\beta_{ic,b}$.
Choosing to only use the posterior means of the PC coefficients
$\bar{\boldsymbol\alpha}$ in the construction of the design matrix
$\mathbf{A}$ removed some of the variability due to LIGO noise and
signal reconstruction. The uncertainty from the Bayesian PCR
modelling step therefore does not flow onto the posterior predictive
sampling step. A more realistic case would be to incorporate this
uncertainty through an errors-in-variables model, which is commonly
used when there are measurement errors in the explanatory variables of
a regression model. We plan to explore this in a future study.
However, a benefit of our method was that predictions were essentially
independent of SNR (at least for $\rho \geq 20$).
An important task in Bayesian analysis is specifying the prior PDF to
describe our beliefs about model parameters before observing the data.
We wanted to avoid using information from the waveform catalogue as
both data and prior knowledge. It is in this light that we believe the
waveform catalogue should be used only as data, and assume complete
prior ignorance on all of the model parameters.
We applied the NBC and $k$-NN algorithm to extract precollapse
differential rotation. We found that results were comparable between
known and unknown signal arrival times. The $k$-NN algorithm
generally performed better than the NBC under the assumptions made.
In future work, we plan to investigate how the choice of prior for the
NBC affects classification, as well as exploring different metrics
such as the Mahalanobis distance (which takes correlations of the data
into account) for the $k$-NN. We are also investigating an
alternative classification routine, Bayesian ordinal probit
regression.
We introduced a constrained optimization approach to model selection
that allowed us to select an appropriate number of PCs for the
Bayesian PCR models. To our knowledge, this is the first attempt at
doing so. Techniques such as reversible jump MCMC (RJMCMC)
\cite{green:1995} have been utilized in GW data analysis contexts
\cite{umstatter:2005}. RJMCMC could prove to be a useful and more
sophisticated approach than the method presented in the current study.
Although our method required a lot of parallel computing, we found it
to be a novel solution to the model selection problem.
Our analysis assumed optimal orientation of a GW source to a single
interferometer. As presented in \cite{roever:2007a, roever:2007b} for
compact binary inspiral signals, we plan to extend the methods
presented in this study to a network of detectors. This is an important
generalization as one can triangulate the position of a GW source
using coherent data from multiple detectors. The ability to locate a
GW source would allow astronomers to compare and verify whether there
was a true astrophysical event or a glitch with electromagnetic
observations.
\ack
We thank Ik Siong Heng for a thorough reading of the manuscript, Ernazar
Abdikamalov for providing us with the waveform catalogue and
supplementary materials, Christian D. Ott for helpful discussions, and
the New Zealand eScience Infrastructure (NeSI) for their high
performance computing facilities and support. NC's work is supported
by NSF grant PHY-1204371. This paper has been given LIGO Document
Number P1400034.
\section*{References}
\bibliographystyle{unsrt}
\section{\bfseries Introduction}
\
\par
We consider an obstacle $D$ immersed in a region $\Omega\subset \mathbb{R}^d$ $(d=2,3)$ which is filled with a viscous fluid.
Then, the velocity vector $u$ and the scalar pressure $p$ of the fluid in the presence of the obstacle $D$ fulfill the following boundary value
problem for the Stokes system:
\begin{equation}\label{P1}
\left\{\begin{array}{rllll}
-\mathrm{div}(\sigma(u,p))&=&0& ,& \textrm{ in }\Omega\setminus\overline{D},\\
\mathrm{div} u&=&0&,&\textrm{ in } \Omega\setminus\overline{D},\\
u&=&g&,& \textrm{ on }\partial\Omega,\\
u&=&0&,&\textrm{ on }\partial D,
\end{array}
\right.
\end{equation}
where $\sigma(u,p)=2\mu e(u)-pI$ is the stress tensor, $e(u)=\frac{(\nabla u+\nabla u^{T})}{2}$ is the strain tensor, $I$ is the identity matrix of order
$d\times d$, $n$ denotes the exterior unit normal to $\partial\Omega$ and $\mu >0$ is the viscosity. The condition $u|_{\partial D}=0$ is the so-called
\emph{no-slip} condition.
Given the boundary velocity $g\in (H^{3/2}(\partial\Omega))^d$ satisfying the compatibility condition
\begin{equation*}
\int_{\partial\Omega} g\cdot n =0,
\end{equation*}
we consider the solution to Problem \eqref{P1}, $(u,p)\in (H^1(\Omega\backslash \overline{D}))^d\times L^2(\Omega\backslash \overline{D})$, and measure the corresponding Cauchy force
on $\partial\Omega$, $\displaystyle\psi=\sigma(u,p) n|_{\partial\Omega}$, in order to recover the obstacle $D$.
Then, it is well known that this inverse problem has a unique solution. In fact, in \cite{alvarez2005identification}, the authors prove uniqueness
in the case of the steady-state and evolutionary Stokes system using the unique continuation property of solutions.
By uniqueness we mean the following fact: if $u_1$ and $u_2$ are two solutions of \eqref{P1} corresponding to a given boundary
data $g$, for obstacles $D_1$ and $D_2$ respectively, and $\sigma(u_1,p_1) n=\sigma(u_2,p_2) n$ on an open subset $\Gamma_0\subset \partial\Omega$,
then $D_1=D_2$. Moreover, in \cite{ballerini2010stable}, $\log-\log$ type stability estimates for the Hausdorff distance between the boundaries
of two cavities in terms of the Cauchy forces have been derived.
Reconstruction algorithms for the detection of the obstacle have been proposed in \cite{CCG2015} and in \cite{heck2007reconstruction}.
The method used in \cite{heck2007reconstruction} relies on the construction of special complex geometrical optics solutions for the
stationary Stokes equation with a variable viscosity. In \cite{CCG2015}, the detection algorithm is based on topological sensitivity and
shape derivatives of a suitable functional.
We would like to mention that there hold $\log$ type stability estimates for the Hausdorff distance between the boundaries of two cavities in terms of
boundary data, also in the case of conducting cavities and elastic cavities (see \cite{ABRV2001}, \cite{CHY2001} and \cite{MR2004}).
These very weak stability estimates reveal that the problem is severely ill-posed, limiting the possibility of an efficient reconstruction of the
unknown object and motivating, both mathematically and from the point of view of applications, the identification of partial
information on the unknown obstacle $D$ such as, for example, its size.
In the literature we can find several results concerning the determination of inclusions or cavities and the estimate of their sizes for
different kinds of models. Without being exhaustive, we quote some of them.
For example in \cite{kang2012sharp} and \cite{kang2013bounds} the problem of estimating the volume of inclusions is analyzed using a finite number
of boundary measurements in electrical impedance tomography.
In \cite{doubova2015some}, the authors prove uniqueness, stability and reconstruction of an immersed obstacle in a system modeled by a linear wave equation.
These results are obtained by applying the unique continuation property for the wave equation; in the
two-dimensional case the inverse problem is transformed into a well-posed problem for a suitable cost functional.
We can also mention \cite{heck2007reconstruction}, in which the problem of reconstructing obstacles inside a bounded domain
filled with an incompressible fluid is analyzed by means of special complex geometrical optics solutions for the stationary Stokes equation.
Here we follow the approach introduced by Alessandrini et al. in \cite{alessandrini2002detecting} and in \cite{morassi2003detecting}
and we establish a quantitative estimate of the size of the obstacle $D$, i.e. $|D|$, in terms of suitable boundary measurements. More precisely,
let us denote by $(u_0,p_0)\in (H^1(\Omega))^d\times L^2(\Omega)$ the velocity vector of the fluid and the pressure in the absence of the obstacle $D$,
namely the solution to the Dirichlet problem
\begin{equation}\label{ec1}
\left\{\begin{array}{rllll}
-\mathrm{div}(\sigma(u_0,p_0))&=&0& ,& \textrm{ in }\Omega,\\
\mathrm{div} \ u_0&=&0&,&\textrm{ in } \Omega,\\
u_0&=&g&,& \textrm{ on }\partial\Omega.
\end{array}
\right.
\end{equation}
and let $\displaystyle\psi_0=\sigma(u_0,p_0) n|_{\partial\Omega}$.
We consider now the following quantities
\begin{equation*}
W_0=\displaystyle\int_{\partial\Omega}g\cdot \psi_0\; \qquad \textrm{and} \qquad\; W=\displaystyle\int_{\partial\Omega}g\cdot \psi ,
\end{equation*}
representing the measurements at our disposal. Observe that the following identities hold true
\begin{equation*}
W_0=2\displaystyle\int_{\Omega}|e(u_0)|^2\; \qquad \textrm{and} \qquad\; W=2\int_{\Omega\backslash\overline D}|e(u)|^2 ,
\end{equation*}
giving us the information on the total deformation of the fluid in the corresponding domains, $\Omega$ and $\Omega \backslash \overline D$.
We will establish a quantitative estimate of the size of the obstacle $D$, $|D|$, in terms of the difference $W-W_0$. In order to accomplish this
goal, we will follow the main track of \cite{alessandrini2002detecting} and \cite{morassi2003detecting} applying fine interior regularity results,
Poincar\'{e} type inequalities and quantitative estimates of unique continuation for solutions of the stationary Stokes system.
The plan of the paper is as follows. In Section \ref{sec2} we provide the rigorous formulations of the direct problem and state the main results,
Theorems \ref{teo1}-\ref{teo2}. Section \ref{sec3} is devoted to the proofs of Theorems \ref{teo1}-\ref{teo2}. Finally in Section \ref{sec4} we show
some computational examples.
\setcounter{equation}{0}
\section{\bfseries Main results}\label{sec2}
\
\par
In this section we introduce some definitions and preliminary results that we will use throughout the paper, and we state our main theorems.
Let $x\in\mathbb R^d$, we denote by $B_{r}(x)$ the ball in $\mathbb R^d$ centered in $x$ of radius $r$. We will indicate by $\cdot$ the scalar product between
vectors or matrices. We set $x=(x_1,\ldots,x_{d})$ as $x=(x',x_{d})$, where $x'=(x_1,\ldots,x_{d-1})$.
\begin{definition}[Def. $2.1$ \cite{alessandrini2002detecting}]
Let $\Omega\subset\mathbb R^d$ be a bounded domain. We say that $\partial\Omega$ is of class $C^{k,\alpha},$ with constants $\rho_0,\;M_0>0$, where $k$ is a nonnegative
integer and $\alpha\in[0,1)$, if, for any $x_0\in\partial\Omega,$ there exists a rigid transformation of coordinates, in which $x_0=0$ and
\begin{equation*}
\Omega\cap B_{\rho_0}(0)=\{x\in B_{\rho_0}(0):\;x_{d}>\varphi(x')\},
\end{equation*}
where $\varphi$ is a function of class $C^{k,\alpha}(B'_{\rho_0}(0))$, $k\geq 1$, such that
\begin{equation*}
\varphi(0) =0,\ \qquad
\nabla\varphi(0)=0 \ \qquad \textrm{and} \ \qquad
\|\varphi\|_{C^{k,\alpha}(B'_{\rho_0}(0))} \leq M_0\rho_0.
\end{equation*}
\end{definition}
When $k=0$ and $\alpha=1$ we will say that $\partial\Omega$ is of Lipschitz class with constants $\rho_0,M_0$.
\begin{remark}
We normalize all norms in such a way that they are dimensionally equivalent to their argument, and coincide with the usual norms when $\rho_0=1$.
In this setup, the norm taken in the previous definition is intended as follows:
\begin{equation*}
\|\phi\|_{C^{k,\alpha}(B'_{\rho_0}(0))}=\displaystyle\sum_{i=0}^{k}\rho_0^{i}\|D^{i}\phi\|_{L^{\infty}(B_{\rho_0}'(0))}+\rho_0^{k+\alpha}|D^{k}\phi|_{\alpha,B_{\rho_0}'(0)},
\end{equation*}
where $|\cdot|$ represents the $\alpha$-H\"older seminorm
\begin{equation*}
\displaystyle |D^{k}\phi|_{\alpha,B_{\rho_0}'(0)}=\sup_{x',y'\in B_{\rho_0}'(0),x'\neq y'}\frac{|D^{k}\phi(x')-D^{k}\phi(y')|}{|x'-y'|^{\alpha}},
\end{equation*}
and $D^{k}\phi=\{D^{\beta}\phi\}_{|\beta|=k}$ is the set of derivatives of order $k$. Similarly we set the norms
\begin{equation*}
\displaystyle\|u\|_{L^2(\Omega)}^2=\displaystyle\frac{1}{\rho_0^{d}}\int_{\Omega}|u|^2\ \quad \textrm{and} \ \quad
\displaystyle\|u\|_{H^1(\Omega)}^2=\displaystyle \frac{1}{\rho_0^{d}}\left(\int_{\Omega}|u|^2+\rho_0^2\int_{\Omega}|\nabla u|^2\right).
\end{equation*}
\end{remark}
\subsection{Some classical results for Stokes problem}
\
\par
We now introduce the following quotient space since, for incompressible models, the pressure is defined
only up to an additive constant.
\begin{definition}
Let $\Omega$ be a bounded domain in $\mathbb R^d$. We define the quotient space
\begin{equation*}
L_{0}^{2}(\Omega)=L^2(\Omega)/\mathbb R,
\end{equation*}
represented by the class of functions of $L^2(\Omega)$ which differ by an additive constant.
We equip this space with the quotient norm
\begin{equation*}
\displaystyle\|v\|_{L_0^2(\Omega)}=\inf_{\alpha\in\mathbb R}\|v+\alpha\|_{L^2(\Omega)}.
\end{equation*}
\end{definition}
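Note that the infimum in the quotient norm is attained at (minus) the mean value of $v$: minimizing $\alpha\mapsto\|v+\alpha\|_{L^2(\Omega)}^2$ gives
\begin{equation*}
\displaystyle\|v\|_{L_0^2(\Omega)}=\Big\|v-\frac{1}{|\Omega|}\int_{\Omega}v\Big\|_{L^2(\Omega)}.
\end{equation*}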
The Stokes problem has been studied by several authors and, since it is impossible to quote all the related relevant contributions, we refer
the reader to the extensive surveys \cite{girault2012finite} and \cite{temam2001navier}, and the references therein.
We limit ourselves to present some classical results, useful for the treatment of our problem, concerning existence, uniqueness, stability
and regularity of solutions to the following boundary value problem
for the Stokes system
\begin{equation}\label{2.1}
\left\{\begin{array}{rllll}
-\mathrm{div}(\sigma(u,p))&=&f& ,& \textrm{ in }\Omega,\\
\mathrm{div} \, u &=&0&,&\textrm{ in } \Omega,\\
u&=&g&,& \textrm{ on }\partial\Omega,
\end{array}
\right.
\end{equation}
where, for the sake of simplicity, from now on we assume $\mu(x)\equiv 1$, $\forall x\in\Omega$.
Concerning the well-posedness of this problem we have
\begin{theorem}[Existence and uniqueness, \cite{temam2001navier}] Let $\Omega\subset\mathbb R^d$ be a bounded domain of class $C^2$, with $d\geq 2$.
Let $f\in (H^{-1}(\Omega))^{d}$ and $g\in (H^{1/2}(\partial\Omega))^{d}$ satisfying the compatibility condition
\begin{equation}\label{2.2}
\displaystyle\int_{\partial\Omega}g\cdot n =0.
\end{equation}
Then, there exists a unique $(u,p)\in (H^1(\Omega))^{d}\times L_0^{2}(\Omega)$ solution to problem \eqref{2.1}. Moreover, there exists a positive constant
$C$, depending only on $\Omega$, such that
\begin{equation*}
\|u\|_{H^1(\Omega)}+\|p\|_{L_0^2(\Omega)}\leq C(\|f\|_{H^{-1}(\Omega)}+\|g\|_{H^{1/2}(\partial\Omega)}).
\end{equation*}
\end{theorem}
Regarding the regularity, the following result holds
\begin{theorem}[Regularity of the Stokes problem, \cite{temam2001navier}]\label{TSR} Let $\Omega$ be a bounded domain of class $C^{k+1,1}$ in $\mathbb R^d$,
with $k \in \mathbb{N} \cup \{0\}$ and $d \geq 2$. Then, for any $f\in (H^{k}(\Omega))^{d}$ and $g\in (H^{k+3/2}(\partial\Omega))^{d}$ satisfying \eqref{2.2},
the unique solution to \eqref{2.1} is such that
\begin{equation*}
(u,p)\in (H^{k+2}(\Omega))^d\times H^{k+1}(\Omega).
\end{equation*}
Moreover, we have
\begin{equation*}
\|u\|_{H^{k+2}(\Omega)}+\|p\|_{H^{k+1}(\Omega)}\leq C(\|f\|_{H^{k}(\Omega)}+\|g\|_{H^{k+3/2}(\partial\Omega)}),
\end{equation*}
where $C$ is a positive constant depending only on $\Omega$.
\end{theorem}
\smallskip
\subsection{Preliminaries}
\
\par
In order to prove our main results we need the following a-priori assumptions on $\Omega$, $D$ and the boundary data $g$.
\begin{enumerate}
\item[\bfseries{(H1)}] $\Omega\subset\mathbb R^d$ is a bounded domain with a connected boundary $\partial\Omega$ of Lipschitz class with constants $\rho_0,M_0$.
Further, there exists $M_1>0$ such that
\begin{equation}\label{2}
|\Omega|\leq M_1\rho_{0}^{d}.
\end{equation}
\item[\bfseries{(H2)}] $D\subset\Omega$ is such that $\Omega\setminus\overline{D}$ is connected and it is strictly contained in $\Omega$, that is there exists a positive constant
$d_0$ such that
\begin{equation}\label{H2}
d(D,\partial\Omega)\geq d_0 >0.
\end{equation}
Moreover, $D$ has a connected boundary
$\partial D$ of class $C^{2,\alpha}$, $\alpha \in (0,1]$, with constants $\rho,L$.
\item[\bfseries{(H3)}]
$D$ satisfies $({\bf H2})$ and the scale-invariant fatness condition with constant $Q>0$, that is
\begin{equation}\label{3}
\mathrm{diam}(D)\leq Q\rho.
\end{equation}
\item[\bfseries{(H4)}] $g$ is such that
\begin{equation*}
g\in (H^{3/2}(\partial\Omega))^{d},\quad g\not\equiv 0,\quad
\displaystyle\frac{\|g\|_{H^{1/2}(\partial\Omega)}}{\|g\|_{L^{2}(\partial\Omega)}}\leq c_0,
\end{equation*}
for a given constant $c_0>0$,
and satisfies the compatibility condition
\begin{equation*}
\displaystyle\int_{\partial\Omega}g\cdot n =0.
\end{equation*}
Also suppose that there exists a point $P\in\partial\Omega,$ such that,
\begin{equation*}
g=0\textrm{ on }\partial\Omega\cap B_{\rho_0}(P).
\end{equation*}
\item[\bfseries{(H5)}] Since one measurement $g$ is enough to detect the size of $D$, we choose $g$ in such a way
that the corresponding solution $u$ satisfies the following condition
\begin{equation}\label{bg}
\displaystyle\int_{\partial \Omega}\sigma(u,p)n=0.
\end{equation}
\end{enumerate}
Concerning assumption {\bfseries(H5)}, the following result holds.
\begin{proposition}
There exists at least one function $g$ satisfying $({\bf H4})$ and $(\bf {H5})$.
\end{proposition}
\begin{proof} Consider $(d+1)$ linearly independent functions $g_{i}$ satisfying $({\bf H4})$, $i=1,\ldots,d+1$.
Let
\begin{equation*}
\displaystyle\int_{\partial\Omega}\sigma(u_{i},p_{i})n=v_{i}\in\mathbb R^d,
\end{equation*}
where $(u_{i},p_{i})$ is the corresponding solution of \eqref{P1} associated to $g_{i}$, $i=1,\ldots,d+1$.
If, for some $i$, we have that $v_{i}=0$, then the result follows. So, assume that all the $v_{i}$ are different from the null vector.
Then, there exist some constants $\lambda_{i}$, with $i=1,\ldots,d+1$, not all zero,
such that
\begin{equation*}
\displaystyle\sum_{i=1}^{d+1}\lambda_{i}v_{i}=0
\end{equation*}
and we can choose our Dirichlet boundary data as
\begin{equation*}
g=\displaystyle\sum_{i=1}^{d+1}\lambda_{i}g_{i}.
\end{equation*}
Therefore, $g$ satisfies $({\bf H4})$ and since the Cauchy force is linear with respect to the Dirichlet boundary condition we have
\begin{equation*}
\displaystyle\int_{\partial\Omega}\sigma(u,p)n=0,
\end{equation*}
where $(u,p)$ is the corresponding solution to \eqref{P1}, associated to $g$.
\end{proof}
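The linear-algebra step in this proof is elementary and can also be checked numerically. The sketch below (Python with NumPy, $d=2$; the entries of the $v_i$ are placeholders) extracts coefficients $\lambda_i$ with $\sum_i\lambda_i v_i=0$ from the null space of the matrix whose columns are the $v_i$.
\begin{verbatim}
import numpy as np

# Columns are the Cauchy forces v_i in R^d for (d+1) boundary
# data g_i (here d = 2; the entries are placeholders).
V = np.array([[1.0, 0.3, -0.5],
              [0.2, -1.0, 0.7]])

# V has more columns than rows, so its null space is nontrivial;
# the right-singular vector of the smallest singular value spans it.
_, _, vh = np.linalg.svd(V)
lam = vh[-1]
print(V @ lam)   # ~ 0 up to round-off
\end{verbatim}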
\begin{remark}
Integrating the first equation of \eqref{P1} on $\Omega\setminus\overline{D}$, applying the Divergence Theorem and using (\ref{bg}), we obtain
\begin{equation}\label{bgob}
\int_{\partial D}\sigma(u,p)n = 0.
\end{equation}
\end{remark}
\begin{remark}\label{R1}
Notice that the constant $\rho$ in $({\bf H2})$ already incorporates information on the size of $D$. In fact, an easy computation shows that if $D$
has a boundary of class $C^{2,\alpha}$ with constants $\rho$ and $L$, then we have
\begin{equation*}
|D|\geq C(L)\rho^{d}.
\end{equation*}
Moreover, if also condition $({\bf H3})$ is satisfied, then it holds
\begin{equation*}
|D|\leq C(Q)\rho^{d}.
\end{equation*}
\end{remark}
\begin{remark}\label{R1bis} If $D$ satisfies $({\bf H2})$, then there exists a constant $h_1>0$ such that (see \cite{alessandrini1998})
\begin{equation}\label{DH}
|D_{h_1}|\geq \frac{1}{2}|D|,
\end{equation}
where we set, for any $A \subset \mathbb{R}^d$ and $h>0$,
\begin{equation*}
A_{h}=\{x\in A:\;d(x,\partial A)>h \}.
\end{equation*}
\end{remark}
\subsection{Main results}
\
\par
Under the previous assumptions we consider the following boundary value problems. When the obstacle $D$ in $\Omega$ is present,
the pair given by the velocity and the pressure of the fluid in $\Omega\setminus\overline{D}$
is the weak solution $(u,p)\in (H^{2}(\Omega\setminus\overline{D}))^{d}\times H^1(\Omega\setminus\overline{D})$ to
\begin{equation}\label{4}
\left\{\begin{array}{rllll}
-\mathrm{div}(\sigma(u,p))&=&0& ,& \textrm{ in }\Omega\setminus\overline{D},\\
\mathrm{div} u&=&0&,&\textrm{ in } \Omega\setminus\overline{D},\\
u&=&g&,& \textrm{ on }\partial\Omega,\\
u&=&0&,&\textrm{ on }\partial D.
\end{array}
\right.
\end{equation}
Then we can define the function $\psi$ by
\begin{equation}\label{2.6}
\displaystyle\psi=\sigma(u,p) n|_{\partial\Omega}\in (H^{1/2}(\partial\Omega))^{d}
\end{equation}
and the quantity
\begin{equation*}
W=\displaystyle\int_{\partial\Omega}(\sigma(u,p) n)\cdot u=\int_{\partial\Omega}\psi\cdot g.
\end{equation*}
When the obstacle $D$ is absent, we shall denote by $(u_0,p_0)\in (H^2(\Omega))^{d}\times H^1(\Omega)$ the unique weak solution to the Dirichlet problem
\begin{equation}\label{5}
\left\{\begin{array}{rllll}
-\mathrm{div}(\sigma(u_0,p_0))&=&0& ,& \textrm{ in }\Omega,\\
\mathrm{div} u_0&=&0&,&\textrm{ in } \Omega,\\
u_0&=&g&,& \textrm{ on }\partial\Omega.\\
\end{array}
\right.
\end{equation}
Let us define
\begin{equation}\label{2.8}
\displaystyle\psi_0=\sigma(u_0,p_0) n|_{\partial\Omega}\in (H^{1/2}(\partial\Omega))^{d},
\end{equation}
and
\begin{equation*}
W_0=\displaystyle\int_{\partial\Omega}(\sigma(u_0,p_0) n)\cdot u_0=\int_{\partial\Omega}\psi_0 \cdot g.
\end{equation*}
Our goal is to derive estimates of the size of $D$, $|D|$, in terms of $W$ and $W_0$.
\begin{theorem}\label{teo1}
Assume $\bf{(H1)}$, $\bf{(H2)}$, $\bf{(H4)}$ and $\bf{(H5)}$ . Then, we have
\begin{equation}\label{7}
|D|\leq\displaystyle K\left(\displaystyle\frac{W-W_0}{W_0}\right),
\end{equation}
where the constant $K$ depends on $\Omega, d, d_0, h_1, \rho_0, M_0, M_1$, and $\|g\|_{H^{1/2}(\partial\Omega)}/\|g\|_{L^{2}(\partial\Omega)}$.
\end{theorem}
\begin{theorem}\label{teo2}
Assume $\bf{(H1)}$, $\bf{(H2)}$, $\bf{(H3)}$ and $\bf{(H4)}$.
Then, it holds
\begin{equation}\label{8}
\displaystyle C\frac{(W-W_0)^2}{\| g\|_{H^{3/2}(\partial\Omega)}^2 W_{0}}\leq |D|,
\end{equation}
where $C>0$ depends on $M_1,\rho_0, d, d_0, \rho, L$, and $Q$.
\end{theorem}
\begin {corollary}\label{corol1}
Assume $\bf{(H1)}$--$\bf{(H5)}$. Then, there exist two positive constant $K$ and $C$ as in \eqref{7} and \eqref{8} such that
\begin{equation}\label{7bis}
C\frac{(W-W_0)^2}{\| g\|_{H^{3/2}(\partial\Omega)}^2 W_{0}}\leq |D|\leq K\left(\displaystyle\frac{W-W_0}{W_0}\right).
\end{equation}
\end{corollary}
\begin{remark}
We expect that a result similar to the one obtained in Corollary \ref{corol1} can be derived when we replace the Dirichet
boundary data with the condition
\begin{equation*}
\sigma(u,p)n = g, \quad {\rm on}\, \, \partial \Omega,
\end{equation*}
$g$ satisfying suitable regularity assumptions and the compatibility condition
\begin{equation*}
\int_{\partial\Omega} g = 0.
\end{equation*}
\end{remark}
\setcounter{equation}{0}
\section{\bfseries Proofs of the main theorems}\label{sec3}
\
\par
The main idea of the proof of Theorem \ref{teo1} is an application of a three spheres inequality. In particular, we apply a result
contained in \cite{lin2010optimal} concerning the solutions of the following Stokes system
\begin{equation}\label{9}
\left\{\begin{array}{rllll}
-\Delta u+A(x)\cdot\nabla u+B(x)u+\nabla p&=&0& ,& \textrm{ in }\Omega,\\
\mathrm{div} u&=&0&,&\textrm{ in } \Omega.
\end{array}
\right.
\end{equation}
Then it holds:
\begin{theorem}[Theorem 1.1 \cite{lin2010optimal}]
Consider $0\leq R_0\leq 1$ satisfying $B_{R_0}(0)\subset\Omega\subset\mathbb R^d$. Then, there exists a positive number $\tilde{R}<1$,
depending only on $d$, such that, if $0<R_1<R_2<R_3\leq R_0$ and $R_1/R_3<R_2/R_3<\tilde{R}$, we have
\begin{equation*}
\displaystyle\int_{|x|<R_2}|u|^2 dx\leq C\left(\int_{|x|<R_1}|u|^2 dx\right)^{\tau}\left(\int_{|x|<R_3}|u|^2 dx\right)^{1-\tau},
\end{equation*}
for $(u,p)\in(H^{1}(B_{R_0}(0)))^{d}\times H^{1}(B_{R_0}(0))$ solution to \eqref{9}. Here $C$ depends on $R_2/R_3$, $d$, and $\tau\in (0,1)$
depends on $R_1/R_3$, $R_2/R_3$, $d$. Moreover, for fixed $R_2$ and $R_3$, the exponent $\tau$ behaves like $1/(-\log R_1)$, when $R_1$ is sufficiently
small.
\end{theorem}
Based on this result, the following proposition holds:
\begin{proposition}[Lipschitz propagation of smallness, Proposition 3.1 \cite{ballerini2010stable}]\label{pro1}
Let $\Omega$ satisfy {\rm ({\bf H1})} and let $g$ satisfy {\rm ({\bf H4})}.
Let $u$ be a solution to the problem
\begin{equation}\label{10}
\left\{\begin{array}{rllll}
-\mathrm{div}(\sigma(u,p))&=&0& ,& \textrm{ in }\Omega,\\
\mathrm{div} u&=&0&,&\textrm{ in } \Omega,\\
u&=&g&,& \textrm{ on }\partial\Omega.\\
\end{array}
\right.
\end{equation}
Then, there exists a constant $s>1$, depending only on $d$ and $M_0$, such that for every $r>0$ there exists a constant $C_{r}>0$, such that for every $x\in \Omega_{s r}$, we have
\begin{equation}\label{11}
\displaystyle\int_{B_{r}(x)}|\nabla u|^{2}dx\geq C_{r}\int_{\Omega}|\nabla u|^{2}dx,
\end{equation}
where the constant $C_{r}>0$ depends only on $d,M_0,M_1,\rho_0,r, \displaystyle\frac{\|g\|_{H^{1/2}(\partial\Omega)}}{\|g\|_{L^{2}(\partial\Omega)}}$.
\end{proposition}
\smallskip
Following the ideas developed in \cite{alessandrini2002detecting}, we establish a key variational inequality relating the quantity
$W-W_0$, computed from the boundary measurements, to the $L^2$ norm of the gradient of $u_0$ inside the cavity $D$.
\begin{lemma}\label{l1}
Let $u_0\in (H^{1}(\Omega))^{d}$ be the solution to problem \eqref{5} and $u\in (H^{1}(\Omega\setminus\overline{D}))^{d}$ be the solution to problem \eqref{4}.
Then, there exists a positive constant $C=C(\Omega)$ such that
\begin{equation}\label{12}
\displaystyle\int_{D}|\nabla u_0|^{2}\leq C(W - W_0)=C \int_{\partial D}u_0\cdot\sigma(u,p)n,
\end{equation}
where $n$ denotes the exterior unit normal to $\partial D$.
\end{lemma}
\begin{proof}
Let $(u,p)$ and $(u_0,p_0)$ be the solutions to problems \eqref{4} and \eqref{5}, respectively.
We multiply the first equation of \eqref{4} by $u_0$ and, after integrating by parts, we have
\begin{equation}\label{3.5}
\displaystyle\int_{\Omega\setminus\overline{D}}\sigma(u,p)\cdot\nabla u_0-\int_{\partial\Omega}(\sigma(u,p) n)\cdot u_0+\int_{\partial D}(\sigma(u,p) n)\cdot u_0=0,
\end{equation}
where $n$ denotes either the exterior unit normal to $\partial\Omega$ or to $\partial D$.
In a similar way, multiplying the first equation of \eqref{5} by $u_0$, we obtain
\begin{equation}\label{3.6}
\displaystyle\int_{\Omega}\sigma(u_0,p_0)\cdot\nabla u_0-\int_{\partial\Omega}(\sigma(u_0,p_0) n)\cdot u_0=0.
\end{equation}
Now, replacing $\psi$ and $\psi_0$ into the equations \eqref{3.5}-\eqref{3.6}, we get
\begin{equation}\label{13}
\left\{\begin{array}{r}
\displaystyle\int_{\Omega\setminus\overline{D}}\sigma(u,p)\cdot\nabla u_0-\int_{\partial\Omega}\psi\cdot g+\int_{\partial D}(\sigma(u,p) n)\cdot u_0=0,\\
\displaystyle\int_{\Omega}\sigma(u_0,p_0)\cdot\nabla u_0-\int_{\partial\Omega}\psi_0\cdot g=0.
\end{array}
\right.
\end{equation}
Let us define
\begin{equation*}
\tilde{u}(x) = \left\{
\begin{array}{rl}
u&\textrm{ if }x\in\Omega\setminus\overline{D},\\
0&\textrm{ if }x\in\overline{D}.
\end{array}\right.
\end{equation*}
Since $u=0$ on $\partial D$, we have $\tilde{u}\in (H^1(\Omega))^{d}$.
So, multiplying \eqref{4} and \eqref{5} by $\tilde{u}$, we obtain
\begin{equation}\label{15}
\left\{\begin{array}{r}
\displaystyle\int_{\Omega\setminus\overline{D}}\sigma(u,p)\cdot\nabla \tilde{u}-\int_{\partial\Omega}\psi\cdot g+\underbrace{\int_{\partial D}(\sigma(u,p) n)\cdot\tilde{u}}_{=0}=0,\\
\displaystyle\int_{\Omega\setminus\overline{D}}\sigma(u_0,p_0)\cdot\nabla\tilde{u}-\int_{\partial\Omega}\psi_0\cdot g=0.
\end{array}
\right.
\end{equation}
Using the definition of $\sigma(u,p)$ in the first equation of \eqref{13}, we have
\begin{eqnarray*}
\label{3.9}
0=\displaystyle\int_{\Omega\setminus\overline{D}}\sigma(u,p)\cdot\nabla u_0-\int_{\partial\Omega}\psi\cdot g+\int_{\partial D}(\sigma(u,p) n)\cdot u_0\\
=\displaystyle\int_{\Omega\setminus\overline{D}}(2e(u)-pI)\cdot\nabla u_0-\int_{\partial\Omega}\psi\cdot g+\int_{\partial D}(\sigma(u,p) n)\cdot u_0\\
=\displaystyle\int_{\Omega\setminus\overline{D}}2e(u)\cdot\nabla u_0-\int_{\Omega\setminus\overline{D}}p(\mathrm{div} \ u_0)-\int_{\partial\Omega}\psi\cdot g+\int_{\partial D}(\sigma(u,p) n)\cdot u_0\\
=\displaystyle\int_{\Omega\setminus\overline{D}}2e(u)\cdot\nabla u_0-\int_{\partial\Omega}\psi\cdot g+\int_{\partial D}(\sigma(u,p) n)\cdot u_0,
\end{eqnarray*}
where we use the fact that $\mathrm{div} \, u_0=0$.
For the next step, we need a different expression for the term $e(u)\cdot\nabla u_0$.
We claim that, for every $v\in (H^1(\Omega))^{d}$ such that $\mathrm{div} \, v=0$, we have $e(u)\cdot\nabla v=e(u)\cdot e(v)$. Indeed,
\begin{equation*}
\begin{array}{rl}
2e(u)\cdot\nabla v&\displaystyle =\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\frac{\partial v_{i}}{\partial x_{j}}\\[0.4em]
&\displaystyle =\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\frac{\partial v_{i}}{\partial x_{j}}
+\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\frac{\partial v_{j}}{\partial x_{i}}\\[0.6em]
&\displaystyle =e(u)\cdot\nabla v+e(u)\cdot\nabla v^{T}=2e(u)\cdot e(v).
\end{array}
\end{equation*}
Therefore, equalities \eqref{13} and \eqref{15} can be rewritten as
\begin{eqnarray}
2\int_{\Omega\setminus\overline{D}}e(u)\cdot e(u_0)-\int_{\partial\Omega}\psi\cdot g+\int_{\partial D}u_0\cdot(\sigma(u,p) n)=0,\label{18}\\
2\int_{\Omega}|e(u_0)|^{2}-\int_{\partial\Omega}\psi_0\cdot g=0,\label{19}\\
2\int_{\Omega\setminus\overline{D}}|e(u)|^{2}-\int_{\partial\Omega}\psi\cdot g=0,\label{16}\\
2\int_{\Omega\setminus\overline{D}}e(u_0)\cdot e(u)-\int_{\partial\Omega}\psi_0\cdot g=0.\label{17}
\end{eqnarray}
We note that if we subtract \eqref{17} from \eqref{18} we get
\begin{equation}\label{20}
\displaystyle\int_{\partial\Omega}(\psi-\psi_0)\cdot g=\int_{\partial D}u_0\cdot(\sigma(u,p) n).
\end{equation}
Now, let us consider the quadratic form
\begin{equation*}
\begin{array}{rl}
\displaystyle\int_{\Omega}e(\tilde{u}-u_0)\cdot e(\tilde{u}-u_0)& \displaystyle =\int_{\Omega}|e(u_0)|^{2}+\int_{\Omega\setminus\overline{D}}|e(u)|^{2}-2\int_{\Omega\setminus\overline{D}}e(u)\cdot e(u_0)\\
&\displaystyle =\frac{1}{2}\int_{\partial\Omega}\psi_0\cdot g+\frac{1}{2}\int_{\partial\Omega}\psi\cdot g-\int_{\partial\Omega}\psi_0\cdot g\\
&\displaystyle =\frac{1}{2}\int_{\partial\Omega}(\psi-\psi_0)\cdot g.
\end{array}
\end{equation*}
By Korn's inequality there exists a constant $C=C(\Omega) >0,$ such that
\begin{equation*}
\displaystyle\int_{\Omega}|\nabla (\tilde{u}-u_0)|^2\leq C\int_{\Omega}|e(\tilde{u}-u_0)|^2.
\end{equation*}
Finally, by the chain of inequalities
\begin{equation*}
\begin{array}{rl}
&\displaystyle\displaystyle\int_{D}|\nabla u_0|^{2}=\int_{D}|\nabla(\tilde{u}-u_0)|^2 \displaystyle\leq\int_{\Omega}|\nabla(\tilde{u}-u_0)|^2 \\
&\displaystyle\leq C\int_{\Omega}|e(\tilde{u}-u_0)|^2=C\int_{\partial\Omega}(\psi-\psi_0)\cdot g = C(W-W_0),
\end{array}
\end{equation*}
and \eqref{20} the claim follows.
\end{proof}
Now, using the previous results, we are able to prove Theorem \ref{teo1}.
\begin{proof}
The proof is based on arguments similar to those used in \cite{alessandrini2002detecting} and \cite{alessandrini2004detecting}.
Let us consider the intermediate domain $\Omega_{d_{0}/2}$. Recalling that $d(D,\partial\Omega)\geq d_0$, we have $d(D,\partial\Omega_{d_{0}/2})\geq\frac{d_0}{2}.$
Let $\epsilon=\min\left(\frac{d_0}{2},\frac{h_1}{\sqrt{d}}\right) >0$. Let us cover the domain $D_{h_1}$ with cubes $Q_{l}$ of side $\epsilon$,
for $l=1,\ldots,N$. By the choice of $\epsilon$, the cubes $Q_{l}$ are contained in $D$. Then,
\begin{equation}\label{22}
\displaystyle\int_{D}|\nabla u_{0}|^{2}\geq\int_{\cup_{l=1}^{N}Q_{l}}|\nabla u_0|^{2}\geq\frac{|D_{h_1}|}{\epsilon^{d}}\int_{Q_{\overline{l}}}|\nabla u_0|^{2},
\end{equation}
where $\overline{l}$ is chosen in such way that
\begin{equation*}
\displaystyle\int_{Q_{\overline{l}}}|\nabla u_0|^{2}=\min_{l}\int_{Q_{l}}|\nabla u_0|^{2}>0.
\end{equation*}
We observe that the previous minimum is strictly positive because, if not, then $u_0$ would be constant in $Q_{\overline l}$.
Thus, from the unique continuation property, $u_0$ would be constant in $\Omega$ and since there exists a point $P\in\partial\Omega,$ such that,
\begin{equation*}
g=0\textrm{ on }\partial\Omega\cap B_{\rho_0}(P),
\end{equation*}
we would have that $u_0 \equiv 0$ in $\Omega$, contradicting the fact that $g$ is different from zero. Then, the minimum is strictly positive.
Let $\overline{x}$ be the center of $Q_{\overline{l}}$. From the estimate \eqref{11} in Proposition \ref{pro1} with $x=\overline{x}$,
$r=\frac{\epsilon}{2}$, we deduce
\begin{equation}\label{3.17}
\displaystyle\int_{Q_{\overline{l}}}|\nabla u_0|^2 \geq C\int_{\Omega}|\nabla u_0|^2.
\end{equation}
On account of Remark \ref{R1bis}, we obtain
\begin{equation}\label{3.18}
\displaystyle\int_{D}|\nabla u_0|^2 \geq \displaystyle\frac{\frac{1}{2}|D|}{\epsilon^{d}}C\int_{\Omega}|\nabla u_0|^2=|D| {C}\int_{\Omega}|\nabla u_0|^2.
\end{equation}
We estimate the right hand side of \eqref{3.18}. First, using \eqref{19} we have
\begin{eqnarray}\label{3.19}
\displaystyle\int_{\partial\Omega}\psi_0\cdot g&=2\int_{\Omega}|e(u_0)|^2=2\int_{\Omega}\frac{|\nabla u_0+\nabla u_0^{T}|^2}{4}\\
&=2\left(\int_{\Omega}\frac{|\nabla u_0|^2+|\nabla u_0^{T}|^2+2\nabla u_0 \cdot\nabla u_0^{T}}{4}\right).
\end{eqnarray}
Now, H\"older's inequality implies
\begin{equation}\label{3.20}
\displaystyle\int_{\partial\Omega}\psi_0\cdot g\leq 2\int_{\Omega}|\nabla u_0|^2.
\end{equation}
Then, coming back to \eqref{3.18}, we obtain that there exists a constant $K$, depending on $\Omega,d, d_0, h_1, \rho_0, M_0, M_1$,
and $\|g\|_{H^{1/2}(\partial\Omega)}/\|g\|_{L^{2}(\partial\Omega)}$ such that
\begin{equation}\label{3.21}
\displaystyle\int_{D}|\nabla u_0|^2 \geq |D| K\int_{\partial\Omega}\psi_0\cdot g.
\end{equation}
Combining \eqref{3.21} and Lemma \ref{l1} we have
\begin{equation}\label{3.22}
C\displaystyle\int_{\partial\Omega}(\psi-\psi_0)\cdot g \geq \int_{D}|\nabla u_0|^2\geq \left({K}\int_{\partial\Omega}\psi_0\cdot g\right)|D|.
\end{equation}
Therefore, we can conclude that
\begin{equation*}
|D|\leq \displaystyle {K}\frac{W-W_0}{W_0},
\end{equation*}
where ${K}$ is a constant depending on $\Omega,d, d_0, h_1, \rho_0,M_0, M_1$, and $\displaystyle\frac{\|g\|_{H^{1/2}(\partial\Omega)}}{\|g\|_{L^{2}(\partial\Omega)}}$.
\end{proof}
In order to prove Theorem \ref{teo2}, we make use of the following Poincar\'{e} type inequality.
\begin{proposition}[Proposition $3.2$ \cite{alessandrini2002detecting}]\label{pro2}
Let $D$ be a bounded domain in $\mathbb R^d$ of class $C^{2,\alpha}$ with constants $\rho,L$ and such that \eqref{3} holds.
Then, for every $u\in (H^{1}(D))^{d}$ we have
\begin{equation}\label{23}
\displaystyle\int_{\partial D}|u-\overline{u}|^{2}\leq \overline{C}\rho\int_{D}|\nabla u|^{2},
\end{equation}
where $\overline{u}=\displaystyle\frac{1}{|\partial D|}\int_{\partial D}u$
and the constant $\overline{C}>0$ depends only on $L,Q$.
\end{proposition}
Using this result and Lemma \ref{l1} we can prove now Theorem \ref{teo2}.
\begin{proof}
Let $\overline{u}_0$ be the following number
\begin{equation}\label{3.25}
\overline{u}_0=\displaystyle\frac{1}{|\partial D|}\int_{\partial D}u_0.
\end{equation}
Then, we deduce that
\begin{equation}\label{3.26}
\displaystyle\int_{\partial D}(\sigma(u,p) n)\cdot u_0=\int_{\partial D}(\sigma(u,p) n)\cdot u_0 - \int_{\partial D}(\sigma(u,p) n)\cdot\overline{u}_0,
\end{equation}
because of \eqref{bgob}. From equality \eqref{20} in Lemma \ref{l1}, we have
\begin{equation}\label{3.27}
W-W_0=\displaystyle\int_{\partial D}(\sigma(u,p) n)\cdot u_0=\int_{\partial D}(\sigma(u,p) n)\cdot (u_0-\overline{u}_0).
\end{equation}
Applying H\"older inequality in the right hand side of \eqref{3.27} we obtain
\begin{equation}\label{3.28}
W-W_0\leq \left(\int_{\partial D}|u_0-\overline{u_0}|^2\right)^{1/2}\left(\int_{\partial D}|\sigma(u,p) n|^2\right)^{1/2}.
\end{equation}
Now, using Poincar\'e inequality \eqref{23} in the first integral on the right hand side of \eqref{3.28}, we get
\begin{equation}\label{3.29}
W-W_0\leq C\left(\int_{D}|\nabla u_0|^2\right)^{1/2}\left(\int_{\partial D}|\sigma(u,p) n|^2\right)^{1/2},
\end{equation}
where $C>0$ depends on $|\Omega|, Q, \rho$ and $L$.
The first integral on the right hand side of \eqref{3.29} can be estimated as
\begin{equation}\label{3.30}
\displaystyle\int_{D}|\nabla u_0|^2 \leq |D|\sup_{D}|\nabla u_0|^2.
\end{equation}
Now, we need to give an interior estimate for the gradient of $u_0$. For this, we observe that, by the regularity of the Stokes problem, we
have $u_0\in (H^2(\Omega))^d$. Then, we may take the Laplacian of the second equation in \eqref{5}
\begin{equation*}
\Delta \mathrm{div} \ u_0=0.
\end{equation*}
Therefore, commuting the differential operators, we obtain that the pressure is a harmonic function. This implies that each component of
$u_0$ is a biharmonic function.
Then, using interior regularity estimates for fourth order equations, we deduce that
\begin{equation}\label{3.31}
\displaystyle\sup_{D}|\nabla u_0|\leq C\|u_0\|_{L^2(\Omega)},
\end{equation}
where the constant $C$ depends on $Q$, $|\Omega|$ and $d_0$. Estimate (\ref{3.31}) can be obtained considering the following results.
We know that the embedding from $H^4(\Omega)$ to $C^{k}(\Omega)$ is continuous for $0\leq k < 4-\frac{d}{2}$, with $d=2,3$. Then, in particular,
\begin{equation*}
\|u_0\|_{C^1(D)}\leq C \|u_0\|_{H^4(D)}.
\end{equation*}
Moreover, from the interior regularity of fourth order equations, see \cite[Th. 8.3]{morassi2007size}, we obtain
\begin{equation*}
\|u_0\|_{H^4(D)}\leq C\|u_0\|_{H^2(\Omega_{d_0/2})}.
\end{equation*}
Finally, considering the estimates in \cite{auscher2000equivalence} and \cite{boyer2012mathematical}, we have
\begin{equation*}
\|u_0\|_{H^2(\Omega_{d_0/2})}\leq C\|u_0\|_{L^2(\Omega_{d_0/4})}\leq C\|u_0\|_{L^2(\Omega)},
\end{equation*}
and \eqref{3.31} holds. We refer to \cite{auscher2000equivalence,barton2014gradient,cordes1956erste}, and references therein, for more details on interior estimates for elliptic operators.
As the boundary data $g$ satisfies $\bf{(H4)}$, we use the classical Poincar\'e inequality and obtain
\begin{equation}\label{3.33}
\displaystyle \|u_0\|_{L^2(\Omega)}\leq C\|\nabla u_0\|_{L^2(\Omega)}.
\end{equation}
Therefore, by means of the inequality $\displaystyle\int_{\Omega}|\nabla u_0|^2\leq C\displaystyle \int_{\partial\Omega}\psi_0\cdot g$, we deduce
\begin{equation}
\displaystyle\left(\int_{D}|\nabla u_0|^2\right)^{1/2}\leq C |D|^{1/2} W_0^{1/2}.
\end{equation}
Now, concerning the second integral in \eqref{3.29} we note that from the Trace Theorem it follows
\begin{equation}\label{26}
\| \sigma(u,p) n\|_{L^2(\partial D)}\leq C (\| u\|_{H^2(\Omega\setminus\overline{D})}+\| p\|_{L^2(\Omega\setminus\overline{D})}),
\end{equation}
and applying Theorem \ref{TSR} we obtain the inequality
\begin{equation}\label{27}
\| \sigma(u,p) n\|_{L^2(\partial D)}\leq C(\| u\|_{H^2(\Omega\setminus\overline{D})}+\| p\|_{L^2(\Omega\setminus\overline{D})})\leq C \| g\|_{H^{3/2}(\partial\Omega)}.
\end{equation}
Therefore, it holds
\begin{equation*}
C\displaystyle\frac{(W-W_0)^2}{\| g\|_{H^{3/2}(\partial\Omega)}^2W_0}\leq |D|,
\end{equation*}
where $C$ depends on $M_1, \rho_0, d, d_0, \rho, L$ and $Q$. This completes the proof.
\end{proof}
We conclude the section by observing that the proof of Corollary \ref{corol1} is a straightforward consequence of Theorem \ref{teo1} and Theorem \ref{teo2}.
\section{\bfseries Computational examples}\label{sec4}
\
\par
In this section we perform some numerical experiments to compute $|\frac{W-W_0}{W_0}|$ for classes of cavities for which our result holds.
In particular, we expect to collect numerical evidence that the ratio between $\frac{|D|}{|\Omega|}$ and $|\frac{W-W_0}{W_0}|$ is bounded from below and above
by two constants, and that, due to the limits of our technique, the estimate from below is not optimal.
Indeed, the numerical experiments we perform give some preliminary indications that this conjecture is true.
Moreover, we are interested in studying the dependence of this ratio on $d_0$, which bounds from below the distance of $D$ from $\partial\Omega$, and
on the size of the inclusions.
A more systematic analysis would require the knowledge of explicit solutions $u$ and $u_0$. This would allow one to compute analytically the constants in the upper
and lower bounds, at least for some particular geometries. Contrary to the case in \cite{alessandrini2002detecting}, for
the Stokes system it is difficult to find explicit solutions.
For the experiments we use the free software \emph{FreeFem++} (see \cite{MR3043640}). Moreover, in all numerical tests we consider a square domain $\Omega$, discretized with a mesh of $100\times 100$ elements, and with boundary condition $u|_{\partial\Omega}=g$ as in Figure $4.1$. The datum $g$
satisfies the assumptions ${\bf (H4)}$ and ${\bf (H5)}$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\textwidth]{figure0.eps}
\caption{Square domain in $2$-D with boundary condition $g$.}
\end{center}
\end{figure}
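The computations reported here were carried out with \emph{FreeFem++} as described above; for readers more familiar with Python, an equivalent Taylor--Hood discretization can be sketched with the legacy FEniCS library as follows. The mesh and the boundary datum are placeholders (the datum is only required to satisfy $\int_{\partial\Omega}g\cdot n=0$), not the ones of Figure $4.1$.
\begin{verbatim}
from dolfin import *

# Taylor-Hood (P2/P1) discretization of the unperturbed problem (5);
# the cavity problem (4) would add a no-slip condition on an
# internal boundary.
mesh = UnitSquareMesh(100, 100)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([P2, P1]))

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
a = (2.0*inner(sym(grad(u)), sym(grad(v))) - p*div(v) - q*div(u))*dx
L = inner(Constant((0.0, 0.0)), v)*dx

# Placeholder compatible datum: net flux through the boundary is 0.
g = Expression(("x[1]*(1 - x[1])", "0.0"), degree=2)
bc_u = DirichletBC(W.sub(0), g, "on_boundary")
# Pin the pressure at one point, since it is defined up to a constant.
bc_p = DirichletBC(W.sub(1), Constant(0.0),
                   "near(x[0], 0.0) && near(x[1], 0.0)", "pointwise")

w = Function(W)
solve(a == L, w, [bc_u, bc_p])
u0, p0 = w.split()

# W_0 = 2 int_Omega |e(u_0)|^2, cf. the identity in Section 1.
print(assemble(2.0*inner(sym(grad(u0)), sym(grad(u0)))*dx))
\end{verbatim}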
The first series of numerical tests has been performed by varying the position and the size of a circular inclusion $D$ with volume up to $8\%$ of the total size of the domain.
In particular, we consider circular inclusions with volume $0.2\%$, $3.1\%$ and $7.1\%$ with respect to $|\Omega|$. We have placed these circles in eight different positions, see Figure $4.2$.
The results are collected in Figure $4.3$, $4.4$ and $4.5$, for different values of the distance $d_0$ between the object $D$ and the boundary of $\Omega$.
Also, the averages of all these simulations are collected in Figure $4.6$.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{figure10.eps}
\caption{The eight positions of the circular inclusion $D$.}
\end{figure}
In order to compare our numerical results with the theoretical upper and lower bounds \eqref{7} and \eqref{8}, it is interesting to study the relationship between $\frac{|D|}{|\Omega|}$ and $|\frac{W-W_0}{W_0}|$.
As we expected from the theory, the points $(\frac{W-W_0}{W_0},\frac{|D|}{|\Omega|})$ are confined inside an angular sector delimited by two straight lines.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figure1.eps}
\caption{Case $d_0=5$ for the circular inclusion.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figure2.eps}
\caption{Case $d_0=3$ for the circular inclusion.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figure3.eps}
\caption{Case $d_0=2$ for the circular inclusion.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figure5.eps}
\caption{Averages of the ratio $\frac{W-W_0}{W_0}$ with different $d_0$ for the circular inclusion.}
\end{figure}
However, it is quite clear that when $d_0$ decreases the lower bound becomes worse. To illustrate this situation, we also simulate the case $d_0=1$,
see Figure $4.7$.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figure4.eps}
\caption{Case $d_0=1$ for the circular inclusion.}
\end{figure}
As a second class of experiments, we consider what happens when the size of the circle increases.
In this case we observe that the quantity $|\frac{W-W_0}{W_0}|$ grows rapidly when the volume occupies almost the entire domain. The results are collected in Figure $4.8$.
Again we observe a clear relationship between the volume of the object and the quotient $(W-W_0)/W_0$. This gives us an indication that the estimates found in Theorems \ref{teo1} and \ref{teo2}
involve constants that do not depend on the inclusion.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figure9.eps}
\caption{Influence of the size of the circle.}
\end{figure}
\begin{remark}
From the previous analysis, an interesting problem would be to find optimal lower and upper bounds for this model.
Another interesting issue would be to weaken the a-priori assumptions imposed on the obstacle, as for example the
fatness condition (see, for instance, \cite{cristo2013size}, where this restriction is removed in the case of the shallow shell equations).
\end{remark}
\bigskip
\noindent \textbf{Acknowledgements.} This work was partially supported by PFB03-CMM and Fondecyt 1111012.
The work of E. Beretta was supported by GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni) of INdAM
(Istituto Nazionale di Alta Matematica) and part of it was done while the author was visiting New York University Abu Dhabi.
The work of C. Cavaterra was supported by the FP7-IDEAS-ERC-StG \#256872 (EntroPhase) and by GNAMPA (Gruppo Nazionale per l'Analisi Matematica,
la Probabilit\`a e le loro Applicazioni) of INdAM.
Part of this work was done while J. Ortega was visiting the Departamento de Matem\'atica, Universidad Aut\'onoma de Madrid - UAM and the Instituto de Ciencias
Matem\'aticas ICMAT-CSIC, Madrid, Spain.
The work of S. Zamorano was supported by CONICYT-Doctorado nacional 2012-21120662. Part of this work was done while S. Zamorano was visiting the
Basque Center for Applied Mathematics and was partially supported by the Advanced Grant NUMERIWAVES/FP7-246775 of the European Research Council Executive Agency,
the FA9550-15-1-0027 of AFOSR, the MTM2011-29306 and MTM2014-52347 Grants of the MINECO.
\section{\label{sec: intro} Introduction}
The discovery of kHz QPOs in the flux from certain X--ray burst sources has
prompted a substantial amount of work in connection with accretion physics
and structure properties of the central accretors in such systems. In
particular, these
oscillations have been used to derive estimates of the mass of the neutron
star in X--ray binaries (Kaaret, Ford \& Chen 1997; Zhang, Strohmayer \&
Swank 1997; Klu\'{z}niak 1998). All these estimates, based on the beat
frequency model, tacitly assume that the highest QPO frequency of 1.22 kHz
observed so far (in the source 4U 1636--53; Zhang et al. 1997) can
be identified with the Keplerian orbital frequency corresponding to the
marginally stable orbit associated with the neutron star. Beat frequency
models require that the difference in frequencies between the twin QPO
peaks be the spin frequency of the neutron star and that this remain constant.
However, further observations have shown that there exist microsecond
lags in the QPO difference frequencies in many sources implying that an
exact beat frequency mechanism may not be at work. Recently,
Osherovich \& Titarchuk (1999a), Titarchuk \& Osherovich (1999), Osherovich
\& Titarchuk (1999b) have developed alternative models unifying the
mechanism for production of low frequency QPOs and that for high frequency
QPOs. This model requires the lower frequency QPO to be due to Keplerian
circulation of matter in the disk and the higher frequency one to be hybrid
between the lower frequency and the rotational frequency of the stellar
magnetosphere. Li et al. (1999b) have suggested that if such a model is
taken recourse of, then the compact star in the source 4U 1728 -- 34 may
possibly be a strange star.
The possible existence of a new sequence of degenerate compact stellar
objects, made up of light mass u, d and s quarks, has been suggested
(Witten 1984; Haensel, Zdunik \& Schaeffer 1986; Alcock, Farhi \&
Olinto 1986) for quite sometime now, based on ideas from particle
physics which indicate that a more fundamental description of hadronic
degrees of freedom at high matter densities must be in terms of their quark
constituents. For energetic reasons, a two--component (u,d) quark
matter is believed to convert to a three--component (u,d,s) quark
matter in beta equilibrium. As suggested by Witten (1984), the latter
form of matter could be the absolute ground state of strongly interacting
matter rather than $^{56}Fe$.
Because of the important role played by the confinement forces in
quantum chromodynamics (QCD) to describe the quark interactions,
the mass--radius relationship for stable strange
stars differ in an essential manner from that of neutron stars (Haensel,
Zdunik \& Schaeffer 1986; Alcock, Farhi \& Olinto 1986).
Recent work (Cheng et al. 1998; Li et al. 1999a, Li et al. 1999b) seems to
suggest that a consistent explanation of the
observed features of the hard X--ray burster GRO J 1744 -- 28,
the transient X--ray burst source SAX J 1808.4 -- 3658 and the source
4U 1728 -- 34 is possible only
in terms of an accreting strange star binary system.
A new class of low--mass X--ray binaries, with strange star as the central
compact object (SSXBs), is thus an interesting astrophysical possibility that
merits study. Some consequences of the SSXB hypothesis for the properties
of bulk strange matter have been discussed recently by Bulik,
Gondek-Rosi\'{n}ska and Klu\'{z}niak (1999) (see also Schaab \& Weigel 1999).
The compact nature of the sources make general relativity important in
describing these systems. Furthermore, their existence in binary systems
implies that these may possess rapid rotation rates (Bhattacharya \& van den
Heuvel 1991 and references therein). These two properties make the
incorporation of general relativistic effects of rotation imperative
for satisfactory treatment of the problem.
General relativity predicts the existence of marginally stable orbits
around compact stars. For material particles within the radius of
such orbits, no Keplerian orbit is possible and the particles will
undergo free fall under gravity. This radius ($r_{ms}$) can
be calculated for equilibrium sequences of rapidly rotating
strange stars in a general relativistic space--time in the same
way as for neutron stars (Datta, Thampan \& Bombaci 1998).
In this letter, we calculate the Keplerian frequency of matter revolving
around rapidly rotating strange stars.
The present results, together with those obtained assuming
a neutron star as the central accretor (Thampan et al. 1999),
demonstrate that QPO frequencies in the range (1.9-3.1) kHz can be
interpreted in terms of a non-magnetized SSXB rather than a NSXB.
Future discovery of such high frequency QPOs from X--ray
burst sources will constitute a new astrophysical diagnostic for SSXBs.
In section (\ref{sec: gr}) we very briefly discuss the formalism used to
construct rapidly rotating strange star sequences and to compute the
Kepler frequencies around such objects. Section (\ref{sec: eos}) provides
a brief outline of the equation of state (EOS) models used by us.
In section (\ref{sec: res}) we discuss the results and conclusions.
\section{\label{sec: gr} Calculations}
We use the methodology described in detail in Datta, Thampan \& Bombaci (1998)
to calculate the structure of rapidly rotating strange stars. For
completeness, we briefly describe the method here. For a general
axisymmetric and stationary space--time, assuming a perfect fluid
configuration, the Einstein field equations reduce to ordinary integrals
(using a Green's function approach). These integrals may be self consistently
(numerically and iteratively) solved to yield the value of metric coefficients
in all space. Using these metric coefficients, one may then compute the
structure parameters, moment of inertia and angular momentum corresponding to
initially assumed central density and polar to equatorial radius ratio.
The values of the structure parameters and the metric coefficients, so
computed, may then be used (as described in Thampan \& Datta 1998) to
calculate parameters connected with stable circular orbits (like the innermost
stable orbit and the Keplerian angular velocities) around the configuration
in question.
\section{\label{sec: eos} Strange star equations of state}
For the purpose of this letter, we have calculated the relevant quantities
of interest corresponding to three different equation of state
(EOS) models for strange stars. Two of these equations of state are based
on the MIT bag model (Chodos et al. 1974)
with the following values for the bag pressure ($B$), the strange quark mass
($m_s$) and the QCD structure constant ($\alpha_c$):
(i) $B=90$~MeV~fm$^{-3}$, $m_s=0$~MeV and $\alpha_c=0$;
(ii) $B=56$~MeV~fm$^{-3}$, $m_s=150$~MeV, with the short range
quark--quark interaction incorporated perturbatively to second order in
$\alpha_c$ according to Freedman \& McLerran (1978) and Goyal \& Anand (1990).
Next we considered a phenomenological model by Dey et al. (1998) (model (iii))
that has the
basic features of QCD (namely, quark confinement and asymptotic freedom), but
employs a potential description for the interaction.
These models for the EOS are quite divergent in their approach, so that the
conclusions presented here using these will be of sufficient generality.
\section{\label{sec: res} Results and Conclusions}
\input{psbox.tex}
\begin{figure}
{\mbox{\psboxto(9cm;15cm){fig1.ps}}}
\caption{The Kepler frequency $\nu_K$ corresponding to the innermost
`allowed' orbit as a function of the gravitational mass $M$ of the strange star.
The three curves: solid, dotted and dashed are, respectively, for three values
of the strange star spin frequency $\nu_s$, namely, 0, 200 and 580 Hz. The
vertical dot--dashed line represents a 1~$M_{\odot}\ $ configuration.
Each panel corresponds to one of the EOS models described in the text.}
\end{figure}
For the EOS models described in the previous section, we calculate the
Keplerian frequencies corresponding to the innermost `allowed' orbits
(as given by general relativity) for
rotating strange stars, and obtain their relationship with QPO
frequencies in the kHz range, assuming the SSXB scenario.
The inner edge of the accretion disk may not always be coincident with
$r_{ms}$, but there can be instabilities in the disk that can relocate it
outside of $r_{ms}$. If the radius ($R$) of the strange star is larger than
$r_{ms}$, the innermost possible orbit will be at the surface of the strange
star. It must be mentioned here that rotation of the central accretor is an
important consideration because the accretion driven angular momentum transfer
over dynamical timescales can be quite large (Bhattacharya \& van den
Heuvel 1991). Because the values of
$r_{ms}$ and the mass of the spinning strange star will depend on two
independent parameters, namely, the central density ($\rho_{c}$) of the
star and its spin frequency ($\nu_{s}$), a range of values of
($\rho_{c}$,$\nu_{s}$) will exist that will allow solutions for a Keplerian
frequency corresponding to any specified value of the QPO frequency.
The variation of the Keplerian frequency ($\nu_{K}$) of the innermost
`allowed' orbit with respect to the gravitational mass (M) of the spinning
strange star is shown in Fig. 1.
For purpose of illustration, we have chosen three values of
$\nu_{s}$ : 0 (the static limit), 200 Hz and 580 Hz (the last rotation
rate inferred from the X--ray source 4U 1636--53 as given by Zhang et al.
1997, using beat frequency model).
It can be noted from Fig. 1 that all the curves have a cusp.
For any curve, the nearly flat part (to the left of the cusp)
corresponds to the case $ R \ge r_{ms} $, and the descending part
(to the right of the cusp) corresponds to the case $ R \le r_{ms} $.
These are the only possibilities for the location of $r_{ms}$ with respect
to the stellar surface.
The highest kHz QPO frequency observed so far is 1.22 kHz, exhibited
by the source 4U 1636--53.
Fig. 1 shows that only the maximum mass end of the curve for non--rotating
configuration described by EOS model (ii) attains the value
$\nu_{K} = 1.22$~kHz.
A simple analysis, relating the minimum value of $\nu_{\rm K}$ to the
bag constant (see Fig. 1 for EOS (i)) in the case of
non-rotating strange stars
within the MIT bag model EOS for massless non-interacting quarks gives
$\nu_{K}(r_{ms},M_{max}) = 1.081 (B/56)^{1/2}$~kHz,
where $B$ is in MeV~fm$^{-3}$. The lowest possible value for $B$,
which is compatible with Witten's hypothesis (Witten 1984),
is $56$~MeV~fm$^{-3}$.
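A quick numerical reading of this scaling (Python; the values of $B$ are
those of the EOS models of the previous section) is:
\begin{verbatim}
import numpy as np

# nu_K(r_ms, M_max) = 1.081 (B/56)^(1/2) kHz, with B in MeV fm^-3
for B in (56.0, 90.0):
    print(B, 1.081 * np.sqrt(B / 56.0))   # 1.081 kHz and ~1.37 kHz
\end{verbatim}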
Finite values of $m_s$, $\alpha_c$, and $\nu_{s}$ increase the value of
$\nu_{K}(r_{ms},M_{max})$ with respect to the previous case.
This implies that, if one adheres to the restrictive
assumption that $\nu_{\rm QPO} = 1.22$~kHz in the X--ray source 4U 1636--53
is generated at the marginally stable orbit of the central compact star
(with $r_{ms}>R$), then the latter being a strange star is an admissible
solution only for low values of the bag constant
and for very slowly rotating configurations of the star.
Next we investigate the possibility that the kHz QPO frequency is
generated at locations outside the marginally stable orbit.
Since $\nu_{K}(r)$ is a decreasing function of $r$, $\nu_{K}=1.22$ kHz
in SSXBs will occur at $r>r_{ms}$, that is, somewhere in the accretion
disk and not at the disk inner edge.
In Fig. 2 we show the plot of the Keplerian frequency profiles
$\nu_{K}(r)$ of test particles around a (rotating) strange star of one
solar mass (for the same values of the rotation rates as before).
This figure shows that the radial location in the disk, where
a solution : $\nu_{K}=1.22$ kHz occurs in a SSXB, is about $4.5 r_{g}$,
where $r_{g}=2GM/c^{2}$ is the Schwarzschild radius of the strange star.
A similar analysis for $M=1.4$~$M_{\odot}\ $ yields $r_{1.22}$ (radius at which
$\nu_{\rm QPO}=1.22$~kHz is produced) in the range
($3.53$, $3.55$)~$r_{g}$, the higher value being that for the non--rotating
configuration and the lower for $\nu_s = 580$~Hz.
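These locations can be checked against the orbital frequency
$\nu_K=(2\pi)^{-1}\sqrt{GM/r^3}$, which holds exactly in Schwarzschild
coordinates for a static star; rotation shifts it only slightly here.
A short numerical check (Python, SI units) for $M=1$~$M_{\odot}\ $:
\begin{verbatim}
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M = 1.0 * Msun
rg = 2.0 * G * M / c**2                 # Schwarzschild radius
nu_K = np.sqrt(G * M / (4.5 * rg)**3) / (2.0 * np.pi)
print(nu_K)                             # ~1.2e3 Hz, i.e. ~1.22 kHz
\end{verbatim}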
It is interesting to ask what range of $\nu_{K}$ obtains for a
specified value of the strange star mass.
From Fig. 1, it can be seen that the values of $\nu_{K}$
for SSXB, for a one solar mass strange star, lie in the range
(2.2--2.3) kHz for EOS model (i), (1.8--1.9) kHz for EOS model (ii)
and (2--2.6) kHz for EOS model (iii).
The first two ranges of kHz QPOs occur at $r=R$, while
the third at $r=r_{ms}$.
For $M=1.4$~$M_{\odot}\ $, these ranges are: (1.57--1.84), (1.57--1.87) and
(1.57--1.79), respectively for EOS models (i), (ii) and (iii).
The similarity in these ranges is due to $r_{ms} > R$ for all these
configurations.
It also follows from Fig. 1 that the EOS model (iii) gives the
maximum value of $\nu_{K}$, namely, 3 kHz.
\begin{figure}
\hspace{-1.5cm}
{\mbox{\psboxto(9cm;15cm){fig2.ps}}}
\caption{Radial variation of $\nu_K$ for a $1$~$M_{\odot}\ $ configuration. On the
x--axis is the radial distance ($r$)
scaled with the Schwarzschild radius ($r_g=2GM/c^2$). The various curves have
the same meaning as in Fig. 1. Where the dotted/dashed curves are not
visible, they merge with the solid curve for the non--rotating configuration.
The horizontal dot--dashed curve corresponds to $\nu_K=1220$~Hz, the highest
QPO frequency observed to date from the X--ray source 4U 1636$-$53. The
$\nu_K=1220$~Hz line intersects the curves (in all cases) at $r = 4.5r_g$.}
\end{figure}
The most interesting result ensues if a comparison is made of Fig. 1 with
its counterpart for the case of a NSXB.
A detailed calculation of the latter was reported recently by
Thampan, Bhattacharya \& Datta (1999), using realistic EOS models.
This calculation showed that the maximum theoretically expected value of
$\nu_{\rm QPO}$ for NSXBs is 1.84 kHz. Therefore, values of
$\nu_{\rm QPO}$ in excess of $1.84$~kHz, if observed, cannot
be understood in terms of a NSXB. The SSXB scenario is a more
likely one for these events (assuming that generation of X--ray
bursts is possible on strange star surfaces); this will constitute
a new astrophysical diagnostic for the existence of strange stars
in our galaxy.
\section{Introduction}
The growing complexity of Josephson junctions (JJs) in terms of layout and materials has significantly increased the ``parameter space'' for a full understanding and control of their properties~\cite{Barone1982,Tafuri2019}. Ferromagnetic (SFS) JJs are a unique platform to integrate the coherent quantum nature of superconductors and ferromagnets into new mechanisms and new smart tunable functionalities. The rich literature has established several key elements, which arise when superconducting pair correlations traverse the exchange field of a ferromagnet~\cite{Ryazanov2001,Golubov2004,Buzdin2005,Bergeret2005,Blamire2014}. JJs with multiple F-layer barriers have been theoretically and experimentally studied in connection with unconventional triplet superconductivity with equal-spin Cooper pairs, which can be artificially generated in these structures~\cite{Buzdin2005,Bergeret2005,Robinson2010,Khaire2010,Sprungmann2010,Eschrig2011,Banerjee2014,Linder2015,Singh2015,Eschrig2015,Martinez2016}. The spin-aligned triplet Cooper pairs are immune to the exchange field of the F layer and can carry a non-dissipative spin current. Therefore, spin-triplet Cooper pairs constitute the essential element for the emerging field of superconducting spintronics~\cite{Robinson2010,Khaire2010,Eschrig2011,Linder2015,Eschrig2015,Martinez2016}.
It is well established that spin-polarized triplet pairs are generated via spin-mixing and spin-rotation processes at magnetically inhomogeneous S/F interfaces~\cite{Bergeret2005,Eschrig2011,Eschrig2015,Linder2015,Houzet2007,Cascales2019}. Evidence of equal-spin triplets has been reported in S-F'-F-F''-S JJs, where the F', F'' spin-mixer layers mediate the conversion of singlet to triplet pair correlations~\cite{Robinson2010,Khaire2010,Sprungmann2010,Banerjee2014,Iovan2014,Linder2015,Singh2015,Martinez2016,Massarotti2018}. Recently, theoretical and experimental studies have been dedicated to an alternative mechanism for triplet pair generation involving spin-orbit coupling (SOC) in combination with a magnetic exchange field~\cite{Edelshtein1989,Banerjee2018,KunRok2020}. These systems may benefit from the capability to generate controllable spin-polarized supercurrents with a single ferromagnetic layer, compared to magnetically textured JJs.
A strong piece of evidence for the presence of spin-triplet supercurrents in a JJ is the slower decay of the characteristic voltage of the junction with increasing F layer thickness~\cite{Robinson2010,Khaire2010,Sprungmann2010,Eschrig2011,Anwar2012,Blamire2014,Linder2015,Eschrig2015,Singh2015,Martinez2016}, due to the robustness of spin-triplet Cooper pairs to the exchange field. However, it has been suggested that a mechanism of phase-compensation can arise in clean S/F heterostructures, which may cancel the destructive interference effect due to the exchange field on conventional spin-singlet pairing~\cite{Melnikov2012,Iovan2014}. Therefore, conclusive evidence for the spin-triplet nature of the supercurrent could be supported by the capability to distinguish singlet and triplet components. The capability of quantifying the amount of spin-polarized supercurrents remains a fundamental benchmark to further prove triplet correlations and a key step towards real applications.
In parallel with the work on diffusive ferromagnets, superconducting tunnel junctions with ferromagnetic insulator (FI) barriers, namely spin-filter JJs \ce{NbN}/\ce{GdN}/\ce{NbN}, have revealed unique transport properties, such as spin-polarization phenomena~\cite{Senapati2011,28,Pal2014,Ahmad2020JOSC}, an interfacial exchange field in the superconducting layer~\cite{Pal2015,Zhu2016}, macroscopic quantum tunneling~\cite{Massarotti2015} and an unconventional incipient $0$-$\pi$ transition~\cite{Caruso2019}. They are especially well-suited for implementation in superconducting circuits in which very low dissipation is required~\cite{Bannykh2009,Wild2010,Kawabata2006,Kawabata2010,Feofanov2010,Zhu2017,Ahmad2020}. In these systems, evidence of spin-triplet transport has been reported~\cite{Pal2014,Blamire2014,Pal2017,Caruso2019}.
Here we build on a previous study of the critical current $I\ped c$ as a function of the temperature $T$ in \ce{NbN}-\ce{GdN}-\ce{NbN} JJs\cite{Caruso2019}, now performed in the presence of a magnetic field, used as an unambiguous knob to demonstrate the coexistence and tuning of singlet and triplet components. By using a tight-binding Bogoliubov--de Gennes approach~\cite{Furusaki1994, Asano2019, Asano2001}, we model the $I\ped c(T)$ curves in the whole temperature range, along with the corresponding current-phase relation (CPR) as a function of the temperature $T$. Here we demonstrate alternative accurate methods to assess the spin-triplet transport, which can be extended to different types of JJs: the amount of spin-singlet and -triplet correlations can be quantified and parametrized for the first time in terms of the disorder parameter and spin-mixing mechanisms through a direct fitting of experimental data. The non-monotonic behavior of the $I\ped{c}(T)$ curves turns out to be the benchmark for the coexistence of spin-singlet and spin-triplet superconductivity in SFIS junctions. When the $I\ped c(T)$ curve shows a plateau over a wide range of temperatures, the competition between the singlet and triplet pairing amplitudes becomes significant, in both s-wave and p-wave symmetries. This behavior sets in due to the combined effects of impurities and spin-mixing mechanisms. When the $I\ped{c}(T)$ curve exhibits an incipient $0$-$\pi$ transition, i.\,e. a local minimum is observed instead of the typical cusp~\cite{Ryazanov2001,Buzdin2005,Feofanov2010,Wild2010}, the equal-spin triplet component is gradually suppressed, becoming irrelevant in the limit case of a more standard cusp-like $0$-$\pi$ transition. This last situation corresponds to relatively low values of disorder and spin-mixing effects.
An external magnetic field perpendicular to the Josephson transport direction gradually modifies the experimental $I\ped c(T)$ curves, i.e. a plateau extended over a wide range of temperatures evolves into a non-monotonic behavior by increasing the magnetic field. Therefore, we provide the first experimental evidence of an ``in situ'' tuning of the relative weight between spin-singlet and spin-triplet supercurrents in single-layered SFIS JJs, which is explained in terms of a reduced disorder parameter in the presence of a magnetic field for a multi-domain ferromagnet. The ability to describe the combined effect of magnetic inhomogeneities and disorder in complex barriers, with clear benchmarks on the phenomenology of the junctions, can be of reference for a variety of structures.
\section{Results}
The junctions under study are the \ce{NbN}/\ce{GdN}/\ce{NbN} trilayers reported in Ref.~\cite{Caruso2019}, with a special focus on devices with thick FI layers. The superconducting electrodes have thicknesses of $\SI{100}{\nm}$ and the analyzed FI thicknesses are: $d\ped F=\SI{3.0}{\nm}$, $\SI{3.5}{\nm}$ and $\SI{4.0}{\nm}$. We here report measurements of the $I\ped c(T)$ curves performed down to $\SI{20}{\milli\K}$ and measurements of the $I\ped c(T)$ curves as a function of an external magnetic field applied in the plane of the JJs. A sketch of the JJs is reported in Fig.~\ref{Fig_4} (a).
We model the experimental results using a tight-binding Bogoliubov--de Gennes (BdG) Hamiltonian on a two-dimensional lattice~\cite{Madelung2012,Ashcroft1976}. A schematization of the 2D-lattice model is reported in Fig.~\ref{Fig_4} (b), where $L$ is the length of the FI barrier, $W$ is the width of the junction expressed in lattice units. Conventional s-wave superconductivity in the \ce{NbN} electrodes is assumed~\cite{Barone1982}, while the Hamiltonian of the FI barrier (\ce{GdN}) includes the electron hopping~\cite{Madelung2012,Ashcroft1976}, an exchange field $h$ with small local fluctuations in the amplitude, which couples electrons-spin~\cite{Madelung2012,Ashcroft1976}, and an on-site impurity potential with random strength $V\ped{imp}$~\cite{Madelung2012,Ashcroft1976}. A spin-orbit coupling (SOC)~\cite{Madelung2012,Ashcroft1976}, whose strength is $\alpha$, is used to introduce spin-symmetry breaking and efficiently mimics the presence of magnetic inhomogeneities in the barrier, which are more likely to occur in devices with large areas~\cite{Eschrig2011,Blamire2014}. The junctions under study, in fact, are characterized by areas of $\sim \SI{50}{\micro\m^2}$. The Josephson current $J$ at finite temperature $T$ is derived from the Matsubara Green’s function (GF) of the FI barrier, calculated with the recursive Green's function (RGF) technique~\cite{Furusaki1994,Asano2001,Asano2019}.
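To make the model concrete, a minimal one-dimensional sketch of such a BdG Hamiltonian is given below. This is illustrative only and not the code used for the reported simulations: the parameter values are hypothetical, a real spin-flip hopping stands in for the SOC term, and direct diagonalization replaces the RGF technique.
\begin{verbatim}
# 1D S-FI-S chain in Bogoliubov-de Gennes form: exchange field h,
# random on-site impurities V_imp and a spin-flip hopping alpha in
# the barrier. Basis per site: (c_up, c_dn); then particle-hole.
import numpy as np

rng = np.random.default_rng(0)
t, mu, Delta = 1.0, 0.5, 0.1         # hopping, chem. potential, pairing
h, V_imp, alpha = 0.8, 0.5, 0.2      # barrier parameters (hypothetical)
NS, NF = 20, 7                       # lead and barrier lengths
N = 2 * NS + NF
barrier = range(NS, NS + NF)

H = np.zeros((2 * N, 2 * N))         # electron block (site x spin)
for i in range(N):
    up, dn = 2 * i, 2 * i + 1
    eps = -mu + (V_imp * rng.uniform(-1, 1) if i in barrier else 0.0)
    H[up, up] = eps - (h if i in barrier else 0.0)
    H[dn, dn] = eps + (h if i in barrier else 0.0)
for i in range(N - 1):
    up, dn, up2, dn2 = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
    H[up, up2] = H[dn, dn2] = -t
    if i in barrier:                 # spin-mixing (SOC-like) hopping
        H[up, dn2], H[dn, up2] = alpha, -alpha
H = H + H.T - np.diag(H.diagonal())  # hermitize the hoppings

D = np.zeros((2 * N, 2 * N))         # s-wave pairing in the leads only
for i in list(range(NS)) + list(range(NS + NF, N)):
    D[2 * i, 2 * i + 1], D[2 * i + 1, 2 * i] = Delta, -Delta

HBdG = np.block([[H, D], [D.T, -H]])  # real symmetric matrix here
E = np.linalg.eigvalsh(HBdG)
print("lowest positive BdG levels:", np.round(E[E > 0][:4], 4))
\end{verbatim}
Computing the Josephson current itself would additionally require phase-biased order parameters in the two leads and a Matsubara summation, which the RGF technique of Refs.~\cite{Furusaki1994,Asano2001,Asano2019} handles efficiently.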
In Fig.~\ref{IcT_sim_JJs} (a), (b) and (c), we show the comparison between the $I\ped c(T)$ curves measured down to $\SI{20}{\milli\K}$ at zero field for the junctions with \ce{GdN} barriers $d\ped F=\SI{3.0}{\nm}$, $\SI{3.5}{\nm}$ and $\SI{4.0}{\nm}$ (black points), respectively, and the simulations obtained with the tight-binding BdG lattice model (red solid lines). In the insets, we report the measured $I\ped c(T)$ values down to dilution temperatures. The experimental data evolve from a plateau over a wide range of temperatures (a few Kelvins) observed for the junctions with \ce{GdN} thickness $d\ped F=3.0$ and $\SI{3.5}{\nm}$ into a non-monotonic $I\ped c(T)$ curve for the junction with $d\ped F=\SI{4.0}{\nm}$. The agreement between numerical outcomes and experimental data is certified by the capability to reproduce the unconventional plateau (Fig.~\ref{IcT_sim_JJs} (a) and (b)) and the non-monotonic behavior (Fig.~\ref{IcT_sim_JJs} (c)). The numerical values of the simulation parameters can be found in the Supplementary Material~\cite{Litinskii1989,LeuenbergerGdN:article,Manchon2015,Marsoner2015}.
The Hamiltonian parameters, as well as the lattice size, have no microscopic origin and are chosen to describe the main mechanisms that are expected to occur in the experimental devices. Therefore, we use \emph{effective} parameters to describe the junctions. When modeling the peculiar behavior of the $I\ped c(T)$ curves in each device, we use as control parameters the exchange field flux $\Phi(h)=WLh$, where $h$ is the magnetic exchange field, and the impurity potential $V\ped{imp}$. More details on the choice of the parameters are reported in the Supplementary Material. This approach is meant to include all possible effects occurring in the FI barrier, and it is a powerful platform to describe a large variety of JJs. As shown below, fitting of experimental $I\ped c(T)$ curves will unambiguously allow us to identify the coexistence of spin-singlet and triplet transport. Compared with what is available in the literature, the correlation functions are determined \emph{a posteriori} from the experimental data and allow us to quantify the relative weight of the different transport channels.
We can relate the plateau in the $I\ped c(T)$ curve to an overall broadening of a $0-\pi$ transition in temperature. The calculated CPRs in Figs.~\ref{IcT_sim_JJs} (d), (e) and (f) indicate that at low temperatures the JJs are in the $0$-state (light blue gradient region in Figs.~\ref{IcT_sim_JJs} (a), (b) and (c)), while at temperatures above $T=0.7\;T\ped c$ the JJs are in the $\pi$ state (red gradient region).
Compared to what has been theoretically and experimentally observed in $0-\pi$ SFS and SFIS JJs~\cite{Ryazanov2001,Bannykh2009,Wild2010,Feofanov2010,Goldobin2013}, when the plateau is measured in the JJs with $d\ped F=\SI{3.0}{\nm}$ and $\SI{3.5}{\nm}$ in Figs.~\ref{IcT_sim_JJs} (a) and (b), the CPRs exhibit the presence of higher order harmonics in the Josephson current $J$ for a wide range of temperatures (yellow gradient region). The transition region is reduced when the $I\ped c(T)$ curve gradually points towards a non-monotonic behavior, as shown in Fig.~\ref{IcT_sim_JJs} (c). In all the cases reported in Fig.~\ref{IcT_sim_JJs}, the $0-\pi$ transition extends over a few Kelvins in temperature around $\SI{4.2}{\K}$, in agreement with previous findings reported in Ref.~\cite{Pal2014}.
In Fig.~\ref{s_p_corrs}, we show the amplitude of the correlation functions $\left<|f|\right>$ determined from numerical simulations for the three devices at $T = 0.025\;T\ped c$ (corresponding to $\SI{0.3}{\K}$) and $\phi=0$, where $\phi$ is the phase-difference across the device. The correlation functions are determined for the spin-singlet ($f_0$), spin-triplet with opposite spins ($f_3$) and equal-spin triplet functions ($f_{\uparrow}$ and $f_{\downarrow}$), both in s-wave (Figs.~\ref{s_p_corrs} (a), (b) and (c)) and p-wave symmetries (Figs.~\ref{s_p_corrs} (d), (e) and (f)), as a function of the number of sites $\#$ of the barrier. In order to ensure the total antisymmetry of the fermionic wave-function, triplet superconductivity for even-frequency pairing is conventionally of p-wave type\cite{Tanaka2007}. As shown in the following, for symmetry reasons here the dominant orbital part in the triplet pairing channel happens to be of s-wave type. We use $\left<\right>$ to indicate the ensemble average, due to the presence of random on-site impurities in the $2D$-lattice. Details on the calculation of the spatial profile of the correlation function can be found in the Supplementary Material. All the cases show a dominant s-wave singlet component $f_{0}$ at the superconductor/barrier interface that strongly decays toward the middle of the barrier thickness. This is reasonable because the sides of the FI-layer are attached to the superconducting leads with a usual Bardeen-Cooper-Schrieffer (BCS) s-wave symmetry~\cite{Ambegaokar1963} and, due to the proximity effect, the singlet pair wavefunction enters the barrier. In the middle of the barrier (lattice site $\#4$), where the spin mixing and the exchange field effects take place, a competition between the s-wave triplet and singlet pair amplitudes arises. On the contrary, for the p-wave case, the singlet component $f_0$ turns out to be much lower than the corresponding s-wave one. At the same time, we may observe a prevalence of the zero-spin p-wave triplet component $f_{3}$ at the superconductor/barrier interface, while in the middle of the barrier thickness the spin-aligned triplet correlations become relevant. These results are justified by symmetry considerations~\cite{Eschrig2011, Eschrig2015, Bergeret2005}. Indeed, for the s-wave symmetry, the singlet is an even-frequency function, while the triplets are odd-frequency. The converse holds for the p-wave case.
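For reference, a common convention for these pair amplitudes (assumed here to match the plotted quantities up to normalization) is
\begin{equation}
f_{0}\propto\left<\psi_{\uparrow}\psi_{\downarrow}-\psi_{\downarrow}\psi_{\uparrow}\right>,\qquad
f_{3}\propto\left<\psi_{\uparrow}\psi_{\downarrow}+\psi_{\downarrow}\psi_{\uparrow}\right>,\qquad
f_{\sigma}\propto\left<\psi_{\sigma}\psi_{\sigma}\right>\quad(\sigma=\uparrow,\downarrow),
\end{equation}
with the s- or p-wave character read off from the even or odd dependence on the relative coordinate of the two field operators.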
The $\SI{3.0}{\nm}$-thick barrier junction exhibits s-wave triplet correlations functions larger than the singlet one, with a major contribution provided by the equal-spin triplet component with $S_z=+1$, $f_{\uparrow}$ (Fig.~\ref{s_p_corrs} (a)). For what concerns the p-wave spin-correlation functions for this device, $f_{3}$ provides the main contribution at the borders, while $f_{\downarrow}$ competes with $f_3$ in the middle of the barrier (site $\# 4$), as shown in Fig.~\ref{s_p_corrs} (d). Moreover, the opposite- and equal-spin p-wave triplet components are nearly a factor $2$ larger than the corresponding s-wave singlet component. By increasing the thickness of the barrier, thus gradually pointing towards an incipient $0-\pi$ transition with a non-monotonic behavior in the $I\ped c(T)$ curve, in the s-wave cases we can observe a progressive suppression of the equal-spin triplet components and a dominant spin-singlet channel. At the same time, in the p-wave case, we observe a slight reduction of the ratio between the equal-spin triplets ($f_{\uparrow}$, $f_{\downarrow}$) and the major zero-spin component ($f_{3}$). Thus, the p-wave opposite spin-triplet components are of the same order of magnitude compared to the corresponding s-wave spin-singlet component, while the equal spin-triplet components are instead reduced.
In order to investigate the peculiar transport properties arising in these systems, we have probed the $I\ped{c}(T)$ response to an external magnetic field applied in the plane of the JJs. In Fig.~\ref{Fig4:29} we show the evolution of the normalized critical current $I\ped c(T,H/H_0)/I\ped c(\SI{0.3}{\K},H/H_0)$ as a function of a weak magnetic field $H/H_0$, where $H_0$ is the amplitude of the first lobe of the Fraunhofer pattern curve, acquired by applying the magnetic field from $+\SI{2.4}{\milli\tesla}$ to $-\SI{2.4}{\milli\tesla}$. $H_0$ is estimated at each investigated temperature $T$ (from $T=\SI{0.3}{\K}$ to $T=\SI{8}{\K}$). Details on the measurement procedure can be found in the Methods section.
The results are reported in Fig.~\ref{Fig4:29} in the two density-plots (a) and (c) for the junctions with $d\ped F=\SI{3.0}{\nm}$ and $\SI{3.5}{\nm}$, respectively. Increasing the field $H/H_0$, the plateau structure at zero field evolves into a non-monotonic behavior with a minimum (dark region around $70-80\%H_0$ and between $2$ and $\SI{4}{\K}$) and a maximum (bright region around $70-80\%H_0$ and between $4$ and $\SI{6}{\K}$). The effect is more pronounced for the JJ with $d\ped F=\SI{3.0}{\nm}$. The blue, green and red dashed line cuts are related to the cross-section curves reported in Fig.~\ref{Fig4:29} (b) and (d), where the gradual appearance of an enhanced dip and a non-monotonic behavior in the normalized $I\ped{c}(T)$ curves can be observed by increasing $H/H_0$. In Supplementary Material, we report the Fraunhofer pattern curves measured for the JJ with \ce{GdN} thickness $d\ped F=\SI{3.0}{\nm}$ at three selected temperatures: $\SI{0.3}{\K}$, $\SI{3}{\K}$ where a minimum of the $I\ped c(T)$ curve is measured for $H/H_0=75\%$, and $\SI{7}{\K}$ where a maximum of the $I\ped c(T)$ is observed for the same value of $H/H_0$. In the same figure, the IV curves measured at $H/H_0=75\%$ for the three selected temperatures are shown as a term of reference.
Such a progressive variation of the $I\ped c(T)$ curves in the presence of an external magnetic field is not observed in junctions with thinner \ce{GdN} barriers, as shown in the Supplementary Material, in which we report the $I\ped c(T,H/H_0)/I\ped c(\SI{0.3}{\K},H/H_0)$ density-plot measured on a \ce{NbN}-\ce{GdN}-\ce{NbN} junction with \ce{GdN} barrier thickness $d\ped F=\SI{1.5}{\nm}$. For this device, a standard Ambegaokar-Baratoff (AB) trend\cite{Ambegaokar1963} for the $I\ped c(T)$ curve is preserved in the presence of the external magnetic field. Compared to devices with thin \ce{GdN} barriers, the samples analyzed in this work are indeed sensitive to a weak magnetic field\cite{Pal2014,Massarotti2015,Caruso2019}. A finite shift of the order of $\SI{0.1}{\milli\tesla}$ in the Fraunhofer pattern curves arises when ramping the field from positive to negative values, and vice versa~\cite{Blamire2013}. Even if the strength of the external magnetic field is not enough to generate a complete magnetic ordering, slight modifications in the microscopic structure of the barrier arise~\cite{Cullity2011}, as already predicted to occur in systems with tunable domain walls~\cite{Baker2014}, intrinsic SOC~\cite{Liu2010} and magnetic impurities~\cite{Pal2018}. At zero field, the magnetic disorder is maximum and
likely introduces electronic defect states in the barrier\cite{Maity2020}. As the field increases, the system evolves towards a more ordered phase, and hence the density of defect states is reduced. Therefore, the tunability of the $I\ped c(T)$ shape from the plateau towards a non-monotonic curve by applying an external magnetic field can be related to a reduction of the disorder in the barrier.
This picture is supported by numerical simulations obtained when changing the strength of the impurity potential in the $2D$-lattice model while keeping all the other parameters fixed. As a matter of fact, in order to have a good agreement with experimental data, we model the \ce{GdN} as a ferromagnetic half metal, as experimentally observed in Ref.\cite{Watcher1980} and predicted by full atomistic simulations in Refs.\cite{Duan2005,Larson2006}. Hence, local impurity potentials are assumed to induce small site-dependent fluctuations of the chemical potential. In our approach, the coexistence of spin mixing mechanisms, promoted by SOC-like interactions, and on-site impurities models the magnetic disorder. In Fig.~\ref{Vimp_vs_Field} (a), we can notice that the characteristic $0-\pi$ behavior is modified by increasing the impurity potential $V\ped{imp}$. The enhancement of the impurity strength produces a shift of the minimum of the curve towards lower temperatures and higher critical current values, with a consequent broadening of the typical $0-\pi$ cusp that progressively gives rise to the plateau. Conversely, by decreasing $V\ped{imp}$, one can recover the $0-\pi$ transition. Details on the parameters are reported in the Supplementary Material.
In Figs.~\ref{Vimp_vs_Field} (b)-(g) we finally report the s- and p-wave correlation functions corresponding to the simulated $I\ped{c}(T)$ curves for different impurity potentials $V\ped{imp}$. While the p-wave components appear to be approximately unaffected by disorder (Figs.~\ref{Vimp_vs_Field} (e)-(g)), for the s-wave symmetry the effect of increasing the impurity strength $V\ped{imp}$ results in a pronounced enhancement of the equal-spin triplet pairing correlations, $f_{\uparrow}$ and $f_{\downarrow}$ (e.g. Figs.~\ref{Vimp_vs_Field} (b)-(d)).
\section{Discussion}
The theoretical results in Figs.~\ref{IcT_sim_JJs}-\ref{s_p_corrs} show that the characteristic behavior of the $I\ped{c}(T)$ is related to the amplitude of the different s-wave spin-correlation functions. In Tab.~\ref{Tab:corr}, we summarize the values of the pair-correlations in the middle of the barrier thickness (lattice site $\# 4$), in units of the majority zero-spin component, i.\,e. $f_{0}$ for the s-wave (Tab.~\ref{Tab:corr} A) and $f_{3}$ for the p-wave cases (Tab.~\ref{Tab:corr} B), respectively. Indeed, we observe a general decrease in the relative weight of the s-wave equal-spin triplet components ($f_{\uparrow}$ and $f_{\downarrow}$) in the junctions that show an increasing non-monotonicity of the $I\ped{c}(T)$ curves. Hence, the more $I\ped{c}(T)$ exhibits a behavior approaching the $0-\pi$ regime, the lower the weight of the s-wave equal-spin correlations. This is in agreement with the fact that spin-aligned supercurrents are insensitive to the exchange field and, thus, cannot give rise to $0-\pi$ transitions.
In Fig.~\ref{diagram}, we show how the impurities and the SOC affect the $I\ped c(T)$ shape. The parameters of the simulations, including the SOC strength $\alpha$ and the impurity potential $V\ped{imp}$, are collected in the Supplementary Material. For small values of $\alpha$ and $V\ped{imp}$ (bright red- and blue-scale), the simulated $I\ped c(T)$ curve shows a cusp-like $0-\pi$ transition, provided that the exchange field $h$ in the junction is non-zero, as it occurs in SFS JJs typically reported in the literature~\cite{Ryazanov2001,Bannykh2009,Wild2010,Feofanov2010,Goldobin2013}. By increasing $\alpha$ (dark red-scale), the main effect is to reduce the height of the second maximum in the $I\ped c(T)$ curve, without recovering the plateau structure observed in SFIS JJs. At very large $\alpha$, the $0-\pi$ transition is washed out and an AB-like shape sets in, stabilizing a ``$0$''-phase. In this case the main contribution is expected from the spin-singlet, though the spin-triplet correlations are increased compared to the cases with smaller $\alpha$.
At the same time, by keeping the spin-orbit field weak and by increasing $V\ped{imp}$ (dark blue-scale), the minimum of the $0-\pi$ transition occurs at higher critical current values and is broadened in temperature, while always showing a non-monotonic trend for the $I\ped c(T)$. The characteristic plateau structure is observed only when considering a combined effect of SOC and impurities, once the dimensions of the system are fixed. As shown for the SFIS JJ with $d\ped F=\SI{3.0}{\nm}$ in Fig.~\ref{IcT_sim_JJs} (a) and Fig.~\ref{s_p_corrs} (a), the formation of the plateau goes along with the coexistence of comparable spin-singlet and triplet superconductivity. Finally, in the limit of large $V\ped{imp}$, the $0-\pi$ transition is shifted towards very low $T$ values, thus stabilizing a ``$\pi$''-phase almost over the whole temperature range. Evidence of this is given by the sharp decrease of $I\ped c$ when the temperature drops. In this regime, in agreement with Fig.~\ref{Vimp_vs_Field}, we predict an enhanced contribution of the s-wave spin-triplet components due to the interplay of spin-orbit and disorder.
A transition from the peculiar plateau-shape of the $I\ped c(T)$ curve towards an incipient $0-\pi$ curve is experimentally observed when increasing the strength of an external weak magnetic field (Fig.~\ref{Fig4:29}). The position in temperature of the $I\ped c(T)$ dip is an important benchmark relating the $0$-$\pi$ transition induced by the weak magnetic field to the combined effect of impurities, exchange field fluctuations and spin-orbit coupling in the simulations. For weak on-site impurity potential, by increasing $\alpha$, the minimum of the $I\ped c(T)$ curve occurs at the same temperature. Instead, as shown in Fig.~\ref{Vimp_vs_Field}, when increasing $V\ped{imp}$, the minimum is shifted in temperature, as it occurs in the experimental $I\ped c(T)$ curves at a finite external magnetic field. We highlight the evolution of the $0$-$\pi$ transition and the dip shift in the experimental data in Fig.~\ref{Fig4:29} (a) and (c) by the dashed white arrow in the figure, which is only a guide for the eye.
In conclusion, in this work we have investigated the occurrence of the unconventional $I\ped{c}(T)$ behaviors observed in SFIS JJs. The presence of a plateau extended over a wide range of temperatures and the peculiar non-monotonic behavior in the $I\ped{c}(T)$ when increasing the thickness of the barrier can be explained in terms of the coexistence of spin-singlet and triplet superconductivity, whose correlation functions have been calculated by using a tight-binding BdG description of the system~\cite{Asano2001, Asano2019, Furusaki1994}. This approach also highlighted the role played by the disorder in the barrier. At the same time, the presence of a spin-mixing effect, in this context provided by the spin-orbit interaction, is crucial to reproduce the characteristic plateau in the $I\ped{c}(T)$ curves. Furthermore, the obtained results confirm that the reinstatement of an overall ordering in the system by means of an external magnetic field points towards the recovery of a more standard $0-\pi$ transition. Within this picture, the application of a weak magnetic field represents a tool for controlling the relative weight of equal-spin-triplet transport in SFIS JJs.
\section{Methods}
\subsection{Experimental $I\ped c(T)$ curves at zero- and finite-field}
The $I\ped c(T)$ measurements at zero-field have been performed by using an evaporation cryostat from $\SI{0.3}{\K}$ up to the critical temperature of the devices $T\ped c\sim\SI{12}{\K}$~\cite{Caruso2018,Ahmad2020}. Measurements below $\SI{0.3}{\K}$ have been performed by using a wet dilution refrigerator \emph{Kelvinox400MX}~\cite{Ahmad2020,Longobardi2012} and a dry dilution cryostat \emph{Triton400}. The three refrigerators employed are equipped with customized low-noise filters anchored at different temperature stages~\cite{Caruso2018,Ahmad2020}.
The $I\ped c(T)$ curves at finite external magnetic field have been measured by using the evaporation cryostat, which is equipped with a \ce{NbTi} coil able to generate a magnetic field perpendicular to the transport direction\cite{Caruso2018,Caruso2019}. The protocol followed is based on the measurement of the Fraunhofer pattern curve at fixed temperature for the spin-filter JJs with $d\ped F=\SI{3.0}{\nm}$ and $\SI{3.5}{\nm}$. The $I\ped c(H)$ curve is acquired by ramping the magnetic field from positive to negative values, in a maximum field range of $\pm\SI{2.4}{\milli\tesla}$. The investigated range of temperatures corresponds to the plateau regime at zero field (from $\SI{2}{\K}$ to $\SI{8}{\K}$). Measurements at $\SI{0.3}{\K}$ have been performed as a term of comparison with the $I\ped c(T)$ curves at zero-field.
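For reference, in the short-junction limit the critical current is expected to follow the textbook Fraunhofer form (assumed here to describe the measured patterns to a good approximation),
\begin{equation}
I\ped c(\Phi)=I\ped c(0)\left|\frac{\sin\left(\pi\Phi/\Phi_0\right)}{\pi\Phi/\Phi_0}\right|,
\end{equation}
where $\Phi$ is the magnetic flux threading the junction and $\Phi_0=h/2e$ is the flux quantum; the first node of this pattern defines the lobe amplitude $H_0$ used below.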
At each temperature, we have measured the amplitude of the first lobe of the Fraunhofer pattern $H_0$. The error on $H_0$ is $3\%$. Then, we have estimated the critical current $I\ped c$ at different percentages of $H_0$, from $0\%H_0$ to $85\%H_0$ for the JJ with $d\ped F=\SI{3.0}{\nm}$. The $I\ped c$ for the JJ with $d\ped F=\SI{3.5}{\nm}$ at high temperatures is below a few nanoamperes~\cite{Caruso2019}, making the observation of the Fraunhofer modulation challenging. This limited the maximum magnetic field range investigated to $75\%H_0$. The step in field was chosen to be sufficiently small to avoid the interpolation of the $I\ped c$ from the pattern curve. Thus, the $I\ped c$ has been measured directly from the $I(V)$ curves at the field corresponding to $H/H_0$. Moreover, to distinguish a peak structure in temperature, the step in temperature was fixed to $\SI{0.5}{\K}$.
Given the small range of the field explored, the shift of the maximum in the Fraunhofer pattern due to the hysteretic magnetization of the barrier was of the order of $\sim\SI{0.1}{\milli\tesla}$~\cite{Pal2014,Caruso2019}. We have removed the hysteretic shift in post-processing to guarantee that any measured effect is only related to the local magnetic field. The error on $I\ped c$ ranges from $1\%$ to $5\%$ for currents down to the nanoampere range~\cite{Caruso2019,Ahmad2020}.
\begin{acknowledgments}
H.G.A., R.C., D.M. and F.T. also thank NANOCOHYBRI project (COST Action CA 16218).
\end{acknowledgments}
\section*{Author contributions}
H.G.A., D.M. and F.T. conceived the experiments; A.P. and M.G.B. designed and realized the junctions; H.G.A., D.M. and R.Caruso carried out the measurements; M.M., R.Capecelatro, H.G.A. and R.Caruso worked on the data analysis; G.P., M.M., R.Capecelatro, and P.L. worked on the theoretical model code; H.G.A., M.M., R.Capecelatro, D.M., P.L., F.T., M.G.B. co-wrote the paper. All authors discussed the results and commented on the manuscript.
\usepackage{mathtools}
\usepackage{booktabs}
\usepackage{tikz}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{float}
\usepackage{bashful}
\usepackage{pdfpages}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage[caption = false]{subfig}
\usepackage{xr-hyper}
\usepackage{hyperref}
\makeatletter
\newcommand*{\addFileDependency}[1]{%
\typeout{(#1)}
\@addtofilelist{#1}
\IfFileExists{#1}{}{\typeout{No file #1.}}
}
\makeatother
\newcommand*{\myexternaldocument}[1]{%
\externaldocument{#1}%
\addFileDependency{#1.tex}%
\addFileDependency{#1.aux}%
}
\myexternaldocument{appendix}
\newcommand{\swap}[3][-]{#3#1#2}
\newcommand{\jasper}[1]{\textcolor{red}{JS:#1}}
\title{Empirical Frequentist Coverage\\of Deep Learning Uncertainty Quantification Procedures}
\author[1]{\href{mailto:Benjamin Kompa <benjamin_kompa@hms.harvard.edu>?Subject=Your UAI 2021 paper}{Benjamin~Kompa}{}}
\author[2]{Jasper~Snoek}
\author[3]{Andrew~Beam}
\affil[1]{%
Department of Biomedical Informatics\\
Harvard Medical School
}
\affil[2]{%
Google Research
}
\affil[3]{
Department of Epidemiology\\
Harvard School of Public Health
}
\begin{document}
\maketitle
\begin{abstract}
Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model's uncertainty is evaluated using point-prediction metrics such as the negative log-likelihood (NLL), expected calibration error (ECE) or the Brier score on heldout data. Marginal coverage of prediction intervals or sets, a well-known concept in the statistical literature, is an intuitive alternative to these metrics but has yet to be systematically studied for this class of models. With marginal coverage, and the complementary notion of the width of a prediction interval, downstream users of deployed machine learning models can better understand uncertainty quantification both on a global dataset level and on a per-sample basis. In this study, we provide the first large-scale evaluation of the empirical frequentist coverage properties of well-known uncertainty quantification techniques on a suite of regression and classification tasks. We find that, in general, some methods do achieve desirable coverage properties on \emph{in-distribution} samples, but that coverage is not maintained on out-of-distribution data. Our results demonstrate the failings of current uncertainty quantification techniques as dataset shift increases and reinforce coverage as an important metric in developing models for real-world applications.
\end{abstract}
\section{Introduction}\label{sec:intro}
Predictive models based on deep learning have seen dramatic improvement in recent years \citep{lecun2015deep}, which has led to widespread adoption in many areas. For critical, high-stakes domains such as medicine or self-driving cars, it is imperative that mechanisms are in place to ensure safe and reliable operation. Crucial to the notion of safe and reliable deep learning is the effective quantification and communication of \emph{predictive uncertainty} to potential end-users of a system. In medicine, for instance, understanding predictive uncertainty could lead to better decision-making through improved allocation of hospital resources, detecting dataset shift in deployed algorithms, or helping machine learning models abstain from making a prediction \citep{Kompa2021-lf}. For medical classification problems involving many possible labels (i.e. creating a \emph{differential diagnosis}), methods that provide a set of possible diagnoses when uncertain are natural to consider and align more closely with the differential diagnosis procedure used by physicians. The prediction sets and intervals we propose in this work are a new way to quantify uncertainty in machine learning models and provide intuitive metrics for downstream users such as clinicians.
Many approaches have recently been proposed to quantify uncertainty and generally fall into two broad categories: ensembles and approximate Bayesian methods. Deep ensembles \citep{Lakshminarayanan2017-pv} aggregate information from multiple individual models to provide a measure of uncertainty that reflects the ensembles' agreement about a given data point. Bayesian methods offer direct access to predictive uncertainty through the posterior predictive distribution, which combines prior knowledge with the observed data. Although conceptually elegant, calculating exact posteriors of even simple neural models is computationally intractable~\citep{Yao2019-ia, Neal1996-li}, and many approximations have been developed \citep{Hernandez-Lobato2015-fo, Blundell2015-ms, Graves2011-er, Pawlowski2017-si, Hernandez-Lobato2015-un, Louizos2016-ki, Louizos2017-wo}. Though approximate Bayesian methods scale to modern sized data and models, recent work has questioned the quality of the uncertainty provided by these approximations \citep{Yao2019-ia,Wenzel2020-ui, Ovadia2019-tt}.
Previous work assessing the quality of uncertainty estimates has focused on calibration metrics and scoring rules such as the negative log-likelihood (NLL), expected calibration error (ECE), and Brier score. Here we provide a complementary perspective based on the notion of empirical \emph{coverage}, a well-established concept in the statistical literature \citep{wasserman2013all} that evaluates the quality of a predictive \emph{set} or \emph{interval} instead of a point prediction. Informally, coverage asks the question: If a model produces a predictive uncertainty interval, how often does that interval actually contain the observed value? Ideally, predictions on examples for which a model is uncertain would produce larger intervals and thus be more likely to cover the observed value.
In this work, we focus on marginal coverage \textit{over a dataset} $\mathcal{D}'$ for the canonical $\alpha$ value of $0.05$, i.e. 95\% prediction intervals. For a machine learning model that produces a 95\% prediction interval $\hat{\mathcal{C}}_n({x})$ based on the training dataset $\mathcal{D}$, we consider what fraction of the points in the dataset $\mathcal{D}'$ have their true label contained in $\hat{\mathcal{C}}_n({x_{n+1}})$ for ${x_{n+1}} \in \mathcal{D}'$. To measure the robustness of these intervals, we also consider cases when the generating distributions for $\mathcal{D}$ and $\mathcal{D}'$ are not the same (i.e. dataset shift).
Figure \ref{fig:coverage} provides a visual depiction of marginal coverage over a dataset for two hypothetical regression models. Throughout this work, we refer to ``marginal coverage over a dataset'' as ``coverage''.
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/coverage-fig1.png}
\caption{An example of the coverage properties for two methods of uncertainty quantification. In this scenario, each model produces an uncertainty interval for each $x_i$ which attempts to cover the true $y_i$, represented by the red points. Coverage is calculated as the fraction of true values contained in these regions, while the width of these regions is reported in terms of multiples of the standard deviation of the training set $y_i$ values.}
\label{fig:coverage}
\end{figure}
For a machine learning model that produces predictive uncertainty estimates (i.e. approximate Bayesian methods and ensembling), coverage encompasses both the aleatoric and epistemic uncertainties \citep{Gal2016-yd} produced by these models. The predictions from these models can be written as:
\begin{equation}
\hat{y} = f(x) + \epsilon
\end{equation}
where epistemic uncertainty is captured in the $f(x)$ component while aleatoric uncertainty is considered in the $\epsilon$ term. Since coverage captures how often the predicted interval of $\hat{y}$ contains the true value, it captures the contributions from both types of uncertainty.
A complementary metric to coverage is \emph{width}, which is the size of the prediction interval or set. In regression problems, we typically measure width in terms of the standard deviation of the true label in the training set. As an example, a prediction interval could have 90\% marginal coverage with an average width of 2 standard deviations. For classification problems, width is simply the average size of a prediction set. Width can provide a relative ranking of different methods, i.e. given two methods with the same level of coverage we should prefer the method that provides intervals with smaller widths.
\textbf{Contributions:} In this study we investigate the empirical coverage properties of prediction intervals constructed from a catalog of popular uncertainty quantification techniques such as ensembling, Monte-Carlo dropout, Gaussian processes, and stochastic variational inference. We assess the coverage properties of these methods on nine regression tasks and two classification tasks with and without dataset shift. These tasks help us make the following contributions:
\begin{itemize}
\item We introduce coverage and width over a dataset as natural and interpretable metrics for evaluating predictive uncertainty for deep learning models.
\item A comprehensive set of coverage evaluations on a suite of popular uncertainty quantification techniques.
\item An examination of how dataset shift affects these coverage properties.
\end{itemize}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{images/cifar10c.png}
\caption{An example of the corruptions in CIFAR-10-C from \cite{Hendrycks2016-ov}. The 16 different corruptions have 5 discrete levels of shift, of which 3 are shown here. The same corruptions were applied to ImageNet to form the ImageNet-C dataset.}
\label{fig:cifar10c}
\end{figure*}
\section{Background and Related Work}
\textbf{Obtaining Predictive Uncertainty Estimates}\\ Several lines of work focus on improving approximations of the posterior of a Bayesian neural network \citep{Graves2011-er, Hernandez-Lobato2015-fo, Blundell2015-ms, Hernandez-Lobato2015-un, Louizos2016-ki, Pawlowski2017-si, Louizos2017-wo}. \citet{Yao2019-ia} provide a comparison of many of these methods and highlight issues with common metrics of comparison, such as test-set log likelihood and RMSE. Good scores on these metrics often indicate that the model posterior happens to match the test data rather than the true posterior \citep{Yao2019-ia}. \cite{Maddox2019-en} developed a technique to sample the approximate posterior from the first moment of SGD iterates. \citet{Wenzel2020-ui} demonstrated that despite advances in these approximations, there are still outstanding challenges with Bayesian modeling for deep networks.
Alternative methods that do not rely on estimating a posterior over the weights of a model can also be used to provide uncertainty estimates. \citet{Gal2016-yd}, for instance, demonstrated that Monte Carlo dropout is related to a variational approximation to the Bayesian posterior implied by the dropout procedure. \citet{Lakshminarayanan2017-pv} used ensembling of several neural networks to obtain uncertainty estimates. \citet{Guo2017-ly} established that temperature scaling provides well calibrated predictions on an i.i.d test set. More recently, \citet{Van_Amersfoort2020-ho} showed that the distance from the centroids in a RBF neural network yields high quality uncertainty estimates. \cite{Liu2020-ig} also leveraged the notion of distance (in the form of an approximate Gaussian process covariance function) to obtain uncertainty estimates with their Spectral-normalized Neural Gaussian Processes.
\textbf{Assessments of Uncertainty Properties under Dataset Shift}\\ \citet{Ovadia2019-tt} analyzed the effect of dataset shift on the accuracy and calibration of Bayesian deep learning methods. Their large scale empirical study assessed these methods on standard datasets such as MNIST, CIFAR-10, ImageNet, and other non-image based datasets. Additionally, they used translations, rotations, and corruptions \citep{Hendrycks2016-ov} of these datasets to quantify performance under dataset shift. They found stochastic variational inference (SVI) to be promising on simpler datasets such as MNIST and CIFAR-10, but more difficult to train on larger datasets. Deep ensembles had the most robust response to dataset shift.
\textbf{Definitions of Coverage}\\
Given features ${x_i} \in \mathbb{R}^d$ and a response $y_i \in \mathbb{R}$ for some dataset $\mathcal{D} = \{({x_i},y_i)\}_{i=1}^n$, \cite{Barber2019-ra} define \textit{distribution-free} marginal coverage in terms of a set $\hat{\mathcal{C}}_n({x})$ and a level $\alpha \in [0,1]$. The set $\hat{\mathcal{C}}_n({x})$ is said to have coverage at the $1-\alpha$ level if for all distributions $P \in \mathbb{R}^d \times \mathbb{R}$ where $({x},y) \sim P$, the following inequality holds:
\begin{equation}
\label{eq:coverage}
\mathbb{P}\{y_{n+1} \in \hat{\mathcal{C}}_n({x_{n+1}})\} \geq 1-\alpha
\end{equation}
For new samples beyond the first $n$ samples in the training data, there is a $1-\alpha$ probability of the true label of the test point being contained in the set $\hat{\mathcal{C}}_n({x_{n+1}})$. This set can be constructed using a variety of procedures. For example, in the case of simple linear regression a prediction interval for a new point $x_{n+1}$ can be constructed\footnote{A well-known result from the statistics literature (c.f. chapter 13 of \citet{wasserman2013all}) is that the interval is given by $\hat{y}_{n+1} \pm t_{n-2}s_y\sqrt{1/n + (x_{n+1}-\bar{x})^2/((n-1)s_x^2)}$, where $\hat{y}_{n+1}$ is the predicted value, $t_{n-2}$ is the $1-\alpha/2$ critical value from a t-distribution with $n-2$ degrees of freedom, $\bar{x}$ is the mean of $x$ in the training data, and $s_y, s_x$ are the standard deviations for $y$ and $x$ respectively.} using a simple, closed-form solution such that (\ref{eq:coverage}) holds asymptotically. However, for more complicated models, closed-form solutions with coverage guarantees are unavailable, and constructing these intervals via the bootstrap \citep{efron1982jackknife} can be computationally infeasible or fail to provide the correct coverage \citep{chatterjee2011bootstrapping}.
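As a minimal sketch, the footnote's closed-form interval can be computed directly (synthetic data; \texttt{scipy} is assumed available for the critical value):
\begin{verbatim}
# Closed-form 95% interval for simple linear regression at a new
# point x_new, following the formula in the footnote above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)
n = len(x)

b1 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # slope
b0 = y.mean() - b1 * x.mean()                 # intercept
s_x, s_y = x.std(ddof=1), y.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 2)

x_new = 1.5
y_hat = b0 + b1 * x_new
half = t_crit * s_y * np.sqrt(1 / n + (x_new - x.mean()) ** 2
                              / ((n - 1) * s_x ** 2))
print(f"95% interval: [{y_hat - half:.2f}, {y_hat + half:.2f}]")
\end{verbatim}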
An important and often overlooked distinction is that of marginal and conditional coverage. In conditional coverage, one considers
\begin{equation}
\label{eq:conditional-coverage}
\mathbb{P}\{y_{n+1} \in \hat{\mathcal{C}}_n({x_{n+1}})|{x_{n+1}=x}\} \geq 1-\alpha
\end{equation}
The probability has been conditioned on specific features. This is potentially a more useful version of coverage to consider because one could make claims for specific instances rather than over the broader distribution $P$. However, it is impossible in general to have conditional coverage guarantees \citep{Barber2019-ra}.
Another important point to consider is that while the notion of a confidence interval may seem natural in our analysis, confidence intervals estimate global statistics over repeated trials of data and generally come with guarantees about how often these statistics lie in said intervals. In our study, this is not the case. Although we estimate coverage across many datasets, we are not aiming to estimate an unknown statistic of the data. We would like to understand the empirical coverage properties of machine learning models.
\section{Methods}
For features ${x_i} \in \mathbb{R}^d$ and a response $y_i \in \mathbb{R}$ for some dataset $\mathcal{D} = \{({x_i},y_i)\}_{i=1}^n$, we consider the prediction intervals or sets $\hat{\mathcal{C}}_n({x})$ in regression and classification settings, respectively. Unlike in the definitions of marginal and conditional coverage, we do not assume that $(x,y)\sim P$ always holds true. Thus, we consider the marginal coverage on a dataset $\mathcal{D}'$, for some new test set that may have undergone dataset shift from the generating distribution of the training set $\mathcal{D}$.
In both the regression and classification settings, we analyzed the coverage properties of prediction intervals and sets of five different approximate Bayesian and non-Bayesian approaches for uncertainty quantification. These include Dropout \citep{Gal2016-yd, Srivastava2015-ow}, ensembles \citep{Lakshminarayanan2017-pv}, Stochastic Variational Inference \citep{Blundell2015-ms, Graves2011-er, Louizos2016-ki, Louizos2017-wo, Wen2018-fl}, and last layer approximations of SVI and Dropout \citep{Riquelme2018-kg}. Additionally, we considered prediction intervals from linear regression and the 95\% credible interval of a Gaussian process with the squared exponential kernel as baselines in regression tasks. For classification, we also considered temperature scaling \citep{Guo2017-ly} and the softmax output of vanilla deep networks~\citep{Hendrycks2016-ov}.
\subsection{Regression Methods and Metrics}\label{sec:regression-methods}
We evaluated the coverage properties of these methods on nine large real-world regression datasets used as a benchmark in \citet{Hernandez-Lobato2015-fo} and later~\citet{Gal2016-yd}. We used the training, validation, and testing splits publicly available from \cite{Gal2016-yd} and performed nested cross validation to find hyperparameters. On the training sets, we did 100 trials of a random search over the hyperparameter space of a multi-layer perceptron architecture with an Adam optimizer \citep{Kingma2014-hf} and selected hyperparameters based on RMSE on the validation set.
Each approach required slightly different ways to obtain a 95\% prediction interval. For an ensemble of neural networks, we trained $N=40$ vanilla networks and used the 2.5\% and 97.5\% quantiles as the boundaries of the prediction interval. For dropout and last layer dropout, we made 200 predictions per sample and similarly discarded the top and bottom 2.5\% quantiles. For SVI, last layer SVI (LL SVI), and Gaussian processes we had approximate variances available for the posterior which we used to calculate the prediction interval. We calculated 95\% prediction intervals from linear regression using the closed-form solution.
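Concretely, both interval constructions reduce to a few lines (an illustrative sketch with a placeholder array of predictions, not the exact experimental pipeline):
\begin{verbatim}
# 95% prediction intervals from (a) empirical quantiles of repeated
# predictions and (b) a Gaussian posterior approximation.
import numpy as np

rng = np.random.default_rng(0)
preds = rng.normal(size=(200, 1000))   # placeholder: draws x test points

# (a) sample-based (ensembles, dropout): drop top/bottom 2.5%
lower, upper = np.quantile(preds, [0.025, 0.975], axis=0)

# (b) variance-based (SVI, LL SVI, GPs): mean +/- 1.96 * std
mean, std = preds.mean(axis=0), preds.std(axis=0)
lower_g, upper_g = mean - 1.96 * std, mean + 1.96 * std
\end{verbatim}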
Then we calculated two metrics:
\begin{itemize}
\item \textbf{Coverage}: A sample is considered covered if the true label is contained in this 95\% prediction interval. We average over all samples in a test set to estimate a method's marginal coverage on this dataset.
\item \textbf{Width}: The width is the average over the test set of the ranges of the 95\% prediction intervals.
\end{itemize}
Coverage measures how often the true label is in the prediction region while width measures how specific that prediction region is. Ideally, we would have high levels of coverage with low levels of width on in-distribution data. As data becomes increasingly out of distribution, we would like coverage to remain high while width increases to indicate model uncertainty.
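Both metrics are then straightforward to compute over a test set (a sketch with placeholder arrays; the training-set standard deviation used to normalize width is assumed to be given):
\begin{verbatim}
# Marginal coverage and mean width over a test set, given per-sample
# 95% interval bounds and true labels (all placeholders here).
import numpy as np

rng = np.random.default_rng(1)
lower = rng.normal(size=1000) - 2.0    # placeholder interval bounds
upper = lower + 4.0
y_test = rng.normal(size=1000)         # placeholder true labels
train_sd = 1.0                         # std of training labels (assumed)

coverage = np.mean((y_test >= lower) & (y_test <= upper))
width = np.mean(upper - lower) / train_sd  # width in training-SD units
print(f"coverage = {coverage:.3f}, width = {width:.2f} SD")
\end{verbatim}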
\subsection{Classification Methods and Metrics} \label{sec:class-methods}
\citet{Ovadia2019-tt} evaluated model uncertainty on a variety of datasets and made their predictions publicly available. These predictions were made with the five approximate Bayesian methods described above, plus vanilla neural networks, with and without temperature scaling. We focus on the predictions from MNIST, CIFAR-10, CIFAR-10-C, ImageNet, and ImageNet-C datasets. For MNIST, we calculated coverage and width of model prediction sets on rotated and translated versions of the test set. For CIFAR-10, \cite{Ovadia2019-tt} measured model predictions on translated and corrupted versions of the test set from CIFAR-10-C \citep{Hendrycks2016-ov}. For ImageNet, we only considered the coverage and width of prediction sets on the corrupted images of ImageNet-C \citep{Hendrycks2016-ov}. Each of these transformations (rotation, translation, or any of the 16 corruptions) has multiple levels of shift. Rotations range from 15 to 180 degrees in 15 degree increments. Translations shift images every 2 and 4 pixels for MNIST and CIFAR-10, respectively. Corruptions have 5 increasing levels of intensity. Figure \ref{fig:cifar10c} shows the effects of the 16 corruptions in CIFAR-10-C at the first, third, and fifth levels of intensity.
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/rolling.png}
\caption{Several examples of the ``rolling'' translation shift that moves an image across an axis.}
\label{fig:rolling}
\end{figure}
Given $\alpha \in (0,1)$, the $1-\alpha$ prediction set $\mathcal{S}$ for a sample ${x_i}$ is the minimum sized set of classes such that
\begin{equation}
\sum_{c \in \mathcal{S}} p(y_c|{x_i}) \geq 1-\alpha
\end{equation}
This consists of the top $k_i$ probabilities such that $1-\alpha$ probability has been accumulated. This inherently assumes that the labels are unordered categorical classes, such that including classes $1$ and $K$ does not imply that all classes between them are also included in the set $\mathcal{S}$ (a short sketch of this construction follows the definitions below). Then we can define:
\begin{itemize}
\item \textbf{Coverage:} For each dataset point, we calculate the $1-\alpha$ prediction set of the label probabilities, then coverage is what fraction of these prediction sets contain the true label.
\item \textbf{Width:} The width of a prediction set is simply the number of labels in the set, $|\mathcal{S}|$. We report the average width of prediction sets over a dataset in our figures.
\end{itemize}
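A sketch of the set construction and the two metrics (sorting the softmax outputs and accumulating mass until $1-\alpha$ is reached; the probability and label arrays are placeholders):
\begin{verbatim}
# Build 95% prediction sets from softmax outputs and score them.
import numpy as np

def prediction_set(probs, alpha=0.05):
    """Smallest set of classes with probability mass >= 1 - alpha."""
    order = np.argsort(probs)[::-1]    # classes, most likely first
    k = np.searchsorted(np.cumsum(probs[order]), 1 - alpha) + 1
    return set(order[:k].tolist())

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=500)  # placeholder softmax
labels = rng.integers(0, 10, size=500)        # placeholder labels

sets = [prediction_set(p) for p in probs]
coverage = np.mean([y in s for s, y in zip(sets, labels)])
width = np.mean([len(s) for s in sets])
print(f"coverage = {coverage:.3f}, mean set size = {width:.2f}")
\end{verbatim}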
Although both calibration \citep{Guo2017-ly} and coverage can involve a probability over a model's output, calibration only considers the most likely label and its corresponding probability, while coverage considers the top-$k_i$ probabilities. In the classification setting, coverage is more robust to label errors as it does not penalize models for putting probability on similar classes.
\section{Results}
\subsection{Regression}
Figure \ref{fig:reg_cov_width} plots the mean test set coverage and width for the regression methods we considered averaged over the nine regression datasets. Error bars demonstrate that for low performing methods such as ensembling, dropout, and LL dropout, there is high variability in coverage levels and widths across the datasets.
We observe that several methods perform well across the nine datasets. In particular, LL SVI, SVI, and GPs all exceed the 95\% coverage threshold on average, and linear regression comes within statistical sampling error of this threshold. Over the regression datasets we considered, LL SVI had the lowest mean width while maintaining at least 95\% coverage. For specific values of coverage and width for methods on a particular dataset, see Tables \ref{tab:regression-coverage} and \ref{tab:regression-width} in the appendix.
Figure \ref{fig:reg_cov_width} also demonstrates an important point that will persist through our results. Coverage and width are directly related. Although high coverage can and ideally does occur when width is low, we typically observe that high levels of coverage occur in conjunction with high levels of width.
\begin{figure}[h]
\subfloat[Coverage and width across regression datasets]{\includegraphics[width = .5\textwidth]{images/revised_coverage_width_no_inset.png}} \\
\subfloat[Detailed view]{\includegraphics[width = .5\textwidth]{images/revised_coverage_width_inset.png}}
\caption{The mean coverage and widths of models' prediction intervals averaged over the nine regression datasets we considered (\textbf{panel a}). Error bars indicate the standard deviation for both coverage and width across all experiments. In \textbf{panel b} we observe that the four methods which maintained 95\% coverage did so because they had appropriately wide prediction intervals. LL SVI had the lowest average width while maintaining at least 95\% coverage.}
\label{fig:reg_cov_width}
\end{figure}
\subsection{MNIST}
In the classification setting, we begin by calculating coverage and width for predictions from \citet{Ovadia2019-tt} on MNIST and shifted MNIST data. \citet{Ovadia2019-tt} used a LeNet architecture and we refer to their manuscript for more details on their implementation.
Figure \ref{fig:mnist} shows how coverage and width co-vary as dataset shift increases. The elevated width for SVI on these dataset splits indicates that the posterior predictions of label probabilities were the most diffuse to begin with among all models. In Figure \ref{fig:mnist}, all seven models have at least 95\% coverage with a 15 degree rotation shift. Most models do not see an appreciable increase in the average width of the 95\% prediction set, except for SVI. The average width for SVI jumps to over 2 at 15 degrees rotation. As the amount of shift increases, coverage decreases across all methods in a comparable way. In the rotation shifts, we observe that coverage increases and width decreases after about 120 degrees of shift. This is likely due to some of the natural symmetry of several digits (i.e. 0 and 8 look identical after 180 degrees of rotation).
SVI maintains higher levels of coverage, but with a compensatory increase in width. In fact, there is a Pearson correlation of 0.9 between the width of the SVI prediction set and the distance from the maximum shift of 14 pixels. The maximum shift occurs when the original center of the image is broken across the edge as the image rolls to the right. Figure \ref{fig:rolling}'s right most example is a case of the maximum shift of 14 pixels on a MNIST digit. This strong correlation between width and severity of shift for some methods makes the width of a prediction set at a fixed $\alpha$ level a natural proxy to detect dataset shift. For this simple dataset, SVI outperforms other models with regards to coverage and width properties. It is the only model that has an average width that corresponds to the amount of shift observed and provides the highest level of average coverage.
\begin{figure}[]
\centering
\includegraphics[width = .5\textwidth]{images/mnist_fig1.png}
\caption{The effect of rotation and translation on coverage and width, respectively, for MNIST.}
\label{fig:mnist}
\end{figure}
\subsection{CIFAR-10}
Next, we consider a more complex image dataset, CIFAR-10. \citet{Ovadia2019-tt} trained 20-layer and 50-layer ResNets. Figure \ref{fig:cifar-translation} shows how the width of the prediction sets increases as translation shift increases. This shift ``rolls'' the image pixel by pixel such that the right-most column in the image becomes the left-most column. Temperature scaling and ensembles, in particular, have at least 95\% coverage for every translation, although all methods have high levels of coverage on average (though not exceeding 95\%). We find that this high coverage comes with increases in width as shift increases. Figure \ref{fig:cifar-translation} shows that temperature scaling has the highest average width across all models and shifts. Ensembling has the lowest width for the methods that maintain coverage of at least 95\% across all shifts.
All models have the same encouraging pattern of width increasing as shift increases up to 16 pixels, then decreasing. As CIFAR-10 images are 32 pixels in width and height, this maximum width occurs when the original center of the image is rolled over to and broken by the edge of the image. This likely breaks common features that the methods have learned for classification onto both sides of the image, resulting in decreased classification accuracy and higher levels of uncertainty.
Among the models which satisfy 95\% coverage levels on all shifts, ensemble models have lower width than temperature scaling models. Under translation shifts on CIFAR-10, ensemble methods perform the best given their high coverage and lower width.
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/revised_fig2.png}
\caption{The effect of translation shifts on coverage and width in CIFAR-10 images. Coverage remains robust across all pixel shifts, while width increases.}
\label{fig:cifar-translation}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/cifar_coverage_vs_width.png}
\caption{The effect of corruption intensity on coverage levels vs. width in CIFAR-10-C. Each facet panel represents a different corruption level, while points are the coverage of a model on one of 16 corruptions. Each facet has 80 points per method, since 5 iterations were trained per method. For methods with points at the same coverage level, the superior method is to the left as it has a lower width. }
\label{fig:cifar-coverage_v_width}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/revised_ensemble_5.png}
\caption{The coverage and width of ensemble and non-ensemble methods at the fifth level out of five levels of corruption in CIFAR-10-C. The black line is a simple linear regression of coverage against width. We then can consider the fraction of points for a particular method (in this case, ensembling) that are above the regression line (see Figures \ref{fig:cifar_percents} and \ref{fig:imagenet_percents}). The higher the fraction of these points above the regression line, the better the method is at providing higher coverage at a relatively smaller width than other methods.}
\label{fig:cifar_level_3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/cifar_percents.png}
\caption{The fraction of marginal coverage levels achieved on CIFAR-10-C corruptions by our assessed methods that are above a regression line of coverage vs width at a specific corruption level. Methods that have better coverage levels at the same width will have a higher fraction of points above the regression line (see Figure \ref{fig:cifar_level_3} for an example). At low levels of shift, dropout, ensemble, SVI, and temperature scaling have strictly better relative performance. As shift increases, poor coverage levels in general cause models to have more parity.}
\label{fig:cifar_percents}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{images/imagenet_percents.png}
\caption{The fraction of marginal coverage levels achieved on ImageNet-C corruptions by our assessed methods that are above a regression line of coverage vs width at a specific corruption level. Methods that have better coverage levels at the same width will have a higher fraction of points above the regression line (see Figure \ref{fig:cifar_level_3} for an example). Ensembling produces the best coverage levels given specific widths across all levels of corruption. However, at higher level of dataset shift, there is more parity between methods.}
\label{fig:imagenet_percents}
\end{figure}
Additionally, we consider the coverage properties of models on 16 different corruptions of CIFAR-10 from \citet{Hendrycks2016-ov}. Figure \ref{fig:cifar-coverage_v_width} shows coverage vs. width over varying levels of shift intensity. Models with points dispersed further to the right have higher widths for the same level of coverage. An ideal model would have a cluster of points above the 95\% coverage line in the far left portion of each facet. For models with similar levels of coverage, the superior method will have points further to the left.
Figure \ref{fig:cifar-coverage_v_width} demonstrates that at the lowest shift intensity, ensemble models, dropout, temperature scaling, and SVI are generally able to provide high levels of coverage on most corruption types. However, as the intensity of the shift increases, coverage decreases. Up to the third intensity level, ensembles and dropout maintain at least 95\% coverage in at least half of their 80 model-corruption evaluations. At higher levels of shift intensity, ensembles, dropout, and temperature scaling consistently have the highest levels of coverage. Although these higher performing methods have similar levels of coverage, they differ in width.
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{images/rank.png}
\caption{The ranks of each method's performance with respect to each metric we consider on CIFAR-10-C. For Brier Score and ECE, lower is better, while for coverage, higher is better. We observe that all three metrics have a generally consistent ordering, with coverage closely corresponding to the rankings of ECE.}
\label{fig:rank}
\end{figure*}
We also present a way to quantify the relative strength of each method at a specific level of corruption. In Figure \ref{fig:cifar_level_3}, for instance, we plot only the coverages and widths of methods at one level of corruption and use the fraction of a particular method's points that lie above the regression line. Methods that are more effective at providing higher coverage levels at lower widths will have more points above this regression line.
For each of the five corruption levels, we calculated a regression line modeling coverage as a function of width. Figure \ref{fig:cifar_percents} presents, for each method, the fraction of marginal coverages on the various CIFAR-10-C datasets that exceed the linear regression prediction. The larger the fraction, the better the marginal coverage of a method given a prediction interval/set of a particular width. We observe that dropout and ensembles have strong relative performance compared to the other methods across all five levels of shift.
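A minimal version of this diagnostic, written as our own NumPy sketch (the inputs are the pooled points of one corruption level; all names are illustrative), is:
\begin{verbatim}
import numpy as np

def fraction_above_line(width, coverage, method_ids, target):
    # Fit coverage ~ width over all points at one corruption level,
    # then report the fraction of `target`'s points above the line.
    slope, intercept = np.polyfit(width, coverage, deg=1)
    above = coverage > slope * width + intercept
    return above[method_ids == target].mean()
\end{verbatim}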
Finally, we compared the relative rank order of these methods across coverage as well as two common metrics in the uncertainty quantification literature: Brier score and ECE. Figure \ref{fig:rank} shows that the rankings are similar across methods. In particular, coverage has a nearly identical pattern to ECE, with changes only among the lower ranking methods.
\subsection{ImageNet}
Finally, we analyze coverage and width on ImageNet and ImageNet-C from \citet{Hendrycks2016-ov}. Figure \ref{fig:imagenet-corruption_v_width} shows coverage vs. width plots similar to Figure \ref{fig:cifar-coverage_v_width}. We find that over the 16 different corruptions at 5 levels, ensembles, temperature scaling, and dropout models had consistently higher levels of coverage. Unsurprisingly, Figure \ref{fig:imagenet-corruption_v_width} shows that these methods have correspondingly higher widths. Figure \ref{fig:imagenet_percents} reports the relative performance of each method across corruption levels. Ensembles had the highest fraction of marginal coverages on ImageNet-C datasets above the regression lines at each corruption level. Dropout, LL Dropout, and temperature scaling all had similar performances, while LL SVI had a much lower fraction of marginal coverages above the regression lines. None of the methods increase their width enough to maintain the 95\% coverage levels seen on in-distribution test data as dataset shift increases.
\section{Discussion}
We have provided the first comprehensive empirical study of the frequentist-style coverage properties of popular uncertainty quantification techniques for deep learning models. In regression tasks, LL SVI, SVI, and Gaussian processes all had high levels of coverage across nearly all benchmarks. LL SVI, in particular, had the lowest widths. SVI also had excellent coverage properties across most tasks with tighter intervals than GPs and linear regression. In contrast, the methods based on ensembles and Monte Carlo dropout had significantly worse coverage due to their overly confident and tight prediction intervals.
In the classification setting, all methods showed very high coverage in the i.i.d.\ setting (i.e. no dataset shift), as coverage is reflective of top-1 accuracy in this scenario. On MNIST data, SVI had the best performance, maintaining high levels of coverage under slight dataset shift and scaling the width of its prediction intervals more appropriately as shift increased relative to other methods. On CIFAR-10 data and ImageNet, ensemble models were superior. They had the highest coverages relative to other methods, as demonstrated in Figures \ref{fig:cifar_percents} and \ref{fig:imagenet_percents}.
An important consideration throughout this work is that the choice of hyperparameters in nearly all of the analyzed methods has a significant impact on the uncertainty estimates. We set hyperparameters and optimized model parameters according to community best practices, in an attempt to reflect what a ``real world'' machine learning practitioner might do: selecting hyperparameters by minimizing validation loss over nested cross validation. Our work is a measurement of the empirical coverage properties of these methods as one would typically utilize them, rather than an exploration of how pathological hyperparameters can skew uncertainty estimates to 0 or to infinity.
Of particular note is that the width of a prediction interval or set typically correlates with the degree of dataset shift. For instance, when the translation shift is applied to MNIST, both prediction set width and dataset shift are maximized around 14 pixels, with a 0.9 Pearson correlation between width and shift. Width can therefore serve as a soft proxy for dataset shift and potentially detect shift in real world scenarios. At the same time, the ranks of coverage, Brier score, and ECE are all generally consistent. However, coverage is arguably the most interpretable to downstream users of machine learning models. Clinicians, for instance, may not have the technical training to develop an intuition for what specific values of Brier score or ECE mean in practice, while coverage and width are readily understandable. \citet{Manrai2014-xo} already demonstrated clinicians' general lack of intuition about positive predictive value, and these uncertainty quantification metrics are more difficult to internalize than PPV.
In summary, we find that popular uncertainty quantification methods for deep learning models do not provide good coverage properties under moderate levels of dataset shift. Although the width of prediction regions does increase under increasing amounts of shift, these changes are not enough to maintain the levels of coverage seen on i.i.d.\ data. We conclude that the methods we evaluated for uncertainty quantification are likely insufficient for use in high-stakes, real-world applications where dataset shift is likely to occur. However, marginal coverage of a prediction interval or set is a natural and intuitive metric to quantify uncertainty, and the width of a prediction interval/set is an additional tool that captures dataset shift and provides interpretable information to downstream users of machine learning models.
\section{Introduction}
\label{SecIntroduction}
The key step of many tasks in both computer vision and graphics can be formulated as image smoothing. At the same time, the required smoothing properties can vary dramatically across tasks. In this paper, depending on the required smoothing properties, we roughly classify a large number of applications into four groups.
Applications in the first group require the smoothing operator to smooth out small details while preserving strong edges; the amplitudes of these strong edges may be reduced, but the edges should be neither blurred nor sharpened. Representative tasks in this group are image detail enhancement and HDR tone mapping \cite{farbman2008edge,fattal2007multiscale,he2013guided}. Blurring edges results in halos, while sharpening edges leads to gradient reversals \cite{farbman2008edge}.
\begin{figure}
\subfigure[]
{
\includegraphics[width=0.222\linewidth]{FigCover_DetailEnhancement.pdf}
}
\subfigure[]
{
\includegraphics[width=0.222\linewidth]{FigCover_ClipArt.pdf}
}
\subfigure[]
{
\includegraphics[width=0.222\linewidth]{FigCover_GuidedDepthUpsampling.pdf}
}
\subfigure[]
{
\includegraphics[width=0.222\linewidth]{FigCover_TextureSmooth.pdf}
}
\caption{Our method is capable of (a) image detail enhancement, (b) clip-art compression artifacts removal, (c) guided depth map upsampling and (d) image texture removal. These applications are representatives of edge-preserving and structure-preserving image smoothing and require contradictory smoothing properties.}\label{FigCover}
\end{figure}
The second group includes tasks like clip-art compression artifacts removal \cite{nguyen2015fast,xu2011image}, image abstraction and pencil sketch production \cite{xu2011image}. In contrast to those in the first group, these tasks require smoothing out small details while sharpening strong edges. This is because edges can be blurred in the compressed clip-art image and need to be sharpened when the image is recovered (see Fig.~\ref{FigCover}(b) for an example), and sharper edges produce better visual quality in image abstraction and pencil sketches. At the same time, the amplitudes of strong edges must not be reduced in these tasks.
Guided image filtering, such as guided depth map upsampling \cite{park2011high,ferstl2013image,liu2017robust} and flash/no flash filtering \cite{kopf2007joint,petschnigg2004digital}, is categorized into the third group. The structure inconsistency between the guidance image and the target image, which can cause blurred edges and texture copy artifacts in the smoothed image \cite{ham2015robust,liu2017robust}, should be properly handled by a specially designed smoothing operator. These tasks also need edges to be sharpened in the smoothed image, because low-quality depth capture and noise in the no flash image can lead to blurred edges (see Fig.~\ref{FigCover}(c) for an example).
Tasks in the fourth group require smoothing the image in a scale-aware manner, e.g., image texture removal \cite{xu2012structure,zhang2014rolling,cho2014bilateral}. These tasks require small structures to be smoothed out even when they contain strong edges, while large structures should be properly preserved even when their edges are weak (see Fig.~\ref{FigCover}(d) for an example). This is totally different from the above three groups, which all aim at preserving strong edges.
To be more explicit, we categorize the smoothing procedures in the first to the third groups as \emph{edge-preserving image smoothing} since they try to preserve salient edges, while the smoothing processes in the fourth group are classified as \emph{structure-preserving image smoothing} because they aim at preserving salient structures.
A diversity of edge-preserving and structure-preserving smoothing operators have been proposed for various tasks. Generally, each of them is designed to meet the requirements of certain applications, and thus its inherent smoothing nature is usually fixed. Therefore, there is seldom a smoothing operator that can meet all the smoothing requirements of the above four groups, which are quite different or even contradictory. For example, the $L_0$ norm smoothing \cite{xu2011image} can sharpen strong edges and is suitable for clip-art compression artifacts removal; however, it leads to gradient reversals in image detail enhancement and HDR tone mapping. The weighted least squares (WLS) smoothing \cite{farbman2008edge} performs well in image detail enhancement and HDR tone mapping, but it is capable of neither sharpening edges nor structure-preserving smoothing.
In contrast to most of the smoothing operators in the literature, a new smoothing operator, based on a non-convex non-smooth optimization framework, is proposed in this paper. It can achieve different and even contradictory smoothing behaviors and is able to handle the applications in all four groups mentioned above. The main contributions of this paper are as follows:
\begin{itemize}
\item[1.] We introduce the \emph{truncated Huber penalty} function which has seldom been used in image smoothing. By varying the parameters, it shows strong flexibility.
\item[2.] A robust non-convex non-smooth optimization framework is proposed. Combined with the strong flexibility of the truncated Huber penalty function, our model can achieve various and even contradictory smoothing behaviors. We show that it is able to handle the tasks in the four groups mentioned above, which has seldom been achieved by previous smoothing operators.
\item[3.] An efficient numerical solution to the proposed optimization framework is provided. Its convergence is theoretically guaranteed.
\item[4.] Our method is able to outperform the specially designed approaches in many tasks and state-of-the-art performance is achieved.
\end{itemize}
\section{Related Work}
\label{SecRelatedWork}
Numerous smoothing operators have been proposed in recent decades. In terms of edge-preserving smoothing, the bilateral filter (BLF) \cite{tomasi1998bilateral} is an early work that has been used in various tasks such as image detail enhancement \cite{fattal2007multiscale} and HDR tone mapping \cite{durand2002fast}. However, it is prone to producing results with gradient reversals and halos \cite{farbman2008edge}, and its alternatives \cite{gastal2012adaptive,gastal2011domain} share a similar problem. The guided filter (GF) \cite{he2013guided} can produce results free of gradient reversals, but halos still exist. The WLS smoothing \cite{farbman2008edge} solves a global optimization problem and performs well in handling these artifacts. The $L_0$ norm smoothing is able to eliminate low-amplitude structures while sharpening strong edges, and can be applied to the tasks in the second group. To handle the structure inconsistency problem, Shen et~al. \cite{shen2015mutual} proposed to perform mutual-structure joint filtering. They also explored the relation between the guidance image and the target image by optimizing a scale map \cite{shen2015multispectral}; however, additional processing was adopted for structure inconsistency handling. Ham et~al. \cite{ham2015robust} proposed to handle the structure inconsistency by combining a static guidance weight with a Welsch's penalty \cite{holland1977robust} regularized smoothness term, which led to the static/dynamic (SD) filter. Gu et~al. \cite{gu2017learning} presented a weighted analysis representation model for guided depth map enhancement.
In terms of structure-preserving smoothing, Zhang et~al. \cite{zhang2014rolling} proposed to smooth structures of different scales with a rolling guidance filter (RGF). Cho et~al. \cite{cho2014bilateral} modified the original BLF with local patch-based analysis of texture features and obtained a bilateral texture filter (BTF) for image texture removal. Karacan et~al. \cite{karacan2013structure} proposed to smooth image textures by making use of region covariances that capture local structure and textural information. Xu et~al. \cite{xu2012structure} adopted the relative total variation (RTV) as a prior to regularize the texture smoothing procedure. Fan et~al. \cite{fan2018image,fan2019general} proposed to perform various kinds of image smoothing through convolutional neural networks. Chan et~al. \cite{chan2005aspects} proved that the TV-$L_1$ model \cite{chan2005aspects,nikolova2004variational} can smooth images in a scale-aware manner; it is thus ideal for structure-preserving smoothing such as image texture removal \cite{buades2010fast}.
Most of the approaches mentioned above are limited to a few applications because their inherent smoothing natures are usually fixed. In contrast, the method proposed in this paper has strong flexibility in achieving various smoothing behaviors, which enables wider applications than most of them. Moreover, our method shows better performance than these methods in several applications for which they were specially designed.
\section{Our Approach}
\subsection{Truncated Huber Penalty Function}
\label{SecTruncatedHuber}
We first introduce the truncated Huber penalty function which is defined as:
\begin{small}
\begin{eqnarray}\label{EqTruncatedHuber}
{ h_T(x)=\left\{\begin{array}{l}
h(x), \ \ \ \ \ |x|\leq b\\
b -\frac{a}{2}, \ \ \ |x|>b
\end{array}\right.
\text{s.t.} \ \ \ a \leq b,
}
\end{eqnarray}
\end{small}
where $a,b$ are constants. $h(\cdot)$ is the Huber penalty function \cite{huber1964robust} defined as:
\begin{small}
\begin{eqnarray}\label{EqHuber}
{
h(x)=\left\{\begin{array}{l}
\frac{1}{2a}x^2, \ \ \ \ \ \ |x|<a\\
|x|-\frac{a}{2}, \ \ |x|\geq a
\end{array}\right.,
}
\end{eqnarray}
\end{small}
$h_T(\cdot)$ and $h(\cdot)$ are plotted in Fig.~\ref{FigPenaltyComp}(a) with $a=\epsilon$, where $\epsilon$ is a sufficiently small value (e.g., $\epsilon=10^{-7}$). $h(\cdot)$ is an edge-preserving penalty function, but it cannot sharpen edges when adopted to regularize the smoothing procedure. In contrast, $h_T(\cdot)$ can sharpen edges because, due to the truncation, it does not penalize strong image edges. The Welsch's penalty function \cite{holland1977robust}, which was adopted in the recently proposed SD filter \cite{ham2015robust}, is also plotted in the figure. This penalty function is known to be capable of sharpening edges, also because it hardly penalizes strong image edges. The Welsch's penalty function is close to the $L_2$ norm when the input is small, while $h_T(\cdot)$ is close to the $L_1$ norm when $a$ is set sufficiently small, which indicates that $h_T(\cdot)$ can better preserve weak edges than the Welsch's penalty function.
\begin{figure}
\centering
\subfigure[]
{
\includegraphics[width=0.35\linewidth]{FigPenaltyComp_Penalty.pdf}
}
\subfigure[]
{
\includegraphics[width=0.35\linewidth]{FigPenaltyComp_VariousTruncatedHuber.pdf}
}
\caption{Plots of (a) different penalty functions and (b) the truncated Huber penalty function with different parameter settings.}\label{FigPenaltyComp}
\end{figure}
With different parameter settings, $h_T(\cdot)$ shows strong flexibility in yielding different penalty behaviors. Assume the input intensity values lie within $[0, I_m]$; then the amplitude of any edge falls in $[0, I_m]$. We first set $a=\epsilon$. If we further set $b>I_m$, $h_T(\cdot)$ is exactly the same as $h(\cdot)$ because the second condition in Eq.~(\ref{EqTruncatedHuber}) can never be met. Since $a$ is sufficiently small, $h_T(\cdot)$ is close to the $L_1$ norm in this case, and it is thus an edge-preserving penalty function that does not sharpen edges. Conversely, when we set $b<I_m$, the truncation in $h_T(\cdot)$ is activated. This penalizes weak edges while leaving strong edges unpenalized, and thus the strong edges are sharpened. In short, $b$ acts as a switch that decides whether $h_T(\cdot)$ sharpens edges or not. Similarly, by setting $a=b>I_m$ or $a=b<I_m$, $h_T(\cdot)$ can easily be switched between the $L_2$ norm and the truncated $L_2$ norm. Note that the truncated $L_2$ norm is also able to sharpen edges \cite{xu2013unnatural}. In contrast, the Welsch's penalty function does not enjoy this kind of flexibility. Different cases of $h_T(\cdot)$ are illustrated in Fig.~\ref{FigPenaltyComp}(b).
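For reference, Eqs.~(\ref{EqTruncatedHuber}) and (\ref{EqHuber}) translate directly into the following NumPy sketch (our own illustration; the vectorized form and the regime comments simply restate the switch behavior above):
\begin{verbatim}
import numpy as np

def truncated_huber(x, a, b):
    # h_T of Eq. (1): quadratic for |x| < a, linear for a <= |x| <= b,
    # constant b - a/2 for |x| > b (assuming a <= b).
    x = np.abs(x)
    h = np.where(x < a, x ** 2 / (2 * a), x - a / 2)
    return np.where(x <= b, h, b - a / 2)

# With a = eps and b > I_m the truncation never fires (L1-like, no
# sharpening); with b < I_m strong edges stop being penalized and are
# therefore sharpened.
\end{verbatim}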
\subsection{Model}
\label{SecModel}
Given an input image $f$ and a guidance image $g$, the smoothed output image $u$ is the solution to the following objective function:
\begin{scriptsize}
\begin{equation}\label{EqObjFun}
{
E_u(u)=\sum_{i}\left(\sum_{j\in N_d(i)}h_T(u_i-f_j) + \lambda\sum_{j\in N_s(i)}\omega_{i,j}h_T(u_i-u_j)\right),
}
\end{equation}
\end{scriptsize}
where $h_T$ is defined in Eq.(\ref{EqTruncatedHuber}); $N_d(i)$ is the $(2r_d+1)\times (2r_d+1)$ square patch centered at $i$; $N_s(i)$ is the $(2r_s+1)\times (2r_s+1)$ square patch centered at $i$; $\lambda$ is a parameter that controls the overall smoothing strength. To be clear, we adopt $\{a_d,b_d\}$ and $\{a_s,b_s\}$ to denote the parameters of $h_T(\cdot)$ in the data term and smoothness term, respectively. The guidance weight $\omega_{i,j}$ is defined as:
\begin{small}
\begin{equation}\label{EqGuidanceWeight}
{
\omega_{i,j}=\frac{1}{(|g_i-g_j| + \delta)^\alpha},
}
\end{equation}
\end{small}
\noindent where $\alpha$ determines the sensitivity to the edges in $g$ which can be the input image, i.e., $g=f$. $|\cdot|$ represents the absolute value. $\delta$ is a small constant being set as $\delta=10^{-7}$.
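The guidance weight of Eq.~(\ref{EqGuidanceWeight}) is equally simple to compute; the following sketch (ours, shown for horizontally adjacent pixels only, whereas the model uses all $j\in N_s(i)$) illustrates it:
\begin{verbatim}
import numpy as np

def guidance_weight(g, alpha, delta=1e-7):
    # Eq. (4) between each pixel and its right neighbour.
    return 1.0 / (np.abs(g[:, 1:] - g[:, :-1]) + delta) ** alpha
\end{verbatim}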
The adoption of $h_T(\cdot)$ gives our model in Eq.~(\ref{EqObjFun}) strong flexibility. As will be shown in the following property analysis section, with different parameter settings our model is able to achieve different smoothing behaviors, and it is thus capable of various tasks that require either edge-preserving or structure-preserving smoothing.
\subsection{Numerical Solution}
Our model in Eq.~(\ref{EqObjFun}) is not only non-convex but also non-smooth, which arises from the adopted $h_T(\cdot)$. Commonly used approaches \cite{lanckriet2009convergence,nikolova2005analysis,wang2008new,zhang2004surrogate} for solving non-convex optimization problems are not applicable. To tackle this problem, we first rewrite $h_T(\cdot)$ in a new equivalent form. By defining $\nabla^d_{i,j}=u_i-f_j$ and $\nabla^s_{i,j}=u_i-u_j$, we have:
\begin{small}
\begin{equation}\label{EqRelationWithHuberL0}
{
h_T(\nabla^\ast_{i,j})=\min_{l^\ast_{i,j}}\left\{h(\nabla^\ast_{i,j}-l^\ast_{i,j})+(b_\ast-\frac{a_\ast}{2})|l^\ast_{i,j}|_0\right\},
}
\end{equation}
\end{small}
where $\ast\in\{d,s\}$, $|l^\ast_{i,j}|_0$ is the $L_0$ norm of $l^\ast_{i,j}$. The minimum of the right side of Eq.~(\ref{EqRelationWithHuberL0}) is obtained on the condition:
\begin{small}
\begin{eqnarray}\label{EqTruncatedHuberMinCondition}
{
l^\ast_{i,j}=\left\{\begin{array}{l}
0, \ \ \ \ \ \ \ \ |\nabla^\ast_{i,j}|\leq b_\ast\\
\nabla^\ast_{i,j}, \ \ |\nabla^\ast_{i,j}|>b_\ast
\end{array}\right.
, \ \ \ast\in\{d,s\}.
}
\end{eqnarray}
\end{small}
The detailed proof of Eq.~(\ref{EqRelationWithHuberL0}) and Eq.~(\ref{EqTruncatedHuberMinCondition}) is provided in our supplementary file. These two equations also theoretically validate our analysis in Fig.~\ref{FigPenaltyComp}(b): we have $|\nabla^\ast_{i,j}|\in[0, I_m]$ if the intensity values are in $[0, I_m]$. Then if $b_\ast>I_m$, based on Eq.~(\ref{EqRelationWithHuberL0}) and Eq.~(\ref{EqTruncatedHuberMinCondition}), we always have $h_T(\nabla^\ast_{i,j})=h(\nabla^\ast_{i,j})$, which means $h_T(\cdot)$ degrades to $h(\cdot)$.
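The optimum condition of Eq.~(\ref{EqTruncatedHuberMinCondition}) is a simple hard threshold; in code (our own sketch, applied elementwise to either $\nabla^d$ or $\nabla^s$):
\begin{verbatim}
import numpy as np

def update_l(grad, b):
    # Eq. (6): l = 0 where |grad| <= b, l = grad where |grad| > b,
    # so the truncated part of the penalty is absorbed by l.
    return np.where(np.abs(grad) <= b, 0.0, grad)
\end{verbatim}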
A new energy function is defined as:
\begin{small}
\begin{equation}\label{EqObjFunAuxUL}
{
\begin{array}{r}
E_{ul}(u, l^d, l^s)=\sum\limits_{i,j}\left(h(\nabla^{d}_{i,j} - l^d_{i,j}) + (b_d-\frac{a_d}{2})|l^d_{i,j}|_0 \right)\\
\ \ \ \ \ \ + \lambda\sum\limits_{i,j}\omega_{i,j}\left(h(\nabla^{s}_{i,j} - l^s_{i,j}) + (b_s-\frac{a_s}{2})|l^s_{i,j}|_0 \right)
\end{array}
}.
\end{equation}
\end{small}
Based on Eq.~(\ref{EqRelationWithHuberL0}) and Eq.~(\ref{EqTruncatedHuberMinCondition}), we then have:
\begin{small}
\begin{equation}\label{EqEnergyRelation1}
{
E_u(u)=\min_{l^\ast}E_{ul}(u, l^d, l^s),\ \ast\in\{d,s\}.
}
\end{equation}
\end{small}
Given Eq.~(\ref{EqTruncatedHuberMinCondition}) as the optimum condition of Eq.~(\ref{EqEnergyRelation1}) with respect to $l^\ast$, optimizing $E_{ul}(u, l^d, l^s)$ with respect to $u$ only involves the Huber penalty function $h(\cdot)$. The problem can thus be optimized through the half-quadratic (HQ) optimization technique \cite{geman1995nonlinear,nikolova2005analysis}. More specifically, there exist a variable $\mu^\ast (\ast\in\{d,s\})$ and a function $\psi(\mu^\ast_{i,j})$ with respect to $\mu^\ast$ such that:
\begin{small}
\begin{equation}\label{EqMultHQ}
{
h(\nabla^\ast_{i,j} - l^\ast_{i,j})=\min_{\mu^\ast_{i,j}}\left\{\mu^\ast_{i,j}(\nabla^\ast_{i,j}-l^\ast_{i,j})^2 + \psi(\mu^\ast_{i,j}) \right\},
}
\end{equation}
\end{small}
where the optimum is yielded on the condition:
\begin{small}
\begin{eqnarray}\label{EqMultHQCondition}
{
\mu^\ast_{i,j}=\left\{\begin{array}{l}
\frac{1}{2a_\ast}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ |\nabla^\ast_{i,j} - l^\ast_{i,j}|< a_\ast\\
\frac{1}{2|\nabla^\ast_{i,j} - l^\ast_{i,j}|}, \ \ \ \ |\nabla^\ast_{i,j} - l^\ast_{i,j}| \geq a_\ast
\end{array}\right.
, \ \ \ast\in\{d,s\}.
}
\end{eqnarray}
\end{small}
The detailed proof of Eq.~(\ref{EqMultHQ}) and Eq.~(\ref{EqMultHQCondition}) is provided in our supplementary file. Then we can further define a new energy function:
\begin{footnotesize}
\begin{equation}\label{EqObjFunAuxULMu}
{
\begin{array}{l}
E_{ul\mu}(u, l^d, l^s, \mu^d, \mu^s)= \\
\ \ \ \ \ \ \ \ \sum\limits_{i,j}\left(\mu^d_{i,j}(\nabla^{d}_{i,j} - l^d_{i,j})^2 + \psi(\mu^d_{i,j}) + (b_d-\frac{a_d}{2})|l^d_{i,j}|_0 \right) + \\
\ \lambda\sum\limits_{i,j}\omega_{i,j}\left(\mu^s_{i,j}(\nabla^{s}_{i,j} - l^s_{i,j})^2 + \psi(\mu^s_{i,j}) + (b_s-\frac{a_s}{2})|l^s_{i,j}|_0 \right).
\end{array}
}
\end{equation}
\end{footnotesize}
Based on Eq.~(\ref{EqMultHQ}) and Eq.~(\ref{EqMultHQCondition}), we then have:
\begin{small}
\begin{equation}\label{EqEnergyRelation2}
{
E_{ul}(u, l^\ast)=\min\limits_{\mu^\ast}E_{ul\mu}(u, l^\ast, \mu^\ast),\ \ast\in\{d,s\}.
}
\end{equation}
\end{small}
Given Eq.~(\ref{EqMultHQCondition}) as the optimum condition of $\mu^\ast$ for Eq.~(\ref{EqEnergyRelation2}), optimizing $E_{ul\mu}(u, l^d, l^s, \mu^d, \mu^s)$ with respect to $u$ only involves the $L_2$ norm penalty, which has a closed-form solution. However, since the optimum conditions in Eq.~(\ref{EqTruncatedHuberMinCondition}) and Eq.~(\ref{EqMultHQCondition}) both involve $u$, the final solution $u$ can only be obtained in an iterative manner. Assume we have obtained $u^k$; then $(l^\ast)^{k}$ and $(\mu^\ast)^{k}$ $(\ast\in\{s,d\})$ can be updated through Eq.~(\ref{EqTruncatedHuberMinCondition}) and Eq.~(\ref{EqMultHQCondition}) with $u^k$. Finally, $u^{k+1}$ is obtained with:
\begin{small}
\begin{equation}\label{EqIterativeSolution}
{
u^{k+1}=\min_{u}E_{ul\mu}\left(u, (l^\ast)^k, (\mu^\ast)^k\right),
}
\end{equation}
\end{small}
Eq.~(\ref{EqIterativeSolution}) has a closed-form solution:
\begin{small}
\begin{equation}\label{EqCloseFormSolution}
{
u^{k+1}=\left(\mathcal{A}^k - 2\lambda\mathcal{W}^k\right)^{-1}\left(D^k + 2\lambda S^k\right),
}
\end{equation}
\end{small}
where $\mathcal{W}^k$ is an affinity matrix with $\mathcal{W}^k_{i,j}=\omega_{i,j}(\mu^s_{i,j})^k$, $\mathcal{A}^k$ is a diagonal matrix with $\mathcal{A}^k_{ii}=\sum_{j\in N_d(i)}(\mu^d_{i,j})^k + 2\lambda\sum_{j\in N_s(i)}\omega_{i,j}(\mu^s_{i,j})^k$, $D^k$ is a vector with $D^k_i=\sum_{j\in N_d(i)}(\mu^d_{i,j})^k(f_j+(l^d_{i,j})^k)$ and $S^k$ is also a vector with $S^k_i=\sum_{j\in N_s(i)}\omega_{i,j}(\mu^s_{i,j})^k(l^s_{i,j})^k$.
The above optimization procedure monotonically decreases the value of $E_u(u)$ in each step, so its convergence is theoretically guaranteed. Given $u^k$ in the $k$th iteration and $\ast\in\{s,d\}$, for any $u$ we have:
\begin{small}
\begin{equation}\label{EqEnergyRelationTruncation}
{
E_u(u)\leq E_{ul}(u, (l^\ast)^k),\ E_u(u^k)=E_{ul}(u^k, (l^\ast)^k),
}
\end{equation}
\begin{eqnarray}\label{EqEnergyRelationHQ}
{\left\{
\begin{array}{l}
E_{ul}(u,(l^\ast)^k)\leq E_{ul\mu}(u, (l^\ast)^k, (\mu^\ast)^k)\\
E_{ul}(u^k,(l^\ast)^k)=E_{ul\mu}(u^k, (l^\ast)^k, (\mu^\ast)^k)
\end{array}\right. .
}
\end{eqnarray}
\end{small}
Given that $(l^\ast)^k$ has been updated through Eq.~(\ref{EqTruncatedHuberMinCondition}), Eq.~(\ref{EqEnergyRelationTruncation}) follows from Eq.~(\ref{EqEnergyRelation1}) and Eq.~(\ref{EqRelationWithHuberL0}). After $(\mu^\ast)^k$ has been updated through Eq.~(\ref{EqMultHQCondition}), Eq.~(\ref{EqEnergyRelationHQ}) follows from Eq.~(\ref{EqEnergyRelation2}) and Eq.~(\ref{EqMultHQ}). We now have:
\begin{small}
\begin{equation}\label{EqEnergyDecrease1}
{
\begin{array}{l}
E_{ul}(u^{k+1},(l^\ast)^k)\leq E_{ul\mu}(u^{k+1}, (l^\ast)^k, (\mu^\ast)^k)\\
\leq E_{ul\mu}(u^{k}, (l^\ast)^k, (\mu^\ast)^k)=E_{ul}(u^{k},(l^\ast)^k),
\end{array}
}
\end{equation}
\end{small}
where the first and second inequalities follow from Eq.~(\ref{EqEnergyRelationHQ}) and Eq.~(\ref{EqIterativeSolution}), respectively. We finally have:
\begin{small}
\begin{equation}\label{EqEnergyDecrease2}
{
E_u(u^{k+1})\leq E_{ul}(u^{k+1}, (l^\ast)^k)\leq E_{ul}(u^{k}, (l^\ast)^k)=E_{u}(u^k),
}
\end{equation}
\end{small}
where the first and second inequalities follow from Eq.~(\ref{EqEnergyRelationTruncation}) and Eq.~(\ref{EqEnergyDecrease1}), respectively. Since the value of $E_u(u)$ is bounded from below, Eq.~(\ref{EqEnergyDecrease2}) indicates that the convergence of our iterative scheme is theoretically guaranteed.
The above optimization procedure is iterated $N$ times to obtain the output $u^N$. In all our experiments, we set $u^0=f$, which produces promising results in each application. Our optimization procedure is summarized in Algorithm~\ref{Alg}.
\begin{algorithm}[t]
\caption {Image Smoothing via Non-convex Non-smooth Optimization}\label{Alg}
\begin{algorithmic}[1]
\REQUIRE
Input image $f$, guide image $g$, iteration number $N$, parameter $\lambda, \alpha, a_\ast, b_\ast, r_\ast$, $u^0\leftarrow f$, with $\ast\in\{d,s\}$\\
\FOR{$k=0:N-1$}
\STATE With $u^k$, compute $(\nabla^\ast_{i,j})^k$, update $(l^\ast_{i,j})^k$ according to Eq.~(\ref{EqTruncatedHuberMinCondition})
\STATE With $(l^\ast_{i,j})^k$, update $(\mu^\ast_{i,j})^k$ according to Eq.~(\ref{EqMultHQCondition})
\STATE With $(l^\ast_{i,j})^k$ and $(\mu^\ast_{i,j})^k$, solve for $u^{k+1}$ according to Eq.~(\ref{EqIterativeSolution}) (or Eq.~(\ref{EqCloseFormSolution}))
\ENDFOR
\ENSURE
Smoothed image $u^{N}$
\end{algorithmic}
\end{algorithm}
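To illustrate how the three updates interact, the following is a deliberately simplified 1D instance of Algorithm~\ref{Alg} (our own sketch, not the paper's implementation): it assumes $r_d=0$ and $r_s=1$, folds the symmetric neighbour sum into a factor of $2$ on each edge, and solves the resulting tridiagonal system with SciPy.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def smooth_1d(f, g, lam=1.0, alpha=0.5, a_d=1e-7, b_d=2.0,
              a_s=1e-7, b_s=2.0, n_iter=10):
    f = np.asarray(f, dtype=float)
    n = len(f)
    w = 1.0 / (np.abs(np.diff(g)) + 1e-7) ** alpha  # Eq. (4)
    D = sp.diags([np.ones(n - 1), -np.ones(n - 1)], [0, 1],
                 shape=(n - 1, n))                  # (Du)_i = u_i - u_{i+1}
    u = f.copy()
    for _ in range(n_iter):
        gd, gs = u - f, D @ u
        ld = np.where(np.abs(gd) <= b_d, 0.0, gd)   # Eq. (6), data term
        ls = np.where(np.abs(gs) <= b_s, 0.0, gs)   # Eq. (6), smoothness
        # Eq. (9); max() merges the two branches without dividing by zero
        mud = 1.0 / (2 * np.maximum(np.abs(gd - ld), a_d))
        mus = 1.0 / (2 * np.maximum(np.abs(gs - ls), a_s))
        M = sp.diags(2 * lam * w * mus)
        A = sp.diags(mud) + D.T @ M @ D             # 1D analogue of Eq. (12)
        rhs = mud * (f + ld) + D.T @ (M @ ls)
        u = spsolve(sp.csc_matrix(A), rhs)
    return u
\end{verbatim}
With $b_d, b_s$ set above the signal range this reduces to the non-truncated Huber case; lowering $b_s$ activates the edge-sharpening behavior analyzed in the next subsection.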
\subsection{Property Analysis}
\label{SecPropertyAnalysis}
With different parameter settings, the strong flexibility of $h_T(\cdot)$ enables our model to achieve various smoothing behaviors. First, we show that some classical approaches can be viewed as special cases of our model. For example, by setting $a_d=b_d>I_m, a_s=\epsilon,b_s>I_m,\alpha=0,r_d=0,r_s=1$, our model is an approximation of the TV model \cite{rudin1992nonlinear}, a representative edge-preserving smoothing operator. If we set $\alpha=0.2,g=f$ with the other parameters the same as above, then the first iteration of Algorithm~\ref{Alg} is the WLS smoothing \cite{farbman2008edge}, which performs well in handling gradient reversals and halos in image detail enhancement and HDR tone mapping. With parameters $a_d=\epsilon,b_d>I_m,a_s=\epsilon,b_s>I_m,\alpha=0,r_d=0,r_s=1$, our model yields smoothing behavior very close to that of the TV-$L_1$ model \cite{buades2010fast}, which is classical for structure-preserving smoothing.
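These special cases amount to nothing more than parameter presets; as a compact summary (our own notation, with $\epsilon$ a sufficiently small constant and $g=f$ assumed for the WLS-like case):
\begin{verbatim}
EPS, I_M = 1e-7, 1.0   # intensities normalized to [0, 1]
PRESETS = {
    # approximate TV: L2-like data term, L1-like smoothness term
    "tv":   dict(a_d=2*I_M, b_d=2*I_M, a_s=EPS, b_s=2*I_M,
                 alpha=0.0, r_d=0, r_s=1),
    # first iteration reproduces WLS-style smoothing (with g = f)
    "wls":  dict(a_d=2*I_M, b_d=2*I_M, a_s=EPS, b_s=2*I_M,
                 alpha=0.2, r_d=0, r_s=1),
    # close to TV-L1: L1-like data and smoothness terms
    "tvl1": dict(a_d=EPS, b_d=2*I_M, a_s=EPS, b_s=2*I_M,
                 alpha=0.0, r_d=0, r_s=1),
}
\end{verbatim}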
\begin{figure*}
\centering
\subfigure[]
{
\includegraphics[width=0.13\linewidth]{Fig1DComp_TVL1.png}
}
\subfigure[]
{
\includegraphics[width=0.13\linewidth]{Fig1DComp_My_2.png}
}
\subfigure[]
{
\includegraphics[width=0.13\linewidth]{Fig1DComp_WLS.png}
}
\subfigure[]
{
\includegraphics[width=0.13\linewidth]{Fig1DComp_My_3.png}
}
\subfigure[]
{
\includegraphics[width=0.13\linewidth]{Fig1DComp_SDFilter.png}
}
\subfigure[]
{
\includegraphics[width=0.13\linewidth]{Fig1DComp_My_1.png}
}
\caption{1D signal with structures of different scales and amplitudes. Smoothing result of (a) TV-$L_1$ smoothing \cite{buades2010fast}, (c) WLS \cite{farbman2008edge}, (e) SD filter \cite{ham2015robust}, our results in (b), (d) and (f).}\label{Fig1DComp}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfigure[]
{
\includegraphics[width=0.22\linewidth]{FigMyVsWLS_Input.pdf}
}
\subfigure[]
{
\includegraphics[width=0.2775\linewidth]{FigMyVsWLS_WLS.pdf}
}
\subfigure[]
{
\includegraphics[width=0.2775\linewidth]{FigMyVsWLS_My.pdf}
}
\caption{Image detail enhancement results of different approaches. (a) Input image. Result of (b) WLS \cite{farbman2008edge} and (c) our method. The upper parts of each close-up in (b) and (c) correspond to the patches in the smoothed image.}\label{FigMyVsWLS}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfigure[]
{
\includegraphics[width=0.2\linewidth]{FigMyVsSDFilter_Input.png}
}
\subfigure[]
{
\includegraphics[width=0.2\linewidth]{FigMyVsSDFilter_My.png}
}
\subfigure[]
{
\includegraphics[width=0.1\linewidth]{FigMyVsSDFilter_patch_Input.png}
}
\subfigure[]
{
\includegraphics[width=0.1\linewidth]{FigMyVsSDFilter_patch_SDFilter.png}
}
\subfigure[]
{
\includegraphics[width=0.1\linewidth]{FigMyVsSDFilter_patch_structure.png}
}
\subfigure[]
{
\includegraphics[width=0.1\linewidth]{FigMyVsSDFilter_patch_My.png}
}
\caption{Clip-art compression artifacts removal results of different approaches. (a) Input image. (b) Our result. Close-ups of (c) input image and results of (d) SD filter \cite{ham2015robust}, (e) our method with the structure-preserving parameter setting, (f) our method with the edge-preserving and structure-preserving parameter setting.}\label{FigMyVsSDFilter}
\end{figure*}
\begin{figure}
\centering
\subfigure[]
{
\includegraphics[width=0.27\linewidth]{FigMyVsTVL1_Input.pdf}
}
\subfigure[]
{
\includegraphics[width=0.27\linewidth]{FigMyVsTVL1_TVL1.pdf}
}
\subfigure[]
{
\includegraphics[width=0.27\linewidth]{FigMyVsTVL1_My.pdf}
}
\caption{Texture smoothing results of different approaches. (a) Input image. Result of (b) TV-$L_1$ smoothing \cite{buades2010fast} and (c) our method.}\label{FigMyVsTVL1}
\end{figure}
For different kinds of applications, our model can produce better results than the special cases mentioned above. For convenience, we first start with the tasks in the fourth group, which require structure-preserving smoothing. For these tasks, the parameters are set as $a_d=\epsilon,b_d>I_m,a_s=\epsilon,b_s>I_m,r_d=r_s,\alpha=0.5,g=f$. This parameter setting has two advantages: first, the setting $a_d=\epsilon,b_d>I_m,a_s=\epsilon,b_s>I_m$ gives our model a structure-preserving property similar to that of the TV-$L_1$ model; second, the guidance weight with $\alpha=0.5,g=f$ enables our model to obtain sharper edges in the results than the TV-$L_1$ model does. We illustrate this with 1D smoothing results in Fig.~\ref{Fig1DComp}(a) and (b). Fig.~\ref{FigMyVsTVL1}(b) and (c) further show a comparison of image texture removal results. As shown in the figure, both the TV-$L_1$ model and our model properly remove the small textures; however, the edges in our result are much sharper than those in the result of the TV-$L_1$ model. Typical values for $r_d=r_s$ are $1\sim3$ depending on the texture size, and $\lambda$ is usually smaller than 1. Larger $r_d,r_s,\lambda$ cause larger structures to be removed. The iteration number is set as $N=10$.
When dealing with image detail enhancement and HDR tone mapping in the first group, one option is to set the parameters so that our model performs WLS smoothing. Instead, we can further make use of the structure-preserving property of our model to produce better results. The parameters are set as follows: $a_d=\epsilon,b_d>I_m,a_s=\epsilon,b_s>I_m,r_d=r_s,\alpha=0.2,g=f$. This parameter setting is based on the following observation in our experiments: when we adopt $N=1$ and set $\lambda$ to a large value, the amplitudes of different structures decrease at different rates, i.e., the amplitudes of small structures decrease more than those of large ones, as illustrated in Fig.~\ref{Fig1DComp}(d). At the same time, edges are neither blurred nor sharpened. This kind of smoothing behavior is desirable for image detail enhancement and HDR tone mapping. As a comparison, Fig.~\ref{Fig1DComp}(c) shows the smoothing result of the WLS smoothing. As can be observed from the figures, our method better preserves the edges (see the bottom of the 1D signals in Fig.~\ref{Fig1DComp}(c) and (d)). Fig.~\ref{FigMyVsWLS}(b) and (c) further show a comparison of image detail enhancement results. We fix $r_d=r_s=2$ and vary $\lambda$ to control the smoothing strength. $\lambda$ for the tasks in the first group is usually much larger than that for the ones in the fourth group; for example, the result in Fig.~\ref{FigMyVsWLS}(c) is generated with $\lambda=20$.
\begin{figure*}[!ht]
\centering
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigHDRToneMapping_BF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigHDRToneMapping_GF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigHDRToneMapping_L0.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigHDRToneMapping_WLS.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigHDRToneMapping_SG-WLS.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigHDRToneMapping_My.pdf}
}
\caption{HDR tone mapping results of different approaches. Result of (a) BF \cite{tomasi1998bilateral}, (b) GF \cite{he2013guided}, (c) $L_0$ norm smoothing \cite{xu2011image}, (d) WLS \cite{farbman2008edge}, (e) SG-WLS \cite{liu2017semi} and (f) our method.}\label{FigHDRToneMapping}
\end{figure*}
\begin{figure*}
\centering
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigClipArt_JPEG.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigClipArt_Wang.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigClipArt_L0Norm.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigClipArt_RegionFusion.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigClipArt_BTF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.135\linewidth]{FigClipArt_My.pdf}
}
\caption{Clip-art compression artifacts removal results of different methods. (a) Input compressed image. Result of (b) the approach proposed by Wang et~al. \cite{wang2006deringing}, (c) $L_0$ norm smoothing \cite{xu2011image}, (d) region fusion approach \cite{nguyen2015fast}, (e) BTF \cite{cho2014bilateral} and (f) our method.}\label{FigClipArt}
\end{figure*}
To sharpen edges, as required by the tasks in the second and third groups, we can set $b_s<I_m$ in the smoothness term. In addition, we further set the other parameters as $a_d=\epsilon, b_d<I_m, a_s=\epsilon$. The truncation $b_d<I_m$ in the data term helps our model to be robust against outliers in the input image, for example, the noise in the no flash image and the low-quality depth map. The truncation $b_s<I_m$ in the smoothness term makes our model an edge-preserving one. By setting $a_d=a_s=\epsilon$, our model further enjoys the structure-preserving property. With both edge-preserving and structure-preserving smoothing natures, our model has the ability to preserve large structures with weak edges and small structures with strong edges at the same time, which is challenging but of practical importance. Fig.~\ref{FigMyVsSDFilter}(a) illustrates this kind of case with an example of clip-art compression artifacts removal: both the thin black circle around the ``wheel'' and the gray part in the center of the ``wheel'' should be preserved. The challenge lies in two facts. On one hand, if we perform edge-preserving smoothing, the gray part will be removed because the corresponding edge is weak. Fig.~\ref{FigMyVsSDFilter}(d) shows the result of the SD filter \cite{ham2015robust}. The SD filter properly preserves the thin black circle and sharpens the edges thanks to the adopted Welsch's penalty function; however, it fails to preserve the weak edge between the black part and the gray part. On the other hand, if we adopt structure-preserving smoothing, the thin black circle will be smoothed out due to its small structure size. Fig.~\ref{FigMyVsSDFilter}(e) shows the corresponding result of our method with the structure-preserving parameter setting described above. In contrast, our method with the edge-preserving and structure-preserving parameter setting can preserve both of these two parts and sharpen the edges, as shown in Fig.~\ref{FigMyVsSDFilter}(f). Fig.~\ref{Fig1DComp}(e) and (f) also show a comparison of the SD filter and our method with 1D smoothing results. We fix $\alpha=0.5, r_d=r_s, N=10$ for the tasks in both the second and third groups. We empirically set $b_d=b_s=0.05I_m\sim0.2I_m$ and $r_d=r_s=1\sim5$ depending on the applied task and the input noise level.
\begin{figure*}[!ht]
\centering
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_Color.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_GT.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_Learn.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_SGF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_SDFilter.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_NLM_WLS.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFSimulated_My.pdf}
}
\caption{Guided depth map upsampling results of simulated ToF data. (a) Guidance color image. (b) Ground-truth depth map. Result of (c) the approach proposed by Gu et~al. \cite{gu2017learning}, (d) SGF \cite{zhang2015segment}, (e) SD filter \cite{ham2015robust}, (f) Park et~al. \cite{park2011high} and (g) our method.}\label{FigToFSimulated}
\end{figure*}
\begin{figure*}
\centering
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_Color.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_GT.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_Learn.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_TGV.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_SDFilter.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_SGF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigToFReal_My.pdf}
}
\caption{Guided depth upsampling results of real ToF data. (a) Guidance intensity image. (b) Ground-truth depth map. Result of (c) the approach proposed by Gu et~al. \cite{gu2017learning}, (d) TGV \cite{ferstl2013image}, (e) SD filter \cite{ham2015robust}, (f) SGF \cite{zhang2015segment} and (g) our method.}\label{FigToFReal}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_Input.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_JCAS.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_RTV.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_FCN.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_muGIF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_BTF.pdf}
}
\subfigure[]
{
\includegraphics[width=0.122\linewidth]{FigTextureSmooth_My.pdf}
}
\caption{Image texture removal results. (a) Input image. Result of (b) JCAS \cite{gu2017joint}, (c) RTV \cite{xu2012structure}, (d) FCN based approach \cite{chen2017fast}, (e) muGIF \cite{guo2018mutually}, (f) BTF \cite{cho2014bilateral} and (g) our method.}\label{FigTextureSmooth}
\end{figure*}
The structure inconsistency issue in the third group can also be easily handled by our model. Note that $\mu_{i,j}^s$ in Eq.~(\ref{EqObjFunAuxULMu}) is computed from the smoothed image in each iteration, as formulated in Eq.~(\ref{EqMultHQCondition}); it thus reflects the inherent nature of the smoothed image. The guidance weight $\omega_{i,j}$ provides additional structural information from the guidance image $g$. This means that $\mu_{i,j}^s$ and $\omega_{i,j}$ can complement each other. In fact, the equivalent guidance weight of Eq.~(\ref{EqObjFunAuxULMu}) in each iteration is $\mu_{i,j}^s\omega_{i,j}$, which reflects the properties of both the smoothed image and the guidance image. In this way, our model can properly handle the structure inconsistency problem and avoid blurred edges and texture copy artifacts. Similar ideas were also adopted in \cite{ham2015robust,liu2017robust}.
\section{Applications and Experimental Results}
\label{SecExperiments}
Our method is applied to various tasks in the first to fourth groups to validate its effectiveness. Comparisons with the state-of-the-art approaches in each application are also presented. Due to the limited space, we only show experimental results for four applications.
Our experiments are performed on a PC with an Intel Core i5 3.4GHz CPU (one thread used) and 8GB memory. For an RGB image of size $800\times600$ and $N=10$ in Algorithm \ref{Alg}, the running time is $10.04/25.09/43.11/69.82/96.73$ seconds in MATLAB for $r_d=r_s=1/2/3/4/5$. Note that as described in the property analysis section, the value of $r_d=r_s$ is smaller than 3 in most cases except for guided depth map upsampling. For the tasks in the first group which require $N=1$, the computational cost could be further reduced to $\frac{1}{10}$ of that mentioned above.
\textbf{HDR tone mapping} is a representative task in the first group. It requires decomposing the input image into a base layer and a detail layer through edge-preserving smoothing. The challenge of this task is that if the edges are sharpened by the smoothing procedure, gradient reversals will result, and halos will occur if the edges are blurred. Fig.~\ref{FigHDRToneMapping} shows the tone mapping results using different edge-preserving smoothing operators. The results of BF \cite{tomasi1998bilateral} and GF \cite{he2013guided} contain clear halos around the picture frames and the light fixture, as shown in Fig.~\ref{FigHDRToneMapping}(a) and (b). This is due to their local smoothing nature, where strong smoothing also blurs salient edges \cite{farbman2008edge,he2013guided}. The $L_0$ norm smoothing \cite{xu2011image} can properly eliminate halos, but there are gradient reversals in its result, as illustrated in Fig.~\ref{FigHDRToneMapping}(c). This is because the $L_0$ smoothing is prone to sharpening salient edges. The WLS \cite{farbman2008edge} and SG-WLS \cite{liu2017semi} smoothing perform well in handling gradient reversals and halos in most cases. However, slight halos remain in their results, as illustrated in the left close-ups in Fig.~\ref{FigHDRToneMapping}(d) and (e). These artifacts are properly eliminated in our results.
\begin{table*}
\centering
\caption{Quantitative comparison on the noisy simulated ToF data. Results are evaluated in MAE. The best results are in \textbf{bold}. The second best results are \underline{underlined}.}\label{TabToFSimulated}
\resizebox{1\textwidth}{!}
{
\begin{tabular}{|c|cccc|cccc|cccc|cccc|cccc|cccc|}
\hline
\multicolumn{1}{|c}{\multirow{2}{*}{}} & \multicolumn{4}{|c|}{\emph{Art}} & \multicolumn{4}{c|}{\emph{Book}} & \multicolumn{4}{c|}{\emph{Dolls}} & \multicolumn{4}{c|}{\emph{Laundry} } & \multicolumn{4}{c|}{\emph{Moebius}} & \multicolumn{4}{c|}{\emph{Reindeer}}\\
\cline{2-25}
& $2\times$ & $4\times$ & $8\times$ & $16\times$ & $2\times$ & $4\times$ & $8\times$ & $16\times$ & $2\times$ & $4\times$ & $8\times$ & $16\times$ & $2\times$ & $4\times$ & $8\times$ & $16\times$ & $2\times$ & $4\times$ & $8\times$ & $16\times$ & $2\times$ & $4\times$ & $8\times$ & $16\times$ \\
\hline
TGV\cite{ferstl2013image} & 0.8 & \underline{1.21} & 2.01 & 4.59 & 0.61 & 0.88 & 1.21 & 2.19 & 0.66 & \underline{0.95} & \underline{1.38} & 2.88 & \underline{0.61} & \textbf{0.87} & \underline{1.36} & 3.06 & {0.57} & \underline{0.77} & 1.23 & 2.74 & 0.61 & \textbf{0.85} & \underline{1.3} & 3.41 \\
AR\cite{yang2014color} & 1.17 & 1.7 & 2.93 & 5.32 & 0.98 & 1.22 & 1.74 & 2.89 & 0.97 & 1.21 & 1.71 & 2.74 & 1 & 1.31 & 1.97 & 3.43 & 0.95 & 1.2 & 1.79 & 2.82 & 1.07 & 1.3 & 2.03 & 3.34\\
SG-WLS\cite{liu2017semi} & 1.26 & 1.9 & 3.07 & - & 0.82 & 1.12 & 1.73 & - & 0.87 & 1.11 & 1.81 & - & 0.86 & 1.17 & 2 & - & 0.82 & 1.08 &1.79 & - & 0.9 & 1.32 & 2.01 & -\\
FGI\cite{li2016fast} & 0.9 & 1.37 & 2.46 & 4.89 & 0.66 & \underline{0.85} & \underline{1.23} & \underline{1.96} & 0.74 & \underline{0.95} & 1.41 & \underline{2.13} & 0.71 & 0.99 & 1.59 & 2.67 & 0.67 & 0.82 & \underline{1.2} & \underline{1.87} & 0.75 & 0.94 & 1.55 & 2.73\\
SGF\cite{zhang2015segment} & 1.42 & 1.85 & 3.06 & 5.55 & 0.84 & 1.11 & 1.76 & 3.03 & 0.87 & 1.2 & 1.88 & 3.26 & 0.74 & 1.1 & 1.96 & 3.63 & 0.81 & 1.13 & 1.84 & 3.16 & 0.93 & 1.25 & 2.03 & 3.67\\
SD Filter\cite{ham2015robust} & 1.16 & 1.64 & 2.88 & 5.52 & 0.86 & 1.1 & 1.57 & 2.68 & 1.04 & 1.27 & 1.73 & 2.76 & 0.96 & 1.25 & 1.94 & 3.54 & 0.93 & 1.14 & 1.68 & 2.75 & 1.05 & 1.31 & 1.99 & 3.43\\
FBS\cite{barron2016fast} & 1.93 & 2.39 & 3.29 & 5.05 & 1.42 & 1.55 & 1.76 & 2.48 & 1.33 & 1.45 & 1.69 & 2.26 & 1.32 & 1.49 & 1.77 & 2.67 & 1.16 & 1.29 & 1.61 & 2.44 & 1.63 & 1.76 & 2.01 & 2.69\\
muGIF\cite{guo2018mutually} & 1.00 & 1.26 & \underline{2.00} & \underline{3.46} & 0.73 & 0.89 & 1.35 & 2.15 & 0.85 & 1.04 & 1.50 & 2.45 & 0.64 & \textbf{0.87} & \underline{1.36} & \underline{2.57} & 0.67 & 0.85 & 1.35 & 2.25 & 0.78 & 0.94 & 1.39 & \underline{2.52}\\
Park et~al.\cite{park2011high} & 1.66 & 2.47 & 3.44 & 5.55 & 1.19 & 1.47 & 2.06 & 3.1 & 1.19 & 1.56 & 2.15 & 3.04 & 1.34 & 1.73 & 2.41 & 3.85 & 1.2 & 1.5 & 2.13 & 2.95 & 1.26 & 1.65 & 2.46 & 3.66 \\
Shen et~al.\cite{shen2015mutual} & 1.79 & 2.21 & 3.2 & 5.04 & 1.34 & 1.69 & 2.25 & 3.13 & 1.37 & 1.58 & 2.05 & 2.85 & 1.49 & 1.74 & 2.34 & 3.5 & 1.34 & 1.56 & 2.09 & 2.99 & 1.29 & 1.55 & 2.19 & 3.33\\
Gu et~al.\cite{gu2017learning} & \textbf{0.61} & 1.46 & 2.98 & 5.09 & \textbf{0.52} & 0.95 & 1.87 & 2.98 & \underline{0.63} & 1.02 & 1.89 & 2.92 & \textbf{0.58} & 1.14 & 2.21 & 3.58 & \underline{0.53} & 0.96 & 1.89 & 2.99 & \textbf{0.52} & 1.07 & 2.17 & 3.59\\
Li et~al.\cite{li2016deep} & - & 3.77 & 4.49 & 6.29 & - & 3.21 & 3.28 & 3.79 & - & 3.19 & 3.28 & 3.79 & - & 3.34 & 3.61 & 4.45 & - & 3.23 & 3.35 & 3.92 & - & 3.39 & 3.65 & 4.54\\
Ours & \underline{0.69} & \textbf{1.07} & \textbf{1.65} & \textbf{2.96} & \underline{0.55} & \textbf{0.81} & \textbf{1.22} & \textbf{1.78} & \textbf{0.62} & \textbf{0.9} & \textbf{1.27} & \textbf{1.84} & \underline{0.61} & \underline{0.89} & \textbf{1.28} & \textbf{2.12} & \textbf{0.51} & \textbf{0.75} & \textbf{1.12} & \textbf{1.71} & \underline{0.56} & \underline{0.87} & \textbf{1.27} & \textbf{2.08}\\
\hline
\end{tabular}
}\vspace{-0.5em}
\end{table*}
\begin{table}
\centering
\caption{Quantitative comparison on real ToF dataset. The errors are calculated as MAE to the measured ground-truth in \texttt{mm}. The best results are in \textbf{bold}. The second best results are \underline{underlined}.}\label{TabToFReal}
\resizebox{1\linewidth}{!}
{
\begin{tabular}{|c|ccc|}
\hline
& \emph{Books} & \emph{Devil} & \emph{Shark}\\
\hline
Bicubic & 16.23mm & 17.78mm & 16.66mm\\
GF\cite{he2013guided} & 15.55mm & 16.1mm & 17.1mm\\
SD Filter\cite{ham2015robust} & 13.47mm & 15.99mm & 16.18mm\\
SG-WLS\cite{liu2017semi} & 14.71mm & 16.24mm & 16.51mm\\
Shen et~al.\cite{shen2015mutual} & 15.47mm & 16.18mm & 17.33mm\\
Park et~al.\cite{park2011high} & 14.31mm & 15.36mm & 15.88mm\\
TGV\cite{ferstl2013image} &\underline{12.8mm} &\underline{14.97mm} & \underline{15.53mm}\\
AR\cite{yang2014color} & 14.37mm & 15.41mm & 16.27mm\\
Gu et~al.\cite{gu2017learning} & 13.87mm & 15.36mm & 15.88mm\\
SGF\cite{zhang2015segment} & 13.57mm & 15.74mm & 16.21mm\\
FGI\cite{li2016fast} & 14.21mm & 16.43mm & 16.37mm\\
FBS\cite{barron2016fast} & 15.93mm & 17.21mm & 16.33mm\\
Li et~al.\cite{li2016deep} & 14.33mm & 15.09mm & 15.82mm\\
Ours & \textbf{12.49mm} & \textbf{14.51mm} & \textbf{15.02mm}\\
\hline
\end{tabular}
}
\end{table}
\textbf{Clip-art compression artifacts removal}. Clip-art images are piecewise constant with sharp edges. When they are compressed in JPEG format at low quality, edge-related artifacts appear and the edges are usually blurred, as shown in Fig.~\ref{FigClipArt}(a). Therefore, when removing the compression artifacts, the edges should also be sharpened in the restored image; we thus classify this task into the second group. The approach proposed by Wang et~al. \cite{wang2006deringing} can hardly handle heavy compression artifacts, as shown in Fig.~\ref{FigClipArt}(b). The $L_0$ norm smoothing fails to preserve weak edges, as shown in Fig.~\ref{FigClipArt}(c). The region fusion approach \cite{nguyen2015fast} produces results with sharpened edges; however, it also enhances the blocky artifacts along strong edges, as highlighted in Fig.~\ref{FigClipArt}(d). The edges in the result of BTF \cite{cho2014bilateral} are blurred (Fig.~\ref{FigClipArt}(e)). Our result is illustrated in Fig.~\ref{FigClipArt}(f), with edges sharpened and compression artifacts removed.
\textbf{Guided depth map upsampling} belongs to the guided image filtering tasks in the third group. The RGB guidance image can provide additional structural information to restore and sharpen the depth edges. The challenge of this task is the structure inconsistency between the depth map and the RGB guidance image, which can cause blurred depth edges and texture copy artifacts in the upsampled depth map. We test our method on the simulated dataset provided in \cite{yang2014color}. Fig.~\ref{FigToFSimulated} shows the visual comparison between our result and the results of the recent state-of-the-art approaches. Our method shows better performance in preserving sharp depth edges and avoiding texture copy artifacts. Tab.~\ref{TabToFSimulated} also shows the quantitative evaluation of the results of different methods. Following the measurement used in \cite{guo2018mutually,li2016fast,liu2017semi,yang2014color}, the evaluation is performed in terms of mean absolute error (MAE). As Tab.~\ref{TabToFSimulated} shows, our method achieves the best or the second best performance among all the compared approaches.
We further validate our method on the real data introduced by Ferstl et~al. \cite{ferstl2013image}. The real dataset contains three low-resolution depth maps captured by a ToF depth camera and the corresponding highly accurate ground-truth depth maps captured with structured light. The upsampling factor for the real dataset is $\sim6.25\times$. The visual comparison in Fig.~\ref{FigToFReal} and the quantitative comparison in Tab.~\ref{TabToFReal} show that our method can outperform the compared methods and achieve state-of-the-art performance.
\textbf{Image texture removal} belongs to the tasks in the fourth group. It aims at extracting salient meaningful structures while removing small complex texture patterns. The challenge of this task is that it requires structure-preserving smoothing rather than the edge-preserving smoothing required in the above tasks. Fig.~\ref{FigTextureSmooth}(a) shows a classical example of image texture removal: the small textures with strong edges should be smoothed out while the salient structures with weak edges should be preserved. Fig.~\ref{FigTextureSmooth}(b)$\sim$(f) show the results of the recent state-of-the-art approaches. The joint convolutional analysis and synthesis sparse (JCAS) model \cite{gu2017joint} can well remove the textures, but the resulting edges are also blurred. The RTV method \cite{xu2012structure}, muGIF \cite{guo2018mutually}, BTF \cite{cho2014bilateral} and the FCN based approach \cite{chen2017fast} cannot completely remove the textures; in addition, the weak edges of the salient structures have also been smoothed out in their results. Our method can both preserve the weak edges of the salient structures and remove the small textures.
\section{Conclusion}
We propose a non-convex non-smooth optimization framework for edge-preserving and structure-preserving image smoothing. We first introduce the truncated Huber penalty function which shows strong flexibility. Then a robust framework is presented. When combined with the flexibility of the truncated Huber penalty function, our framework is able to achieve different and even contradictory smoothing behaviors using different parameter settings. This is different from most previous approaches, whose inherent smoothing natures are usually fixed. We further propose an efficient numerical solution to our model and prove its convergence theoretically. Comprehensive experimental results in a number of applications demonstrate the effectiveness of our method.
\noindent\textbf{Acknowledgement}\\
We gratefully acknowledge the support of the Australia Centre for Robotic Vision. This paper is also partly supported by NSFC, China (No. U1803261, 61977046), Key Research and Development Program of Sichuan Province (No. 2019YFG0409) and National Key Research and Development Project (No. 2018AAA0100702).
\section{SM 1: QFI for mixed states}
In this Section, we recall some elements of the general theory of QFI for mixed states \cite{pezze2014}.
For two mixed states, described by density matrices $\rho$ and $\sigma$, the fidelity, similar to that for pure states, is defined as:
\begin{eqnarray}
\mathcal{F}(\rho, \sigma) = \Big( \mathrm{Tr} \, \sqrt{\sqrt{\rho} \, \sigma \sqrt{\rho}} \Big)^2 \, \, .
\end{eqnarray}
In the following, this expression is applied to two infinitesimally-separated density matrices, setting $\mathcal{F}(\theta, \mathrm{d} \theta) \equiv \mathcal{F}\big(\rho(\theta), \rho(\theta + \mathrm{d} \theta)\big)$ with
\begin{eqnarray}
\rho (\theta) = \sum_{\lambda} p_{\lambda} \, \rho_{\lambda} (\theta) \, , \quad \rho_{\lambda} (\theta) = \ket{\lambda(\theta)} \bra{\lambda(\theta)} \, ,
\label{densmix}
\end{eqnarray}
and $\rho (\theta + \mathrm{d} \theta) = e^{- i \, \mathrm{d} \theta \, \hat{O}} \rho(\theta) \, e^{i \, \mathrm{d} \theta \, \hat{O}} $, $\hat{O}$ labelling a generic Hermitian operator. There,
$0 \leq p_{\lambda} \leq 1$, and the basis $\ket{\lambda}$ is chosen orthonormal, so that $\rho(\theta) \equiv \rho_D$ is in its diagonal form.
The resulting QFI, $- 2 \, \frac{\mathrm{d}^2 \mathcal{F}(\theta, \mathrm{d} \theta)}{\mathrm{d} \theta^2} $, reads:
\begin{eqnarray}
F[\rho_D , \hat{O}]_N = 2 \, \sum_{x,y} \sum_{\lambda, \nu} \,
\frac{\big(p_{\lambda} - p_{\nu} \big)^2}{p_{\lambda} + p_{\nu}} \,
\langle \lambda | \hat{o} (x) \ket{\nu} \bra{\nu} \hat{o} (y) | \lambda \rangle \, ,
\label{qfitherm0bis}
\end{eqnarray}
which clearly reduces to the QFI for a pure state in Eq. (1)
of the main text, since in that case $p_{\lambda} = 1$ for exactly one state $\ket{\lambda}$ and vanishes for all the others.
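As an operative illustration, Eq. \eqref{qfitherm0bis} can be evaluated numerically from the eigendecomposition of $\rho$. The following minimal Python sketch does so for a collective operator $\hat{O} = \sum_x \hat{o}(x)$ built from $\hat{s}^z$ on a small spin-$1/2$ chain; the chain length and the randomly generated mixed state are arbitrary illustrative choices, not taken from the main text.

\begin{verbatim}
import numpy as np

N = 3                                    # number of sites (illustrative)
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
I2 = np.eye(2)

def site_op(op, x, N):
    """Embed a single-site operator at site x of an N-site chain."""
    ops = [op if k == x else I2 for k in range(N)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

O = sum(site_op(sz, x, N) for x in range(N))   # collective operator

# A random density matrix (mixed state), diagonalised as in Eq. (densmix).
rng = np.random.default_rng(0)
A = rng.normal(size=(2**N, 2**N)) + 1j * rng.normal(size=(2**N, 2**N))
rho = A @ A.conj().T
rho /= np.trace(rho).real

p, V = np.linalg.eigh(rho)               # p_lambda, |lambda>
Om = V.conj().T @ O @ V                  # matrix elements <lambda|O|nu>

F = 0.0
for l in range(len(p)):
    for n in range(len(p)):
        if p[l] + p[n] > 1e-12:
            F += 2 * (p[l] - p[n])**2 / (p[l] + p[n]) * abs(Om[l, n])**2
print("QFI:", F)                         # reduces to 4*Var(O) for pure states
\end{verbatim}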
The QFI in Eq. \eqref{qfitherm0bis} fulfills the inequality \cite{pezze2014}
\begin{eqnarray}
F[\rho_D , \hat{O}]_N \leq \sum_{\lambda} \, p_{\lambda} \, F[\ket{\lambda}, \hat{O}]_N \, ,
\label{ineqFs}
\end{eqnarray}
the bound being saturated by pure states. This convexity inequality is strictly required of an entanglement witness, since this property physically reflects the fact that mixing quantum states cannot increase the entanglement content, as well as the related achievable estimation sensitivity \cite{pezze2014}.
The QFI cannot be expressed entirely in terms of two-point connected correlation functions; indeed,
the following relation holds \cite{gabbrielli2018,wang2014}
\begin{eqnarray}
F[\rho_D , \hat{O}]_N = 4 \, \sum_{x,y} \Bigg( \sum_{\lambda} p_{\lambda} \bra{\lambda} \hat{o} (x) \hat{o} (y) \ket{\lambda}_{\mathrm{conn}} - 2 \sum_{\lambda, \nu \neq \lambda} \, \frac{p_{\lambda} \, p_{\nu} }{p_{\lambda} + p_{\nu}} \, \langle \lambda | \hat{o}(x) \ket{\nu} \bra{\nu} \hat{o}(y)| \lambda \rangle \Bigg) \, .
\label{qieqbis}
\end{eqnarray}
The second term of this equation is the difference between the QFI and $\tilde{F}[\rho_D , \hat{O}]_N$ in the main text.
\section{SM 2: on the cluster-decomposition theorem for pure states}
\label{clustersub}
In this Section, we provide additional details on the role of the so-called {\it cluster decomposition theorem} for correlation functions on pure states on lattices.
Cluster decomposition encodes the locality principle \cite{hastings2004,sims2006,weinberg1} for Hamiltonian systems, and it holds at least for all physical systems in which the area law for the entanglement entropy is not massively violated, as it is in systems with a volume law.
Let us consider a pure state $\ket{\lambda}$, which is not degenerate in energy.
For this state, the theorem claims that, as $|x - y|$ diverges ($x$ and $y$ still labelling lattice sites), $\bra{\lambda} \hat{o} (x) \hat{o} (y) \ket{\lambda}$ tends to $\bra{\lambda} \hat{o} (x) \ket{\lambda} \bra{\lambda} \hat{o} (y) \ket{\lambda}$: this is the subtracted second part in Eq. (12) of the main text.
In order to justify the theorem in these conditions, $x$ and $y$ must be supposed to belong to different unconnected sets: the infinite $|x-y|$ limit ensures this condition, even if genuine multipartite entanglement, $c = \infty$, is hosted by the analyzed system.
Indeed, the theorem can be justified, at least heuristically, starting from
the following decomposition of the two-points connected correlations:
\begin{eqnarray}
\bra{\lambda} \hat{o} (x) \hat{o} (y) \ket{\lambda}_{\mathrm{conn}} = \sum_{\lambda^{\prime}} \, \langle \lambda | \hat{o}(x) \ket{\lambda^{\prime}} \bra{\lambda^{\prime}} \hat{o}(y)| \lambda \rangle - \langle \lambda | \hat{o}(x) \ket{\lambda} \bra{\lambda} \hat{o}(y)| \lambda \rangle = \sum_{\lambda^{\prime} \neq \lambda} \, \langle \lambda | \hat{o}(x) \ket{\lambda^{\prime}} \bra{\lambda^{\prime}} \hat{o}(y)| \lambda \rangle \, ,
\label{decomp}
\end{eqnarray}
where $\ket{\lambda}$ and $\ket{\lambda^{\prime}}$ again form an orthonormal basis.
In fact, in Eq. \eqref{decomp} we assume that the whole system has an orthonormal producible basis, to be inserted in $\langle \lambda | \hat{o} (x) \hat{o} (y) | \lambda \rangle_{\mathrm{conn}}$.
In Eq. \eqref{decomp}, we notice that (for instance) $ \bra{\lambda^{\prime}} \hat{o} (x) \ket{\lambda } \to 0$ if $\lambda \neq \lambda^{\prime}$ (so that $\langle \lambda | \lambda^{\prime} \rangle = 0$), if the two states are separated in energy, and if $x$ (or $y$, or both) is located on the (infinite) boundary of the system.
More in general, the cluster decomposition theorem implies that
\begin{eqnarray}
\sum_{\lambda^{\prime} \neq \lambda} \, a_{\lambda , \lambda^{\prime}} \, \langle \lambda | \hat{o}(x) \ket{\lambda^{\prime}} \bra{\lambda^{\prime}} \hat{o}(y)| \lambda \rangle \to 0 \quad \mathrm{if} \quad |x-y| \to \infty \, ,
\label{cluster}
\end{eqnarray}
at least if $|a_{\lambda , \lambda^{\prime}}| < \infty$,
a generally fulfilled condition in physical systems.
The latter argument suggests that the cluster-decomposition theorem, often stated for the ground state(s) of local Hamiltonians, must, if valid for a non-degenerate ground state, also hold for excited states, even though these are in general more entangled, for instance fulfilling a volume law for the von Neumann entropy, see e.g. \cite{eisert2010,alba2009,sierra2011}. Indeed, Eq. \eqref{cluster} means that a local operator $\hat{o}$ applied on (one point of) the infinite boundary of the state $\ket{\lambda}$ or $\ket{\lambda^{\prime}}$ (no matter their energies) cannot change the same state in a way to induce a nonvanishing overlap with other states of the orthogonal basis.
Expressed in an alternative manner, the cluster decomposition theorem must be valid for all the states of the considered systems at least if $c < \infty$, and therefore if some producibility holds. Otherwise, the entanglement between the infinite boundary and a point in the bulk, or between two points of the boundary, would immediately set $c \to \infty$. More in detail, if $N \sim L^D$ in that case, then $c \sim L^{\delta}$, with $\delta \geq D-1$, $D-1$ being the dimension of the boundary, and then also $c \sim N^{\gamma}$, with $\gamma \geq (D-1)/D$. The bound for $\delta$ can be further improved by noticing that if cluster decomposition fails and translational invariance is assumed, then:
\begin{eqnarray}
\langle \psi | \hat{o} (x) \hat{o} (y) | \psi \rangle_{\mathrm{conn}} \sim |x-y|^{\beta} \, \quad \quad \mathrm{for} \, \quad \quad |x-y| \to \infty \, , \quad \quad \beta \geq 0 \, .
\label{limbound}
\end{eqnarray}
Merging Eq. \eqref{limbound} and the scaling property $f[\ket{\psi}, \hat{O}]_N \sim N^{1 - 2{\beta}/{D}}$ (see~\cite{hauke2016} for critical lattice systems and \cite{pezze2017} for one-dimensional lattices), we obtain $\delta = D + 2 \beta \geq D$.
Therefore, failure of the cluster decomposition condition implies that genuine multipartite entanglement is reached, so that no producibility of any kind is allowed.
For the same reason, we point out that cluster decomposition turns out to be strictly required to prove the bound in Eq. (5)
of the main text, even though no explicit mention of it is made in those proofs \cite{lepori2020}.
Finally, the validity of the cluster decomposition theorem does not require translational invariance, avoiding any prescription on the limit operations in Eqs. \eqref{cluster} and \eqref{limbound}.
Above, we claimed that cluster decomposition holds if the ground state is unique. Instead, if the ground state is degenerate, the theorem cannot hold in a generic basis. However, at least in the presence of a finite number of degenerate states, the theorem can be recovered, provided the combination of these states is chosen properly. This fact can be illustrated via a paradigmatic example, i.e. the ferromagnetic quantum Ising model in a transverse field, governed by the Hamiltonian
\begin{eqnarray}
\hat H_{\mathrm{IS}} = h \, \sum_{i=1}^N \hat{\sigma}_i^{(x)} + J \, \sum_{i=1}^N \, \hat{\sigma}_i^{(z)} \hat{\sigma}_{i+1}^{(z)} \, , \quad \quad J < 0 \, .
\label{ham1}
\end{eqnarray}
It is known that, if $h < |J|$, the ground state in the thermodynamic limit is doubly degenerate, spanned by the states $\ket{\uparrow, \dots , \uparrow} \equiv \ket{\uparrow}$ and $\ket{\downarrow, \dots , \downarrow} \equiv \ket{\downarrow}$, orthonormal to each other. However, at finite volume, these states are mixed, forming the almost degenerate states $\ket{\pm} = \frac{\ket{\uparrow} \pm \ket{\downarrow}}{\sqrt{2}}$, which become exactly degenerate only in the thermodynamic limit.
For the latter states, the cluster decomposition theorem is not fulfilled, precisely because they result from the mixing of $\ket{\uparrow}$ and $\ket{\downarrow}$. Indeed:
\begin{eqnarray}
\mathrm{lim}_{|x-y| \to \infty} \langle \pm | \hat{\sigma}^{(z)} (x) \hat{\sigma}^{(z)} (y) | \pm \rangle \to \frac{1}{2} \quad \mathrm{but} \quad \langle \pm | \hat{\sigma}^{(z)} (x) \ket{\pm} \bra{\pm} \hat{\sigma}^{(z)} (y) | \pm \rangle = 0 \, .
\end{eqnarray}
However, it is fulfilled for the states $\ket{\uparrow}$ and $\ket{\downarrow}$. This means that, adopting the basis formed by $\ket{\uparrow}$, $\ket{\downarrow}$, and by the excited states above them, cluster decomposition can be recovered, and locality made explicit. In a sense, in the presence of a finitely degenerate ground state, cluster decomposition still holds, up to global unitary transformations, such as the one linking the states $\ket{\pm}$ to $\ket{\uparrow}$ and $\ket{\downarrow}$ in the example above. In Eq. \eqref{cluster}, these transformations act on the states $\ket{\lambda}$ and $\ket{\lambda^{\prime}}$ but, critically, not on the operators $\hat{o}(x)$ and $\hat{o}(y)$ acting on them. Therefore, the value of the sums in the same equation can change. Clearly, the bounds after Eq. \eqref{limbound} still hold; in particular, the absence of cluster decomposition in any basis implies the simultaneous absence of any sort of producibility.
In real experiments, the states corresponding to $\ket{\uparrow}$ and $\ket{\downarrow}$ are generically selected by fluctuations (possibly classical) or by suitably added perturbations, as also happens in simulations: for instance, in the example above, by an additional term $h^{\prime} \, \sum_{i=1}^N \hat{\sigma}_i^{(z)}$, $h^{\prime} \to 0$, as is customary e.g. in DMRG calculations.
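The example above can be checked directly with a few lines of Python. The following is a minimal sketch for the idealised ($h \to 0$) limit of Eq. \eqref{ham1}, where $\ket{\uparrow}$ and $\ket{\downarrow}$ are exact product states; the chain length is an arbitrary illustrative choice.

\begin{verbatim}
import numpy as np

N = 6
sz = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli sigma^z
I2 = np.eye(2)

def site_op(op, x):
    """Embed a single-site operator at site x of the N-site chain."""
    ops = [op if k == x else I2 for k in range(N)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

up = np.zeros(2**N); up[0] = 1.0            # |up,...,up>
down = np.zeros(2**N); down[-1] = 1.0       # |down,...,down>
plus = (up + down) / np.sqrt(2)             # GHZ-like combination |+>

def conn(state, x, y):
    """Connected correlator <sz(x) sz(y)> - <sz(x)><sz(y)>."""
    zx, zy = site_op(sz, x), site_op(sz, y)
    return (state @ zx @ zy @ state) - (state @ zx @ state) * (state @ zy @ state)

x, y = 0, N - 1                             # maximally separated sites
print("|up>:", conn(up, x, y))              # 0: clustering holds
print("|+> :", conn(plus, x, y))            # 1: clustering violated
\end{verbatim}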
The discussion above does not cover the important cases of continuous degeneracies, as for spontaneously broken continuous symmetries and for genuine topologically ordered matter \cite{wen2012}. Moreover, it is not clear whether a volume-law dependence for the von Neumann entropy is enough to guarantee the violation of cluster decomposition. These issues are beyond the scope of the present work.
\section{SM 3: derivation of Eq. (12) of the main text}
Starting from Eq. (9)
of the main text,
\begin{eqnarray}
\tilde{F}[\rho_P , \hat{O}]_N \equiv 4 \, \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \, \Big(\sum_{x,y} \, \bra{\tilde{\lambda}} \hat{o} (x) \hat{o} (y) \ket{\tilde{\lambda}}_{\mathrm{conn}} \Big) \, ,
\label{finalboundmix2}
\end{eqnarray}
in this Section we obtain Eq. (12) of the main text,
expressed in a generic basis, possibly orthonormal.
This step is strictly required to operatively evaluate $ \tilde{F}[\rho_P , \hat{O}]_N$, since in general the producible basis $\ket{\tilde{\lambda}}$, fulfilling Eq. (8)
of the main text, is not known. For this purpose, it is useful
to consider how Eq. \eqref{finalboundmix2} evolves under a transformation to an orthonormal, and generally entangled, basis, i.e. via the Gram-Schmidt procedure, corresponding to a {\it non-unitary} transformation.
Setting this transformation as
\begin{eqnarray}
\ket{\tilde{\lambda}} = \sum_{n} \, a_{\tilde{\lambda} , n} \, \ket{n} \, , \quad \quad \sum_{n} \, |a_{\tilde{\lambda} , n}|^2 \neq 1 \, ,
\label{cob}
\end{eqnarray}
where the $\ket{n}$ are orthonormal to each other.
Forgetting for the moment the asymptotic terms involved in the connected correlations, we can write:
\begin{eqnarray}
\sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \, \sum_{x,y} \, \bra{\tilde{\lambda}} \hat{o} (x) \hat{o} (y) \ket{\tilde{\lambda}} \to \sum_{x,y} \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \sum_{n,m} \, a^*_{\tilde{\lambda} , m} a_{\tilde{\lambda} , n} \, \langle m | \hat{o} (x) \hat{o} (y) | n \rangle \equiv \sum_{x,y} \sum_{n,m} \, p_{n , m} \, \langle m | \hat{o} (x) \hat{o} (y) | n \rangle \, .
\label{ftildegen}
\end{eqnarray}
Exploiting again the orthonormality of the $\ket{n}$ basis, it is now easy to convince ourselves that the latter expression is equal to
\begin{eqnarray}
\sum_{x,y} \mathrm{Tr} \big[ \rho \, \hat{o} (x) \hat{o} (y) \big] \, ,
\label{summa}
\end{eqnarray}
where
\begin{eqnarray}
\rho = \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \sum_{n,m} \, a_{\tilde{\lambda} , n} a^*_{\tilde{\lambda} , m} \, \ket{ n} \bra{m} \equiv \sum_{n,m} \, p_{n,m} \, \ket{n} \bra{m}
\end{eqnarray}
is now expressed in an orthonormal basis. We also have
\begin{eqnarray}
\mathrm{Tr} \, \rho = \mathrm{Tr} \, \rho_P = \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \sum_{n} \, |a_{\tilde{\lambda} , n}|^2 = 1 \, .
\label{invtrace}
\end{eqnarray}
Importantly, the quantity in Eq. \eqref{summa} is a trace, invariant under any unitary transformation of the orthonormal basis $\ket{n}$.
Notice also that Eq. \eqref{invtrace} is immediate if the change of basis in Eq. \eqref{cob} is unitary, i.e. $\sum_{n} \, |a_{\tilde{\lambda} , n}|^2 = 1$, since, by hypothesis, $\sum_{\tilde{\lambda}} p_{\tilde{\lambda}} = 1$.
Let us now consider the product of matrix elements
$ \langle \tilde{\lambda} | \hat{o}(x) \ket{\tilde{\lambda}^{\prime}} \bra{\tilde{\lambda}^{\prime}} \hat{o}(y)| \tilde{\lambda} \rangle$, with $\tilde{\lambda}^{\prime} \neq \tilde{\lambda}$, like those appearing in Eq. \eqref{decomp}.
It is immediately clear that this product can be nonvanishing only if $\bra{\tilde{\lambda}_j} \tilde{\lambda}_j^{\prime} \rangle = 0$ for just a {\it single} domain $D_j$, and only if $x$ and $y$ belong exactly to $D_j$, so that they are entangled.
Therefore, exploiting further the cluster decomposition theorem described in SM 2, as well as translational invariance, it turns out that
the sum in Eq. \eqref{finalboundmix2} can be expressed again as
\begin{eqnarray}
\tilde{F}[\rho , \hat{O}]_N = 4 \, \sum_{x,y} \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \, \langle \tilde{\lambda} | \hat{o} (x) \hat{o} (y) | \tilde{\lambda} \rangle_{\mathrm{conn}} = 4 \, \sum_{x,y} \, \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \, \Big( \bra{\tilde{\lambda}} \hat{o} (x) \hat{o} (y) \ket{\tilde{\lambda}} - \mathrm{lim}_{|x-y| \to \infty} \bra{\tilde{\lambda}} \hat{o} (x) \hat{o} (y) \ket{\tilde{\lambda}} \Big) \, .
\label{ftildetraceort}
\end{eqnarray}
Consequently, this can be cast in a covariant-trace form:
\begin{eqnarray}
\tilde{F}[\rho , \hat{O}]_N = 4 \, \sum_{x,y} \sum_{\tilde{\lambda}} p_{\tilde{\lambda}} \, \langle \tilde{\lambda} | \hat{o} (x) \hat{o} (y) | \tilde{\lambda} \rangle_{\mathrm{conn}} = 4 \, \sum_{x,y} \Big( \mathrm{Tr} \, \big[\rho \, \hat{o} (x) \hat{o} (y) \big] - \mathrm{lim}_{|x-y| \to \infty} \mathrm{Tr} \, \big[ \rho \, \hat{o} (x) \hat{o} (y) \big] \Big) \, .
\label{ftildetraceort2}
\end{eqnarray}
As a result, in a generic {\it orthonormal} basis where in general the density matrix is not diagonal, i.e.
\begin{eqnarray}
\rho = \sum_{n,m} \, p_{n,m} \, \ket{n} \bra{m} \, ,
\label{nodiagro}
\end{eqnarray}
it can finally be written as:
\begin{eqnarray}
\tilde{F}[\rho , \hat{O}]_N = 4 \, \sum_{x,y} \sum_{n,m} \, p_{n,m} \, \Big[ \bra{m} \hat{o} (x) \hat{o} (y) \ket{n} - \mathrm{lim}_{|x-y| \to \infty} \bra{m} \hat{o} (x) \hat{o} (y) \ket{n } \Big]\, .
\label{ftildestorta}
\end{eqnarray}
So, apparently, we end up with the ``strange correlators'' $ \bra{m} \hat{o} (x) \hat{o} (y) \ket{n}$, known to be relevant for probing topology \cite{xu2014}.
Thanks to the cluster-decomposition theorem, described in SM 2, the second addendum in Eq. \eqref{ftildetraceort} is physically equivalent to the term arising from the transformation of the product $ \langle \tilde{\lambda} | \hat{o} (x) | \tilde{\lambda} \rangle \langle \tilde{\lambda} | \hat{o} (y) | \tilde{\lambda} \rangle$ in Eq. \eqref{finalboundmix2} via the Gram-Schmidt change of basis. In this way, and in contrast to the product of one-point correlations, both terms in Eq. \eqref{ftildetraceort} are still expressed as traces and are covariant under changes of basis. Therefore, the cluster-decomposition theorem allows us to obtain the covariant expression in Eq. \eqref{ftildetraceort2} from Eq. \eqref{finalboundmix2}, a task which would not be possible otherwise.
In order to be valid for all the ground and excited states of the considered system, cluster decomposition is strictly required to guarantee some sort of producibility: $c \sim N^{\delta}$, $0 \leq \delta < 1$, when $N \to \infty$. Therefore, the validity of Eqs. (8) and (12) of the main text
implies in itself that cluster decomposition is fulfilled for {\it all} the states $\ket{\tilde{\lambda}}$ in the same equation.
Finally, if the ground state is degenerate in the thermodynamic limit, we recall that the orthogonal basis $\ket{n}$ must be properly chosen to make cluster decomposition explicit, as discussed in SM 2.
Since $\langle m | n \rangle = \delta_{m n}$ and basis transformations preserve the scalar products between the states, Eq. \eqref{ftildestorta} can be simplified further. Indeed, if $m \neq n$, then $ \mathrm{lim}_{|x-y| \to \infty} \bra{m} \hat{o} (x) \hat{o} (y) \ket{n} = 0$. This fact can be justified by inserting in the last expression a complete set $\ket{\lambda}$ of states, and noticing that (for instance) $ \bra{m} \hat{o} (x) \ket{\lambda} \to 0$ if $\lambda \neq m$ (so that $\langle m | \lambda \rangle = 0$) and if $x$ is located on the (infinite) boundary of the system.
In this manner, Eq. \eqref{ftildestorta} can be recast as follows:
\begin{eqnarray}
\tilde{F}[\rho , \hat{O}]_N = 4 \, \sum_{x,y} \Big[ \sum_{n,m} \, p_{n,m} \, \bra{m} \hat{o} (x) \hat{o} (y) \ket{n} - \sum_{n} \, p_{n , n} \, \mathrm{lim}_{|x-y| \to \infty} \bra{n} \hat{o} (x) \hat{o} (y) \ket{n} \Big] \, .
\label{ftildestorta2}
\end{eqnarray}
Notably, Eqs. \eqref{ftildestorta} and \eqref{ftildestorta2} become particularly manageable in the orthonormal basis $\ket{\alpha}$ where $\rho$ is diagonal. Indeed:
\begin{eqnarray}
\tilde{F}[\rho , \hat{O}]_N = 4 \, \sum_{x,y} \Big[ \sum_{\alpha} \, p_{\alpha , \alpha} \, \bra{\alpha} \hat{o} (x) \hat{o} (y) \ket{\alpha} - \sum_{\alpha} \, p_{\alpha , \alpha} \, \mathrm{lim}_{|x-y| \to \infty} \bra{\alpha} \hat{o} (x) \hat{o} (y) \ket{\alpha} \Big] \, .
\label{ftildestorta2bis}
\end{eqnarray}
The limits in Eqs. \eqref{ftildestorta}, \eqref{ftildestorta2}, and \eqref{ftildestorta2bis} can be evaluated in a number of physically interesting cases, making these expressions manageable even at an operative level.
\section{SM 4: Further numerical simulations}
\label{sec:sm5}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.65]{fig2.pdf}
\caption{Dissipative evolution of an XXZ chain of $N = 6$ sites, using exact diagonalization, with a time-step of $J_{x}dt/\hbar = 0.01$ chosen to ensure numerical convergence. We begin the evolution from the ground state of $H$ with $J_{z}/J_{x} = 0.8$ and $h_{x}/J_{x} = 0.05$, and study the dissipative evolution with symmetry-respecting $\hat{s}^{z}_{j}$ jump operators. The observables are shown in the legend.}
\label{fig:Fig2}
\end{figure}
Fig.~\ref{fig:Fig2} shows an additional example of the time evolution of the different functionals described in the main text, starting from the ground state, having $S_z = 0$, of the Hamiltonian $H$, in the presence of a symmetry breaking transverse field $h_{x}/J_{x} = 0.05$. Here, we break the global $S_{z}$ symmetry at the Hamiltonian level, leading the QFI to differ from the variance of the operator, $\textrm{Var}(S_{x})$ (defined in Fig. 1 (a) of the main text). Moreover, the instantaneous state $\rho(t)$ no longer has a well defined magnetization, and thus the diagonal terms $\bra{\lambda} \hat{O} \ket{\lambda}$ become finite, despite the dissipator respecting the $S_{z}$ symmetry in this case ($L^{\phantom{\dagger}}_{j} = \hat{s}^{z}_{j}$). This leads to the interesting and complex situation where all our estimators differ from one another and are non-zero. Finally, note that, since $M-R \neq 0$, the functional $\tilde{F}[\rho,O]_{N} = \tilde{F}_{1}[\rho,O]_{N} + \tilde{F}_{2}[\rho,O]_{N}$ is no longer covariant.
\section{Cunningham chains}\label{SECunChain}
Cunningham \cite[p.~241]{Cunningham1907} claimed that a $C_{+1}$ chain $(p_k)_{k=0}^{\lambda-1}$ of length $\lambda \geq 4$ must have (a)~each $p_k \equiv -1 \pmod 3$ and (b)~each $p_k \equiv -1 \pmod 5$. However, condition (b) is incorrect for the prime chain $(2,5,11,23,47)$. In Theorem~\ref{THBpm1Mod8} we will prove that there are no counter-examples to Cunningham's condition (b) when $p_0>5$. Lehmer \cite{Lehmer1965} stated that $C_{+1}$ chains $(p_i)_{i=0}^{\lambda-1}$ of length $\lambda \geq 10$ have $p_0 \equiv -1 \pmod {2 \cdot 3 \cdot 5 \cdot 11}$. Loh \cite{Loh1989} showed that $C_{-1}$ chains $(p_i)_{i=0}^{\lambda-1}$ of length $\lambda \geq 12$ have $p_0 \equiv 1 \pmod {2 \cdot 3 \cdot 5 \cdot 11 \cdot 13}$. In Corollary~\ref{COC1C2Chains} we will generalise this list of results to prime chains based on $(2,b)$ for all odd integers $b \equiv \pm 1 \pmod 8$.
For any odd prime $s$ let $o_s(2)$ denote the multiplicative order of $2$ modulo $s$. Let $\mathbb{N}=\{1,2,\dots\}$. We make use of the following Legendre symbol identities, which can be found in many elementary number theory texts, for example \cite{LeVeque1996}. For odd prime $q$
\begin{equation}\label{EQLegendre}
\left(\frac{2}{q}\right)=\begin{cases} 1 & \text{if } q \equiv \pm1 \pmod 8 \\ -1 & \text{if } q \equiv \pm 3 \pmod 8 \end{cases} \hspace{10pt} \text{ and } \hspace{10pt} \left(\frac{a}{q}\right) \equiv a^{\frac{q-1}{2}} \pmod q.
\end{equation}
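Both identities in \eqref{EQLegendre} are easy to check numerically via Euler's criterion. The following minimal Python sketch (with the range of primes an arbitrary illustrative choice) verifies the first identity for $a=2$.

\begin{verbatim}
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for q in [q for q in range(3, 100) if is_prime(q)]:
    euler = pow(2, (q - 1) // 2, q)        # 2^((q-1)/2) mod q
    legendre = 1 if euler == 1 else -1     # euler is 1 or q-1 for odd prime q
    assert legendre == (1 if q % 8 in (1, 7) else -1)
print("identity verified for all odd primes q < 100")
\end{verbatim}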
We are now ready to state and prove the main theorem.
\begin{theo}\label{THBpm1Mod8}
Let $b$ be an odd integer such that $b \equiv \pm 1 \pmod 8$ and let $q$ be a $2 \bigtriangleup$-prime that does not divide $b$. Suppose $(p_k)_{k=0}^{q-2}$ is a prime chain based on the pair $(2,b)$. Then either (a)~$q$ divides $p_0+b$, (b)~$p_0=q$, (c)~$p_1=q$ or (d)~$p_0=2$.
\end{theo}
\begin{proof}
Suppose $p_0$ is an odd prime and is of the form $p_0=2m-b$ for some integer $m$. So $p_k=2^{k+1} m-b$ for all $0 \leq k \leq q-2$ by \eqref{GeneralEq}. If $q$ does not divide $p_0+b$, then Theorem~\ref{Theorem1} implies that $q=p_k=2^{k+1}m-b$ for some $0 \leq k \leq q-2$. If $q=2^{k+1} m-b$ where $k \geq 2$, then $8$ divides $2^{k+1}$, so $q \equiv -b \equiv \mp 1 \pmod 8$ and \[1=\left(\frac{2}{q}\right) \equiv 2^{\frac{q-1}{2}} \pmod q\] by \eqref{EQLegendre} since $b \equiv \pm 1 \pmod 8$. However, this contradicts that $2 \bigtriangleup q$, for which $2^{\frac{q-1}{2}} \equiv -1 \pmod q$. Hence $q=p_0$ or $q=p_1$.
\end{proof}
We can now deduce the following corollary, for which we make use of the fact that contiguous subsequences of prime chains are themselves prime chains to find a large divisor of $p_0+b$. Let $\lambda \in \mathbb{N}$.
\begin{corol}\label{COC1C2Chains}
Let $(p_k)_{k=0}^{\lambda-1}$ be a prime chain based on $(2,b)$ for an odd integer $b \equiv \pm 1 \pmod 8$. Let \[Q=\{q \leq \lambda+1: q\text{ is a prime and } 2 \bigtriangleup q\} \cup \{2\} \setminus \{p_0,p_1\}.\] If $p_0 \geq 3$, then $p_k+b$ is divisible by every $q \in Q$ for all $0 \leq k \leq \lambda-1$.
\end{corol}
\begin{proof}
We know that $2$ divides each $p_k+b$ since both $p_k$ and $b$ are odd. So let $q \in Q \setminus \{2\}$. If $p_0 \equiv -b \pmod q$, then $q$ divides $p_k+b$ for all $0 \leq k \leq \lambda-1$, since $2 \times (-b)+b \equiv -b \pmod q$. Observe that $(p_i)_{i=0}^{q-2}$ is a prime chain of length $q-1$ for all $q \in Q$. The result therefore follows from Theorem~\ref{THBpm1Mod8}.
\end{proof}
The $2\bigtriangleup$-primes are given by Sloane's \cite{Sloane} A001122 as $3,5,11,13,19,29,37,53$, and so on. It would also be of interest to know if an analogue of Corollary~\ref{COC1C2Chains} holds for other non-trivial values of $(a,b)$. The techniques in this article use the Legendre symbol identity \eqref{EQLegendre} which requires $a=2$ and $b \equiv \pm 1 \pmod 8$, so they are not easily extended to encompass other pairs $(a,b)$.
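As an illustration, the following Python sketch recomputes the beginning of A001122 from the multiplicative order $o_q(2)$ and verifies Corollary~\ref{COC1C2Chains} on the $C_{+1}$ chain $(5,11,23,47)$, a prime chain based on $(2,1)$ with $p_0 = 5 \geq 3$ and $\lambda = 4$.

\begin{verbatim}
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def order_of_2(q):
    """Multiplicative order of 2 modulo an odd prime q."""
    k, t = 1, 2 % q
    while t != 1:
        t = (t * 2) % q
        k += 1
    return k

tri = [q for q in range(3, 55, 2) if is_prime(q) and order_of_2(q) == q - 1]
print(tri)                   # [3, 5, 11, 13, 19, 29, 37, 53], matching A001122

chain, b, lam = [5, 11, 23, 47], 1, 4
Q = ({2} | {q for q in tri if q <= lam + 1}) - {chain[0], chain[1]}
assert all((p + b) % q == 0 for p in chain for q in Q)   # here Q = {2, 3}
\end{verbatim}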
Corollary~\ref{COC1C2Chains} does not hold when $p_0=2$. For example, $(2,5,11,23,47)$ is a prime chain based on $(2,1)$. In fact, the subsequences $(2,5,11,23)$ and $(5,11,23,47)$ also illustrate why we need to exclude $p_0$ and $p_1$ from $Q$ in Corollary~\ref{COC1C2Chains}.
Finally, the author would like to thank Hans Lausch for valuable feedback.
\section{Machine Learning Model Lifecycle}\label{sec:background}
In this paper, we focus on supervised learning, more specifically, on training classification models using (deep) neural networks~\cite{DL_origin}. The trained model is then used to give predictions on inputs provided by users.
\subsection{Machine Learning Models}
\label{sec:ml}
A machine learning model, e.g., a neural network, encodes a general hypothesis function $F_w$ (with parameters $w$) which is learned from a training dataset with the goal of making predictions on unseen data. The function maps some input space $\mathcal{X}$ to an output space $\mathcal{Y}$ (e.g., labels). A neural network classification model $F_w(\vec{x})$ predicts the class for input $\vec{x} \in \mathcal{X}$ using a multi-layer network of basic non-linear activation functions (neurons) whose connections are weighted according to the model parameters $w$. Each neuron obtains a number of activation signals from neurons in its preceding layer, whose importance weights are determined by their associated model parameters. Then, it computes a non-linear activation function on the weighted sum of its input signals (plus a bias signal), and passes it to the neurons in the next layer. An example of a widely used activation function is the rectified linear unit $\mathsf{relu}(z) = \max(0,z)$, which we also use in our experiments. In a classification model, a layer of the normalized exponential function $\mathsf{softmax}(\vec{z})_i = \frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)}$, where $T$ is the temperature, is added to the activation signals of the last layer to convert their arbitrary values into a vector of real values in $[0,1]$ that sum up to $1$. Thus, the output can be interpreted as the probability that the input falls into each class.
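For concreteness, the two activation functions above can be written in a few lines of Python; the logits below are arbitrary example values.

\begin{verbatim}
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the output distribution."""
    e = np.exp(z / T - np.max(z / T))      # shift for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, -1.0])
print(softmax(z, T=1))    # sharp:  ~[0.71, 0.26, 0.04]
print(softmax(z, T=10))   # soft:   close to uniform
\end{verbatim}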
\subsection{Training \& Regularization}
\label{sec:train}
Let $\vec{x}$ be the data drawn from the underlying data distribution $p_x(\vec{x})$, and $y$ be the class of $\vec{x}$.
The training goal is to find the parameters $w$ such that the model $F_w$ is a good approximation of the mapping $\vec{x} \mapsto y$ for
every data point $(\vec{x},y)$ in the space $\mathcal{X}\times\mathcal{Y}$.
The accuracy of the model in this approximation is tested using a loss function $\mathcal{L} ( F_w(\vec{x}),y )$ that measures the difference between the class $y$ and the model's prediction $F_w(\vec{x})$.
A common choice of $\mathcal{L}$ for classification models is the cross entropy loss function~\cite{murphy2012machine}.
The training objective is to find a function $F_w$ which minimizes the expected loss.
\begin{equation}
L(F_w) = \mathbb{E}_{\vec{x}\sim p_x} [\mathcal{L}(F_w(\vec{x}), y)]
\end{equation}
It is intractable to accurately represent the actual probability function $p_x(\vec{x})$, but in practice, we can estimate it using samples drawn from it. These samples form the training set $D \subset \mathcal{X}$.
Hence, we can train the model to minimize the empirical loss over the training set $D$.
\begin{equation}
\label{classification_loss}
L_D(F_w) = \frac{1}{|D|} \sum_{\vec{x}\in D} \mathcal{L} (F_w(\vec{x}),y)
\end{equation}
Learning the optimal parameters is a non-linear optimization problem. Algorithms used for solving this problem are variants of the gradient descent algorithm~\cite{gradient_descent}. Stochastic gradient descent (SGD)~\cite{zhang2004solving} is a very efficient method that updates the parameters by gradually computing the average gradient on small randomly selected subsets of the training data.
The SGD algorithm navigates the high-dimensional parameter space to find a local optimum of the loss function on the training data. This could lead to an overfitted model, which attains a very low prediction error on its training data but fails to generalize well to unseen data. Various {\em regularization} techniques have been introduced to mitigate this issue~\cite{DL_origin}. These approaches try to prevent the parameter values from arbitrarily adapting to the training data. This can be achieved by augmenting the training set, by adding a \textit{regularization term} $R(F_w)$ to the loss $L_D(F_w)$ for penalizing large parameters, or by randomly dropping the network connections during the training to prevent their complex co-adaptation~\cite{dropout}.
The training process of a model can be summarized as to find a model $F_w$ that minimizes the following objective.
\begin{equation}
\label{regualarized_loss}
\mathcal{C}(F_w)=L_D(F_w)+\lambda R(F_w)
\end{equation}
where the regularization factor $\lambda$ controls the balance between the classification function and the regularization function.
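As a minimal illustration of minimizing Eq. \eqref{regualarized_loss} with SGD, the following Python sketch trains a linear softmax classifier (an illustrative stand-in for $F_w$) with the common choice $R(F_w) = \|w\|^2$; the synthetic data, mini-batch size and hyper-parameters are arbitrary.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, c, lam, lr = 512, 20, 3, 1e-3, 0.1
X = rng.normal(size=(n, d))
y = rng.integers(0, c, size=n)
W = np.zeros((d, c))

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

for epoch in range(20):
    for i in range(0, n, 64):                    # mini-batches of 64
        xb, yb = X[i:i+64], y[i:i+64]
        P = softmax(xb @ W)
        P[np.arange(len(yb)), yb] -= 1.0         # grad of cross entropy
        grad = xb.T @ P / len(yb) + 2 * lam * W  # + grad of lam*||W||^2
        W -= lr * grad
\end{verbatim}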
\section{Conclusion}
We empirically show that all the watermarks embedded by existing methods can be removed by the distillation attack.
We design the ingrain technique as a countermeasure, which mitigates the independence between the watermark-embedding task and the main task inside the model. Its robustness against other widely used transformations is comparable to that of existing methods.
Future work includes, but is not limited to, investigating various ways of constructing the ingrainer model, and further enhancing overall embedding robustness against various transformations with minimal loss in the model's main task performance.
\section{Experiments}
In this section, we perform an empirical comparison of existing watermark embedding methods and ingrain.
We first introduce the datasets and models we used, and the embedding methods and removal techniques we evaluated.
Then, we compare them in both embedding performance and embedding robustness.
\subsection{Datasets \& Models}
We perform the evaluation on benchmark image datasets including CIFAR10~\cite{krizhevsky2014cifar} and MNIST~\cite{lecun1998mnist}.
We train convolutional neural network (CNN) and fully-connected multi-layer perceptron (MLP) for these datasets.
\begin{itemize}[leftmargin=2em]
\item \textbf{MNIST~\cite{lecun1998mnist}.} This is a dataset composed of 70,000 handwritten digit images with size 28$\times$28 in 10 classes. We rescale each image to value range [0, 1].
\textbf{Model}. We use a fully-connected neural network with the same architecture as in ~\cite{distilling_DL}. Specifically, it has two hidden layers of 1200 rectified linear neurons. Each fully-connected layer is regularized using dropout 0.5. We train the model in mini-batch size 128, using the Adadelta~\cite{zeiler2012adadelta} optimizer with initial learning rate 0.1, $\rho$ 0.95, and $\varepsilon$ 1e-8.
\item \textbf{CIFAR10~\cite{krizhevsky2014cifar}.} It consists of 32$\times$32 color images in 10 classes. It has 50,000 training records and 10,000 test records. We rescale each image to value range [0, 1]. \textbf{Model}. We train a standard convolutional neural network (CNN) with the same architecture as in ~\cite{papernot2016distillation}. Specifically, it is a succession of 2 convolutional layers with 64 (3$\times$3) filters, a max pooling layer, 2 convolutional layers with 128 (3$\times$3) filters, a max pooling layer, and 2 fully connected layers with 256 neurons. Each fully-connected layer is regularized with dropout 0.5. We set the mini-batch size to 128.
We use SGD with momentum 0.9 to optimize the model. The initial learning rate is set to 0.01, and decays 0.95 every 10 epochs.
\end{itemize}
\subsection{Embedding Techniques}
We evaluate the following embedding techniques.
\begin{itemize}[leftmargin=2em]
\item \textbf{W:LSB}~\cite{ccs_ML_remembers}, embedding into the least significant bit(s) of parameters. Trained for 500 epochs.
\item \textbf{W:SGN}~\cite{ccs_ML_remembers}, embedding into the sign of parameters. Trained for 500 epochs.
\item \textbf{W:COR}~\cite{ccs_ML_remembers}, embedding into parameters by making them correlated to the watermarks. Trained for 500 epochs.
\item \textbf{W:STA}~\cite{embedding_watermarks}, embedding into the statistical information of parameters. Trained for 500 epochs.
\item \textbf{P:CAP}~\cite{ccs_ML_remembers,watermark_backdoor}, embedding into predictions on a watermark-carrier set through capacity abuse. Trained for 500 epochs.
\item \textbf{P:ING}, embedding into predictions through ingrain. We train the ingrainer for 1,000 epochs, and train the classifier for 500 epochs. We perform ingrain 5 times with ingrain coefficient $\lambda$ of 0.5, 1, 2, 4, and 8 respectively. We set the ingrain temperature $T$ to 10 during ingrain, because we empirically find it gives the best overall embedding robustness.
\end{itemize}
\noindent \textbf{Watermark-carrier set $D_S$}.
Similar to the setting of the P:CAP method~\cite{ccs_ML_remembers}, we compose $D_S$ by randomly generating images using a pseudorandom number generator based on a random seed. Such data points could be generated in many different ways, for example, white noise, uniformly random noise, one-hot images~\cite{ccs_ML_remembers}, and random walk.
Random walk starts from the center of a blank image and moves by one pixel of random value at a random direction for many steps---which we set to the size of the image. In our experiments, we observe that random-walk images are more effective in practice (i.e., easier to embed with less accuracy overhead), so we use this approach to generate $D_S$.
In total, we synthesize three $D_S$ sets with 1,000, 5,000 and 10,000 watermark-carrier images respectively. Each $D_S$ set is labeled with a sequence $Y_S$; together they compose the watermarks.
For W:$*$ methods, we convert $D_S$ to a number of bits that contain the same amount of watermark information.
Note that W:STA needs massive memory in the embedding process (see Section \ref{sec:existing_methods}), which makes it impractical to embed the same amount of watermark information. We embed 256 bits for it, as in the case of \cite{embedding_watermarks}.
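A minimal Python sketch of the random-walk generation of $D_S$ described above is given below; the exact step rule and pixel-value range are our own illustrative choices where the text is not specific, and the fixed seed allows the owner to regenerate $D_S$ for verification.

\begin{verbatim}
import numpy as np

def random_walk_image(h, w, seed):
    rng = np.random.default_rng(seed)
    img = np.zeros((h, w))
    r, c = h // 2, w // 2                      # start at the center
    for _ in range(h * w):                     # "size of the image" steps
        img[r, c] = rng.random()               # pixel of random value
        dr, dc = rng.integers(-1, 2, size=2)   # random direction
        r = int(np.clip(r + dr, 0, h - 1))
        c = int(np.clip(c + dc, 0, w - 1))
    return img

D_S = np.stack([random_walk_image(28, 28, seed=s) for s in range(1000)])
Y_S = np.random.default_rng(0).integers(0, 10, size=len(D_S))  # labels
\end{verbatim}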
\subsection{Watermark Removal Attacks}
We evaluate the following watermark removal attacks in our experiments.
\begin{itemize}[leftmargin=2em]
\item \textbf{Distillation}~\cite{distilling_DL}, using a refining set and distilled for 500 epochs under different temperatures, i.e., 5, 10, and 15.
\item \textbf{Pruning}~\cite{pruning_quantization}, by pruning the parameters with different rates (from 0.1 to 0.6) followed by fine tuning trained for 25 epochs; pruning and rounding are sketched after this list.
\item \textbf{Rounding}~\cite{pruning_quantization}, by reducing the precision of the parameters by 1 to 6 digits.
\item \textbf{Fine tuning}~\cite{fine_tune}, by retraining the model on the refining set for 25 epochs.
\item \textbf{Expansion}~\cite{expand_nonlinear_CNN, speedingNN_expansion}, by expanding the convolutional layers to linear combinations of smaller filters, for speedup rate ranging from 2.5x to 3x. It is not applicable to MLP models (i.e., classifier for MNIST).
\end{itemize}
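The following minimal numpy sketch illustrates the pruning and rounding attacks listed above, in our own simplified reading (magnitude-based pruning and decimal rounding; the subsequent fine-tuning step is omitted).

\begin{verbatim}
import numpy as np

def prune(w, rate):
    """Zero out the fraction `rate` of weights with smallest magnitude."""
    flat = np.abs(w).ravel()
    thresh = np.sort(flat)[int(rate * flat.size)]
    return np.where(np.abs(w) < thresh, 0.0, w)

def round_params(w, digits):
    """Keep only `digits` decimal digits of precision."""
    return np.round(w, decimals=digits)

w = np.random.default_rng(0).normal(scale=0.05, size=(128, 64))
w_attacked = round_params(prune(w, rate=0.4), digits=2)
\end{verbatim}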
For distillation and fine tuning attacks, we leave $20\%$ of the original training data $D$ as the refining set $D'$. We use the remaining $80\%$ training data for training the classifier.
The student model architectures in distillation are presented in the following.
\begin{itemize}[leftmargin=2em]
\item \textbf{MNIST}. The student model is a fully-connected neural network with 2 hidden layers of 800 neurons, which is the same architecture as in ~\cite{distilling_DL}.
\item \textbf{CIFAR10}. It is a standard convolutional neural network, a succession of 2 convolutional layers with 64 (3$\times$3) filters, a max pooling layer, a convolutional layer with 128 (3$\times$3) filters, a max pooling layer, and 2 fully-connected layers of 200 neurons.
\end{itemize}
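Schematically, one distillation step trains the student on the (watermarked) teacher's temperature-$T$ soft labels over the refining set $D'$. The following Python sketch of the distillation loss is our own schematic reading of \cite{distilling_DL}, not the exact training code; \texttt{teacher\_logits} and \texttt{student\_logits} stand for the two networks' pre-softmax outputs on a batch from $D'$.

\begin{verbatim}
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T, axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=10.0):
    """Cross entropy of student against the teacher's softened outputs."""
    soft_targets = softmax(teacher_logits, T)   # carries watermark signal
    log_probs = np.log(softmax(student_logits, T) + 1e-12)
    return -np.mean(np.sum(soft_targets * log_probs, axis=1))

# Gradient descent on this loss w.r.t. the student parameters then
# proceeds as in ordinary training.
\end{verbatim}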
\subsection{Evaluation Metrics}
Embedding watermarks should preserve the classification accuracy comparable to the clean model. We evaluate the performance of a model using two metrics: \textit{classification accuracy} and \textit{watermark accuracy}.
The classification accuracy reflects the model's main classification performance. It is measured on an independent test set.
The watermark accuracy is the accuracy in extracting the watermarks.
In P:$*$ methods, it is measured by using the watermark-carrier set $D_S$ to query the classifier $F_w$ and counting the ratio of correct predictions to $Y_S$.
In W:$*$ methods, it is measured by extracting the embedded watermark bits from the model and calculating the accuracy.
To better compare the watermark accuracy among classifiers embedded using different methods, we normalize it as follows.
$$\hat{A}_{wm}=\max \left (\frac{A_{wm}-\frac{1}{c}}{1-\frac{1}{c}},0\right )$$
where $A_{wm}$ represents the watermark accuracy, $c$ is the number of values that the watermark can take (i.e., $c$ equals to the number of classes in P:$*$ methods and $c=2$ in W:$*$ methods), and $\frac{1}{c}$ is the probability of random guess.
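For reference, the normalization reads as follows as a small Python helper, where \texttt{c} is the number of values the watermark can take.

\begin{verbatim}
def normalized_wm_accuracy(a_wm, c):
    return max((a_wm - 1.0 / c) / (1.0 - 1.0 / c), 0.0)

print(normalized_wm_accuracy(0.1, 10))   # 0.0: no better than random guess
print(normalized_wm_accuracy(1.0, 10))   # 1.0: perfect extraction
\end{verbatim}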
\subsection{Embedding Performance}
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{4pt}
\caption{Embedding performance. The classification and watermark accuracies are presented in white and gray columns respectively. Numbers in the 2nd row under P:ING represent the $\lambda$.}
\label{tb:app_embedding}
\begin{tabular}{ll|la|lalalalala|lalalalala}
\hline
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{2}{c|}{} & \multicolumn{10}{c|}{Prior Methods} & \multicolumn{10}{c}{P:ING} \\ \hline
\multicolumn{1}{c}{$|D_S|$} & \multicolumn{1}{c}{$D$} & \multicolumn{2}{c|}{Clean} & \multicolumn{2}{c}{W:LSB} & \multicolumn{2}{c}{W:SGN} & \multicolumn{2}{c}{W:COR} & \multicolumn{2}{c}{W:STA} & \multicolumn{2}{c|}{P:CAP} & \multicolumn{2}{c}{0.5} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{4} & \multicolumn{2}{c}{8} \\ \hline
1,000 & MNIST & 0.98 & 0.02 & 0.98 & 1 & 0.98 & 1 & 0.98 & 0.99 & 0.98 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.96 & 1 & 0.95 & 1 & 0.95 & 1 & 0.93 & 1 \\
& CIFAR10 & 0.84 & 0 & 0.81 & 1 & 0.81 & 0.99 & 0.8 & 0.98 & 0.8 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.81 & 1 & 0.79 & 1 & 0.76 & 1 & 0.71 & 1 \\
\hline
5,000 & MNIST & 0.98 & 0 & 0.98 & 1 & 0.97 & 1 & 0.98 & 0.98 & 0.98 & 0.8 & 0.98 & 1 & 0.96 & 1 & 0.96 & 1 & 0.95 & 1 & 0.94 & 1 & 0.92 & 1 \\
& CIFAR10 & 0.84 & 0 & 0.8 & 1 & 0.8 & 0.99 & 0.81 & 0.98 & 0.8 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.82 & 1 & 0.81 & 1 & 0.78 & 1 & 0.73 & 1 \\
\hline
10,000 & MNIST & 0.98 & 0 & 0.98 & 1 & 0.99 & 1 & 0.98 & 0.98 & 0.98 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.96 & 1 & 0.95 & 1 & 0.94 & 1 & 0.91 & 1 \\
& CIFAR10 & 0.84 & 0 & 0.79 & 1 & 0.8 & 0.99 & 0.8 & 0.98 & 0.8 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.82 & 1 & 0.81 & 1 & 0.79 & 1 & 0.76 & 1 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/mnist_embed_all_wm_num.pdf}
\caption{Embedding performance. The host network is trained on MNIST. The size of $D_S$ is 1,000 (left), 5,000 (middle) and 10,000 (right) images. The ingrain coefficient $\lambda$s in P:ING are labeled as different numbers in the dashed curves.}
\label{fig:mnist_embed_all_wm_num}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/cifar10_embed_all_wm_num.pdf}
\caption{Embedding performance. The host network is trained on CIFAR10. The size of $D_S$ is 1,000 (left), 5,000 (middle) and 10,000 (right) images. The ingrain coefficient $\lambda$s in P:ING are labeled as different numbers in the dashed curves.}
\label{fig:cifar10_embed_all_wm_num}
\end{center}
\end{figure*}
We depict the embedding performance of W:$*$ and P:$*$ methods in Figure \ref{fig:mnist_embed_all_wm_num} and \ref{fig:cifar10_embed_all_wm_num}.
The results show that the P:$*$ methods and W:LSB achieve 100\% watermark accuracy, while the remaining W:$*$ methods attain lower watermark accuracy.
It is worth noting that ingrain leads to a small drop (e.g., 1\%$\sim$5\% for MNIST) in the model's classification accuracy, which is proportional to the ingrain coefficient $\lambda$.
This is because $\lambda$ controls the relative weight of the classification loss and the ingrain loss during the training. A larger $\lambda$ leads to more watermark information being ingrained in the model, but at the cost of losing more classification accuracy; it is a trade-off between the two accuracies.
We present the comprehensive evaluation result of embedding performance in Table \ref{tb:app_embedding}.
\subsection{Embedding Robustness against Distillation}
\label{sec:robustness_to_distillation}
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{4.5pt}
\caption{Embedding robustness against distillation. The classification and watermark accuracies are presented in white and gray columns respectively. Numbers in the 2nd row under P:ING represent the $\lambda$. The P:ING results that have better robustness than P:CAP are labeled in bold type.}
\label{tb:app_dist}
\begin{tabular}{ll|l|la|la|lalalalala}
\hline
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Prior Method} & \multicolumn{10}{c}{P:ING} \\ \hline
\multicolumn{1}{c}{$|D_S|$} & \multicolumn{1}{c}{$D$} & \multicolumn{1}{c}{$T$} & \multicolumn{2}{c|}{Clean} & \multicolumn{2}{c|}{P:CAP} & \multicolumn{2}{c}{0.5} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{4} & \multicolumn{2}{c}{8} \\ \hline
& & 5 & 0.984 & 0.018 & 0.984 & 0.036 & \textbf{0.97} & \textbf{0.229} & \textbf{0.965} & \textbf{0.271} & \textbf{0.96} & \textbf{0.297} & \textbf{0.958} & \textbf{0.339} & \textbf{0.955} & \textbf{0.34} \\
1,000 & MNIST & 10 & 0.982 & 0.02 & 0.983 & 0.037 & \textbf{0.969} & \textbf{0.224} & \textbf{0.964} & \textbf{0.261} & \textbf{0.96} & \textbf{0.288} & \textbf{0.957} & \textbf{0.307} & \textbf{0.956} & \textbf{0.327} \\
& & 15 & 0.982 & 0.019 & 0.982 & 0.032 & \textbf{0.969} & \textbf{0.197} & \textbf{0.966} & \textbf{0.242} & \textbf{0.962} & \textbf{0.262} & \textbf{0.96} & \textbf{0.279} & \textbf{0.959} & \textbf{0.281} \\
\hhline{~|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|}
& & 5 & 0.817 & 0.006 & 0.816 & 0 & \textbf{0.79} & \textbf{0.009} & \textbf{0.772} & \textbf{0.088} & \textbf{0.755} & \textbf{0.13} & \textbf{0.739} &\textbf{ 0.151} & \textbf{0.724} & \textbf{0.181} \\
& CIFAR10 & 10 & 0.81 & 0.008 & 0.809 & 0 & \textbf{0.795} & \textbf{0.033} & \textbf{0.775} & \textbf{0.053} & \textbf{0.759} & \textbf{0.101} & \textbf{0.752} & \textbf{0.116} & \textbf{0.741} & \textbf{0.147} \\
& & 15 & 0.803 & 0.002 & 0.802 & 0.002 & \textbf{0.794} & \textbf{0.03} & \textbf{0.784} & \textbf{0.06} & \textbf{0.769} & \textbf{0.07} & \textbf{0.764} & \textbf{0.103} & \textbf{0.763} & \textbf{0.101} \\
\hline
& & 5 & 0.984 & 0 & 0.984 & 0.016 & \textbf{0.969} & \textbf{0.101} & \textbf{0.962} & \textbf{0.125} & \textbf{0.957} & \textbf{0.14} & \textbf{0.954} & \textbf{0.143} & \textbf{0.95} & \textbf{0.146} \\
5,000 & MNIST & 10 & 0.982 & 0 & 0.982 & 0.02 & \textbf{0.967} & \textbf{0.096} & \textbf{0.962} & \textbf{0.119} & \textbf{0.958} & \textbf{0.134} & \textbf{0.955} & \textbf{0.143} & \textbf{0.953} & \textbf{0.143} \\
& & 15 & 0.982 & 0 & 0.982 & 0.016 & \textbf{0.969} & \textbf{0.082} & \textbf{0.964} & \textbf{0.107} & \textbf{0.96} & \textbf{0.121} & \textbf{0.957} & \textbf{0.128} & \textbf{0.956} & \textbf{0.132} \\
\hhline{~|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|}
& & 5 & 0.817 & 0 & 0.815 & 0 & 0.803 & 0 & \textbf{0.799} & \textbf{0.012} & \textbf{0.778} & \textbf{0.022} & \textbf{0.766} & \textbf{0.032} & \textbf{0.748} & \textbf{0.042} \\
& CIFAR10 & 10 & 0.81 & 0 & 0.806 & 0 & \textbf{0.8} & \textbf{0.002} & \textbf{0.795} & \textbf{0.009} & \textbf{0.786} & \textbf{0.018} & \textbf{0.775} & \textbf{0.019} & \textbf{0.771} & \textbf{0.03} \\
& & 15 & 0.803 & 0 & 0.799 & 0 & 0.797 & 0 & \textbf{0.793} & \textbf{0.005} & \textbf{0.791} & \textbf{0.005} & \textbf{0.789} & \textbf{0.011} & \textbf{0.785} & \textbf{0.014} \\
\hline
& & 5 & 0.984 & 0 & 0.984 & 0.005 & \textbf{0.971} & \textbf{0.054} & \textbf{0.966} & \textbf{0.065} & \textbf{0.959} & \textbf{0.077} & \textbf{0.956} & \textbf{0.083} & \textbf{0.952} & \textbf{0.087} \\
10,000 & MNIST & 10 & 0.982 & 0 & 0.982 & 0.007 & \textbf{0.969} & \textbf{0.047} & \textbf{0.965} & \textbf{0.061} & \textbf{0.961} & \textbf{0.073} & \textbf{0.957} & \textbf{0.083} & \textbf{0.955} & \textbf{0.083} \\
& & 15 & 0.982 & 0 & 0.981 & 0.006 & \textbf{0.971} & \textbf{0.037} & \textbf{0.966} & \textbf{0.052} & \textbf{0.963} & \textbf{0.066} & \textbf{0.961} & \textbf{0.071} & \textbf{0.958} & \textbf{0.072} \\
\hhline{~|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|}
& & 5 & 0.817 & 0 & 0.815 & 0.004 & 0.802 & 0 & 0.796 & 0 & \textbf{0.797} & \textbf{0.007} & \textbf{0.782} & \textbf{0.009} & \textbf{0.77} & \textbf{0.018} \\
& CIFAR10 & 10 & 0.81 & 0 & 0.81 & 0.001 & \textbf{0.794} & \textbf{0.002} & \textbf{0.795} & \textbf{0.002} & \textbf{0.792} & \textbf{0.003} & \textbf{0.791} & \textbf{0.006} & \textbf{0.787} & \textbf{0.008} \\
& & 15 & 0.803 & 0 & 0.804 & 0 & 0.791 & 0 & \textbf{0.794} & \textbf{0.01} & \textbf{0.793} & \textbf{0.005} & \textbf{0.792} & \textbf{0.005} &\textbf{0.793} & \textbf{0.007} \\
\hline
\end{tabular}
\end{table*}
The W:$*$ methods have no robustness against distillation, because the original classifier is replaced by a new classifier of smaller size: all the original parameters in which the watermarks are embedded are destroyed.
Therefore, we only evaluate the robustness of the P:$*$ methods.
We first present all the evaluation results on the robustness against distillation in Table \ref{tb:app_dist}.
The result shows that for each distillation temperature on each host classifier, there is at least one $\lambda$ that enables P:ING to embed watermarks with better robustness than P:CAP.
We depict the robustness of P:$*$ methods for MNIST and CIFAR10 classifiers in Figure \ref{fig:mnist_dist_all_wm_num} and \ref{fig:cifar10_dist_all_wm_num} under distillation $T=10$.
As shown, P:CAP has almost no robustness to distillation. On the other hand, P:ING preserves much more watermark information after distillation. For example, P:ING achieves 22.4\%$\sim$32.7\% watermark accuracy for the MNIST classifier when $|D_S|=1,000$. As a trade-off, the classification accuracy drops only 1.3\%$\sim$2.6\%.
As expected, the watermark accuracy (which reflects the embedding robustness) is proportional to $\lambda$.
We show the trajectories of the classification and watermark accuracies of P:CAP and P:ING during the distillation process in Figure \ref{fig:distillation_privacy}, where $\lambda=2$, distillation $T=10$.
The plots show that there is no significant difference between the classification accuracy of P:CAP and P:ING.
However, as the model's distillation progresses, the ingrained model performs evidently better than the alternative approach in terms of the watermark accuracy.
This is mainly because, when the watermark is ingrained in the classifier by P:ING, the outputs of the classifier on the refining dataset $D'$ (i.e., the training set for distillation, which is likewise drawn from $p_x$) also contain the watermark information. Whereas P:CAP embeds watermarks independently from the classifier's main task, which explains why the watermarks cannot be transferred to the student model during distillation.
From the results, it is clear that a greater $\lambda$ leads to a more robust embedding, even on the 10,000 watermark-carrier data, where a $\lambda$ of 8 causes only a 1\% drop in embedding accuracy. Together, the present findings confirm that ingrain does help attain a more robust watermark embedding against distillation.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/mnist_dist_all_wm_num.pdf}
\caption{Embedding robustness against distillation. The host network is trained on MNIST.
The size of $D_S$ is 1,000 (first 2 columns), 5,000 (3rd column), and 10,000 (4th column) images.
The result in the 1st column is obtained without augmenting $D$ with $D_S$.
The $\lambda$s in P:ING are labeled as different numbers in the dashed curves. Distillation temperature is 10.}
\label{fig:mnist_dist_all_wm_num}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/cifar10_dist_all_wm_num.pdf}
\caption{Embedding robustness against distillation. The host network is trained on CIFAR10.
The size of $D_S$ is 1,000 (first 2 columns), 5,000 (3rd column), and 10,000 (4th column) images.
The result in the 1st column is obtained without augmenting $D$ with $D_S$.
The $\lambda$s in P:ING are labeled as different numbers in the dashed curves. Distillation temperature is 10.}
\label{fig:cifar10_dist_all_wm_num}
\end{center}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{figures/mnist_dist_trajectory.pdf}
\\
\includegraphics[width=0.8\linewidth]{figures/cifar10_dist_trajectory.pdf}
\caption{Trajectories of classification and watermark accuracies during distillation.
We compare P:CAP and P:ING wherein $\lambda=2$. Distillation temperature is 10.
}\label{fig:distillation_privacy}
\end{figure}
\subsection{Embedding Robustness against Other Attacks}
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{3.5pt}
\caption{Embedding robustness against other attacks. The host network is trained on MNIST. The classification and watermark accuracies are presented in white and gray columns respectively. Numbers in the 2nd row under P:ING represent the $\lambda$. Numbers in the 2nd column represent the parameters for the corresponding attack.}
\label{tb:mnist_other_attacks}
\begin{tabular}{ll|la|la|la|la|la|la|la|la|la|la}
\hline
\multicolumn{2}{l|}{} & \multicolumn{10}{c|}{Prior Methods} & \multicolumn{10}{c}{P:ING} \\
\hline
$|D_S|$ & Attacks & \multicolumn{2}{c}{W:LSB} & \multicolumn{2}{c}{W:SGN} & \multicolumn{2}{c}{W:COR} & \multicolumn{2}{c}{W:STA} & \multicolumn{2}{c|}{P:CAP} & \multicolumn{2}{c}{0.5} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{4} & \multicolumn{2}{c}{8} \\
\hline
\multirow{3}{*}{1,000} & Pruning(0.4) & 0.97 & 0.24 & 0.98 & 0.53 & 0.97 & 0.42 & 0.98 & 0.77 & 0.98 & 1 & 0.98 & 1 & 0.98 & 0.99 & 0.98 & 0.98 & 0.98 & 0.97 & 0.98 & 0.93 \\
& Rounding(2) & 0.97 & 0 & 0.98 & 0.75 & 0.97 & 0.41 & 0.99 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.96 & 1 & 0.95 & 1 & 0.95 & 1 & 0.93 & 1 \\
& Fine Tuning & 0.99 & 0.27 & 0.99 & 0.59 & 0.99 & 0.48 & 0.98 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.97 & 1 & 0.97 & 0.99 & 0.97 & 0.98 & 0.97 & 0.95 \\
\hline
\multirow{3}{*}{5,000} & Pruning(0.4) & 0.98 & 0.23 & 0.98 & 0.52 & 0.98 & 0.42 & 0.98 & 0.77 & 0.98 & 1 & 0.98 & 0.99 & 0.97 & 0.98 & 0.97 & 0.94 & 0.97 & 0.87 & 0.97 & 0.76 \\
& Rounding(2) & 0.97 & 0 & 0.97 & 0.74 & 0.97 & 0.4 & 0.99 & 0.8 & 0.98 & 1 & 0.96 & 1 & 0.96 & 1 & 0.95 & 1 & 0.94 & 1 & 0.91 & 1 \\
& Fine Tuning & 0.98 & 0.28 & 0.98 & 0.6 & 0.99 & 0.48 & 0.98 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.97 & 1 & 0.96 & 0.98 & 0.96 & 0.95 & 0.96 & 0.88 \\
\hline
\multirow{3}{*}{10,000} & Pruning(0.4) & 0.98 & 0.23 & 0.98 & 0.52 & 0.99 & 0.42 & 0.98 & 0.77 & 0.98 & 1 & 0.98 & 0.99 & 0.98 & 0.96 & 0.97 & 0.89 & 0.97 & 0.79 & 0.97 & 0.64 \\
& Rounding(2) & 0.97 & 0 & 0.97 & 0.75 & 0.97 & 0.4 & 0.99 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.96 & 1 & 0.95 & 1 & 0.94 & 1 & 0.9 & 1 \\
& Fine Tuning & 0.99 & 0.26 & 0.99 & 0.59 & 0.99 & 0.48 & 0.98 & 0.8 & 0.98 & 1 & 0.97 & 1 & 0.97 & 1 & 0.96 & 0.98 & 0.96 & 0.94 & 0.96 & 0.84 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{3.3pt}
\caption{Embedding robustness against other attacks. The host network is trained on CIFAR10. The classification and watermark accuracies are presented in white and gray columns respectively. Numbers in the 2nd row under P:ING represent the $\lambda$. Numbers in the 2nd column represent the parameters for the corresponding attack.}
\label{tb:cifar10_other_attacks}
\begin{tabular}{ll|la|la|la|la|la|la|la|la|la|la}
\hline
\multicolumn{2}{l|}{} & \multicolumn{10}{c|}{Prior Methods} & \multicolumn{10}{c}{P:ING} \\
\hline
$|D_S|$ & Attacks & \multicolumn{2}{c}{W:LSB} & \multicolumn{2}{c}{W:SGN} & \multicolumn{2}{c}{W:COR} & \multicolumn{2}{c}{W:STA} & \multicolumn{2}{c|}{P:CAP} & \multicolumn{2}{c}{0.5} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{4} & \multicolumn{2}{c}{8} \\
\hline
\multirow{4}{*}{1,000} & Pruning(0.4) & 0.8 & 0.26 & 0.8 & 0.42 & 0.81 & 0.52 & 0.81 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.82 & 0.99 & 0.8 & 0.95 & 0.79 & 0.92 & 0.77 & 0.82 \\
& Rounding(2) & 0.8 & 0.01 & 0.82 & 0.79 & 0.81 & 0.48 & 0.8 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.81 & 1 & 0.79 & 1 & 0.76 & 1 & 0.71 & 1 \\
& Fine Tuning & 0.81 & 0.31 & 0.82 & 0.52 & 0.8 & 0.58 & 0.8 & 0.98 & 0.82 & 1 & 0.81 & 1 & 0.79 & 0.98 & 0.79 & 0.98 & 0.77 & 0.9 & 0.75 & 0.83 \\
& Expansion & 0.75 & 0.65 & 0.76 & 0.64 & 0.75 & 0.63 & 0.78 & 0.88 & 0.79 & 0.97 & 0.79 & 0.97 & 0.78 & 0.97 & 0.76 & 0.97 & 0.73 & 0.97 & 0.67 & 0.96 \\
\hline
\multirow{4}{*}{5,000} & Pruning(0.4) & 0.79 & 0.25 & 0.81 & 0.41 & 0.79 & 0.51 & 0.81 & 0.98 & 0.82 & 1 & 0.81 & 1 & 0.82 & 1 & 0.81 & 0.99 & 0.8 & 0.93 & 0.78 & 0.76 \\
& Rounding(2) & 0.8 & 0 & 0.8 & 0.79 & 0.79 & 0.47 & 0.8 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.81 & 1 & 0.81 & 1 & 0.78 & 1 & 0.73 & 1 \\
& Fine Tuning & 0.81 & 0.28 & 0.8 & 0.53 & 0.81 & 0.57 & 0.8 & 0.98 & 0.81 & 1 & 0.81 & 1 & 0.8 & 1 & 0.79 & 1 & 0.78 & 0.93 & 0.76 & 0.81 \\
& Expansion & 0.75 & 0.64 & 0.76 & 0.63 & 0.74 & 0.63 & 0.78 & 0.88 & 0.79 & 0.97 & 0.8 & 0.97 & 0.79 & 0.97 & 0.77 & 0.97 & 0.74 & 0.97 & 0.69 & 0.96 \\
\hline
\multirow{4}{*}{10,000} & Pruning(0.4) & 0.79 & 0.25 & 0.79 & 0.42 & 0.79 & 0.52 & 0.81 & 0.98 & 0.83 & 0.97 & 0.81 & 0.98 & 0.81 & 0.99 & 0.81 & 0.97 & 0.8 & 0.94 & 0.78 & 0.65 \\
& Rounding(2) & 0.79 & 0.01 & 0.79 & 0.79 & 0.79 & 0.48 & 0.8 & 0.98 & 0.83 & 1 & 0.82 & 1 & 0.82 & 1 & 0.81 & 1 & 0.79 & 1 & 0.76 & 0.99 \\
& Fine Tuning & 0.8 & 0.32 & 0.8 & 0.52 & 0.8 & 0.57 & 0.8 & 0.98 & 0.82 & 0.99 & 0.81 & 0.99 & 0.79 & 0.99 & 0.8 & 0.97 & 0.79 & 0.94 & 0.77 & 0.8 \\
& Expansion & 0.76 & 0.65 & 0.75 & 0.64 & 0.74 & 0.63 & 0.78 & 0.88 & 0.79 & 0.97 & 0.79 & 0.97 & 0.78 & 0.97 & 0.77 & 0.97 & 0.75 & 0.96 & 0.71 & 0.96\\
\hline
\end{tabular}
\end{table*}
We present the results of the robustness evaluation of the W:$*$ and P:$*$ methods against other widely used attacks (i.e., pruning, rounding, fine tuning, and low-rank expansion) on the MNIST and CIFAR10 classifiers in Tables \ref{tb:mnist_other_attacks} and \ref{tb:cifar10_other_attacks}, respectively.
The pruning rate is 0.4 and the rounding is by 2 digits.
We do not evaluate the expansion attack on the MNIST classifier because it is an MLP model, and low-rank expansion applies to convolutional layers.
The result shows that, in the prior methods, W:$*$ are generally less robust than P:CAP, which again demonstrates the fragility of W:$*$ methods.
For instance, the watermarks embedded by W:LSB are completely removed by rounding attack and mostly removed by pruning attack.
W:STA embeds the watermarks into the overall statistics of the network parameters as opposed to the individual parameters as in the case of other W:$*$ methods. Therefore, W:STA is slightly more robust than other W:$*$ methods.
The low-rank expansion attack does not modify all of the parameters, but rather expands several convolutional layers.
The watermark information that is embedded by W:$*$ in the untouched parameters can survive the expansion attack. Hence, the expansion attack is less harmful to the embedded watermarks than the other attacks.
The P:CAP method is quite robust to these attacks compared to the W:$*$ methods. For example, it achieves 97\%$\sim$100\% watermark accuracy on both classifiers.
P:ING with $\lambda\leq2$ can achieve comparable watermark accuracy to P:CAP on the MNIST classifier, and on the CIFAR10 classifier, $\lambda\leq4$ leads to comparable watermark accuracy.
This result shows that the robustness of P:ING against other widely used attacks is still comparable to the best result of existing methods.
An interesting finding from the tables is that a larger $\lambda$ of P:ING tends to result in lower watermark accuracy in the pruning and fine tuning attacks.
This is because both attacks require retraining the classifier on a refining set $D'$.
The retraining process, no longer governed by the ingrain loss, drives the classifier to ``forget'' the previously embedded watermarks.
However, because $D'$ is drawn from the same data distribution $p_x$ as the training set $D$, this effect is negligible unless $\lambda$ becomes large, which ingrains the watermarks more deeply in the network.
We present the evaluation result against pruning attack with more rates and rounding attack by more digits in Table \ref{tb:app_prune} and \ref{tb:app_round} respectively in Appendix.
\section{Introduction}
Machine learning (ML) models are becoming ubiquitous, powering an extremely wide variety of applications. As a consequence, they are being treated as conventional commodity software.
The model training could be outsourced, and the constructed models could be shared among various parties.
Cloud service providers, such as Google, Amazon, and Microsoft, provide machine learning as a service, which enables outsourcing machine learning and facilitates using third-party models. Many companies, such as BigML, also operate platforms dedicated to sharing and selling machine learning models.
As ML models are treated as intellectual property that can be sold or licensed, the need for traitor-tracing and proof of authorship emerges. One can ``watermark'' the model during its training by implanting some secret into its parameters, which can then be used to prove one's authorship of the model~\cite{embedding_watermarks, watermark_backdoor, adversarial_watermark, deepsigns, deepmarks}. Compared to watermarking multimedia data, watermarking ML models poses new technical challenges since it is the model's functionality trained for a specific task that needs to be protected.
A number of basic techniques are proposed in the literature for embedding secret information in neural networks, such as poisoning training data~\cite{backdoor_DL_posoning, ccs_ML_remembers, watermark_backdoor, adversarial_watermark, tamper_data}, modifying the training algorithm and retraining~\cite{badnets, trojanning_attack_NN, ccs_ML_remembers, adversarial_watermark, deepmarks, deepsigns}, or simply writing the secrets into the (e.g., least significant bits of) parameters after the training~\cite{ccs_ML_remembers}.
Some existing methods exploit the large capacity of neural networks in representing and memorizing random functions~\cite{memorizattion_DL, rethinking_generalisation_DL}, without damaging the model's accuracy~\cite{DL_robust_to_noise}.
Others exploit the massive unused capacity of the models' parameter space, which is largely redundant for the main classification function represented by the model~\cite{pruning_quantization}.
Nonetheless, the existing watermarking techniques are not designed strategically with countermeasures in mind.
In fact, it is a common practice that models go through some transformations for memory, energy and computation optimization~\cite{compress_by_hashing, speeding_up_CNN_from_within, speedingNN_expansion, pruning_quantization, han2016eie, han_pruning, distilling_DL, expand_nonlinear_CNN}, for fine-tuning with new data~\cite{fine_tune}, or for transfer learning~\cite{transfer_learning_cvpr, transfer_learning_survey}.
These transformations can be actively used as attacks to remove watermarks in neural networks \cite{embedding_watermarks, watermark_backdoor, adversarial_watermark, deepsigns, deepmarks}.
Unfortunately, little has been studied about the impact of model transformations on the result of existing watermark embedding mechanisms.
In this paper, we show that one of the widely used transformation techniques---distillation~\cite{distilling_DL}---is surprisingly a quite effective attack to remove the embedded watermarks.
Distillation, as a type of compression techniques, uses the knowledge of the neural network to train a new model of smaller size.
We evaluate existing watermarking methods under distillation attack.
The results empirically show that {\em all} the watermark information embedded in a neural network, using {\em any} of the existing methods, can be removed by distillation with negligible loss in the model's accuracy.
We perform a deep analysis on the results obtained. Our analysis reveals that existing methods, which simply leverage the vast capacity of neural networks, leave the embedded watermarks decoupled from the model's main functionality. More specifically, the sub-models or parameters which are responsible for memorizing the watermarks are almost independent of the part of the model which represents the main classification task. As distillation is constrained to preserve the model's accuracy, the redundant information (which contains the watermarks, but does not contribute to the distillation's objective) will be lost.
In response to the fragility of existing watermarking methods against distillation, we design {\em ingrain}, a more robust watermarking method to counter distillation.
Ingrain essentially imprints the watermarks onto the predictions of the model on real (benign) data.
The objective is to embed the watermark information into the same neural connections that are responsible for representing the main classification task.
We achieve this by ingraining the watermarks in the main model's predictions through modifying the loss function of the classifier.
We execute this in two steps. First, we train an \textit{ingrainer} model exclusively on a watermark-carrier dataset (which is derived from the watermark information). The ingrainer model has the same input-output format as the main classification model and contains all the watermark information (recoverable without any loss).
In the second step, we use the loss function of the ingrainer as a regularization term for training the main classification model, so as to encourage the model to not only match the label for each training input, but also to match the output of the ingrainer on the same training data.
Informally, this leads to a joint training of the classification model and the watermarking model, as opposed to independent training of the two objectives inside a model.
By tuning the weight of the regularization term, we can trade-off the accuracy loss with the watermark robustness.
We extensively evaluated ingrain for various machine learning tasks and model architectures on multiple training datasets. The results show that, with acceptable accuracy loss, it improves the resistance of the embedded watermarks against distillation.
For example, on one of the datasets, 30\% of the watermarks survive distillation with only 2\% loss in the classification accuracy.
Although ingrain is designed to counter distillation, it turns out that its robustness against other widely used transformations is comparable to existing methods.
This work highlights that even if the neural network is transformed into a different model via a strong attack (i.e., distillation), it is still possible to preserve watermark information if it is deeply ``ingrained'' in the model's main functionality.
\textbf{Contributions.} In summary, we make the following contributions in this paper.
\begin{itemize} [leftmargin=2em]
\item We empirically show that distillation is an effective attack which can remove all watermark information in neural networks embedded by existing methods.
\item We perform a deep analysis of the robustness of existing methods.
We argue that minimizing the independence between the main task and the watermark embedding task inside a model can improve the robustness. Nonetheless, it intuitively harms the model's accuracy. We highlight the importance of studying the trade-off between embedding robustness and cost.
\item We design ingrain as an embedding technique to counter distillation. The watermark is embedded in such a way that it is correlated with the model's main task. We empirically show that ingrain can achieve better robustness against distillation.
Besides, we also show that the robustness of ingrain against other widely used transformations (attacks) is comparable to existing methods.
\end{itemize}
\section{Watermark Embedding}
\label{sec:priorwork}
Given a watermark $S$, watermark embedding in a model $F_w$ refers to a process where the owner of $F_w$ embeds $S$ into $F_w$ in the training phase of the model, and later extracts $S$ from $F_w$ after its release to prove ownership on it.
In this section, we first formulate the watermark representation, and then introduce various embedding techniques in the literature. Next, we introduce distillation and other widely used transformation techniques of ML models which can be used as attacks to remove watermarks.
We then analyze the robustness of existing embedding methods against distillation.
\subsection{Watermark Representation}
The watermark $S$ is typically represented as $n$ bits of information that are embedded into the model $F_w$.
The owner can extract the $n$ bits of watermark information from $F_w$ as a proof of ownership.
Alternatively, the watermark can be encoded as $m$ predefined data-label pairs in a classification task with $k$ classes, where $m=\lceil \frac{n}{\lfloor\log_2(k)\rfloor} \rceil$.
Let $D_S=\{\vec{x_s}_1, \vec{x_s}_2,\cdots,\vec{x_s}_m\}$ represent the predefined data sequence drawn from some data distribution $p_s$, and $Y_S=\{{y_s}_1,{y_s}_2,\cdots,{y_s}_m\}$ represent the predefined label sequence.
The embedding goal is to enforce a hidden function $F_w(\vec{x_s}_i)={y_s}_i$ in the model $F_w$ for each $\vec{x_s}_i \in D_S$, such that the owner can later obtain $Y_S$ by providing $D_S$ to $F_w$.
The sequence $D_S$ is referred to as \textit{watermark-carrier dataset} in the rest of our paper.
Note that the watermark-carrier data distribution $p_s$ should be different from the training data distribution $p_x$.
For example, one common way is to generate $D_S$ randomly as the $i$-th draw from a pseudo-random function.
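As a concrete illustration, the sketch below shows one way such an encoding could be implemented; the uniform carrier distribution and the seed acting as the owner's key are our illustrative assumptions, not a prescription from any particular scheme.
\begin{verbatim}
import math
import numpy as np

def encode_watermark(bits, k, input_shape, seed=0):
    """Encode an n-bit watermark (list of 0/1) as m (data, label)
    pairs for a k-class task; each label carries floor(log2(k)) bits."""
    b = int(math.floor(math.log2(k)))        # bits per label
    m = int(math.ceil(len(bits) / b))        # m = ceil(n / b)
    rng = np.random.RandomState(seed)        # owner's secret key (assumed)
    D_S = rng.uniform(0.0, 1.0, size=(m,) + tuple(input_shape))  # from p_s
    padded = list(bits) + [0] * (m * b - len(bits))
    Y_S = [int(''.join(map(str, padded[i*b:(i+1)*b])), 2)
           for i in range(m)]
    return D_S, Y_S

# e.g., 16 watermark bits, k = 10 classes -> m = ceil(16/3) = 6 pairs
\end{verbatim}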
\subsection{Existing Embedding Methods}
\label{sec:existing_methods}
Depending on the watermark representation, the watermark $S$ could be embedded in the model's parameters $w$, or the model's predictions $F_w(\vec{x})$ on the watermark-carrier set $D_S$.
Following the same spirit of generic watermarking techniques~\cite{cox1997secure}, the embedding should meet two requirements, i.e., \textit{fidelity} and \textit{robustness}. Fidelity requires that the performance of the host network (i.e., the neural network into which the watermark bit vector is embedded)
is not impaired by the embedding, while robustness requires the embedded watermark to be detectable even if the host network undergoes modifications.
\subsubsection{Embedding in Parameters $w$}
Essentially, this problem is to embed an $n$-bit vector $S \in \{0,1\}^n$ into $w$ of a given neural network (host network).
The avenues through which existing methods embed the watermark into $w$ typically fall into four classes: the least significant bit(s) of $w$, the signs of $w$, correlation with $w$, and statistics of $w$.
\underline{\textit{Least significant bit(s) of $w$} (W:LSB).}
Leveraging an observation that high-precision parameters are not necessary for high performance of the model~\cite{pruning_quantization}, Song et al. \cite{ccs_ML_remembers} investigated embedding the secret bit vector directly into the least significant bit(s) of the network parameters. They show that given a CNN model comprising $880$K parameters and trained on the LFW dataset \cite{LFWTech}, the adversary can embed up to $17.6$M bits at a cost of $0.14\%$ decrease in test accuracy of the model.
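A minimal sketch of this idea, writing one bit into the least significant mantissa bit of each 32-bit parameter, is shown below; Song et al. consider embedding multiple low-order bits, so this single-bit variant is only illustrative.
\begin{verbatim}
import numpy as np

def embed_lsb(params, bits):
    """Overwrite the least significant mantissa bit of the first
    len(bits) float32 parameters with the watermark bits."""
    raw = params.astype(np.float32).view(np.uint32).copy()
    raw[:len(bits)] = (raw[:len(bits)] & ~np.uint32(1)) \
                      | np.asarray(bits, dtype=np.uint32)
    return raw.view(np.float32)

def extract_lsb(params, n):
    """Read the n embedded bits back from the released parameters."""
    return (params.astype(np.float32).view(np.uint32)[:n] & 1).tolist()
\end{verbatim}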
\underline{\textit{Signs of $w$} (W:SGN).}
Another avenue that the model owner can exploit to embed his watermark is the signs of the model parameters \cite{ccs_ML_remembers}. In particular, given a watermark bit vector $S \in \{-1,1\}^n$, the owner would like to force the sign of $w_i$ to match that of $S_i$. The owner can achieve this by adding a penalty term $P$ to the original loss function. $P$ is defined as:
\begin{equation}
P (w, S) = \frac{\lambda_S}{n} \sum_{i=1}^n |\max(0, -w_i S_i)|
\end{equation}
where $\lambda_S$ controls the magnitude of the penalty. The penalty is minimal (i.e., zero) when $w_i$ and $S_i$ have the same sign.
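In a framework such as PyTorch, this penalty could be added to the training loss roughly as follows (a sketch; the choice of which parameters carry the watermark is an assumption):
\begin{verbatim}
import torch

def sign_penalty(w, S, lam_S):
    """P(w, S): zero when sign(w_i) matches S_i in {-1, +1},
    and linear in |w_i| otherwise."""
    S = torch.as_tensor(S, dtype=w.dtype, device=w.device)
    return lam_S / len(S) * torch.clamp(-w[:len(S)] * S, min=0).sum()

# loss = task_loss + sign_penalty(model.fc.weight.flatten(), S, 0.1)
\end{verbatim}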
\underline{\textit{Correlation with $w$} (W:COR).}
Alternatively, the model owner can embed the watermark $S \in \mathbb{R}^l$ into the model parameters $w$ by adding a correlation term $C$ to the loss function that is employed during training~\cite{ccs_ML_remembers},
so as to maximize the correlation between $w$ and $S$. The correlation term $C (w, S)$ is defined as:
\begin{equation}
C(w, S) = - \lambda_c \cdot \frac{|\sum_{i=1}^l (w_i - \bar{w}) (S_i - \bar{S})|}{\sqrt{\sum_{i=1}^l (w_i - \bar{w})^2}\sqrt{\sum_{i=1}^l (S_i - \bar{S})^2}}
\end{equation}
where $l$ is the number of parameters, $\lambda_c$ is the level of correlation, and $\bar{w}, \bar{S}$ are the mean values of $w, S$, respectively.
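The correlation term admits an equally direct sketch; the small epsilon guarding against a zero denominator is our addition.
\begin{verbatim}
import torch

def correlation_term(w, S, lam_c):
    """C(w, S): negative absolute Pearson correlation of w and S."""
    S = torch.as_tensor(S, dtype=w.dtype, device=w.device)
    wc, Sc = w - w.mean(), S - S.mean()
    corr = (wc * Sc).sum() / (wc.norm() * Sc.norm() + 1e-12)
    return -lam_c * corr.abs()
\end{verbatim}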
\underline{\textit{Statistics of $w$} (W:STA).}
Uchida et al.~\cite{embedding_watermarks} have investigated the problem of embedding watermarks into the statistical information of $w$ by the use of a regularization term $E_R(w, S)$ defined in the following.
\begin{equation}
\label{eq:icmr}
E_R(w, S) = - \sum_{j=1}^n (S_j \log (y_j) + (1 - S_j) \log(1-y_j))
\end{equation}
where $y_j = \sigma (\sum_iX_{ji}w_i)$ and $\sigma(x) = \frac{1}{1 + \exp(-x)}$. The matrix $X$ is an embedding parameter (secret key) with size $n\times M$, where $M$ is the size of the network parameters $w$.
The regularization term enforces $w$ to have a certain statistical bias reflecting the embedded watermark.
Experimental studies show that the watermark embedding incurs minimal effect on the performance of the host network (e.g., increasing the test error rate on the CIFAR-10 dataset by only $1\%$~\cite{embedding_watermarks}). They also show that the watermark remains detectable in the event the host network undergoes fine-tuning and compression by pruning. Nevertheless, we show later in our evaluation that the embedded watermark is removed if the host network is distilled.
Besides, the number of parameters $M$ in a neural network is usually very large. Thus, the embedding parameter $X$ (with size $n\times M$) consumes massive memory, especially when the watermark size $n$ is large. In practice, the amount of watermark information embedded by this approach is quite limited.
Similarly, Chen et al.~\cite{deepmarks} and Rouhani et al.\cite{deepsigns} also add a regularization term to embed a watermark in the probability density function of the model's parameters or neurons' activations.
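Since Equation~\ref{eq:icmr} is a binary cross-entropy between $S$ and the sigmoid of a secret projection of $w$, it can be sketched as follows:
\begin{verbatim}
import torch
import torch.nn.functional as F

def statistics_regularizer(w, S, X):
    """E_R(w, S): binary cross-entropy between the watermark bits S
    and the projected statistics of w; X is the owner's secret
    n x M embedding key."""
    y = torch.sigmoid(X @ w)             # y_j = sigmoid(sum_i X_ji w_i)
    S = torch.as_tensor(S, dtype=y.dtype, device=y.device)
    return F.binary_cross_entropy(y, S, reduction='sum')

# extraction after release: bits = (X @ w > 0)
\end{verbatim}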
\subsubsection{Embedding in Predictions $F_w(\vec{x})$}
\underline{(P:CAP).}
The capability of neural networks to ``memorize'' random noise~\cite{rethinking_generalisation_DL} suggests that the model owner can force the model to ``memorize'' the watermark of his choice, and then rely on the model's predictions $F_w(\vec{x})$ to extract the watermark \cite{ccs_ML_remembers}.
In particular, the owner synthesizes a set of records, and assigns to them labels $Y_S$ that encode the watermark he wants to embed, obtaining a labeled watermark-carrier set $D_S$. He then poisons the training data set $D$ with the synthetic data set $D_S$. Finally, he trains the model on the poisoned training set using a standard training pipeline.
When $F_w$ becomes overfitted on $D_S$, the owner can extract the watermark from model predictions $F_w(\vec{x})$ by querying $F_w$ with $D_S$.
Experimental results show that the amount of embedded information is equivalent to the pixel information of $25$ images in the CIFAR-10 dataset at a cost of $0.69\%$ decrease in the test accuracy of the model~\cite{ccs_ML_remembers}.
Similarly, Adi et al.~\cite{watermark_backdoor} watermark a neural network by re-training it on a random set $D_S$ with random labels $Y_S$ as the watermark.
Merrer et al.~\cite{adversarial_watermark} use a set of adversarial examples of a neural network to convey the watermark information via a set of queries of them.
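The following sketch summarizes the P:CAP pipeline; \texttt{train\_fn} and \texttt{model.predict} are placeholders for the owner's usual training and inference routines.
\begin{verbatim}
def embed_by_capacity(train_fn, D, D_S, Y_S):
    """Poison the training set with the labeled watermark-carrier
    set and train with a standard pipeline until D_S is overfitted."""
    X = list(D[0]) + list(D_S)       # inputs
    y = list(D[1]) + list(Y_S)       # labels (Y_S encodes the watermark)
    return train_fn(X, y)

def extract_watermark(model, D_S):
    """Query the released model with D_S; the predictions are Y_S."""
    return [int(model.predict(x)) for x in D_S]
\end{verbatim}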
\subsection{Watermark Removal Attacks: Model Transformations}
\label{sec:transformations}
Given a trained model, transformation techniques aim at deriving a new model with the same or a slightly different prediction task, with additional benefits such as memory and computational efficiency, for example when the model is going to be used in mobile applications. They might also be needed for transfer learning, updating the model using fine tuning, or further regularizing the model.
Many of the transformation techniques make use of an extra refining set $D'$ in connection with the original model $F_w$ to update or reconstruct it as $F_w'$.
The model transformations can be used as attacks to remove the embedded watermarks in neural networks \cite{embedding_watermarks, watermark_backdoor, adversarial_watermark, deepsigns, deepmarks}.
Although there are many transformation techniques proposed in the literature, in this section, we only list the major techniques for deep neural networks, which are commonly used in practice.
\textit{\underline{Model Compression.}}
The objective here is to optimize the memory needed to fit the (parameters of the) model, while preserving the accuracy of the model. The compression can be achieved by removing insignificant parameters and pruning their links between neurons~\cite{han_pruning} (which is often followed by fine tuning the remaining parameters using the refining set to further make use of them), limiting the number of required bits to represent the model's parameters~\cite{pruning_quantization}, or grouping parameters into a few hash buckets~\cite{compress_by_hashing}.
\textit{\underline{Distillation}}~\cite{distilling_DL} is another type of compression, where the original model's knowledge could be distilled into another model of smaller size, for example by reducing the number of neurons in each layer.
Essentially, the knowledge of the original model $F_{w,T}$ (teacher model) is represented as its predictions on a refining dataset $D'$ (which is drawn from $p_x$) under temperature $T$ in the $\mathsf{softmax}$ function (see Section~\ref{sec:ml}). The temperature $T$ is usually set larger than $1$ so as to make the teacher model $F_{w,T}$ produce a softer prediction (i.e., softer probability distribution over classes), which encodes more knowledge of $F_{w,T}$.
The smaller model $F_w'$ (student model) is then trained on $D'$ with a combination of the soft predictions produced by $F_{w,T}$ and hard labels which are determined by the ground truth labels of $D'$.
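For reference, a common formulation of the student's training loss, following~\cite{distilling_DL}, is sketched below; the weighting coefficient $\alpha$ is our illustrative choice.
\begin{verbatim}
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=10.0, alpha=0.5):
    """Soft targets from the teacher at temperature T, combined
    with the usual hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
\end{verbatim}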
\textit{\underline{Fine Tuning.}}
A very common practice in machine learning is to refine and update a model using new data. In the fine tuning process, an existing model $F_w$ is updated by simply training a new model $F_w'$ on a refining set $D'$, while the initial parameters of $F_w'$ are set to those of $F_w$~\cite{fine_tune}.
\textit{\underline{Transfer Learning.}}
This transformation technique is used to update the classification task of a model $F_w$ to a related yet slightly different task~\cite{transfer_learning_cvpr, transfer_learning_survey}.
It often retains the lower layers of the original model, which usually extract generic features, and fine-tunes or retrains the last few layers using the refining set $D'$ as the training set for the new model $F_w'$.
\textit{\underline{Computation Optimization.}}
The computation time for predictions on a test input is often not negligible for (deep) convolutional neural networks. Using a technique known as {\em low-rank expansion}, it has been shown that approximating the convolutional layers of a model by linear combinations of smaller filters can accelerate the network's computation~\cite{expand_nonlinear_CNN, speedingNN_expansion}. In other words, a convolutional layer in a pre-trained model is decomposed into more convolutional layers with smaller filters, thus reducing the computation complexity~\cite{speedingNN_expansion}.
\subsection{Robustness of Existing Embedding Methods}
\begin{table}[]
\centering
\setlength{\tabcolsep}{1.5pt}
\caption{Robustness of existing and our (P:ING) embedding methods.
The accuracies of the main classification task and the watermark extraction are presented in white and gray columns respectively.
The host network is a fully-connected multilayer perceptron (MLP) trained on the MNIST dataset. The size of the watermark-carrier set $D_S$ is 1,000 images. We perform removal attacks on the embedded network including distillation (with temperature 5), pruning (with rate 0.4), rounding (by 2 digits), and fine tuning.
}
\label{tb:robustness_mnist}
\begin{tabular}{l|la|la|la|la|la}
\hline
& \multicolumn{2}{c|}{\multirow{2}{*}{Embed}} & \multicolumn{8}{c}{Attack} \\
\cline{4-11}
& \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Distillation} & \multicolumn{2}{c|}{Pruning} & \multicolumn{2}{c|}{Rounding} & \multicolumn{2}{c}{Fine Tuning} \\
\hline
W:LSB & 0.98 & 1.00 & 0.98 & 0 & 0.97 & 0.24 & 0.97 & 0 & 0.99 & 0.27 \\
W:SGN & 0.98 & 1.00 & 0.98 & 0 & 0.98 & 0.53 & 0.98 & 0.75 & 0.99 & 0.59 \\
W:COR & 0.98 & 0.99 & 0.98 & 0 & 0.97 & 0.42 & 0.97 & 0.41 & 0.99 & 0.48 \\
W:STA & 0.98 & 0.80 & 0.98 & 0 & 0.98 & 0.77 & 0.99 & 0.80 & 0.98 & 0.80 \\
P:CAP & 0.98 & 1.00 & 0.98 & 0 & 0.98 & 1.00 & 0.98 & 1.00 & 0.98 & 1.00 \\
\hline
\textbf{P:ING} & 0.97 & \textbf{1.00} & 0.97 & \textbf{0.23} & 0.98 & \textbf{1.00} & 0.97 & \textbf{1.00} & 0.97 & \textbf{1.00} \\
\hline
\end{tabular}
\end{table}
We first present part of our experimental results on the robustness of existing embedding methods in Table \ref{tb:robustness_mnist}.
The result shows that distillation is a strong attack which can remove all the embedded watermarks by existing methods. Nonetheless, our method (P:ING) preserves much more watermark information after distillation. Besides, it also achieves comparable robustness to the best result of existing methods against other attacks.
Embedding $S$ into the parameters $w$ (i.e., W:$\ast$ methods) is inherently fragile against attacks that alter $w$ or the model architecture. For example, parameter pruning and rounding for compression purposes can completely remove watermarks embedded by the W:LSB method.
Distilling the model's knowledge into another model of smaller size from scratch destroys all the watermarks because the new model has a fresh architecture and training process.
In existing methods that embed the watermarks into the model's predictions (i.e., the P:CAP method),
the watermark-carrier set $D_S$ is synthesized randomly with labels chosen at the owner's will, so it is \textit{noise data} to the model in terms of the main classification task.
A series of recent studies has indicated that
a model fits the noise data by ``memorization'' instead of extracting general patterns from it, and learns the general patterns of the real data first before fitting noise data~\cite{memorizattion_DL}.
Our experiments align with this phenomenon as shown in Figure~\ref{fig:host_task_vs_noise}.
The noise data $D_S$ is memorized in a later stage during training after the model learns the most meaningful features on $D$.
This suggests that the embedded watermarks form a hidden content (as a set of parameters and neural connections) that is largely {\em independent} from the main classification task.
In other words, there is a negligible knowledge intersection between the main classification task and the representation of watermarks in the model.
Exactly because of this independence, the hidden watermark functions are very fragile to distillation whose major objective or constraint is to preserve the accuracy of the original model. Hence, redundant and independent embedded watermarks can be easily removed.
\begin{figure}[t]
\begin{center}
\includegraphics[width=70mm]{figures/host_task_vs_noise.pdf}
\caption{Trajectories of training metrics on main task data $D$ and noise data $D_S$.
$D$ is the test set of CIFAR10.
The size of $D_S$ is $1,000$ random images which have random labels.
}
\label{fig:host_task_vs_noise}
\end{center}
\end{figure}
\section{Related Work}
While DL-based systems are reported to attain very high accuracy in various applications~\cite{ML_Go, ML_image}, there remain some limitations that hinder wide adoption of these systems. On the one hand, the computational cost required to operate a neural network can be prohibitive, often surpassing the resources typically available on mobile devices. On the other hand, various studies have demonstrated the fragility of DL models in the presence of adversarial attacks~\cite{defensive_distillation_broken, NN_evasion}. Thus, motivated either by efficiency or security concerns, several model transformation techniques have been studied in the literature~\cite{distilling_DL, defensive_distillation, pruning_quantization}.
Besides model adaptation, various techniques have also been applied in security-related tasks, such as embedding watermark information into a neural network~\cite{embedding_watermarks}, or creating a backdoor in a model~\cite{badnets}.
\subsection{ML Privacy}
Research has strongly suggested that, similar to other data-driven applications, ML poses a threat to privacy~\cite{attract_info_from_ML, membership_inference_attack, Pharmacogenetics_privacy}. For instance, given access to an ML model, an adversary can infer non-trivial and useful information about its training set~\cite{attract_info_from_ML}. It has also been shown that one can abuse the prediction output by an ML model for a partially unknown input $x$ to infer its unknown features~\cite{Pharmacogenetics_privacy}. Following the same spirit, Shokri et al.~\cite{membership_inference_attack} study membership inference attacks against ML models, wherein an adversary attempts to learn if a record of his choice is part of the private training set.
The above mentioned attacks on privacy indicate that benignly trained ML models could leak certain information about its input or training sets. Our study, on the other hand, examines how one can intentionally abuse a model to embed a watermark in a robust way.
\subsection{Models Abusing}
Model abuse has recently attracted great attention from the research community. The model abuse problem asks to what extent one can exploit a deep learning model to learn or conduct an additional task beyond the intended task that is associated with the training set~\cite{DL_robust_to_noise, ccs_ML_remembers, embedding_watermarks}. Arpit et al.~\cite{memorizattion_DL} have shown that DL models with sufficient capacity (i.e., having a large enough number of parameters) can ``memorize'' noise contained in the training set, without yielding poor generalization to real data samples. Song et al.~\cite{ccs_ML_remembers} explore another abuse which attempts to use the model as a covert channel to convey some secret information (i.e., sensitive information of the training set).
The authors propose various approaches to achieve this, including poisoning the training set, tampering with the training procedure, or directly modifying parameters of the model after training.
Motivated by copyright protection of ML models, Uchida et al. \cite{embedding_watermarks} investigated an approach that employs malicious embedding regularizers during training to force the model parameters to observe certain statistical bias, which can then be considered as a ``watermark'' of the model. These abuses have been shown to offer impressive results, strongly indicating that ML models can be abused for additional tasks beyond their intended tasks.
Nevertheless, it remains unclear if the introduced features are retained after the model undergoes typical ML transformations. Our work, in contrast, experimentally shows that typical ML transformations, especially distillation, are destructive to embedded watermarks which are independent from the model's main task.
\subsection{Neural Networks in Adversarial Settings}
Deep learning techniques, while achieving utmost accuracy in various application domains~\cite{ML_image_accuracy, ML_Go}, were not originally designed with built-in security. However, recent years have witnessed an ever increasing adoption of DL models for security-sensitive tasks, which causes the accuracy of the models' predictions to have significant implications on the security of the host tasks.
Various works have suggested that ML models are likely vulnerable in adversarial settings~\cite{evadingML_CCS, badnets, defensive_distillation_broken}. In particular, an adversary could force a victim model to deviate from its intended task and behave erratically according to the adversary's wish. The adversary can stage these attacks by corrupting the training phase (e.g., poisoning the training set with adversarial data~\cite{poisoning_SVM, trojanning_attack_NN}, employing adversarial loss function~\cite{badnets}), maliciously modifying the victim model~\cite{defensive_distillation_broken}, or feeding the victim model with adversarially crafted samples~\cite{adversarial_examples_DL, NN_evasion} in the testing phase.
\subsection{Secure $\&$ Privacy-Preserving ML Training}
In the wake of security and privacy threats posed to ML techniques, much research has been devoted to provisioning secure and privacy-preserving training of ML models~\cite{DP_DL, privacypreservingDL, obliviousML}. For instance, Abadi et al. studied a framework to train deep learning models with differential privacy. Shokri et al.~\cite{privacypreservingDL} proposed a protocol for privacy-preserving collaborative deep learning, which enables participants to jointly train a model without revealing their private training data. Bonawitz et al.~\cite{secure_aggregation} presented a solution for secure aggregation of high-dimensional data. In addition, systems for oblivious multi-party machine learning have also been built using trusted hardware primitives~\cite{obliviousML}.
The threat models assumed by these techniques are to protect privacy of users' data contributed to the training set. Our work studies a different threat model wherein the model owner participating in the training process intends to embed watermark in the model.
\subsection{Machine learning vs. Watermarking}
Quiring et al.~\cite{ML_vs_WM} discuss the similarities between machine learning (ML) and watermarking (WM) research. They present two case studies to illustrate such similarities. The first case examines the use of the ``1.5-class classifier'' technique in ML (combining two-class and one-class models~\cite{one_and_a_half_class_classifier}) as a defense against oracle attacks in WM, whereas the second case study explores the use of a stateful detector, which is a common concept in WM, to mitigate model stealing/extraction attacks in ML.
\section{Ingrain}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.85\linewidth]{figures/architecture.pdf}
\caption{Watermark ingrain in a neural network classifier. The goal is to train a classifier $F_w$ that also carries the watermarks represented by an ingrainer model $G_\theta$. $G_\theta$ has the same input-output format and architecture as $F_w$, and is pre-trained on a watermark-carrier set $D_S$ (which together with its label sequence $Y_S$ compose the watermarks), so its parameters $\theta$ are fixed during the ingrain.
During the ingrain, $F_w$'s training set $D$ is augmented with $D_S$ for the classification loss $\mathcal{L}(F_w(\mathbf{x}), y)$ to reinforce the embedding of watermarks in $F_w$.
We use stochastic gradient descent (SGD) to update $F_w$'s parameters $w$, by jointly optimizing the classification loss function and the ingrain loss function $\mathcal{L}(F_{w,T}(\mathbf{x}), G_\theta(\mathbf{x}))$ weighted by an ingrain coefficient $\lambda$. The ingrain loss acts similarly as a regularizer and helps $F_w$'s predictions on $D$ implicitly contain the watermark information.}
\label{fig:architecture}
\end{center}
\end{figure}
Drawing insights from the fragility of existing embedding methods, especially to distillation,
we introduce \textit{ingrain} as a means to deeply embed the watermarks into the model's functionality by mitigating the independence between the watermark function and the main classification function.
We first present an overview of ingrain technique. Next, we introduce the ingrainer model which represents the watermarks.
We then use it to regularize the training of the classifier to embed the watermarks.
\subsection{Overview}
In ingrain, the watermark $S$ is represented as the watermark-carrier sequence $D_S$ and its label sequence $Y_S$ in the embedding process.
Figure~\ref{fig:architecture} illustrates the ingrain mechanism, which is an indirect way of embedding watermarks in a neural network as opposed to directly overfitting the model on the watermarks. The main idea behind ingrain is to force the model to carry the watermark information on its predictions on in-distribution data (i.e., data sampled from the training data distribution $p_x$). Thus, when the model is attacked (i.e., distilled by using a refining dataset $D'$ drawn from $p_x$), the watermark knowledge is also transferred along with the model's core knowledge on classifying $D'$.
The main technique of ingrain is to explicitly represent the watermark $S$ using an additional model---ingrainer, and further ingrain the watermark information implicitly in the classifier's predictions on training data $D$.
Such watermark information is expected to also appear in the classifier's predictions on the refining dataset $D'$ which is used in the further distillation attack.
This is achieved by modifying its training process.
In particular, the ingrainer $G_\theta$, where $\theta$ are its parameters, implicitly casts the watermark information onto its predictions on $D$. When training the classifier, the ingrainer's predictions serve as an additional term in the loss function, so as to encourage the classifier to \textit{simultaneously} learn the ground-truth labels and the watermark information (in the ingrainer's predictions) on the \textit{same} training data $D$. Hence, the watermark information is correlated with the classifier's predictions on $D$ in an implicit way.
Note that this does not mean the model owner extracts the explicit watermark $S$ using in-distribution inputs (i.e., training data $D$).
It is actually the tricky part in the design of the ingrainer that its predictions on the training set $D$ carry the watermark information, while the owner extracts the watermark $S$ by querying the classifier with a watermark-carrier set $D_S$ which is drawn from a different distribution $p_s$.
\subsection{Ingrainer Model}
\label{sec:infuser}
The ingrainer model $G_\theta$ is used to represent the watermark information via its predictions on the training data $D$, such that it can regularize the training of the classifier to ingrain watermark information implicitly in the classifier's predictions on training data $D$.
The most straightforward way one might think of to fulfill such a mapping is to train the ingrainer model directly on a subsequence of $D$ labeled by a sequence $Y_S$, which jointly compose the watermark $S$.
However, it has several issues. Let $D_{sub}$ represent the chosen subsequence. On the one hand, if $D_{sub}$ is chosen in the way that
the ground-truth label of each data point $\vec{x}_i \in D_{sub}$ is identical to ${y_s}_i\in Y_S$, it is exactly the classification task and does not watermark the model.
On the other hand, if $\vec{x}_i$'s ground-truth label is not ${y_s}_i$ and it is embedded as a hidden function, it watermarks the model but can be easily removed in attacks (e.g., distillation) that re-train the model on a refining set.
We sample the watermark-carrier set $D_S$ from a different distribution $p_s$ to carry $Y_S$.
We train the ingrainer, using the same architecture as the classifier, to overfit on $D_S$ (see Algorithm~\ref{alg:train_infuser}). Although this does not establish an explicit mapping from the training data $D$ to the watermark $S$, it leads a part of the ingrainer's connections or parameters to memorize the watermark information. When a training sample passes through the ingrainer, it is expected to trigger some of these connections, leading the output (prediction) to encode some implicit watermark information. Using the same architecture as the classifier is expected to boost this implicit mapping. Thus, the entire training set $D$ is supposed to carry rich watermark information in the ingrainer's predictions.
Although our experimental results demonstrate the effectiveness of the ingrainer by training it in this way, we believe it is not the only way. We leave an investigation into various potentially effective ingrainers as future work.
\begin{algorithm}
\small
\caption{Training Ingrainer $G_\theta$}\label{alg:train_infuser}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Watermark-carrier set $D_S$, number of epochs $P$, learning rate $\eta $, size of mini-batch $q$.}
\Output{Model parameters $\theta$ of ingrainer $G_\theta$.}
$\theta \gets $ \textbf{initialize}($G_\theta$) \\
\For{\text{p = 1 to P}}{
\For{\text{each mini-batch $\{(\vec{x_s}_j, {y_s}_j)\}_{j=1}^q \subset D_S$}}{
$g \gets \nabla_\theta\frac{1}{q}\sum_{j=1}^q {\mathcal L} (G_\theta(\vec{x_s}_j), {y_s}_j)$ \\
$\theta \gets$ \textbf{updateParameters} $(\eta, \theta, g)$
}
}
\end{algorithm}
\subsection{Training Classifier}
The training of the classifier $F_w$ is regularized by the trained ingrainer model $G_\theta$ to force $F_w$ to learn two tasks together.
Specifically, we add an additional term $\mathcal{L}(F_{w,T}(\mathbf{x}), G_\theta(\mathbf{x}))$ to the training loss function $\mathcal{L}(F_w(\mathbf{x}), y)$, where $T$ determines the classifier's temperature in the $\mathsf{softmax}$ function.
This term is referred to as \textit{ingrain loss} in the paper.
In the distillation attack, the classifier $F_{w,T}$ becomes the teacher model whose $T$ is usually set larger than $1$ to produce soft predictions which also encode the watermark knowledge. The student model learns the $F_{w,T}$'s knowledge from the soft predictions.
To maximally preserve watermark information in such soft predictions, we set a larger $T$ for $F_{w,T}$ in the ingrain loss to ingrain $G_\theta$'s watermark information in $F_{w,T}$'s soft predictions.
This reduces the distance between the soft predictions $F_{w,T}(\mathbf{x})$ on each training sample $\vec{x}$ and the watermark information $G_\theta(\mathbf{x})$ carried by $\vec{x}$ when passing through $G_\theta$.
This leads to a correlation of the classifier's soft predictions with the watermark information, which improves the watermark's resistance against the distillation attack.
Note that the ingrain process is similar to distillation where $G_\theta$ is the teacher model and $F_{w,T}$ is the student model. Nonetheless, we set a higher $T$ for the student model as opposed to the teacher model.
Formally, the loss function we are optimizing when training the classifier is the following.
\begin{align}
\label{eq:loss}
\begin{split}
L_D(F_w) = \frac{1}{|D|} \sum_{(\mathbf{x},y)\in D} & \mathcal{L}(F_w(\mathbf{x}), y) \\
& + \lambda \mathcal{L}(F_{w,T}(\mathbf{x}), G_\theta(\mathbf{x}))
\end{split}
\end{align}
where the ingrain coefficient $\lambda$ determines the degree of ingrain.
Throughout the training process, the ingrainer is fixed, and we use SGD (see Section~\ref{sec:train}) to optimize the classifier's parameters $w$.
Each training sample that passes through the ingrainer produces one sample point in the watermark space. The classifier that is regularized with these samples tries to also solve for the watermark function through the optimization process. Thus, by tuning the ingrain coefficient $\lambda$, we can control the trade-off between the prediction accuracy of the classifier and the degree to which the watermark information is ingrained in the classifier's predictions on training data.
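A minimal per-batch sketch of this loss is given below; the paper leaves $\mathcal{L}$ generic, and instantiating it with cross-entropy for the classification term and a KL divergence for the ingrain term is one plausible choice rather than the only one.
\begin{verbatim}
import torch
import torch.nn.functional as F

def ingrain_batch_loss(classifier, ingrainer, x, y, lam, T):
    """Classification loss plus lambda times the ingrain loss,
    pulling the classifier's temperature-T soft predictions
    toward the frozen ingrainer's predictions on the same x."""
    logits = classifier(x)
    cls = F.cross_entropy(logits, y)
    with torch.no_grad():                      # theta is fixed
        target = F.softmax(ingrainer(x), dim=1)
    soft = F.log_softmax(logits / T, dim=1)    # F_{w,T}(x)
    return cls + lam * F.kl_div(soft, target, reduction='batchmean')
\end{verbatim}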
\underline{{\em Explicitly embed watermarks into classifier.}}
The ingrain loss imbues the classifier's predictions on training data with implicit watermark information, which can lead to better robustness against the distillation attack. Nonetheless, it does not explicitly embed the watermarks into the classifier. Querying $F_w$ with the watermark-carrier set $D_S$ might not recover $Y_S$ with high accuracy.
Therefore, we augment (poison) the training data $D$ with $D_S$ in the training of $F_w$ using loss function $\mathcal{L}(F_w(\mathbf{x}), y)$.
This is to further enforce the embedding into the unused parameter space of the model. In this case, when the ingrain coefficient $\lambda$ is set to 0, the whole process becomes equivalent to the existing P:CAP method that exploits the neural networks' large capacity~\cite{ccs_ML_remembers}.
The watermark information that is passed to the classifier through poisoning $D$ with $D_S$ is not expected to be resistant to the distillation attack, but it helps boost the accuracy of the watermark extraction for the case of simple attacks, e.g., parameter pruning and rounding.
A further interesting finding is that such augmentation imposes negligible influence on the ingrain loss as shown in Figure \ref{fig:loss}.
This result demonstrates the independence of the sub-models that are responsible for memorizing the watermarks and the part of the model which represents the classification task, as in the case of existing P:CAP method~\cite{ccs_ML_remembers}.
Such independence leads to the fragility of P:CAP against distillation attack.
In summary, the whole process of training the classifier is shown in Algorithm~\ref{alg:infusion}, where we shuffle $D$ and $D_S$ (line 2), and extract them in each mini-batch (lines 5--6) to apply separate losses on them (lines 7--9). The weighted average gradient is used to update the parameters $w$ (lines 10--11).
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{figures/mnist_ingrain_loss_without_wm.pdf}
\includegraphics[width=0.7\linewidth]{figures/mnist_ingrain_loss_with_wm.pdf}
\caption{
Ingrain loss on the training data $D$ in the training of the classifier. (Top) $D$ is not augmented. (Bottom) We use $D_S$ to augment $D$ in the classification loss.
The host network is an MLP model. $D$ is the MNIST dataset. The size of $D_S$ is $1,000$ images. The ingrain temperature $T$ is $10$. The ingrain coefficient $\lambda$ is $2$.}\label{fig:loss}
\end{figure}
\begin{algorithm}
\small
\caption{Training Classifier $F_w$}\label{alg:infusion}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Training dataset $D=\{(\vec{x}_j,y_j)\}_{j=1}^n$, watermark-carrier set $D_S=\{(\vec{x_s}_j,{y_s}_j)\}_{j=1}^m$, Ingrainer $G_\theta$, number of epochs $P$, learning rate $\eta $, ingrain coefficient $\lambda$, ingrain temperature $T$.}
\Output{Model parameters $w$ of classifier $F_w$.}
$w \gets $ \textbf{initialize}($F_w$) \\
$D_A \gets$ \textbf{shuffle}($D \cup D_S$) \\
\For{\text{p = 1 to P}}{
\For{\text{each mini-batch $B\subset D_A$}} {
$\{(\vec{x}_j, y_j)\}_{j=1}^a \gets$ \textbf{getTrainData}($B$) \\
$\{(\vec{x_s}_j, {y_s}_j)\}_{j=1}^b \gets$ \textbf{getWatermarkCarrier}($B$) \\
$g_D \gets \nabla_w\frac{1}{a}\sum_{j=1}^a {\mathcal L} (F_w(\vec{x}_j), y_j) $\\\hspace{2.3cm}$ + \lambda {\mathcal L} ( F_{w,T}(\vec{x}_j), G_\theta(\vec{x}_j)) $ \\
$g_{D_S} \gets \nabla_w\frac{1}{b}\sum_{j=1}^b {\mathcal L} (F_w(\vec{x_s}_j), {y_s}_j)$ \\
$g \gets (a g_D+b g_{D_S})/(a+b)$ \\
$w \gets$ \textbf{updateParameters} $(\eta, w, g)$ \\
}
}
\end{algorithm}
\section{Introduction}
In 1977, Shu presented seminal work in the theory of low-mass, isolated
star formation \citep[hereafter Shu77]{shu77}. He presented the idea that
stars could form from inside-out collapse. This model is still important
today because it is simple, yet it predicts so many observables in the
process of star formation. It prescribes the evolution of inflow, the
velocity structure of the envelope, and the particular shape of the
envelope's density distribution. \citet{motte01} found that the inside-out
collapse model fit millimeter observations of protostars in Taurus and
various Bok globules. However, these authors also found Class 0 sources in
Perseus, a less quiescent region of star formation, that had central
densities and accretion rates that were too high to be accounted for by the
Shu77 model. Molecular line observations have also been used to compare
the predicted densities and velocities with the actual conditions in
star-forming cores. \citet{hogerheijde00} and \citet{zhou93} presented
evidence that the envelopes of some protostars are undergoing inside-out
collapse. Others have presented evidence against inside-out collapse. For
example, \citet{tafalla98} suggested that L1544, a starless core, exhibits
infall motions across the entire core. Nonetheless, it is still not clear
whether the Shu77 inside-out collapse scenario or its variants can be ruled
out.
Therefore, we have calculated the observational signatures of a star forming
through inside-out collapse, so that astronomers can test this
theory in a well-defined way. The most relevant aspects of the Shu77
inside-out collapse model are a constant accretion rate of material from
the envelope onto the star+disk system and an envelope density that is
initially described by a singular, isothermal sphere (SIS).
The constant accretion rate in this model has given rise to the so-called
``luminosity problem'': the luminosities that result when material accretes
onto a central object with a small radius and increasing mass exceed those
seen for most young, low-mass stars \citep{kenyon95}. The presence of a
disk can help by increasing the accretion radius and acting as a reservoir
where matter is stored and then episodically accreted. However, a disk
does not completely eliminate the ``luminosity problem'' in these models.
A constant accretion rate in star-forming cores should be evident when
comparing populations in the different stages of star formation. The
transition from Class 0 to Class I object is thought to occur when the mass
of the star and disk is equal to the envelope mass. Therefore, we should
observe equal numbers of Class 0 and I objects if they form with constant
accretion rates. \citet{andre94} found about a 10:1 ratio for Class I and
0 sources in Ophiuchus, suggesting the Class 0 stage is very short.
However, \citet{visser02}, in an unbiased survey of dark clouds, found a
1:1 ratio. These authors suggested that Ophiuchus has experienced a burst
of star formation in the past, resulting in the 10:1 ratio. Future and
ongoing surveys of nearby star-forming regions will certainly offer more
information regarding the relative populations of Class 0 and Class I
sources \citep{evans03,benjamin03}.
The initial density configuration for the Shu77 model is that of a SIS,
which has $n(r)\propto r^{-2}$. One-dimensional modelling of the
submillimeter emission from starless cores has shown that their density
distributions are well-fitted by Bonnor-Ebert spheres, but power-law density
profiles are not conclusively ruled out \citep{wardthompson94,evans01}.
Indeed, some starless cores have a density structure that seems to be
approaching the SIS. For example, three-dimensional modelling of L1544
shows a power-law density distribution with $n(r)\propto r^{-2}$
\citep{doty05}. Other studies of the density distribution in more evolved
star-forming cores were unable to rule out the Shu77 predictions
\citep{young03}; in fact, one-third of the sources in \citet{young03} were
well-fitted by the inside-out collapse model. For this paper, we assume
that starless cores must evolve into an SIS. At this point, collapse
ensues, and we begin modeling the evolution of the protostar. This work
has been expanded by \citet{lee04} to model the chemical evolution of these
star-forming cores.
Other authors have predicted the observed signatures of a
forming star. \citet{myers98}, hereafter M98, developed a framework on
which to calculate various signatures of star formation. Our work has been
prompted by their efforts. However, the methods and models employed in
this work are quite different from those of M98; as a result, our
conclusions are different. We discuss these differences in this paper.
Because these models are 1-dimensional, we do not consider the role of
outflows, flattened envelopes, or asymmetric disks; therefore, we probably
underestimate the amount of short-wavelength radiation. \citet[hereafter
W03]{whitney03} have included some of these complications. However, W03 did
not create a consistent evolutionary model but, instead, considered
``typical'' protostellar objects of different observational classes.
Fortunately, these authors were able to explore the impact of 3-dimensional
effects, which we are unable to model. In this way, we consider this effort
to be complementary to the work of W03 and compare our results to theirs.
The advent of large surveys such as 2MASS and the Spitzer Space Telescope's
Legacy programs \citep{evans03,benjamin03} provides vast sets of data
through which theories of star formation may be vigorously tested. In
this work, we hope to provide some tangible means to study the validity of
the inside-out collapse model in the formation of stars.
\section{The Model}
First, we define the framework through which we have created this
evolutionary sequence. In this section, we discuss what has been assumed
for the dust opacity, interstellar radiation field, envelope structure and
dynamics, and the evolution of the star and disk components of the system.
\subsection{Interstellar Radiation Field \& Dust Properties}
\citet{evans01} showed that, for starless cores, the interstellar radiation
field (ISRF) significantly affects both the total observed luminosity and
the shape of the observed submillimeter intensity profile; even objects
with a luminous, internal source might attribute most of their luminosity
to the ISRF \citep{young04a}. \citet{evans01} scaled the ISRF by a
constant, but we have used the opacity of \citet{draine84} dust to
attenuate the ISRF with $A_V=0.5$ (see Figure 8 in \citet{young04b}). This
method simulates the effects of low density material in the environs of a
star-forming core.
These authors \citep{evans01,shirley02,young03} also found that the
multi-wavelength observations were best matched by using the opacities of
the dust modeled by \citet{ossenkopf94}. In particular, they concluded
that ``OH5'' dust, found in the fifth column of Table 1 in
\citet{ossenkopf94}, was optimal for star-forming cores. Unfortunately,
the modeled data for OH5 dust does not include wavelengths shortward of
1.25 $\mu$m. \citet{ossenkopf94} calculated only the values for the dust
opacity ($\kappa$) and not the scattering and absorption cross-sections
($\sigma_{abs}$ and $\sigma_{scat}$) as needed by DUSTY, the radiative
transfer program we have used \citep{ivezic99,ivezic97}. Therefore, we have
obtained data from \citet{pollack94}, which includes the scattering and
absorption cross-sections for wavelengths as short as 0.091 $\mu$m. In
Figure~\ref{kappa}, we show the opacities for OH5 dust and the opacity
calculated by \citet{pollack94} for dust grains with a radius of 0.1 $\mu$m
at a temperature of 10 K (hereafter, P1 dust); we have assumed a
gas-to-dust ratio of 100 and give the opacity of the gas in this figure.
At short wavelengths, these two types of opacities are in fairly good
agreement; unless $\tau$ is low, however, the short-wavelength opacity is
not relevant. We used the opacity given for OH5 and $\sigma_{scat}$ for
the P1 dust to calculate the absorption coefficient for the OH5 dust.
Further, we used the albedo values given by \citet[Figure 4b]{pendleton90}
to apportion the opacity due to scattering and absorption from the 3 $\mu$m
ice feature. Finally, we have extrapolated the cross-sections out to 3.6
cm, as required by DUSTY. For $\sigma_{scat}$, we extrapolate by a
$\lambda^{-4}$ power-law as expected for Rayleigh scattering. We fit the
last several data points of the OH5 absorption coefficients to determine
the $\lambda^{-1.8}$ power-law used to extrapolate $\sigma_{abs}$ out to
$\lambda=3.6$ cm. We show the scattering and absorption coefficients in
Figure~\ref{kappa}.
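The long-wavelength extrapolation amounts to anchoring a power law at the last tabulated point; a minimal sketch (the array names are ours) is:
\begin{verbatim}
import numpy as np

def extrapolate_power_law(lam, sigma, lam_new, index):
    """Extend a cross-section table to longer wavelengths with
    sigma ~ lambda**(-index), anchored at the last data point."""
    return sigma[-1] * (np.asarray(lam_new) / lam[-1]) ** (-index)

# sigma_scat: index = 4.0  (Rayleigh scattering)
# sigma_abs:  index = 1.8  (fit to the tail of the OH5 table)
\end{verbatim}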
\subsection{Envelope}\label{sxn-envelope}
For the density structure in the envelope, we adopt the inside-out collapse
scenario \citep{shu77}. This model begins with an SIS with a density
distribution that is proportional to $r^{-2}$. Through some perturbation,
collapse begins inside the cloud and proceeds outward. As collapse ensues,
the cloud's density distribution can be approximately described by a broken
power law: the inner collapsing portion, $n \propto r^{-3/2}$ (indicative
of freefall), and the outer static envelope, $n \propto r^{-2}$. However,
there is a transition region just within the infall radius where the
density profile is significantly flatter than the $r^{-3/2}$ power law.
Therefore, we use the actual solutions to Equations 11 and 12 in
\citet{shu77}.
When the infall radius exceeds the outer radius, the Shu77 solution is
no longer valid. Therefore, we adopt a density profile with $n
\propto r^{-3/2}$ and let the mass of the envelope (and, hence, the
fiducial density) decrease as mass is accreted onto the protostar and
disk.
The total amount of mass is constrained by the effective sound speed
($c_s$) and the envelope's outer radius ($r_o$). The models presented
herein all begin with cores whose initial masses are different. We
calculate this mass from the following expression:
\begin{equation}\label{eqn-menv}
M_{env}^{t=0}=\frac{2c_s^2 r_o}{G},
\end{equation}
where $G$ is the gravitational constant, and $c_s$ is the effective
sound speed,
\begin{equation}
c_s=\left(\frac{kT}{\mu m_H} +\frac{1}{2}v_{turb}^2\right )^{1/2},
\end{equation}
where $k$ is Boltzmann's constant, $T$ is the isothermal temperature, $\mu$
is the mean molecular mass ($\mu=2.29$), $m_H$ is the mass of the hydrogen
atom ($m_H=1.6733\times10^{-24}$ g), and $v_{turb}$ is the turbulent
velocity (1/e Doppler width). We choose $T=10$ K and the value for
$v_{turb}$ such that the turbulent contribution to the sound speed is equal
to the thermal component ($v_{turb}=0.268$ km s$^{-1}$); $c_s=0.268$ km
s$^{-1}$.
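As a check on these numbers, the minimal sketch below evaluates the effective sound speed and the initial envelope mass of Equation~\ref{eqn-menv} with generic cgs constants; small (few per cent) offsets from the quoted masses simply reflect the precise constants adopted.
\begin{verbatim}
import numpy as np

k_B, G   = 1.380658e-16, 6.67e-8     # cgs
m_H      = 1.6733e-24                # g, as adopted in the text
AU, Msun = 1.496e13, 1.989e33

T, mu, v_turb = 10.0, 2.29, 0.268e5  # K, --, cm/s

c_s = np.sqrt(k_B*T/(mu*m_H) + 0.5*v_turb**2)
print("c_s = %.3f km/s" % (c_s/1e5))            # -> 0.268 km/s

def M_env0(r_o_AU):
    """Initial mass of the truncated SIS (Equation eqn-menv), in Msun."""
    return 2.0*c_s**2*(r_o_AU*AU)/(G*Msun)

for r_o in (1767.0, 5889.0, 17667.0):
    print("r_o = %7.0f AU -> M_env = %.2f Msun" % (r_o, M_env0(r_o)))
# -> approximately 0.3, 1.0 and 3.0 Msun
\end{verbatim}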
We calculate the total envelope mass as follows,
\begin{equation}
M_{env}=\mu m_H \int\limits_{r_i}^{r_o} 4\pi r^2 n(r) dr,
\end{equation}
where $r_i$ and $r_o$ are the inner and outer radii of the envelope. In
this paper, radii pertaining to the envelope are denoted by the lower-case
``r'' ($r_i$, $r_{inf}$, and $r_o$) while the radii of the star and disk
are denoted by the upper-case ``$R$'' ($R_\ast$, $R_i$, $R_D$).
Even though the Shu77 model has no specific mass scale, we assume that the
model is applicable to the formation of stars with different masses. In
order to define the mass of the core, we truncate the outer radius of the
envelope, using Equation~\ref{eqn-menv}. Such truncated envelopes are not
unheard of; \citet{motte98} found evidence for truncated outer radii for
cores in Ophiuchus. In this paper, we consider cores with three initial
masses: 0.3, 1.0, and 3.0 M$_\odot$. With our assumed sound speed, these
masses correspond to outer radii of 1767, 5889, and 17667 AU. We end our
modeling when all envelope material has accreted onto the star-disk system.
In these three scenarios, this event occurs at 62500, 210000, and 625000
years, respectively. The time for collapse varies significantly among the
three models, but this variation is inherent within the inside-out collapse
model, assuming similar initial conditions. With constant and identical
accretion rates, lower-mass objects simply form more quickly than
higher-mass objects. We show the mass evolution for each of these cases in
Figure~\ref{mass}.
It is not clear whether this model is realistic for cores with masses as
low as 0.3 M$_\odot$. The Jeans mass for a core with $T=10$ K and density
of $10^6$ cm$^{-3}$ is $\sim$0.6 M$_\odot$ ($M_J=18$M$_\odot
T_K^{1.5}n^{-0.5}$). However, if the cloud is cooler, it could be unstable
to collapse (e.g., if $T=5$ K and $n=10^6$, $M_J=0.2$ M$_\odot$). In fact,
much smaller mass cores can be created through turbulent fragmentation
\citep{boyd05}, so the 0.3 M$_\odot$ core is probably not unreasonable.
Considering that some evolved protostars are thought to be substellar
\citep{white04}, we must consider how these objects are formed.
Defining the envelope's inner radius is an issue. One choice is that the
envelope's inner radius could be equal to the outer radius of the disk.
However, this disk radius, which is defined as the centrifugal radius
(Section~\ref{sxn-diskradius}), is very small at early times. With a small
inner radius, the density in the inner region is unrealistically large
(e.g., $n\sim10^{10}$ cm$^{-3}$). These dense regions cause the opacity to
become very high. Further, a rotating envelope becomes flattened and
aspherical at these small radii \citep{terebey84}, so a spherical model is
not appropriate to these regions. Therefore, we set a maximum value for
$\tau(100$ $\mu$m) and calculate the inner radius of the SIS (at $t=0$) so
that $\tau_\nu(100 \mu$m) is equal to $\tau_{max}$.
\begin{equation}\label{eqn-rienv}
r_i=\left[ \frac{2\pi G\,\tau_{max}}{\kappa_\nu c_s^2}+\frac{1}{r_o} \right] ^{-1}
\end{equation}
We choose $\tau_{max}=10$ and discuss the impact of varying this in
section~\ref{sxn_tau_max}. The envelope's inner radius follows this
formula until it is exceeded by the disk radius. For the three cores, the
envelope's inner radius is 100 AU at the end of collapse (see
Section~\ref{sxn-diskradius}).
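A short sketch of Equation~\ref{eqn-rienv} follows; the 100 $\mu$m opacity per gram of gas is a placeholder value inserted for illustration, not the tabulated OH5 number.
\begin{verbatim}
import numpy as np

G, AU = 6.67e-8, 1.496e13
c_s   = 2.68e4               # cm/s
r_o   = 5889.0*AU            # outer radius of the 1 Msun core
kappa_100 = 0.5              # cm^2 per g of gas at 100 um -- placeholder

def r_inner(tau_max):
    """Inner radius of the t=0 SIS for which tau(100um) = tau_max."""
    return 1.0/(2.0*np.pi*G*tau_max/(kappa_100*c_s**2) + 1.0/r_o)

for tau_max in (1.0, 5.0, 10.0, 15.0):
    print("tau_max = %4.1f -> r_i = %8.2f AU"
          % (tau_max, r_inner(tau_max)/AU))
\end{verbatim}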
We define the infall rate for the case of a non-magnetic, centrally peaked
envelope density distribution as described in the \citet{shu77} scenario.
The rate of constant accretion is calculated as follows:
\begin{equation}
\dot{M}=m_o \frac{c_s^3}{G},
\end{equation}
where $m_o$ is a dimensionless constant of order unity.
Since $c_s=0.268$ km s$^{-1}$, the accretion rate, $\dot{M}$, is
$4.8\times 10^{-6}$ M$_\odot$ yr$^{-1}$.
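The sketch below evaluates the Shu77 rate and the corresponding infall times; with $m_o = 0.975$ and generic constants the rate lands near $4.5\times10^{-6}$ M$_\odot$ yr$^{-1}$, a few per cent below the quoted value, the residual again reflecting the adopted constants.
\begin{verbatim}
G, Msun, yr = 6.67e-8, 1.989e33, 3.156e7
c_s, m_o    = 2.68e4, 0.975          # cm/s; Shu (1977) eigenvalue

Mdot = m_o*c_s**3/G                  # g/s
print("Mdot = %.2e Msun/yr" % (Mdot*yr/Msun))

# Time for the whole envelope to accrete, using the quoted rate:
Mdot_paper = 4.8e-6                  # Msun/yr
for M0 in (0.3, 1.0, 3.0):
    print("M0 = %.1f Msun -> t_infall = %.3g yr" % (M0, M0/Mdot_paper))
# -> 62500, ~2.1e5 and 6.25e5 yr, as quoted in the text
\end{verbatim}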
We adopt the prescription of \citet{adams86} for the infall
rate onto the disk and star. These authors assume that all mass is
accreted onto either the disk or star such that
$\dot{M}=\dot{M}_\ast+\dot{M}_D$ where $\dot{M}_\ast$ is the accretion rate
of envelope material directly onto the star and $\dot{M}_D$ is the accretion rate
onto the disk. These values are calculated as follows:
\begin{eqnarray}
\dot{M}_\ast&=&\dot{M}[1-(1-u_\ast)^{1/2}],\\
\dot{M}_D&=&\dot{M}(1-u_\ast)^{1/2},
\end{eqnarray}
where $u_\ast$ is the ratio of the star and disk radii, $R_\ast/R_D$. In
almost all cases, $u_\ast$ becomes very small in a short time and, hence,
$\dot{M}_\ast$ also approaches zero. However, material also accretes from
the disk onto the star; this process is not included in these equations.
\citet{adams86} defined an efficiency factor---$\eta_D$, the fraction of
material in the disk that will accrete onto the star---so that the star
could gain mass and be a source of accretion luminosity. We discuss the
implementation of this accretion process in the next section.
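The partition of the infall between star and disk is simple enough to transcribe directly; the sketch below is a literal implementation of the two equations above.
\begin{verbatim}
import numpy as np

def accretion_split(Mdot, u_star):
    """Adams et al. (1986) split of the infall rate; u_star = R*/R_D."""
    Mdot_star = Mdot*(1.0 - np.sqrt(1.0 - u_star))
    Mdot_disk = Mdot*np.sqrt(1.0 - u_star)
    return Mdot_star, Mdot_disk

Mdot = 4.8e-6                              # Msun/yr
for u in (1.0, 0.5, 0.1, 0.01):
    ms, md = accretion_split(Mdot, u)
    print("u* = %5.2f : Mdot* = %.2e, MdotD = %.2e" % (u, ms, md))
# As u* -> 0, essentially all infalling material lands on the disk.
\end{verbatim}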
\subsection{Disk}
Evidence in the form of near- and mid-infrared \citep{padgett99} and
millimeter \citep{mundy96,kitamura02} observations of disks surrounding
stellar and substellar objects has recently become more convincing. Also,
the inclusion of a disk in various models has a significant effect on the
interpretations of those models. Therefore, to not include a disk in this
evolutionary scheme is wholly unrealistic.
We adopt the disk model developed, in theory, by \citet{adams88}, and, in
practice, by \citet{butner94}. The density distribution for the dust
and gas (assuming homogeneous mixing) is defined as
\begin{equation}
\Sigma(R)=\Sigma_\circ\left( \frac{R}{R_f} \right) ^{-p},
\end{equation}
where $\Sigma_\circ$ is the surface density (in g cm$^{-2}$) at $R_f$, a
fiducial radius. We choose $p=1.5$ in accordance with the density
structure for vertical hydrostatic equilibrium \citep{chiang97}. The mass
of the disk, given this power-law distribution, is:
\begin{equation}
M_D=\int\limits_{R_i}^{R_D}2 \pi \Sigma R dR=
\frac{2 \pi \Sigma_\circ R_f^p}{2-p}(R_D^{2-p}-R_i^{2-p}), \qquad p<2,
\end{equation}
where $R_i$, $R_D$, and $R_f$ are the inner, outer, and fiducial
radii of the disk.
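Inverting this relation gives the normalization $\Sigma_\circ$ for a disk of known mass; the sketch below does so for illustrative (not fitted) radii and mass.
\begin{verbatim}
import numpy as np

AU, Msun = 1.496e13, 1.989e33

def sigma_0(M_D, R_i, R_D, R_f, p=1.5):
    """Surface density at R_f for a disk of mass M_D with Sigma ~ R^-p
    (requires p < 2); cgs units."""
    return M_D*(2.0 - p)/(2.0*np.pi*R_f**p*(R_D**(2.0-p) - R_i**(2.0-p)))

# Illustrative numbers only: a 0.05 Msun disk from 0.05 to 100 AU.
print("Sigma_0(1 AU) = %.0f g/cm^2" %
      sigma_0(0.05*Msun, 0.05*AU, 100.0*AU, 1.0*AU))
\end{verbatim}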
\subsubsection{Radius of the Disk}\label{sxn-diskradius}
The inner radius of the disk is the dust destruction radius defined as:
\begin{equation}\label{eqn-diskinner}
R_i=\sqrt\frac{L_\ast}{4 \pi \sigma T_{dust}^4 },
\end{equation}
where we define the dust destruction temperature, $T_{dust}=2000$ K,
and $L_\ast$ is the luminosity of the star.
For the disk outer radius, $R_D$, we adopt the centrifugal radius that
evolves with time as follows \citep{terebey84},
\begin{equation}\label{eqn-diskouter}
R_D(t)=\frac{m_\circ^3}{16}c_st^3\Omega_\circ^2,
\end{equation}
where $t$ is the time and $\Omega_\circ$ is the angular velocity of the
cloud prior to collapse; other variables are as already defined. We set
$\Omega_\circ$ so that, at the end of the Class I stage, the disk radius is
100 AU. These angular velocities are $1\times10^{-14}$,
$5.5\times10^{-14}$, and $3.4\times10^{-13}$ s$^{-1}$ for the 3, 1, and 0.3
M$_\odot$ models, respectively. \citet{goodman93} found a range for
$\Omega_\circ$ from $9.7\times10^{-15}$ to $1.3\times10^{-13}$ s$^{-1}$.
The upper end of this range is about one-third of $\Omega_\circ$ for the
0.3 M$_\odot$ model, but the least massive of the cores in
\citet{goodman93} was 0.6 M$_\odot$. We assume that less massive and
smaller cores have higher angular velocities.
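The two disk radii can be checked numerically; the sketch below reproduces the dust-destruction radius for an illustrative 1 L$_\odot$ source and recovers a centrifugal radius near 100 AU at the end of collapse of the 1 M$_\odot$ model.
\begin{verbatim}
import numpy as np

G, AU, yr      = 6.67e-8, 1.496e13, 3.156e7
sigma_SB, Lsun = 5.67e-5, 3.86e33

def R_inner(L_star, T_dust=2000.0):
    """Dust-destruction radius (Equation eqn-diskinner), cgs."""
    return np.sqrt(L_star/(4.0*np.pi*sigma_SB*T_dust**4))

def R_disk(t_yr, Omega_0, c_s=2.68e4, m_0=0.975):
    """Centrifugal radius of Terebey et al. (1984), Eq. eqn-diskouter."""
    return (m_0**3/16.0)*c_s*(t_yr*yr)**3*Omega_0**2

print("R_i = %.3f AU for L* = 1 Lsun" % (R_inner(Lsun)/AU))
# 1 Msun model: Omega_0 = 5.5e-14 s^-1, collapse ends at 2.1e5 yr
print("R_D = %.0f AU" % (R_disk(2.1e5, 5.5e-14)/AU))   # -> ~90-100 AU
\end{verbatim}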
\subsubsection{Mass of the Disk}\label{sxn-diskmass}
We evolve the mass of the disk via the expression given by \citet{adams86}:
\begin{equation}\label{eqn-mdisk}
M_D=(1-\eta_D) \int_{t_o}^t \dot{M_D}dt=(1-\eta_D)\mathcal{M_D} M,
\end{equation}
where $t_o$ is the time when $u_\ast=1$ (i.e., $R_D=R_\ast$). We assume
$t_o=0$. Further, \citet{adams86} defines $\mathcal{M_D}$ as follows:
\begin{equation}\label{script_m}
\mathcal{M_D}=\frac{1}{3}u_\ast^{1/3} \int_{u_\ast}^1 (1-u)^{1/2}u^{-4/3}\,du.
\end{equation}
We evaluate this expression numerically as suggested by \citet{adams86}.
Finally, $M$ is the total mass accreted (i.e. $M=\dot{M}t$).
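Under our reading of Equation~\ref{script_m} (with the integrand depending on $u$), $\mathcal{M_D}$ is a one-line quadrature; the sketch below shows that it tends to unity as $u_\ast \rightarrow 0$, i.e., essentially all infalling material is eventually processed through the disk.
\begin{verbatim}
from scipy.integrate import quad

def script_MD(u_star):
    """Disk mass fraction of Adams et al. (1986), Equation script_m."""
    integrand = lambda u: (1.0 - u)**0.5 * u**(-4.0/3.0)
    val, _ = quad(integrand, u_star, 1.0)
    return (u_star**(1.0/3.0)/3.0)*val

for u in (0.5, 0.1, 0.01, 1.0e-4):
    print("u* = %7.1e : script M_D = %.3f" % (u, script_MD(u)))
\end{verbatim}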
\citet{adams86} give $\eta_D$ as a free parameter; $\eta_D$ is the fraction
of material accreted onto the disk that will eventually accrete onto the
star. To determine a value for $\eta_D$, we assume that the ratio of disk
to star mass must be $\sim1/4$, which is in accord with theoretical work
\citep{li02}. In Figure~\ref{mratio}, we show this ratio for several
values of $\eta_D$. We choose $\eta_D=0.75$ so that $M_D/M_\ast\sim1/4$
for the 1 M$_\odot$ model. We apply the same criterion for the 0.3 and 3
M$_\odot$ models and set $\eta_D=0.7$ and $\eta_D=0.9$, respectively. In
section~\ref{sxn-parameters}, we explore the effects of allowing $\eta_D$ to
vary.
\subsubsection{Luminosity of the Disk}\label{sxn-diskluminosity}
The temperature distribution for the disk is defined by the following,
\begin{equation}\label{eqn-tdisk}
T(R)=T_\circ\left( \frac{R}{R_f} \right) ^{-q}~{\rm K},
\end{equation}
where $T_\circ$ is the temperature at the fiducial radius, $R_f$. We set
$q=0.5$, a temperature distribution that decreases more slowly with radius
than expected for a flat disk, to simulate flaring and disk accretion
\citep{butner94,kenyon87}.
The luminosity of the disk has several components as given by
\citet{adams86}. First, envelope material falling onto the disk will act
as a source of luminosity. Then, there is a source of ``mixing
luminosity'' that arises, basically, from the mixing of newly accreted
material with material already in orbit around the star. Finally,
\citet{adams86} assumed that some fraction ($\eta_D$) of the disk material
will frictionally dissipate its remaining orbital energy and fall onto the
star. \citet{adams86} give the expression for $L_{acc}^D$ in equation 33b
of their work; we use this equation for $L_{acc}^D$, which includes all
three of these components, in the evolutionary model. As the disk radius
grows larger with time and $u_\ast \rightarrow 0$, their expression for
$L_{acc}^D$ simplifies to \citep{adams87}
\begin{equation}\label{eqn-ldisk-approx}
L_{acc}^D\approx\frac{1}{2} \eta_D L_\circ,
\end{equation}
where $L_\circ \equiv \frac{G M_\ast \dot{M}}{R_\ast}$, with $\dot{M}$ the total infall rate.
\subsection{Star}
For the star, we define several parameters: mass ($M_\ast$), luminosity
($L_\ast$), radius ($R_\ast$), and effective temperature ($T_{eff}$). Each
of these quantities evolve over time, and they are generally
interdependent.
\subsubsection{Mass of the Star}
For the stellar mass, we subtract the disk mass from the total accreted
mass as in \citet{adams86}:
\begin{equation}\label{eqn-mstar}
M_\ast=M-M_D=[1-(1-\eta_D)\mathcal{M_D}]M.
\end{equation}
All variables are as previously defined. Included in this equation are two
means by which the star gains mass. Material accretes directly from the
envelope onto the star until the disk grows in size and accretes most of
the infalling envelope mass. Then, the star mostly gains material that has
accreted onto and is processed through the disk.
\subsubsection{Luminosity of the Star}\label{sxn-starlum}
The luminosity of the star has two components: that arising from
accretion ($L_{acc}^\ast$), the dominant source of luminosity at early
times, and the luminosity due to gravitational contraction and
deuterium burning ($L_{phot}$). Simply, $L_\ast=L_{acc}^\ast+L_{phot}$.
\citet{adams86} calculate the accretion luminosity from material accreting
onto the star. Their calculations include the luminosity from material
that falls onto the star and the energy released due to differential
rotation of the protostar. \citet{adams86} give an expression for
$L^\ast_{acc}$ in equation 33a of their work; we use this prescription for
L$_{acc}^\ast$. As the disk radius grows larger with time and $u_\ast
\rightarrow 0$, this simplifies to \citep{adams87}
\begin{equation}\label{eqn-lacc-approx}
L_{acc}^\ast\approx\frac{1}{2} \eta_D^2 \eta_\ast L_\circ.
\end{equation}
\citet{adams86} define $\eta_\ast$ as an ``efficiency factor'' that
dictates how the star dissipates the energy due to differential rotation.
\citet{adams87} considered this value to be a free parameter but chose
$\eta_\ast=0.5$ for their standard model, which we use as well. In
section~\ref{sxn-parameters}, we discuss the effects of varying
$\eta_\ast$.
$L_{phot}$ was calculated by \citet[hereafter DM]{dantona94}. We have made
power-law (log--log linear) fits to their pre-main-sequence tracks with opacities from
\citet{alexander89} (Tables 1 and 5 of DM). First, we have fit a power-law
in the luminosity-time plane for each stellar mass given by DM. For masses
less than 0.2 M$_\odot$, where a single power-law is not appropriate, we
have fit two-piece power-laws to DM's data. For times earlier than those
covered by DM's tracks, we have assumed a power-law expression:
$L_{phot}=L_\circ^{phot} \left(\frac{t}{t_\circ}\right)^5$, where $t_\circ$
is the earliest time in the calculations by DM, and $L_\circ^{phot}$ is the
luminosity of the pre-main sequence star at time $t_\circ$. This equation
is ad hoc and meant to smoothly bridge the transition from where there is
no data for L$_{phot}$ to the point where DM's evolutionary tracks begin.
Finally, to obtain the appropriate value for $L_{phot}$, we linearly
interpolate between masses for a given time in the star's evolution.
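A sketch of this piecewise prescription is given below; the coefficients ($t_\circ$, $L_\circ^{phot}$ and the post-$t_\circ$ index) are invented stand-ins for the actual fits to DM's tables and are labeled as such.
\begin{verbatim}
import numpy as np

# Stand-in values; the real numbers come from fits to DM's Tables 1 and 5.
t_0      = 1.0e5     # earliest time on the DM track [yr]   (illustrative)
L_0_phot = 2.0       # PMS luminosity at t_0 [Lsun]         (illustrative)
index    = -0.35     # fitted power-law slope after t_0     (illustrative)

def L_phot(t):
    """t^5 bridge before t_0, fitted power law afterwards."""
    t = np.asarray(t, dtype=float)
    return np.where(t < t_0,
                    L_0_phot*(t/t_0)**5,        # ad hoc bridge
                    L_0_phot*(t/t_0)**index)    # fit to the DM track

for t in (3.0e4, 1.0e5, 5.0e5):
    print("t = %.0e yr : L_phot = %.3f Lsun" % (t, float(L_phot(t))))
\end{verbatim}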
As noted in M98, the beginning of infall and accretion luminosity is not
the same time as that for the onset of the luminosity represented by
$L_{phot}$. We adopt, as M98 did, a difference in these two timescales of
$10^5$ years such that, for $L_{phot}(t)$, we take $t=t_{phot}+10^5$ yr for
particular values of $t_{phot}$ as given by DM; this assumption is based on
the theoretical work of \citet{stahler83}. After collapse begins, the
forming star must wait $10^5$ years before the luminosity due to
contraction and deuterium burning (as described by DM) will begin.
Further, the luminosity from DM's models does not become significant until
$\sim7\times10^4$ years. Therefore, $L_{phot}$ is not relevant until $t\sim
1.7\times10^5$ years, which is greater than the time required for the 0.3
M$_\odot$ core to collapse and only $4\times10^4$ years less than the
collapse time for the 1 M$_\odot$ core.
With this final term, the total luminosity of the protostellar system is
now made of three components: $L_{acc}^\ast$, the luminosity due to
accretion and differential rotation of the protostar; $L_{phot}$, the
luminosity arising from gravitational contraction and deuterium burning in
the protostar, and $L_{acc}^D$, the luminosity from accretion onto the disk
and dissipation of the orbital energy within the disk.
In addition, there is the luminosity that results from the ISRF,
$L_{ISRF}$. We calculated $L_{ISRF}$ by illuminating a core that has no
internal source. However, the core does have an evolving density
distribution identical to the three mass scenarios presented herein. Then,
we calculate the luminosity that results from the dust grains, which are
heated externally. At early times, the external radiation field
contributes most of the luminosity. As the envelope mass decreases and the
internal source luminosity grows, $L_{ISRF}$ becomes insignificant. We
plot the evolution of $L_{ISRF}$ in Figure~\ref{lisrf}.
In conclusion, the total luminosity is given as
\begin{equation}
L_{tot}=L_{acc}^\ast+L_{phot}+L_{acc}^D+L_{ISRF}.
\end{equation}
For $T_{eff}$, we use the Stefan-Boltzmann Law,
\begin{equation}\label{eqn-teff}
T_{eff}=\left(\frac{L_\ast}{4 \pi \sigma R_\ast^2}\right)^{1/4},
\end{equation}
where $L_\ast=L_{acc}^\ast+L_{phot}$. At early times, the effective
temperature is very low, $\sim100$ K, because the radius of the first
hydrostatic core (Section~\ref{sxn-fhc}) is $\sim$5 AU. When the stellar
radius approaches 2-5 R$_\odot$, $T_{eff}$ becomes more stellar-like ($\sim
3000$ K).
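For an illustrative 1 L$_\odot$ source, Equation~\ref{eqn-teff} gives the temperatures of the two regimes directly:
\begin{verbatim}
import numpy as np

sigma_SB, AU, Rsun, Lsun = 5.67e-5, 1.496e13, 6.96e10, 3.86e33

def T_eff(L_star, R_star):
    """Equation eqn-teff, cgs units."""
    return (L_star/(4.0*np.pi*sigma_SB*R_star**2))**0.25

print("FHC (5 AU)    : T_eff = %5.0f K" % T_eff(Lsun, 5.0*AU))    # ~180 K
print("star (3 Rsun) : T_eff = %5.0f K" % T_eff(Lsun, 3.0*Rsun))  # ~3300 K
\end{verbatim}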
\subsubsection{Radius of the Star - Simulating the FHC}\label{sxn-fhc}
We allow the radius of the star to evolve as suggested by \citet[see Figure
1 in their paper]{palla91}. In their calculations, they find that the
radius of the star rises to about 2-5 solar radii; this result is in accord
with the assumption of a constant radius at 3 R$_\odot$ by M98. However,
the time at which to apply these calculations is not so clear.
In the evolution of a protostar, the early years are occupied by the first
hydrostatic core. While not yet clearly observed, the first hydrostatic
core has been predicted \citep{boss95,masunaga98}. \citet{boss95}
concluded, based on some simple arguments, that the lifetime of this stage
should be short, only about 20,000 years. Further, \citet{masunaga98} have
determined that the average radius of this core should be about 5 AU. The
transition between this very large core and the smaller core described by
the calculations of \citet{palla91} is not well understood. We have
assumed the stellar radius evolves as shown in Figure~\ref{rstar}. In the
beginning, the radius of the first hydrostatic core is 5 AU. At t=20,000
years, we allow the radius to decrease from 5 AU to the radius calculated
by \citet{palla91}. This transition lasts 100 years and is described, in
our model, by:
\begin{eqnarray}
R_\ast (AU)=5\left [1-\left (\frac{t-20000}{100}\right )^{0.5}\right
]+R_\ast^{PS}, \\
20,000<t<20,100 \nonumber
\end{eqnarray}
where $t$ is the time in years and $R_\ast^{PS}$ is the value for the
radius calculated by \citet{palla91}. This equation is somewhat ad hoc and
simply used to simulate the transition between the large radius as predicted
by \citet{masunaga98} and the much smaller radius of the actual star as
predicted by \citet{palla91}.
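A literal transcription of this prescription follows, with a constant $R_\ast^{PS} \approx 3$ R$_\odot$ standing in for the time-dependent Palla \& Stahler value.
\begin{verbatim}
def R_star_AU(t, R_PS_AU=0.014):
    """Stellar radius [AU] versus time [yr]; R_PS_AU ~ 3 Rsun here is a
    constant stand-in for the Palla & Stahler (1991) radius."""
    if t <= 2.0e4:
        return 5.0                                       # FHC
    if t < 2.01e4:                                       # 100 yr transition
        return 5.0*(1.0 - ((t - 2.0e4)/100.0)**0.5) + R_PS_AU
    return R_PS_AU

for t in (1.0e4, 2.0e4, 2.005e4, 2.01e4, 1.0e5):
    print("t = %8.0f yr : R* = %.3f AU" % (t, R_star_AU(t)))
\end{verbatim}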
There are consequences for including this large radius at early times.
Because the centrifugal radius is very small and, hence, the disk has not
formed, the luminosity at these early times is derived wholly from
spherical accretion onto the central source. If this central source is
small, the accretion luminosity can be very high. In Figure~\ref{fhc}, we
show the evolution of the accretion luminosity for two scenarios. In one
case, we have included the FHC. We also plot $L_{acc}^\ast$ when there is
no FHC. In this case, the stellar radius evolves via the data of
\citet{palla91} (i.e., between 2 and 5 R$_\odot$). Without a FHC, the
accretion luminosity rises quickly because there is no disk and material is
accreting directly onto the star. At about $2\times10^4$ years, the
centrifugal radius has increased so that a disk can form. Then, the
accreting material is processed by the disk causing $L_{acc}^\ast$, which
arises from accretion onto the star, to decrease.
In summary, we let the mass of the star and the accretion luminosity evolve
as defined by \citet{adams86}, the radius of the star change as predicted by
\citet{palla91} (except at early times), and the luminosity due to
deuterium burning and gravitational contraction of the PMS star evolve as
calculated by DM. The effective temperature, given these other factors, is
defined by the Stefan-Boltzmann Law.
\section{Signatures}\label{sxn-signatures}
In this section, we discuss the various observational signatures in
the evolution of protostellar systems.
We calculate the bolometric temperature by the prescription given in
\citet{myers93},
\begin{equation}
T_{bol}\equiv[\zeta(4)/4\zeta(5)]h\bar{\nu}/k=1.25\times10^{-11}\,(\bar{\nu}/{\rm Hz})~{\rm K},
\end{equation}
where $\zeta(m)$ is the Riemann zeta function of argument $m$, $h$ is
Planck's constant, $k$ is Boltzmann's constant, and the mean
frequency, $\bar{\nu}$, is the ratio of the first and zeroth frequency
moments:
\begin{equation}
\bar{\nu}\equiv I_1/I_0, \qquad I_m=\int\limits_{0}^{\infty} \nu^m S_\nu d\nu.
\end{equation}
In addition, we calculate the bolometric luminosity:
\begin{equation}\label{eqn-lbol}
L_{bol}=\int\limits_{0}^{\infty} 4 \pi D^2 S_\nu d\nu,
\end{equation}
where $S_\nu$ is the flux density, and $D$ is the distance. We set $D=140$
pc, suitable for nearby star-forming regions. We also calculate
$L_{bol}/L_{smm}$ for our models \citep{andre93}. $L_{smm}$ is found by
integrating Equation~\ref{eqn-lbol} from 350 $\mu$m to $\infty$.
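Both moments, and the submillimeter fraction, are straightforward to evaluate from a sampled SED. The minimal sketch below verifies the prescription on a 30 K blackbody, for which $T_{bol}$ equals the blackbody temperature by construction.
\begin{verbatim}
import numpy as np

c_light, pc = 2.998e10, 3.086e18
h, k_B      = 6.626e-27, 1.381e-16

def signatures(nu, S_nu, D=140.0*pc):
    """T_bol, L_bol and L_bol/L_smm from a sampled SED; nu in Hz
    (ascending), S_nu in erg s^-1 cm^-2 Hz^-1."""
    I0, I1 = np.trapz(S_nu, nu), np.trapz(nu*S_nu, nu)
    T_bol  = 1.25e-11*(I1/I0)                 # Myers & Ladd (1993)
    L_bol  = 4.0*np.pi*D**2*I0
    smm    = nu <= c_light/0.035              # lambda >= 350 um
    L_smm  = 4.0*np.pi*D**2*np.trapz(S_nu[smm], nu[smm])
    return T_bol, L_bol, L_bol/L_smm

nu   = np.logspace(9, 14, 4000)               # 30 cm to 3 um
B_nu = 2.0*h*nu**3/c_light**2/(np.exp(h*nu/(k_B*30.0)) - 1.0)
T_bol, L_bol, ratio = signatures(nu, B_nu)
print("T_bol = %.1f K, L_bol/L_smm = %.1f" % (T_bol, ratio))
\end{verbatim}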
We also calculate the fluxes that would be seen by the photometric bands on
the Spitzer Space Telescope by convolving the modeled spectral energy
distribution (SED) with the bandpasses for the MIPS instruments. These
fluxes do not vary substantially from the monochromatic fluxes for the
central wavelength of each bandpass. To convert the fluxes to magnitudes,
we use these zero-point fluxes for MIPS bands 1-3 (24, 70, and 160 $\mu$m),
respectively: 7.2, 0.8, and 0.17 Jy \citep{young05}. We do not convolve
the MIPS resolution element with the model but assume that all emission is
included within this beam.
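With these zero points, conversion to magnitudes is a one-liner:
\begin{verbatim}
import numpy as np

F_zero = {24: 7.2, 70: 0.8, 160: 0.17}    # Jy, as quoted above

def mips_mag(band_um, flux_Jy):
    return -2.5*np.log10(flux_Jy/F_zero[band_um])

for band in (24, 70, 160):
    print("[%3d] = %5.2f mag for a 0.1 Jy source"
          % (band, mips_mag(band, 0.1)))
\end{verbatim}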
\section{Radiative Transfer}
We use the radiative transfer code, DUSTY, as developed by \citet{ivezic99}
to calculate the temperature distribution in the envelope and the emergent
SED of the star, disk and envelope. In this section, we discuss the effect
and treatment of scattering by dust grains in these calculations.
DUSTY assumes that scattering from dust grains is isotropic. Longward of 10
$\mu$m, the SED is barely affected because
the scattering cross-section ($\sigma_{scat}$) is significantly less than
the absorption cross-section ($\sigma_{abs}$, see Figure~\ref{kappa}).
Further, the effect is also minimal at wavelengths shortward of 10 $\mu$m
when the interstellar radiation field is not included.
Unfortunately, the assumption of isotropic scattering causes some problems
when the interstellar radiation field is included. In
Figure~\ref{scat_isrf}, we show the SED for a core with a mass of 1
M$_\odot$ (with $\tau_{100\mu m}=1$). In these models, the only heating of
the dust grains is externally from the ISRF. For the solid line, we
include the effects of isotropic scattering by the dust grains while the
dashed line shows the SED without scattering. Both SEDs have a peak at
submillimeter wavelengths as is expected. However, the SED with scattering
included also has a peak in the near-infrared. Of course, we do not
observe strong near-infrared radiation from starless cores. At short
wavelengths, these dust grains preferentially forward scatter light, so
neglecting the anisotropic nature of the scattering causes this unrealistic
flux in the near-infrared.
Our options are either a) neglect the effects of scattering in calculating
the emergent SED or b) ignore the ISRF. The latter is not really feasible
because, at early times, the ISRF provides the sole source of heating and,
hence, ignoring it radically affects the temperature profile for the core.
Therefore, we opt for the first alternative and ignore the effects of
scattering.
\section{An Evolving Protostar}
With these methods and assumptions, we have calculated the SED for an
evolving protostar. In Figure~\ref{sed}, we show the SED for particular
times in the evolution of the core that began with a pre-collapse mass of 1
$M_\odot$. We have also set $\tau_{max}=10$, $\eta_D=0.75$, and
$\eta_\ast=0.5$ (see sections~\ref{sxn-envelope},~\ref{sxn-diskmass},
and~\ref{sxn-starlum}). The solid line is the emergent SED as observed at
140 pc, the distance to Taurus. The dashed line represents the star+disk
SED; this spectrum is for the central source and is the input for DUSTY.
The bars represent the sensitivity for the Spitzer Space Telescope c2d
Legacy program \citep{evans03}; we have increased the 70 $\mu$m sensitivity
by a factor of three based on in-flight performance. The asterisks in the
second frame are IRAS sensitivities.
We calculate the observational signatures, described in
Section~\ref{sxn-signatures}, for models with different initial conditions
and whose evolution proceeds in different ways. We use different timesteps
for the models: $\Delta t=1000$, 2000, and 6000 years for the 0.3, 1, and 3
M$_\odot$ models, respectively. These timesteps are each about 1\% of the
total infall time.
\section{Free Parameters}\label{sxn-parameters}
In this section, we explore the effects of various parameters in these
models. We show how the model changes when we use different values for
$\eta_D$, $\eta_\ast$, and $\tau_{max}$. We find that neither $\eta_D$ nor
$\eta_\ast$ has a large effect on the observational signatures of the
SEDs, but the choice for $\tau_{max}$ can significantly affect the
short-wavelength emission during all stages of evolution. We adopt these
values: $\eta_D=0.75$, $\eta_\ast=0.5$, and $\tau_{max}=10$.
\subsection{$\eta_D$}
This factor determines what fraction of the disk material will dissipate
its energy and accrete onto the star. It is relevant in the calculation of
the disk mass ($M_D$, equation~\ref{eqn-mdisk}), the disk accretion
luminosity ($L_{acc}^D$, equation~\ref{eqn-ldisk-approx}), the mass of the star
($M_\ast$, equation~\ref{eqn-mstar}), and the stellar accretion
luminosity ($L_{acc}^\ast$, equation~\ref{eqn-lacc-approx}).
We show the effects of changing $\eta_D$ for the 1 M$_\odot$ core in
Figure~\ref{lum_etad}. Varying $\eta_D$ alters the evolution of all
components of the luminosity. Of course, this parameter is included
directly in the equations for the disk and stellar accretion luminosities.
Because $\eta_D$ determines the stellar mass, it also affects the
luminosity due to deuterium burning and contraction ($L_{phot}$). With
$\eta_D=0.75$, each of these is higher than in the other two scenarios
($\eta_D=0.25$ \& 0.5).
Finally, in the left panels of Figure~\ref{tbol_lsmm_eta}, we show how
changing $\eta_D$ affects observational signatures. We plot $T_{bol}$ and
$L_{bol}/L_{smm}$ versus time with $\eta_D=0.25$, 0.5, and 0.75.
Variations in $\eta_D$ have almost no effect on the evolution of $T_{bol}$,
and $L_{bol}/L_{smm}$ only varies slightly for the different values of
$\eta_D$.
\subsection{$\eta_\ast$}
This factor is a measure of how much of the total luminosity arises from
the central star. In practice, it is only relevant for the accretion
luminosity, $L_{acc}^\ast$, as in equation~\ref{eqn-lacc-approx}. In
Figure~\ref{etastar}, we plot the evolution of $L_{acc}^\ast$ for different
values of $\eta_\ast$. The value that we choose for this ``efficiency
factor'' does considerably change the accretion luminosity. However, as
shown by Figure~\ref{tbol_lsmm_eta}, the value for $\eta_\ast$ has very
little effect on the observational signatures. For the standard model in
this paper, we adopt $\eta_\ast=0.5$.
\subsection{$\tau_{max}$}\label{sxn_tau_max}
As discussed in section~\ref{sxn-envelope}, the inner radius of the
envelope is set so that $\tau(100\mu$m) does not exceed a certain value.
Previously, some have proposed that the condition of isothermality is violated
if $\tau(100\mu$m)$>1$ \citep{larson69}. However, \citet{masunaga99}
showed that $\tau(100\mu$m) could be significantly greater than 1 while
the core still maintains isothermality.
In Figure~\ref{tau}, we plot the evolution of $\tau_\nu$(100 $\mu$m) with
$\tau_{max}=1$, 5, 10, and 15. The density profile begins with a
distribution of $n(r)\propto r^{-2}$, but, as collapse begins, the inner
region flattens to a considerably shallower profile. Because the inner
density profile changes, $\tau$ drops sharply. In Figure~\ref{comp}, we show
the SED of a 1 M$_\odot$ core at $t=4\times10^4$ and $10^5$ years with
$\tau_{max}=1$, 10, and 15. The model with $\tau_{max}=1$ shows more
emission at shorter wavelengths than those with higher $\tau_{max}$. The
SED with $\tau_{max}=10$ is almost identical, at all times, to that with
$\tau_{max}=15$.
In Figure~\ref{tau}, there is a discontinuity at $\sim1\times10^5$ years.
This is the point where the infall radius exceeds the outer radius of the
infalling envelope. At this point, we changed from the \citet{shu77}
solution to an envelope whose density is described by $n(r)\propto
r^{-3/2}$. This is not a perfect transition, however, so the ``kink'' at
$\sim1\times10^5$ years is an artifact of the model.
We explore the effects of changing $\tau_{max}$ on the observational
signatures of our evolving protostar. In Figure~\ref{tbol_lsmm_tau}, we
show the evolution of $T_{bol}$ and $L_{bol}/L_{smm}$ for the different
values of $\tau_{max}$. The bolometric temperature is most affected by
varying $\tau_{max}$; higher values for $\tau_{max}$ slow the transition
from a Class I to a Class II protostar (as defined by $T_{bol}$). Simply,
less short-wavelength radiation can escape the cloud when the opacity is
higher. However, for those models with $\tau_{max}\geq 5$, the evolution
of $T_{bol}$ begins to converge. Finally, not surprisingly,
$L_{bol}/L_{smm}$ is unaffected by changing $\tau_{max}$. We do not
significantly alter the mass of the envelope, which dictates $L_{smm}$, nor
the accretion processes, which are responsible for $L_{bol}$.
The actual value for $\tau_{max}$ is highly uncertain. We do not fully
understand the density structure in this transition area between the
envelope and disk nor do we include the geometrical effects of a flattened
envelope in this region. We assume that $\tau_{max}=10$; higher values of
$\tau_{max}$ have little effect on observed quantities (i.e., the SED and
its derived signatures). The peak of a 10 K blackbody is 350 $\mu$m.
Assuming $\kappa\propto\lambda^{-1.5}$, $\tau_{350\mu m}=1.5$ when
$\tau_{100\mu m}=10$.
\section{Results}
With these free parameters set to the given values, we have run models with
different initial masses: 0.3, 1.0, and 3.0 M$_\odot$. Then, we have
calculated the various observational signatures.
\subsection{$T_{bol}$ and $L_{bol}/L_{smm}$}
The bolometric temperature and the ratio of bolometric to submillimeter
luminosity have emerged, in the past decade, as the two primary methods of
classifying protostars. Because $T_{bol}$ is a measure of the
flux-weighted mean frequency of the protostar's SED, it is highly affected
by the emergence of any short-wavelength radiation. The ratio of the
bolometric to submillimeter luminosity, on the other hand, is virtually
unaffected by the short-wavelength emission. Therefore, $L_{bol}/L_{smm}$
is less susceptible to the effects of geometry, which can cause more or
less NIR radiation to be observed. This ratio is a rough measure of the
ratio of protostellar mass (including the disk and star) to the envelope
mass.
In Figure~\ref{tbol_lsmm_mass}, we plot the two signatures as they change
for the evolving protostar. For the 0.3 M$_\odot$ core, each of the
evolutionary indicators increase drastically at about $2.0\times10^4$
years, when the FHC contracts. All of the envelope material has accreted
onto the star and disk by $6.3\times10^4$ years, shortly after the central
star has contracted from the FHC stage. Therefore, as the central star
becomes hotter and more luminous, there is little material left in the
envelope. Then, $T_{bol}$ increases because more short-wavelength
radiation is observed, and $L_{bol}/L_{smm}$ increases because
$L_{smm}\rightarrow 0$ as the envelope goes away. If this model is
correct, these low-mass stars should proceed through the FHC stage and,
almost immediately, be seen as Class II objects.
The 1.0 and 3.0 M$_\odot$ objects track one another fairly well, up to
$10^5$ years, in the $T_{bol}$ plot (Figure~\ref{tbol_lsmm_mass}) despite
the fact that they form on different timescales. The 1.0 $M_\odot$ core
requires $2.1\times10^5$ years for all envelope mass to accrete while the
3.0 $M_\odot$ core requires $6.25\times10^5$ years. However, both cores
evolve from Class 0 to Class I at about 50,000 years, which is about $1/4$
and $1/10$ of the total infall time for the 1.0 and 3.0 M$_\odot$ cores,
respectively.
This transition from Class 0 to I is partly due to the sudden ``turning
on'' of the central source as it contracts to $\sim3$ R$_\odot$, and
accretion luminosity becomes relevant. There is, however, another reason
for this sudden transition. As shown in Figure~\ref{tau}, the Shu77 model
exhibits a drastic decrease in $\tau$ regardless of the adopted value for
$\tau_{max}$. Initially, the envelope is described by $n(r)\propto
r^{-2}$, but, as collapse ensues, it changes to $r^{-3/2}$ and $\tau$
drops. Because $\tau$ is so low, any substantial source of stellar
luminosity will cause observable short-wavelength radiation to emerge from
the system.
These details are actually quite important if one uses these evolutionary
signatures to derive relative lifetimes of the various classes as has been
done in the past. If one calculates the bolometric temperature for a group
of protostars (whose SEDs have been well sampled), there should be very few
Class 0 cores---conservatively, about 1/10 to 1/4 of the Class I
population, but most likely a much smaller fraction. \citet{visser02}
presented their efforts to do such a study; they found approximately equal
numbers of Class 0 and I objects. However, their data only included the
far-infrared (IRAS) and submillimeter fluxes. Analysis of the mid- and
near-infrared data from the Spitzer Legacy and 2MASS surveys along with
far-infrared and millimeter observations will almost certainly produce
different results. With more complete sampling of the protostars' SEDs,
very few Class 0 cores, by the $T_{bol}$ criterion, should remain if this
picture of evolution is correct.
Finally, in the plot of $T_{bol}$, the 0.3, 1.0, and 3.0 M$_\odot$ data
exhibit a ``kink'' at about $3\times10^4$, $10^5$, and $3\times10^5$ years,
respectively. This is the point where the envelope's density distribution
is described by a power-law, $n(r)\propto r^{-3/2}$, instead of the Shu77
solution. The power-law distribution has a slightly higher $\tau$ that
causes less short-wavelength radiation to be observed. As a result,
$T_{bol}$ decreases slightly at this point of transition, but this is an
artifact of the model.
The ratio of bolometric to submillimeter luminosity seems to be much more
consistent in describing the evolution of these protostars. Adopting
$L_{bol}/L_{smm}=200$ as the dividing line for Class 0 and I cores, we find
that the 1 and 3 M$_\odot$ protostars become Class I objects after
$1.18\times10^5$ and $3.6\times10^5$ years, respectively. These times
correspond to slightly more than 1/2 of the total infall time for each core
whereas the $T_{bol}$ criterion showed that the cores became Class I after
1/4 and 1/10 of their total infall times.
Finally, in Figure~\ref{tbol_lsmm}, we show a plot of $T_{bol}$ and
$L_{bol}/L_{smm}$ for the three models. The points represent data from
\citet{young03}, \citet{shirley00}, and \citet{froebrich05}. In general,
these models are consistent with the data. However, the 11 starless cores
in the lower left-hand section of this plot show higher $L_{bol}/L_{smm}$
than the model predicts. The definition of $L_{bol}/L_{smm}$ includes data
longward of 350 $\mu$m, but, for all of these cores, no 350 $\mu$m data
exist. Therefore, the observed $L_{smm}$ is lower than that which is
modeled. Second, there is little near- or mid-infrared data available for
many of the sources represented. With future observations, the bolometric
temperature will almost certainly increase. For example, we consider the 1
M$_\odot$ model and calculate $T_{bol}$ by including different fluxes. If
we use IRAS fluxes only, $T_{bol}=83$ K for the core at 5$\times10^4$ years
while $T_{bol}$, calculated with just the Spitzer bands, is 92 K. At
$t=10^5$ years, $T_{bol}$ is 88 and 151 K as measured with the IRAS and
Spitzer fluxes, respectively.
\subsection{Mass Ratio}\label{sxn-massratio}
We can look at classification from a different, more physical, perspective.
In Figure~\ref{mratio_env}, we plot the ratio of the stellar and disk mass
to the envelope mass. Physically, we might consider a protostar to move
from Class 0 to Class I when this ratio is 1 and there are equal amounts of
mass in the protostellar system and the envelope surrounding this
star+disk. This event occurs at $t=3.1\times10^4$, 1.05$\times10^5$, and
3.1$\times10^5$ years for the 0.3, 1.0, and 3.0 M$_\odot$ cores. Of
course, as defined by Shu77, this is also the time when the infall radius
is equal to the outer radius.
In Figure~\ref{tbol_lsmm_mratio}, we plot $T_{bol}$ and $L_{bol}/L_{smm}$
as a function of $(M_\ast+M_D)/M_{env}$. In the $T_{bol}$ plot, the 1 and
3 M$_\odot$ cores change from Class 0 to Class I while
$(M_\ast+M_D)/M_{env} < 0.5$. With the presently defined boundaries, the
bolometric temperature does not appropriately classify the stages of star
formation. Only the 0.3 M$_\odot$ core changes from Class 0 to I when
$(M_\ast+M_D)/M_{env}\sim1$ as is appropriate for our understanding of
these stages.
However, with the $L_{bol}/L_{smm}$ criterion, we find that the cross-over
from Class 0 to I occurs approximately when $(M_\ast+M_D)/M_{env} = 1$,
which is a more realistic view of these evolutionary stages. Indeed, this
observational signature is also favored because it is not largely
dependent on what is observed at short wavelengths where geometric effects
play a big role \citep{andre93}.
Therefore, we set some physical divisions for the evolutionary transitions.
First, we let the PPC/Class 0 transition occur when the FHC first collapses
at $\sim2.0\times10^4$ years. For all three cores, this occurs
when $L_{bol}/L_{smm} \sim 35$. For the Class 0/I transition, we let
$(M_\ast + M_D)/M_{env}=1$; $L_{bol}/L_{smm} \sim 175$ when this criterion
is met. Notice that this value is slightly less than the requirement given
by \citet{andre93}, i.e. $L_{bol}/L_{smm}=200$. The Class II stage begins
when all of the envelope material has been accreted. Our models are not
reliable at these late times, and the $L_{bol}/L_{smm}$ signature is not a
very good indicator for this stage of evolution. Nonetheless, we find that
$L_{bol}/L_{smm}$ is approximately 2000 at this point, but this value is
dependent on the adopted model for the disk.
\subsection{BLT Diagrams: A Comparison with M98}
In Figure~\ref{blt}, we show a plot of the bolometric luminosity and
temperature, which M98 called a BLT diagram. The axes are laid out to
mimic the Hertzsprung-Russell diagram with $T_{bol}$ increasing right to
left. In Figure~\ref{blt}, we have included M98's models from Figure 7 in
their paper. Two of the thin lines in Figure~\ref{blt} are their models
for forming, 0.5 M$_\odot$ protostars whose initial envelope masses were 1
and 3 M$_\odot$. We also show their 0.3 M$_\odot$ model from Figure 9, but
we label it here as 1.8 M$_\odot$ because this is the mass of the envelope
before collapse begins while 0.3 M$_\odot$ is the mass of the star at
t=$\infty$. We also plot data from \citet{young03}, \citet{shirley02}, and
\citet{shirley04} as crosses; the dots represent data from \citet{chen95}
and \citet{chen97}. Our tracks are markedly different from any of those
presented in M98, so a summary of the differences between the two models is
relevant here. Primarily, our methods differ in that M98 attempts to
create a reasonable model that fits the data while we are simply
determining the observational signatures of the Shu77 model.
The most significant difference between this work and that of M98 is the
assumptions for infall evolution. M98 described the infall and accretion
with an exponential decay function such that the accretion rates began at
about $10^{-6}$ M$_\odot$ yr$^{-1}$ and, as the star approached the main
sequence, finished with $10^{-9}$ M$_\odot$ yr$^{-1}$. We assume constant
accretion onto the star+disk system throughout the duration of the life of
the envelope as predicted by the Shu77 collapse solution. However, in our
model, the rate of accretion onto the star's surface does decrease with
time as the disk forms and takes a more prominent role in processing
material from the envelope to the star. Also, the modeled evolution is
longer for M98 ($10^6$ years) than in our model ($2-6\times10^5$ years).
These different assumptions about infall have several implications. First,
in our model, we form a more massive star from similar initial conditions
in less time. For our 3 M$_\odot$ core, the star reaches 0.5 M$_\odot$ at
$t=132,000$ years. On the other hand, M98's models require, by design,
about $10^6$ years to create this 0.5 M$_\odot$ star from a 3 M$_\odot$
core. Of course, at the end of $10^6$ years, the star created by M98 has
completed its pre-main-sequence evolution, while our model still requires
time to completely accrete the disk material.
This discrepancy in the star's final mass presents another difference
between this work and that of M98. M98 included a dispersal timescale for
the envelope that included an assumption of mass loss due to outflow from
the central protostar. We do not include outflows in any way in these
models. The lack of outflows in this work has two implications: 1) the
mass evolution, as depicted in Figure~\ref{mass}, is incomplete and 2) we
do not consider the effects of an evacuated outflow cavity. Fortunately,
the mass evolution is not considerably altered by the exclusion of
outflows. \citet{calvet98} found that the ratio of mass loss rate to mass
accretion rate is $\sim 0.1$. We are unable to model the scattered light
coming from the outflow cavity. Others have, however, and we discuss their
work in Section~\ref{sxn-whitney}.
M98 also assume the envelope to have a density profile with a free-falling
structure, $n(r)\propto r^{-3/2}$. We use the Shu77 solution, which has an
inner free-falling envelope surrounded by a static envelope with
$n(r)\propto r^{-2}$. These differences in structure of the envelope cause
great disparities in the opacity. For example, a core that has 0.8 M$_\odot$
of material with $n(r)\propto r^{-3/2}$ creates $\tau_\nu(100\mu m)=0.26$.
A core with the same amount of material but described by the Shu77 collapse
solution, with an infall radius that is one-half of the outer radius,
creates $\tau_\nu(100\mu m)=0.17$. Such a disparity causes large changes
in $T_{bol}$. For example, if we place a 0.7 L$_\odot$ star with
T$_{eff}$=2000 K inside these two cores, the Shu77 core has $T_{bol}=416$ K,
and the free-falling core has a bolometric temperature that is half as high,
$T_{bol}=207$ K.
M98 do not calculate the full SED. Instead, they consider the optically
thin and thick limits and calculate two moments of the protostellar
spectrum: the bolometric temperature and luminosity. On the other hand, we
use DUSTY to calculate the full radiative transfer in the protostellar
system.
Finally, M98 use a single power-law to describe the dust emissivity,
$\kappa$. In contrast, we use the dust properties calculated by
\citet{ossenkopf94}. While the dust opacity is aptly described by a
power-law at long wavelengths, the shorter wavelength opacities are clearly
not properly represented in the same way (see Figure~\ref{kappa}).
Interestingly, the data encompass both models, but M98's models more closely
cover the median range of the data. However, there are a substantial
number of sources that have a lower bolometric luminosity than either model
allows. Perhaps, these sources are in the quiescent stage of episodic
accretion so that $L_{acc}$ is very low, or these low-luminosity objects
could simply have a mass less than 0.3 M$_\odot$ and not be included in
these models. Also, our models show higher luminosities at later times than
most of the data, a result of the aforementioned ``luminosity problem.''
Perhaps, accretion does occur episodically and only for short times, and we
should expect only a few objects to be in the phase where material is
accreting onto the star and, hence, have a high luminosity. Another
explanation, of course, is that the assumption of constant accretion
throughout the star's evolution is wrong.
\subsection{Infrared Color-Magnitude Diagrams}\label{sxn-whitney}
In Figure~\ref{sirtf_colors}, we show color-magnitude diagrams as would be
observed with the Multiband Imaging Photometer for Spitzer (MIPS) on the
Spitzer Space Telescope (SST). The three mass sequences (0.3, 1.0, and 3.0
M$_\odot$) are represented by the black lines in these figures. We show
the magnitude at 24 $\mu$m ($[24]$) plotted against the $[24]-[70]$ color;
on the right of Figure~\ref{sirtf_colors}, we plot $[70]$ and $[70]-[160]$.
Also, we show the models of W03 from Figure 8a of their work. The colored
lines show W03's calculations over varying angles of inclination. The
magenta line represents almost all inclination angles for the Class 0
stage, and the magenta triangle is the Class 0 stage as viewed pole-on.
Many differences exist between these models and our work. For example, W03
describe the envelope as a rotating, freely falling envelope
\citep{ulrich76}; W03 also set the envelope's inner radius to be quite
small ($\sim10$R$_\odot$) while we use a much larger inner radius. In
addition, W03 have a number of ``common'' model parameters, which do not
change for the different models. These parameters include the stellar
radius, temperature, and mass as well as the overall source luminosity. In
our model, we allow all of these to evolve with time. Finally, W03
included the effects of outflow cavities, 2-dimensional disks, and varying
inclination angles. We are unable to account for these things.
Young objects in the PPC/Class 0 segment of their lifespan are very red in
our calculations and occupy the lower, right-hand section of this plot.
However, these models are bluer in this plane than the W03 models for a
Class 0 source. Several factors probably contribute to this disparity.
Most of the difference arises from the fact that W03's models have a
considerably more luminous central source than our models. Thus, the
envelope material is heated more and emits a greater 70 $\mu$m flux.
Because W03 uses a very small inner radius, the optical depth is larger
than for our models even though they may have similarly massive envelopes.
Therefore, relative to the 24 $\mu$m flux, there is a higher 70 $\mu$m flux
for W03's models, and their Class 0 stage appears redder (and, of course,
more luminous). The effect is much less pronounced in the $[70]-[160]$
plane because these longer wavelength fluxes are less affected by optical
depth effects.
As with W03's models, our evolutionary tracks show that the objects become
bluer in the $[24]-[70]$ color as the source becomes more luminous and
$[24]$ increases. We have marked the PPC/Class 0 and Class 0/I transitions
(as determined by $L_{bol}/L_{smm}$ as in section~\ref{sxn-massratio}).
The Class I/II transition occurs at the end of our modeling sequence when
all envelope mass has accreted onto the star and disk. However, aspherical
effects are most relevant at these later stages, so these models probably
underestimate the 24 $\mu$m flux for the Class I/II transition.
Transitions for the 0.3 M$_\odot$ models are shown as cyan squares, the red
squares represent the transitions for the 1 M$_\odot$ model, and the green
squares are for the 3.0 M$_\odot$ model.
The right panel of Figure~\ref{sirtf_colors} shows a simpler behavior for
the evolution as seen at 70 and 160 $\mu$m. As an FHC, the protostar
appears very red but becomes bluer as the core collapses to $\sim$3
R$_\odot$ and grows in luminosity. The 160 $\mu$m flux increases over time
as the internal luminosity continues to grow. However, there is a point
where the envelope contains so little material that the 160 $\mu$m flux
drops despite the fact that the far-infrared emission of the internal star
is growing. By the end of these tracks, the objects have reached the end
of their Class I stage; they no longer have envelope material but do have
an optically thick disk.
\section{Conclusions}
We have presented the results from modeling the evolution of protostars
with 3 different masses: 0.3, 1.0, and 3.0 M$_\odot$. The framework for
the evolution of these protostars was taken mostly from the work of
\citet{adams86} but also used the results from several other authors.
These efforts were similar and complementary to the work of \citet{myers98}
and \citet{whitney03} but employed different methods and theories and, hence,
got different results.
We note that the evolution of these modeled protostars is significantly
affected by the existence and lifetime of the first hydrostatic core. The
work done heretofore is useful \citep{boss95,masunaga98}, but more detailed
theoretical work is needed. As we begin to observe earlier stages of star
formation \citep{young04a}, the role of the FHC must be ascertained.
We find that the Class 0 stage, when determined by $T_{bol}$, should be
short-lived, lasting only about 1/10 to 1/4 of the protostar's life. This
result is somewhat model dependent, but it should also extend to other
models of star formation. Therefore, the surveys being conducted with the
Spitzer Space Telescope and 2MASS should reveal a small fraction of Class 0
to Class I objects. However, we suggest not using the $T_{bol}$ criterion
for classification.
We find that the bolometric temperature is a poor discriminator for
protostars at early evolutionary stages. Instead, we suggest using the
$L_{bol}/L_{smm}$ signature proposed by \citet{andre93}. Based on physical
grounds, we suggest these boundaries for classifying protostars:
$L_{bol}/L_{smm}=35$ for PPC/Class 0, $L_{bol}/L_{smm}=175$ for Class 0/I,
and $L_{bol}/L_{smm}\sim 2000$ for the Class I/II transition. The latter is
largely dependent on the adopted disk model. Also, $L_{bol}/L_{smm}$ is
not relevant after all envelope mass has been accreted (beyond the Class I
stage) since it is a measure of the stellar to envelope mass ratio
\citep{andre93}.
We have presented several observational tools by which the inside-out
collapse model can be effectively tested: infrared color-magnitude
diagrams, plots of the bolometric luminosity and temperature, and a plot of
$T_{bol}$ and $L_{bol}/L_{smm}$. Large surveys will produce these
observable quantities for hundreds of young protostars and test the theory
of inside-out collapse.
\section{Acknowledgements}
We thank Minho Choi for the use of his program to calculate the density
profiles. Also, many thanks to Moshe Elitzur, Maia Nenkova, and \v{Z}eljko
Ivezi\'{c} for providing the modified version of DUSTY that allows heating
by the ISRF and for much assistance in the use of DUSTY. Many thanks also
to our anonymous referee whose thorough reading and thoughtful critique
have made this a better paper. This work is supported by NASA grants
NAG5-10488 and NNG04GG24G.
\section{Introduction} \label{sec:intro}
\subsection{Regenerating Codes} \label{sec:regenerating_codes}
In the regenerating-code framework~\cite{DimGodWuWaiRam}, all symbols are drawn from a fixed finite field $\mathbb{F}$ whose size is a power of a prime. The size of the field does not play an important role in the present paper and for this reason does not appear in our notation for the field. Data pertaining to a file comprising $B$ symbols is encoded into a set of $n\alpha$ coded symbols and then stored across $n$ nodes in the network, with each node storing $\alpha$ coded symbols. A data collector should be able to retrieve the file by downloading the entire data from any $k$ nodes. Furthermore, $k$ is the minimum such number that allows reconstruction of the file. In the event of a node failure{\footnote{Though regenerating codes are defined for the case of single node-failures, there are later works that looked into the case of simultaneous failure of multiple nodes, and studied cooperative repair in such a situation\cite{ShuHu,KerScoStr}. However, in this paper, we focus only on single node-failures.}}, node repair is accomplished by having the replacement node connect to any $d$ nodes and download $\beta \leq \alpha$ symbols from each node, with $\alpha \leq d \beta < B$. These $d$ nodes are referred to as helper nodes. From the minimality of $k$, it can be shown that $d$ must lie in the range
\begin{eqnarray*}
k \leq d \leq n-1.
\end{eqnarray*}
The quantity $d\beta$ is called the repair bandwidth. Here one makes a distinction between functional and exact repair. By functional repair (FR), it is meant that a failed node will be replaced by a new node such that the resulting network continues to satisfy the data-collection and node-repair properties defining a regenerating code. An alternative to functional repair is {\em exact repair} (ER), under which one demands that the replacement node store precisely the same content as the failed node. From a practical perspective, ER is preferred for at least two reasons. Firstly, the algorithms pertaining to data collection and node repair remain static for the ER case. Secondly, if the ER code is linear, then it permits the storage of data in systematic form, which facilitates operations under paradigms such as MapReduce~\cite{mapreduce}. We will use ${\cal P}_{\text{f}}$ to denote the {\em full parameter set} ${\cal P}_{\text{f}} = \{(n,k,d), (\alpha,\beta)\}$ of a regenerating code and use ${\cal P}$ when we wish to refer to only the parameters $(n,k,d)$.
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[height=1.5in]{IJICT_Data_Collection.pdf}
\caption{Data collection.}
\label{fig:data_collection}
\end{minipage}
\hspace{-0.5cm}
\begin{minipage}[b]{0.50\linewidth}
\centering
\includegraphics[height=1.5in]{IJICT_Node_Repair.pdf}
\caption{Node repair.}
\label{fig:node_repair}
\end{minipage}
\hspace{-0.5cm}
\end{figure}
\subsection{The Storage-Repair Bandwidth Tradeoff} \label{sec:intro_tradeoff}
A cut-set bound based on network-coding concepts tells us that, given a code parameter set ${\cal P}_{\text{f}}$, the maximum possible size $B$ of a regenerating code is upper bounded as~\cite{DimGodWuWaiRam},
\begin{eqnarray} \label{eq:cut_set_bd}
B & \leq & \sum_{\ell =0}^{k-1} \min\{\alpha,(d-\ell)\beta\} .
\end{eqnarray}
The derivation of the bound in \eqref{eq:cut_set_bd} makes use of only FR constraints, and therefore it is valid for both FR and ER codes. An FR code ${\cal \hat{C}}$ is said to be optimal if the file size $\hat{B}$ of ${\cal \hat{C}}$ achieves the cut-set bound in \eqref{eq:cut_set_bd} with equality, and further, that if either $\alpha$ or $\beta$ is reduced, equality fails to hold in \eqref{eq:cut_set_bd}. The existence of such codes has been shown in~\cite{DimGodWuWaiRam}, using network-coding arguments related to multicasting~\cite{Wu}. In general, we will use ${\cal \hat{C}}$, $\hat{B}$ etc to denote symbols relating to an optimal FR code while reserving ${\cal {C}}$, $B$ etc. to denote symbols relating to an ER code.
Given ${\cal P}$ and $B$, there are multiple pairs $(\alpha,\beta)$ that satisfy \eqref{eq:cut_set_bd}. It is desirable to minimize both $\alpha$ as well as $\beta$ since minimizing $\alpha$ reduces storage requirements, while minimizing $\beta$ results in a storage solution that minimizes repair bandwidth. It is not possible to minimize both $\alpha$ and $\beta$ simultaneously and thus there is a tradeoff between choices of the parameters $\alpha$ and $\beta$. This tradeoff will be referred to as Storage-Repair Bandwidth (S-RB) tradeoff under functional repair. Since much of the emphasis of the current paper is upon the distinction between the S-RB tradeoffs under functional and exact repair, we will use FR tradeoff and ER tradeoff to refer respectively, to the two tradeoffs. The two extreme points in the FR tradeoff are termed the {\em minimum storage regeneration} (MSR) and {\em minimum bandwidth regeneration} (MBR) points respectively. The parameters $\alpha$ and $\beta$ for the MSR point on the tradeoff can be obtained by first minimizing $\alpha$ and then minimizing $\beta$ to yield
\begin{eqnarray} \label{eq:MSR}
B = k \alpha, \ \ \alpha = (d-k+1)\beta .
\end{eqnarray}
Reversing the order leads to the MBR point which thus corresponds to
\begin{eqnarray} \label{eq:MBR}
B = \left( dk - {k \choose 2} \right)\beta, \ \ \alpha = d \beta .
\end{eqnarray}
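To make the two extreme points concrete, the following Python sketch (ours; function names are our own) evaluates \eqref{eq:MSR} and \eqref{eq:MBR} for a chosen $\beta$.
\begin{verbatim}
# Sketch (ours): the MSR and MBR operating points for given (k, d, beta).

def msr_point(k, d, beta):
    alpha = (d - k + 1) * beta          # eq. (MSR)
    return alpha, k * alpha             # (alpha, file size B)

def mbr_point(k, d, beta):
    alpha = d * beta                    # eq. (MBR)
    return alpha, (d * k - k * (k - 1) // 2) * beta

print(msr_point(k=3, d=5, beta=1))      # (3, 9)
print(mbr_point(k=3, d=5, beta=1))      # (5, 12)
\end{verbatim}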
\begin{figure}[ht]
\centering
\includegraphics[width=3.5in]{IJICT_Tradeoff.pdf}
\caption{FR Tradeoff. Here $(n=60, k=51, d=58, B=33660)$.}
\label{fig:tradeoff}
\end{figure}
The remaining points on the tradeoff will be referred to as {\em interior points}. As the tradeoff is piecewise linear, there are $k$ points of slope discontinuity, corresponding to
\begin{eqnarray*}
\alpha = (d-\mu)\beta, \ \ \mu \in \{0, 1, \cdots, k-1 \}.
\end{eqnarray*}
Setting $\mu=k-1$ and $\mu=0$ yields the MSR and MBR points, respectively. The remaining values $\mu \in \{1, \cdots, k-2\}$ correspond to interior points with slope discontinuity. Interior points where there is no slope discontinuity can be specified by setting
\begin{eqnarray}
\nonumber \alpha & = & (d-\mu)\beta - \theta , \ \theta \in [0, \beta) \\
\label{eq:op_point}& = & (d-\mu)\beta - \nu \beta , \ \nu \in [0 , 1),
\end{eqnarray}
with $\mu \in \{0,1,\ldots, k-2\}$. When $\mu = k-1$, we always set $\nu =0$. We will refer to the pair $(\alpha, \beta)$ as an {\em operating point} of the regenerating code. The tradeoff between $\alpha$ and $d\beta$ is plotted in Fig.~\ref{fig:tradeoff} for $(n=60,k=51,d=58)$ and file size $B=33660$.
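Conversely, given an operating point $(\alpha, \beta)$, the pair $(\mu, \nu)$ of \eqref{eq:op_point} is easily recovered; the sketch below (ours) illustrates this.
\begin{verbatim}
# Sketch (ours): recovering (mu, nu) of eq. (op_point) from (alpha, beta),
# assuming (d - k + 1)*beta <= alpha <= d*beta.

import math

def operating_point(alpha, beta, d):
    x = d - alpha / beta        # equals mu + nu with nu in [0, 1)
    mu = math.floor(x)
    return mu, x - mu

print(operating_point(alpha=3.5, beta=1.0, d=5))   # (1, 0.5)
\end{verbatim}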
The results in the present paper pertain to the ER tradeoff. Several ER code constructions \cite{RasShaKum_pm, CadJafMalRamSuh, PapDimCad, SuhRam, ShaRasKumRam_ia, ShaRasKumRam_rbt, TamWanBru} are now available that correspond to the MSR and the MBR points of the FR tradeoff. Thus the end points of the ER tradeoff coincide with those of the FR tradeoff. However, characterization of the interior points of the ER tradeoff remains an open problem in general.
\subsection{The Normalized ER Tradeoff and ER-Code Symmetry \label{sec:norm_tradeoff}}
For a given parameter set ${\cal P} = (n,k,d)$, there are several known constructions for an ER code, each of which is valid only for a restricted set of file sizes. Since the ER tradeoff for a fixed $(n,k,d)$ varies with file size $B$, comparison across code constructions is difficult. For this reason, we normalize $(\alpha, \beta)$ by the file size $B$. The tradeoff between $\bar{\alpha}=\frac{\alpha}{B}$ and $\bar{\beta}=\frac{\beta}{B}$ thus obtained for a fixed value of $(n,k,d)$, will be referred to here as the {\em normalized ER tradeoff}. The tuple $(\bar{\alpha}, \bar{\beta})$ is referred to as the {\em normalized operating point} of a regenerating code. Throughout the remainder of this paper, we will work only with the normalized version of the ER tradeoff.
Given a regenerating code ${\cal C}$ associated with parameter set ${\cal P}$ and file size $B$, the parameters of the code are clearly invariant to coordinate (i.e., node) permutation. Given an ER code ${\cal C}$, we can vertically stack the $n!$ codewords obtained by encoding independent files using all possible node permutations of ${\cal C}$. The resultant stack of $n!$ codewords may be regarded as a single new ER regenerating code ${\cal C'}$, where the parameters $(n,k,d)$ remain the same, but where the parameters $(\alpha,\beta)$ and $B$ are each scaled up multiplicatively by a factor of $n!$. It is clear that ${\cal C'}$ is symmetric in the sense that the amount of information contained in a subset $A \subset [n]$ of nodes depends only upon the size $|A|$ of $A$, and not upon the particular choice of nodes lying in $A$. This symmetry carries over even in the case of repair data transferred by a collection $D$ of $d=|D|$ nodes for the replacement of a fixed node. Such codes will be referred to as {\em symmetric} ER codes. Since the normalized values $(\bar{\alpha}, \bar{\beta})$ of ${\cal C'}$ remain the same as those of ${\cal C}$, there is no change in the operating point on the normalized ER tradeoff in going from ${\cal C}$ to ${\cal C'}$. Thus, given our focus on the normalized tradeoff, it is sufficient to consider {\em symmetric} ER codes. This observation was first made by Tian in \cite{Tia}.
\subsection{Results \label{sec:results}}
Though the complete characterization of the normalized ER tradeoff for every parameter set remains an open problem, much progress has been made. It was shown in \cite{ShaRasKumRam_rbt} that, apart from the MBR point and a small region adjacent to the MSR point, there do not exist ER codes whose $(\alpha, d\beta)$ values correspond to coordinates of an interior point on the FR tradeoff. However, the authors of \cite{ShaRasKumRam_rbt} did not rule out the possibility of approaching the FR tradeoff asymptotically, i.e., as the file size $B \rightarrow \infty$. It was first shown by Tian in \cite{Tia} that the ER tradeoff lies strictly away from the FR tradeoff. This was accomplished by using an information theory inequality prover~\cite{ITIP} to characterize the normalized ER tradeoff for the particular case of $(n,k,d)=(4,3,3)$ and showing it to be distinct from the FR tradeoff. The results in \cite{Tia} were, however, restricted to the particular case $(n,k,d)=(4,3,3)$.
That the ER tradeoff lies strictly above the FR tradeoff for {\em any} value of the parameter set $(n,k,d)$ was first shown in \cite{SasSenKum_isit}. The first result in the present paper is to show an outer bound on the normalized ER tradeoff for every parameter set $(n,k,d)$, and is stated in Thm.~\ref{thm:bound1}. We refer to this outer bound as the {\em repair-matrix bound}. This outer bound in conjunction with a code construction appearing in \cite{SenSasKum_itw}, characterizes the normalized ER tradeoff for the parameter set $(n,k,d)$ for $k=3$, $d=n-1$ and any $n \geq 4$.
Two outer bounds on the normalized ER tradeoff appeared subsequently in \cite{Duursma2014} and \cite{Duursma2015}. In \cite{Duursma2014}, the author presents two bounds on the ER file size. In the first bound, he builds on top of the techniques presented in \cite{Tia} and derives a bound that applies to a larger set of parameters. The second bound is obtained by taking a similar approach as in \cite{SasSenKum_isit}, and is shown to improve upon the one given in \cite{SasSenKum_isit}. In \cite{Duursma2015}, the author provides an upper bound on ER file size, that is non-explicit in general. However for the case of linear codes, the bound can be computed to obtain an explicit expression for any parameter set $(n,k,d)$. A second paper by Tian, \cite{Tia_544}, characterizes the ER tradeoff for $(n=5,k=4,d=4)$ with the help of a class of codes known as the {\em layered codes} introduced in \cite{TiaSasAggVaiKum}. A different approach adopted to derive an outer bound on the normalized ER tradeoff is presented in \cite{MohTan}. In \cite{MohTan}, Mohajer et al. derived an outer bound for general $(n,k,d)$ that turns out to be optimal for the special case of $(n,k=n-1,d=n-1)$ in a limited region of $\bar{\beta} \leq \frac{2\bar{\alpha}}{k}$ close to the MBR point. Optimality follows from the fact that a code construction due to Goparaju et al. in \cite{GopRouCal_isit} meets their outer bound in the region $\bar{\beta} \leq \frac{2\bar{\alpha}}{k}$. We will refer to this outer bound in \cite{MohTan} as the {\em Mohajer-Tandon bound}.
The second result of the present paper is an improvement upon the Mohajer-Tandon bound for the case $k < d$. We make use of the very same techniques introduced in \cite{MohTan} to arrive at this improved bound. This bound is stated in Thm.~\ref{thm:bound2}, and we refer to it as the {\em improved Mohajer-Tandon bound}. While the improved Mohajer-Tandon bound performs better whenever $k < d$, it coincides with the Mohajer-Tandon bound when $k=d$. The repair-matrix bound still performs better than the improved Mohajer-Tandon bound in a region close to the MSR point. The theorem below essentially combines the repair-matrix bound and the improved Mohajer-Tandon bound.
\begin{thm} \label{thm:bound3} Let
\begin{eqnarray*}
B_1 = \sum_{i=0}^{k-1} \min\{\alpha, (d-i)\beta\} - \delta,
\end{eqnarray*} where $\delta$ is as defined in \eqref{eq:eps}, and it corresponds to the repair-matrix bound. Let $B_2$ be the expression on the RHS in \eqref{eq:soh_improved}, corresponding to the improved Mohajer-Tandon bound. Then the ER file size $B$ is bounded by,
\begin{eqnarray*}
B & \leq & \min\{B_1, B_2\}.
\end{eqnarray*}
\end{thm}
The final result presented in this paper is under the restricted setting of linear codes. For the case of $(n \geq 4, k=n-1, d=n-1)$, we characterize the normalized ER tradeoff under this setting. This is done by deriving an explicit upper bound on the file size $B$ of an ER linear regenerating code for the case $k=d=n-1, n \geq 4$. The outer bound remains valid for the general case $k=d$ even when $d< n-1$. For the case of $(n,k=n-1,d=n-1)$, the outer bound matches the region achieved by the layered codes. This result, which first appeared in \cite{PraKri_isit}, is stated below:
\begin{thm} \label{thm:new_bound_k_eq_d}
Consider an exact repair linear regenerating code, having parameters $(n, k = n-1, d = n-1), (\alpha, \beta), n \geq 4$. Then, the file size $B$ of the code is upper bounded by
\begin{eqnarray} \label{eq:bound_rank_G}
B & \leq & \left \{ \begin{array}{rl} \left \lfloor \frac{r(r-1)n\alpha + n(n-1)\beta}{r^2+r}\right \rfloor , & \frac{d\beta}{r} \leq \alpha \leq \frac{d\beta}{r-1},
\ \ \ 2 \leq r \leq n - 2 \\
(n-2)\alpha + \beta, & \frac{d\beta}{n-1} \leq \alpha \leq \frac{d\beta}{n-2} \end{array} \right. .
\end{eqnarray}
\end{thm}
We remark that there are no known instances of non-linear codes that violate the above outer bound derived under the linear setting. In an independent work \cite{ElyMohTan}, the authors also derive the normalized linear ER tradeoff for the case $(n,k=n-1,d=n-1)$, but the tradeoff is expressed in an implicit manner as the solution to an optimization problem.
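For reference, the bound of Thm.~\ref{thm:new_bound_k_eq_d} is easy to evaluate numerically; the Python sketch below (ours) selects the applicable regime for a given $(n, \alpha, \beta)$ with $k = d = n-1$.
\begin{verbatim}
# Sketch (ours): the file-size bound of eq. (bound_rank_G) for linear ER
# codes with k = d = n-1, assuming beta <= alpha <= d*beta.

def linear_er_bound(n, alpha, beta):
    d = n - 1
    if d * beta / (n - 1) <= alpha <= d * beta / (n - 2):
        return (n - 2) * alpha + beta
    for r in range(2, n - 1):           # r = 2, ..., n-2
        if d * beta / r <= alpha <= d * beta / (r - 1):
            return (r*(r-1)*n*alpha + n*(n-1)*beta) // (r*r + r)
    raise ValueError("alpha outside [beta, d*beta]")

# Example at alpha = d*beta/2 for (n,k,d) = (5,4,4); the FR bound gives 21.
print(linear_er_bound(n=5, alpha=6, beta=3))   # 20
\end{verbatim}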
\begin{figure}[h!]
\begin{center}
\subfigure[For $k=3$, $d=n-1$, codes in \cite{SenSasKum_itw} achieve our repair-matrix bound. The example here is $(n=6,k=3,d=5)$. ]{\label{fig:plot1a}\includegraphics[width=2.8in]{IJICT_6_3_5_Tradeoff.pdf}}
\hspace{0.2in}
\subfigure[For $k=d=n-1$, our outer bound matches the achievable region of layered codes, thus characterizing the tradeoff under linear setting. The example here is $(n=6,k=5,d=5)$.]{\label{fig:plot1b}\includegraphics[width=2.9in]{IJICT_6_5_5_Lin_Tradeoff.pdf}}
\caption{Characterization of normalized ER Tradeoff. \label{fig:plot1}}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[The example here is $(n=13,k=7,d=12)$. The combination of repair-matrix bound and improved Mohajer-Tandon bound performs better than other bounds given in the plot.]{\label{fig:plot2a}\includegraphics[width=2.9in]{IJICT_13_7_12_Bounds.pdf}}
\hspace{0.2in}
\subfigure[The example here is $(n=6,k=d=5)$. When $k=d$, the Mohajer-Tandon and the improved Mohajer-Tandon bounds coincide.]{\label{fig:plot2b}\includegraphics[width=2.9in]{IJICT_6_5_5_Bounds.pdf}}
\caption{Performance comparison of various outer bounds.\label{fig:plot2}}
\end{center}
\end{figure}
In Fig.~\ref{fig:plot1}, we plot the cases in which our outer bounds characterize the normalized ER tradeoff. In Fig.~\ref{fig:plot2}, we compare the performance of various known bounds.
\subsection{Our Approach\label{sec:approach}}
The present paper derives outer bounds on the normalized ER tradeoff of a regenerating code with full parameter set ${\cal P}_f = \{(n,k,d),(\alpha, \beta)\}$. Since every ER code is an FR code, it is clear that the normalized ER tradeoff lies on or above, and to the right of, the normalized FR tradeoff in the $(\bar{\alpha}, \bar{\beta})$-plane. When we say that the normalized ER tradeoff {\em lies above} the normalized FR tradeoff, we imply that for given $(n,k,d)$ there is at least one value of the normalized parameter $\bar{\beta}_0$ such that the corresponding normalized values $\bar{\alpha}_{\text{ER}}$ and $\bar{\alpha}_{\text{FR}}$ satisfy $\bar{\alpha}_{\text{ER}} > \bar{\alpha}_{\text{FR}}$. An equivalent definition in terms of the file size $B$ is given as follows. For given $(n,k,d)$, let $\hat{B}_0:=\hat{B}_{\text{opt}}(\alpha_0, \beta_0)$ denote the optimal FR file size at an operating point $(\alpha_0,\beta_0)$ with $\alpha_0 = (d-\mu)\beta_0 - \nu \beta_0$ as in \eqref{eq:op_point}. Thus $\left(\frac{\alpha_0}{\hat{B}_0},\frac{\beta_0}{\hat{B}_0}\right)$ is a point lying on the normalized FR tradeoff. Suppose that the maximum file size of an ER code as a function of $(\alpha, \beta)$ is
\begin{eqnarray*}
B(\alpha, \beta) & = & \hat{B}(\alpha,\beta) - \epsilon(\alpha, \beta)
\end{eqnarray*}
for some non-negative function $\epsilon(\alpha, \beta)$. Let $\epsilon_0 = \epsilon(\alpha_0,\beta_0)$. Then the normalized operating point $(\bar{\alpha}_{\text{ER}},\bar{\beta}_{\text{ER}})$ for an optimal ER code, given by
\begin{eqnarray*}
\bar{\beta}_{\text{ER}} \ = \ \frac{\beta_0}{B(\alpha_0, \beta_0)} & = & \frac{1}{\frac{\hat{B}_0}{\beta_0} - \frac{\epsilon_0}{\beta_0}} \\
\bar{\alpha}_{\text{ER}} \ = \ \frac{\alpha_0}{B(\alpha_0, \beta_0)} & = & \frac{1}{\frac{\hat{B}_0}{\alpha_0} - \frac{\epsilon_0}{\alpha_0}} \ = \
\frac{1}{\frac{\hat{B}_0}{\alpha_0} - \frac{\epsilon_0}{\beta_0} \frac{1}{(d-\mu-\nu)}} \\
\end{eqnarray*}
will be bounded away from $\left(\frac{\alpha_0}{\hat{B}_0},\frac{\beta_0}{\hat{B}_0}\right)$ if $\frac{\epsilon_0}{\beta_0}$ does not vanish. It follows that an upper bound on the file size $B$ of an ER code,
\begin{eqnarray*}
B & \leq & B_{\text{upper}} (\alpha, \beta),
\end{eqnarray*}
such that
\begin{eqnarray}
\label{eq:nonvanishing} \lim_{\beta \rightarrow \infty} \frac{\hat{B}(\alpha,\beta) - B_{\text{upper}} (\alpha, \beta)}{\beta} & > & 0
\end{eqnarray}
for some $(\mu,\nu)$ will equivalently define a bound on the normalized ER tradeoff that lies strictly above the normalized FR tradeoff. Throughout the paper, our approach therefore will be to derive upper bounds on the ER file size that satisfy the criterion in \eqref{eq:nonvanishing}.
If the full parameter set of a regenerating code has $n>(d+1)$, then by restricting attention to a set of $(d+1)$ nodes, one obtains a regenerating code with $n=(d+1)$ with all other parameters remaining unchanged. It follows from this that any upper bound on the file size $B$ corresponding to the full parameter set $\{(n=(d+1),k,d), (\alpha,\beta)\}$ continues to hold for the case $n>(d+1)$ with the remaining parameters left unchanged. Keeping this in mind, we will assume throughout that $n=(d+1)$.
A key technique used in the paper is to lower bound the difference $\epsilon = \hat{B}_{\text{opt}}(\alpha, \beta) - B(\alpha,\beta)$ between the file size of an optimal FR code and that of an ER code. The total information content in a regenerating code can be accumulated from a set $\{1,2,\ldots, k\}$ of $k$ nodes. The conditional entropy of the $(i+1)$-th node's data, conditioned on the data accumulated from the previous $i$ nodes, $0 \leq i \leq k-1$, is compared against the corresponding value for an optimal FR code, and the difference is defined to be $\omega_i$. It follows that $\epsilon$ is the sum of all $\{\omega_i\}_{i=0}^{k-1}$. Our approach is to relate $\{\omega_i\}_{i=0}^{k-1}$ to the entropy of certain collections of repair data, and eventually find an estimate on $\epsilon$. Along the way, we construct a {\em repair matrix} as an arrangement of the random variables corresponding to repair data in a $((d+1) \times (d+1))$-sized matrix. Many properties pertaining to the inherent symmetry of regenerating codes become clear from the repair-matrix perspective, and we use it as a tool in our proofs.
A different approach is used in deriving an upper bound on the ER file size of a linear regenerating code. Here we focus on a parity-check matrix $H$ of a linear ER code, and construct an augmented parity-check matrix $H_{\text{repair}}$ of size $(n\alpha \times n\alpha)$ that captures the exact-repair properties. A block-matrix structure is associated with $H_{\text{repair}}$, and thereby we identify $n$ thick columns $\{H_1, H_2, \ldots, H_n\}$ of $H_{\text{repair}}$, with $H_i$ associated with node $i$. By a thick column, we mean a collection of $\alpha$ columns. Let us denote by $\delta_i$ the incremental rank added by $H_i$ to the collection of $(i-1)\alpha$ vectors in $\{H_j \mid 1 \leq j < i\}$. We estimate lower bounds on $\{\delta_i\}_{i=1}^{n}$ that eventually lead to a lower bound on the rank of $H$. Since the file size $B$ is the dimension of the code, a lower bound on the rank of $H$ results in an upper bound on the file size.
\subsection{Organization of the Paper}
In Sec.~\ref{sec:non-exist}, we describe the result of Shah et al. showing the non-existence of ER codes operating on the FR tradeoff. In Sec.~\ref{sec:bound1}, we present an upper bound on the ER file size. In Sec.~\ref{sec:oth_bounds}, we review the various upper bounds on the ER file size that are known in the literature. In Sec.~\ref{sec:bound2}, we build on the existing Mohajer-Tandon bound, and make an improvement upon it to get a better bound when $d > k$. In Sections~\ref{sec:linear_app}, \ref{sec:544} and \ref{sec:main_proof}, we focus on upper bounds on file size under the linear setting. We characterize the normalized ER tradeoff for the case $(n,k=n-1,d=n-1)$ in Sec.~\ref{sec:main_proof}, while the proof techniques are illustrated for the particular case of $(n=5,k=4,d=4)$ in Sec.~\ref{sec:544}. In Sec.~\ref{sec:achievability}, we discuss the achievability of the outer bounds on the normalized ER tradeoff derived in earlier sections.
\section{The Non-existence of ER Codes Achieving FR tradeoff\label{sec:non-exist}}
As mentioned in Sec.~\ref{sec:results}, it was shown in \cite{ShaRasKumRam_rbt} that apart from the MBR point and a small region adjacent to the MSR point, there do not exist ER codes whose $(\alpha, d\beta)$ values correspond to coordinates of an interior point on the FR tradeoff. The theorem in \cite{ShaRasKumRam_rbt} due to Shah et al. is stated below.
\begin{thm} \label{thm:shah_non_exist}(Theorem 7 in \cite{ShaRasKumRam_rbt}) For any given values of $(n,k \geq 3,d)$, ER codes do not exist for the parameters $(\alpha, \beta, B)$ lying at an interior point on the FR tradeoff except possibly for the case
\begin{eqnarray} \label{eq:exception}
(d-k+1)\beta & \leq \ \alpha \ \leq & \left[(d-k+2) - \frac{d-k+1}{d-k+2}\right] \beta.
\end{eqnarray}
\end{thm}
The region
\begin{eqnarray*}
\{ (\alpha, \beta) \mid (d-k+1)\beta \ \leq \ \alpha \ \leq \ \left[(d-k+2) - \frac{d-k+1}{d-k+2}\right] \beta\}
\end{eqnarray*} on which the theorem does not claim the non-existence of ER codes is referred to as the {\em near-MSR region}. Theorem~\ref{thm:shah_non_exist}, however, did not rule out the possibility of approaching the FR tradeoff asymptotically, i.e., as the file size $B \rightarrow \infty$. As mentioned earlier, this question was answered by Tian in the negative in \cite{Tia} for the specific case $(n,k,d)=(4,3,3)$.
In this section, we will describe the approach taken by Shah et al. in proving Theorem~\ref{thm:shah_non_exist}, in terms of the notation to be used in the present paper. We begin with some notation and definitions. Let $\mathcal{C}$ be an ER regenerating code over $\mathbb{F}$ having file size $B$ and full parameter set ${\cal P}_f = \{(n,k,d), (\alpha, \beta)\}$. We regard the message symbols as a collection of $B$ random variables taking on values in $\mathbb{F}$ and use $M$ to denote the $(1 \times B)$ random vector whose components are the $B$ message symbols. We use $p_M(\cdot)$ to denote the joint probability distribution of the components of $M$. All other random variables pertaining to the regenerating code are functions of the components of $M$, and satisfy probability distributions that are induced by $p_M$.
We will use $[i], 1 \leq i \leq n$, to denote the set $\{1, 2, \ldots, i \}$ and define $[0]$ to be the empty set $\phi$. For $1 \leq i \leq j \leq n$, we use $[i \ j]$ to denote the set $\{i, i+1, \ldots, j \}$. Whenever we write $[i \ j]$ with $i > j$, it will be assumed to be the empty set. On occasion, we will run into a set of random variables of the form $W_A$ where $A$ is the empty set; in this case, $W_A$ should again be interpreted as the empty set.
\subsection{The Repair Matrix and the Constraints Imposed By Exact-Repair\label{sec:rep_mat}}
As made clear in Sec.~\ref{sec:approach}, we assume that $n=d+1$ without loss of generality. Let $W_x, 1 \leq x \leq n$ denote the random variable corresponding to the contents of a node $x$. Given a subset $A \subseteq [n]$, we use \begin{eqnarray*}
W_A & = & \{W_x \mid x \in A \}
\end{eqnarray*}
to denote the contents of nodes indexed by $A$. Clearly,
\begin{eqnarray}
H(W_x) & \leq & \alpha. \label{eq:capacity_alpha}
\end{eqnarray}
Let $S_x^y$, $x,y \in [n], x \neq y$ denote the random variables corresponding to the helper data sent by the helper node $x$ to the replacement of a failed node $y$. This is well defined because under the assumption $n=(d+1)$, there is just one set of $d$ helper nodes for any failed node. Given a pair of subsets $X,Y \subseteq [n]$, we define $S_X^Y \ = \ \left\{ S_x^y \mid x \in X, y \in Y, x \neq y \right\}$. We use the short-hand notation $S_X$ to indicate $S_X^X$. From the definition of regenerating codes, it follows that
\begin{eqnarray}
H(S_x^y) \ \leq \ \beta. \label{eq:capacity_beta}
\end{eqnarray}
In (\ref{eq:capacity_alpha}), (\ref{eq:capacity_beta}), information is measured in units of $\log_2(|\mathbb{F}|)$ bits. The collection of random variables $\{S_x^y \mid x \in [d+1], y \in [d+1], x \neq y \}$ can schematically be represented using a $(d+1) \times (d+1)$ matrix ${\cal R}$ with empty cells along the diagonal, as shown in Fig.~\ref{fig:repairmatrix}. The rows in this matrix correspond to the helper nodes and the columns to the nodes undergoing repair. The $(x,y)$th entry of this matrix thus corresponds to $S_x^y$. We will refer to ${\cal R}$ as the {\em repair matrix}. The subsets of ${\cal R}$ appearing below and above the diagonal are denoted by ${\cal R}_L$ and ${\cal R}_U$, respectively.
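As a concrete illustration (ours), the index structure of the repair matrix and its two triangular halves can be enumerated as follows; the labels $(x,y)$ stand in for the random variables $S_x^y$.
\begin{verbatim}
# Sketch (ours): the repair matrix as a (d+1) x (d+1) array of index
# labels, with cell (x, y) standing for S_x^y and an empty diagonal.

def repair_matrix(d):
    n = d + 1
    return [[None if x == y else (x, y) for y in range(1, n + 1)]
            for x in range(1, n + 1)]

R = repair_matrix(d=3)                                    # n = 4 nodes
R_L = [c for row in R for c in row if c and c[0] > c[1]]  # below diagonal
R_U = [c for row in R for c in row if c and c[0] < c[1]]  # above diagonal
print(len(R_L), len(R_U))                                 # 6 6
\end{verbatim}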
\begin{figure}[h!]
\begin{center}
\subfigure[Illustration of the repair matrix.]{\label{fig:repairmatrix}\includegraphics[width=2.65in]{IJICT_RepairMatrix.pdf}}
\hspace{0.2in}
\subfigure[The trapezoidal configuration]{\label{fig:trapezium}\includegraphics[width=2.9in]{IJICT_Trapezoid.pdf}}
\caption{The repair matrix and the trapezoidal configuration}
\end{center}
\label{fig:repmat_trapez}
\end{figure}
Apart from the constraints given in \eqref{eq:capacity_alpha}, \eqref{eq:capacity_beta}, the requirements of data reconstruction and exact-repair impose further constraints. The constraint due to data reconstruction is given by either of the following two equivalent statements:
\begin{eqnarray}
H(W_A) & = & B, \ |A| \geq k \label{eq:data_collection_1}, \\
H(M \mid W_A) & = & 0, \ |A| \geq k \label{eq:data_collection_2}.
\end{eqnarray}
For every $i \in [n]$, the exact-repair condition imposes the constraint
\begin{eqnarray}
H(W_i \mid S_{\cal D}^i) & = & 0, \ |{\cal D}| = d, \ i \notin {\cal D} \label{eq:exact_repair}.
\end{eqnarray}
\subsection{Trapezoidal Configurations in the Repair Matrix \label{sec:trapezium}}
Throughout the discussion in the sections up to Sec.~\ref{sec:bounds_file_size}, we will assume that there is a fixed numbering of the $n=(d+1)$ nodes in the network. In \eqref{eq:data_collection_1}, the file size $B$ is expressed as the joint entropy of a collection of $k$ random variables $\{W_1, W_2, \ldots, W_k\}$. It is possible to express $B$ as the joint entropy of other subsets of random variables, in particular those involved in node repair. An example, important for the discussion to follow, appears below. Let $q$ be an integer lying in the range $0 \leq q \leq k$ and set
\begin{eqnarray*}
Q & = & \{1,2,\cdots,q\} \\
P& = & \{q+1, q+2, \cdots, k\} \\
R & = & \{k+1, k+2, \cdots, (d+1)\} .
\end{eqnarray*}
Note that $Q,P,R$ are all functions of the integer $q$. When $q=0$, we will set $Q$ to be the empty set $\phi$. Note that $P = [k] \setminus Q$ and $R = [k+1 \ d+1]$. We define:
\begin{eqnarray}
Z_q & = & {\cal R}_L \cap S_{[d+1]}^P \\
X_q & = & {\cal R}_L \cap S_P.
\end{eqnarray}
Then we can write $B$ as:
\begin{eqnarray}
\nonumber B & = & H(W_Q, W_P) \\
\nonumber & = & H(W_Q, W_P, Z_q)\\
\nonumber & = & H(W_Q, Z_q) + H(W_P \mid W_Q,Z_q) \\
\label{eq:trapezoid1a} & = & H(W_Q, Z_q),
\end{eqnarray}
where the second equality holds since every repair random variable is a function of the message $M$, which in turn is determined by $(W_Q, W_P)$ via \eqref{eq:data_collection_2}, and the final equality in \eqref{eq:trapezoid1a} follows from the exact-repair condition \eqref{eq:exact_repair}. The collection $Z_q$ of random variables forms a trapezoidal region within the repair matrix, as shown in Fig.~\ref{fig:trapezium}. We refer to $(W_Q, Z_q)$, $q \in \{0, 1, \ldots, k\}$, as a {\em trapezoidal configuration}. The set $Z_q$ is said to be the {\em trapezoid} corresponding to the trapezoidal configuration $(W_Q, Z_q)$. It is clear that $Z_q = X_q \uplus S_R^P$. Next we proceed to define a {\em sub-trapezoid} of the trapezoid $Z_q$. Let $T =\{q+1, q+2, \ldots, q+t\} \subseteq P$ be a subset of $P$ of size $t$, $0 \leq t \leq k-q$. Then we define the subset $Z_{q,t}$ of $Z_q$ as:
\begin{eqnarray*}
Z_{q,t} & := & {\cal R}_L \cap S_{[d+1]}^T.
\end{eqnarray*}
The set $Z_{q,t}$ also forms a trapezoidal region in ${\cal R}$ and is called a sub-trapezoid of the trapezoid $Z_q$. Here again, we define $X_{q,t}$ as:
\begin{eqnarray*}
X_{q,t} & := & S_T \cap Z_{q,t},
\end{eqnarray*}
and it follows that $Z_{q,t} = X_{q,t} \uplus S_{R\cup (P\setminus T)}^T$. A sub-trapezoid is illustrated in Fig.~\ref{fig:sub-trapezoid}.
\begin{figure}[ht]
\centering
\includegraphics[width=2.8in]{IJICT_SubTrapezoid.pdf}
\caption{Illustration of the sub-trapezoid $Z_{q,t}$.}
\label{fig:sub-trapezoid}
\end{figure}
For every trapezoidal configuration $(W_Q,Z_q)$ indexed by $q = 0, 1, \ldots, k$, we have the identity
\begin{eqnarray} \label{eq:trapezoid2}
B & = & H(W_Q, Z_q),
\end{eqnarray}
and the corresponding inequality obtained by repeatedly applying the union bound $H(X_1,X_2) \leq H(X_1)+H(X_2)$, i.e.,
\begin{eqnarray} \nonumber
B & \leq & H(W_Q) + H(Z_q) \\
\label{eq:trapezoid_bound} & \leq & H(W_Q) + H(X_q) + H(S_R^P) \\
\label{eq:Bq_bound} & \leq & q\alpha + {k-q \choose 2}\beta + (d+1-k)(k-q)\beta .
\end{eqnarray}
We define, for $q \in \{0,1,2,\cdots, k\}$, the quantities:
\begin{eqnarray*}
B_q & := & q\alpha + {k-q \choose 2}\beta + (d+1-k)(k-q)\beta .
\end{eqnarray*}
\subsection{The Argument For Non-existence \label{sec:hnp}}
Let us consider an ER code operating at the point $(\alpha, \beta)$ satisfying $\alpha = (d-\mu)\beta$. For this value of $\alpha$, as shown below, the FR bound gives us $B_{\mu+1}$ as the upper bound on file size:
\begin{eqnarray*}
B & \leq & \sum_{i=0}^{k-1}\min \{\alpha, (d-i)\beta\} \\
& = & (\mu+1)\alpha \ + \ \sum_{i=\mu+1}^{k-1}(d-i)\beta \\
& = & (\mu+1)\alpha \ + \ \sum_{j=0}^{k-\mu-2}(d-k+1+j)\beta \\
& = & (\mu+1)\alpha \ + \ (d-k+1)(k-\mu-1)\beta \ + \ {k-\mu-1 \choose 2} \beta \\
& = & B_{\mu+1} .
\end{eqnarray*}
Thus if an ER code is optimal with respect to the FR tradeoff at the point $\alpha=(d-\mu)\beta$, from equations \eqref{eq:trapezoid2} and \eqref{eq:trapezoid_bound}, with $q=(\mu+1)$, one obtains that such a code must satisfy:
\begin{eqnarray} \label{eq:union_bd_cond}
H(Z_{\mu + 1} \mid W_{[\mu+1]}) & = & H(Z_{\mu + 1}) \ = \ {k-\mu - 1 \choose 2}\beta + (d+1-k)(k-\mu-1)\beta,
\end{eqnarray}
i.e., the union bound on $Z_{\mu + 1}$ must hold with equality. That means that all the random variables in $Z_{\mu +1 }$ are mutually independent. However, it is shown by Shah et al. in \cite{ShaRasKumRam_rbt} that this is not possible if an ER code lies at an interior point, except for the near-MSR region and the MBR point. To prove this result, the authors of \cite{ShaRasKumRam_rbt} focus on a subset $S_m^L$ of the repair matrix, where $m \in [n]$ and $L \subseteq [n]$ are arbitrarily chosen subject to the conditions $|L| =: \ell < k$ and $m \notin L$. The subset $S_m^L$ is, of course, the collection of helper data sent by the single node $m$ to the nodes in $L$. We can write
\begin{eqnarray}
\nonumber H(S_m^L) & = & H(S_m^L \mid W_L) + I(S_m^L : W_L) \\
\label{eq:ia} & \leq & H(S_m^L \mid W_L) + I(W_m : W_L).
\end{eqnarray}
It can be shown that (see \cite{ShaRasKumRam_rbt})
\begin{eqnarray} \label{eq:ia_cancel}
H(S_m^L \mid W_L) = 0 , \ \ell \geq \mu + 1,
\end{eqnarray}
and that
\begin{eqnarray} \label{eq:rel_beta}
I(W_m : W_L) = \beta , \ \ell = \mu + 1.
\end{eqnarray}
As a consequence, we have that
\begin{eqnarray} \label{eq:row_beta}
H(S_m^L ) = \beta, \ \ell = \mu + 1.
\end{eqnarray}
It follows that
\begin{eqnarray*}
H(S_m^J ) \leq \beta, \text{ for any $J \subseteq [n]\setminus\{m\}$ with $|J| \leq \mu + 1$}.
\end{eqnarray*}
In particular, this is true when $J$ is of size $|J|=2$. On the other hand, optimality with respect to the FR bound requires that each row in the trapezoidal region $Z_q$ have joint entropy equal to the number of repair random variables $S_x^y \in Z_q$ belonging to the row, times $\beta$. The bottom row of the trapezoid has $(k-\mu-1)$ entries, and thus we clearly have a contradiction whenever $(k-\mu-1)\geq 2$. The argument does not go through when $(k-\mu-1)\leq 1$, i.e., when $\mu \geq k-2$. This necessary condition on $\mu$ underlies the fact that the non-existence of ER codes does not hold in the near-MSR region. The proof given here is for the case when $\alpha=(d-\mu)\beta$ is a multiple of $\beta$. This proof can be extended to the general case $\alpha=(d-\mu)\beta - \theta$, for $0 < \theta < \beta$, as well. In the next section, we will exploit this contradiction to derive an upper bound on the file size of an ER code.
\section{An Upper Bound on the ER File Size \label{sec:bounds_file_size}}
In this section, we show that for {\em any} value of the parameter set $(n,k,d)$, the ER tradeoff lies strictly above the FR tradeoff, a result that was first established in \cite{SasSenKum_isit}. As explained in Sec.~\ref{sec:approach}, we do this by deriving a tighter bound on file size $B$ in the case of ER than is true under FR.
As mentioned in Sec.~\ref{sec:hnp}, our approach to bounding the file size $B$ is based on deriving estimates for the joint entropy of subsets of the repair matrix. First, we assume the existence of an ER code having parameters $(n,k,d),(\alpha,\beta)$ whose file size $B$ is of the form $B=\hat{B}-\epsilon$ for some $\epsilon \geq 0$, where $\hat{B}$ is the file size of an optimal FR code having the same parameter set ${\cal P}$. Next, we proceed to estimate the joint entropy of the subset $Z_q$ corresponding to a trapezoidal configuration $(W_Q, Z_q)$. We estimate the joint entropy in two different ways and show that the two estimates are in contradiction unless the value of $\epsilon$ lies above a threshold value $\epsilon_{\min}$. This allows us to set $\hat{B}-\epsilon_{\min}$ as a revised upper bound on the file size under ER. We will also show that $\epsilon_{\min}$ does not vanish as $\beta \rightarrow \infty$.
\subsection{Preliminaries \label{sec:prelim}}
Consider an optimal FR code ${\cal \hat{C}}$ possessing the same set of parameters ${\cal P}$ as the ER code ${\cal C}$. In what follows, given any deterministic or random entity associated with $\mathcal{C}$, we will use a hat to denote the corresponding entity in ${\cal \hat{C}}$. For example, $\hat{B}$ denotes the file size of ${\cal \hat{C}}$. With this, we can write
\begin{eqnarray*}
\sum_{i=0}^{k-1} \min\{\alpha,(d-i)\beta\} & = & \hat{B} \ = \ H(\hat{W}_{[k]}) \nonumber \\
& = & \sum_{i=0}^{k-1} H(\hat{W}_{i+1} \mid \hat{W}_{[i]} ) \\
& \leq & \sum_{i=0}^{k-1} \min\{\alpha,(d-i)\beta\} .
\end{eqnarray*}
It follows that in an optimal FR code ${\cal \hat{C}}$, we must have
\begin{eqnarray*}
H(\hat{W}_{i+1} \mid \hat{W}_{[i]} ) & = & \min\{\alpha,(d-i)\beta\}, \ 0 \leq i \leq (k-1) .
\end{eqnarray*}
Next, for $0 \leq i \leq k-1$, let us set:
\begin{eqnarray*}
\gamma_i & = & \min \{ \alpha, (d-i)\beta\} , \\
\omega_i & = & \gamma_i- H(W_{i+1} \mid W_{[i]}),
\end{eqnarray*}
where $\omega_i$ measures the drop in the conditional entropy $H(W_{i+1} \mid W_{[i]})$ of an ER code in comparison with its value $H(\hat{W}_{i+1} \mid \hat{W}_{[i]})$ in the case of an optimal FR code. A plot of $\gamma_i$ as a function of $i$ for a given operating point $(\alpha, \beta)$ with $\alpha = (d-\mu)\beta -\theta$, appears in Fig.~\ref{fig:gamma}. We also note the following identities:
\begin{eqnarray}
\label{eq:basic1} \epsilon & = & \sum_{i=0}^{k-1} \omega_i , \\
\label{eq:basic2} H(W_B \mid W_A) & = & \sum_{i =a}^{a+b-1} (\gamma_i - \omega_i),
\end{eqnarray}
where $A=[a]$ and $B=[a+1 \ a+b]$ and $0 \leq a \leq a+b \leq k$. The lemma below follows from these identities.
\begin{lem} \label{lem:colsum} Let $(W_Q, Z_q)$ be a trapezoidal configuration for some $q \in \{ 0, 1, \ldots, k \}$, and let $Z_{q,t} \subseteq Z_q$ be a sub-trapezoid with $0 \leq t \leq k-q$. Then
\begin{eqnarray*}
H(Z_{q,t} \mid W_Q) & \geq & \sum_{i =q}^{q+t-1} (\gamma_i - \omega_i)
\end{eqnarray*}
\end{lem}
\begin{proof} By the exact-repair condition, $H(Z_{q,t} \mid W_Q) $ is at least $H(W_{[q+1 \ q+t]} \mid W_Q) $ and the result follows from \eqref{eq:basic2}.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[height=2in]{IJICT_Drop_NodeEntropy.pdf}
\caption{The function $\gamma_i$ versus $i$ for $\alpha=(d-\mu)\beta - \theta$.}\label{fig:gamma}
\end{figure}
\subsection{Upper Bounds On Joint Conditional Entropies Of Repair Data \label{sec:ubounds_ent}}
Let $Q = [q]$, and let $M, L$ be two mutually disjoint subsets of $[d+1]\setminus Q$ with $\ell := |L|$ and $m := |M|$. Then we can write
\begin{eqnarray} \label{eq:ia_equation}
H(S_M^L \mid W_Q) & = & H(S_M^L \mid W_V, W_Q) + I(S_M^L : W_V \mid W_Q),
\end{eqnarray}
wherein we take $V \supset L$ to be a superset of $L$ with $V \cap M = \phi$ and $v:= |V|$. Our next objective is to estimate $H(S_M^L \mid W_V, W_Q)$ and $I(S_M^L : W_V \mid W_Q)$ in order to obtain an upper bound on $H(S_M^L \mid W_Q)$.
\begin{lem} \label{lem:ubound} Suppose $\alpha = (d-\mu)\beta - \theta$ with $\mu \in \{0,1,\ldots, k-1\}$ and $\theta \in [0,\beta)$ except when $\mu = k-1$. Then for $2 \leq \ell \leq v < k-q$,
\begin{eqnarray*}
H(S_M^L \mid W_V, W_Q) & \leq & \left\{ \begin{array}{lc} \ell \theta + \ell \omega_{v-1+q} , & v= \mu + 1 - q \\
\ell \omega_{v-1+q} , & v > \mu + 1 -q . \end{array} \right.
\end{eqnarray*}
\end{lem}
\begin{proof} Let $\ell_0 \in L$; by symmetry, $H(S_M^{\ell_0} \mid W_V, W_Q)$ is the same for every $\ell_0 \in L$. Define $\tilde{V} = V \setminus \{\ell_0\}$. Then we have
\begin{eqnarray*} H(S_M^L \mid W_V, W_Q) & \leq & \ell H(S_M^{\ell_0} \mid W_V, W_Q) \\
& = & \ell \{ H(S_M^{\ell_0},W_{\ell_0} \mid W_{\tilde{V}}, W_Q) - H(W_{\ell_0} \mid W_{\tilde{V}}, W_Q) \} \\
& = & \ell \{ H(S_M^{\ell_0} \mid W_{\tilde{V}}, W_Q) + H(W_{\ell_0} \mid S_M^{\ell_0}, W_{\tilde{V}}, W_Q) - H(W_{\ell_0} \mid W_{\tilde{V}}, W_Q) \}
\end{eqnarray*}
Substituting the upper bounds implied by \eqref{eq:capacity_beta} and the exact-repair condition, together with the lower bound \eqref{eq:basic2}, we obtain for the case $v-1+q > \mu$
\begin{eqnarray*}
H(S_M^L \mid W_V, W_Q) & \leq & \ell \{ m\beta + (d-v +1-q-m)\beta - (d-v +1-q)\beta + \omega_{v - 1+q} \} \\
& = & \ell \omega_{v - 1+q} \ ,
\end{eqnarray*}
and for the case $v - 1 + q = \mu$,
\begin{eqnarray*}
H(S_M^L \mid W_V, W_Q) & \leq & \ell \{ m\beta + (d-v +1-q-m)\beta - (d-v +1-q)\beta + \theta + \omega_{v - 1+q} \} \\
& = & \ell \theta + \ell \omega_{v - 1+q} \ .
\end{eqnarray*}
\end{proof}
We remark here that in \cite{Duursma2014} the quantity $H(S_M^L)$ is considered for obtaining a bound on ER file size. Our approach here is different in the sense that we estimate $H(S_M^L)$ in terms of $\{ \omega_i\}_{i=0}^{k-1}$. The second term in \eqref{eq:ia_equation} can also be easily estimated in terms of $\{\gamma_i, \omega_i \}_{i=0}^{k-1}$:
\begin{eqnarray}
\nonumber I(S_M^L :W_V \mid W_Q) & \leq & I(W_M :W_V \mid W_Q) \\
\nonumber & = & H(W_M \mid W_Q) - H(W_M \mid W_{Q\cup V}) \\
\label{eq:rel} & = & \left[ \sum_{i=q}^{q+m-1} (\gamma_i - \omega_i) \right] - \left[ \sum_{i=q+v}^{q+v+m-1} (\gamma_i - \omega_i) \right] .
\end{eqnarray}
Lemma~\ref{lem:ubound}, along with \eqref{eq:rel}, allows us to bound $H(S_M^L \mid W_Q)$ from above, given an operating point $\alpha = (d-\mu)\beta - \theta$. Calculations for the particular case of $q=0, m=1$, taking values for $v$ in $\{\mu+1, \mu+2\}$, result in the following corollary.
\begin{cor} \label{cor:rowbound} Let $\alpha = (d-\mu)\beta - \theta$. Then for $m \notin L$ and $\ell = |L|$, we have
\begin{eqnarray}
\label{eq:rb1} H(S_m^L) & \leq & \beta + (\ell -1)\theta + (\ell -1)\omega_{\mu} + (\omega_{\mu} + \omega_{\mu+1}) , \ \ 2 \leq \ell \leq \mu +1 \\
\label{eq:rb2} H(S_m^L) & \leq & 2\beta - \theta + (\ell -1)\omega_{\mu+1} + (\omega_{\mu+1} + \omega_{\mu+2}) , \ \ 2 \leq \ell \leq \mu +2 .
\end{eqnarray}
\end{cor}
\subsection{The Bound On ER File Size \label{sec:bound1}}
In this section, we make use of Lem.~\ref{lem:colsum} and Cor.~\ref{cor:rowbound} to derive an upper bound on the file size $B$ of an ER code. This will also translate to an outer bound for the ER tradeoff.
\begin{thm} \label{thm:bound1} Let $B$ denote the file size of an ER regenerating code with full parameter set ${\cal P}_f = \{(n,k,d),(\alpha,\beta)\}$. Let $\alpha = (d-\mu)\beta - \theta$. Then the ER file size $B$ is upper bounded as follows:
\begin{enumerate}
\item For $\mu=0, \ 0 < \theta < \beta$,
\begin{eqnarray*}
B & \leq & \hat{B} - \epsilon_1
\end{eqnarray*}
\item For $\mu \in \{ 1, 2, \ldots, k-3 \}, \ 0 \leq \theta < \beta$,
\begin{eqnarray*}
B & \leq & \hat{B} - \max \{ \epsilon_0, \epsilon_1 \}
\end{eqnarray*}
\item For $\mu=k-2, \ 0 \leq \theta < \left(\frac{d-k+1}{d-k+2}\right)\beta$,
\begin{eqnarray*}
B & \leq & \hat{B} - \epsilon_0,
\end{eqnarray*}
\end{enumerate}
where $\epsilon_0$ and $\epsilon_1$ are as given in Tab.~\ref{tab:eps_table}.
\end{thm}
\begin{proof} The proof is relegated to the Appendix.
\end{proof}
\begin{table}[h]
\centering
\begin{tabular}{||c|c||} \hline
\hline
& \\
Regime of $(\mu,\theta)$ & Lower bounds $\epsilon_0$ , $\epsilon_1$ on $\epsilon = \hat{B}-B$ \\
& \\
\hline
\hline
& \\
$ \begin{array}{c} \mu \in \{ 1, 2, \ldots, k-2 \} \text{ for all } \theta \\
\text{For } \mu=k-2, \ \theta < \frac{d-k+1}{d-k+2}\beta
\end{array}$ &
\large
$\begin{array}{lcl}
& & \text{Let } r_0 = \left\lfloor \frac{k-\mu}{\mu+1} \right\rfloor \\
&& \\
\epsilon_0 & = & \left\{ \begin{array}{lc} \frac{(d-k+1)(k-\mu-1)(\beta - \theta) \ - \ \theta}{(d-k+1)(k-\mu) \ + \ 1}, & k-\mu < \mu+1. \\
& \\
\frac{\left(d-\frac{(\mu+1)(r_0+3)}{2}+2 \right)r_0\mu(\beta - \theta) \ - \ \theta}{\left(d-\frac{(\mu+1)(r_0+3)}{2}+2\right)r_0(\mu+1) \ + \ 1}, & k-\mu \geq \mu+1 . \end{array} \right. \end{array}$ \\
& \\
\hline
& \\
$ \begin{array}{c} \mu \in \{ 0, 1, \ldots, k-3 \} \text{ for all } \theta \\
\text{For } \mu=0, \ \theta \neq 0
\end{array}$ &
\large
$\begin{array}{lcl}
& & \text{Let } r_1 = \left\lfloor \frac{k-\mu-1}{\mu+2} \right\rfloor \\
&& \\
\epsilon_1 & = & \left\{ \begin{array}{lc} \frac{(d-k+1)\left[(k-\mu-3)\beta \ + \ \theta\right]}{(d-k+1)(k-\mu-1) \ + \ 1}, & k-\mu-1 < \mu+2. \\
& \\
\frac{\left(d-\frac{(\mu+2)(r_1+3)}{2}+2 \right)r_1 \left[\mu\beta \ + \ \theta\right] }{\left(d-\frac{(\mu+2)(r_1+3)}{2}+2\right)r_1(\mu+2) \ + \ 1}, & k-\mu-1 \geq \mu+2. \end{array} \right. \end{array}$ \\
& \\
\hline \hline
\end{tabular}
\caption{Lower Bounds on the quantity $\hat{B}-B$}
\label{tab:eps_table}
\end{table}
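For a sense of the magnitudes involved, the sketch below (ours) evaluates $\epsilon_0$ and $\epsilon_1$ from Tab.~\ref{tab:eps_table} and the resulting repair-matrix bound of Thm.~\ref{thm:bound1}, here for $\mu \in \{1, \ldots, k-3\}$.
\begin{verbatim}
# Sketch (ours): eps0, eps1 of the table above and the repair-matrix
# bound B <= B_hat - max(eps0, eps1), shown for mu in {1, ..., k-3}.

def eps0(k, d, mu, beta, theta):
    if k - mu < mu + 1:
        return ((d-k+1)*(k-mu-1)*(beta-theta) - theta)/((d-k+1)*(k-mu) + 1)
    r0 = (k - mu) // (mu + 1)
    c = d - (mu + 1)*(r0 + 3)/2 + 2
    return (c*r0*mu*(beta - theta) - theta)/(c*r0*(mu + 1) + 1)

def eps1(k, d, mu, beta, theta):
    if k - mu - 1 < mu + 2:
        return (d-k+1)*((k-mu-3)*beta + theta)/((d-k+1)*(k-mu-1) + 1)
    r1 = (k - mu - 1) // (mu + 2)
    c = d - (mu + 2)*(r1 + 3)/2 + 2
    return c*r1*(mu*beta + theta)/(c*r1*(mu + 2) + 1)

k, d, beta, mu, theta = 7, 12, 6, 2, 0
alpha = (d - mu)*beta - theta
B_hat = sum(min(alpha, (d - i)*beta) for i in range(k))
print(B_hat - max(eps0(k, d, mu, beta, theta),
                  eps1(k, d, mu, beta, theta)))   # 360 - 3.84 = 356.16
\end{verbatim}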
\begin{cor} When $k \geq 3$, the normalized ER tradeoff is strictly away from the normalized FR tradeoff for all normalized operating points $(\bar{\alpha},\bar{\beta})$ with $\bar{\alpha} = (d-\mu)\bar{\beta} - \nu \bar{\beta}$ such that $(\mu, \nu)$ falls in the range $(\mu = 0, \ 0 \ < \ \nu \ < \ 1)$, $(\mu \in \{1,2,\ldots , k-3\}, 0 \ \leq \nu \ < \ 1)$ or $(\mu=k-2, 0 \ \leq \ \nu < \frac{d-k+1}{d-k+2})$.
\end{cor}
\begin{proof} We will show that the upper bound on the file size given in Thm.~\ref{thm:bound1} satisfies the criterion in \eqref{eq:nonvanishing}. Let
\begin{eqnarray}
\label{eq:eps} \delta = \left\{ \begin{array}{lc} \epsilon_1 & \mu = 0, \theta \neq 0 \\
\max \{\epsilon_0, \epsilon_1\} & \mu \in \{1,2,\ldots , k-3 \} \\
\epsilon_0 & \mu = k-2, \theta < \frac{d-k+1}{d-k+2}\beta \end{array} \right.
\end{eqnarray}
Let $\alpha$ be related to $\beta$ as $\alpha = (d-\mu)\beta - \theta=(d-\mu)\beta - \nu\cdot\beta, \ \ \nu \in [0,1)$ by a fixed pair $(\mu,\nu)$ that falls in the range given. Then for a code with the file size $B$,
\begin{eqnarray*}
\frac{\beta}{B} & \geq & \frac{\beta}{\hat{B} - \delta }, \ \ \ \text{(using Thm.~\ref{thm:bound1})}\\
& = & \frac{\beta}{\hat{B}} \cdot \frac{1}{1-\left(\frac{\delta}{\hat{B}} \right)} \\
& = & \frac{\beta}{\hat{B}} \cdot \frac{1}{ 1 - \left(\frac{\delta}{ \beta \sum_{i=0}^{k-1} \min \{ (d-\mu)-\nu , (d-i) \} } \right)} \\
& \geq & \frac{\beta}{\hat{B}} + \delta_0,
\end{eqnarray*}
for some $\delta_0 > 0$, determined by the constants $\frac{\epsilon_0}{\beta}$ and $\frac{\epsilon_1}{\beta}$. It can be seen that $\frac{\epsilon_0}{\beta}$ and $\frac{\epsilon_1}{\beta}$ are independent of $\beta, B$ and dependent only on the fixed values of $\mu,\nu,k$ and $d$. This completes the proof.
\end{proof}
\section{Discussion On Various Known Upper Bounds On ER File Size \label{sec:oth_bounds}}
In this section, we briefly review the results from \cite{Tia}, \cite{Tia_544}, \cite{Duursma2014}, \cite{Duursma2015}, \cite{MohTan}, all of which involve upper bounds on the ER file size. While the bounds provided in \cite{Duursma2014}, \cite{Duursma2015} are not explicit, those presented in \cite{Tia}, \cite{Tia_544}, \cite{MohTan} take the form of explicit algebraic expressions.
\subsection{Review of the Bounds in \cite{Tia},\cite{Tia_544}}
In \cite{Tia}, Tian characterized the optimal ER file size for the case of $(n,k,d)=(4,3,3)$. This was the first result establishing a non-vanishing gap for ER file size in comparison with the optimal FR file size. For the case of $(n,k,d)=(4,3,3)$, there are four bounds
\begin{eqnarray} \label{eq:433_trapezoid}
B & \leq & B_q, \ q = 0,1,2,3 ,
\end{eqnarray}
that follow from considering all possible trapezoidal configurations. For a given operating point $\alpha = (d-\mu)\beta -\theta$, one of these bounds dominates the others. By suitably modifying the information theory inequality prover software (see \cite{ITIP}, \cite{Yeu}), Tian was able to establish a bound
\begin{eqnarray*}
3B & \leq & 4 \alpha + 6 \beta,
\end{eqnarray*}
that is different from \eqref{eq:433_trapezoid}. Recently in \cite{Tia_544}, Tian made further progress with his computational approach to provide an upper bound on the ER file size for $(n,k,d)=(5,4,4)$. In both the cases of $(4,3,3)$ and $(5,4,4)$, the bounds are achieved using the well-known class of layered codes~\cite{TiaSasAggVaiKum}. These results are made part of the online collection of ``Solutions of Computed Information Theoretic Limits (SCITL)'' hosted at \cite{SCITL}.
\subsection{Review of the Bound in \cite{Duursma2014} \label{sec:duursma1}}
In the second of two bounds presented in~\cite{Duursma2014}, Duursma considers the region $Z_q$ in a trapezoidal configuration $(Q, Z_q)$, and tiles the region using rectangular blocks corresponding to random variables $S_M^L$, with $m := |M|$, $\ell := |L|$. This approach is an extension of the tiling-with-line-segments method, introduced in \cite{SasSenKum_isit} and used in the present paper in the derivation of Thm.~\ref{thm:bound1}. Duursma extends the upper bound given in \cite{SasSenKum_isit} to obtain a bound on $H(S_M^L)$, involving entropy expressions having a negative coefficient. Various carefully-chosen alternative bounds on $B$ are used to cancel out these negative terms leading to the improved bound:
\begin{eqnarray}
\label{eq:gauss} B + \sum\limits_{(M,L) \in {\cal M}} \ell B \le B_q + \sum\limits_{(M,L) \in {\cal M}} (B_{r+m-1} + (\ell - 1)(B_{r+m-2} - \beta)),
\end{eqnarray}
where $m:=|M|$, $\ell:=|L|$ and $r \ge \ell$ for every choice of $(M,L)$. In \eqref{eq:gauss}, ${\cal M}$ denotes a set of possible tilings of the trapezoidal region $Z_q$ using rectangular blocks, and $B_q$ remains as defined in Section~\ref{sec:trapezium}. To obtain the best possible explicit bound, one would then proceed to minimize this expression over all possible tilings. It can easily be checked that the bound in \eqref{eq:gauss} is tighter than the one given in \eqref{eq:Bq_bound}, by a difference of at most $\beta$.
\subsection{Review of the Bound in \cite{Duursma2015} \label{sec:duursma2}}
In \cite{Duursma2015}, Duursma augments the set of node random variables $\{W_i\}_{i=1}^{k}$ with another set of random variables $W'_{k+u}$ for $1 \leq u \leq \nu$ satisfying
\begin{eqnarray}
\label{eq:aux_var} H(S_i^j | W'_{k+u}) \le H(S_i^j | W_{[i+1, k]} W'_{[k+1, k+u-1]}) \text{ for } 1\leq i < j \leq p,
\end{eqnarray}
for a given value of $p$, $0 \leq p \leq k$. The bound on file size $B$ is obtained as
\begin{eqnarray*}
(\nu + 1)B \le (\nu + 1)B_{k-p} + \sum\limits_{u = 1}^{\nu} \left(H(W'_{k+u}) - {p \choose 2}\beta \right),
\end{eqnarray*}
where $B_{k-p}$ is as defined earlier. This results, in general, in an implicit bound, as it is not clear how the random variables $\{W'_{k+u}\}_{u=1}^{\nu}$ can be constructed. However, restricting to linear codes, the author is able to construct the $\{W'_{k+u}\}$, resulting in an explicit bound for every parameter set $(n,k,d)$. This bound matches the one proved in \cite{PraKri_isit} for the special case of $(k+1,k,k)$ linear ER codes.
\subsection{Review of the Bound in \cite{MohTan} \label{sec:mohajer}}
In this section, we give a complete description\footnote{We have simplified the proof to some extent, and therefore certain arguments differ from what is presented in \cite{MohTan}.} of the proof of the bound due to Mohajer et al. in \cite{MohTan}. We start by recalling the bound given in \eqref{eq:trapezoid2} for a trapezoidal configuration $(W_Q, Z_q)$,
\begin{eqnarray}
\nonumber B & \leq & H(W_Q) + H(Z_q \mid W_Q) \\
\label{eq:soh_0}& = & H(W_Q) + H(X_q, S_R^P \mid W_Q),
\end{eqnarray}
where the sets $P$, $Q$, and $R$ are as defined in Sec.~\ref{sec:trapezium}. For convenience of notation, we modify the indexing of elements in sets $P$, $Q$ and $R$, without making any change in their respective sizes. Thus the sets $Q, P, R$ are defined by the same value of $q$, and hence the bound in \eqref{eq:trapezoid2} remains unaltered. With respect to the modified indexing, $Q = \{ -1, -2, \ldots, -q \}$, $P = \{ 1, 2, \ldots, p:=k-q \}$ and $R = \{ k+1, k+2, \ldots, d+1 \}$. Continuing from \eqref{eq:soh_0}, we write
\begin{eqnarray}
\nonumber B & \leq & H(W_Q) + H(X_q, S_R^P \mid W_Q) \\
\label{eq:soh_1} &\leq & q \alpha + \underbrace{\sum\limits_{i=1}^p H\left(S_i^{\left[i-1\right]} \mid W_Q\right)}_{{\cal R}(p)} + H\left(S_R^P \mid W_Q \right).
\end{eqnarray}
Instead of invoking the union bound as done in \eqref{eq:trapezoid_bound}, the entropic term ${\cal R}(p) := \sum\limits_{i=1}^p H\left(S_i^{\left[i-1\right]} | W_Q\right)$ is canceled out with the help of other expressions for $B$. In \eqref{eq:soh_2} that follows, the authors over-count conditional node entropy $H(W_i \mid W_{[i-1]})$ as $\alpha$, and later subtract out the error introduced in doing so. This leads to a different expression for $B$:
\begin{eqnarray}
\label{eq:soh_2} B &=& H(W_Q) + \sum\limits_{i=1}^p H(W_i \mid W_Q) - \sum\limits_{i=1}^p I(W_i ; W_{[i-1]} \mid W_Q) \nonumber \\
& \leq & q\alpha + p \alpha - \sum\limits_{i=1}^p I(S_i^{[i-1]} ; S_{[i-1]}^i \mid W_Q) \nonumber \\
& = & k\alpha - \underbrace{\sum\limits_{i=1}^p H\left(S_i^{\left[i-1\right]} \mid W_Q\right)}_{{\cal R}(p)} - \underbrace{\sum\limits_{i=1}^p H\left(S_{\left[i-1\right]}^i \mid W_Q\right)}_{{\cal C}(p)} + \underbrace{\sum\limits_{i=1}^p H\left(S_{\left[i-1\right]}^i, S_i^{\left[i-1\right]} \mid W_Q\right)}_{{\cal J}(p)}.
\end{eqnarray}
While \eqref{eq:soh_2} allows cancellation of ${\cal R}(p)$ in \eqref{eq:soh_1}, it introduces new entropic terms ${\cal C}(p)$ and ${\cal J}(p)$. A third expression for $B$ is obtained by over-counting entropy of columns in the trapezoidal region $Z_q$ using union bound, and then subtracting out the error introduced in doing so.
\begin{eqnarray}
\nonumber B & \leq & H(W_Q, S_{[d+1]}^P) \nonumber \\
\nonumber & \leq & q\alpha + \sum\limits_{i=1}^p H(S_{[d+1]}^i | W_Q ) - \sum\limits_{i=1}^p I\left(S_{[d+1]}^i; S_{[d+1]}^{[i-1]} \mid W_Q\right) \\
\label{eq:soh_31} & \leq & q\alpha + \sum\limits_{i=1}^p H\left(S_{[i-1]}^i \mid W_Q\right) + \sum\limits_{i=1}^p H\left(S_{[i+1 \ d+1]}^i \mid W_Q\right) - \sum\limits_{i=1}^p I\left(S_{[d+1]}^i; S_{[d+1]}^{[i-1]} \mid W_Q\right).
\end{eqnarray}
The following straightforward lemma is useful in producing a lower bound for $I\left(S_{[d+1]}^i; S_{[d+1]}^{[i-1]} \mid W_Q\right)$.
\begin{lem} \label{lem:useful} Let $X,Y,Z,U$ be random variables such that $Z \ = \ f_1(X,U) \ = \ f_2(Y,U)$ for some deterministic functions $f_1$, $f_2$. Then
\begin{eqnarray*}
I(X:Y \mid U) & \geq & H(Z \mid U).
\end{eqnarray*}
\end{lem}
By invoking Lem.~\ref{lem:useful} along with identifying $Z = \{S_{[i-1]}^i, S_i^{\left[i-1\right]} \}$, $X = S_{[d+1]}^i$, $Y = S_{[d+1]}^{[i-1]}$ and $U = W_Q$, it follows that
\begin{eqnarray}
\label{eq:soh_32} I\left(S_{[d+1]}^i; S_{[d+1]}^{[i-1]} | W_Q\right) & \geq & H\left(S_{[i-1]}^i, S_i^{\left[i-1\right]} | W_Q\right),
\end{eqnarray}
and substituting \eqref{eq:soh_32} back in \eqref{eq:soh_31}, the authors obtain the bound
\begin{eqnarray}
\label{eq:soh_3} B & \leq & q \alpha + \underbrace{\sum\limits_{i=1}^p H\left(S_{\left[i-1\right]}^i \mid W_Q\right)}_{{\cal C}(p)} + \sum\limits_{i=1}^p H\left(S_{\left[i+1 \ d+1\right]}^i \mid W_Q\right) - \underbrace{\sum\limits_{i=1}^p H\left(S_{[i-1]}^i, S_i^{\left[i-1\right]} \mid W_Q\right)}_{{\cal J}(p)}.
\end{eqnarray}
Summation of \eqref{eq:soh_1}, \eqref{eq:soh_2} and \eqref{eq:soh_3} eliminates ${\cal R}(p)$, ${\cal C}(p)$ and ${\cal J}(p)$, and results in the bound:
\begin{eqnarray}
\label{eq:soh_final0}3B &\le& (3k-2p)\alpha + \sum\limits_{i=1}^p H\left(S_{\left[i+1 \ d+1\right]}^i \mid W_Q\right) + H\left(S_R^P \mid W_Q \right).
\end{eqnarray}
By applying union bound, it follows that
\begin{eqnarray}
\label{eq:soh_final}B &\le & \min_{0 \leq p \leq k} \frac{(3k-2p)\alpha + \frac{p(2(d-k)+p+1)\beta}{2} + (d-k+1)\min\{\alpha, p\beta\} }{3}.
\end{eqnarray}
To our knowledge, the bound in \eqref{eq:soh_final} due to Mohajer et al. remains the best known upper bound on ER file size in the region away from the MSR point.
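The minimization in \eqref{eq:soh_final} is over at most $k+1$ values of $p$ and is therefore trivial to carry out; the Python sketch below (ours) does so.
\begin{verbatim}
# Sketch (ours): the Mohajer-Tandon bound of eq. (soh_final).

def mohajer_tandon(k, d, alpha, beta):
    return min(((3*k - 2*p)*alpha
                + p*(2*(d - k) + p + 1)*beta/2
                + (d - k + 1)*min(alpha, p*beta))/3
               for p in range(k + 1))

# Example with k = d = 5, alpha = 10, beta = 4; the FR bound gives 42.
print(mohajer_tandon(k=5, d=5, alpha=10, beta=4))   # 40.0
\end{verbatim}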
\section{An Improved Upper Bound on ER File Size \label{sec:bound2}}
In this section, we first propose an improvement over the bound in \cite{MohTan} that is described in Sec.~\ref{sec:mohajer}. The authors of \cite{MohTan} apply the union bound on the last two terms in \eqref{eq:soh_final0} to obtain the final bound. But it is possible to avoid the union bound for the term $H\left(S_R^P \mid W_Q \right)$ when $d \gg k$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=2.8in]{IJICT_Imp_Soh_Bound.pdf}
\caption{The splitting up of the region corresponding to $S_R^P$. In this example, $a = 1$, $b > 0$.} \label{fig:soh_imp}
\end{center}
\end{figure}
Fig.~\ref{fig:soh_imp} illustrates the region $S_R^P$ as it is viewed on the repair matrix. The rectangular region $S_R^P$, denoted by $\Gamma$, is of width $p$ and height $(d-k+1)$. Let us write
\begin{eqnarray*}
d-k+1 = a(p-1) + b, \ 0 \leq b < (p-1).
\end{eqnarray*}
Then $\Gamma$ can be split into $(a+1)$ sub-rectangles $\Gamma_1, \Gamma_2, \ldots, \Gamma_{a+1}$ of equal width $p$, and $\Gamma_i, 1 \leq i \leq a$ have the same height $(p-1)$. The last sub-rectangle $\Gamma_{a+1}$ is of height $b$, and it vanishes in the case $b=0$. Each rectangle $\Gamma_i, 1 \leq i \leq a$ is further split into two isosceles right triangles $\Gamma_{i1}$, $\Gamma_{i2}$ of base $(p-1)$ as illustrated in Fig.~\ref{fig:soh_imp}. By symmetry, we can write
\begin{eqnarray}
\nonumber H(S_R^P | W_Q) &\leq & a H\left( \Gamma_1 | W_Q \right) + H(\Gamma_{a+1}|W_Q)\\
\nonumber & \leq & 2a H\left( \Gamma_{11} | W_Q \right) + H(\Gamma_{a+1}|W_Q)\\
\label{eq:imp_soh_1}& \leq & 2a \sum_{i=1}^{p} H\left( S_i^{[i-1]} | W_Q \right) + b\min\{\alpha, p\beta\} .
\end{eqnarray}
We improve upon the bound in \eqref{eq:soh_1} by substituting \eqref{eq:imp_soh_1}, and obtain that
\begin{eqnarray}
\label{eq:soh_rect} B &\le& q \alpha + (1+2a)\sum\limits_{i=1}^p H\left(S_i^{\left[i-1\right]} | W_Q\right) + b\min\{\alpha, p\beta\}.
\end{eqnarray}
This modification only affects the coefficient of the term ${\cal R}(p)$. The cancellation of the term ${\cal R}(p)$ is possible by appropriately scaling the bounds in \eqref{eq:soh_2} and \eqref{eq:soh_3}. This results in an improved bound whenever $a \geq 1$, and is stated in the theorem below. We refer to this bound as the {\em improved Mohajer-Tandon bound}.
\begin{thm} \label{thm:bound2} The ER file size $B$ of regenerating code with full-parameter set ${\cal P}_f = \{(n,k,d),(\alpha,\beta)\}$ is bounded by
\begin{eqnarray}
\label{eq:soh_improved} B & \leq & \min_{0 \leq p \leq k} \frac{\alpha(2(k-p)(1+a)+k(1+2a)) + b\min\{\alpha, p\beta\} + \frac{(1+2a)p(2(d-k)+p+1)\beta}{2} }{3+4a},
\end{eqnarray}
where $d-k+1 = a(p-1) + b$ and $0 \leq b < (p-1)$.
\end{thm}
We remark that the improved Mohajer-Tandon bound relies upon the same techniques introduced by Mohajer et al., namely, coming up with various expressions for $B$ that allow one to cancel out entropic terms that are otherwise difficult to estimate. Our incremental contribution is limited to identifying the symmetry in certain entropic terms, as seen in the pictorial depiction on a repair matrix, and leveraging this symmetry to avoid certain union bounds. When $d > k$, the bound in Thm.~\ref{thm:bound2} leads to an outer bound on the normalized ER tradeoff that lies above the one due to \eqref{eq:soh_final}. A principal result of the paper, stated in Thm.~\ref{thm:bound3}, follows by combining Thm.~\ref{thm:bound1} and Thm.~\ref{thm:bound2}.
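A sketch (ours) of the minimization in \eqref{eq:soh_improved} follows; for $p \leq 1$ the decomposition $d-k+1 = a(p-1)+b$ is unavailable, and we fall back to $a=0$, $b=d-k+1$, which recovers the corresponding term of \eqref{eq:soh_final}.
\begin{verbatim}
# Sketch (ours): the improved Mohajer-Tandon bound of Thm. bound2.

def improved_mohajer_tandon(k, d, alpha, beta):
    best = float('inf')
    for p in range(k + 1):
        a, b = divmod(d - k + 1, p - 1) if p >= 2 else (0, d - k + 1)
        val = (alpha*(2*(k - p)*(1 + a) + k*(1 + 2*a))
               + b*min(alpha, p*beta)
               + (1 + 2*a)*p*(2*(d - k) + p + 1)*beta/2)/(3 + 4*a)
        best = min(best, val)
    return best

# Example with d > k: eq. (soh_final) gives 22.0 at these parameters.
print(improved_mohajer_tandon(k=3, d=5, alpha=8, beta=2))  # ~21.43
\end{verbatim}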
\section{A Dual-Code-Based Approach To Bounding the ER File Size for Linear Codes\label{sec:linear_app}}
In this section, we investigate the maximum possible ER file size under the restricted setting of linear regenerating codes. Let ${\cal C}_{\text{lin}}$ denote a linear ER code with full parameter set ${\cal P}_f = \{(n, k, d), (\alpha, \beta)\}$. We will continue to use $B$ to denote the file size. By linear, we mean that (a) the encoding mapping that converts the $B$ message symbols to $n\alpha$ coded symbols is linear, (b) the mapping that converts the node data into the repair data that is transmitted during the repair of a failed node is linear, and furthermore, (c) the mappings that are involved during data collection from a set of $k$ nodes and the regeneration of a failed node using repair data from a set of $d$ nodes are linear. A linear regenerating code can be viewed as a linear block code of length $n\alpha$ over $\mathbb{F}$ such that every set of $\alpha$ symbols (taken in order, without loss of generality) is bunched together to correspond to a node.
\subsection{The Parity-Check Matrix And Its Properties\label{sec:pc}}
Since ${\cal C}_{\text{lin}}$ is a linear code, we can associate a generator matrix to the code. Let $G$ of size $(B \times n\alpha)$ denote a generator matrix of ${\cal C}_{\text{lin}}$. Without loss of generality, we assume that the first $\alpha$ columns of $G$ generate the contents of the first node, the second $\alpha$ columns of $G$ generate the contents of the second node, and so on. The first $\alpha$ columns taken together will be referred to as the first thick column of $G$. Similarly, the second thick column consists of columns from $\alpha+1$ to $2\alpha$, and so on. Overall, we will have $n$ thick columns in $G$. Let $H$ denote a parity-check matrix having size $(n\alpha - B) \times n\alpha$. The row-space of $H$ is the dual code of ${\cal C}_{\text{lin}}$. The definition of thick columns directly carries over to $H$. For any set $S \subseteq [n]$, we write $H|_S$ to denote the restriction of $H$ to the thick columns indexed by the set $S$. From definitions, we have that
\begin{eqnarray} \label{eq:B_rank}
B & = & \textsl{rank}(G) \ = \ n\alpha - \textsl{rank}(H) .
\end{eqnarray}
By \eqref{eq:B_rank}, it is sufficient to obtain a lower bound on $\textsl{rank}(H)$ to bound $B$ from above. This is precisely the approach taken here.
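A toy example (ours) illustrating the rank identity \eqref{eq:B_rank}: a single-parity-check code over $\mathbb{F}_2$ with $n=4$ nodes and $\alpha = 1$. This is only an illustration of the counting, not an actual regenerating code.
\begin{verbatim}
# Sketch (ours): checking B = rank(G) = n*alpha - rank(H) on a toy
# single-parity-check code over GF(2); rows are encoded as bitmasks.

def rank_gf2(rows):
    basis = {}                          # leading bit -> basis vector
    rank = 0
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h not in basis:
                basis[h] = r
                rank += 1
                break
            r ^= basis[h]
    return rank

G = [0b1100, 0b0110, 0b0011]            # generator matrix rows
H = [0b1111]                            # parity check: symbols sum to 0
n, alpha = 4, 1
print(rank_gf2(G), n*alpha - rank_gf2(H))   # 3 3
\end{verbatim}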
In the following two lemmas, we will translate the properties of data collection and exact repair into properties of the parity-check matrix $H$. We remark here that these observations were already made in \cite{Duursma2014}.
\begin{lem}[Data Collection] \label{lem:H_datacollection} Let $H$ be a parity-check matrix of an ER linear regenerating code. Then $\textsl{rank}\left({H|_S}\right) = (n-k)\alpha$, for any $S \subseteq [n]$ such that $|S| = n-k$.
\end{lem}
\begin{proof}
This is a re-statement of Part $(1)$ of Proposition $2.1$ of \cite{Duursma2014}, and is equivalent to the data collection property.
\end{proof}
\begin{lem}[Exact Repair] \label{lem:H_repair}
Assume that $d= n-1$. Then the row space of $H$ of an ER linear regenerating code contains a collection of
$n\alpha$ vectors that can be arranged as the rows of an $(n\alpha \times n\alpha)$ matrix $H_{repair}$, which can be written in the block-matrix form:
\begin{eqnarray} \label{eq:Hrepair}
H_{repair} & = & \left[ \begin{array}{c|c|c|c} A_{1,1} & A_{1,2} & & A_{1,n} \\
\hline &&& \\
A_{2,1} & A_{2,2} & \hdots & A_{2, n} \\
\hline &&& \\
& & \ \ \vdots \ \ & \\
\hline &&& \\
A_{n,1} & A_{n,2} & & A_{n,n} \end{array}
\right],
\end{eqnarray}
where $A_{i,i}$ is defined to be the identity matrix $I_{\alpha}$ of size $\alpha$ and $A_{i,j}$ denotes an $\alpha \times \alpha$ matrix such that $\text{rank}\left(A_{i, j}\right) \leq \beta, 1 \leq i, j \leq n, i \neq j$.
\end{lem}
\begin{proof} The first $\alpha$ rows of the form
\begin{eqnarray*}
\left[ \begin{array}{c|c|c|c} I_{\alpha} & A_{1,2} & \cdots & A_{1,n} \end{array}
\right]
\end{eqnarray*}
can be obtained from the parity-check equations that are necessitated by the exact-repair requirement of the first node. In a similar manner, there must be parity-check equations that allow repair of every other node. These parity-check equations can be arranged to obtain the matrix $H_{repair}$. The requirements on the ranks of the sub-matrices $A_{i,j}$ follow from the definition of regenerating codes and the fact that $d=n-1$. In fact, the proof is indicated in Part $(2)$ of Proposition $2.1$ of \cite{Duursma2014}.
\end{proof}
For the case of $d=k=n-1$, the matrix $H_{repair}$ as given in Lem.~\ref{lem:H_repair} satisfies the condition given in Lem.~\ref{lem:H_datacollection}, and therefore $H_{repair}$ by itself defines an $(n, k=n-1, d=n-1)(\alpha, \beta)$ regenerating code. Since $\textsl{rank}(H) \geq \textsl{rank}(H_{repair})$, and our interest lies in regenerating codes having maximal file size, we will assume that $H = H_{repair}$ while deriving a lower bound on $\textsl{rank}(H)$ for the case of $d=k=n-1$.
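The block structure of $H_{repair}$ is easy to instantiate numerically. The following Python sketch builds a matrix with identity diagonal blocks and random rank-$\beta$ off-diagonal blocks; it works over the reals purely for illustration, and a random instance need not satisfy the data-collection property of Lem.~\ref{lem:H_datacollection}.
\begin{verbatim}
import numpy as np

def random_h_repair(n, alpha, beta, seed=0):
    # A_{i,i} = I_alpha on the diagonal; off-diagonal blocks
    # A_{i,j} = U V with U of size (alpha x beta) and V of size
    # (beta x alpha), so that rank(A_{i,j}) <= beta.
    rng = np.random.default_rng(seed)
    blocks = [[np.eye(alpha) if i == j else
               rng.standard_normal((alpha, beta))
               @ rng.standard_normal((beta, alpha))
               for j in range(n)] for i in range(n)]
    return np.block(blocks)

H = random_h_repair(n=5, alpha=4, beta=1)
print(np.linalg.matrix_rank(H))
\end{verbatim}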
\subsection{A Proof Of FR Bound For ER Linear Codes Using Dual Code \label{sec:dimakis_via_dual}}
In this section, we will present a simple proof of the FR bound \eqref{eq:cut_set_bd} for ER linear regenerating codes. Our proof of Theorem \ref{thm:new_bound_k_eq_d} will build upon the proof of \eqref{eq:cut_set_bd} presented here.
As earlier, let $\mathcal{C}_{\text{lin}}$ denote an $(n, k, d = n-1) (\alpha, \beta)$ linear regenerating code, and let the matrix $H$ generate the dual code of $\mathcal{C}_{\text{lin}}$. The key idea of the proof is to obtain a lower bound on the column rank of the matrix $H$. We use the notation $\rho(.)$ to denote the rank of a matrix. Let us define the quantities $\delta_j, 1 \leq j \leq n$ as follows:
\begin{eqnarray}
\delta_1 & = & \rho(H|_{[1]}), \label{eq:proof_diamkis_aa}\\
\delta_j & = & \rho(H|_{[j]}) - \rho(H|_{[j-1]}),\ 2 \leq j \leq n. \label{eq:proof_diamkis_0}
\end{eqnarray}
Next, we make the following claims:
\begin{eqnarray}
\label{eq:proof_dimakis_1} \delta_{j} &=& \rho(A_{j,j}) \\
&=& \alpha, \ \ 1 \leq j \leq n-k \\
\label{eq:proof_diamkis_2}
\delta_{j} & \geq & (\alpha - (j-1)\beta)^+, \ n-k + 1 \leq j \leq n.
\end{eqnarray}
Here we have set $a^+ := \text{max}(a, 0)$. The first claim \eqref{eq:proof_dimakis_1} follows from the fact that any $n-k$ thick columns of $H$ have rank $(n-k)\alpha$, as required by Lem.~\ref{lem:H_datacollection}. To show the second claim \eqref{eq:proof_diamkis_2}, one first focuses on the $j^{\text{th}}$ thick row of $H_{repair}$. By the $j^{\text{th}}$ thick row, we mean the set of rows starting from $(j-1)\alpha + 1$ and reaching up to $j\alpha$ of $H_{repair}$. Next, observe that
\begin{eqnarray}
\delta_j & \geq & \left(\rho(A_{j,j}) - \sum_{\ell=1}^{j-1}\rho(A_{j,\ell})\right)^+ \label{eq:dimakis_proof_3a} \\
\nonumber & = & \left(\rho(I_{\alpha}) - \sum_{\ell=1}^{j-1}\rho(A_{j,\ell})\right)^+ \\
& \geq & \left(\alpha - (j-1)\beta \right)^+, \ n-k + 1 \leq j \leq n, \label{eq:dimakis_proof_3}
\end{eqnarray}
where \eqref{eq:dimakis_proof_3} holds true since $\rho(A_{j,\ell}) \leq \beta$ by Lem.~\ref{lem:H_repair}. Thus we have shown \eqref{eq:proof_diamkis_2}. Next, invoking \eqref{eq:proof_dimakis_1} and \eqref{eq:dimakis_proof_3}, we bound the column-rank of $H$ from below as:
\begin{eqnarray}
\text{rank}(H) & = & \sum_{j=1}^{n}\delta_j \label{eq:proof_dimakis_rkH_delta_eq} \\
& \geq & (n-k)\alpha + \sum_{j=n-k+1}^{n}\left(\alpha - (j-1)\beta \right)^+. \label{eq:dimakis_proof_4}
\end{eqnarray}
An illustration of arriving at \eqref{eq:dimakis_proof_3a} and \eqref{eq:dimakis_proof_4} is given in Fig. \ref{fig:rankH_computation}.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{IJICT_RankHcomputation.pdf}
\caption{A lower bound on $\rho(H)$, for the case of $(n = 5, k = 4, d = 4)$. Each term indexed by $j$ in the summation corresponds to a lower bound on the incremental rank $\delta_j$. This bound is obtained by looking at the sub-matrices in the $j^{\text{th}}$ thick row.}
\label{fig:rankH_computation}
\end{figure}
Consequently, it follows that
\begin{eqnarray}
B & = & n\alpha - \rho(H) \\
& \leq & n\alpha - (n-k)\alpha - \sum_{j=n-k+1}^{n}\left(\alpha - (j-1)\beta \right)^+ \\
& = & \sum_{j=0}^{k-1}\min(\alpha, (n-1-j)\beta) \label{eq:dimakis_proof_6}.
\end{eqnarray}
For $d< n-1$, the proof follows by first puncturing the code on any $(n-d-1)$ nodes to form an $(n'=d+1,k,d)$ ER linear regenerating code, and then invoking the above analysis on the resultant code. The way we express the incremental ranks $\{\delta_j\}$ in \eqref{eq:proof_dimakis_1} and \eqref{eq:dimakis_proof_3a} will turn out to be useful in deriving a strong upper bound on the file size of linear ER codes in Sections \ref{sec:544} and \ref{sec:main_proof}.
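For reference, \eqref{eq:dimakis_proof_6}, combined with the puncturing argument, reduces to the familiar cut-set form $B \leq \sum_{j=0}^{k-1}\min(\alpha, (d-j)\beta)$, which the following minimal Python sketch computes (the function name is ours).
\begin{verbatim}
def fr_bound(k, d, alpha, beta):
    # Cut-set (FR) bound on the file size.
    return sum(min(alpha, (d - j) * beta) for j in range(k))

print(fr_bound(k=4, d=4, alpha=4, beta=1))  # prints 10
\end{verbatim}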
\section{An Upper Bound On The File Size Of Linear ER Codes For The Case $(n=5,k=4,d=4)$\label{sec:544}}
In this section, we obtain a new upper bound on the file size of a linear ER code for parameters $(n=5,k=4,d=4)$. Taken together with the achievability via layered codes (see Sec.~\ref{sec:ach_lin}), this characterizes the tradeoff for this case. As mentioned earlier, our technique is to lower bound the rank of the parity-check matrix $H$, leading to an upper bound on the file size by \eqref{eq:B_rank}. The lower bound on $\rho(H)$ that we derive here is in general tighter than what is obtained in \eqref{eq:dimakis_proof_4}. The principal result of this section is stated in Thm.~\ref{thm:544} below. Most of the ideas developed in the proof of Thm.~\ref{thm:544} will be used in the next section to prove a general result for the case of $(n,k=n-1,d=n-1)$.
\begin{thm} \label{thm:544} Consider an ER linear regenerating code $\mathcal{C}_{\text{lin}}$ with full-parameter set $\{(n=5, k=4, d=4), (\alpha, \beta)\}$. Let $H$ denote a parity-check matrix of $\mathcal{C}_{\text{lin}}$. Then
\begin{eqnarray} \label{eq:bound_rank_H_544}
\rho(H) & \geq & \left \{ \begin{array}{c} \left \lceil \frac{10(\alpha - \beta)}{3} \right \rceil, \ 2\beta \leq \alpha \leq 4\beta \\ \\
\left \lceil \frac{15\alpha - 10\beta}{6} \right \rceil, \ \frac{4}{3}\beta \leq \alpha \leq 2\beta \\ \\
2\alpha - \beta, \ \beta \leq \alpha \leq \frac{4}{3}\beta \end{array} \right. .
\end{eqnarray}
\end{thm}
Note that $\alpha = \beta$ and $\alpha = 4\beta$ correspond to the MSR and MBR points, respectively, for the case of $(n=5,k=4,d=4)$. Next, we observe that for a fixed $\beta$, the bound given in \eqref{eq:bound_rank_H_544} corresponds to a piecewise linear curve with $\alpha$ on the $X$-axis and $\rho(H)$ on the $Y$-axis. The non-linear ceiling operation $\lceil.\rceil$ is used in \eqref{eq:bound_rank_H_544} to reflect the integrality of $\rho(H)$; it may also be dropped, since $\rho(H)$ always takes integer values and any real-valued lower bound therefore implies its ceiled counterpart. We can view \eqref{eq:bound_rank_H_544} as a combination of the following three inequalities, without paying attention to the limited range of $\alpha$:
\begin{eqnarray}
\rho(H) & \geq & \frac{10(\alpha - \beta)}{3} \label{eq:proof_544_line1}\\
\rho(H) & \geq & \frac{15\alpha - 10\beta}{6} \label{eq:proof_544_line2} \\
\rho(H) & \geq & 2\alpha - \beta \label{eq:proof_544_en}.
\end{eqnarray}
Here \eqref{eq:proof_544_en} follows from \eqref{eq:dimakis_proof_4} since $\alpha \geq \beta$ and $\left(\alpha - (j-1)\beta \right)^+\ge 0$ for $3 \leq j \leq 5$. Therefore, we need to prove only the remaining two inequalities \eqref{eq:proof_544_line1} and \eqref{eq:proof_544_line2} to complete the proof of Thm.~\ref{thm:544}. We proceed to prove them by obtaining two lower bounds on the incremental thick-column rank of $H$ that are stronger than the one given in \eqref{eq:dimakis_proof_3}. To make this point clear upfront, a comparison of the bounds in \eqref{eq:dimakis_proof_4} and \eqref{eq:bound_rank_H_544} is shown in Fig.~\ref{fig:544_rankH_comparison}.
\begin{figure}[h]
\centering
\includegraphics[width=4.5in]{IJICT_RankH_544.pdf}
\caption{Comparison of the lower bounds on $\rho(H)$ as a function of $\alpha$, for the case of $(n = 5, k = 4, d = 4)$ with $\beta = 48$. The dashed and the solid lines correspond to the cases of functional and exact repair, respectively. See \eqref{eq:dimakis_proof_4} and \eqref{eq:bound_rank_H_544} for the corresponding equations. Here, lines $1$, $2$ and $3$ are given by \eqref{eq:proof_544_line1}, \eqref{eq:proof_544_line2} and \eqref{eq:proof_544_en}, respectively.}
\label{fig:544_rankH_comparison}
\end{figure}
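A minimal Python sketch of this comparison is given below; it evaluates the maximum of the three linear bounds (the ceilings being valid since $\rho(H)$ is an integer) against the FR-based bound \eqref{eq:dimakis_proof_4}, and the function names are ours.
\begin{verbatim}
from math import ceil

def er_rank_lb_544(alpha, beta):
    # Maximum of the three bounds (proof_544_line1)-(proof_544_en).
    return max(ceil(10 * (alpha - beta) / 3),
               ceil((15 * alpha - 10 * beta) / 6),
               2 * alpha - beta)

def fr_rank_lb(n, k, alpha, beta):
    # The functional-repair lower bound (dimakis_proof_4).
    return (n - k) * alpha + sum(max(alpha - (j - 1) * beta, 0)
                                 for j in range(n - k + 1, n + 1))

alpha, beta = 96, 48  # a point in the regime 2*beta <= alpha <= 4*beta
print(er_rank_lb_544(alpha, beta), fr_rank_lb(5, 4, alpha, beta))
\end{verbatim}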
\subsection{Proof of Theorem~\ref{thm:544} \label{sec:proof_thm_433}}
We begin by setting up some notation. For any matrix $B$ over $\mathbb{F}$, we denote by $\mathcal{S}(B)$ the column space of $B$. Note that $\rho(B)$ is the same as the dimension of the vector space $\mathcal{S}(B)$. We define $H^{(5)} = H_{repair}$, where $H_{repair}$ is as defined in \eqref{eq:Hrepair}. Let the matrix $H^{(5)}_j$ denote the $j^{\text{th}}$ thick column of $H^{(5)}, 1 \leq j \leq 5$, i.e., $H^{(5)} = [H^{(5)}_1 \ H^{(5)}_2 \ H^{(5)}_3 \ H^{(5)}_4 \ H^{(5)}_5 ]$. Next, we define matrices $H^{(4)}_j, 2 \leq j \leq 5$, such that the columns of $H^{(4)}_j$ form a basis for the vector space $\mathcal{S}\left(H^{(5)}_j\right) \cap \mathcal{S}\left(H^{(5)}|_{[j-1]}\right)$, and we define $H^{(4)}$ as
\begin{eqnarray*}
H^{(4)} = [H^{(4)}_2 \ H^{(4)}_3 \ H^{(4)}_4 \ H^{(4)}_5 ].
\end{eqnarray*}
For convenience of notation, we have used $H^{(4)}_2$ to denote the first thick column of $H^{(4)}$. Similarly, $H^{(3)}$ is obtained from $H^{(4)}$, where columns of $H^{(3)}_j$ form a basis for $\mathcal{S}\left(H^{(4)}_j\right) \cap \mathcal{S}\left(H^{(4)}|_{\{2,\ldots,j-1\}}\right)$:
\begin{eqnarray*}
H^{(3)} = [H^{(3)}_3 \ H^{(3)}_4 \ H^{(3)}_5 ] .
\end{eqnarray*}
Let $A_{i,j}^{(\ell)}$ denote the $i^{\text{th}}$ thick row of $H^{(\ell)}_j$. An illustration of the block-matrix representations of $H^{(5)}, H^{(4)}$ and $H^{(3)}$ is given in Fig.~\ref{fig:matrices_544}.
The key idea in the proof lies in the observation that $\rho(H^{(5)}) \geq \rho(H^{(4)}) \geq \rho(H^{(3)})$. We will show that \eqref{eq:proof_544_line1} and \eqref{eq:proof_544_line2} are necessary conditions for $\rho(H^{(5)}) \geq \rho(H^{(4)})$ and $\rho(H^{(5)}) \geq \rho(H^{(4)}) \geq \rho(H^{(3)})$, respectively, to hold. The following remark underlines an important property of $\rho\left(A^{(\ell)}_{j,j}\right)$ preserved in the construction of $H^{(\ell)}_j$.
\begin{note}\label{rem:full_column_rank_diag_sub_matrix} The sub-matrices $A^{(\ell)}_{j,j}, 3 \leq \ell \leq 5, \ 5-\ell+1 \leq j \leq 5$ have full column rank, and $\rho\left(A^{(\ell)}_{j,j}\right) = \rho\left(H^{(\ell)}_j\right)$.
\end{note}
\begin{figure}[h]
\centering
\includegraphics[width=6in]{IJICT_HMatrices_544.pdf}
\caption{The matrices $H^{(5)}, H^{(4)}$ and $H^{(3)}$, and the associated block submatrix representations for the case $n=5$. The matrix $H^{(5)} = H_{repair}$, $H^{(4)}$ is defined based on $H^{(5)}$, and $H^{(3)}$ is defined based on $H^{(4)}$.}
\label{fig:matrices_544}
\end{figure}
\subsection{Proof of \eqref{eq:proof_544_line1}\label{sec:proof_bound1_544}}
We will be using the rank comparison $\rho(H^{(5)}) \geq \rho(H^{(4)})$ to prove \eqref{eq:proof_544_line1}. It follows from \eqref{eq:proof_dimakis_1}, \eqref{eq:dimakis_proof_3a} and \eqref{eq:proof_dimakis_rkH_delta_eq} that
\begin{eqnarray}
\rho\left( H^{(5)} \right) & \geq & \rho\left(A^{(5)}_{1,1}\right) \ + \ \sum_{j=2}^{5} \left\{ \left(\rho\left(A_{j,j}^{(5)}\right) - \sum_{\ell=1}^{j-1}\rho\left(A_{j,\ell}^{(5)}\right)\right)^+ \right\} \label{eq:544_H5_rank_ineq}.
\end{eqnarray}
We introduce slack variables $\{\alpha_j, 2 \leq j \leq 5\}$ that take non-negative integer values to convert \eqref{eq:dimakis_proof_3a} into equalities, i.e.,
\begin{eqnarray}\label{eq:delta_equality_544}
\delta_j & = & \left(\rho\left(A_{j,j}^{(5)}\right) - \sum_{\ell=1}^{j-1}\rho\left(A_{j,\ell}^{(5)}\right)\right)^+ + \alpha_j, \ 2 \leq j \leq 5.
\end{eqnarray}
Hence, using \eqref{eq:proof_dimakis_rkH_delta_eq} we have:
\begin{eqnarray}
\rho\left( H^{(5)} \right) & = & \rho\left(A^{(5)}_{1,1}\right) \ + \ \sum_{j=2}^{5} \left\{ \left(\rho\left(A_{j,j}^{(5)}\right) - \sum_{\ell=1}^{j-1}\rho\left(A_{j,\ell}^{(5)}\right)\right)^+ + \alpha_j \right\}. \label{eq:544_H5_rank_eq}
\end{eqnarray}
$\rho\left( H^{(4)} \right)$ can be bounded from below in a manner quite similar to \eqref{eq:544_H5_rank_ineq} (see also Remark \ref{rem:full_column_rank_diag_sub_matrix}) to obtain
\begin{eqnarray}
\rho\left( H^{(4)} \right) & \geq & \rho\left(A^{(4)}_{2,2}\right) + \sum_{j=3}^{5} \left\{ \left(\rho\left(A_{j,j}^{(4)}\right) - \sum_{\ell=2}^{j-1}\rho\left(A_{j,\ell}^{(4)}\right)\right)^+ \right\} \label{eq:boundH4_544}.
\end{eqnarray}
Our first aim is to find a lower bound on $\sum_{j=2}^{5}\alpha_j$. The analysis in Sec.~\ref{sec:dimakis_via_dual} in effect works with the trivial lower bound $\sum_{j=2}^{5}\alpha_j\geq 0$. Here, instead, we substitute \eqref{eq:544_H5_rank_eq} and \eqref{eq:boundH4_544} in
\begin{eqnarray*}
\rho(H^{(5)}) & \geq & \rho(H^{(4)})
\end{eqnarray*}
to obtain a much tighter lower bound for $\sum_{j=2}^{5}\alpha_j$. Using this tighter bound in \eqref{eq:544_H5_rank_eq}, we will obtain a lower bound for $\rho\left( H^{(5)} \right)$ in terms of $\{ \rho\left(A^{(5)}_{i,j}\right), \rho\left(A^{(4)}_{i,j}\right)\}$. We know that the terms $\{ \rho\left(A^{(5)}_{i,j} \right)\}$ can be expressed in terms of $\alpha$ and $\beta$. In the following Lem.~\ref{lem:544_intersections}, we show how $\left\{\rho\left(A^{(4)}_{i,j} \right)\right\}$ can be expressed in terms of $\left\{\rho\left(A^{(5)}_{i,j} \right)\right\}$. Finally, all the terms involve $\{ \rho\left(A^{(5)}_{i,j} \right)\}$, and this will lead to the proof of \eqref{eq:proof_544_line1}.
\begin{lem} \label{lem:544_intersections} The following statements hold:
\begin{enumerate}[a)] \item \begin{eqnarray}
\rho\left(A^{(4)}_{j,j} \right) & = & \rho\left(A^{(5)}_{j,j} \right) \ - \ \left\{ \left(\rho\left(A_{j,j}^{(5)}\right) - \sum_{\ell=1}^{j-1}\rho\left(A_{j,\ell}^{(5)}\right)\right)^+ + \alpha_j\right\}, \ 2 \leq j \leq 5.\label{eq:544_key_lemma1}
\end{eqnarray}
\item \begin{eqnarray}
\sum_{\ell=2}^{j-1} \rho\left(A^{(4)}_{j,\ell} \right) & \leq & \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell} \right) \ - \ \rho\left(A^{(4)}_{j,j} \right), \ 3 \leq j \leq 5.\label{eq:544_key_lemma2}
\end{eqnarray}
\end{enumerate}
\end{lem}
\begin{proof} The proof is relegated to Appendix~\ref{app:pf_lem_544}.
\end{proof}
By making use of Lem.~\ref{lem:544_intersections}, we first obtain a lower bound on $\sum_{j=2}^{5}\alpha_j$, and subsequently a lower bound on $\rho\left( H^{(5)} \right)$ all in terms of $\{ \rho\left(A^{(5)}_{i,j} \right)\}$:
\begin{eqnarray}
\sum_{j=2}^{5}\alpha_j & \geq& \frac{1}{3}\left\{ -\rho\left(A^{(5)}_{1,1}\right) + \rho\left(A^{(5)}_{2,2}\right) +
2\sum_{j=3}^{5}\rho\left(A^{(5)}_{j,j}\right) -\right. \nonumber\\
&&\left. \left[2\left( \rho\left(A^{(5)}_{2,2}\right) - \rho\left(A^{(5)}_{2,1}\right)\right)^+ + 3\sum_{j=3}^{5}\left( \rho\left(A^{(5)}_{j,j}\right) - \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell}\right)\right)^+ + \sum_{j=3}^{5} \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell}\right)\right]
\right\}\label{eq:544_alpha_bound1} \\
\label{eq:544_bound1} \rho\left( H^{(5)} \right) & \geq & \frac{1}{3}\left\{ 2\sum_{j=1}^{5}\rho\left(A^{(5)}_{j,j}\right) - \sum_{j=2}^{5} \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell}\right)\right\}.
\end{eqnarray}
Finally, we apply $\rho\left(A^{(5)}_{j,j}\right) = \alpha, 1 \leq j \leq 5$ and $\rho\left(A^{(5)}_{i,j}\right) \leq \beta, 1 \leq i, j \leq 5, \ i \neq j$ to complete the proof of \eqref{eq:proof_544_line1}.
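For concreteness, this substitution in \eqref{eq:544_bound1} evaluates as
\begin{eqnarray*}
\rho\left( H^{(5)} \right) & \geq & \frac{1}{3}\left\{ 2 \cdot 5\alpha \ - \ (1 + 2 + 3 + 4)\beta \right\} \ = \ \frac{10(\alpha - \beta)}{3},
\end{eqnarray*}
which is precisely \eqref{eq:proof_544_line1}.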
\subsection{Proof of \eqref{eq:proof_544_line2}\label{sec:proof_bound2_544}}
While proving \eqref{eq:proof_544_line1}, we leveraged upon the inequality
\begin{eqnarray*}
\rho(H^{(5)}) & \geq & \rho(H^{(4)}) .
\end{eqnarray*}
Here, we will make use of the chain
\begin{eqnarray*}
\rho(H^{(5)}) & \geq & \rho(H^{(4)})\ \geq \ \rho(H^{(3)}),
\end{eqnarray*}
to prove \eqref{eq:proof_544_line2}. First, we consider $\rho(H^{(4)}) \geq \rho(H^{(3)})$ and obtain a lower bound on $\rho(H^{(4)})$. This is carried out in precisely the same way as the lower bound \eqref{eq:544_bound1} on $\rho(H^{(5)})$ was obtained. The only change required is to adapt Lem.~\ref{lem:544_intersections} so as to express $\{A_{i, j}^{(3)}\}$ in terms of $\{A_{i, j}^{(4)}\}$. Thus we obtain that
\begin{eqnarray} \label{eq:544_bound2a}
\rho\left( H^{(4)} \right) & \geq & \frac{1}{3}\left\{ 2\sum_{j=2}^{5}\rho\left(A^{(4)}_{j,j}\right) - \sum_{j=3}^{5} \sum_{\ell=2}^{j-1}\rho\left(A^{(4)}_{j,\ell}\right)\right\}.
\end{eqnarray}
Observe that \eqref{eq:544_bound2a} is the same as \eqref{eq:544_bound1}, except that $\{A_{i, j}^{(5)}\}$ are replaced with $\{A_{i, j}^{(4)}\}$ and the limits of the summations are modified accordingly.
We next consider the inequality $\rho(H^{(5)}) \geq \rho(H^{(4)})$, where $\rho(H^{(4)})$ is lower bounded as in \eqref{eq:544_bound2a} and $\rho(H^{(5)})$ is expressed using \eqref{eq:544_H5_rank_eq}. It follows that
\begin{eqnarray}
\rho\left(A^{(5)}_{1,1}\right) + \sum_{j=2}^{5} \left\{ \left(\rho\left(A_{j,j}^{(5)}\right) -
\sum_{\ell=1}^{j-1}\rho\left(A_{j,\ell}^{(5)}\right)\right)^+ + \alpha_j \right\} \geq \frac{1}{3}\left\{ 2\sum_{j=2}^{5}\rho\left(A^{(4)}_{j,j}\right) - \sum_{j=3}^{5} \sum_{\ell=2}^{j-1}\rho\left(A^{(4)}_{j,\ell}\right)\right\}. \label{eq:long_544_2}
\end{eqnarray}
After invoking Lem.~\ref{lem:544_intersections}, we obtain the lower bound:
\begin{eqnarray}
\sum_{j=2}^{5}\alpha_j & \geq & \frac{1}{6}\left\{ -3\rho\left(A^{(5)}_{1,1}\right) + 3\sum_{j=2}^{5}\rho\left(A^{(5)}_{j,j}\right) - \right.\nonumber\\
&&\left. \left[6\sum_{j=2}^{5}\left( \rho\left(A^{(5)}_{j,j}\right) - \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell}\right)\right)^+ + \sum_{j=2}^{5} \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell}\right)\right]
\right\}\label{eq:544_alpha_bound2} .
\end{eqnarray}
Substituting \eqref{eq:544_alpha_bound2} back in \eqref{eq:544_H5_rank_eq}, we obtain the following lower bound on $\rho(H^{(5)})$:
\begin{eqnarray} \label{eq:544_bound2}
\rho\left( H^{(5)} \right) & \geq & \frac{1}{6}\left\{ 3\sum_{j=1}^{5}\rho\left(A^{(5)}_{j,j}\right) - \sum_{j=2}^{5} \sum_{\ell=1}^{j-1}\rho\left(A^{(5)}_{j,\ell}\right)\right\}.
\end{eqnarray}
Finally, we apply $\rho\left(A^{(5)}_{j,j}\right) = \alpha, 1 \leq j \leq 5$ and $\rho\left(A^{(5)}_{i,j}\right) \leq \beta, 1 \leq i, j \leq 5, \ i \neq j$ on \eqref{eq:544_bound2} to complete the proof of \eqref{eq:proof_544_line2}.
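Explicitly, this substitution yields
\begin{eqnarray*}
\rho\left( H^{(5)} \right) & \geq & \frac{1}{6}\left\{ 3 \cdot 5\alpha \ - \ 10\beta \right\} \ = \ \frac{15\alpha - 10\beta}{6},
\end{eqnarray*}
which is precisely \eqref{eq:proof_544_line2}.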
\section{An Upper Bound On The File Size Of Linear ER Codes for General $(n,k=n-1,d=n-1)$ \label{sec:main_proof}}
In this section, we generalize the result proved for $(n=5,k=4,d=4)$ in Sec.~\ref{sec:544} to $(n,k=n-1,d=n-1)$. We will only provide a sketch of the proofs, as the techniques remain the same as those presented in Sec.~\ref{sec:544} (see \cite{PraKri_arxiv} for details). Again, the upper bound on the file size is a direct corollary of a lower bound on $\textsl{rank}(H)$, and the bound is achievable using layered codes (see Sec.~\ref{sec:ach_lin}). In the following theorem, a lower bound on $\textsl{rank}(H)$ is established.
\begin{thm} \label{thm:rankH_new_bound_k_eq_d} Consider an ER linear regenerating code $\mathcal{C}_{\text{lin}}$ with full-parameter set $\{(n, k=n-1, d=n-1), (\alpha, \beta)\}$ with $n \geq 4$. Let $H$ denote a parity-check matrix of $\mathcal{C}_{\text{lin}}$. Then
\begin{eqnarray} \label{eq:bound_rank_H_gen}
\text{rank}(H) & \geq & \left \{ \begin{array}{cl} \left \lceil \frac{2rn\alpha - n(n-1)\beta}{r^2+r}\right \rceil, & \frac{d\beta}{r} \leq \alpha \leq \frac{d\beta}{r-1}, \ \ 2 \leq r \leq n - 2 \\
2\alpha - \beta, & \frac{d\beta}{n-1} \leq \alpha \leq \frac{d\beta}{n-2} \end{array} \right. .
\end{eqnarray}
\end{thm}
The corresponding result for $(n=5,k=4,d=4)$, Thm.~\ref{thm:544}, established that $\textsl{rank}(H)$ is lower bounded by a piecewise linear curve determined by $3$ inequalities. Here, we show that such behavior persists in general, i.e., $\textsl{rank}(H)$ can be lower bounded by a piecewise linear curve determined by $(n-2)$ inequalities. The last inequality
\begin{eqnarray}
\text{rank}(H) & \geq & 2\alpha - \beta \label{eq:proof_k_eq_d_eq_nminus1_eq1},
\end{eqnarray}
is already established by \eqref{eq:dimakis_proof_4}, since $\alpha\ge \beta$ and $\left(\alpha - (j-1)\beta \right)^+\ge 0$ for $3\leq j\leq n$. Therefore to complete the proof, it remains to prove the following $(n-3)$ bounds on $\textsl{rank}(H)$, ignoring the range of $\alpha$:
\begin{equation}
\text{rank}(H) \geq \frac{2rn\alpha - n(n-1)\beta}{r^2+r}, \label{eq:general_bound_to_prove}
\end{equation}
parameterized by $2 \leq r \leq n-2$. We now set up some notation and introduce a key lemma, both of which are essential in describing a sketch of the proof.
\subsection{Notations and a Key Lemma \label{sec:proof_not}}
\subsubsection{The Matrices $\{H^{(t)}, \ 3 \leq t \leq n\}$} For any matrix $M$ over $\mathbb{F}$, we carry over the notation $\rho(M)$, $\mathcal{S}(M)$ from Sec.~\ref{sec:proof_thm_433}. Quite similar to the definition of $H^{(5)}$ in Sec.~\ref{sec:proof_thm_433}, we define $H^{(n)} = H_{repair}$, where $H_{repair}$ is as defined by Lem.~\ref{lem:H_repair}. We denote by $H^{(n)}_j$ the $j^{\text{th}}$ thick column of $H^{(n)}, 1 \leq j \leq n$, i.e.,
\begin{eqnarray*}
H^{(n)} = [H^{(n)}_1 \ H^{(n)}_2 \ \ldots \ H^{(n)}_n].
\end{eqnarray*}
Next, we define the matrices $H^{(t)}, 3 \leq t \leq n-1$ in an iterative manner as follows:
\vspace{0.1in}
\begin{enumerate}[Step 1.]
\item Let $ t=n-1$.
\item Define the matrices $H^{(t)}_j, n-t+1 \leq j \leq n$, such that the columns of $H^{(t)}_j$ form a basis for the vector space $\mathcal{S}\left(H^{(t+1)}_j\right) \cap \mathcal{S}\left(H^{(t+1)}|_{\{n-t, n-t+1, \ldots, j-1\}}\right)$.
\item Define the matrix $H^{(t)}$ as
\begin{eqnarray} \label{eq:proofgen_Htdef}
H^{(t)} = [H^{(t)}_{n-t+1} \ H^{(t)}_{n-t+2} \ \ldots \ H^{(t)}_n].
\end{eqnarray}
\item If $t \geq 4$, decrement $t$ by $1$ and go back to Step $2$.
\end{enumerate}
\vspace{0.1in}
Clearly, the ranks of the matrices $H^{(t)}, 3 \leq t \leq n$ are ordered as
\begin{eqnarray} \label{eq:proofgen_rankorderHs}
\rho(H^{(t)}) & \geq & \rho(H^{(t-1)}), \ 4 \leq t \leq n.
\end{eqnarray}
We use the notation $H_j^{(t)}, n-t+1 \leq j \leq n$ to refer to the $j^{\text{th}}$ thick column of the matrix $H^{(t)}$. While every thick column of $H^{(n)}$ has exactly $\alpha$ thin columns, the thick columns of $H^{(t)}, 3 \leq t \leq n-1$, need not have the same number of thin columns. We point out for clarity that the thick columns of the matrix $H^{(t)}$ are indexed using $\{n-t+1, \ldots, n\}$. We have avoided $\{1, \ldots, t\}$ for notational convenience.
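The basis computation in Step $2$ above amounts to intersecting two column spaces. A minimal numerical sketch in Python is given below (the function name is ours); it works over the reals via a null-space computation, whereas an implementation over a finite field $\mathbb{F}$ would instead use exact Gaussian elimination.
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def colspace_intersection(A, B, tol=1e-10):
    # Any vector in the intersection of the column spaces of A and B
    # equals A @ x for some [x; y] in the null space of [A | -B].
    N = null_space(np.hstack([A, -B]))
    W = A @ N[:A.shape[1], :]
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, s > tol]  # orthonormal basis of the intersection
\end{verbatim}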
\subsubsection{Block Matrix Representation of the Matrix $H^{(t)}$ \label{sec:bsm}}
Since $H^{(n)} = H_{repair}$, it has a block matrix representation as given in \eqref{eq:Hrepair}. We write in short-hand
\begin{eqnarray}
H^{(n)} & = & \left( A^{(n)}_{i,j} , 1 \leq i, j \leq n \ \right ),
\end{eqnarray}
where $A^{(n)}_{i,i} = I_{\alpha}, 1 \leq i \leq n$. We introduce block matrix representations for $H^{(t)}, 3 \leq t \leq n-1$ as
\begin{eqnarray}
H^{(t)} & = & \left( A^{(t)}_{i,j} , 1 \leq i \leq n, n-t+1 \leq j \leq n \ \right ),
\end{eqnarray}
where $A^{(t)}_{i,j}$ is an $\alpha \times \rho(H^{(t)}_j)$ matrix over $\mathbb{F}$ such that
\begin{eqnarray}
\mathcal{S}\left(A^{(t)}_{i,j} \right) & \subseteq & \mathcal{S}\left(A^{(t+1)}_{i,j} \right) \ \bigcap \ \sum_{\ell=n-t}^{j-1}\mathcal{S}\left(A^{(t+1)}_{i,\ell}\right). \label{eq:proofgen_1}
\end{eqnarray}
Note that \eqref{eq:proofgen_1} is a direct consequence of our definition of the matrix $H^{(t)}$. Having set up the notation, we introduce the key lemma that establishes the relations among the ranks of the sub-matrices of $\{H^{(t)}\}$. The lemma is similar in spirit to Lem.~\ref{lem:544_intersections}, and its proof is omitted here.
\begin{lem} \label{lem:nkk_intersections} The following statements hold:
\begin{enumerate}[a)]
\item For any $t, j$ such that $3 \leq t \leq n$ and $n-t+1 \leq j \leq n$, we have
\begin{eqnarray}
\rho\left(H^{(t)}_j\right) & = & \rho\left( A^{(t)}_{j,j}\right). \label{eq:proofgen_2}
\end{eqnarray}
\item For any $t, j$ such that $3 \leq t \leq n-1$ and $n-t+1 \leq j \leq n$, we have
\begin{eqnarray}
\rho\left(A^{(t)}_{j,j} \right) & = & \rho\left(A^{(t+1)}_{j,j} \right) \ - \ \left\{ \rho\left(H^{(t+1)}|_{\{n-t, \ldots, j\}}\right) - \rho\left(H^{(t+1)}|_{\{n-t, \ldots, j-1\}}\right) \right\}. \label{eq:proofgen_3}
\end{eqnarray}
\item For any $t, j$ such that $3 \leq t \leq n-1$ and $n-t+2 \leq j \leq n$, we have
\begin{equation}
\rho\left(A^{(t)}_{j,j} \right) + \sum_{\ell=n-t+1}^{j-1} \rho\left(A^{(t)}_{j,\ell} \right) \leq \sum_{\ell=n-t}^{j-1}\rho\left(A^{(t+1)}_{j,\ell} \right) . \label{eq:proofgen_4}
\end{equation}
\end{enumerate}
\end{lem}
\subsection{On The Proof of \eqref{eq:general_bound_to_prove} \label{sec:eq_general_bound_proof}}
The bound in \eqref{eq:general_bound_to_prove} is obtained as a necessary condition for the chain of inequalities given by
\begin{eqnarray} \label{eq:chain_r}
\rho(H^{(n)}) & \geq & \rho(H^{(n-1)}) \ \geq \ \cdots \ \geq \ \rho(H^{(n-r+1)}).
\end{eqnarray}
In the analysis of \eqref{eq:chain_r}, we consider, in the first step, the inequality $\rho(H^{(n-r+2)}) \geq \rho(H^{(n-r+1)})$ and obtain a lower bound on $\rho(H^{(n-r+2)})$. In the second step, we move on to the inequality $\rho(H^{(n-r+3)}) \geq \rho(H^{(n-r+2)})$ and obtain a lower bound on $\rho(H^{(n-r+3)})$, making use of the lower bound on $\rho(H^{(n-r+2)})$ derived in the first step. This procedure is continued iteratively until we arrive at a lower bound on $\rho(H^{(n)})$. The following theorem is a key intermediate step in this process.
\begin{thm} \label{thm:nkk_rankviainduction}
For any $s$ such that $1 \leq s \leq n-3$, and any $t$ such that $3+s \leq t \leq n$, the rank of the matrix $H^{(t)}$ is lower bounded by
\begin{eqnarray}
\rho\left( H^{(t)} \right) & \geq & \frac{2}{(s+1)(s+2)}\left\{ (s+1)\sum_{j=n-t+1}^{n}\rho\left(A^{(t)}_{j,j}\right) - \sum_{j=n-t+2}^{n} \sum_{\ell=n-t+1}^{j-1}\rho\left(A^{(t)}_{j,\ell}\right)\right\}. \label{eq:boundgen_rankH}
\end{eqnarray}
\end{thm}
\begin{proof} The proof is by induction on $s$; see \cite{PraKri_arxiv} for details.
\end{proof}
One can instantiate Thm.~\ref{thm:nkk_rankviainduction} in the context of $(n=5,k=4,d=4)$: the bounds associated with $(s=1, t=5)$, $(s=1, t = 4)$ and $(s = 2, t = 5)$ are precisely those given in \eqref{eq:544_bound1}, \eqref{eq:544_bound2a} and \eqref{eq:544_bound2}, respectively. To complete the proof of \eqref{eq:general_bound_to_prove}, we evaluate the bound in \eqref{eq:boundgen_rankH} for the $(n-3)$ pairs given by $(s, t=n), 1 \leq s \leq n-3$. By substituting the constraints $\rho\left(A^{(n)}_{j,j}\right) = \alpha, 1 \leq j \leq n$ and $\rho\left(A^{(n)}_{i,j}\right) \leq \beta, 1 \leq i, j \leq n, i \neq j$, we finally obtain that
\begin{eqnarray}
\rho\left( H^{(n)} \right) \geq \frac{2}{(s+1)(s+2)}\left\{ (s+1)\sum_{j=1}^{n}\alpha - \sum_{j=2}^{n} (j-1)\beta\right\} = \frac{2(s+1)n\alpha - n(n-1)\beta}{(s+1)(s+2)}, \ 1 \leq s \leq n-3. \label{eq:proofgenz}
\end{eqnarray}
By choosing $r=s+1$, \eqref{eq:general_bound_to_prove} follows from \eqref{eq:proofgenz}. This completes the proof of \eqref{eq:general_bound_to_prove}, and consequently that of Thm.~\ref{thm:rankH_new_bound_k_eq_d}.
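As a sanity check, specializing \eqref{eq:proofgenz} to $n=5$ recovers the two non-trivial bounds of Thm.~\ref{thm:544}:
\begin{eqnarray*}
s = 1: \ \frac{2 \cdot 2 \cdot 5\alpha - 20\beta}{2 \cdot 3} \ = \ \frac{10(\alpha - \beta)}{3}, & \ \ \ \ & s = 2: \ \frac{2 \cdot 3 \cdot 5\alpha - 20\beta}{3 \cdot 4} \ = \ \frac{15\alpha - 10\beta}{6},
\end{eqnarray*}
matching \eqref{eq:proof_544_line1} and \eqref{eq:proof_544_line2}, respectively.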
\section{On The Achievability Of The Outer Bounds On Normalized ER Tradeoff \label{sec:achievability}}
The outer bounds presented in this paper match the performance of existing code constructions in certain cases, and we present two such results here.
\subsection{Characterization of Normalized ER tradeoff for the Case $k=3,d=n-1$\label{sec:ach_nonlin}}
In the case of $k=3$ and $d=n-1$, the repair-matrix bound is achieved by a construction that appeared in \cite{SenSasKum_itw}. We give an example of the repair-matrix bound below:
{\em Example:} $(n=6,k=3,d=5):$ Using \eqref{eq:eps}, the bound on the ER file size $B$ can be computed as
\begin{eqnarray}
\label{eq:635} B & \leq & \frac{10\alpha}{7} + \frac{34\beta}{7}, \ \ 5\beta \geq \alpha > \frac{13\beta}{4}.
\end{eqnarray}
Based on the bound in \eqref{eq:635}, an outer bound on the normalized ER tradeoff is drawn in Fig.~\ref{fig:plot1a}. A single code construction ${\cal C}_{\text{int}}$ for the normalized operating point $(\bar{\alpha}_0,\bar{\beta}_0)= (\frac{13}{38},\frac{2}{19})$ suffices to achieve the entire normalized ER tradeoff, as the remaining points can be achieved by space-sharing among the MSR code ${\cal C}_{\text{MSR}}$, the MBR code ${\cal C}_{\text{MBR}}$ and the code ${\cal C}_{\text{int}}$. The construction of ${\cal C}_{\text{int}}$ was provided in \cite{SenSasKum_itw}, thus establishing that the repair-matrix bound is tight in this case.
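One can verify by direct arithmetic that $(\bar{\alpha}_0,\bar{\beta}_0)$ saturates the normalized form of \eqref{eq:635}, obtained by dividing through by $B$:
\begin{eqnarray*}
\frac{10}{7}\cdot\frac{13}{38} \ + \ \frac{34}{7}\cdot\frac{2}{19} & = & \frac{1}{7}\left(\frac{130}{38} + \frac{136}{38}\right) \ = \ \frac{266}{266} \ = \ 1.
\end{eqnarray*}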
\subsection{Characterization of Normalized ER Tradeoff for the Case $(n,k=n-1,d=n-1)$ under the Linear Setting\label{sec:ach_lin}}
In the case of linear codes, the bound presented in Theorem \ref{thm:new_bound_k_eq_d} is achieved by the canonical layered codes that were introduced in \cite{TiaSasAggVaiKum}. When specialized to the case of $d = n-1$, the layered codes achieve points described by
\begin{equation} \label{eq:ach_Biren}
\left(\bar{\alpha} , \bar{\beta} \right) = \left(\frac{r}{n(r-1)}, \frac{r}{n(n-1)}\right), \ \ 2 \leq r \leq n-1
\end{equation}
on the $(\bar{\alpha} , \bar{\beta} )$-plane. Substituting $r = 2$ in \eqref{eq:ach_Biren} corresponds to the MBR point, and the achievable points move closer to the MSR point as $r$ increases. It is also proved that the point corresponding to $r = n-1$ lies on the FR tradeoff in the near-MSR region. An achievable region on the $(\bar{\alpha} , \bar{\beta} )$-plane is obtained by space-sharing codes for values of $r$, $2 \leq r \leq n-1$, along with an MSR code. We can write the equation of the line segment connecting the two points $\left(\frac{r}{n(r-1)}, \frac{r}{n(n-1)}\right)$ and $\left(\frac{r+1}{nr}, \frac{r+1}{n(n-1)}\right)$, $ 2 \leq r \leq n-2$, as
\begin{eqnarray}
r(r-1)n\bar{\alpha} + n(n-1)\bar{\beta} & = & r^2+r,
\end{eqnarray}
and that of the line segment connecting the MSR point and the point corresponding to $r=n-1$ as
\begin{eqnarray*}
(n-2)\bar{\alpha} + \bar{\beta} & = & 1.
\end{eqnarray*}
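Both endpoints indeed satisfy the first of these equations; for instance, substituting $\left(\frac{r}{n(r-1)}, \frac{r}{n(n-1)}\right)$ gives
\begin{eqnarray*}
r(r-1)n \cdot \frac{r}{n(r-1)} \ + \ n(n-1) \cdot \frac{r}{n(n-1)} & = & r^2 + r,
\end{eqnarray*}
and the second endpoint can be checked in the same manner.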
This matches the equations of the line segments given in Theorem \ref{thm:new_bound_k_eq_d}. The normalized linear tradeoff for $(n=6,k=d=5)$ is shown in Fig.~\ref{fig:plot1b}.
\bibliographystyle{IEEEtran}
The Mixture of Experts (MoE) \cite{jacobs1991adaptive} is the basis of many state-of-the-art deep learning models. For example, MoE-based layers are being used to perform efficient computation in high-capacity neural networks and to improve parameter sharing in multi-task learning (MTL) \cite{shazeer2017outrageously,ma2018modeling,lepikhin2020gshard}. While different MoE architectures are considered in the literature, two major components are needed in any MoE model: a set of experts (neural networks) and one or more trainable gates. A gate selects a combination of the experts that is specific to each input example. This allows experts to specialize in different partitions of the input space, which has the potential to improve the predictive performance and interpretability.
In Figure \ref{fig:MoE}(a), we show an example of a simple MoE architecture that can be used as a standalone learner or as a layer in a neural network.
The literature on the MoE has traditionally focused on softmax-based gates, in which all experts are assigned nonzero weights \citep{jordan1994hierarchical}. More recent works propose \textsl{sparse gates} that assign nonzero weights to only a small subset of the experts \citep{DBLP:journals/corr/BengioBPP15,shazeer2017outrageously,rosenbaum2018routing,lepikhin2020gshard}. Sparsity in gating can have various advantages, including better computational efficiency, interpretability, and improved statistical performance in certain settings. Existing sparse gates are not differentiable, and reinforcement learning algorithms are commonly used to train these gates \cite{DBLP:journals/corr/BengioBPP15,rosenbaum2018routing}. In an exciting work, \citet{shazeer2017outrageously} introduced a new sparse gate (Top-k gate) and proposed training it using stochastic gradient descent (SGD). The ability to train the gate using SGD is appealing because it allows for training the neural network in an end-to-end fashion. However, the Top-k gate is not continuous, which can lead to convergence issues in SGD that in turn affect the statistical performance (as we demonstrate in our experiments).
In this paper, we introduce DSelect-k: the first, continuously differentiable and sparse gate for MoE. Given a user-specified parameter $k$, the gate selects at most $k$ out of the $n$ experts. This explicit control over sparsity leads to a cardinality-constrained optimization problem, which is computationally challenging. To circumvent this challenge, we propose a novel, unconstrained reformulation that is equivalent to the original problem. The reformulated problem uses a binary encoding scheme to implicitly enforce the cardinality constraint. We demonstrate that by carefully smoothing the binary encoding variables, the reformulated problem can be effectively optimized using first-order methods such as SGD.
\begin{figure}[htbp]
\vspace{-0.5cm}
\centering
\includegraphics[scale=0.4]{Gate.png}
\caption{\textbf{(a)}: An example of a MoE that can be used as a standalone learner or layer in a neural network. ``Ei'' is the $i$-th expert. \textbf{(b):} A multi-gate MoE for learning two tasks simultaneously. ``Task i NN'' is a neural network which generates the output of Task i. }
\label{fig:MoE}
\vspace{-0.2cm}
\end{figure}
\vspace{-0.2cm}
Our gate supports two mechanisms: \textsl{per-example gating} and \textsl{static gating}. The per-example gating is the typical gating technique used in MoE models, in which the weights assigned to the experts are a function of the input example \citep{jacobs1991adaptive, shazeer2017outrageously}. In static gating, a subset of experts is selected and the corresponding weights do not depend on the input \citep{rosenbaum2018routing}.
Based on our experiments, each gating mechanism can outperform the other in certain settings. Thus, we study both mechanisms and advocate for experimenting with each.
MTL is an important area where MoE models in general, and our gate in particular, can be useful. The goal of MTL is to learn multiple tasks simultaneously by using a shared model. Compared to the usual single-task learning, MTL can achieve better generalization performance by exploiting the relationships between tasks \citep{caruana1997multitask}. One of the key problems in MTL is how to share model parameters between tasks \citep{ruder2017overview}. For instance, sharing parameters between unrelated tasks can potentially degrade performance. The multi-gate MoE \citep{ma2018modeling} is a flexible architecture which allows for learning what to share between tasks. Figure \ref{fig:MoE}(b) shows an example of a multi-gate MoE (in the simple case of two tasks). Here, each task has its own gate which adaptively decides whether to share experts with the other task. In our experiments, we study the effectiveness of our proposed gate in the context of the multi-gate MoE.
\textbf{Contributions: } On a high-level, our main contribution is DSelect-k: a new continuously differentiable and sparse gate for MoE, which can be directly trained using first-order methods. Our technical contributions can be summarized as follows. \textbf{(i)} The gate selects (at most) $k$ out of the $n$ experts, where $k$ is a user-specified parameter. This leads to a challenging, cardinality-constrained optimization problem. To deal with this challenge, \textcolor{black}{we develop a novel, unconstrained reformulation and we prove that it is equivalent to the original problem.} The reformulation uses a binary encoding scheme that implicitly imposes the cardinality constraint using learnable binary codes. \textbf{(ii)} To make the unconstrained reformulation smooth, we relax and smooth the binary variables. We demonstrate that, with careful initialization and regularization, the resulting problem can be optimized with first-order methods such as SGD. \textbf{(iii)} We carry out a series of experiments on synthetic and real MTL datasets, which show that our gate outperforms state-of-the-art gates in terms of parameter sharing and predictive performance. \textbf{(iv)} We provide an open-source implementation of DSelect-k.
\begin{comment}
\textbf{Contributions: } Our contributions can be summarized as follows. \textbf{(i)} We develop DSelect-k: the first, continuously differentiable sparse gate for MoE. The gate allows for selecting $k$ out of the $n$ experts, where $k$ is a user-specified parameter. This leads to a challenging, cardinality-constrained optimization problem. To deal with this challenge, we develop an equivalent unconstrained formulation, based on a novel binary encoding. We then smooth the binary encoding variables so that the resulting optimization problem becomes differentiable. We demonstrate that, with careful initialization and regularization, the resulting optimization problem can be optimized with first-order methods such as SGD.
\textbf{(ii)} We carry out a series of experiments on synthetic and real MTL datasets, to show that our gate outperforms state-of-the-art gates in terms of parameter sharing and predictive performance. \textbf{(iii)} We provide an open-source implementation of our gate that can serve as a drop-in replacement for existing gates.
\end{comment}
\subsection{Related Work}
\textbf{MoE and Conditional Computation: } Since MoE was introduced by \citet{jacobs1991adaptive}, an exciting body of work has extended and studied this model, e.g., see \citet{jordan1994hierarchical,jacobs1997bias,jiang1999identifiability}.
Recently, MoE-based models are showing success in deep learning. For example, \citet{shazeer2017outrageously} introduced the sparse Top-k gate for MoE and showed significant computational improvements on machine translation tasks; we discuss exact connections to this gate in Section \ref{sec:MoE}. The Top-k gate has also been utilized in several state-of-the-art deep learning models that considered MTL tasks, e.g., \citet{lepikhin2020gshard,ramachandran2018diversity,fedus2021switch}.
Our work is also related to the conditional computation models that activate parts of the neural network based on the input \citep{DBLP:journals/corr/BengioBPP15}. These models use a trainable gate that decides which part of the neural network to activate for each input example, with the goal of speeding up training or inference. Works in this area include \citep{bengio2013estimating,DBLP:journals/corr/BengioBPP15,shazeer2017outrageously,ioannou2016decision,wang2018skipnet}. Unlike our work, these works are based on non-differentiable models, or heuristics where the training and inference models are different.
\textcolor{black}{\textbf{Stochastic Subset Selection and Sparse Transformations: } A related line of work develops mechanisms for stochastic subset selection in neural networks, e.g., see \citep{paulus2020gradient,chen2018learning,xie2019reparameterizable} and the references therein. Specifically, these works allow for sampling $k$-subsets from a categorical distribution, based on extensions or generalizations of the Gumbel-softmax trick \citep{maddison2016concrete,jang2016categorical}. However, in the MoE we consider \textsl{deterministic} subset selection---determinism is a common assumption in MoE models (for example, in \citep{jacobs1991adaptive,jordan1994hierarchical,shazeer2017outrageously}) that can improve interpretability. In contrast, the approaches described above assume stochastic selection and are suitable in applications where there is an underlying sampling distribution, such as in variational inference \citep{kingma2013auto}. In Appendix \ref{sec:appendix_related_work}, we provide more context, and we discuss important differences between our proposal and these approaches. In addition, a related body of work has developed sparse transformations as alternatives to the softmax function, e.g., the sparsemax \citep{martins2016softmax} and its generalizations \citep{peters2019sparse,correia2019adaptively,
blondel2019learning}. However, there is an important distinction: our formulation imposes a cardinality constraint that controls the number of nonzeros precisely, whereas the latter approaches do not impose a cardinality constraint (e.g., sparsemax cannot control sparsity). The cardinality constraint is important for interpretability and computational efficiency in MoE because all examples or tasks will respect the desired sparsity level (in contrast, sparse transformations that do not precisely control sparsity, may assign some examples or tasks sparse combinations and others dense combinations).}
\textcolor{black}{\textbf{MTL: } In Appendix \ref{sec:appendix_related_work}, we review related literature on MTL.}
\section{Gating in the Mixture of Experts} \label{sec:MoE}
In this section, we first review the MoE architecture and popular gates, and then discuss how these gates compare to our proposal. We will assume that the inputs to the MoE belong to a space $\mathcal{X} \subset \mathbb{R}^{p}$. In its simplest form, the MoE consists of a set of $n$ experts (neural networks) $f_{i}: \mathcal{X} \to \mathbb{R}^{u}$, $i \in \{1,2,\dots, n\}$, and a gate $g: \mathcal{X} \to \mathbb{R}^n$ that assigns weights to the experts. The gate's output is assumed to be a probability vector, i.e., $g(x) \geq 0$ and $\sum_{i=1}^n g(x)_i = 1$, for any $x \in \mathcal{X}$. Given an example $x \in \mathcal{X}$, the corresponding output of the MoE is a weighted combination of the experts:
\begin{align} \label{eq:moe_output}
\sum_{i=1}^n f_i(x) g(x)_i.
\end{align}
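As a minimal sketch of \eqref{eq:moe_output} (the function names are ours), the MoE output may be computed as follows:
\begin{verbatim}
import numpy as np

def moe_output(x, experts, gate):
    # experts: a list of n callables f_i; gate(x): a probability
    # vector of length n over the experts.
    w = gate(x)
    return sum(w[i] * f(x) for i, f in enumerate(experts))
\end{verbatim}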
Next, we discuss two popular choices for the gate $g(.)$ that can be directly optimized using SGD.
\textbf{Softmax Gate: } A classical model for $g(x)$ is the softmax gate: $\sigma(A x + b)$, where $\sigma(.)$ is the softmax function, $A \in \mathbb{R}^{n \times p}$ is a trainable weight matrix, and $b \in \mathbb{R}^{n}$ is a bias vector \cite{jordan1994hierarchical}. This gate is dense, in the sense that all experts are assigned nonzero probabilities. Note that static gating (i.e., gating which does not depend on the input example) can be obtained by setting $A = 0$.
\textbf{Top-k Gate: } This is a sparse variant of the softmax gate that returns a probability vector with only $k$ nonzero entries \citep{shazeer2017outrageously}. The Top-k gate is defined by $\sigma(\textsl{KeepTopK}(A x + b))$, where for any vector $v$, $\textsl{KeepTopK}(v)_i := v_i$ if $v_i$ is in the top $k$ elements of $v$, and $\textsl{KeepTopK}(v)_i := - \infty$ otherwise\footnote{To help in load balancing across the experts, \citet{shazeer2017outrageously} add Gaussian noise and additional regularizers to the model.}. This gate is conceptually appealing since it allows for direct control over the number of experts to select and is trained using SGD.
Moreover, the Top-k gate supports \textsl{conditional training}: in backpropagation, for each input example, only the gradients of the loss w.r.t. the top $k$ elements need to be computed. With a careful implementation, conditional training can lead to computational savings. However, the Top-k gate is not continuous, which implies that the gradient does not exist at certain inputs. This can be problematic when training is done using gradient-based methods. To gain more insight, in Appendix \ref{sec:visualization_appendix}, we plot the expert weights chosen by the Top-k gate during training with SGD. The results indicate an oscillatory behavior in the output of the Top-k gate, which can be attributed to its discontinuous nature: a small change in the input can lead to ``jumps'' in the output.
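A minimal Python sketch of the two gates discussed above is given below; it omits the Gaussian noise and load-balancing regularizers of \citet{shazeer2017outrageously}, and the function names are ours.
\begin{verbatim}
def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def topk_gate(x, A, b, k):
    # sigma(KeepTopK(A x + b)): entries outside the top k are set
    # to -inf, so they receive zero weight after the softmax.
    v = A @ x + b
    kept = np.full_like(v, -np.inf)
    idx = np.argsort(v)[-k:]
    kept[idx] = v[idx]
    return softmax(kept)
\end{verbatim}
The softmax gate itself is simply \texttt{softmax(A @ x + b)}, with the static variant recovered by setting \texttt{A} to zero.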
\textbf{Comparison with DSelect-k: } We develop DSelect-k in Section \ref{sec:our_gate}. Here we present a high-level comparison between DSelect-k and Top-k. Similar to Top-k, DSelect-k can select $k$ out of the $n$ experts and can be trained using gradient-based optimization methods. A major advantage of DSelect-k over Top-k is that it is continuously differentiable, which leads to more stable selection of experts during training---see Appendix \ref{sec:visualization_appendix} for visualizations of the expert weights during training. During inference, DSelect-k only needs to evaluate a subset of the experts, which can lead to computational savings. However, DSelect-k supports conditional training only partially. At the start of training, it uses all the available experts, so conditional training is not possible. As we discuss in Section \ref{sec:our_gate}, after a certain point during training, DSelect-k converges to a small subset of the experts, and then conditional training becomes possible. Our experiments indicate that DSelect-k has a significant edge over Top-k in terms of prediction and expert selection performance, so the full support for conditional training in Top-k comes at the expense of statistical performance.
\section{Differentiable and Sparse Gating} \label{sec:our_gate}
In this section, we develop the DSelect-k gate, for both the static and per-example gating settings. First, we introduce the problem setup and notation. To simplify the presentation, we will develop the gate for a single supervised learning task, and we note that the same gate can be used in MTL models. We assume that the task has an input space $\mathcal{X} \subset \mathbb{R}^{p}$, an output space $\mathcal{Y}$, and an associated loss function $\ell: \mathcal{Y} \times \mathbb{R} \to \mathbb{R}$. We denote the set of $N$ training examples by $\mathcal{D} = \{ (x_i, y_i) \in \mathcal{X} \times \mathcal{Y}\}_{i=1}^{N}$. We consider a learning model defined by the MoE in Equation \eqref{eq:moe_output}. For simplicity, we assume that the experts are scalar-valued and belong to a class of continuous functions $\mathcal{H}$. We assume that the number of experts $n = 2^m$ for some integer $m$---in Appendix \ref{sec:arbitrary_n}, we discuss how the gate can be extended to arbitrary $n$. For convenience, given a non-negative integer $i$, we denote the set $\{1, 2, \dots, i\}$ by $[i]$.
In Section \ref{sec:static_gating}, we develop DSelect-k for the static gating setting. Then, in Section \ref{sec:per-example_gating}, we generalize it to the per-example setting.
\subsection{DSelect-k for Static Gating} \label{sec:static_gating}
Our goal here is to develop a static gate that selects a convex combination of at most $k$ out of the $n$ experts. The output of the gate can be thought of as a probability vector $w$ with at most $k$ nonzero entries, where $w_i$ is the weight assigned to the expert $f_i$. A natural way to minimize the empirical risk of the MoE model is by solving the following problem:
\begin{subequations} \label{eq:constrained_example}
\begin{align}
\min_{f_1, \dots f_n, w} ~~~~ & \frac{1}{N} \sum_{(x, y) \in \mathcal{D}} \ell\Big(y, \sum_{i = 1}^{n} f_i(x) w_i \Big) \\
\mathrm{s.t.} ~~~~~~~~ & \| w \|_0 \leq k \label{eq:cardinality_constraint} \\
& \sum_{i=1}^{n} w_i = 1 , ~ w \geq 0. \label{eq:simplex_constraints}
\end{align}
\end{subequations}
In the above, the $L_{0}$ norm of $w$, $\| w \|_0 $, is equal to the number of nonzero entries in $w$. Thus, the cardinality constraint \eqref{eq:cardinality_constraint} ensures that the gate selects at most $k$ experts. Problem \eqref{eq:constrained_example} is a combinatorial optimization problem that is not amenable to SGD due to the cardinality constraint \eqref{eq:cardinality_constraint} and the simplex constraints in \eqref{eq:simplex_constraints}. In the remainder of this section, we first transform Problem \eqref{eq:constrained_example} into an equivalent unconstrained optimization problem, based on a binary encoding scheme. However, the unconstrained problem cannot be directly handled using SGD due to the presence of binary variables. Thus, in a second transformation, we smooth the binary variables, which leads to an optimization problem that is amenable to SGD.
\textbf{Roadmap: } In Section \ref{sec:single_expert}, we introduce the \textsl{single expert selector}: a construct for choosing 1 out of $n$ experts by using binary encoding. In Section \ref{sec:combinatorial_gating}, we leverage the single expert selector to transform Problem \eqref{eq:constrained_example} into an unconstrained one. Then, in Section \ref{sec:smooth_gating}, we smooth the unconstrained problem and discuss how SGD can be applied.
\subsubsection{Single Expert Selection: Binary Encoding} \label{sec:single_expert}
The single expert selector (selector, for short) is a fundamental construct that we will later use to convert Problem \eqref{eq:constrained_example} into an unconstrained optimization problem. At a high level, the single expert selector chooses the index of $1$ out of the $n$ experts and returns a one-hot encoding of the choice. For example, in the case of $4$ experts, the selector can choose the first expert by returning the binary vector $[1 ~ 0 ~ 0 ~ 0]^T$. Generally, the selector can choose any of the experts, and its choice is determined by a set of binary encoding variables, as we describe next.
The selector is parameterized by $m$ (recall that $m = \log_2{n}$) binary variables, $z_1, z_2, \dots, z_m$, where we view these variables collectively as a binary number: $z_m z_{m-1} \dots z_1$. The integer represented by the latter binary number determines which expert to select. More formally, let $l$ be the integer represented by the binary number $z_m z_{m-1} \dots z_1$. The selector is a function ${r}: \mathbb{R}^{m} \to \{0,1\}^{n}$ which maps $z := [z_1, z_2, \dots, z_m]^T$ to a one-hot encoding of the integer $(l+1)$. For example, if all the $z_i$'s are $0$, then the selector returns a one-hot encoding of the integer $1$. Next, we define the selector $r(z)$. For easier exposition, we start with the special case of $4$ experts and then generalize to $n$ experts.
\textbf{Special case of $4$ experts:} In this case, the selector uses two binary variables $z_1$ and $z_2$. Let $l$ be the integer represented by the binary number $z_2 z_1$. Then, the selector should return a one-hot encoding of the integer $(l+1)$. To achieve this, we define the selector $r(z)$ as follows:
\begin{align} \label{eq:r_z_4}
r(z) =
\begin{bmatrix}
\bar{z_1} \bar{z_2}, ~~
{z_1} \bar{z_2}, ~~
\bar{z_1} {z_2}, ~~
{z_1} {z_2}
\end{bmatrix}^{T}
\end{align}
where $\bar{z_i} := 1 - z_i$. By construction, exactly one entry in $r(z)$ is 1 (specifically, $r(z)_{l+1} = 1$) and the rest of the entries are zero. For example, if $z_1 = z_2 = 0$, then $r(z)_1 = 1$ and $r(z)_i = 0$ for $i \in \{2,3,4\}$.
\textbf{General case of $n$ experts: } Here we generalize the selector $r(z)$ to the case of $n$ experts. To aid in the presentation, we make the following definition. For any non-negative integer $l$, we define $\mathcal{B}(l)$ as the set of indices of the nonzero entries in the binary representation of $l$ (where we assume that the least significant bit is indexed by $1$). For example, $\mathcal{B}(0) = \emptyset$, $\mathcal{B}(1) = \{1\}$, $\mathcal{B}(2) = \{2\}$, and $\mathcal{B}(3) = \{1, 2\}$. For every $i \in [n]$, we define the $i$-th entry of $r(z)$ as follows:
\begin{align} \label{eq:gate_single}
{r(z)}_i = \prod_{j \in \mathcal{B}(i-1)} (z_j) \prod_{j \in [m] \setminus \mathcal{B}(i-1)} (1 - z_j)
\end{align}
In the above, $r(z)_i$ is a product of $m$ binary variables, which is equal to $1$ iff the integer $(i-1)$ is represented by the binary number $z_m z_{m-1} \dots z_1$. Therefore, $r(z)$ returns a one-hot encoding of the index of the selected expert. Note that when $n=4$, definitions \eqref{eq:r_z_4} and \eqref{eq:gate_single} are equivalent.
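A direct Python sketch of \eqref{eq:gate_single} is as follows (the function name is ours); the loop variable \texttt{l} below plays the role of $i - 1$:
\begin{verbatim}
def selector(z):
    # r(z): one-hot encoding of (l + 1), where l is the integer
    # represented by the binary number z_m ... z_1.
    m = len(z)
    out = []
    for l in range(2 ** m):  # entry index i = l + 1
        bits = [(l >> j) & 1 for j in range(m)]
        val = 1.0
        for j in range(m):
            val *= z[j] if bits[j] else (1.0 - z[j])
        out.append(val)
    return out

print(selector([0, 0]))  # [1.0, 0.0, 0.0, 0.0]: expert 1 selected
\end{verbatim}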
\subsubsection{Unconstrained Minimization} \label{sec:combinatorial_gating}
In this section, we develop a combinatorial gate which allows for transforming Problem \eqref{eq:constrained_example} into an unconstrained optimization problem. We design this gate by creating $k$ instances of the single expert selector $r(.)$ and then taking a convex combination of these $k$ instances. More formally, for every $i \in [k]$, let $z^{(i)} \in \{0,1\}^{m}$ be a (learnable) binary vector, so that the output of the $i$-th instance of the selector is $r(z^{(i)})$. Let $Z$ be a $k \times m$ matrix whose $i$-th row is $z^{(i)}$. Moreover, let $\alpha \in \mathbb{R}^{k}$ be a vector of learnable parameters. We define the \textsl{combinatorial gate} $q$ as follows:
\begin{align*}
q(\alpha, Z) = \sum_{i=1}^k {\sigma}(\alpha)_i {r}(z^{(i)}),
\end{align*}
where we recall that $\sigma(.)$ is the softmax function. Since for every $i \in [k]$, $r(z^{(i)})$ is a one-hot vector, we have $\| q(\alpha, Z) \|_0 \leq k$. Moreover, since the weights of the selectors are obtained using a softmax, we have $q(\alpha, Z) \geq 0$ and $\sum_{i=1}^{n} q(\alpha, Z)_i = 1$. Thus, $q(\alpha, Z)$ has the same interpretation of $w$ in Problem \eqref{eq:constrained_example}, without requiring any constraints. Therefore, we propose replacing $w$ in the objective of Problem \eqref{eq:constrained_example} with $q(\alpha, Z)$ and removing all the constraints. This replacement leads to an equivalent unconstrained optimization problem, as we state in the next proposition.
\begin{proposition} \label{prop:equivalence}
Problem \eqref{eq:constrained_example} is equivalent\footnote{By equivalent we mean that the two problems have the same optimal objective, and given an optimal solution for one problem, we can construct an optimal solution for the other.} to:
\begin{align} \label{eq:unconstrained_example}
\begin{split}
\min_{f_1, \dots f_n, \alpha, Z} ~~~~ & \frac{1}{N} \sum_{(x, y) \in \mathcal{D}} \ell\Big(y, \sum_{i = 1}^{n} f_i(x) q(\alpha, Z)_i \Big) \\
& z^{(i)} \in \{ 0, 1 \}^{m}, ~ i \in [k]
\end{split}
\end{align}
\end{proposition}
The proof of Proposition \ref{prop:equivalence} is in Appendix \ref{appendix:proofs}. Unlike Problem \eqref{eq:constrained_example}, Problem \eqref{eq:unconstrained_example} does not involve any constraints, aside from requiring binary variables. However, these binary variables cannot be directly handled using first-order methods. Next, we discuss how to smooth the binary variables in order to obtain a continuous relaxation of Problem \eqref{eq:unconstrained_example}.
\subsubsection{Smooth Gating} \label{sec:smooth_gating}
In this section, we present a procedure to smooth the binary variables in Problem \eqref{eq:unconstrained_example} and discuss how the resulting problem can be optimized using first-order methods. The procedure relies on the \textsl{smooth-step} function, which we define next.
\textbf{Smooth-step Function: } This is a continuously differentiable and S-shaped function, similar in shape to the logistic function. However, unlike the logistic function, the smooth-step function can output 0 and 1 exactly for sufficiently large magnitudes of the input. The smooth-step and logistic functions are depicted in Appendix \ref{sec:smooth_step}. More formally, given a non-negative scaling parameter $\gamma$, the smooth-step function, $S: \mathbb{R} \to \mathbb{R}$, is a cubic piecewise polynomial defined as follows:
$$
S(t) =
\begin{cases}
0 & \text{ if } t \leq -\gamma/2 \\
-\frac{2}{\gamma^{3}}t^3 + \frac{3}{2\gamma}t + \frac{1}{2} & \text{ if } -\gamma/2 \leq t \leq \gamma/2 \\
1 & \text{ if } t \geq \gamma/2
\end{cases}
$$
The parameter $\gamma$ controls the width of the fractional region (i.e., the region where the function is strictly between 0 and 1). Note that $S(t)$ is continuously differentiable at all points---this follows since at the boundary points $\pm \gamma/2$, we have: $S'(-\gamma/2) = S'(\gamma/2) = 0$. This function has been recently used for conditional computation in soft trees \cite{hazimeh2020tree} and is popular in the literature on computer graphics \citep{ebert2003texturing,rost2009opengl}.
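In code, the smooth-step function can be written as follows (a minimal scalar sketch; a practical implementation would be vectorized over tensors):
\begin{verbatim}
def smooth_step(t, gamma=1.0):
    """Cubic smooth-step: 0 below -gamma/2, 1 above gamma/2."""
    if t <= -gamma / 2:
        return 0.0
    if t >= gamma / 2:
        return 1.0
    return -2.0 / gamma**3 * t**3 + 3.0 / (2.0 * gamma) * t + 0.5
\end{verbatim}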
\textbf{Smoothing: } We obtain DSelect-k from the combinatorial gate $q(\alpha, Z)$ by (i) relaxing every binary variable in $Z$ to be continuous in the range $(-\infty, +\infty)$, i.e., $Z \in \mathbb{R}^{k \times m}$, and (ii) applying the smooth-step function to $Z$ element-wise. Formally, DSelect-k is a function $\tilde{q}$ defined as follows:
\begin{align} \label{eq:smoothed_router}
\tilde{q}(\alpha, Z) := q(\alpha, S(Z)) = \sum_{i=1}^k {\sigma}(\alpha)_i {r}\big(S(z^{(i)})\big),
\end{align}
where the matrix $S(Z)$ is obtained by applying $S(\cdot)$ to $Z$ element-wise. Note that $\tilde{q}(\alpha, Z)$ is continuously differentiable so it is amenable to first-order methods. If $S(Z)$ is binary, then $ \tilde{q}(\alpha, Z)$ selects at most $k$ experts (this holds since $\tilde{q}(\alpha, Z) = q(\alpha, S(Z))$, and from Section \ref{sec:combinatorial_gating}, $q$ selects at most $k$ experts when its encoding matrix is binary). However, when $S(Z)$ has any non-binary entries, then more than $k$ experts can be potentially selected, meaning that the cardinality constraint will not be respected. In what follows, we discuss how the gate can be optimized using first-order methods, while ensuring that $S(Z)$ converges to a binary matrix so that the cardinality constraint is enforced.
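The following sketch assembles the static gate from the \texttt{selector} and \texttt{smooth\_step} helpers above (Python loops are used for readability; an actual implementation would use differentiable tensor operations):
\begin{verbatim}
def dselect_k(alpha, Z, gamma=1.0):
    """Static DSelect-k gate: q(alpha, S(Z))."""
    S_Z = np.vectorize(smooth_step)(Z, gamma)   # element-wise S
    w = np.exp(alpha - alpha.max())
    w /= w.sum()                                # softmax(alpha)
    return sum(w[i] * selector(S_Z[i]) for i in range(len(w)))
\end{verbatim}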
We propose using $\tilde{q}(\alpha, Z)$ in MoE, which leads to the following optimization problem:
\begin{align} \label{eq:smooth_optimization_no_entropy}
\begin{split}
\min_{f_1, \dots f_n, \alpha, Z} ~~~~ & \frac{1}{N} \sum_{(x, y) \in \mathcal{D}} \ell\Big(y, \sum_{i = 1}^{n} f_i(x) \tilde{q}(\alpha, Z)_i \Big)
\end{split}
\end{align}
Problem \eqref{eq:smooth_optimization_no_entropy} can be viewed as a continuous relaxation of Problem \eqref{eq:unconstrained_example}. If the experts are differentiable, then the objective of Problem \eqref{eq:smooth_optimization_no_entropy} is differentiable. Thus, we propose optimizing MoE end-to-end using first-order methods. \textcolor{black}{We note that $\tilde{q}(\alpha, Z)$ uses $k + k\lceil \log_2{n} \rceil$ learnable parameters: $k$ for $\alpha$ and $km = k\lceil \log_2{n} \rceil$ for $Z$. In contrast, the Top-k and softmax gates (discussed in Section \ref{sec:MoE}) use $n$ parameters. Thus, for relatively small $k$, our proposal uses a smaller number of parameters.} Next, we discuss how the DSelect-k gate's parameters should be initialized in order to ensure that it is trainable.
\textbf{Initialization: } By the definition of the smooth-step function, if $S(Z_{ij})$ is binary then $S'(Z_{ij}) = 0$, and consequently $\frac{\partial \ell}{\partial Z_{ij}} = 0$. This implies that, during optimization, if $S(Z_{ij})$ becomes binary, the variable $Z_{ij}$ will not be updated in any subsequent iteration. Thus, we have to be careful about the initialization of $Z$. For example, if $Z$ is initialized so that $S(Z)$ is a binary matrix then the gate will not be trained. To ensure that the gate is trainable, we initialize each $Z_{ij}$ so that $0 < S(Z_{ij}) < 1$. This way, the $Z_{ij}$'s can have nonzero gradients at the start of optimization.
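For instance, the following sketch initializes $Z$ inside the fractional region; the particular range is an illustrative choice, and any values strictly inside $(-\gamma/2, \gamma/2)$ would do:
\begin{verbatim}
k, m, gamma = 2, 3, 1.0     # illustrative sizes (n = 2**m experts)
rng = np.random.default_rng(0)
# uniform in a strict subinterval of (-gamma/2, gamma/2), so every
# S(Z_ij) lies strictly in (0, 1) and has a nonzero gradient
Z = rng.uniform(-0.4 * gamma, 0.4 * gamma, size=(k, m))
\end{verbatim}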
\textbf{Accelerating Convergence to Binary Solutions: }
Recall that we need $S(Z)$ to converge to a binary matrix, in order for the gate $\tilde{q}$ to respect the cardinality constraint (i.e., to select at most $k$ experts). Empirically, we observe that if the optimizer runs for a sufficiently large number of iterations, then $S(Z)$ typically converges to a binary matrix. However, early stopping of the optimizer can be desirable in practice for computational and statistical considerations, and this can prevent $S(Z)$ from converging. To encourage faster convergence towards a binary $S(Z)$, we will add an entropy regularizer to Problem \eqref{eq:smooth_optimization_no_entropy}. The following proposition is needed before we introduce the regularizer.
\begin{proposition} \label{prop:simplex}
For any $z \in \mathbb{R}^{m}$, $\alpha \in \mathbb{R}^{k}$, and $Z \in \mathbb{R}^{k \times m}$, ${r}(S(z))$ and $\tilde{q}(\alpha, Z)$ belong to the probability simplex.
\end{proposition}
The proof of the proposition is in Appendix \ref{appendix:proofs}. Proposition \ref{prop:simplex} implies that, during training, the output of each single expert selector used by $\tilde{q}(\alpha, Z)$, i.e., $r(S(z^{(i)}))$ for $i \in [k]$, belongs to the probability simplex. Note that the entropy of each $r(S(z^{(i)}))$ is minimized by any one-hot encoded vector. Thus, for each $r(S(z^{(i)}))$, we add an entropy regularization term that encourages convergence towards one-hot encoded vectors; equivalently, this encourages convergence towards a binary $S(Z)$. Specifically, we solve the following regularized variant of Problem \eqref{eq:smooth_optimization_no_entropy}:
\begin{align*}
\begin{split}
\min_{f_1, \dots f_n, \alpha, Z} ~~~~ & \sum_{(x, y) \in \mathcal{D}} \frac{1}{N} \ell\Big(y, \sum_{i = 1}^{n} f_i(x) \tilde{q}(\alpha, Z)_i \Big) + \lambda \Omega(Z)
\end{split}
\end{align*}
where $\Omega(Z) := \sum_{i=1}^{k} h \big(r(S(z^{(i)})) \big)$ and $h(.)$ is the entropy function. The hyperparameter $\lambda$ is non-negative and controls how fast each selector converges to a one-hot encoding. In our experiments, we tune over a range of $\lambda$ values. When selecting the best hyperparameters from tuning, we disregard any $\lambda$ whose corresponding solution does not have a binary $S(Z)$. In Appendix \ref{sec:convergence_appendix}, we report the number of training steps required for $S(Z)$ to converge to a binary matrix, on several real datasets. Other alternatives to ensure that $S(Z)$ converges to a binary matrix are also possible. One alternative is to regularize the entropy of each entry in $S(Z)$ separately. Another alternative is to anneal the parameter $\gamma$ of the smooth-step function towards zero.
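A sketch of $\Omega(Z)$, building on the helpers above (the clipping constant guards against $\log 0$ and is a detail of this sketch, not part of the formulation):
\begin{verbatim}
def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def omega(Z, gamma=1.0):
    """Entropy regularizer: sum_i h(r(S(z_i)))."""
    S_Z = np.vectorize(smooth_step)(Z, gamma)
    return sum(entropy(selector(S_Z[i])) for i in range(Z.shape[0]))
\end{verbatim}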
\textcolor{black}{\textbf{Softmax-based Alternative to Binary Encoding:} Recall that our proposed selectors in \eqref{eq:smoothed_router}, i.e., ${r}\big(S(z^{(i)})\big)$, $i \in [k]$, learn one-hot vectors \textsl{exactly} (by using binary encoding).
One ``practical'' alternative for learning a one-hot vector is by using a softmax function with temperature annealing (or entropy regularization). Theoretically, this alternative cannot return a one-hot vector, but after training, the softmax output can be transformed to a one-hot vector using a heuristic (e.g., by taking an argmax). In Appendix \ref{sec:synthetic}, we perform an ablation study of our gate in which we replace the selectors with softmax functions (along with temperature annealing or entropy regularization). Although this alternative is heuristic, we consider it due to its popularity.}
\subsection{DSelect-k for Per-example Gating} \label{sec:per-example_gating}
In this section, we generalize the static version of DSelect-k, $\tilde{q}(\alpha, Z)$, to the per-example gating setting. The key idea is to make the gate's parameters $\alpha$ and $Z$ functions of the input, so that the gate can make decisions on a per-example basis. Note that many functional forms are possible for these parameters. For simplicity and based on our experiments, we choose to make $\alpha$ and $Z$ linear functions of the input example. More formally, let $G \in \mathbb{R}^{k \times p}$, $W^{(i)} \in \mathbb{R}^{m \times p}$, $i \in [k]$, be a set of learnable parameters. Given an input example $x \in \mathbb{R}^{p}$, we set $\alpha = G x$ and $z^{(i)} = W^{(i)} x$ in $\tilde{q}(\alpha, Z)$ (to simplify the presentation, we do not include bias terms). Thus, the per-example version of DSelect-k is a function $v$ defined as follows:
$$
v(G, W, x) = \sum_{i=1}^k {\sigma}(G x)_i r\big(S( W^{(i)} x)\big).
$$
In the above, the term $r\big(S( W^{(i)} x)\big)$ represents the $i$-th single expert selector, whose output depends on the example $x$; thus different examples are free to select different experts. The term ${\sigma}(G x)_i$ determines the input-dependent weight assigned to the $i$-th selector. The gate $v(G, W, x)$ is continuously differentiable in the parameters $G$ and $W$, so we propose optimizing it using first-order methods. Similar to the case of static gating, if $S( W^{(i)} x)$ is binary for all $i \in [k]$, then each $r\big(S( W^{(i)} x)\big)$ will select exactly one expert, and the example $x$ will be assigned to at most $k$ experts.
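A per-example sketch under the same assumptions (bias terms omitted, as in the text; \texttt{G} is the $k \times p$ matrix and \texttt{W} is a list holding the $k$ matrices $W^{(i)}$):
\begin{verbatim}
def per_example_gate(G, W, x, gamma=1.0):
    """v(G, W, x) with alpha = G x and z_i = W[i] x."""
    a = G @ x
    w = np.exp(a - a.max())
    w /= w.sum()                                # softmax(G x)
    out = np.zeros(2 ** W[0].shape[0])
    for i in range(len(W)):
        z = np.vectorize(smooth_step)(W[i] @ x, gamma)
        out += w[i] * selector(z)
    return out
\end{verbatim}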
To encourage $S( W^{(i)} x)$, $i \in [k]$ to become binary, we introduce an entropy regularizer (similar in essence to that in static gating). However, unlike static gating, the regularizer here should be on a per-example basis, so that each example respects the cardinality constraint. By Proposition \ref{prop:simplex}, for any $i \in [k]$, $r\big(S( W^{(i)} x)\big)$ belongs to the probability simplex. Thus, for each example $x$ in the training data, we introduce a regularization term of the form: $ \Omega(W, x) := \sum_{i \in [k]} h\Big(r\big(S( W^{(i)} x)\big) \Big)$, and minimize the following objective function:
\begin{align*}
\sum_{(x, y) \in \mathcal{D}} \Big( \frac{1}{N} \ell \Big( y, \sum_{i = 1}^{n} f_i(x) v(G, W, x)_i \Big) + \lambda \Omega(W, x) \Big),
\end{align*}
where $\lambda$ is a non-negative hyperparameter. Similar to the case of static gating, we tune over a range of $\lambda$ values, and we only consider the choices of $\lambda$ that force the average number of selected experts per example to be less than or equal to $k$. If the application requires that the cardinality constraint be satisfied strictly for every example (not only on average), then annealing $\gamma$ in the smooth-step function towards zero enforces this.
\section{Experiments} \label{sec:experiments}
We study the performance of the DSelect-k gate in the context of MTL and compare with state-of-the-art gates and baselines. In the rest of this section, we present experiments on the following real MTL datasets: MovieLens, Multi-MNIST, and Multi-Fashion MNIST. \textcolor{black}{Moreover, in Appendix \ref{sec:appendix_experiments}, we present two additional experiments: (i) an experiment on data from a large-scale, real-world recommender system and (ii) an experiment on synthetic data (with up to $128$ tasks), in which we study the predictive and expert selection performance and perform ablation studies.}
\textbf{Competing Methods: } We focus on a multi-gate MoE, and study the DSelect-k and Top-k gates in both the static and per-example gating settings. In addition, we consider two MTL baselines. The first baseline is a MoE with a softmax gate (which uses all the available experts). The second is a \textsl{shared bottom} model \citep{caruana1997multitask}, where all tasks share the same bottom layers, which are in turn connected to task-specific neural networks.
\textbf{Experimental Setup: } All competing models were implemented in TensorFlow 2. We used Adam \citep{kingma2014adam} and Adagrad \citep{duchi2011adaptive} for optimization, and we tuned the key hyperparameters using random grid search (with an average of $5$ trials per grid point). Full details on the setup are in Appendix \ref{sec:experimental_details}.
\subsection{MovieLens}
\textbf{Dataset:} MovieLens \citep{harper2015movielens} is a movie recommendation dataset containing records for $4{,}000$ movies and $6{,}000$ users. Following \citet{wang2020small}, for every user-movie pair, we constructed two tasks. Task 1 is a binary classification task for predicting whether the user will watch a particular movie. Task 2 is a regression task to predict the user's rating (in $\{1,2,\dots, 5\}$) for a given movie. We use $1.6$ million examples for training and $200{,}000$ for each of the validation and testing sets.
\textbf{Experimental Details:} We use the cross-entropy and squared error losses for tasks 1 and 2, respectively. We optimize a weighted average of the two losses, i.e., the final loss function is $\alpha (\text{Loss of Task 1}) + (1-\alpha) (\text{Loss of Task 2}) $, and we report the results for $\alpha \in \{0.1, 0.5, 0.9 \}$. The same loss function is also used for tuning and testing. The architecture consists of a multi-gate MoE with $8$ experts, where each of the experts and the task-specific networks is composed of ReLU-activated dense layers. For each $\alpha$, we tune over the optimization and gate-specific hyperparameters, including the number of experts to select (i.e., $k$ in DSelect-k and Top-k). After tuning, we train each model for $100$ repetitions (using random initialization) and report the averaged results. For full details, see Appendix \ref{sec:movie_lens_extra_details}.
\textbf{Results:} In Table \ref{table:movielens}, we report the test loss and the average number of selected experts\footnote{The number of experts is averaged over tasks and training repetitions, so it may take non-integer values.}. The results indicate that for all values of $\alpha$, at least one of our DSelect-k gates (static or per-example) outperforms the competing methods, in terms of both the test loss and the number of selected experts. In only one case ($\alpha=0.1$), Top-k (static) performed marginally better than the DSelect-k gate (static), but DSelect-k (per-example) achieves the best performance across all methods in this case. Notably, the softmax MoE is uniformly outperformed by the DSelect-k and Top-k gates, so sparsity in gating seems to be beneficial on this dataset. Moreover, there does not seem to be a clear winner between static and per-example gating: each performs better for certain $\alpha$'s.
\subsection{Multi-MNIST and Multi-Fashion MNIST}
\textbf{Datasets: } We consider two image classification datasets: Multi-MNIST and Multi-Fashion \citep{sabour2017dynamic}, which are multi-task variants of the MNIST \citep{lecun2010mnist} and Fashion MNIST \citep{xiao2017fashion} datasets. We construct the Multi-MNIST dataset similar to \citet{sabour2017dynamic}: (i) we start by uniformly sampling two images from the MNIST dataset and overlaying them on top of each other, and (ii) we shift the first and second digits by $4$ pixels (in each direction) towards the top-left and bottom-right corners, respectively. This procedure leads to $36 \times 36$ images with some overlap between the digits. The Multi-Fashion dataset is constructed in a similar way by overlaying images from the Fashion MNIST dataset. For each dataset, we consider two classification tasks: Task 1 is to classify the top-left item and Task 2 is to classify the bottom-right item. We use $100{,}000$ examples for training, and $20{,}000$ examples for each of the validation and testing sets.
\textbf{Experimental Details: } We use cross-entropy loss for each of the tasks and optimize the sum of the two losses\footnote{Due to the symmetry in the problem, assigning the two tasks equal weights is a reasonable choice.}. The model is a multi-gate MoE with $8$ experts, where each expert is a convolutional neural network and each task-specific network is composed of a number of dense layers. We tune the optimization and gate-specific hyperparameters, including the number of experts to select (i.e., $k$ in DSelect-k and Top-k), and use the average of the task accuracies as the tuning metric. After tuning, we train each model for $100$ repetitions (using random initialization) and report the averaged results. For full details, see Appendix \ref{sec:multi_mnist_extra_details}.
\textbf{Results: } In Table \ref{table:mnist}, we report the test accuracy and the number of selected experts for the Multi-MNIST and Multi-Fashion MNIST datasets. On Multi-MNIST, DSelect-k (static) outperforms Top-k, in terms of both the task accuracies and the number of selected experts. For example, it achieves over $1\%$ improvement in Task 2's accuracy compared to Top-k (static). DSelect-k (static) comes close to the performance of the Softmax MoE, but uses fewer experts ($1.7$ versus $8$ experts). Here the DSelect-k (per-example) does not offer an improvement over the static variant (unlike on the MovieLens dataset). On Multi-Fashion, we again see that DSelect-k (static) performs best in terms of accuracy.
\begin{table*}[htbp]
\centering
\caption{\small{Test loss (with standard error) and average number of selected experts on MovieLens. The parameter $\alpha$ determines the weight of Task 1's loss (see text for details). The test loss is multiplied by $10^4$.}}
\label{table:movielens}
\resizebox{0.85\columnwidth}{!}{
\begin{tabular}{c|l|cc|cc|cc|}
\cline{3-8}
\multicolumn{1}{l}{} & \multirow{2}{*}{} & \multicolumn{2}{c|}{$\alpha = 0.1$} & \multicolumn{2}{c|}{$\alpha = 0.5$} & \multicolumn{2}{c|}{$\alpha = 0.9$} \\
\multicolumn{1}{l}{} & & Loss & Experts & Loss & Experts & Loss & Experts \\ \hline
\multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Static\end{tabular}}} & DSelect-k & $4015 \pm 5$ & 2.7 & $\bm{3804} \pm 3$ & \bm{$1.5$} & $\bm{3690} \pm 2$ & \bm{$1.3$} \\
\multicolumn{1}{|c|}{} & Top-k & $\bm{4012} \pm 4$ & \bm{$2.0$} & $3818 \pm 2$ & 2.0 & $3693 \pm 6$ & 2.0 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Per-example\end{tabular}}} & DSelect-k & $\bm{4006} \pm 6$ & \bm{$1.5$} & $\bm{3823} \pm 3$ & \bm{$1.2$} & $\bm{3679} \pm 2$ & $\bm{1.1}$ \\
\multicolumn{1}{|c|}{} & Top-k & $4027 \pm 8$ & 2.0 & $3841 \pm 4$ & 2.0 & $3741 \pm 3$ & 2.0 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Baselines}} & Softmax MoE & $4090 \pm 1$ & 8.0 & $3960 \pm 3$ & 8.0 & $3847 \pm 10$ & 8.0 \\
\multicolumn{1}{|c|}{} & Shared Bottom & $4037 \pm 2$ & - & $3868 \pm 2$ & - & $3687 \pm 1$ & - \\ \hline
\end{tabular}
}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{\small{Test accuracy (with standard error) and number of selected experts on Multi-MNIST/Fashion.}}
\label{table:mnist}
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{cl|ccc|ccc|}
\cline{3-8}
\multicolumn{1}{l}{} & & \multicolumn{3}{c|}{Multi-MNIST} & \multicolumn{3}{c|}{Multi-Fashion MNIST} \\
\multicolumn{1}{l}{} & & Accuracy 1 & Accuracy 2 & Experts & Accuracy 1 & Accuracy 2 & Experts \\ \hline
\multicolumn{1}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Static\end{tabular}}} & DSelect-k & $\bm{92.56} \pm 0.03$ & $\bm{90.98} \pm 0.04$ & $\bm{1.7}$ & $\bm{83.78} \pm 0.05$ & $\bm{83.34} \pm 0.05$ & ${1.8}$ \\
\multicolumn{1}{|c|}{} & Top-k & $91.93 \pm 0.06$ & $90.03 \pm 0.08$ & 4 & $83.44 \pm 0.07$ & $82.66 \pm 0.08$ & 4 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Per-example\end{tabular}}} & DSelect-k & $\bm{92.42}\pm 0.03$ & $\bm{90.7} \pm 0.03$ & $\bm{1.5}$ & $\bm{83.69} \pm 0.04$ & ${83.13} \pm 0.04$ & $\bm{1.5}$ \\
\multicolumn{1}{|c|}{} & Top-k & $92.27 \pm 0.03$ & $90.45 \pm 0.03$ & 4 & $83.66 \pm 0.04$ & $\bm{83.15} \pm 0.04$ & 4 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Baselines}} & Softmax MoE & $92.61 \pm 0.03$ & $91.0 \pm 0.03$ & 8 & $83.48 \pm 0.04$ & $82.81 \pm 0.04$ & 8 \\
\multicolumn{1}{|c|}{} & Shared Bottom & $91.3 \pm 0.04$ & $89.47 \pm 0.04$ & - & $82.05 \pm 0.05$ & $81.37 \pm 0.06$ & - \\ \hline
\end{tabular}
}
\end{table*}
\section{Conclusion}
We introduced DSelect-k: a continuously differentiable and sparse gate for MoE, which can be trained using first-order methods. Given a user-specified parameter $k$, the gate selects at most $k$ of the $n$ experts. Such direct control over the sparsity level is typically handled in the literature by adding a cardinality constraint to the optimization problem. One of the key ideas we introduced is a binary encoding scheme that allows for selecting $k$ experts, without requiring any constraints in the optimization problem. We studied the performance of DSelect-k in MTL settings, on both synthetic and real datasets. Our experiments indicate that DSelect-k can achieve significant improvements in prediction and expert selection, compared to state-of-the-art MoE gates and MTL baselines.
\\ \\
\textbf{Acknowledgements:} Part of this work has been done when Hussein Hazimeh was at Google Research. At MIT, Hussein Hazimeh and Rahul Mazumder acknowledge research funding from the Office of Naval Research [Grant ONR-N000141812298].
\clearpage
\bibliographystyle{plainnat}
\section{Introduction}
The p-mode oscillation frequencies vary with the activity cycle of the Sun \cite{sal09} and other solar-like stars \cite{gar10}. However, as observed in the Sun, these temporal variations present differences in amplitude, phase, and frequency among individual modes. Thus the computed frequency separations will also be observed to vary with time.
Comparing these frequency separations with those from stellar models will lead to
differences in the fitted global parameters (radius, mass, age) of the star.
Generally, when the frequencies are measured we do not know the phase
of the activity cycle, unless the star can be measured continuously for
long periods of time.
We should then take into account, when inferring the stellar parameters, that
these values may depend on when the data were obtained.
\section{Analysis of time-series observations and determination of frequencies}
We analyzed 5202 days of {velocity} observations collected by the space-based instrument GOLF onboard {\it SoHO} spacecraft, covering more than 14 years between 1996 and 2010 with an overall duty cycle of 95.4\% \cite{gar05}. {This dataset was split into non-independent, contiguous 365-day subseries with 91.25-day overlap} and their associated power spectra fitted to extract the mode parameters \cite{sal07} using a standard likelihood maximization function (power spectrum with a $\chi^2$ with 2 d.o.f. statistic). The formal uncertainties in each parameter were then derived from the inverse Hessian matrix.
The individual frequency separations measured with 365-day time series were obtained with a precision of about 0.07~$\mu$Hz.
Figure~\ref{fig:data} (left panel) shows the temporal variations of the individual $l = 0$ and $l = 2$ mode frequencies averaged between 2200 and 3300~$\mu$Hz (the reference values being taken as the average over 1996--1997). It is clear that the individual Sun-as-a-star p-mode frequencies have different temporal variations \cite{sal09}, which are consistent between radial velocity (GOLF) and intensity VIRGO observations \cite{sal10}. The averaged large ($\Delta\nu$) and small ($\delta\nu$) frequency separations also show significant temporal variations with solar activity. For example, Figure~\ref{fig:data} (right panel) shows that the small frequency separation $\delta\nu_{02}$ varies from peak-to-peak by about $0.2 \pm 0.02 \mu$Hz over the solar cycle, {which is presumably consistent with being signatures from surface effects}. For a broader perspective, it is very important to note that the Sun is considered a low-activity star, and many solar-type stars could exhibit much larger variations with activity cycle.
\section{Solar global parameters}
\subsection{Fitting strategy}
In order to determine the global model parameters of the Sun, $P$,
we match the observational data $O$ to
the observables $M$ from stellar models \cite{jcd08a,jcd08b}, which are characterized by $P$.
$P$ comprises the mass M, age A, initial hydrogen (or helium) mass fraction $X_0$, initial heavy element mass fraction $Z_0$, and the mixing-length parameter $\alpha$ (from the standard mixing-length theory of \cite{bv54}).
The parameters that best describe the data are obtained by minimizing a $\chi^2$ function:
\begin{equation}
\chi^2 = \sum_{i=1}^M \left (\frac{O_i - M_i}{\sigma_i} \right )^2,
\label{eqn:chi2}
\end{equation}
where there are $i=1,2,\ldots,M$ independent observations and $\sigma_i$ is the observational error on $O_i$.
We use the Levenberg-Marquardt algorithm (LM) to minimize Eq.~\ref{eqn:chi2}
\cite{cre07,met09,ste09,cha10,mat10}.
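For illustration, such a weighted least-squares fit can be set up with a standard LM routine as in the following sketch; here \texttt{model\_fn}, the parameter packing and the initial guess are placeholders standing in for the stellar model pipeline, not the code actually used in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def residuals(params, observed, sigma, model_fn):
    # the chi^2 objective above is the sum of squares of these
    return (observed - model_fn(params)) / sigma

# params packs (M, A, Z0, alpha); X0 is held fixed (see below)
# fit = least_squares(residuals, x0, args=(O, sigma, model_fn),
#                     method='lm')
\end{verbatim}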
The global objective of this work is to investigate the effect of the shifted frequency values on the determination of $P$ for any star with solar-like oscillations.
So we must define a coherent and consistent
method to fit the observational data to obtain
the set of best-fitting parameters not only for the Sun, but for other stars too.
Our strategy is to first fit the non-seismic and the average seismic quantities to obtain $P$ to
a reasonable range, and then using $P$ as initial guesses, proceed to use the individual
frequency separations (large and small) to obtain the final set of best-fitting parameters.
LM is a local minimization method and can therefore be sensitive to initial conditions.
To reduce the risk of converging to a merely local minimum, we search for the best parameters by
initializing the minimization with different parameter values.
We set the initial $M$ as 1.0 \,$M_{\odot}$, and the initial A varies between 4 and 6 Gyr in steps of 0.5 Gyr.
Additionally, we hold the parameter $X_0$ fixed during each minimization, and instead
repeat the process for three values of $X_0$: 0.69, 0.71, 0.73. This gives in total 15 minimizations using the
input data at maximum activity and again at minimum activity.
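Schematically, with \texttt{run\_lm} a hypothetical wrapper around a single LM minimization of the kind sketched above, the grid of initializations reads:
\begin{verbatim}
ages = [4.0, 4.5, 5.0, 5.5, 6.0]      # initial ages (Gyr)
X0_values = [0.69, 0.71, 0.73]        # X0 held fixed per fit
fits = [run_lm(mass0=1.0, age0=a, X0=x)
        for x in X0_values for a in ages]   # 15 minimizations
\end{verbatim}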
\subsection{Results}
From the 15 best-fitted $P$, we selected those sets where $\chi^2 \le 29.1$ (4 fitted parameters, 19 data points to give 14 degrees of freedom at the 1\% confidence level) and where the fitted $T_{\rm eff}$ fell within 1$\sigma$ of 5777 K ($T_{\rm eff}$ was not a constraint in the second part of the fitting strategy).
These sets are represented in Fig.~\ref{fig:models}, where we show the values of age versus mass on the left panel, and radius versus effective temperature on the right.
The dark/lighter filled circles are the results for fitting during maximum/minimum activity.
The left panel of Fig.~\ref{fig:models} shows an offset in the fitted $P$ between
minimum and maximum activity.
To quantify this offset, we fitted a linear function to each group of results.
The difference between the fitted ages at opposite activity phases is 0.2 Gyr.
Alternatively if we take 4.8 Gyr as the solar age, the mass is fitted as a 1.0 \,$M_{\odot}$\ star at
minimum, and 1.03 \,$M_{\odot}$\ at maximum.
The right panel of Fig.~\ref{fig:models} shows the fitted $R$ and $T_{\rm eff}$ for these same models
(neither were input constraints).
We find that at minimum activity, the Sun's $T_{\rm eff}$ is about 50 K cooler for a given $R$, or
about 0.5\% smaller in $R$ for a given $T_{\rm eff}$.
\section{Conclusions}
Measuring the oscillation frequencies during different phases of a stellar activity cycle will lead to different values of the individual frequencies. Thus, estimates of the large and small frequency separations will be different depending on the observing period.
Using 365-day subseries of GOLF data, we determined the individual frequency separations to about 0.07 \,$\mu$Hz.
We used stellar models to determine the best-fitting parameters using the observations taken at minimum and maximum activity. We found that the age of the star is on average 0.2 Gyr older using the values from minimum activity, or for a given age, the mass is about 2\% larger using the data at
maximum activity.
We also found a small discrepancy in the fitted radius and effective temperature. At minimum activity,
the Sun is about 50 K cooler for a given fitted radius, or the radius is about 0.5\% smaller for a
given effective temperature.
Although we still need to study whether these differences in fitted values are detectable given the expected uncertainties, we note that \cite{jcd10} quote an error in the age of the planet-hosting star HAT-P-7 of 0.26 Gyr, of the order of this detected change.
The Sun, however, is considered a low-activity star, and
much stronger activity cycles on other solar-type stars have already been detected \cite{gar10}.
With long time-series data such as those from CoRoT or Kepler, we will not only detect
stellar activity cycles, but we will also obtain such high precision on the seismic data that the
uncertainties in the fitted parameters will be smaller than the fitted changes in these parameters.
\begin{figure*}
\includegraphics[width = 0.5\textwidth]{fshift02_golf.eps}
\includegraphics[width = 0.5\textwidth]{deltanu02_golf.eps}
\caption{\label{fig:data} {Left: Frequency shifts ($\mu$Hz) of the $l = 0$ ($\fullcircle$) and $l = 2$ ($\fullsquare$) modes measured from GOLF data. Right: Temporal variations of the averaged small frequency separation $\delta\nu_{02}$ ($\mu$Hz). }}
\end{figure*}
\begin{figure}
\includegraphics[width = 0.5\textwidth]{mass_age.eps}
\includegraphics[width = 0.5\textwidth]{teffrad.eps}
\caption{Left: Fitted age versus mass using the individual large and small frequency separations calculated at maximum (black) and minimum (grey) activity. Each point corresponds to a minimization. The lines are linear fits to the results. Right: The fitted radius and effective temperature for the same minimizations.
\label{fig:models}
}
\end{figure}
\ack
The authors want to thank Catherine Renaud for the calibration and preparation of the GOLF dataset. The GOLF instrument onboard SoHO is a cooperative effort of many individuals, to whom we are indebted. SoHO is a project of international collaboration between ESA and NASA. This research was in part supported by the European Helio- and Asteroseismology Network (HELAS), a major international collaboration funded by the European Commission's Sixth Framework Programme. DS acknowledges the support of the grant PNAyA2007-62650 from the Spanish National Research Plan. This work has been partially supported by the CNES/GOLF grant at the SAp CEA-Saclay.
\section*{References}
\section{Introduction}
\noindent
This paper is concerned with a conjecture of
D.~J. Benson~\cite{Benson:DicksonCompCoho} about the commutative algebra
of group cohomology rings.
There are several results relating the group structure
of a finite group~$G$ to the commutative algebra of its cohomology ring
$H^*(G) = H^*(G,k)$ with coefficients in a field of characteristic~$p$.
For the Krull dimension and depth we have the following inequalities,
where $S$ denotes a Sylow $p$-subgroup of~$G$.
Recall that the $p$-rank of $G$ is the rank of the largest elementary
abelian $p$-subgroup.
\begin{equation}
\label{eqn:QuillenDuflot}
\prank(Z(S)) \leq \operatorname{depth} H^*(S) \leq \operatorname{depth} H^*(G)
\leq \dim H^*(G) = \prank(G) \, .
\end{equation}
See Evens' book~\cite{Evens:book} for proofs of the
first inequality (Duflot's theorem) and the last one (due to Quillen)\@.
The second inequality is Theorem~2.1 of Benson's paper~\cite{Benson:NYJM2}
and
``must be well known''\@. The third is
automatic for finitely generated connected $k$-algebras.
Note that the dimension and depth only depend on $G,p$: not on~$k$.
These inequalities motivate the following definitions.
\begin{defn}
Let $G, p, k,S$ be as above. The
group-theoretic defect $\operatorname{gtD}_p(G)$,
and the
Cohen--Macaulay defect $\delta_p(G)$
are defined by
\begin{xalignat*}{2}
\operatorname{gtD}(G) & = \prank(G) - \prank(Z(S))
&
\delta(G) & = \dim H^*(G,k) - \operatorname{depth} H^*(G,k)
\end{xalignat*}
\end{defn}
\noindent
It follows from Eqn.~\eqref{eqn:QuillenDuflot} that
\begin{xalignat}{3}
\label{eqn:gtD-CMd}
0 \leq \delta(G) & \leq \operatorname{gtD}(G) &
\operatorname{gtD}(G) & = \operatorname{gtD}(S) & \delta(G) & \leq \delta(S) \, .
\end{xalignat}
The term Cohen--Macaulay defect (sometimes deficiency) is already in
use among workers in the field.
To state Benson's conjectures we need some terminology.
\begin{defn}
Let $p,k$ be as above. Let $A$ be a graded commutative $k$-algebra which
is both connected and finitely generated. Connected means that $A^0=k$
and $A^{{}<0}=0$.
Let $\zeta_1,\ldots,\zeta_r$ be a system of homogeneous elements in~$A^{{}>0}$,
and set $n_i = \left|\zeta_i\right| > 0$.
\begin{enumerate}
\item
The system is called a filter-regular system of parameters if multiplication
by~$\zeta_{i+1}$ has finite-dimensional kernel as an endomorphism of
$A/(\zeta_1,\ldots,\zeta_i)$ for each $0 \leq i \leq r$,
where $\zeta_{r+1}=0$.
Observe that a filter-regular system of parameters really is a system of
parameters.
\item
A very strongly quasi-regular system of parameters is a system which is a
filter-regular system of parameters by virtue of the property that
this kernel is restricted to degrees${} \leq n_1+\cdots+n_i+d_i$
for each $0 \leq i \leq r$, where $d_r=-r$ and $d_i = -i-1$ for all $i < r$.
\end{enumerate}
\end{defn}
\begin{theorem}[Benson]
\label{thm:portfolio}
Let $G$ be a finite group, $p$ a prime number and $k$ a field
of characteristic~$p$.
\begin{enumerate}
\item
\label{enum:DicksonFilter}
The cohomology ring $A = H^*(G,k)$ does have filter-regular systems of
parameters: the Dickson invariants (suitably interpreted) form one.
\item
\label{enum:FilterEqual}
Either every filter-regular system of parameters in~$A$ is very strongly
quasi-regular, or none are.
\item
\label{enum:OkuSas}
If the Cohen--Macaulay defect of~$G$ satisfies $\delta(G) \leq 2$ then
there is a very strongly quasi-regular system of parameters in~$A$.
In particular $\delta(G) \leq 2$ holds for all 267 groups of order~$64$.
\item
\label{enum:hierarchy}
If there is a very strongly quasi-regular system of parameters in~$A$,
then the Castelnuovo--Mumford regularity of~$A$ is zero.
\end{enumerate}
\end{theorem}
\begin{proof}
The main reference is Benson's paper~\cite{Benson:DicksonCompCoho}\@.
Part~\ref{enum:DicksonFilter}) is Coroll.~9.8\@ and
Part~\ref{enum:FilterEqual}) is Coroll.~4.7(c)\@, whereas
Part~\ref{enum:hierarchy}) follows from Coroll.~4.7(c) and Theorem~4.2\@.
The first statement of Part~\ref{enum:OkuSas}) is Theorem~1.5
of~\cite{Benson:DicksonCompCoho}\@;
the second one was observed by Carlson~\cite{Carlson:Online3,CarlsonTownsley},
who computed the cohomology ring of every group of order~$64$\@.
The reader may find the tabulated data in~\cite[Appendix]{Benson:MSRI} useful.
\end{proof}
\begin{rk}
A weaker version of the $\delta(G)=2$ case of~\ref{enum:OkuSas}) was also
proved by Okuyama and Sasaki. It is a shame that their paper~\cite{OkuSas:Hsop}
appeared so late: I know that it had completed the refereeing process by the
end of April 2001, but it had been superseded by the time it was finally
published in 2004\@.
\end{rk}
\begin{conj}[Benson~\cite{Benson:DicksonCompCoho}]
\label{conj:Reg}
Let $G$ be a finite group, $p$ a prime number and $k$ a field
of characteristic~$p$. The cohomology ring $H^*(G,k)$ has Castelnuovo--Mumford
regularity zero.
\end{conj}
\begin{conj}
\label{conj:VSQR}
Let $G$ be a finite group, $p$ a prime number and $k$ a field
of characteristic~$p$. The cohomology ring $H^*(G,k)$ always contains a
very strongly quasi-regular system of parameters.
\end{conj}
\begin{rk}
Conjecture~\ref{conj:Reg} is Benson's Conjecture~1.1\@.
By Theorem~\ref{thm:portfolio}~\ref{enum:hierarchy}) it is a weak form of
Conjecture~\ref{conj:VSQR}, which is only implicitly present in Benson's paper.
Kuhn has shown that Conjecture~\ref{conj:Reg} has applications to
the study of central essential cohomology~\cite{Kuhn:Cess}\@.
\end{rk}
\noindent
The conjectures have been verified in two families of cases.
Benson showed in~\cite{Benson:wreathReg} that if Conjecture~\ref{conj:Reg}
holds for~$H$ then it also holds for the wreath product $G = H \wr C_p$.
And the second verification is the following theorem, the main
result of the present paper.
\begin{theorem}
\label{thm:main}
Conjecture~\ref{conj:VSQR} holds for every group of order less than 256\@.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:portfolio}~\ref{enum:OkuSas}) a counterexample has to
have $\delta(G) \geq 3$. By Proposition~\ref{prop:only128} the only groups
of order less than $256$ satisfying $\delta(G) \geq 3$ have order~$128$
and satisfy $\delta(G)=3$. By Proposition~\ref{prop:calc} there are fourteen
groups of order~$128$ with $\delta(G)=3$, and each of these satisfies
the conjecture.
\end{proof}
\section{Reduction to the case $|G|=128$}
\begin{proposition}
\label{prop:only128}
Let $G$ be a group of order less than $256$. Then $\delta(G) \leq 3$;
and if $\delta(G)=3$ then $|G|=128$.
\end{proposition}
\begin{proof}
Let $S$ be a Sylow $p$-subgroup of~$G$. In view of the inequality
$\delta(G) \leq \delta(S)$ and the restriction $|G| < 256$
it suffices to consider the case where $G$ itself is a $p$-group.
So suppose $G$ is a $p$-group with $\delta(G) \geq 3$.
By Lemma~\ref{lemma:Jordan} below it follows that $p=2$,
that $\delta(G)=3$, and that either $|G|=64$ or $|G|=128$.
But Carlson's computations [see Theorem~\ref{thm:portfolio}\ref{enum:OkuSas})
above] show that $\delta(G) \leq 2$ if $\left|G\right|=64$.
\end{proof}
\begin{lemma}
\label{lemma:Jordan}
Let $G$ be a finite group and $p$ a prime number.
\begin{enumerate}
\item
\label{enumi:Jordan3}
If $\delta(G) \geq 3$
or more generally if $\operatorname{gtD}(G) \geq 3$ then $p^5$ divides the order of~$G$.
If $p=2$ or $p=3$ then $p^6$ must divide $\left|G\right|$.
\item
\label{enumi:Jordan4}
If $\delta(G) \geq 4$ or more generally if $\operatorname{gtD}(G) \geq 4$ then $p^6$ divides
the order of~$G$. If $p=2$ or $p=3$ then $p^7$ must divide $\left|G\right|$.
\item
\label{enumi:Jordan128}
If $p=2$ and $\operatorname{gtD}(G) \geq 4$ then $\left|G\right|$ is divisible by $256$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{enumi:Jordan3}):
By Eqn.~\eqref{eqn:gtD-CMd} we have $\delta(G) \leq \operatorname{gtD}(G)$.
It is apparent from the definition that a finite group and its Sylow
$p$-subgroups have the same group-theoretic defect.
So it suffices to consider the case where $G$~is a $p$-group and
$\operatorname{gtD}(G)\geq 3$.
Every nontrivial $p$-group has a centre of rank at least one. So a $p$-group
with $\operatorname{gtD} \geq 3$ must have a subgroup which is elementary abelian of rank~$4$.
It must be nonabelian too, so the order must be at least~$p^5$.
Suppose that
there is such a group of order~$p^5$. Then the centre is cyclic of order~$p$,
and there is an elementary abelian subgroup~$V$ of order~$p^4$.
This $V$~is a maximal subgroup of a $p$-group and therefore normal.
Let $a \in G$ lie outside~$V$. Then $G = \langle a,V\rangle$
and the conjugation action of $a$~on $V$ must be nontrivial of order~$p$.
So the minimal polynomial of the action divides $X^p-1 = (X-1)^p$.
This means that the action has a Jordan normal form with sole eigenvalue~$1$.
The eigenvectors in each Jordan block belong to the centre of~$G$. So as
the centre is cyclic there can only be one Jordan block, of size~$4$\@.
But for $p=2$
the size~$3$ Jordan block does not square to the identity, so there can
be no blocks of size $3$~or higher. Similarly there can be no size~$4$ block
for $p=3$, since it does not cube to the identity. So for $p=2,3$ there must
be more than one Jordan block.
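Explicitly, for $p=2$ the size~$3$ Jordan block satisfies
\[
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}^{2}
= \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix}
\equiv \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \pmod 2 \, ,
\]
and for $p=3$ the $(1,4)$ entry of the cube of the size~$4$ block is $\binom 33 = 1 \not\equiv 0 \pmod 3$.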
\medskip
\noindent
\ref{enumi:Jordan4}): analogous.
\medskip
\noindent
\ref{enumi:Jordan128}): We need to show that $\operatorname{gtD}(G) \geq 4$ cannot happen when
$\left|G\right|=128$. Such a group would need to contain
an elementary abelian $2$-group~$V$ satisfying the equation
$\prank(V) = 4 + \prank(Z(G))$.
The rank of the centre must be at least one, and if it is three or more
then $V=G$ and therefore $G$ is abelian, a contradiction. If the centre has
rank two, then $V$~has index two in~$G$ and we are in the same situation
as in~\ref{enumi:Jordan3}), except now we want two Jordan blocks. But we
need at least three, since $V$ has rank six and only Jordan blocks
of size one or two are allowed.
If the centre has rank one, then $V$~has rank five and we may pick a
subgroup~$H$ with $V \leq H \leq G$ and $[G:H]=[H:V]=2$. If $H$ is elementary
abelian then we are back in the immediately preceding case of an order
two action on a rank six elementary abelian. If~$H$ is not elementary abelian
then the usual Jordan block considerations mean that $C$ has rank at least~$3$,
where $C$~is the unique largest central elementary abelian of~$H$. By
applying the Jordan block considerations to the conjugation action
of $G/H$ on~$C$, we see that $Z(G)$ must have rank at least two, a
contradiction.
\end{proof}
\begin{remark}
In fact there are only two groups of
order $64$ with $\operatorname{gtD}=3$.
Here are their numbers in the
Hall--Senior list~\cite{HallSenior} and in the Small Groups
Library~\cite{BeEiOBr:Millennium}\@.
Their defects are taken from the tables in~\cite[Appendix]{Benson:MSRI}\@.
\settowidth{\djglength}{$000$}
\[ \begin{array}{c|c|c}
\text{Small Group} & \text{Hall--Senior} & \delta(G) \\
\hline
\makebox[\djglength][r]{$32$} & \makebox[\djglength][r]{$250$} & 2 \\
\makebox[\djglength][r]{$138$} & \makebox[\djglength][r]{$259$} & 1
\end{array} \]
As we shall see below there are $14$ groups of order~$128$
with $\delta(G)=3$.
\end{remark}
\begin{remark}
For $p \geq 5$ let $G$ be the following semidirect product group of order~$p^5$:
there is a rank four elementary abelian on the bottom and a cyclic group
of order~$p$ on top. The conjugation action is a size 4 Jordan block.
This group has $\operatorname{gtD}(G)=3$, since
\[
\begin{pmatrix}
1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}^n
= \begin{pmatrix} 1 & \binom n1 & \binom n2 & \binom n3 \\
0 & 1 & \binom n1 & \binom n2 \\ 0 & 0 & 1 & \binom n1 \\ 0 & 0 & 0 & 1
\end{pmatrix} \, .
\]
\end{remark}
\section{The groups of order 128}
\noindent
Let $G$ be a group of order~$128$.
By Lemma~\ref{lemma:Jordan}\ref{enumi:Jordan128}) we have $\operatorname{gtD}(G) \leq 3$.
\begin{proposition}
\label{prop:calc}
Only $57$ out of the $2328$ groups of order $128$ satisfy $\operatorname{gtD}(G) = 3$.
Of these $57$ groups, $43$ satisfy $\delta(G) \leq 2$. The remaining $14$ groups
satisfy $\delta(G)=3$. Each of these $14$ groups of order $128$ with
$\delta(G)=3$ satisfies Conjecture~\ref{conj:VSQR}\@.
According to the numbering of the Small Groups
Library~\cite{BeEiOBr:Millennium} these fourteen groups are:
numbers
$36$, $48$, $52$, $194$, $515$, $551$, $560$, $561$, $761$, $780$, $801$,
$813$, $823$ and $836$.
\end{proposition}
\begin{proof}
By machine computation.
Inspecting the Small Groups library,
one sees that there are 57 groups with $\operatorname{gtD}(G)=3$. See
Appendix~\ref{sect:appendix} for a discussion of how the $p$-rank
is computed.
\begin{table}
\settowidth{\djglength}{$0000$}
\newcommand{\gr}[1]{\makebox[\djglength][r]{$#1$}}
\newcommand{\ugr}[1]{\makebox[\djglength][r]{\underline{$#1$}}}
$\begin{array}{|c|cccc|c|c|cccc|}
\cline{1-5} \cline{7-11}
\text{gp} & K & d & r & \delta & \quad & \text{gp} & K & d & r & \delta \\
\cline{1-5} \cline{7-11}
\ugr{36} & 5 & 2 & 2 & 3 & & \gr{850} & 5 & 3 & 2 & 2 \\
\ugr{48} & 5 & 2 & 2 & 3 & & \gr{852} & 4 & 2 & 1 & 2 \\
\ugr{52} & 4 & 1 & 1 & 3 & & \gr{853} & 4 & 2 & 1 & 2 \\
\ugr{194} & 5 & 2 & 2 & 3 & & \gr{854} & 4 & 2 & 1 & 2 \\
\gr{513} & 5 & 3 & 2 & 2 & & \gr{859} & 4 & 2 & 1 & 2 \\
\ugr{515} & 5 & 2 & 2 & 3 & & \gr{860} & 4 & 2 & 1 & 2 \\
\gr{527} & 4 & 2 & 1 & 2 & & \gr{866} & 4 & 2 & 1 & 2 \\
\ugr{551} & 5 & 2 & 2 & 3 & & \gr{928} & 4 & 3 & 1 & 1 \\
\ugr{560} & 4 & 1 & 1 & 3 & & \gr{929} & 4 & 2 & 1 & 2 \\
\ugr{561} & 4 & 1 & 1 & 3 & & \gr{931} & 4 & 2 & 1 & 2 \\
\gr{621} & 5 & 3 & 2 & 2 & & \gr{932} & 4 & 2 & 1 & 2 \\
\gr{623} & 4 & 2 & 1 & 2 & & \gr{934} & 4 & 2 & 1 & 2 \\
\gr{630} & 5 & 3 & 2 & 2 & & \gr{1578} & 6 & 4 & 3 & 2 \\
\gr{635} & 4 & 2 & 1 & 2 & & \gr{1615} & 4 & 3 & 1 & 1 \\
\gr{636} & 4 & 2 & 1 & 2 & & \gr{1620} & 4 & 2 & 1 & 2 \\
\gr{642} & 4 & 2 & 1 & 2 & & \gr{1735} & 5 & 3 & 2 & 2 \\
\gr{643} & 4 & 3 & 1 & 1 & & \gr{1751} & 4 & 2 & 1 & 2 \\
\gr{645} & 4 & 3 & 1 & 1 & & \gr{1753} & 4 & 3 & 1 & 1 \\
\gr{646} & 4 & 2 & 1 & 2 & & \gr{1755} & 5 & 4 & 2 & 1 \\
\gr{740} & 4 & 2 & 1 & 2 & & \gr{1757} & 4 & 3 & 1 & 1 \\
\gr{742} & 4 & 2 & 1 & 2 & & \gr{1758} & 4 & 3 & 1 & 1 \\
\gr{753} & 5 & 3 & 2 & 2 & & \gr{1759} & 4 & 3 & 1 & 1 \\
\ugr{761} & 5 & 2 & 2 & 3 & & \gr{1800} & 4 & 2 & 1 & 2 \\
\gr{764} & 4 & 2 & 1 & 2 & & \gr{2216} & 5 & 4 & 2 & 1 \\
\ugr{780} & 4 & 1 & 1 & 3 & & \gr{2222} & 5 & 3 & 2 & 2 \\
\ugr{801} & 4 & 1 & 1 & 3 & & \gr{2264} & 5 & 3 & 2 & 2 \\
\ugr{813} & 4 & 1 & 1 & 3 & & \gr{2317} & 4 & 3 & 1 & 1 \\
\ugr{823} & 4 & 1 & 1 & 3 & & \gr{2326} & 4 & 4 & 1 & 0 \\
\cline{7-11}
\ugr{836} & 4 & 1 & 1 & 3 \\
\cline{1-5}
\end{array}$
\vspace*{5pt}
\caption{For each group of order $128$ with $\operatorname{gtD}(G)=3$, we give its number
in the Small Groups library, the Krull dimension $K$ and
depth $d$ of $H^*(G)$, the rank~$r = K - 3$ of $Z(G)$ and the
Cohen--Macaulay defect $\delta = K - d$. Underlined entries have $\delta = 3$\@.
Notation based on that of~\cite[Appendix]{Benson:MSRI}\@.}
\label{table:128}
\end{table}
These $57$ groups are listed
in Table~\ref{table:128}\@. The cohomology rings of these $57$ groups were
computed using an improved version of the author's cohomology
program~\cite{habil}\@.
These cohomology rings may be viewed online~\cite{fullBens}\@.
The cohomology rings were calculated using Benson's test for
completion~\cite[Thm 10.1]{Benson:DicksonCompCoho}\@.
Benson's test involves constructing a filter-regular system of parameters
and determining in which degrees it is not strictly regular. This means that
one automatically determines whether the group satisfies
Conjecture~\ref{conj:VSQR} when one computes cohomology using Benson's test.
The Cohen--Macaulay defect is another by-product of a computation
based on Benson's test.
The value of $\delta(G)$ for each of the $57$ groups
is given in Table~\ref{table:128}\@. The fourteen groups listed in the
statement of the proposition are indeed
the only ones with $\delta(G)=3$.
The computations showed that these $14$ groups do satisfy the conjecture.
\end{proof}
\begin{rk}
The test is phrased in such a way that it is easy to implement. With one
exception: it is not immediately clear how to construct a filter-regular
system of parameters in low degrees. This point is discussed in the next
section.
\end{rk}
\begin{rk}
The computation that took the longest time was group number $836$,
one of the $\delta(G)=3$ groups. Its cohomology ring has $65$ generators
and $1859$ relations.
\end{rk}
\begin{rk}
The distribution of these 57 groups by Cohen--Macaulay defect is as
follows:
\[
\begin{array}{c|cccc}
\delta(G) & 0 & 1 & 2 & 3 \\
\hline
\#G & 1 & 11 & 31 & 14
\end{array}
\]
\end{rk}
\begin{rk}
Some of these groups have been studied before.
Groups 928 and 1578 are wreath products: $D_8 \wr 2$ and $2^3 \wr 2$
respectively. By the Carlson--Henn result~\cite{CaHe:Wreath} one has
$\delta(D_8 \wr 2)=1$ and $\delta(2^3\wr2)=2$.
Group 850 of order 128 is a direct product of the form $G=H \times 2$,
where $H$ is group number 32 of order $64$; and the same applies to group
1755 of order 128 and group 138 of order $64$\@. It is immediate that
$\delta(G)=\delta(H)$ for such groups, so Carlson's work guarantees $\delta(G)\leq2$
for both these groups of order~$128$.
Group number $2326$ is the extraspecial group~$2^{1+6}_+$; Quillen showed that
its cohomology ring is Cohen--Macaulay~\cite{Quillen:Extraspecial}\@.
Group number $931$ is the Sylow $2$-subgroup of the Mathieu groups
$M_{22}$~and $M_{23}$; its cohomology was studied by Adem and
Milgram~\cite{AdMi:M22}\@.
Group number $934$ is the Sylow $2$-subgroup of the Janko group~$J_2$; its
cohomology ring was calculated by Maginnis~\cite{Maginnis:J2}.
I am not aware of any previous cohomological investigations concerning
the other two groups that I can name. One of these is group $932$, the
Sylow $2$-subgroup of $G_2(3):2$. The other is group
number $836$, the Sylow $2$-subgroup of one double cover of the Suzuki
group $\mathit{Sz}(8)$. This group (number 836) turned out to have the most
complicated cohomology ring in the study.
\end{rk}
\section{The weak rank-restriction condition}
\noindent
How does one construct a filter-regular system of parameters in a cohomology
ring which is defined over the prime field~$\f$ and also lies in low degree?
An efficient implementation of Benson's test calls for an answer to this
question.
Benson shows that the Dickson invariants (suitably interpreted) form a
filter-regular system of parameters. This means a sequence of cohomology
classes in $H^*(G)$ whose restrictions to the elementary abelian subgroups
of~$G$ are (powers of) the appropriate Dickson invariants. Given information
about restriction to subrings it is a straightforward task to compute classes
with the appropriate restriction patterns. However the degrees involved
can be large.
\begin{defn}
(c.f.\@ \cite[\S8]{Benson:DicksonCompCoho}) \quad
Let $G$ be a $p$-group with $\prank(G)=K$. Let
$C = \Omega_1(Z(G))$. Homogeneous elements $\zeta_1,\ldots,\zeta_K
\in H^*(G)$ satisfy the \emph{weak rank restriction condition} if
for each elementary abelian subgroup $V \geq C$ of $G$ the
following holds, where $s = \prank(V)$:
\begin{quote}
\noindent
The restrictions of $\zeta_1,\ldots,\zeta_s$ to~$V$ form a
homogeneous system of parameters for $H^*(V)$; and
the restrictions of $\zeta_{s+1},\ldots,\zeta_K$ are zero.
\end{quote}
\end{defn}
\begin{lemma}
If $\zeta_1,\ldots,\zeta_K \in H^*(G)$ satisfy the weak rank restriction
condition then they constitute a filter-regular system of parameters.
\end{lemma}
\begin{proof}
By a well known theorem of Quillen (see e.g.\@ Evens' book~\cite{Evens:book}),
$\zeta_1,\ldots,\zeta_K$ is a homogeneous system of parameters for $H^*(G)$.
The proof of Theorem~9.6 of~\cite{Benson:DicksonCompCoho} applies
just as well to parameters satisfying the weak rank restriction condition,
because if $E$ is an arbitrary rank~$i$ elementary abelian subgroup of~$G$,
then setting $V = \langle C, E\rangle$ one has $V \geq C$ and
$C_G(V)=C_G(E)$, yet the rank of $V$ is at least as large as the rank of~$E$.
So the restrictions of $\zeta_1,\ldots,\zeta_i$ to $H^*(C_G(E))$ do form a
regular sequence, by the same argument based on the Broto--Henn approach
to Duflot's theorem.
\end{proof}
\begin{lemma}
\label{lemma:filterConstruct}
Let $G$ be a $p$-group with $\prank(G)=K$ and $\prank(Z(G))=r$. Let
$C = \Omega_1(Z(G))$. Suppose that homogeneous elements
$\zeta_1,\ldots,\zeta_K \in H^*(G)$ satisfy the following conditions:
\begin{enumerate}
\item
The restrictions of $\zeta_1,\ldots,\zeta_r$ to $H^*(C)$
form a regular sequence there; and
\item
For each rank $r+s$ elementary abelian subgroup $V \geq C$ of $G$ the
restrictions of $\zeta_{r+s+1},\ldots,\zeta_K$ to $V$ are zero,
and for $1 \leq i \leq s$ the restriction of $\zeta_{r+i}$ to~$V$ is
a power of the $i$th Dickson invariant in $H^*(V/C)$.
\end{enumerate}
Then $\zeta_1,\ldots,\zeta_K \in H^*(G)$ is a filter-regular system of
parameters for $H^*(G)$. Moreover such systems of parameters exist.
\end{lemma}
\begin{rk}
By the $i$th Dickson invariant I mean the one which restricts nontrivially
to dimension $i$ subspaces, but has zero restriction to smaller subspaces.
That is, if $i < j$ then the $i$th Dickson invariant lies in lower degree
than the $j$th Dickson invariant.
\end{rk}
\begin{proof}
Such classes clearly satisfy the weak rank restriction condition. The existence
of $\zeta_1,\ldots,\zeta_r$ already follows from Evens' theorem that
the cohomology ring of an arbitrary subgroup $H \leq G$ is a finitely
generated module over the image of restriction from~$G$.
For the $\zeta_{r+i}$: these are given
by restrictions to each elementary abelian subgroup, and these restrictions
satisfy the compatibility conditions that one expects from genuine
restrictions, c.f.\@ Quillen's work on the spectrum of a cohomology
ring~\cite{Quillen:Spectrum}. This means that -- on raising these defining
restrictions by sufficiently high $p$th powers -- the $\zeta_{r+i}$
do indeed exist.
\end{proof}
\begin{rk}
The point is that Lemma~\ref{lemma:filterConstruct} is a recipe for
constructing a filter-regular system of parameters.
Recent work of
Kuhn~\cite{Kuhn:Cess} in fact shows that one can choose the generators of
$H^*(G)$ in such a way that $\zeta_1,\ldots,\zeta_r$ may be chosen from
amongst these generators.
An additional saving follows from the fact that if $\zeta_1,\ldots,\zeta_K$ is
a system of parameters and $\zeta_1,\ldots,\zeta_{K-1}$ is filter-regular,
then the whole system is automatically filter-regular. This means that
one can replace the $\zeta_K$ of Lemma~\ref{lemma:filterConstruct} by any
element that completes a system of parameters.
In earlier calculations, filter-regular parameters were constructed by hand
on a trial and error basis. Subsequently most calculations were performed
or re-performed using the parameter choice method of
Lemma~\ref{lemma:filterConstruct}\@. In the worst case calculations
this meant finishing the computation in degree 17, although the
presentation was finished earlier.
\end{rk}
\section{The $a$-invariants}
Let $k$~be a field and $R$ a connected finitely presented graded commutative
$k$-algebra. Let $M$ be a finitely generated graded $R$-module, and
$\mathfrak{m}$ the ideal in~$R$ of all elements in positive degree.
The $a$-invariants of $M$ are defined by
\[
a_{\mathfrak{m}}^i(M) = \max \{m \mid H^{i,m}_{\mathfrak{m}}(M) \neq 0 \} \, ,
\]
with $a_{\mathfrak{m}}^i(M) = -\infty$ if $H^i_{\mathfrak{m}}(M)=0$.
One can then take
\[
\operatorname{Reg}(M) = \max_{i \geq 0}\{a_{\mathfrak{m}}^i(M) + i \}
\]
as the definition of the Castelnuovo--Mumford regularity of~$M$.
Table~\ref{table:128aInvts} lists the $a$-invariants of $H^*(G)$ for
the $57$ groups of
order 128 with $\operatorname{gtD}(G)=3$\@.
\begin{table}
$\begin{array}{|r|c|cccc|c|r|c|cccc|}
\cline{1-6} \cline{8-13}
\text{gp} & K & a^{K-3}_{\mathfrak{m}} & a^{K-2}_{\mathfrak{m}}
& a^{K-1}_{\mathfrak{m}} & a^K_{\mathfrak{m}} & \vphantom{\Bigl(}\quad &
\text{gp} & K & a^{K-3}_{\mathfrak{m}} & a^{K-2}_{\mathfrak{m}}
& a^{K-1}_{\mathfrak{m}} & a^K_{\mathfrak{m}} \\
\cline{1-6} \cline{8-13}
36 & 5 & -5 & -5 & -5 & -5 & &
850 & 5 & -\infty & -6 & -5 & -5 \\
48 & 5 & -5 & -5 & -5 & -5 & &
852 & 4 & -\infty & -5 & -4 & -4 \\
52 & 4 & -4 & -4 & -4 & -4 & &
853 & 4 & -\infty & -5 & -4 & -4 \\
194 & 5 & -6 & -6 & -5 & -5 & &
854 & 4 & -\infty & -5 & -4 & -4 \\
513 & 5 & -\infty & -5 & -5 & -5 & &
859 & 4 & -\infty & -6 & -4 & -4 \\
515 & 5 & -5 & -5 & -5 & -5 & &
860 & 4 & -\infty & -5 & -4 & -4 \\
527 & 4 & -\infty & -4 & -4 & -4 & &
866 & 4 & -\infty & -5 & -4 & -4 \\
551 & 5 & -6 & -5 & -5 & -5 & &
928 & 4 & -\infty & -\infty & -4 & -4 \\
560 & 4 & -6 & -5 & -4 & -4 & &
929 & 4 & -\infty & -5 & -4 & -4 \\
561 & 4 & -5 & -4 & -4 & -4 & &
931 & 4 & -\infty & -3 & -4 & -4 \\
621 & 5 & -\infty & -5 & -5 & -5 & &
932 & 4 & -\infty & -6 & -4 & -4 \\
623 & 4 & -\infty & -4 & -4 & -4 & &
934 & 4 & -\infty & -3 & -5 & -4 \\
630 & 5 & -\infty & -5 & -5 & -5 & &
1578 & 6 & -\infty & -6 & -6 & -6 \\
635 & 4 & -\infty & -4 & -4 & -4 & &
1615 & 4 & -\infty & -\infty & -4 & -4 \\
636 & 4 & -\infty & -4 & -4 & -4 & &
1620 & 4 & -\infty & -4 & -4 & -4 \\
642 & 4 & -\infty & -4 & -4 & -4 & &
1735 & 5 & -\infty & -5 & -5 & -5 \\
643 & 4 & -\infty & -4 & -4 & -4 & &
1751 & 4 & -\infty & -6 & -4 & -4 \\
645 & 4 & -\infty & -\infty & -4 & -4 & &
1753 & 4 & -\infty & -\infty & -4 & -4 \\
646 & 4 & -\infty & -4 & -4 & -4 & &
1755 & 5 & -\infty & -\infty & -5 & -5 \\
740 & 4 & -\infty & -4 & -4 & -4 & &
1757 & 4 & -\infty & -\infty & -4 & -4 \\
742 & 4 & -\infty & -4 & -4 & -4 & &
1758 & 4 & -\infty & -\infty & -5 & -4 \\
753 & 5 & -\infty & -5 & -5 & -5 & &
1759 & 4 & -\infty & -\infty & -4 & -4 \\
761 & 5 & -5 & -5 & -5 & -5 & &
1800 & 4 & -\infty & -4 & -4 & -4 \\
764 & 4 & -\infty & -5 & -4 & -4 & &
2216 & 5 & -\infty & -\infty & -5 & -5 \\
780 & 4 & -6 & -4 & -4 & -4 & &
2222 & 5 & -\infty & -6 & -5 & -5 \\
801 & 4 & -4 & -4 & -4 & -4 & &
2264 & 5 & -\infty & -6 & -5 & -5 \\
813 & 4 & -4 & -4 & -4 & -4 & &
2317 & 4 & -\infty & -\infty & -4 & -4 \\
823 & 4 & -4 & -4 & -4 & -4 & &
2326 & 4 & -\infty & -\infty & -\infty & -4 \\
\cline{8-13}
836 & 4 & -4 & -5 & -4 & -4 \\
\cline{1-6}
\end{array}$
\vspace*{5pt}
\caption{For each of the 57 groups of order $128$ with $\operatorname{gtD}(G)=3$, we give
its number in the Small Groups library, the Krull dimension $K$
and the last four $a$-invariants of $H^*(G)$. For
$0 \leq i < K-3$ one has $a^i_{\mathfrak{m}} = -\infty$.
The defect $\delta(G)$ is three if and only if
$a^{K-3}_{\mathfrak{m}}$ is finite.}
\label{table:128aInvts}
\end{table}
In order to calculate the $a$-invariants, one uses Lemma~4.3
of~\cite{Benson:DicksonCompCoho}, which is based on methods of N.~V. Trung\@.
The lemma says that if $\zeta \in R^n$ is such that $\operatorname{Ann}_M(\zeta)$ consists
entirely of $\mathfrak{m}$-torsion, then
\begin{equation}
\label{eqn:Benson}
a^{i+1}_{\mathfrak{m}}(M) + n \leq a^i_{\mathfrak{m}}(M/\zeta M)
\leq \max(a^i_{\mathfrak{m}}(M), a^{i+1}_{\mathfrak{m}}(M) + n) \, .
\end{equation}
Replacing $\zeta$ by a suitable power if necessary, one can
arrange for $a^{i+1}_{\mathfrak{m}}(M) + n \geq a^i_{\mathfrak{m}}(M)$
and therefore $a^{i+1}_{\mathfrak{m}}(M) = a^i_{\mathfrak{m}}(M/\zeta M) - n$.
So the $a$-invariants of $H^*(G)$ may be computed recursively.
To start the recursion we need
$a^0_{\mathfrak{m}}(M)$, which measures the $\mathfrak{m}$-torsion:
if $\zeta \in R^n$ is such that $\operatorname{Ann}_M(\zeta)$ is finite dimensional,
then $a^0_{\mathfrak{m}}(M)$ is the top degree of $\operatorname{Ann}_M(\zeta)$.
Hence by starting from a filter-regular system of parameters and
raising some of the parameters to higher powers if necessary, one
may compute the $a$-invariants of $H^*(G)$ by just computing kernels.
For a cohomology computation one chooses parameters in low degrees. So it is
perhaps surprising that a survey of the author's computations of all
256 nonabelian groups of order 64 and some 61 groups of order 128 led to
precisely one case where powers of the chosen parameters were necessary.
This is the Sylow $2$-subgroup of $L_3(4)$ which has $a$-invariants
$-\infty,-\infty,-3,-5,-4$: a filter-regular system of parameters
in degrees $4,4,2,2$ led to kernels with top degrees $-\infty,-\infty,5,5,8$,
leading to problems with the calculation of $a^3_{\mathfrak{m}}$. Squaring
the third parameter led to kernels with top degrees
$-\infty,-\infty,5,7,10$, which was sufficient to permit calculation of
the $a$-invariants.
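To make the recursion concrete, here is a small illustrative script (a
sketch for exposition, not the code used for the computations reported
here); it assumes that the inequality
$a^{i+1}_{\mathfrak{m}}(M) + n \geq a^i_{\mathfrak{m}}(M)$ has already
been arranged at every step, and it reproduces the $a$-invariants of the
Sylow $2$-subgroup of $L_3(4)$ from the kernel top degrees above.
\begin{verbatim}
NEG_INF = float("-inf")

def a_invariants(kernel_tops, param_degrees):
    # kernel_tops[i]: top degree of the kernel of multiplication by the
    # (i+1)st parameter on the quotient by the first i parameters; the
    # last entry is the top degree of the final finite quotient.
    invariants, shift = [], 0
    for i, top in enumerate(kernel_tops):
        invariants.append(top - shift)
        if i < len(param_degrees):
            shift += param_degrees[i]
    return invariants

# squared third parameter: degrees 4,4,4,2 and kernel tops as above
print(a_invariants([NEG_INF, NEG_INF, 5, 7, 10], [4, 4, 4, 2]))
# -> [-inf, -inf, -3, -5, -4]
\end{verbatim}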
\section{Excess and defect}
\begin{defn}
As in the introduction let $G$ be a finite group, $p$ a prime number and
$k$ a field of characteristic~$p$. We define
the Duflot excess $e(G)=e_p(G)$ by
\[
e_p(G) = \operatorname{depth} H^*(G,k) - \prank(Z(S)) \, .
\]
\end{defn}
\noindent
The following inequalities follow immediately from this definition taken
together with Equations
\eqref{eqn:QuillenDuflot}~and \eqref{eqn:gtD-CMd}\@.
\begin{xalignat}{3}
\label{eqn:excess}
0 \leq e(G) & \leq \operatorname{gtD}(G) &
\delta(G) + e(G) & = \operatorname{gtD}(G) &
e(G) & \geq e(S) \, .
\end{xalignat}
Quillen showed that the extraspecial $2$-group $G = 2^{1+2n}_+$ has
Cohen--Macaulay cohomology~\cite{Quillen:Extraspecial}.
So this group has $e(G)=n$ and $\delta(G)=0$.
Now let $p$ be an odd prime, and let $G$ be the extraspecial $p$-group
$G=p^{1+2n}_+$ of exponent~$p$. With the single exception of the case
$(p,n)=(3,1)$, Minh proved~\cite{Minh:EssExtra} that this group has
$\delta(G)=n$ and $e(G)=0$. In the one exceptional case the cohomology ring
is Cohen--Macaulay~\cite{MilgramTezuka}\@.
One good way to produce groups with small $e(G)/\delta(G)$ ratio
satisfying Conjecture~\ref{conj:Reg}
is by iterating the wreath product construction. By passing
from $H$~to $H \wr C_p$ one multiplies the $p$-rank by~$p$ but increases the
depth by one only~\cite{CaHe:Wreath}\@.
\begin{ques}
How (for large values of~$n$) are the $p$-groups of order $p^n$ distributed
on the graph with $\delta(G)$ on the $x$-axis and $e(G)$ on the $y$-axis?
\end{ques}
\section{Outlook}
\noindent
To test the conjecture further we need to find more high-defect groups.
There are 24 groups of order~$3^6$ with $\operatorname{gtD}(G)=3$. The presence of essential
classes in low degrees demonstrates that at least three of these groups
have $\delta(G)=3$. These groups are numbers $35$, $56$ and $67$ in the Small
Groups Library. There are essential classes in degrees $4$, $2$ and $4$
respectively.
Recall from Carlson's paper~\cite{Carlson:DepthTransfer} that the presence
of essential classes means that $\operatorname{depth} H^*(G,k) = \prank Z(G)$ and
therefore $\delta(G)=\operatorname{gtD}(G)$.
Group number $299$ of order $256$ has $\operatorname{gtD}(G)=4$. The presence of an essential
class in $H^3(G)$ means that $\delta(G)=4$ too.
\section{Introduction}\label{s:intro}
\subsection*{Overview}
The aim of this note is to prove that the commutative ring $R$ of
real-exponent polynomials in $n$ variables over any field~$\kk$ has
global dimension~$n + 1$ and flat dimension~$n$
(Theorem~\ref{t:gl.dim} and Corollary~\ref{c:fldim}). It might be
unexpected that $R$ has finite global dimension at all, but it should
be more expected that the flat dimension is achieved by the residue
field $\kk = R/\mm$ of~$R$ modulo its maximal graded ideal~$\mm$; a
Koszul-like construction shows that it is
(Proposition~\ref{p:fldim(k)} along with
Example~\ref{e:usual-koszul}). In one real-exponent variable the
residue field~$\kk$ also achieves the global dimension bound of~$2$
(Lemma~\ref{l:Ext(k,F)}), and this calculation lifts to $n$ variables
by tensoring with an ordinary Koszul complex
(Proposition~\ref{p:n+1}), demonstrating global dimension at least~$n
+ 1$. Projective and flat resolutions of all $R$-modules are
constructed from resolutions of the residue field in the proofs of
Theorems~\ref{t:gl.dim} and~\ref{t:flat-res} to yield the respective
upper bounds of $n + 1$ and~$n$. The results extend to the monoid
algebra for the positive cone of any subgroup of~$\RR^n$ satisfying a
mild density condition (Definition~\ref{d:G}
and~Theorem~\ref{t:G-flat-res}).
\subsection*{Background}
Global dimension measures how long projective resolutions of modules
can get, or how high the homological degree of a nonvanishing Ext
module can be \cite[Theorem~4.1.2]{weibel1994}. Finding rings of
finite global dimension is of particular value, since they are
considered to be smooth, generalizing the best-known case of local
noetherian commutative rings \cite{auslander-buchsbaum1957,serre1956},
which correspond to germs of functions on nonsingular algebraic
varieties.
The related notion of flat dimension (also called Tor dimension or
weak global dimension) measures how long flat resolutions of modules
can get, or how high the homological degree of a nonvanishing Tor
module can be. Flat dimension is bounded by global dimension because
projective modules are flat. These two dimensions agree for
noetherian commutative rings \cite[Proposition~4.1.5]{weibel1994}.
Without the noetherian condition equality can fail; commutative
examples include von Neumann regular rings that are infinite products
of fields (see~\mbox{\cite[p.\,98]{weibel1994}}), but domains are
harder~to~come~by.
The cardinality of a real-exponent polynomial ring a~priori indicates
a difference between flat and projective dimension that could be as
high as $1$ plus the index on~$\aleph$ in the cardinality of the real
numbers \cite[p.14]{osofsky1974}. In certain situations, such as in
valuation rings, ideals generated by $\aleph_n$ and no fewer elements
are known to cause global dimension at least $n+2$ \cite{osofsky1967}
(cf.~\cite[Theorem, p.14]{osofsky1974}). But although $R$ has an
ideal minimally generated by all monomials of total degree~$1$, of
which there are~$2^{\aleph_0}$,
it is the dimension of the positive cone of exponents, rather than
its cardinality, that is pertinent here.
intersected with a suitably dense subgroup of~$\RR^n$: the rank of the
subgroup is irrelevant (Section~\ref{s:dense}).
\subsection*{Methods}
The increase from global dimension~$n$ to $n + 1$ in the presence of
$n$~variables is powered by the violation of condition~5 from
\cite[Theorem~P]{bass1960}: a monomial ideal with an ``open orthant''
of exponents, such as the maximal ideal~$\mm_1$ in one indeterminate,
is a direct limit of principal monomial ideals
(Lemma~\ref{l:orthant-res}) but is not projective
(Lemma~\ref{l:Ext(k,F)}). This phenomenon occurs already for Laurent
polynomials~$L_1$ in one integer-exponent variable. But although
$\mm_1$ and~$L_1$ both have projective dimension~$1$, the
real-exponent maximal ideal~$\mm_1$ is a submodule of a projective
(actually, free) module; the inclusion has a cokernel, and its
projective dimension is greater by~$1$.
The most nontrivial point is how to produce a projective resolution of
length at most~$n + 1$ for any module over the real-exponent
polynomial ring~$R$ in $n$ variables. Our approach takes two steps.
The first is a length~$n$ Koszul-like complex (Definition~\ref{d:y})
in $2n$ variables that resolves the residue field and can be massaged
into a flat resolution of any module (Theorem~\ref{t:flat-res}). This
``total Koszul'' construction was applied to combinatorially resolve
monomial ideals in ordinary (that is, integer-exponent) polynomial
rings \cite[Section~6]{sylvan-resolution}. The integer grading in the
noetherian case makes this construction produce a Koszul double
complex, which is key for the combinatorial purpose of minimalizing
the resulting free resolution by splitting an associated spectral
sequence. It is not obvious whether the double complex survives to
the real-exponent setting, but the total complex does
(Definition~\ref{d:y}; cf.~\cite[Application~4.5.6]{weibel1994}), and
that suffices here because minimality is much more subtle---if it is
even possible---in the presence of real exponents
\cite{essential-real}.
\subsection*{Motivations}
Beyond basic algebra, there has been increased focus on non-noetherian
settings in, for example, noncommutative geometry and topological data
analysis.
Quantum noncommutative toric geometry
\cite{katzarkov-lupercio-meersseman-verjovsky2020} is based on dense
finitely generated additive subgroups of~$\RR^n$ instead of the
discrete sublattices that the noetherian commutative setting requires.
The situations treated by our main theorems, including especially
Section~\ref{s:dense}, correspond to ``smooth'' affine quantum toric
varieties and could have consequences for sheaf theory in that
setting.
The question of finite global dimension over real-exponent polynomial
rings has surfaced in topological data analysis (TDA), where modules
graded by $\RR^n$ are known as real multiparameter persistent
homology, cf.~\cite{lesnick-interleav2015}, \cite{essential-real}, and
\cite{bubenik-milicevic2020}, for example, or \cite{oudot2015} for a
perspective from quiver theory. The question of global dimension
arises because defining metrics for statistical analysis requires
distances between persistence modules, many of which use derived
categorical constructions
\cite{kashiwara-schapira2018,strat-conical,berkouk-petit2021}; see
\cite[Section~7.1]{bubenik-milicevic2020} for an explicit mention of
the finite global dimension problem.
Real-exponent modules that are graded by $\RR^n$ and satisfy a
suitable finiteness condition (``tameness'') to replace the too-easily
violated noetherian or finitely presented conditions admit finite
multigraded resolutions by monomial ideals
\cite[Theorem~6.12]{hom-alg-poset-mods}, which are useful for~TDA.
But even in the tame setting no universal bound is known for the
finite lengths of such resolutions
\cite[Remark~13.15]{essential-real}. The global dimension
calculations here suggest but do not immediately imply a universal
bound~of~$n +\nolinebreak 1$.
\subsection*{Notation}
The ordered additive group $\RR$ of real numbers has its monoid
$\RR_+$ of nonnegative elements. The $n$-fold product $\RR^n =
\prod_{i=1}^n \RR$ has nonnegative cone $\RR^n_+ = \prod_{i=1}^n
\RR_+$. The monoid algebra $R = R_n = \kk[\RR^n_+]$ over an arbitrary
field~$\kk$ is the ring of real-exponent polynomials in $n$ variables:
finite sums $\sum_{\aa \in \RR^n_+} c_\aa \xx^\aa$, where $\xx^\aa =
x_1^{a_1} \cdots x_n^{a_n}$. Its unique monoid-graded maximal
ideal~$\mm$ is spanned over~$\kk$ by all nonunit monomials.
Unadorned tensor products are over~$\kk$. For example, $R \cong R_1
\otimes \cdots \otimes R_1$ is an $n$-fold tensor product over~$\kk$,
where $R_1 = \kk[\RR_+]$ is the real-exponent polynomial ring in one
variable with graded maximal ideal~$\mm_1$.
\section{Flat dimension~\texorpdfstring{$n$}{n}}\label{s:fldim}
\begin{lemma}\label{l:k-res-R1}
The filtered colimit $\dirlim_{\ve > 0} (R_1 \otni \<x^\ve\>)$ of the
inclusions of the principal ideals generated by $x^\ve$ for
positive~$\ve \in \RR$ is a flat resolution $\oK^1_\spot: R_1 \otni
\mm_1$ of\/~$\kk$ over~$R_1$.
\end{lemma}
\begin{proof}
Colimits commute with homology so the colimit is a resolution.
Filtered colimits of free modules are flat by Lazard's criterion
\cite{lazard1964}, so the resolution is flat.
\end{proof}
\begin{defn}\label{d:open-koszul}
The \emph{open Koszul complex} is the tensor product $\oK^\xx_\spot =
\bigotimes_{i=1}^n \oK^1_\spot$ over the field~$\kk$ of $n$ copies of
the flat resolution in Lemma~\ref{l:k-res-R1}. The $2^n$ summands of
$\oK^\xx_\spot$, each a tensor product of~$j$ copies of~$R_1$ and
$n-j$ copies of~$\mm_1$, are \emph{orthant ideals}.
\end{defn}
\begin{example}\label{e:open-koszul}
The open Koszul complex in two real-exponent variables is depicted in
Figure~\ref{f:open-koszul}. {}From a geometric perspective, take the
ordinary Koszul complex from Figure~\ref{f:ordinary-koszul}, replace
the free modules with their continuous versions, and push the
generators as close to the origin as possible without meeting it. The
four possible orthant ideals are rendered in
Figure~\ref{f:open-koszul}. {}From left to right,
viewing them as tensor products, they correspond to the product of two
closed rays~$\kk[\RR_+]$, the product (in both orders) of a closed ray
with an open ray~$\mm_1$, and the product of two open rays. In $n$
real-exponent variables the $2^n$ orthant ideals arise from all
$n$-fold tensor products~of~closed~and~open~rays.
\begin{figure}[ht]
\centering
{%
\psfrag{0}{$0$}
\psfrag{from}{$\from$}
\psfrag{oplus}{$\oplus$}
\includegraphics[width=5.4in]{ordinary-koszul}
}%
\vspace{-2ex}
\caption{Ordinary Koszul complex in two variables}
\label{f:ordinary-koszul}
\end{figure}
\begin{figure}[ht]
\centering
{%
\psfrag{0}{$0$}
\psfrag{from}{$\from$}
\psfrag{oplus}{$\oplus$}
\includegraphics[width=5in]{open-koszul}
}%
\vspace{-2ex}
\caption{Open Koszul complex in two real-exponent variables}
\label{f:open-koszul}
\end{figure}
\end{example}
\begin{prop}\label{p:fldim(k)}
The open Koszul complex~$\oK^\xx_\spot$ is a flat resolution
of\/~$\kk$ over~$R$.
\end{prop}
\begin{proof}
Lemma~\ref{l:k-res-R1} and the K\"unneth theorem
\cite[Theorem~3.6.3]{weibel1994}.
\end{proof}
Limit-Koszul complexes similar to~$\oK^\xx_\spot$ have previously been
used to compute flat dimensions of absolute integral closures
\cite{aberbach-hochster1997} in the context of tight closure.
\begin{example}\label{e:usual-koszul}
The sequence $\xx^{[\ve]} = x_1^\ve,\dots,x_n^\ve$ is regular in~$R$
\cite[Chapter~1]{bruns-herzog1998}, so the usual Koszul complex
$\KK_\spot(\xx^{[\ve]})$ is a length~$n$ free resolution of $B_n^\ve =
R/\<\xx^{[\ve]}\>$ over~$R$. Using this resolution, $\Tor_n^R(\kk,
B_n^\ve) = \kk$ because $\kk \otimes_R \KK_\spot(\xx^{[\ve]})$ has
vanishing~differentials.
\end{example}
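(As a sanity check in the smallest case: for $n = 1$, so that $R = R_1$,
the complex $\kk \otimes_R \KK_\spot(x^\ve)$ is $0 \from \kk \from \kk
\from 0$ with vanishing differential, since the differential of
$\KK_\spot(x^\ve)$ is multiplication by $x^\ve \in \mm$; hence
$\Tor_0^R(\kk, B_1^\ve) = \Tor_1^R(\kk, B_1^\ve) = \kk$.)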
\begin{lemma}\label{l:Rotimes2}
The real-exponent polynomial ring $R^{\otimes 2} = R \otimes R$ has
$2n$~variables
\begin{align*}
& \xx = x_1,\dots,x_n = x_1 \otimes 1, \dots, x_n \otimes 1
\\
\text{and}\quad
& \yy = \hspace{.15ex}y_1,\dots,\hspace{.15ex}y_n
= 1 \otimes \hspace{.2ex}x_1, \dots, 1 \otimes \hspace{.2ex}x_n.
\end{align*}
Over $R^{\otimes 2}$ there is a directed system of Koszul complexes
$\KK_\spot(\xx^{[\ve]}-\yy^{[\ve]})$ on the sequences
$$%
\xx^{[\ve]} - \yy^{[\ve]} = x_1^\ve - y_1^\ve, \dots, x_n^\ve - y_n^\ve
$$
with $\ve > 0$. The colimit
$\dis%
\oK^{\xx-\yy}_\spot = \dirlim_{\ve > 0} \KK_\spot(\xx^{[\ve]}-\yy^{[\ve]})
$
is an $R^{\otimes 2}$-flat resolution of~$R$.
\end{lemma}
\begin{proof}
The general case is the tensor product over~$\kk$ of $n$ copies of the
case $n = 1$, which in turn reduces to the calculation $R^{\otimes
2}/\<x^\ve - y^\ve \mid \ve > 0\> \cong R$.
\end{proof}
\begin{defn}\label{d:y}
Denote by $R^\xx$ and $R^\yy$ the copies of~$R$ embedded in
$R^{\otimes 2}$ as $R \otimes 1$ and $1 \otimes R$. Fix an
$R^\xx$-module~$M$.
\begin{enumerate}
\item%
Write $M^\yy$ for the corresponding $R^\yy$-module, with the $\xx$
variables renamed to~$\yy$.
\item%
The \emph{open total Koszul complex} of an $R^\xx$-module~$M$ is
$\dis%
\oK^{\xx-\yy}_\spot(M)
=
\oK^{\xx-\yy}_\spot \otimes_{R^\yy} M^\yy.
$
\end{enumerate}
\end{defn}
\begin{remark}\label{r:orthant}
By Definition~\ref{d:open-koszul}, each of the $4^n$ summands
of~$\oK^{\xx-\yy}_\spot$ in Lemma~\ref{l:Rotimes2} is the tensor
product over~$\kk$ of an orthant $R^\xx$-ideal and an orthant
$R^\yy$-ideal.
\end{remark}
\begin{thm}\label{t:flat-res}
The open total Koszul complex $\oK^{\xx-\yy}_\spot(M)$ is a length~$n$
resolution of~$M$ over $R^{\otimes 2}$ for any $R^\xx$-module~$M$.
This resolution is flat over~$R^\xx$; more precisely, as an
$R^\xx$-module $\oK^{\xx-\yy}_\spot(M)$ is a direct sum of orthant
$R^\xx$-ideals.
\end{thm}
\begin{proof}
The tensor product $\oK^{\xx-\yy}_\spot \otimes_{R^\yy} M^\yy$ is over
$R^\yy$ and hence converts the orthant $R^\xx$-ideal decomposition for
$\oK^{\xx-\yy}$ afforded by Remark~\ref{r:orthant} into one for
$\oK^{\xx-\yy}_\spot(M)$.
Since tensor products commute with colimits, $\oK^{\xx-\yy}_\spot(M) =
\dirlim_{\ve > 0} \KK^\ve_\spot(M)$, where $\KK^\ve_\spot(M) =
\KK_\spot(\xx^{[\ve]} - \yy^{[\ve]}) \otimes_{R^\yy} M^\yy$.
Each complex $\KK^\ve_\spot(M)$ is the ordinary Koszul complex of the
sequence $\xx^{[\ve]} - \yy^{[\ve]}$ on the $R^{\otimes 2}$-module
$R^{\otimes 2} \otimes_{R^\yy} M^\yy$. But $\xx^{[\ve]} -
\yy^{[\ve]}$ is a regular sequence on this module because the $\xx$
variables are algebraically independent from the $\yy$ variables.
Thus each $\KK^\ve_\spot(M)$ is acyclic in positive degrees, and hence
so is $\oK^{\xx-\yy}_\spot(M)$ by exactness of filtered colimits.
Moreover, again by algebraic independence, the nonzero homology of
$\KK^\ve_\spot(M)$ is naturally the $R^\yy$-module~$M^\yy$, with an
action of $\kk[\xx^{[\ve]}]$ where $x_i^\ve$ acts the same way
as~$y_i^\ve$ due to the relation $x_i^\ve - y_i^\ve$.
\end{proof}
\begin{cor}\label{c:fldim}
The $n$-variable real-exponent polynomial ring has flat dimension~$n$.
\end{cor}
\begin{proof}
Example~\ref{e:usual-koszul} implies that $\fldim R \geq n$, and
$\fldim R \leq n$ by Theorem~\ref{t:flat-res}.
\end{proof}
\section{Global dimension~\texorpdfstring{$n + 1$}{n+1}}\label{s:gldim}
\begin{lemma}\label{l:orthant-res}
Fix an orthant ideal~$\OO \neq R$. Choose a sequence
$\{\ee_k\}_{k\in\NN}$ such that $\ee_k =
(\ve_{1k},\dots,\ve_{nk}) \in \RR^n_+$ has
\begin{itemize}
\item%
$\ve_{ik} = 0$ for all~$k$ if the $i^\mathrm{th}$ factor of~$\OO$
is~$R_1$ and
\item%
$\{\ve_{ik}\}_{k \in \NN}$ strictly decreases with limit~$0$ if the
$i^\mathrm{th}$ factor of~$\OO$ is~$\mm_1$.
\end{itemize}
Let $F = \bigoplus_k \<\xx^{\ee_k}\>$ be the direct sum of the
principal ideals in~$R$ generated by the monomials with
degrees~$\ee_k$. Each summand $\<\xx^{\ee_k}\>$ is free with basis
vector~$1_k$, and $\OO$ has a free resolution \mbox{$0 \from F \from F
\from 0$} whose differential sends $1_k \in \<\xx^{\ee_k}\>$ to
\mbox{$1_k - \xx^{\ee_k - \ee_{k+1}}1_{k+1}$}.
\end{lemma}
\begin{proof}
The augmentation map $\OO \overset{\;\alpha}\ffrom F$ sends $1_k$
to~$\xx^{\ee_k}$. It is surjective by definition of~$\OO$. Since
$\alpha$ is graded by the monoid~$\RR^n_+$, its kernel can be
calculated degree by degree. In degree $\aa \in \RR^n_+$ the kernel is
spanned by all differences $\xx^{\aa-\ee_k} 1_k -
\xx^{\aa-\ee_\ell} 1_\ell$ such that $\ee_k$ and~$\ee_\ell$
both weakly precede~$\aa$; indeed, this subspace of the $\aa$-graded
component~$F_\aa$ has codimension~$1$, and it is contained in the
kernel because $\xx^{\aa - \ee_k} \xx^{\ee_k} = \xx^{\aa -
\ee_\ell} \xx^{\ee_\ell}$. The differential is injective
because each element $f \in F$ has nonzero coefficient on a basis
vector~$1_k$ with $k$ maximal, and the image of~$f$ has nonzero
coefficient on $1_{k+1}$.
\end{proof}
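For a concrete illustration (one hypothetical choice among many): for the
maximal ideal $\mm_1 \subseteq R_1$ one may take $\ee_k = 2^{-k}$, so that
$F = \bigoplus_k \<x^{2^{-k}}\>$ and the differential sends
$1_k \mapsto 1_k - x^{2^{-k-1}} 1_{k+1}$, because
$\ee_k - \ee_{k+1} = 2^{-k} - 2^{-k-1} = 2^{-k-1}$; applying the
augmentation gives $x^{2^{-k}} - x^{2^{-k-1}} \cdot x^{2^{-k-1}} = 0$,
as it must.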
\begin{lemma}\label{l:Ext(k,F)}
$\kk = R_1/\mm_1$ has a free resolution of length~$2$, and
$\Ext^2_{R_1}(\kk,F) \neq 0$.
\end{lemma}
\begin{proof}
The resolution of~$\mm_1$ over~$R_1$ in Lemma~\ref{l:orthant-res}
(with $n = 1$) can be augmented and composed with the inclusion $R_1
\otni \mm_1$ to yield a free resolution of~$\kk$ over~$R_1$. The long
exact sequence from $0 \from \kk \from R_1 \from \mm_1 \from 0$
implies that $\Ext^{i+1}_{R_1}(\kk,-) \cong \Ext^i_{R_1}(\mm_1,-)$ for
$i \geq 1$. Now apply $\Hom(\mm_1,-)$ to the exact sequence $0 \to F
\to F \to \mm_1 \to 0$. The first few terms are $0 \to \Hom(\mm_1,F)
\to \Hom(\mm_1,F) \to R_1 \to \Ext^1(\mm_1,F)$. The image of
$\Hom(\mm_1,F) \to R_1$ is~$\mm_1$, so $\kk \into \Ext^1(\mm_1,F)
\cong \Ext^2(\kk,F)$ is nonzero.
\end{proof}
\begin{remark}\label{r:osofsky}
Any ideal that is a countable (but not finite) union of a chain of
principal ideals has projective dimension~$1$
\cite[p.14]{osofsky1974}. But it is convenient to have an explicit
free resolution of~$\mm_1$ over~$R_1$, and it is no extra work to
resolve all orthant ideals.
\end{remark}
\begin{prop}\label{p:n+1}
Set $\mm_1 = \<x_n^\ve \mid \ve > 0\>$ and $J = \<x_1,\dots,x_{n-1}\>
\subseteq R$. Using $x = x_n$ for~$R_1$, consider the $R_1$-module
$F$ in Lemma~\ref{l:Ext(k,F)} with $n = 1$ as an $R$-module via $R
\hspace{-.2ex}\onto\hspace{-.4ex} R_1$, where $x_i^\ve \mapsto 0$ for
all~$\ve > 0$ and $i \leq n-1$. Then $\Ext^{n+1}_R(R/I,F) \neq 0$
when~$I = J + \mm_1$.
\end{prop}
\begin{proof}
Let $\FF_\spot: 0 \from R_1 \from F \from F \from 0$ be the $R_1$-free
resolution of~$\kk$ obtained by augmenting the resolution of~$\mm_1$
in Lemma~\ref{l:orthant-res} with $n = 1$. Let $\KK_\spot =
\KK_\spot^{R_{n-1}}(\xx_{n-1})$ be the ordinary Koszul complex over
$R_{n-1}$ on the sequence $\xx_{n-1} = x_1,\dots,x_{n-1}$, which is a
free resolution of $R_{n-1}/\xx_{n-1} R_{n-1}$ over~$R_{n-1}$. Then
$\Tot(\FF_\spot \otimes_\kk \KK_\spot)$ is a free resolution of $R/I$
over~$R$. On the other hand,
\begin{align*}
\FF_\spot \otimes_\kk \KK_\spot
&\cong
\FF_\spot\otimes_{R_1}R_1 \otimes_\kk R_{n-1}\otimes_{R_{n-1}}\KK_\spot
\\&\cong
\FF_\spot\otimes_{R_1} R \otimes_{R_{n-1}}\KK_\spot
\\&\cong
\FF_\spot\otimes_{R_1}R \otimes_R R\otimes_{R_{n-1}}\KK_\spot
\\&=
\FF_\spot^R \otimes_R \KK_\spot^R,
\end{align*}
where $\FF_\spot^R = \FF_\spot\otimes_{R_1}R$ is an $R$-free
resolution of~$R/\mm_1 R$ and the ordinary Koszul complex $\KK_\spot^R
= R\otimes_{R_{n-1}}\KK_\spot = \KK_\spot^R(\xx_{n-1})$ of the
sequence $\xx_{n-1}$ in~$R$ is an $R$-free resolution of~$R/J$.
Using $(-)^*$ to denote the free dual $\Hom_R(-,R)$, compute
\begin{align}\label{eq:Hom}
\Hom_R(\FF_\spot^R \otimes_R \KK_\spot^R, F)
&\cong
\Hom_R\big(\FF_\spot^R, \Hom_R(\KK_\spot^R, F)\big)
\\\nonumber&\cong
\Hom_R(\FF_\spot^R, (\KK_\spot^R)^* \otimes_R F)
\\\nonumber&\cong
\Hom_R(\FF_\spot^R, (\KK_\spot^R)^* \otimes_R R_1 \otimes_{R_1} F),
\end{align}
where the bottom isomorphism is because the $R$-action on~$F$ factors
through~$R_1$. The differentials of the complex $(\KK_\spot^R)^*
\otimes_R R_1 \cong (\KK_\spot^R)^* \otimes_{R_{n-1}} \kk$
all vanish, and this complex has cohomology $R_1^{\binom{n-1}q}$ in
degree~$q$. Hence the total complex of~Eq.~(\ref{eq:Hom}) has
homology
\pagebreak[1]
\begin{align*}
\Ext^i_R(R/I,F)
&\cong
\bigoplus_{p+q=i} H_p \Hom_R\big(\FF_\spot^R, F^{\binom{n-1}q}\big)
\\&\cong
\bigoplus_{p+q=i} H_p \Hom_{R_1}\big(\FF_\spot, F^{\binom{n-1}q}\big)
\\&\cong
\bigoplus_{p+q=i} \Ext^p_{R_1}\big(\kk, F^{\binom{n-1}q}\big),
\end{align*}
where the middle isomorphism is again because the $R$-action on~$F$
factors through~$R_1$. Taking $p = 2$ and $q = n-1$ yields the
nonvanishing by Lemma~\ref{l:Ext(k,F)}.
\end{proof}
\begin{remark}\label{r:grothendieck}
The proof of Proposition~\ref{p:n+1} is essentially a Grothendieck
spectral sequence for the derived functors of the composite
$\Hom_{R_1}(\kk,-) \circ \Hom_{R_{n-1}}(R_{n-1}/\xx_{n-1},-)$, but the
elementary Koszul argument is no lengthier
than verifying the hypotheses.
\end{remark}
\begin{thm}\label{t:gl.dim}
The $n$-variable real-exponent polynomial ring has global
dimension~\mbox{$n\!+\!1$}.\!
\end{thm}
\begin{proof}
Proposition~\ref{p:n+1} yields the lower bound $\gldim R \geq n + 1$.
For the opposite bound, given any $R$-module~$M$, each module in the
length~$n$ flat resolution from Theorem~\ref{t:flat-res} has a free
resolution of length at most~$1$ by Lemma~\ref{l:orthant-res}. By the
comparison theorem for projective resolutions
\cite[Theorem~2.2.6]{weibel1994}, the differentials of the flat
resolution lift to chain maps of these free resolutions. The total
complex of the resulting double complex has length at most~$n + 1$.
\end{proof}
\begin{remark}\label{r:pd(k)}
As an $\RR^n$-graded module, the quotient $R/I$ in
Proposition~\ref{p:n+1} is nonzero only in degrees from $\RR^{n-1}
\subseteq \RR^n$. Hence $R/I$ is ephemeral \cite{berkouk-petit2021},
meaning more or less that its set of nonzero degrees has measure~$0$.
The projective dimension exceeding~$n$ is not due solely to this
ephemerality. Indeed, multiplication by~$x_n^1$ induces an inclusion
of $R/I$ into $R/I'$ for $I' = \<x_1,\dots,x_{n-1}\> + \<x_n^\ve \mid
\ve > 1\>$, which is supported on a unit cube in~$\RR^n_+$ that is
neither open nor closed.
Theorem~\ref{t:gl.dim} implies that $\Ext^{n+1}_R(R/I',N)
\onto\nolinebreak \Ext^{n+1}_R(x_n R/I,N)$ is surjective for all
modules~$N$, so $R/I'$ has projective dimension $n+1$. On the other
hand, it could be the closed right endpoints
\cite{functorial-endpoints}---that is, closed socle elements
\cite[Section~4.1]{essential-real}---that cause the problem.
Thus it could be
that sheaves in the conic topology (``\mbox{$\gamma$-topology}''; see
\cite{kashiwara-schapira2018,strat-conical,berkouk-petit2021}) have
consistently lower projective dimensions.
\end{remark}
\section{Dense exponent sets}\label{s:dense}
The results in Sections~\ref{s:fldim} and~\ref{s:gldim} extend to
monoid algebras for positive cones of subgroups of~$\RR^n$ satisfying
a mild density condition. Applications to noncommutative toric
geometry should require restriction to subgroups of this sort.
\begin{defn}\label{d:G}
Let $G \subseteq \RR^n$ be a subgroup whose intersection with each
coordinate ray $\rho$ of\/~$\RR^n$ is dense.
Write \raisebox{0pt}[11pt][0pt]{$G_+ = G \cap \RR^n_+$} for the
positive cone in~$G$, set $\orho_+ = \rho \cap \RR^n_+ \minus \{\0\}$,
and let \raisebox{0pt}[11pt][0pt]{$\oG_+ = \prod_\rho (G \cap \orho_+)$}
be the set of points in~$G$ whose projections to all coordinate rays
are strictly positive and still lie in~$G$. Set $R_G = \kk[G_+]$, the
monoid algebra of~$G_+$ over~$\kk$. Let $R_G^\xx$~and~$R_G^\yy$ be
the copies of~$R_G$ embedded in $R_G^{\otimes 2}$ as $R_G \otimes
1$~and~$1 \otimes R_G$. For $\ee \in \oG_+$ let $\xx^{[\ee]} =
x_1^{\ve_1},\dots,x_n^{\ve_n}$ be the corresponding sequence of
elements in~$R_G$.
\begin{enumerate}
\item%
The \emph{open Koszul complex} over~$R_G$ is the colimit
$\oK_\spot^\xx = \dirlim_{\ee \in \oG_+} \KK_\spot(\xx^{[\ee]})$.
\vspace{.3ex}
\item%
Fix an $R_G^\xx$-module~$M$. Write $M^\yy$ for the corresponding
$R_G^\yy$-module, with the $\xx$ variables renamed to~$\yy$. With
notation for variables as in Lemma~\ref{l:Rotimes2}, the \emph{open
total Koszul complex} of~$M$ is the colimit $\oK^{\xx-\yy}_\spot(M) =
\dirlim_{\ee \in \oG_+} \KK_\spot(\xx^{[\ee]}-\nolinebreak\yy^{[\ee]})
\otimes_{R^\yy}\nolinebreak M^\yy$.
\item%
Given a subset $\sigma \subseteq \{1,\dots,n\}$, the \emph{orthant
ideal} $I_\sigma \subseteq R_G$ is generated by all monomials
$\xx^{\ee}$ for $\ee \in G_+$ such that $\ve_i > 0$ for all $i \in
\sigma$.
\end{enumerate}
\end{defn}
\begin{example}\label{e:dense}
Let $G$ be generated by
$\bigl[\twoline 20\bigr], \bigl[\twoline \pi 0\bigr], \bigl[\twoline
11\bigr], \bigl[\twoline 0e\bigr]$ as a subgroup of~$\RR^2$, so $G$
consists of the integer linear combinations of these four vectors.
The intersection $G \cap \rho^y$ with the $y$-axis $\rho^y$ arises
from integer coefficients $\alpha$, $\beta$, $\gamma$, and $\delta$
such that
$$%
\bigl[\twoline 0y\bigr]
=
\alpha\bigl[\twoline 20\bigr] + \beta\bigl[\twoline \pi 0\bigr] +
\gamma\bigl[\twoline 11\bigr] + \delta\bigl[\twoline 0e\bigr].
$$
This occurs precisely when $2\alpha + \pi\beta + \gamma = 0$, and in
that case $y = \gamma + \delta e$. Since $\pi$ is irrational it is
linearly independent from~$1$ over the integers,
so $\beta = 0$ and hence $\gamma = -2\alpha$ is always an even
integer. Since $e$ is irrational, the only integer points
in $G \cap \rho^y$ have even $y$-coordinate:
$$%
G \cap \rho^y
=
\bigl\<\bigl[\twoline 02\bigr], \bigl[\twoline 0e\bigr]\bigr\>.
$$
The point $\bigl[\twoline 11\bigr] \in G$ has strictly positive
projection to~$\rho^y$, but that projection lands outside of~$G$.
Hence \raisebox{0pt}[11pt][0pt]{$\oG_+ = G \cap \orho^{\,x}_+ \times G
\cap \orho^{\,y}_+$} is a proper subset of~$G_+$, as witnessed by the strictly
positive point \raisebox{0pt}[11pt][0pt]{$\bigl[\twoline 11\bigr] \in
G_+ \minus \oG_+$}. Nonetheless, since $G$ is dense in each coordinate
ray, $\oG_+$ contains points with both coordinates arbitrarily close
to~$0$, which is all the colimit in the proof of
Theorem~\ref{t:G-flat-res}~requires.
\end{example}
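The arithmetic in the example can be spot-checked by machine. The
following fragment (an illustration only, relying on the exact constants
$\pi$ and $e$ provided by the SymPy library) enumerates small integer
combinations of the four generators and collects the $y$-coordinates of
those landing on the $y$-axis; every value produced has the form
$2m + ne$ with integers $m,n$.
\begin{verbatim}
import sympy as sp
from itertools import product

gens = [(2, 0), (sp.pi, 0), (1, 1), (0, sp.E)]   # the four generators
hits = set()
for coeffs in product(range(-3, 4), repeat=4):
    x = sum(c * g[0] for c, g in zip(coeffs, gens))
    y = sum(c * g[1] for c, g in zip(coeffs, gens))
    if sp.simplify(x) == 0:        # on the y-axis: forces beta = 0
        hits.add(sp.sympify(y))

print(sorted(hits, key=sp.default_sort_key))
\end{verbatim}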
\begin{thm}\label{t:G-flat-res}
If a subgroup $G \subseteq \RR^n$ is dense in every coordinate
subspace of\/~$\RR^n$ as in Definition~\ref{d:G}, then
Theorem~\ref{t:flat-res} holds verbatim with $R_G = \kk[G \cap
\RR^n_+]$ in place of~$R$. Consequently, the ring~$R_G$ has flat
dimension~$n$ and global dimension~$n+1$.
\end{thm}
\begin{proof}
For $\sigma \subseteq \{1,\dots,n\}$ and $\ee \in \RR^n$ let
$\ee_\sigma \in \RR^n$ be the restriction of $\ee$ to~$\sigma$, so
$\ee_\sigma$ has entry~$0$ in the coordinate indexed by every $j
\not\in \sigma$. The $2^n$ summands of $\oK_\spot^\xx$ are orthant
ideals because $\KK_i(\xx^{[\ee]}) \cong \bigoplus_{|\sigma|=i}
\<\xx^{\ee_\sigma}\>$ naturally with respect to the inclusions induced
by the colimit defining~$\oK_\spot^\xx$. Each orthant ideal is flat
because this colimit is filtered: given two vectors $\ee_1, \ee_2 \in
\oG_+$, the coordinatewise minimum $\ee_1 \wedge \ee_2 \in \RR^n_+$
lies in~$\oG_+$ because its projection to each ray lies in~$G$.
Proposition~\ref{p:fldim(k)} therefore generalizes to~$R_G$ by the
exactness of colimits and the cokernel calculation $\kk = R_G/\mm$ for
the \mbox{$G$-graded} maximal ideal $\mm = \<\xx^\ee \mid \0 \neq \ee
\in G_+\>$. Example~\ref{e:usual-koszul} generalizes with no
additional work. Lemma~\ref{l:Rotimes2} generalizes by exactness of
colimits and the cokernel calculation $R_G \cong R_G^{\otimes
2}/\<\xx^{[\ee]} - \yy^{[\ee]} \mid \0 \neq \ee \in G_+\>$. The
conclusion of Remark~\ref{r:orthant} generalizes, but the reason is
direct calculation of $\oK_\spot(\xx^{[\ee]}-\yy^{[\ee]})$ as was done
for~$\oK_i^\xx$. The original proof of Theorem~\ref{t:flat-res} uses
that tensor products commute with colimits, but the generalized proof
avoids that argument by simply defining $\oK_\spot^{\xx-\yy}$ as the
relevant colimit. The rest of the proof and the generalization of the
flat dimension claim in Corollary~\ref{c:fldim} work mutatis mutandis,
given the strengthened versions of the results~they~cite.
The orthant ideal resolution in Lemma~\ref{l:orthant-res} generalizes
to~$R_G$ by the density hypothesis, including specifically the part
about intersecting with coordinate subspaces. The Ext calculation in
Lemma~\ref{l:Ext(k,F)} works again by density of the exponent set
of~$\mm_1$ in~$\RR_+$. The statement and proof of
Proposition~\ref{p:n+1} work mutatis mutandis for $R_G$ in place
of~$R$ as long as the power of~$x_i$ generating~$J$ lies in the
intersection of~$G$ with the corresponding coordinate ray of~$\RR^n$.
The proof of Theorem~\ref{t:gl.dim} then works verbatim, given the
strengthened versions of the results it cites.
\end{proof}
\section{Introduction}
Analysis of the event anisotropies (anisotropic flow)
in multiparticle production in non-central
nuclear collisions has proved to be one of the most informative paths in
understanding the physics and characterizing the properties of the
dense and hot strongly interacting medium. A
continuous increase in the value of in-plane elliptic flow
($v_2>0$)~\cite{Barrette:1996rs} has been observed
from the top AGS energies to RHIC.
At RHIC, strong elliptic flow~\cite{Ackermann:2000tr}
comparable in strength to the predictions of ideal hydrodynamics,
and the hadronization via quark coalescence following from constituent
quark number scaling of differential
flow~\cite{Voloshin:2002wa,Molnar:2003ff},
together with the jet quenching, are the key
ingredients in the picture of sQGP (strongly coupled Quark Gluon
Plasma).
The field is rapidly developing and evolving. From the ``discovery
phase'' of the first years of RHIC operations it has been transforming into
a detailed quantitative
description of the sQGP phase and the subsequent hadronization.
The plots of $v_2/\eps$ vs particle density~\cite{Voloshin:2002wa,
Alt:2003ab,Voloshin:2007af}
that have been extensively used in
the assessment of the level of thermalization reached in
relativistic nuclear collisions
and of the applicability of ideal hydrodynamics come under scrutiny:
are the measurements of elliptic flow precise
enough, are the anisotropies what we think they are and how much are they
modified by fluctuation processes? When comparing to
hydrodynamical
calculations, are the proper initial conditions used in the calculations?
Could it be that we have missed some important physics building the models?
In the last couple of years significant progress has
been made in answering each and every one of these questions.
The role of viscosity, flow fluctuations, and initial conditions
(eccentricity and initial flow (velocity) field) are a few of the questions
reported on in this review. I also discuss preliminary measurements of
azimuthal
``out-of-plane'' anisotropies that are related to the fundamental
question of strong parity violation.
Finally, I briefly overview future measurements at RHIC and LHC.
Unfortunately, the space does not allow any detailed discussion of many
other very important developments,
such as flow of $\phi$-mesons~\cite{Afanasiev:2007tv,Abelev:2007rw}
which is important for understanding the
relative flow development in the partonic and hadronic phases,
flow of the deuterons~\cite{Afanasiev:2007tv}
and $K^*$-mesons as tests of coalescence,
heavy flavor flow, etc. The kinetic-energy (KE) scaling of elliptic flow observed by
the PHENIX Collaboration~\cite{Adare:2006ti}
is still not fully understood/appreciated.
There is no doubt that these measurements will enrich our understanding
of the dense QCD medium even further.
Taking into account the significance of the $v_2/\eps$ plot in
establishing the sQGP picture,
I spend most of my time discussing recent developments related to
this plot. There have been a number of important findings:
(a) Along with several indirect indications that even in central Au+Au
collisions thermal equilibrium is not complete, it was found that
viscous effects corresponding to $\eta/s$ values even as small as the
conjectured lower limit lead to a significant reduction
in the predicted elliptic
flow compared to the ideal hydro case. This would lead to a
contradiction with experimental measurements if other effects,
responsible for an increase of elliptic flow (compared to the
``standard'' hydrodynamical calculation), were not identified.
Several such effects have been reported.
(b) It was noticed that ideal hydro calculations, if tuned to
describe spectra, yield larger elliptic flow than previously
thought.
(c) It was shown that in some models, e.g. the CGC, the initial eccentricity
can take significantly larger values than in the optical Glauber model
that is usually used in hydro calculations.
The larger eccentricities inevitably lead to larger elliptic flow.
(d) Flow fluctuations, the nature of which has become much better understood
in the last few years, lead to an increase of the {\em apparent} flow.
(e) Finally,
it was noticed that the gradients in the initial velocity
field also lead to an increase in the final values of elliptic flow.
Any of the above-mentioned effects can be quite significant, each leading to a
20-30\% or even larger change in the values of $v_2$.
The final ``assembly'' of these
effects into one reliable model is still under way.
An attempt at a model-independent
analysis of the dependence of $v_2/\eps$ on particle density, based on a
parametrization in terms of the {\em Knudsen number}, has been developed
in~\cite{Bhalerao:2005mm,Drescher:2007cd}.
Using an expression
$v_2/\eps =(v_2/\eps)_{hydro} (1+K/K_0)^{-1}$ (where the parameter
$K_0\approx 0.7$ is independently estimated from comparison
to model calculations)
to fit the data, see Fig.~1, the authors conclude that at
RHIC we might still be up to 30\% below the ideal ``hydro limit'' even for
the most central collisions.
Their estimate of the viscosity yields values of
$\eta/s=0.11-0.19$ depending on the CGC or Glauber initial conditions.
Similar fits to the STAR data performed by
R.~Snellings~\cite{Raimond:private} and
collaborators lead to similar conclusions.
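For orientation, the saturation ansatz is easy to explore numerically;
the following sketch is an illustration only and not the fitting code of
Refs.~\cite{Bhalerao:2005mm,Drescher:2007cd}.
\begin{verbatim}
K0 = 0.7    # estimated from comparison to transport-model calculations

def v2_over_eps(K, hydro_limit=1.0):
    # fraction of the ideal-hydro limit reached at Knudsen number K
    return hydro_limit / (1.0 + K / K0)

for K in (0.1, 0.3, 0.7):
    print(K, round(v2_over_eps(K), 3))
# K of about 0.3 already puts v2/eps at ~70% of the hydro limit,
# i.e. roughly 30% below the ideal-hydro value.
\end{verbatim}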
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{drescher_fit.jpg}
\caption{Fit to $v_2/\eps$ in terms of the Knudsen number~\cite{Drescher:2007cd}.}
\label{fig1}
\end{minipage}
\hspace{0.03\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{romat.jpg}
\caption{Viscous hydro calculations~\cite{Romatschke:2007mq}
compared to data.}
\label{fig:romat}
\end{minipage}
\end{figure}
The magnitude of {\em the viscous effects} could already be judged from the
early calculations~\cite{Teaney:2001av}
where the hydrodynamical evolution at some intermediate stage was joined to
a transport model to simulate the late (viscous) evolution of the
system. Later, attempts were made to introduce viscosity directly into
hydrodynamic calculations~\cite{Teaney:2003kp}.
Recently a few ``full''
calculations~\cite{Romatschke:2007mq,Song:2007fn,Song:2007ux}
of the hydrodynamical expansion have been performed with viscous terms
explicitly included in the equations.
Note that not everybody agrees on the
form of these terms~\cite{Song:2007fn}, though the differences in the
results due to the use of somewhat different equations are likely small.
At the same time everybody agrees on the significance of the viscous
effects even for the ``minimal'' value of the viscosity ($\eta/s=1/(4\pi)$).
The results presented in Figs.~\ref{fig:romat} and~\ref{fig:song}
show that even
the minimal viscosity leads to up to $\sim$25-30\% reduction in flow
values in Au+Au collisions and probably more than 50\% in Cu+Cu.
Note that viscosity
coefficients calculated in pQCD are usually much larger than
would be allowed by the data.
In this sense, noteworthy are the recent calculations~\cite{Xu:2007jv},
which emphasize the importance of taking into account
$2 \leftrightarrow 3$ processes.
With these effects included,
the viscosity coefficient appears to be about an order of magnitude smaller
than in previous calculations
and falls into the range allowed by the data.
One can wonder how it is possible, with viscous
effects being that strong, that
the results of the ideal hydrodynamical calculations are not much higher
than the experimental measurements.
Indeed, several effects have been identified which possibly lead
to a significant increase in the predicted flow. Taken together with viscous
effects they may restore the agreement with experiment.
Firstly, in~\cite{Huovinen:2007xh} it was explicitly demonstrated
that the ideal hydro calculations cannot be ``tuned''
to describe both spectra and elliptic flow.
If one tunes the
model to describe spectra, the elliptic flow values appear too
large, by about 20-30\%, compared to the data, see Fig.~\ref{fig:pasi}.
What leaves even more room for viscous effects is the
observation~\cite{Hirano:2005xf,Drescher:2006pi} that
the initial eccentricity calculated in the CGC model yields
values up to 50\% larger than in the ``standard'' optical Glauber
model. The effect has been further studied
in~\cite{Drescher:2007ax}; in~\cite{Lappi:2006xc} it was
discussed how the eccentricity depends on the details of the CGC model,
which can be taken as a possibility to investigate the CGC model by measuring flow.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{song2v2pt.jpg}
\caption{
$v_2(p_t)$ from ideal and viscous ($\eta/s=1/(4\pi)$)
hydrodynamics~\cite{Song:2007ux}.}
\label{fig:song}
\end{minipage}
\hspace{0.03\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\centerline{\includegraphics[width=0.9\textwidth]{pasiv2pt.jpg}}
\caption{Pion $v_2(p_t)$ in two scenarios~\cite{Huovinen:2007xh}.
Solid line indicates the
results using parameters best fit to spectra.}
\label{fig:pasi}
\end{minipage}
\end{figure}
The role of flow fluctuations and non-flow effects is another
long-standing problem that has received a lot of attention, and
significant progress has been made in the last
couple of years.
in the initial system geometry~\cite{Miller:2003kd} defined by nuclear
{\em participants}~\cite{Manly:2005zy}
(interacting nucleons or quarks) has been greatly
clarified~\cite{Voloshin:2006gz,Bhalerao:2006tp,Voloshin:2007pc,Alver:2008zz,Broniowski:2007ft}.
The following picture emerges:
Anisotropic flow is defined as the correlations to the reaction plane
spanned by the impact parameter and the beam axis.
At fixed impact parameter, the geometry of the {\em participant zone}
fluctuates, both, in terms of the value of the
eccentricity as well as the orientation of the major
axes.
Then the anisotropy develops along the plane spanned by the minor
axis of the participant zone and the beam direction,
the so called {\em participant plane}. As the true
reaction plane is not known and the event plane is estimated from
the particle azimuthal distribution ``defined'' by the participant plane,
the apparent (participant plane) flow appears to be always
bigger (and always ``in-plane'', $v_{2,PP}>0$) compared to the ``true''
flow as projected onto the reaction plane (see Fig.~\ref{fig:planesQ}).
It was noticed
in~\cite{Voloshin:2007pc} that in collisions of heavy nuclei the
fluctuations in the eccentricity
$(\eps_x,\eps_y)=\bigl(\la (\sigma_y^2-\sigma_x^2)/(\sigma_y^2+\sigma_x^2)\ra,
\la 2\sigma_{xy}^2/(\sigma_y^2+\sigma_x^2)\ra\bigr)$
can be well described by a two-dimensional Gaussian.
What is not trivial is that for such Gaussian
fluctuations the higher cumulant flow ($v\{n\}, n\ge 4$) is insensitive
not only to non-flow but also to eccentricity fluctuations.
All of the higher cumulants are
exactly equal to the ``true'' flow, namely as given by projection onto
the reaction plane. At the same time, the apparent (participant plane)
flow becomes unmeasurable in the sense that flow fluctuations cannot
be separated from non-flow contributions by means of correlation
measurements.
An important conclusion from that
study was that in most cases (except, probably, in mid-peripheral
and peripheral Cu+Cu collisions, when the Gaussian approximation
breaks~\cite{Alver:2008zz}, and $\eps_{part}\{4\}$ does not agree with
$\eps_{std}$~\cite{Voloshin:2006gz}) the measurements of higher cumulant
flow values provide the elliptic flow
relative to the true reaction plane.
Similarly, the same value
is given by several other methods, such as Lee-Yang Zeroes, Bessel
Transform, and fit to q-distributions.
This greatly simplifies the comparison, e.g. of hydrodynamical calculations
to the data, as it says that in such calculations one need not
worry about how to take into account the fluctuations in the initial
eccentricity (which is a non-trivial task) but just compare to the
``right'' measurement, e.g. $v_2\{4\}$.
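This statement can be verified with a toy Monte Carlo (a sketch with
assumed illustrative numbers, not an analysis of real data): smearing the
eccentricity vector with a two-dimensional Gaussian inflates the
participant-plane average, while the fourth-order cumulant returns the
reaction-plane mean.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mean, sigma, n = 0.3, 0.1, 10**6
ex = rng.normal(mean, sigma, n)    # component along the reaction plane
ey = rng.normal(0.0, sigma, n)     # out-of-plane component
eps = np.hypot(ex, ey)             # participant-plane eccentricity

eps2 = np.mean(eps**2)
eps_cum4 = (2 * eps2**2 - np.mean(eps**4)) ** 0.25
print(np.mean(eps), eps_cum4)      # about 0.32 and 0.30: the fourth
                                   # cumulant recovers the "true" value
\end{verbatim}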
This understanding also allowed one to appreciate some earlier calculations
with the uRQMD model~\cite{Zhu:2005qa,Zhu:2006fb}. There, it was shown
that using higher cumulants and/or the LYZ method one can indeed measure
the elliptic flow very well, but it was not at all clear why no traces
of the effects of flow fluctuations,
which were expected in this model, had been observed.
Unfortunately this progress in understanding the nature of fluctuations
does not help in resolving the problem of {\em measuring}
flow fluctuations (in the participant plane) and non-flow.
Strictly speaking, to make any estimates of those one is required to
make assumptions.
Most often, to suppress the non-flow contribution, the azimuthal correlations
between particles with large rapidity separation are used.
The problem with this method is that there are no reliable estimates of
how well it suppresses non-flow, nor of how much the flow
fluctuations (in this case correlations) change after imposing such a cut.
At this conference the PHOBOS~\cite{PHOBOSv2fluc} and the
STAR~\cite{STARv2fluc} collaborations presented their
revised (compared to QM'06) results on flow fluctuations.
These results are in good qualitative and quantitative agreement,
see Figs.~\ref{fig:flucStar},~\ref{fig:flucPhobos}.
In~\cite{STARv2fluc} a conservative approach is taken and only upper
limits on fluctuations are reported. The PHOBOS Collaboration
uses estimates of non-flow effects from correlations
with large rapidity separations and reports a more restrictive range for
fluctuations.
Both agree that the current
measurements exhaust the (nucleon) eccentricity values obtained
in the MC Glauber model and in
this sense somewhat favor models which predict smaller relative
fluctuations, such as the CGC model or the MC Glauber model taking into
account the constituent-quark substructure.
Another important and interesting direction that has just started
to be explored is the role of the
non-zero initial flow velocity profile.
The first obvious candidate here is to take into account the initial
velocity gradient along the impact parameter, Fig.~\ref{fig:becat}.
As shown in~\cite{Becattini:2007sr} such a gradient directly contributes to the
in-plane expansion rate (see Eq.~22 in~\cite{Becattini:2007sr}).
I would draw attention to the fact that those effects naturally
also lead to {\em directed} flow (see the same Eq.~22), which is also briefly
addressed in~\cite{Troshin:2007cp}.
It will be very interesting to compare the calculations in such a model to
the very precise data from STAR~\cite{Wang:2007kz}
on directed flow obtained with the
reaction plane determined by neutrons in the Zero Degree Calorimeter.
The relation to other models~\cite{Csernai:2006yk,Snellings:1999bt}
predicting a non-trivial dependence of directed flow on rapidity would
also be very interesting.
Speculating on this subject one would also notice that viscous effects
must also play an important role in such a scenario.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{soren_v2fluc.jpg}
\caption{Elliptic flow fluctuations as estimated by STAR.}
\label{fig:flucStar}
\end{minipage}
\hspace{0.03\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{phobos_v2fluc.jpg}
\caption{Elliptic flow fluctuations as estimated by the PHOBOS
Collaboration.}
\label{fig:flucPhobos}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{planesQ.jpg}
\caption{Flow vector distribution at
fixed $(\eps_x,\eps_y)$~\cite{Voloshin:2007pc}}
\label{fig:planesQ}
\end{minipage}
\hspace{0.03\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{becat-Fig.png}
\caption{Initial velocity profile in non-central nuclear
collisions~\cite{Becattini:2007sr}}
\label{fig:becat}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{uuvCentAu200Cu.pdf}
\caption{Azimuthal anisotropy correlator sensitive to strong
$\cal P-$violation effects.}
\label{fig:parity}
\end{minipage}
\hspace{0.03\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{busza_v2LHC.jpg}
\caption{Extrapolation of $v_2$ values to LHC
energies~\cite{Busza:2007ke}.}
\label{fig:busza}
\end{minipage}
\end{figure}
It was shown in~\cite{Kharzeev:2004ey}
that the effect of strong parity violation, which leads
to unequal numbers of right- and left-handed fermions (quarks),
in the presence of the strong magnetic fields of the colliding nuclei,
would result in charge separation (preferential emission of same-charge
particles) in the direction perpendicular to
the reaction plane.
Such an anisotropy, which very much resembles
``out-of-plane directed flow'', can be addressed with the
help of three-particle correlations~\cite{Voloshin:2004vk}
by measuring $\la \cos(\phi_\alpha+\phi_\beta-2\Psi_{RP}) \ra$,
where $\phi_{\alpha,\beta}$ are the azimuthal angles of two (same- or
opposite-charge) particles, and $\Psi_{RP}$ is the reaction plane angle.
The estimates~\cite{Kharzeev:2007jp} (see also talk by H.~Warringa at
this conference)
indicate that the effect is
strong enough to be observed in heavy ion collisions.
At this conference the STAR Collaboration reported
the preliminary results~\cite{Voloshin:qm2008},
see Fig.~\ref{fig:parity},
that qualitatively agree with
estimates presented in~\cite{Kharzeev:2004ey,Kharzeev:2007jp}.
Note that the correlator
$\la \cos(\phi_\alpha+\phi_\beta-2\Psi_{RP}) \ra$ is $\cal P$-even and
contains contributions from other effects not related to parity
violation.
A careful analysis of such contributions is obviously
needed before any strong conclusion can be drawn from these measurements.
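A toy simulation illustrates the expected sign pattern (an illustration
with assumed parameter values, not the STAR analysis): charge separation
along the out-of-plane direction with an event-wise random sign makes the
same-charge correlator negative and the opposite-charge one positive,
both of magnitude $a_1^2$, where $a_1$ parametrizes the charge-dependent
``out-of-plane directed flow''.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a1, n_events, mult = 0.1, 20000, 20    # assumed toy parameters

def draw(n, eps):
    # rejection-sample n angles from dN/dphi ~ 1 + 2*eps*sin(phi),
    # working in the frame where Psi_RP = 0
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0, 2 * np.pi, 4 * n)
        keep = rng.uniform(0, 1 + 2 * abs(eps), phi.size) \
               < 1 + 2 * eps * np.sin(phi)
        out = np.concatenate([out, phi[keep]])
    return out[:n]

same, opp = [], []
mask = ~np.eye(mult, dtype=bool)
for _ in range(n_events):
    s = rng.choice([-1, 1])            # event-wise random sign
    pos = draw(mult, s * a1)           # positive particles
    neg = draw(mult, -s * a1)          # negative particles
    same.append(np.cos(np.add.outer(pos, pos))[mask].mean())
    opp.append(np.cos(np.add.outer(pos, neg)).mean())

print(np.mean(same), np.mean(opp))     # ~ -a1**2 and ~ +a1**2
\end{verbatim}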
The coming years promise a wealth of new and interesting data from the
low-energy RHIC runs
and, of course, from the LHC. The main interest of the low-energy RHIC scan
(and anisotropic flow is no exception) is the search for the
QCD critical point. The scan would cover the energy region from the top
AGS energies, over the CERN SPS range, and higher. In terms of
anisotropic flow, two major observables to watch would be a possible
``wiggle'' in the $v_2/\eps$ dependence on particle density~\cite{Voloshin:1999gs}
and a
``collapse'' of directed flow~\cite{Stocker:2007pd}.
RHIC also has plans to extend its reach
in terms of energy density by using uranium beams. From the first
estimates and ideas of using uranium beams we now have detailed
simulations~\cite{Nepali:2007an} of such collisions,
with developed methods for selecting the
desired collision geometry.
The predictions for the LHC are rather uncertain, though most agree
that the elliptic flow will continue to increase~\cite{Abreu:2007kv},
partially due to the relatively smaller contribution of viscous effects.
Simple extrapolations~\cite{Busza:2007ke,Borghini:2007ub} of
the collision energy dependence of $v_2$ look rather ``reliable''.
Note that there exist calculations predicting a {\em decrease} of the
elliptic flow~\cite{Krieg:2007sx}.
Another important expectation is an increase in the mass dependence
(splitting) of $v_2(p_t)$ due to a strong increase of radial flow.
In summary, we have had very exciting years of anisotropic flow study,
which greatly enriched our understanding of ultra-relativistic nuclear
collisions and multiparticle production in general. We are looking
forward to new physics from the LHC and RHIC.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=0.95\textwidth]{v1collapse.jpg}
\caption{``Collapse'' of directed flow as discussed in~\cite{Stocker:2007pd}.}
\end{minipage}
\hspace{0.03\textwidth}
\begin{minipage}[t]{0.48\textwidth}
\end{minipage}
\end{figure}
\vspace*{1mm}
I thank A. Poskanzer, R. Snellings, and A. Tang for numerous fruitful
discussions.
\vspace*{3mm}
\section{Introduction}
The recent progress in deep learning approaches for solving combinatorial optimization problems (COPs) has shown that specific subclasses of these problems can now be solved efficiently through training a parameterized model on a distribution of problem instances. This has most prominently been demonstrated for the vehicle routing problem (VRP) and its variants (\cite{kool2018attention, chen2019learning, xin2021multi}).
The current leading approaches model the VRP either as a local search problem, where an initial solution is iteratively improved, or
as a sequential construction process successively adding customer nodes to individual tours until a solution is achieved.
Both types of approaches bypass the implicit bin-packing problem
that assigns packages (the customers and their demands) to a pre-defined maximum number of bins (the vehicles). We show that this assignment can be learned explicitly while also minimizing the main component of the vehicle routing objective, i.e. the total tour length.
Furthermore, finding a tour plan for a fixed fleet size constitutes an essential requirement in many practical applications. For instance, many small or medium-sized service providers cannot accommodate fleet size adjustments, as they would require additional drivers on short notice or face very high acquisition costs that prohibit a dynamic extension of the fleet.
Figure \ref{fig:hist} shows the variation of fleet sizes for different problem sizes. For all problem sizes the baseline approaches (green, red, purple) employ more vehicles than our approach (blue, orange) and thereby incur potentially higher costs by requiring more vehicles. In contrast, our vanilla approach (blue) guarantees to solve the respective problems with exactly the a priori available number of vehicles (4, 7, 11 for the VRP20, VRP50, VRP100 respectively) or fewer.
Our approach of learning to construct a complete, valid tour plan that assigns all vehicles a set of customers at once has so far not been applied to the VRP. We amend and extend the Permutation Invariant Pooling Model (\cite{kaempfer2018learning}), which has been applied solely to the simpler multiple Traveling Salesmen Problem (mTSP), where multiple tours also need to be constructed but are not subject to any capacity constraints.
In this work we propose an end-to-end learning framework that takes customer-coordinates, their associated demands and the maximal number of available vehicles as inputs to output a bounded number of complete tours and show that the model not only learns how to optimally conjunct customers and adhere to capacity constraints, but that it also outperforms the learning-based reinforcement learning (RL) baselines when taking into account the total routing costs.
By adopting a full-fledged supervised learning strategy, we contribute the first framework of this kind for the capacitated VRP that is not only faster and less cumbersome to train, but also demonstrates that supervised methods are able, in particular under practical circumstances, to outperform state-of-the-art RL models.
\begin{figure}[htp]
\centering
\subfloat[VRP with 20 Customers]{%
\includegraphics[trim=0.2cm 0.0cm 1.5cm 1.0cm,width=0.32\textwidth]{histogram20_tours_SolG_greedy.pdf}%
}%
\hfill%
\subfloat[VRP with 50 Customers]{%
\includegraphics[trim=0.3cm 0.0cm 1.5cm 1.0cm,width=0.32\textwidth]{histogram50_tours_SolG_v2.pdf}%
}%
\hfill%
\subfloat[VRP with 100 Customers]{%
\includegraphics[trim=0.3cm 0.0cm 1.5cm 1.0cm,width=0.32\textwidth]{histogram100_tours_SolG_greedy.pdf}
}%
\caption{Histograms over the fleet sizes used for solving the VRP. Our approach (blue) is able to solve the VRP with 20, 50, 100 customers with 4, 7 and 11 vehicles respectively, while the baseline approaches (see section 5) utilize considerably more vehicles. Our approach (orange) refers to the setting where we ensure that a solution is found (see section 4.3).}
\label{fig:hist}
\end{figure}
The contributions of this work can be summarized as follows:
\begin{itemize}
\item \textit{Tour-plan construction approach for the capacitated VRP with fixed vehicle costs}: We propose a deep learning framework that \textit{learns to solve} the NP-hard capacitated VRP and constructs a complete tour plan for a specified fleet size. In this way it ensures to solve the problem for an a priori fixed number of vehicles.
\item \textit{Supervised learning for discrete optimization}: The proposed approach is the first ML-based construction approach for the VRP that relies \textit{entirely} on supervised learning to output feasible tours for all available vehicles.
\item \textit{Competitiveness}: We compare our model against prominent state-of-the-art deep learning and OR solvers through a thorough and unified experimental evaluation protocol that ensures comparability amongst different ML-based approaches. We show that our approach delivers competitive results in comparison to approaches that work only for unbounded fleet sizes and outperforms these methods when considering fixed vehicle costs.
\end{itemize}
\section{Related Work}
\textbf{Learned Construction Heuristics.}
The work by \cite{vinyals2015pointer} introducing \textit{Pointer Networks (PtrNet)} revived the idea of developing end-to-end deep learning methods to construct solutions for COPs, particularly for the TSP, by leveraging an encoder-decoder architecture trained in a supervised fashion. \cite{bello2016neural} and \cite{khalil2017learning} proposed RL to train the Pointer Network for the TSP, while \cite{nazari2018reinforcement} extended the PtrNet in order to make the model invariant to the order of the input sequence and applied it to the CVRP. The \textit{Attention Model (AM)} described in \cite{kool2018attention} features an attention based encoder-decoder model that follows the Transformer architecture (\cite{vaswani2017attention}) to output solution sequences for the TSP and two variants of the CVRP. \cite{xin2021multi} extend the AM by replacing the single decoder with multiple, identical decoders with no shared parameters to improve diversity among the generated solutions.
Besides these sequential approaches, "heatmap" construction approaches that learn to assign probabilities to given edges in a graph representation through supervised learning have recently been applied to routing problems. \cite{joshi2019efficient} use a Graph Convolutional Network to construct TSP tours and show that by utilizing a parallelized beam search, auto-regressive construction approaches for the TSP can be outperformed. \cite{kool2021deep} extend the proposed model by \cite{joshi2019efficient} for the CVRP while creating a hybrid approach that initiates partial solutions using a heatmap representation as a preprocessing step, before training a policy to create partial solutions and refining these through dynamic programming.
\cite{kaempfer2018learning} extend the learned heatmap approach to the number of tours to be constructed; their \textit{Permutation Invariant Pooling Network} addresses the mTSP (a TSP involving multiple tours but no additional capacity constraints), where
feasible solutions are obtained via a beam search and have been shown to outperform a meta-heuristic mTSP solver.
\noindent\textbf{Learned Search Heuristics.}
In contrast to construction heuristics that build solutions to routing problems from scratch, learned search heuristics utilize well-known local search frameworks commonly adopted in the field of operations research (OR) to learn to improve an initial given solution in an iterative fashion through RL.
The \textit{NeuRewriter} model (\cite{chen2019learning}) learns a policy that is composed of a "rule-picking" and a "region-picking" part in order to iteratively refine a VRP solution and demonstrates superior performance over certain auto-regressive approaches (\cite{nazari2018reinforcement, kool2018attention}).
Similarly, the \textit{Learning to Improve} approach by \cite{Lu2020A} learns a policy that iteratively selects a local search operator to apply to the current solution and delivers new state-of-the-art results amongst existing machine learning heuristics. However, it is computationally infeasible during inference, in some cases taking more than thirty minutes to solve a single instance, which makes this approach incomparable to most other methods.
In contrast, \textit{Neural Large Neighborhood Search (NLNS)} (\cite{hottung2020neural}) is based on an attention mechanism and integrates learned OR-typical operators into the search procedure, demonstrating a fair balance between solution quality and computational efficiency; however, it provides competitive results only when evaluated in batches of a thousand instances.
\section{Problem Formulation}
This section introduces the capacitated vehicle routing problem which implicitly encompasses a bin packing problem through finding a feasible assignment of customer nodes to a given set of vehicles. We showcase how existing ML-based approaches circumvent this property and solve a simpler variant of the problem. We then demonstrate how to cast the VRP into a supervised machine learning task
and finally
propose an evaluation metric that respects the utilization of vehicles in terms of
fixed vehicle costs.
\subsection{The Capacitated Vehicle Routing Problem}
Following \cite{baldacci2007recent} the Capacitated Vehicle Routing Problem (CVRP) can be defined in terms of a graph theoretical problem. It involves a
graph $G = (V,E)$ with vertex set $V = \{0,...,N\}$ and edge set $E$.
Each customer $i \in V_{\text{c}} = \{1,...,N\}$ has a demand $q_i$ which needs to be satisfied and each weighted edge $\{i,j\} \in E$ represents a non-negative travel cost $d_{ij}$. The vertex $0 \in V$ represents the depot.
A set $M$ of identical (homogeneous) vehicles with the same capacity $Q$ is available at the depot.
The general CVRP formulation sets forth that "all \textit{available vehicles} must be used" (\citet[p.~271]{baldacci2007recent}) and $M$ cannot be smaller than $M_{\text{min}}$, corresponding to the minimal fleet size needed to solve the problem.
Accordingly, the CVRP consists of finding \textit{exactly} $M$ simple cycles with minimal total cost, corresponding to the sum of the edges belonging to the tours, subject to the following constraints: (i) each tour starts and ends at the depot, (ii) each customer is served exactly once and (iii) the sum of customer demands
on each tour cannot be larger than $Q$. A formal mathematical definition of the problem can be found in Appendix \ref{AppMathForm}.
Therefore the solution requires exactly $M$ cycles to be found, which in turn requires the value of $M$ to be set a priori.
However, already the task of finding a \textit{feasible} solution with exactly $M$ tours is NP-complete, thus many methods choose to work with an unbounded fleet size (\citet[p.~375]{cordeau2007vehicle})
which guarantees a feasible solution by simply increasing $M$ if there are any unvisited customers left.
In contrast, our method tackles the difficulty of the assignment problem by jointly learning the optimal sequence of customer nodes and the corresponding optimal allocation of customers
to a fixed number of vehicles.
In order to do so, we rely on generated near-optimal targets that determine $M_{\text{min}}$ for each problem instance in the training set.
\subsection{The VRP as a Supervised Machine Learning Problem}
The goal of the supervised problem is to find an assignment $\hat{\mathbf{Y}}$ that allocates optimally ordered sequences of customers to vehicles.
Each VRP instance $X$, which defines a single graph-theoretical problem, is characterised by the following \textit{three entities}:
\begin{itemize}
\item A set of $N$ customers. Each customer $i$, $i \in \{1,...,N\}$ is represented by the features $\mathbf{x}^{\text{cus}}_i$.
\item $M$ vehicles, where each vehicle $k \in \{1,...,M\}$ is characterised by the features $\mathbf{x}^{\text{veh}}_k$.
\item The depot, indexed as vertex $0$, is represented by a feature vector $\mathbf{x}^{\text{dep}}$.
\end{itemize}
Consequently, a single VRP instance is represented as the set $X = (\mathbf{x}^{\text{dep}}; \mathbf{x}^{\text{cus}}_{1, \ldots,N}; \mathbf{x}^{\text{veh}}_{1,\ldots, M})$.
The corresponding ground truth target reflects the near-optimal tour plan and is represented by a binary tensor $\mathbf{Y} \in \{0,1\}^{M \times N' \times N'}$, where $N' = N+1$ accounts for the depot, $\mathbf{Y}_{k,i,j}=1$ if the $k^{\text{th}}$ vehicle travels from vertex $i$ to vertex $j$ in the solution, and $\mathbf{Y}_{k,i,j}=0$ otherwise.
Let $\X$ represent the predictor space populated by VRP instances $X$ and let $\Y$ define the target space comprising a population of target instances $\mathbf{Y}$.
The proposed model is trained to solve the following assignment problem:
Given a sample $\D\in(\X \times \Y)^*$ from an unknown distribution $p$ and a loss function $\ell: \Y\times\Y\rightarrow\mathbb{R}$, find a model $\hat{y}: \X\rightarrow\Y$ which minimizes the expected loss, i.e.\,
\begin{equation}
\begin{aligned}
\label{eq3.1}
\min \ \EE_{(x,y)\sim p} \ \ell(y, \hat y(x))
\end{aligned}
\end{equation}
For a sampled predictor set $x$, the model $\hat y(x)$ outputs a stochastic adjacency tensor $\hat{\mathbf{Y}} \in \mathbb{R}^{M \times N' \times N'}$ that encodes the approximate probability distribution defining a tour plan.
\subsection{Fixed Vehicle Costs}
In order to highlight the importance of the assignment problem incorporated in the CVRP and the
possible impact of unbounded fleet sizes in real-world use cases,
we introduce fixed vehicle costs in the evaluation.
As mentioned above, a problem instance is theoretically always ``solvable'' in terms of feasibility by any method that operates on the basis of unbounded fleet sizes. Nevertheless, an increase in the number of tours is undisputedly a cost driver besides the mere minimization of route lengths.
Thus, we compare our approach to state-of-the-art RL approaches that do not limit the number of tours in a more holistic and realistic setting via the metric that we denote $\text{Cost}_\text{v}$:
\begin{equation}
\begin{aligned}
\label{eq3.2}
\text{Cost}_\text{v} =
\sum_{k,i,j} d_{ij} \mathbf{Y}_{k,i,j}
+ c_\text{v}\sum_k \mathds{1}(\mathbf{Y}_{k,0,0}=0)
\end{aligned}
\end{equation}
where $c_\text{v}$ is the fixed cost per vehicle \textit{used}, such that if a vehicle $k$ leaves the depot, we have $\mathbf{Y}_{k,0,0}=0$, else we indicate that this vehicle remains at the depot with $\mathbf{Y}_{k,0,0}=1$.
Concerning the value of $c_{\text{v}}$ we rely on data used in the \textit{Fleet Size and Mix VRP} in \cite{golden1984fleet} and find matching values corresponding to the capacities used in our experimental setting.
To this end, we incorporate fixed costs of $c_\text{v}=35$, $c_\text{v}=50$ and $c_\text{v}=100$ for the VRP with 20, 50 and 100 customers respectively (a summarizing Table can be found in Appendix \ref{AppVehicleCost}).
We want to note that even though the RL baselines are not specifically trained to minimize this more realistic cost function, neither is our model, as will be shown in section 4.
This realistic cost setting serves as a mere statistical evaluation metric to showcase the implications of fixed vehicle costs.
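For concreteness, the metric in Equation \ref{eq3.2} can be computed directly from a binary solution tensor. The following is a minimal sketch in Python (the function name and the use of NumPy are our own illustrative choices, not part of the original implementation):
\begin{verbatim}
import numpy as np

def cost_v(Y, dist, c_v):
    """Total routing cost with fixed vehicle costs (Eq. 3.2).

    Y    : binary tensor (M, N', N'); Y[k, i, j] = 1 if vehicle k
           travels edge (i, j); Y[k, 0, 0] = 1 marks an unused vehicle.
    dist : distance matrix (N', N') with entries d_ij.
    c_v  : fixed cost per vehicle that leaves the depot.
    """
    tour_length = np.einsum('ij,kij->', dist, Y)   # sum_kij d_ij Y_kij
    n_used = int((Y[:, 0, 0] == 0).sum())          # vehicles leaving depot
    return tour_length + c_v * n_used
\end{verbatim}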
\section{Permutation Invariant VRP Model}
The proposed model outputs a complete feasible tour-plan for the CVRP in the form of a stochastic adjacency tensor $\hat{\mathbf{Y}}$. Inspired by the Transformer-based architecture in \cite{kaempfer2018learning} that tackles the mTSP, we extend this architecture to additionally capture vehicle capacity constraints. The framework encompasses three components: (i) an embedding layer (\textit{Encoder}), (ii) an information extraction mechanism (\textit{Pooling}) and (iii) an output construction block (\textit{Decoder}). Figure \ref{model_graph} displays an overview of the model architecture.
\subsection{Model Architecture}
\begin{figure}[H]
\centering
\includegraphics[scale=0.8,trim=0.0cm 0.0cm 0.0cm 0.0cm,width=1.0\textwidth]{PermInvVRP_new.pdf}
\caption{Permutation Invariant VRP model. Framework overview.}
\label{model_graph}
\end{figure}
\noindent\textit{Embedding}. The three entities that make up a VRP instance $X$ are embedded separately, such that each encoding is embedded into a shared dimensional space $d_m$ by a linear projection:
\begin{align}
\label{eq4.1}
h^{(0)}_{\text{g}} = W_{\text{g}}\mathbf{x}^{\text{g}} + b_{\text{g}} \quad \text{for g $\in \{$dep, cus, veh$\}$}
\end{align}
where $h^{(0)}_{\text{g}} \in \mathbb{R}^{l_{\text{g}} \times d_m}$ is the initial embedding of each entity that will remain of different lengths $l_{\text{cus}}=N, l_{\text{veh}}=M, l_{\text{dep}}=1$.
The separate embedding for each entity and the architecture of the network ensure that the order of the elements in the entities is not relevant for the operations performed in the network, which establishes the permutation invariance.
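As a sketch of Equation \ref{eq4.1}, the per-entity embedding can be written in PyTorch as follows; the feature sizes are illustrative assumptions (e.g.\ customers carrying coordinates plus demand), not the exact configuration of our implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class EntityEmbedding(nn.Module):
    """Separate linear embeddings for depot, customers and vehicles
    (Eq. 4.1); feature dimensions are illustrative assumptions."""

    def __init__(self, d_model=128, d_dep=2, d_cus=3, d_veh=1):
        super().__init__()
        self.emb = nn.ModuleDict({
            'dep': nn.Linear(d_dep, d_model),
            'cus': nn.Linear(d_cus, d_model),
            'veh': nn.Linear(d_veh, d_model),
        })

    def forward(self, x_dep, x_cus, x_veh):
        # Each entity keeps its own length (1, N, M); applying the same
        # linear map to every element keeps the model order-agnostic
        # within an entity.
        return {'dep': self.emb['dep'](x_dep),
                'cus': self.emb['cus'](x_cus),
                'veh': self.emb['veh'](x_veh)}
\end{verbatim}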
\noindent\textit{Information Extraction}. The information extraction component iteratively pools contexts across entities and linearly transforms these context vectors $c_{g,1},\ldots,c_{g,l_g}$ with the entity embeddings of the current layer $h_g^{(l)}$.
For each element $r \in \{1,\ldots,l_g\}$ in an entity the contexts are pooled:
\begin{equation}
\label{eq4.2}
\begin{aligned}
c^{(l)}_{g,r} &= \text{pool}(h^{(l)}_{g,1},\ldots,h^{(l)}_{g,l_{g}}) \quad \text{for g $\in \{$dep, cus, veh$\}$}
\end{aligned}
\end{equation}
For the next layer's entity representation, the contexts and previous embeddings are linearly projected:
\begin{equation}
\label{eq4.4}
\begin{aligned}
h^{(l+1)}_{g} &= f_g([h^{(l)}_g; c^{(l)}_{g}]) \quad \text{for g $\in \{$dep, cus, veh$\}$}
\end{aligned}
\end{equation}
This pooling mechanism proceeds until a final representation of each entity is retrieved; $h^{(L)}_{\text{dep}}$, $h^{(L)}_{\text{cus}}$ and $h^{(L)}_{\text{veh}}$. These are then passed to the decoding procedure.
A more detailed description of the operations performed in the pooling layer is given in Appendix \ref{AppLOOPool}.
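The following sketch illustrates one information-extraction step (Eqs.\ \ref{eq4.2}--\ref{eq4.4}); we substitute mean pooling for the pool operator, whose exact form is given in the appendix:
\begin{verbatim}
import torch
import torch.nn as nn

class PoolingLayer(nn.Module):
    """One information-extraction step (Eqs. 4.2-4.4), sketched with
    mean pooling standing in for the actual pool operator."""

    def __init__(self, d_model=128):
        super().__init__()
        self.f = nn.ModuleDict({g: nn.Linear(2 * d_model, d_model)
                                for g in ('dep', 'cus', 'veh')})

    def forward(self, h):
        out = {}
        for g, h_g in h.items():               # h_g: (l_g, d_model)
            c = h_g.mean(dim=0, keepdim=True)  # pooled context
            c = c.expand_as(h_g)               # one context per element
            out[g] = self.f[g](torch.cat([h_g, c], dim=-1))
        return out
\end{verbatim}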
\noindent \textit{Decoder}.
The decoding step in the model constructs the output tensor $\hat{\mathbf{Y}} \in \mathbb{R}^{M \times N' \times N'}$ where $N'=N+1$ indicates the size of the full adjacency tensor including the depot. A preliminary step consists of forming a feature representation of all edges (potential paths) between all pairs of vertices in the graph, denoted as $E'$:
\begin{equation}
\label{eq4.5}
\begin{aligned}
&E = [h^{(L)}_{\text{dep}}; h^{(L)}_{\text{cus},1};\ldots; h^{(L)}_{\text{cus},N}] \quad \phantom{al} \in \mathbb{R}^{N' \times d_m} \\
&E' = [[E_{i:.} ; E_{j:.}] \phantom{l} | \phantom{l} i,j=1:N'] \quad \in \mathbb{R}^{(N'\cdot N') \times 2\cdot d_m}
\end{aligned}
\end{equation}
The final construction procedure combines the edge ($E'$) and fleet ($V=h^{(L)}_{\text{veh}}$) representations where
each vehicle representation $V_k=h^{(L)}_{\text{veh},k},\ k \in M$ enters the combination process twice:
\begin{enumerate}
\item In the linear transformation
with the edges $E'$, where $W_o$ and bias $b_o$ are the learned weights and $V'_k$ is an expanded version of $V_k$ to match the dimensionality of $E'$:
\begin{equation}
\label{eq4.6}
A_k = \text{ReLU}(W_o[E';V'_k] + b_o)
\end{equation}
\item In a scaled dot-product which returns compatibility scores for each path potentially travelled by vehicle $k$ to emphasize a direct interaction between vehicles and convolved edges:
\begin{equation}
\label{eq4.7}
\hat{\mathbf{Y}}_k = \text{softmax}(\frac{A_k^TV'_k}{\sqrt{d_m}})
\end{equation}
\end{enumerate}
The softmax in Equation \ref{eq4.7} transforms the compatibility scores of vehicles and edges into probabilities $\hat{\mathbf{Y}} \in \mathbb{R}^{M \times N' \times N'}$.
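One plausible reading of Equations \ref{eq4.5}--\ref{eq4.7} is sketched below; in particular, the row-wise pairing of $A_k$ with $V'_k$ in the scaled dot product and the normalization over all edges are our assumptions rather than details given in the text:
\begin{verbatim}
import torch
import torch.nn.functional as F

def decode(E, V, W_o, b_o, d_m):
    """Sketch of Eqs. 4.5-4.7. E: (N', d_m) node embeddings,
    V: (M, d_m) vehicle embeddings, W_o: (3*d_m, d_m), b_o: (d_m,)."""
    Np = E.size(0)
    # All ordered node pairs -> edge features E' of shape (N'*N', 2*d_m)
    Ei = E.unsqueeze(1).expand(Np, Np, d_m)
    Ej = E.unsqueeze(0).expand(Np, Np, d_m)
    E_prime = torch.cat([Ei, Ej], dim=-1).reshape(Np * Np, 2 * d_m)

    Y_hat = []
    for V_k in V:                                      # per vehicle
        V_exp = V_k.unsqueeze(0).expand(Np * Np, d_m)  # V'_k
        A_k = F.relu(torch.cat([E_prime, V_exp], -1) @ W_o + b_o)
        score = (A_k * V_exp).sum(-1) / d_m ** 0.5     # row-wise dot product
        Y_hat.append(F.softmax(score, dim=0).reshape(Np, Np))
    return torch.stack(Y_hat)                          # (M, N', N')
\end{verbatim}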
\subsection{Solution Decoding}
In order to transform the doubly-stochastic probability tensor $\hat{\mathbf{Y}}$ into discrete tours in terms of a binary assignment, we use a greedy decoding strategy.
In \textit{training}, a pseudo-greedy decoding (see Appendix \ref{PseudoGreedy}) renders potentially infeasible solutions by transforming $\hat{\mathbf{Y}}$ into a binary assignment tensor $\hat{\mathbf{Y}}^*$, where $\hat{\mathbf{Y}}^*=1$ for the \textit{predicted} path of vehicle $k$ between vertices $i$ and $j$ and 0 otherwise.\\
\begin{algorithm}[H]
\begin{algorithmic}[1]
\footnotesize
\REQUIRE $\hat{\mathbf{Y}}\in \mathbb{R}^{M \times N' \times N'}$, $\hat{\mathbf{Y}}^* \in \{0,1\}^{M \times N'\times N'}$, $q \in \mathbb{R}^{N'}$, $Q' \in \mathbb{R}^{M}$, $U$,$Q \in \mathbb{R}$
\ENSURE $\text{Routes } \hat{\mathbf{Y}}^* \in \{0,1\}^{M \times N'\times N'} $
\FOR{$ i \in U$}
\IF{$Q'- q_i < 0_{M}$}
\STATE $M \leftarrow M+1$ \Comment*[r]{Add new tour if ``guarantee solution'' is set}
\STATE $ \hat{\mathbf{Y}}^*_{M,.,.} \leftarrow 0$ \Comment*[r]{Update binary solution tensor}
\STATE $ Q'_M \leftarrow Q$\Comment*[r]{Update capacity tensor}
\ENDIF
\STATE $v \leftarrow \argmax(Q'- q_i)$ \Comment*[r]{Vehicle with max capacity left}
\STATE $V_a \leftarrow \textrm{argselect}(\hat{\mathbf{Y}}^*_v > 0)$ \Comment*[r]{Select all possible vertices for insertion}
\STATE $j_{\text{before}} \leftarrow V_a[\argmax(\hat{\mathbf{Y}}_{v,V_a,i})]$ \Comment*[r]{Assign incoming edge of inserted customer}
\STATE $j_{\text{after}} \leftarrow \argmax(\hat{\mathbf{Y}}^*_{v,j_{\text{before}},:})$ \Comment*[r]{Assign outgoing edge of inserted customer}
\STATE $\hat{\mathbf{Y}}^*_{v,j_{\text{before}},:} \leftarrow 0.0$ \Comment*[r]{Update solution tensor}
\STATE $\hat{\mathbf{Y}}^*_{v,j_{\text{before}},i} \leftarrow 1.0$
\STATE $\hat{\mathbf{Y}}^*_{v,i,j_{\text{after}}} \leftarrow 1.0$
\STATE $Q'_v \leftarrow Q'_v-q_i$ \Comment*[r]{Update $v$'s capacity after insertion}
\ENDFOR
\RETURN $\hat{\mathbf{Y}}^*$
\end{algorithmic}
\caption{Repair Greedy Solution. $q$ is the customers' demand vector, $Q'$ is the vector of remaining capacities of all vehicles.}
\label{alg:make_valid}
\end{algorithm}
During \textit{inference}, the pseudo-greedy decoding is substituted by a strictly greedy decoding, and thereby accepts only tour-plans, which do not violate the respective capacity constraint. This is done by tracking the remaining capacity of all vehicles and masking nodes that would surpass the capacity.
Any unassigned customers violating capacity constraints in the final state of the algorithm are collected in a list $U$ and passed as input to a repair procedure for $\hat{\mathbf{Y}}^*$ (Algorithm \ref{alg:make_valid}).
For each unassigned customer in list $U$,
Algorithm \ref{alg:make_valid} assigns customer $i$ to the vehicle with most remaining capacity and positions the missing customer between $j_{\text{before}}$ and $j_{\text{after}}$ according to the distribution $\hat{\mathbf{Y}}$, before updating the predicted binary solution tensor $\hat{\mathbf{Y}}^*$ and the remaining capacity $Q'$ accordingly.
\subsection{Inference Solution Post-Processing}
Unfortunately the repair operation of Algorithm \ref{alg:make_valid}
leads to a situation where the resulting tours during inference are no longer necessarily local optima. To improve these tours we add a heuristic post-processing procedure performing several iterations of local search via the Google OR-Tools solver (\cite{ortools}).
Given a valid solution for a particular number of vehicles, the solver runs a (potentially time-limited) search that improves the initial solution $\hat{\mathbf{Y}}^*$.
During inference, the method can optionally
relax the bound on the fleet size to guarantee that a solution is found, which is important in cases where instances are particularly difficult to solve due to a
very tight capacity bound. This is done by initially adding an artificial tour to the plan for the remaining missing customers before running the post-processing scheme. Thus, this mechanism enables our method to also solve instances it is not initially trained for, providing maximum flexibility.
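A warm-started local search of this kind can be set up with the OR-Tools routing API roughly as follows (a sketch under the assumption of integer-scaled distances; the parameter choices and function name are illustrative):
\begin{verbatim}
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

def improve(initial_routes, dist, demands, capacity, n_veh, seconds=1):
    """Warm-start a guided local search from a given tour plan.
    dist must contain integer-scaled distances (OR-Tools arc costs)."""
    manager = pywrapcp.RoutingIndexManager(len(dist), n_veh, 0)
    routing = pywrapcp.RoutingModel(manager)

    def dist_cb(i, j):
        return dist[manager.IndexToNode(i)][manager.IndexToNode(j)]
    routing.SetArcCostEvaluatorOfAllVehicles(
        routing.RegisterTransitCallback(dist_cb))

    def demand_cb(i):
        return demands[manager.IndexToNode(i)]
    routing.AddDimensionWithVehicleCapacity(
        routing.RegisterUnaryTransitCallback(demand_cb),
        0, [capacity] * n_veh, True, 'Capacity')

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.local_search_metaheuristic = (
        routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
    params.time_limit.FromSeconds(seconds)

    init = routing.ReadAssignmentFromRoutes(initial_routes, True)
    return routing.SolveFromAssignmentWithParameters(init, params)
\end{verbatim}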
\subsection{Training the Permutation Invariant VRP Model}
In order to train our model to learn the assignment of customer nodes to the available vehicles and to adhere to capacity constraints, we extend the original negative log-likelihood loss by a penalty formulation ($L_{\text{over}}$) and an auxiliary load loss, controlled via weights $\alpha_{\text{over}}$ and $\alpha_{\text{load}}$.
The model's characteristic of permutation invariance requires the training loss to be agnostic to the vehicle order as well as to the travel direction of the tours. Concerning the route direction, we denote $\mathbf{Y}_{k,i,j}^0 = \mathbf{Y}_{k,i,j}$ and $\mathbf{Y}_{k,i,j}^1 = \mathbf{Y}_{k,j,i}$.
Therefore, the loss to be minimized is the minimum (penalized) normalized negative log-likelihood of $\hat{\mathbf{Y}}$ with respect to the sampled target $\mathbf{Y}$:
\begin{equation}
\begin{aligned}
\footnotesize
\label{eq4.8}
\mathcal{L}(\hat{\mathbf{Y}})
:=
\min_{\pi, b\in\{0,1\}^M}
-\sum_{k=1}^M \left(\sum_{i,j=0}^N
\mathbf{Y}_{\pi(k),i,j}^{b_k} \log \hat{\mathbf{Y}}_{k,i,j} \right)
&+ \alpha_{\text{load}} |\text{load}(\hat Y_{k,.,.})-\text{load}(Y_{\pi(k),.,.})| \\
&+ \alpha_{\text{over}} L_{\text{over}}(\text{load}(\hat Y_{k,.,.}))
\end{aligned}
\end{equation}
with the load of a tour $T = \mathbf{Y}_{k,.,.} \in\{0,1\}^{N'\times N'}$
\begin{equation}
\begin{aligned}
\small
\label{eq4.9}
\text{load}(T) := \sum_{i,j=0}^{N'} T_{i,j} q_i
\end{aligned}
\end{equation}
and a shifted quadratic error for overloading
\begin{equation}
\begin{aligned}
\small
\label{eq4.10}
L_{\text{over}}(q) := \begin{cases}
0, & \text{if } q\leq Q
\\ (1 + q - Q)^2, & \text{else}
\end{cases}
\end{aligned}
\end{equation}
The calculation of the loss formulation in Equation \ref{eq4.8} would induce considerable computational overhead; therefore, a less memory-intensive formulation of the same loss is implemented (see Appendix \ref{AppTrainLoss}).
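To make the minimization over $\pi$ and $b$ concrete, the NLL part of Equation \ref{eq4.8} can be evaluated with a Hungarian matching over an $M \times M$ cost matrix whose entries already take the better travel direction. The sketch below (using SciPy, our illustrative choice, and omitting the load terms) is not the memory-efficient formulation of the appendix:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_nll(Y_hat, Y, eps=1e-9):
    """Permutation- and direction-invariant NLL core of Eq. 4.8.
    Y_hat: (M, N', N') predicted probabilities, Y: binary targets."""
    M = Y.shape[0]
    logp = np.log(Y_hat + eps)
    C = np.empty((M, M))
    for k in range(M):          # predicted tour k ...
        for kp in range(M):     # ... against target tour kp
            fwd = -(Y[kp] * logp[k]).sum()
            bwd = -(Y[kp].T * logp[k]).sum()   # reversed direction
            C[k, kp] = min(fwd, bwd)
    rows, cols = linear_sum_assignment(C)      # optimal vehicle matching
    return C[rows, cols].sum()
\end{verbatim}
Since the direction flip $b_k$ can be chosen independently per matched pair, taking the per-pair minimum before the assignment is exact for the NLL term.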
The extensions to the original Permutation Invariant Pooling Network are summarized in Appendix \ref{AppDelineation}.
\section{Experiments}
In the following experiments we want to showcase and validate our main contributions:
\begin{enumerate}
\item Our method works for problems with bounded fleet sizes and outperforms state-of-the-art RL models when accounting for fixed vehicle costs.
\item Our supervised learning model is fast to train and delivers competitive results in comparison to the state of the art, while utilizing considerably fewer vehicles.
\end{enumerate}
We consider three different problem sizes, with 20, 50 and 100 customer vertices respectively, and generate solvable training instances following a rejection-sampled version of the data generating process in \cite{kool2018attention}. The near-optimal target instances are generated with Google's OR-Tools guided local search solver (\cite{ortools}), where we set $M = 4, 7, 11$ and $Q = 30, 40, 50$ for each problem size respectively. In total we generate roughly 100,000 training samples per problem size. The hyperparameter settings are described in Appendix \ref{AppHyerparam}.
For the evaluation, we work with two versions of the test dataset in \cite{kool2018attention}, the originally provided one and a \textit{rejection sampled} version of it, as technically our model is trained to solve only the instances for which the bound $\sum_{i \in V_{\text{c}}} q_i \leq MQ$ holds.
We compare our model to recent RL methods that comprise leading autoregressive (AM (\cite{kool2018attention}), MDAM (\cite{xin2021multi})) as well as a search based (NLNS (\cite{hottung2020neural})) approaches.
The NeuRewriter (\cite{chen2019learning}) and L2I (\cite{Lu2020A}) models are not re-evaluated due to their extensive training and inference times.
Details concerning the re-evaluated (and not re-evaluated) baselines are found in Appendix \ref{sssec:baselines}.
Furthermore, we revisit different published results in a \textit{per-instance} re-evaluation to elevate comparability and demonstrate strengths and weaknesses of the methods. We note that most methods were originally evaluated in a per-batch fashion,
where they can leverage the massive parallelization capabilities of current graphics processing units, although that setting is of less practical relevance.
\subsection{Fixed Vehicle Costs}
To validate the first point of our main contribution, we evaluate the performance of different RL methods and our own model on the metric defined in Equation \ref{eq3.2}. Table \ref{tab:comparative_sampled} shows the results for evaluating the methods on the total tour length ($\text{Cost}$), as well as the measure incorporating fixed costs for each vehicle ($\text{Cost}_{\text{v}}$).
\begin{table}[h]
\label{tab:comparative_fixedcost}
\centering
\small
\caption[Baseline Comparison Results]{
Results for different learning-based models on 10000 \textbf{rejection sampled} VRP Test Instances.
Cost including fixed vehicle costs ($\text{Cost}_{\text{v}}$) and without (Cost) is reported.
Regarding the percentage of solved instances we achieve 99\%, 99.5\% and 100\% coverage for the problem sizes 20, 50 and 100 respectively.}
\label{tab:comparative_sampled}
\addtolength{\tabcolsep}{-0.8pt}
\begin{tabular}{llcrclcrcllr}
\toprule
&\multicolumn{3}{c}{VRP20} && \multicolumn{3}{c}{VRP50} && \multicolumn{3}{c}{VRP100} \\
\cmidrule(l){2-4} \cmidrule(l){6-8} \cmidrule(l){10-12}
Model&Cost&$\text{Cost}_{\text{v}}$&$t/\text{inst}$&&Cost&$\text{Cost}_{\text{v}}$&$t/\text{inst}$&&Cost&$\text{Cost}_{\text{v}}$&$t/\text{inst}$\\
\midrule
NLNS & 6.22& 145.0&1.21s && 10.95 &367.3& 2.01s&&17.18 &913.3 & 3.02s\\
\midrule
AM (greedy)& 6.33 &147.4& 0.04s&& 10.90 &357.0& 0.12s && 16.68 &888.4& 0.22s\\
AM (sampl.)& 6.18 &143.7& 0.05s && 10.54 &352.2& 0.19s && 16.12 &873.4& 0.57s\\
MDAM (greedy)& 6.21 &147.3& 0.46s && 10.80 &370.40 & 1.08s && 16.81 & 961.3& 2.02s \\
MDAM (bm30)& 6.17 &140.3& 5.06s && 10.46 &351.66 &9.00s && 16.18 & 889.1& 16.58s \\
\midrule
\midrule
Ours & \textbf{6.16} & \textbf{135.5}& 0.05s&& 10.76 &\textbf{346.1}& 0.16s && 16.93 & \textbf{859.6} &0.84s \\%2h18m
Ours** & \textbf{6.17} & \textbf{136.0}& 0.05s && 10.77 &\textbf{346.1}& 0.16s && 16.93 & \textbf{859.6} &0.84s
\end{tabular}
\\ \small (**) \text{With the option of a guaranteed solution}
\end{table}
The results in Table \ref{tab:comparative_sampled} show that our method outperforms state-of-the-art RL methods on the VRP with fixed vehicle costs across all problem sizes and even outperforms them on the plain total tour length for the VRP with 20 customers, while delivering the shortest inference times per instance together with the AM model.
Furthermore, our method's results do not differ greatly whether we set the option of guaranteeing a solution (last row) to True or accept that some samples remain unsolved, which speaks to the strength of our model in solving both problems incorporated in the VRP at once, namely the feasible assignment to a particular number of vehicles and the minimization of the total tour length. This is also reflected in Figure \ref{fig:hist}; while the number of solutions requiring more than the a priori fixed fleet size is vanishingly small for the VRP20 and VRP50, it is exactly 0 for the VRP100.
For the VRP of size 50 the RL-based methods only outperform our greedily evaluated method on the routing length minimization when employing sampling or a beam search during inference, while being worse when used greedily.
Concerning the problem with 100 customers, our method is outperformed by the RL approaches on the vanilla total route length cost, but solves the majority of the problems with a significantly smaller fleet size, reflected by the considerably smaller total costs ($\text{Cost}_{\text{v}}$).
This is strongly supported by Figure \ref{fig:hist}, where we see that especially for the VRP with 100 customers, the RL-based methods consistently require more vehicles than needed.
\subsection{Comparative Results on the Benchmark Dataset}
We want to assess the competitiveness of our method also on the literature's benchmark test set provided in \cite{kool2018attention}. Technically, not all of these instances are solvable by our model, as it is not trained to solve problems where $\sum_{i \in V_{\text{c}}} q_i > MQ$. In Table \ref{tab:comparative_orig} we therefore indicate the percentage of instances solved by our plain model in brackets.
Looking at Table \ref{tab:comparative_orig}, our method's performance is generally on par with or slightly better than RL-based construction methods when employing a greedy decoding.
Concerning the per-instance run times, MDAM and the NLNS take orders of magnitude longer than the AM model and our method.
We acknowledge that exploiting parallelism to enhance computational efficiency is a justified methodological choice, but in terms of comparability,
we argue for the pragmatic approach of comparing per-instance runtimes to remedy the problematic comparison of models using different architectures and mini-batch sizes.
For completeness Table \ref{tab:comparative_orig_app} in Appendix \ref{AppBenchFull} illustrates the per-batch evaluation of the baselines. Appendix \ref{sssec:baselines} also discusses the discrepancies in performances of NLNS and MDAM with respect to their published results.
Even though the solution quality and competitiveness of our method decrease when considering larger problem sizes, the per-instance inference time remains among the best.
Nevertheless, we want to emphasize that the baselines in general require more tours than our method potentially leading to higher costs when employed in real-world scenarios.
\begin{table}[h]
\centering
\small
\caption[Baseline Comparison Results]{Results for different ML and OR Models on 10000 VRP Test Instances (\cite{kool2018attention}).}
\label{tab:comparative_orig}
\addtolength{\tabcolsep}{-2.8pt}
\begin{tabular}{llrclrclr}
\toprule
&\multicolumn{2}{c}{VRP20} && \multicolumn{2}{c}{VRP50} && \multicolumn{2}{c}{VRP100} \\
\cmidrule(l){2-3} \cmidrule(l){5-6} \cmidrule(l){8-9}
Model&Cost&$t$&&Cost&$t$&&Cost&$t$\\
\midrule
Gurobi& 6.10 & - && - & - && - & - \\
LKH3& 6.14& 2h && 10.38 &7h && 15.65 &13h\\%12h59m
\midrule
&Cost&$t/\text{inst}$&&Cost&$t/\text{inst}$&&Cost&$t/\text{inst}$\\
\midrule
NLNS (t-limit)& 6.24& 2.41s && 10.96& 2.41s && 17.72 &2.02s\\
\midrule
AM (greedy)& 6.40 & 0.04s&& 10.98 & 0.09s && 16.80 & 0.17s\\
AM (sampl.)& 6.25 & 0.05s && 10.62 & 0.17s && 16.20 & 0.51s\\
MDAM (greedy)& 6.28 & 0.46s && 10.88& 1.10s && 16.89 & 2.00s\\
MDAM (bm30)& 6.15 & 5.06s && 10.54 &9.00s&& 16.26& 16.56s\\
\midrule
\midrule
Ours & 6.18 (\footnotesize{94\%*}) & 0.05s&& 10.81 (\footnotesize{94\%*}) & 0.16s && 16.98 (\footnotesize{98\%*}) &0.82s \\%2h18m
Ours** & 6.24 & 0.05s && 10.87& 0.17s && 17.02 &0.82s \\%2h18m
\bottomrule
\end{tabular}
\\ \small (*) Percentage of solved instances. \small (**) \text{With the option of a guaranteed solution}
\end{table}
\subsection{Training Times}\label{ssec:traintimes}
We showcase the fast training of our model against the NeuRewriter model (\cite{chen2019learning}) as a representative of RL methods. Table \ref{tab:train} shows the results from runs on an A100-SXM4 machine with 40GB of GPU RAM. Training the NeuRewriter for the VRP50 and VRP100 with the proposed batch size of 64 exceeds 40GB of GPU RAM; instead, we used batch sizes of 32 and 16 respectively, and the total runtime is estimated for completing 10 epochs.
Accounting for our dataset generation, we note that with 2 Xeon Gold 6230 CPU cores, it takes roughly 11 days to generate one dataset. That said, we emphasize that the OR-Tools solver runs are straightforwardly parallelizable (embarrassingly parallel), such that doubling the number of cores roughly halves the runtime.
\begin{table}[h]
\addtolength{\tabcolsep}{-3.0pt}
\caption{Training Times per Epoch and Total Train Time of NeuRewriter (\cite{chen2019learning}) and our Model}
\label{sample-table}
\begin{center}
\begin{tabular}{lcccccccc}
\multicolumn{1}{l}{} &\multicolumn{2}{c}{VRP20}& &\multicolumn{2}{c}{VRP50}
&&\multicolumn{2}{c}{VRP100} \\
\cmidrule(l){2-3} \cmidrule(l){5-6}
\cmidrule(l){8-9}
&$t/\text{epoch}$&$t$&&$t/\text{epoch}$&$t$&&$t/\text{epoch}$&$t$\\
\hline \\
NeuRewriter &15h & 6d&& $>$ 31h& $>$ 13d && $>$ 36h & $>$ 15d\\
Ours & 4m&4h && 5.4m& 4.5h&& 1h& 1.5d\\
\end{tabular}
\end{center}
\label{tab:train}
\end{table}
\section{Conclusion and Future Work}
With the proposed supervised approach for solving the VRP, we investigate a new road to tackle routing problems with machine learning that comes with the benefit of being fast to train and
that is able to produce feasible solutions for an a priori fixed fleet size.
We show that our model
is able to jointly learn to solve the VRP assignment problem
and the route length minimization.
Our work focuses on and showcases practical aspects for solving the VRP that are important for decision makers in the planning industry and shows that our method outperforms existing models when accounting for fixed vehicle costs.
In future work, we aim to alleviate the computational shortcomings of the train loss calculation, such that the model's fast training capability can be extended to problems with larger fleet sizes.
\section{Reproducibility Statement}
Considering the current pace at which new approaches and methods are emerging, not least in the research area of learning-based combinatorial optimization, fast and easy access to source code with rigorous documentation needs to be established in order to ensure comparability and verified states of the art.
To this end, we will provide the code for our fast supervised learning method in a publicly available GitHub repository.
During the review process, the code will be made available to the reviewers.
On a broader scope, since the findings in this paper comprise not only our own results but also those of re-evaluated existing approaches in the field, we plan to develop a benchmark suite for learning-based routing methods, for which the re-evaluations and produced code will form the basis.
\section*{Introduction}
Dimensionality sets profound limits on the stage where data takes place, therefore it is often crucial to know the intrinsic dimension of data to carry out meaningful analysis.
Intrinsic dimension provides direct information about data complexity; as such, it has been recognised as a useful measure to describe the dynamics of dynamical systems\citep{Grassberger1983}, to detect anomalies in time series\citep{Houle2018177}, to diagnose patients with various conditions\citep{Dlask2017, Polychronaki2010, Sharma2017, Acharya2013} and to serve simply as a plug-in parameter for signal processing algorithms.
Most of the multivariate datasets lie on a lower dimensional manifold embedded in a potentially very high-dimensional embedding space.
This is because the observed variables are far from independent, and this interdependence introduces redundancies resulting in a lower intrinsic dimension (ID) of data compared with the number of observed variables.
To capture this -- possibly non-linear -- interdependence, nonlinear dimension-estimation techniques can be applied\citep{Sugiyama2013, Romano2016, Benko2018, Krakovska2019}.
Various approaches have been proposed to estimate the ID of data; for a full review of techniques see the work of Campadelli et al.\citep{campadelli2015intrinsic}.
Here we discuss the $k$-Nearest Neighbor ($k$NN) ID estimators, with some recent advancements in focus.
A basic assumption of $k$NN ID estimators is that the fraction of points in a neighborhood is approximately determined by the intrinsic dimensionality ($D$) and the distance ($R$) times a -- locally almost constant -- mostly density-dependent factor ($\eta(x, R)$, Eq.\,\ref{eq:knn}):
\begin{equation}\label{eq:knn}
\frac{k}{n} \approx \eta(x, R)\, R_k^D
\end{equation}
where $k$ is the number of samples in a neighborhood and $n$ is the total number of samples on the manifold.
Assuming a Poisson sampling process on the manifold, Levina and Bickel\citep{Levina2005} derived a Maximum Likelihood estimator, which became a popular method and received several updates\citep{Ghahramani2005, Gupta2010}.
These estimators are prone to underestimating the dimensionality because of finite sample effects and to overestimating it because of curvature.
To address the challenges posed by curvature and finite samples, new estimators have been proposed \citep{Rozza2012, Bassis2015, Ceruti2014, Facco2017}.
To tackle the effect of curvature, a minimal neighborhood size can be taken on normalized neighborhood distances as in the case of $\mathrm{MIND}_{\mathrm{ML}}$\citep{Rozza2012}.
To tackle the underestimation due to finite sample effects, empirical corrections were applied.
A naive empirical correction approach was applied by Camastra and Vinciarelli\citep{Camastra2002}: a perceptron was trained on the estimates computed for randomly sampled hypercubes to learn a correction function.
Motivated by the correction in the previous work, the IDEA method was created\citep{Rozza2012}; and a more principled approach was carried out, where the full distribution of estimates was compared to the distributions computed on test data sets using the Kullback-Leibler divergence ($\mathrm{MIND}_{\mathrm{KL}}$\citep{Rozza2012}, DANCo\citep{Ceruti2014}).
In the case of DANCo, not just the nearest neighbor distances, but the angles are measured and taken into account in the estimation process resulting in more accurate estimates.
In the recent years, further estimators have been proposed, such as the estimator that uses minimal neighborhood information leveraging the empirical distribution of the ratio of the nearest neighbors to fit intrinsic dimension\citep{Facco2017}, or other approaches based on simplex skewness\citep{Johnsson2015} and normalized distances \citep{Chelly2016, Amsaleg2015, Amsaleg2018, Amsaleg2019}.
In the following section, we revisit the manifold adaptive dimension estimator
proposed by Farahmand et al.\citep{Farahmand2007} to measure intrinsic dimensionality of datasets.
From Eq. \ref{eq:knn} we can take the logarithm of both sides:
\begin{equation}
\begin{split}
\ln \left( \frac{k}{n} \right) &\approx \ln{\eta} + D \ln{R_k}\\
\ln \left( \frac{2k}{n} \right) &\approx \ln{\eta} + D \ln{R_{2k}}
\end{split}
\end{equation}
If $\eta$ is slowly varying and $R$ is small, we can take it as a constant.
If we subtract the two equations from each other we get:
\begin{equation}
\ln{\left( 2 \right)} \approx D \ln{\left( \frac{R_{2k}}{R_k} \right)}
\end{equation}
Thus, to compute the local estimates, we fit a line through the log-distances of the $k$th and $2k$th nearest neighbors at a given location.
\begin{equation}
d(x) = \frac{\ln(2)}{\ln \left( R_{2k}/R_{k} \right)}
\end{equation}
To compute a global ID estimate, the authors proposed the mean of the local estimates at the sample points, or a vote for the winning global ID value (the mode) if the estimator is used in integer mode.
They proved that the above global ID estimates are consistent for $k>1$, if $\eta$ is differentiable and the manifold is regular.
They calculated the upper bound for the probability of error for the global estimate, however this bound contains unknown constants\citep{Farahmand2007}.
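In practice the local estimates are straightforward to compute from a $k$-nearest-neighbor query; a minimal sketch in Python (using scikit-learn, our illustrative choice) reads:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fsa_local(X, k=1):
    """Local FSA estimates d(x) = ln 2 / ln(R_2k / R_k) at every
    row of the data matrix X."""
    nbrs = NearestNeighbors(n_neighbors=2 * k + 1).fit(X)
    dist, _ = nbrs.kneighbors(X)      # column 0 is the point itself
    R_k, R_2k = dist[:, k], dist[:, 2 * k]
    return np.log(2.0) / np.log(R_2k / R_k)

X = np.random.rand(2500, 5)           # 5-dimensional hypercube sample
d = fsa_local(X, k=1)
print(np.mean(d), np.median(d))       # mean vs. median aggregation
\end{verbatim}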
In this paper we propose an improved FSA estimator, based on the assumption that the density is locally uniform.
We suggest to use the median of local values for a global intrinsic dimension estimate.
We correct the underestimation effect by an exponential formula and test the new algorithm on benchmark datasets.
We apply the proposed estimator to locate the epileptic focus in field potential measurements.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.9\linewidth]{Figures/Figure1.pdf}
\caption{\textbf{Probability density function of the Farahmand-Szepesvári-Audibert estimator ($d$) for various dimensions ($D$) and neighborhood sizes ($k$).} \textbf{A-F} The subplots show that the theoretical pdfs (continuous lines) fit the histograms ($n=10000$) of local estimates calculated on uniformly sampled hypercubes ($D=2, 3, 5, 8, 10, 12$). The three colors denote the three presented neighborhood sizes: $k=1$ (blue), $k=11$ (orange) and $k=50$ (green).}
\label{fig:szepes_pdf}
\end{figure}
\section*{Results}
\subsection*{Manifold adaptive dimension estimator revisited}
\subsubsection*{The probability density of Farahmand-Szepesvári-Audibert estimator}
We compute the probability density function of the Farahmand-Szepesvári-Audibert (FSA) intrinsic dimension estimator based on normalized distances.
The normalized distance density of the $k$NN can be computed in the context of a $K$-neighborhood, where the normalized distance of $K-1$ points follows a specific form:
\begin{equation}\label{eq:pdf_r}
p(r|k, K-1, D) = \frac{D}{B(k, K-k)} r ^ {D k -1} (1 - r^D) ^ {K-k-1}
\end{equation}
where $r$ is the distance of the $k$th neighbor normalized by the distance of the $K$th neighbor ($r_k = R_k / R_K$, $k<K$) and $B$ is the Euler Beta function (see SI\,\ref{si:r_pdf} for a derivation).
A maximum likelihood estimator based on Eq.\,\ref{eq:pdf_r} leads to the formula of the classical Levina-Bickel estimator\citep{Levina2005}.
For a derivation of this probability density and the maximum likelihood solution see SI\,\ref{si:r_pdf} and SI\,\ref{si:ML} respectively.
We observe that the inverse of the normalized distance appears in the formula of the FSA estimator, so we can express it as a function of $r$:
\begin{equation}
d_k = \frac{\log{2}}{\log{\left( R_{2k} / R_{k} \right)} }
= -\frac{\log{2}}{\log{\left( R_{k} / R_{2k} \right)} }
= -\frac{\log{2}}{\log{r_k}}
\end{equation}
where $r_k = R_k / R_{2k}$.
Thus, we can compute the pdf of the estimated values by plugging $K=2k$ into Eq.\,\ref{eq:pdf_r}, followed by a change of variables:
\begin{equation}\label{eq:szepes_pdf}
q \left( d_k \right) \equiv p \left( r|k, 2k-1, D \right) \left| \frac{\mathrm{d}r}{\mathrm{d}d_k} \right| = \frac{D \log{(2)}}{B(k, k)} \frac{2^{-\frac{Dk}{d_k}} \left(1 - 2^{-\frac{D}{d_k} } \right)^{k-1}}{d_k^2}
\end{equation}
\begin{theorem}
The median of $q(d_k)$ is at $D$.
\end{theorem}
\begin{proof}
We apply the substitution $a=2^{-D / d_k}$ in Eq. \ref{eq:szepes_pdf} (Eq. \ref{eq:beta}):
\begin{align}
p(a) &= q(d_k) \left| \frac{\mathrm{d}d_k}{\mathrm{d}a} \right| =\\
&= \frac{D \log{(2)}}{B(k, k)} \frac{a^k (1-a)^{k-1} \log^2{a}}{D^2 \log^2{2}} \frac{D \log{2}}{a \log^2{a}} \\
&= \frac{1}{B(k, k)} a^{k-1} (1-a)^{k-1} \label{eq:beta}
\end{align}
The pdf in Eq.~\ref{eq:beta} is that of a Beta distribution.
The cumulative distribution function of this density is the regularized incomplete Beta function with $k$ as both of its parameters.
\begin{equation}\label{eq:Pa}
P(a) = I_a(k, k)
\end{equation}
The median of this distribution is at $a=\frac{1}{2}$, thus at $d_k=D$ since:
\begin{eqnarray}
a = 2^{-\frac{D}{d_k}} & = & \frac{1}{2}\\
D & = & d_k \label{eq:szepes_median_general}
\end{eqnarray}
\end{proof}
This means that the median of the FSA estimator is equal to the intrinsic dimension independent of neighborhood size, if the locally uniform point density assumption holds.
The sample median is a robust statistic, therefore we propose to use the sample median of local estimates as a global dimension estimate.
We will call this modified method the median Farahmand-Szepesvári-Audibert (mFSA) estimator.
Let us examine the form for the smallest possible neighborhood size, $k=1$ (Fig.\,\ref{fig:szepes_pdf}).
The pdf of the estimator takes a simpler form (Eq. \ref{eq:szepes_pdf_k1}).
\begin{equation}\label{eq:szepes_pdf_k1}
q(d|k=1, D) = D \log(2) \frac{2^{-\frac{D}{d_1}}}{d_1^2}
\end{equation}
Also, we can calculate the cumulative distribution function analytically (Eq. \ref{eq:szepes_cdf_k1}).
\begin{equation}\label{eq:szepes_cdf_k1}
Q(d|k=1, D) = \int_0^{d_1} q(t|k=1, D) \quad \mathrm{d}t = 2^{-D/d_1}
\end{equation}
The expectation of $d_k$ diverges for $k=1$ -- but not for $k>1$ -- although the median exists.
From Eq. \ref{eq:szepes_cdf_k1}, the median is at $D$ (Eq. \ref{eq:szepes_median}).
\begin{equation}\label{eq:szepes_median}
Q(d_1=D) = 0.5
\end{equation}
\subsubsection*{Sampling distribution of the median}
We can easily compute the pdf of the sample median if an odd sample size is given ($n = 2l + 1$) and if sample points are drawn independently according to Eq. \ref{eq:szepes_pdf}.
Roughly half of the points have to be smaller, half of the points have to be bigger and one point has to be exactly at $m$ (Eq. \ref{eq:median}).
\begin{equation}\label{eq:median}
\begin{split}
p(m | k, D, n) &= \frac{1}{B(l+1, l+1)} \left[ P\left(a=2^{-D/m}\right) \left(1 - P\left(a = 2 ^{-D/m}\right) \right) \right] ^ {l} q(m)
\end{split}
\end{equation}
where $p(a)$ and $P(a)$ are the pdf and cdf of $a$ (Eqs.\,\ref{eq:beta},\,\ref{eq:Pa}) and $q$ is the pdf of the FSA estimator (Fig. \ref{fig:median_pdf}).
\begin{figure}[htb!]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure2.pdf}
\caption{\textbf{The sampling distribution of the median for the FSA estimator ($k=1$) on uniformly sampled hypercubes.}
The figure shows the pdf of median-FSA estimator of points uniformly sampled from a square (\textbf{A}) and from a 5D hypercube (\textbf{B}) for three sample sizes: $n=11$ (blue), $n=101$ (orange) and $n=1001$ (green) respectively. The solid lines represent the theoretical pdf-s of the median and the shaded histograms are the results of simulations ($N=5000$ realizations).
}
\label{fig:median_pdf}
\end{figure}
\subsubsection*{Maximum Likelihood solution for the manifold-adaptive estimator}
If the samples are independent and identically distributed, we can formulate the likelihood function as the product of sample likelihoods (Eq. \ref{eq:FS_L}).
We seek the maximum of the log-likelihood function, but setting the derivative to zero yields a transcendental equation for $k>1$.
Therefore, we compute the location of the maximum numerically (Eq. \ref{eq:FS_ML}).
\begin{eqnarray}
\mathcal{L} &=& \prod_{i=1}^{n} \frac{D \log{(2)}}{B(k, k)} \frac{2 ^{-D k / {d_k}^{(i)}} (1-2^{-D / {d_k}^{(i)}})^{k-1}}{{\left({d_k}^{(i)}\right)}^{2}} \label{eq:FS_L}\\
\log \mathcal{L} &=& n \log \frac{\log{(2)}}{B(k, k)} + n \log D - D k \log(2) \sum \frac{1}{{d_k}^{(i)}} + (k-1) \sum \log{ \left(1-2 ^ {-D / {d_k} ^ {(i)}} \right)} \\&&- 2 \sum \log({d_k}^{(i)})\nonumber\\
\frac{\partial \log \mathcal{L}}{\partial D} &=& \frac{n}{D} - \log(2) k \sum \frac{1}{{d_k}^{(i)}} + \log(2) (k-1) \sum \frac{1}{{d_k}^{(i)} (2^{D/{d_k}^{(i)}} - 1)} \stackrel{!}{=} 0 \label{eq:FS_ML}
\end{eqnarray}
For $k=1$, the ML formula is equal to the Levina-Bickel ($k=1$) and $\mathrm{MIND}_{\mathrm{1ML}}$ formulas.
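Equation \ref{eq:FS_ML} can be solved with any one-dimensional root finder; a sketch using SciPy's \texttt{brentq} follows, where the bracket is an assumption that should be widened if the root lies outside it:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def fsa_mle(d_local, k):
    """Numerical ML solution of Eq. (FS_ML)."""
    d_local = np.asarray(d_local, dtype=float)
    n, ln2 = d_local.size, np.log(2.0)

    def score(D):  # derivative of the log-likelihood w.r.t. D
        s = n / D - ln2 * k * np.sum(1.0 / d_local)
        s += ln2 * (k - 1) * np.sum(
            1.0 / (d_local * (2.0 ** (D / d_local) - 1.0)))
        return s

    return brentq(score, 1e-3, 1e3)   # assumed bracket for the root
\end{verbatim}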
\subsection*{Results on randomly sampled hypercube datasets}
The theoretical probability density function of the local FSA estimator fits the empirical observations (Eq. \ref{eq:szepes_pdf}, Fig. \ref{fig:szepes_pdf}).
We simulated hypercube datasets with fixed sample size ($n=10000$) and of various intrinsic dimensions ($D=2, 3, 5, 8, 10, 12$).
We measured the local FSA estimator at each sample point with $3$ different $k$ parameter values ($k=1, 11, 50$).
We visually confirmed that the theoretical pdf fits the empirical histograms.
The empirical sampling distribution of mFSA follows the theoretical curves for small intrinsic dimension values (Fig. \ref{fig:median_pdf}).
To demonstrate the fit, we drew the density of mFSA on two hypercube datasets, $D=2$ and $D=5$, with the smallest possible neighborhood ($k=1$), for different sample sizes ($n=11$, $101$, $1001$).
At large sample sizes the pdf is approximately Gaussian\citep{Laplace1986}, but for small samples the pdf is non-Gaussian and skewed.
The mFSA estimator underestimates intrinsic dimensionality in high dimensions.
This phenomenon is partially a finite sample effect (Fig.\,\ref{fig:szepes_ddep}), but edge effects make this underestimation even more severe.
We graphically showed that the mFSA estimator asymptotically converges to the real dimension values for hypercube datasets when periodic boundary conditions are applied (Fig. \ref{fig:szepes_convergence}).
We found that the convergence is much slower for hard boundary conditions, where edge effects increase the estimation error.
We could estimate the logarithm of the relative error with an order-$s$ polynomial:
\begin{equation}
\log(E_{rel}) = \log \left( \frac{D}{d} \right) = \sum_{i=1}^s \alpha_i d^i
\end{equation}
The order of the polynomial was different for the two types of boundary conditions.
When we applied a hard boundary, the order was $s=1$; in the periodic case, higher-order polynomials fit the data.
Thus, in the case of a hard boundary, we can formulate the empirical correction:
\begin{equation}
D \approx C(\hat{d}) = \hat{d}\, e^{\alpha_n \hat{d}}
\end{equation}
where $\alpha_n$ is a sample-size-dependent coefficient that we fit with the least squares method.
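Since $\log(D/\hat{d}) = \alpha_n \hat{d}$ is linear in $\hat{d}$ with no intercept, the coefficient has a closed-form least squares solution; a sketch of the calibration (function names are illustrative):
\begin{verbatim}
import numpy as np

def fit_alpha(d_est, D_true):
    """One-parameter least squares fit of alpha_n from calibration
    pairs (mFSA estimate, true dimension) on sampled hypercubes."""
    d = np.asarray(d_est, dtype=float)
    y = np.log(np.asarray(D_true, dtype=float) / d)
    return np.sum(d * y) / np.sum(d * d)

def cmfsa(d, alpha_n):
    """Corrected mFSA estimate (hard-boundary case)."""
    return d * np.exp(alpha_n * d)
\end{verbatim}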
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure3.pdf}
\caption{\textbf{Intrinsic dimension dependence of the median-FSA estimator for uniformly sampled unit hypercubes with various sample sizes ($k=1$).} Subplots \textbf{A-F} show the mean of median-FSA estimator (thick line) values from $N=100$ realizations (shading) of uniformly sampled unit hypercubes with periodic boundary. }
\label{fig:szepes_ddep}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure4.pdf}
\caption{\textbf{Sample size dependence of the median-FSA estimator for uniformly sampled unit hypercubes with varied intrinsic dimension value ($k=1$).} Subplots \textbf{A-F} show the mean of median-FSA estimator (thick line) values from $N=100$ realizations (shading).
}
\label{fig:szepes_convergence}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure5.pdf}
\caption{\textbf{Bias-correction of the median-FSA estimator for uniformly sampled unit hypercubes with various sample sizes ($k=1$).} Subplots \textbf{A-F} show the mean of median-FSA estimator (grey line) values from $N=100$ realizations (shading) of uniformly sampled unit hypercubes. The boundary condition is hard, so the edge effect makes the underestimation more severe. The colored lines show the corrected estimates according to $\hat{d}_c = \hat{d} \exp(\alpha_n \hat{d})$.}
\label{fig:szepes_edgecorrect}
\end{figure}
\subsection*{Results on synthetic benchmarks}
We tested the mFSA estimator and its corrected version on synthetic benchmark datasets\citep{Hein2005, campadelli2015intrinsic}.
We simulated $N=100$ instances of $15$ manifolds ($M_i$, $n=2500$) with various intrinsic dimensions (see Table\,1,\,2,\,4 in Campadelli et al.\citep{campadelli2015intrinsic}, \href{http://www.mL.uni-saarland.de/code/IntDim/IntDim.htm}{http://www.mL.uni-saarland.de/code/IntDim/IntDim.htm}).
We estimated the intrinsic dimensionality of each sample and computed the mean, the error rate and Mean Percentage Error (MPE) for the estimators.
We compared the mFSA, cmFSA, the R and Matlab implementations of DANCo, and the Levina-Bickel estimator (Table \ref{tab:synthetic}).
cmFSA and DANCo were evaluated in two modes: a fractal-dimension mode and an integer-dimension mode.
The mFSA and the Levina-Bickel estimator underestimated the intrinsic dimensionality, especially when the data had high dimensionality.
In contrast, the corrected mFSA (cmFSA) estimator found the true intrinsic dimensionality of the datasets; it reached the best overall error rate ($0.277$) and the 2nd best MPE (Fig.\,\ref{fig:error}, Table\,\ref{tab:synthetic}).
In some cases, it slightly over-estimated the dimension of the test datasets.
Interestingly, DANCo showed implementation-dependent performance: the Matlab algorithm showed the 2nd best error rate ($0.323$) and the best MPE value (Table\,\ref{tab:synthetic}).
The R version overestimated the dimensionality of the datasets in most cases.
\begin{figure}[htb!]
\includegraphics[width=\textwidth]{Figures/Figure6.pdf}
\caption{\textbf{Performance-comparison between cmFSA and DANCo on synthetic benchmark datasets.}
\textbf{A} Dataset-wise Mean Percentage Error (MPE) on benchmark data. cmFSA (blue) shows smaller MPE in $4$ cases and bigger MPE in $4$ cases compared with DANCo (Matlab).
\textbf{B} Dataset-wise error rate for cmFSA and DANCo. cmFSA shows smaller error rates in $5$ cases and bigger error rates in $2$ cases compared with DANCo.
}
\label{fig:error}
\end{figure}
\begin{table}[htb!]
\centering
\caption{\newline\textbf{Dimension estimates on synthetic benchmark datasets.}
\newline
The table shows true dimension values and the mean estimates from $N=100$ realizations for the median Farahmand-Szepesvári-Audibert, Maximum Likelihood, corrected median Farahmand-Szepesvári-Audibert and DANCo estimators.
The MPE values can be seen in the bottom line; the Matlab version of the DANCo estimator produced the smallest error, followed by the cmFSA estimator.}
\include{Figures/Table1}
\label{tab:synthetic}
\end{table}
\subsection*{Analysing epileptic seizures}
To show how mFSA works on real-world noisy data, we applied it to human neural recordings of epileptic seizures.
We acquired field potential measurements from a patient with drug-resistant epilepsy using 2 electrode grids and 3 electrode strips.
We analyzed the neural recordings during interictal periods and during epileptic activity to map possible seizure onset zones (see Methods).
We found several characteristic differences in the dimension patterns between the seizure and the interictal (control) conditions.
In interictal periods (Fig.\,\ref{fig:memo_dims}\,A), we found the lowest average dimension value at the FbB2 position on the froto-basal grid.
Also, we observed a diagonal gradient of intrinsic dimensions on the cortical grid (Gr).
In contrast, we observed the lowest dimension values at the hippocampal electrode strip (JT), and the gradient on the cortical grid disappeared during seizures (Fig.\,\ref{fig:memo_dims}\,B).
Curiously, the intrinsic dimensionality became higher at fronto-basal recording sites during seizures (Fig.\,\ref{fig:memo_dims}\,C).
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Figure7.pdf}
\caption{\textbf{mFSA Dimension estimates on intracranial Brain-LFP measurements during interictal activity and epileptic seizures.}
The figure shows the dimension estimates on an intracranial cortical grid (Gr A-F), a smaller Frontobasal grid (Fb A, B) and 3 electrode strips with hippocampal and temporal localization (JIH, BIH, JT). The areas with lower-dimensional dynamics are marked by stronger colors.
\textbf{A} Average of mFSA dimension values from interictal LFP activity (N=16, k=10-20).
\textbf{B} Average of mFSA dimension values from seizure LFP activity (N=18, k=10-20).
\textbf{C} Difference of dimension values.
Stronger red color marks areas where the dynamics during seizure was lower-dimensional than its interictal counterpart.
Conversely, stronger blue indicates electrodes where the during-seizure dynamics was higher-dimensional than the interictal dynamics.
}
\label{fig:memo_dims}
\end{figure}
\section*{Discussion}
In this work we revisited and improved the manifold adaptive FSA dimension estimator.
We computed the probability density function of the local estimates under the assumption that the local density is uniform.
From this pdf, we derived the maximum likelihood formula for the intrinsic dimensionality.
We proposed to use the median of local estimates as a global measure of intrinsic dimensionality, and demonstrated that this measure is asymptotically unbiased.
We tackled edge effects with a correction formula calibrated on hypercube datasets.
We showed that the coefficients are sample-size dependent.
Camastra and Vinciarelli \citep{Camastra2002} took a similar empirical approach, correcting correlation dimension estimates with a perceptron calibrated on $d$-dimensional datasets.
Our approach is different: we tried to capture the connection between underestimation and intrinsic dimensionality more directly, by showing that the dimension-dependence of the relative error is exponential.
The calibration procedure of DANCo may generalize better, because it compares the full distribution of local estimates rather than just a centrality measure\citep{Ceruti2014}.
Also, we are aware that our simple correction formula overlooks the effect of curvature and noise.
We tried to address the former with the choice of minimal neighborhood size ($k=1$), so that the overestimation effect due to curvature is minimal.
Additionally, the effect of noise on the estimates is yet to be investigated. There are several strategies to alleviate noise effects, such as undersampling the data while keeping the neighborhood fixed\citep{Facco2017}, or using a bigger neighborhood size while keeping the sample size fixed.
Both of these procedures make the effect of curvature more severe, which makes the dimension estimation of noisy curved data a challenging task.
We benchmarked the new mFSA and corrected-mFSA methods against the Levina-Bickel estimator and DANCo on synthetic benchmark datasets and found that cmFSA showed comparable performance to DANCo.
For many datasets, R-DANCo overestimated the intrinsic dimensionality, which is most probably due to a rough default calibration\citep{Johnsson2015}; the matlab implementation showed the best overall results, in agreement with Campadelli et al.\citep{campadelli2015intrinsic}.
This superiority was, however, dataset-specific: cmFSA performed genuinely the best on $4$ and DANCo on $2$ of the $15$ benchmark datasets (with $7$ ties, Table\,\ref{tab:synthetic}).
Also, cmFSA showed a better overall error rate than DANCo.
Combining the performance measured by different metrics, we conclude that cmFSA found the true intrinsic dimension of the data in more cases, but when mistaken, it made relatively bigger errors than DANCo.
The mFSA algorithm revealed diverse changes in the neural dynamics during epileptic seizures.
Under normal conditions, the gradient of dimension values on the cortical grid reflects the hierarchical organization of neocortical information processing\citep{Tajima2015}.
During seizures, this pattern becomes disrupted pointing to the breakdown of normal activation routes.
Some channels showed lower-dimensional dynamics during seizures; this behaviour is far from exceptional: the decrease in dimensionality is due to widespread synchronization events between neural populations\citep{Mormann2000}, a phenomenon reported by various authors \citep{Polychronaki2010, Bullmore1994, Paivinen2005}.
These lower-dimensional areas are possible causal sources\citep{Sugiyama2013, Krakovska2019, Benko2018} and candidates for being the seizure onset zone.
Interestingly, Esteller et al.\ found that the Higuchi fractal dimension values were higher at seizure onset and decreased to lower values as the seizures evolved over time\citep{Esteller1999}.
We found that most areas showed decreased dimensionality, but a few areas also showed increased dimension values as the seizure takes place.
This may suggest that new, so far unused, neural circuits are activated at seizure onset; whether this circuitry contributes to or counteracts the epileptic seizure is unclear.
\section*{Methods}
The simulations and the FSA algorithms were implemented in python3\citep{python3} using the numpy\citep{oliphant2006guide}, scipy\citep{2020SciPy-NMeth} and matplotlib\citep{hunter2007matplotlib} packages, unless otherwise stated.
\subsection*{Simulations}
We generated test datasets by uniform random sampling from the unit $D$-cube to demonstrate that the theoretical derivations fit the data.
We measured distances with a circular boundary condition to avoid edge effects, so that the data is as close to the theoretical assumptions as possible.
To illustrate the probability density function of the FSA estimator, we computed the local FSA intrinsic dimension values (Fig.\,\ref{fig:szepes_pdf}).
We generated $d$-hypercubes ($n=10000$, one realization) with dimensions of $2$, $3$, $5$, $8$, $10$ and $12$, then computed histograms of local FSA estimates for three neighborhood sizes: $k=1$, $11$, $50$ respectively (Fig.\,\ref{fig:szepes_pdf}\,A-F).
We drew the theoretically computed pdf to illustrate the fit.
To show that the theoretically computed sampling distribution of the mFSA fits the hypercube datasets, we varied the sample size ($n=11, 101, 1001$) with $N=5000$ realizations for each.
We computed the mFSA for each realization and plotted the results for $d=2$ (Fig.\,\ref{fig:median_pdf}\,A) and $d=5$ (Fig.\,\ref{fig:median_pdf}\,B).
We investigated the dimensionality and sample-size effects on mFSA estimates (Fig.\,\ref{fig:szepes_ddep}\,A-F).
We simulated the hypercube data in the $2$-$30$ dimension-range, and applied various sample sizes: $n=10, 100, 1000, 2500, 10000$; one hundred realizations each ($N=100$).
We computed the mFSA values with the minimal neighborhood size ($k=1$), and observed finite-sample effects and asymptotic convergence.
The finite-sample effect was pronounced at low sample sizes and high dimensions, but we observed convergence to the true dimension value as the sample size increased.
We repeated the analysis with hard boundary conditions.
We fitted a correction formula on the logarithm of dimension values and estimates with the least squares method (Eq.\,\ref{eq:alpha}), using all $100$ realizations for each sample sizes separately.
\begin{equation}\label{eq:alpha}
\alpha = \frac{\sum_i (\ln{E_i}) \, d^{(i)}}{\sum_i \left(d^{(i)}\right)^2},
\end{equation}
where $E_i = D_i / d^{(i)}$ is the relative error, $D_i$ is the true intrinsic dimension of the data, and $d^{(i)}$ are the corresponding mFSA estimates.
This fitting procedure fit the data well in the intrinsic dimension range 2-30 (Fig.\,\ref{fig:szepes_edgecorrect}\,A-F).
Wider range of intrinsic dimension values (2-80) required more coefficients in the polynomial fit procedure (SFig.\,\ref{fig:calibration}\,A).
Also, we used orthogonal distance regression to fit the mean of $\ln E_i$ over realizations with the same $D_i$ value (SFig.\,\ref{fig:calibration}\,B).
We utilized the mean and standard deviation of the regression error to compute the error rate of the cmFSA estimator, assuming that the error distributions are normal (SFig.\,\ref{fig:calibration}\,C-D).
We applied this calibration procedure ($n=2500$) to compute cmFSA on the following benchmark datasets.
\subsection*{Comparison on synthetic benchmark datasets}
We simulated $N=100$ instances of $15$ manifolds ($M_i$, $n=2500$) with various intrinsic dimensions (see Table\,1,\,2,\,4 in Campadelli et al.\citep{campadelli2015intrinsic}, \href{http://www.mL.uni-saarland.de/code/IntDim/IntDim.htm}{http://www.mL.uni-saarland.de/code/IntDim/IntDim.htm}).
We measured the performance of the mFSA and corrected-mFSA estimators on the benchmark datasets, and compared them with the performance of ML\citep{Levina2005} and DANCo\citep{Ceruti2014} estimators.
We used the matlab\citep{MATLAB2020, Lombardi2020}(\href{https://github.com/cran/intrinsicDimension}{https://github.com/cran/intrinsicDimension}) and an R package\citep{Johnsson2015} implementation of DANCo.
To quantify the performance we adopted the Mean Percentage Error (MPE, Eq.\ref{eq:mpe}) metric\citep{campadelli2015intrinsic}:
\begin{equation}\label{eq:mpe}
\mathrm{MPE} = \frac{100}{M N} \sum_{j=1}^{M} \sum_{i=1}^{N} \frac{|D_j-\hat{d}_{ij}|}{D_j},
\end{equation}
where there are $N$ realizations of each of the $M$ types of manifolds, $D_j$ are the true dimension values, and $\hat{d}_{ij}$ are the dimension estimates.
Also, we used the error rate -- the fraction of cases when the estimator missed the true dimensionality -- as an alternative metric.
We found that the corrected-mFSA estimator produced the second smallest MPE and the smallest error rate on the test datasets (Fig. \ref{fig:error}).
\subsection*{Dimension estimation of interictal and epileptic dynamics}
We used data of intracranial field potentials from two subdural grids positioned -- parietofrontally and frontobasally -- on the brain surface and from three strips located in the left and the right hippocampus and in the right temporal cortex as part of presurgical protocol for a subject with drug resistant epilepsy.
This equipment recorded extracellular field potentials at $88$ neural channels at a sampling rate of 2048 Hz.
Using the neo package\citep{neo14}, we read in selected $10$-second-long chunks of the recordings from interictal periods ($N=16$) and seizures ($N=18$) for further analysis.
We standardised the data series and computed the Current Source Density (CSD) as the second spatial derivative of the recorded potential.
We rescaled the $10$-second-long signal chunks by subtracting the mean and dividing by the standard deviation.
Then, we computed the CSD of the signals by applying the graph Laplacian operator on the time-series.
The Laplacian contains information about the topology of the electrode grids; to encode this topology, we used the von Neumann neighborhood in the adjacency matrix. After the CSD computation, we bandpass-filtered the CSD signals\citep{Gramfort2013} (1-30 Hz, fourth-order Butterworth filter) to improve the signal-to-noise ratio.
We embedded CSD signals and subsampled the embedded time series.
We used an iterative manual procedure to optimize embedding parameters (SFig. \ref{fig:memo_embed}).
Since the fastest oscillation in the signals is $30$ Hz, a fixed delay of one quarter of its period ($2048 / 120 \approx 17$ samples) was used as the embedding delay.
We inspected the average space-time separation plots of the CSD signals to determine a proper subsampling (with an embedding dimension of $D=2$; SFig. \ref{fig:memo_embed}\,A).
We found that the first local maximum of the space-time separation was at around $5$ ms: $9-10$, $10-11$, and $11-12$ samples for the $1\%$, $25\%$, and $50\%$ percentile contour-curves, respectively.
Therefore, we divided the embedded time series into 10 subsets to ensure the required subsampling.
Then, we embedded the CSD signal up to $D=12$ and measured the intrinsic dimensionality for each embedding (SFig. \ref{fig:memo_embed}\,B,\,C).
We found that the intrinsic dimension estimates started to saturate at $D \geq 3$; we therefore chose $D=7$ as a sufficiently high embedding dimension (averaged over $k=10-20$ neighborhood sizes).
We measured the intrinsic dimensionality of the embedded CSD signals using the mFSA method during interictal and epileptic episodes (Fig. \ref{fig:memo_dims}).
We selected the neighborhood size between $k=10$ and $k=20$ and averaged the resulting estimates over the neighborhoods and subsampling realizations.
We investigated the dimension values (Fig. \ref{fig:memo_dims}\,A\,B) and differences (Fig. \ref{fig:memo_dims}\,C) in interictal and in epileptic periods.
We found characteristic changes in the pattern of intrinsic dimensions during seizures, which may help to localize seizure onset zone.
\section*{Acknowledgments}
We are grateful to Ádám Zlatniczki for his comments on the manuscript.
\section*{Author contributions}
Zsigmond Benkő performed the analytical and numerical calculations and wrote the manuscript.
Marcell Stippinger corrected analytical calculations, wrote python code for numerical calculations and corrected the manuscript.
Roberta Rehus carried out exploratory data analysis and proofreading.
Dániel Fabó, Boglárka Hajnal, Loránd Erőss recorded the EEG data, helped with data analysis and contributed to the manuscript text.
Attila Bencze and András Telcs had profound effect on the mFSA derivations and contributed to the manuscript.
Zoltán Somogyvári led the research, helped to interpret the results of data analysis and contributed to the text.
\section*{Funding}
The research reported in this paper was supported by the BME NC TKP2020 grant of NKFIH Hungary, by the BME-Artificial Intelligence FIKP grant of EMMI (BME FIKP-MI/SC), by the National Brain Research Program of Hungary (NAP-B, KTIA\_NAP\_12-2-201), by the National Brain Project II, NRDIO Hungary, PATTERN Group, and by 2017-1.2.1-NKP-2017-00002 of NKFIH.
\bibliographystyle{unsrt}
\subsection{Derivation of the potential and electric field}
\label{subsect-derivation-V-and-E}
In this Appendix, we use superposition and Coulomb's law to obtain an
exact mathematical expression for the potential~$V_{\rm rect}(x,y,z)$
of a uniformly charged rectangle\cite{Hummer1996}. Knowing~$V_{\rm
rect}$, it is then straightforward to use superposition to obtain an
explicit expression for the potential~$V_C$ anywhere in space of a
uniformly charged cubic surface since this surface consists of six
identical uniformly charged squares, see
Eq.~(\ref{eq-V-cubic-surface-exact}) below. From~$V_C$, one can then
obtain an analytical expression for the electric field anywhere in
space via the relation~${\bf E}=-\nabla V_C$. The authors first
learned about some of these results from an interesting blog post by
Michael Trott, who showed how to use Mathematica to calculate and plot
the three-dimensional potential of some charge distributions that have
sharp edges\cite{Trott2012}.
Rather than explain the following details to students, we recommend
that instructors give the students a black-box computer program that
returns as its output the electric field~${\bf E}(x,y,z) =
(E_x,E_y,E_z)$ and the potential~$V(x,y,z)$ at any
point~$(x,y,z)$. This output can then be plotted or studied
numerically.
Consider a rectangle of dimensions $a \times b$ that has a constant
surface charge density $\sigma$. We assume that the rectangle lies in
the $xy$-plane of an $xyz$-Cartesian coordinate system such that the
rectangle's vertices lie at the four points $(x,y,z) = (\pm a/2, \pm
b/2,0)$. The potential~$V_{\rm rect}(x,y,z)$ at some point~$(x,y,z)$
is then given by the following two-dimensional integral:
\begin{equation}
\label{eq-V-integral-for-rectangle}
V_{\rm rect}(x,y,z) = \int_{-a/2}^{a/2} \!
\int_{-b/2}^{b/2}
{ K \sigma \, dx' \, dy' \over
\left[
\left( x - x' \right)^2
+ \left( y - y' \right)^2
+ z^2
\right]^{1/2} } .
\end{equation}
This integral is a statement of superposition, and is the limit of a
discrete sum over the infinitesimal potentials~$dV = K dq / d$
at~$(x,y,z)$ created by infinitesimal squares of area $dA' = dx'
\times dy'$ and of infinitesimal charges~$dq = \sigma \, dA'$ that are
centered on the point~$(x',y',0)$; here~$d$ is the distance from the
source point~$(x',y',0)$ to the field point~$(x,y,z)$. By direct
evaluation or by using a
symbolic integrator such as those available in Mathematica or Maple,
one finds that this integral has the following value:
\begin{align}
\label{eq-V-analytical-expression-for-rectangle}
& V_{\rm rect}(x,y,z) = \frac{K \sigma}{2} \Bigg(
(b-2 y) \log \left( \sqrt{(a-2 x)^2+(b-2 y)^2+4 z^2}+a-2 x \right) \nonumber \\
& \: \qquad \qquad + (a-2 x) \log\left(
\sqrt{(a-2 x)^2+(b-2 y)^2+4 z^2}+b-2 y
\right) \nonumber \\
& \: \qquad \qquad - (b-2 y) \log\left(
\sqrt{(a+2 x)^2+(b-2 y)^2+4 z^2}-a-2 x
\right) \nonumber \\
& \: \qquad \qquad + (a+2 x) \log\left(
\sqrt{(a+2 x)^2+(b-2 y)^2+4z^2}+b-2 y \right) \nonumber \\
& \: \qquad \qquad - (a-2 x) \log\left(
\sqrt{(a-2 x)^2+(b+2 y)^2+4 z^2}-b-2 y \right) \nonumber \\
& \: \qquad \qquad - (a+2 x) \log\left(
\sqrt{(a+2 x)^2+(b+2y)^2+4 z^2}-b-2 y \right) \nonumber \\
& + (b+2 y) \bigg(
\log\left( \sqrt{(a-2 x)^2+(b+2y)^2+4 z^2}+a-2 x \right) \nonumber \\
& \: \qquad \qquad
- \log\left(\sqrt{(a+2 x)^2+(b+2 y)^2+4 z^2}-a-2 x \right) \bigg) \nonumber \\
& -2 z \Bigg[
\tan ^{-1}\left(\frac{(a-2 x) (b-2 y)}{2 z \sqrt{(a-2 x)^2+(b-2 y)^2+4 z^2}}
\right) \nonumber \\
& \qquad + \tan^{-1}\left(
\frac{(a+2 x) (b-2 y)}{2 z \sqrt{(a+2 x)^2+(b-2 y)^2+4
z^2}}\right) \nonumber \\
& \qquad + \tan^{-1}\left(
\frac{(a-2 x) (b+2 y)}{2 z \sqrt{(a-2 x)^2+(b+2 y)^2+4 z^2}}
\right) \nonumber \\
& \qquad + \tan^{-1}\left(
\frac{(a+2 x) (b+2 y)}{2 z \sqrt{(a+2 x)^2+(b+2 y)^2+4
z^2}}\right) \Bigg] \; \Bigg) .
\end{align}
The potential~$V_C(x,y,z)$ at any point~$(x,y,z)$ for a
uniformly charged cubic surface centered at the origin with side
length~$L$ is then obtained by first setting the rectangular
lengths~$a=b=L$ and then by adding shifted versions of
Eq.~(\ref{eq-V-analytical-expression-for-rectangle}) like this
(written here for side length $L=1$, as used in the rest of the paper):
\begin{align}
V_C(x,y,z) &=
\quad V_{\rm rect}(x,y,z+1/2) + V_{\rm rect}(x,y,z-1/2) \nonumber \\
& \quad + V_{\rm rect}(z,x,y+1/2) + V_{\rm rect}(z,x,y-1/2) \nonumber \\
& \quad + V_{\rm rect}(z,y,x+1/2) + V_{\rm rect}(z,y,x-1/2) .
\label{eq-V-cubic-surface-exact}
\end{align}
Using a symbolic manipulation program such as Mathematica or
Maple\cite{Mathematica15,Maple15}, one can then obtain an
explicit symbolic expression for the electric field~${\bf E} =
- \nabla V_C$ by symbolic partial differentiation
of Eq.~(\ref{eq-V-cubic-surface-exact}) with respect to the
variables~$x$, $y$, and~$z$. For example, the following brief
Mathematica code defines a function~\verb|ECubicSurface| that returns
a symbolic expression for the electric field vector~${\bf E}(x,y,z)$
at a given point~$(x,y,z)$:
\begin{verbatim}
ECubicSurface[ x_, y_, z_ ] := Module[
{ x0, y0, z0 } ,
- Grad[ VCubicSurface[ x0, y0, z0 ] , { x0, y0, z0 } ]
/. { x0 -> x, y0 -> y, z0 -> z }
] ;
\end{verbatim}
The line \verb|Grad[ VCubicSurface[ x0, y0, z0 ] , { x0, y0, z0 } ]|
takes the symbolic gradient (partial derivatives) of the expression
$V_C(x_0,y_0,z_0)$ with respect to the
vector~$(x_0,y_0,z_0)$. The following line
\verb|/. { x0 -> x, y0 -> y, z0 -> z }| performs a symbolic
substitution, replacing all symbols $x_0, y_0, z_0$ in the expression
for the electric field with respectively the values~$x,y,z$. The
resulting expression for~${\bf E}$ is several pages long but is
readily evaluated as needed.
\subsection{Logarithmic divergence of the electric field near a
corner}
\label{subect-log-divergence}
Using these exact expressions and Mathematica, we can show that the
magnitude of the electric field diverges logarithmically as one
approaches any edge or vertex of the charged cubic surface. This means
that, if~$P$ is some point and~$E(P)$ is the magnitude of the electric
field at~$P$, then $E \propto |\log(d)|$ in the limit that the
distance~$d$ of~$P$ to an edge or vertex becomes small ($d \ll L$).
For example, the Mathematica code
\begin{verbatim}
Series[
ECubicSurface[ 1/2 - x, 1/2 - x, 1/2 - x ] ,
{ x, 0, 2 } ,
Assumptions -> ( x > 0 )
]
\end{verbatim}
evaluates the Taylor series of ${\bf E}(1/2-x,1/2-x,1/2-x)$ to second
order in the small quantity~$x$ about~$x=0$, which corresponds to the
vertex $(1/2,1/2,1/2)$. Evaluating this code gives the answer
\begin{equation}
\label{eq-log-divergence-in-E}
{\bf E} = \left( -2 \log(x) -2.5 - 0.59 x + 3.4x^2 \right)
\left( \hat{\bf x} + \hat{\bf y} + \hat{\bf z} \right) .
\end{equation}
Eq.~(\ref{eq-log-divergence-in-E}) says that, as one approaches the
vertex~$(1/2,1/2,1/2)$ along the line $(1/2-x,1/2-x,1/2-x)$, with~$x$
becoming small, the electric field diverges as~$-2\log(x)$.
\subsection{Validation of the symbolic expression using a
simple numerical code based on discretization and superposition}
\label{subect-simple-numerical code}
Knowing the exact result
Eq.~(\ref{eq-V-analytical-expression-for-rectangle}) and its gradient
does not automatically imply that one can evaluate it correctly with a
computer program since there are multiple ways that an error can enter
during the process of writing and executing a Mathematica program. To
make sure that our results were correct, we developed an independent
numerical method and then used that method to confirm the correctness
of all the figures in this paper.
We did this by using a simple algorithm whose technical details can be
easily understood by freshman physics students, although it can be
challenging to program the algorithm for general charge
densities~$\rho(x,y,z)$. The key idea is illustrated in
Fig.~\ref{fig:superposition-over-point-charges} and the key steps are
summarized here:
\begin{enumerate}
\item the continuous charge distribution on each face of the cubic
surface is approximated with a square mesh of $N \times N$ identical
point charges, each of charge $\Delta{Q} = Q/N^2$;
\item at some point~$P=(x,y,z)$ of interest, sum the contributions of
each of the $6N^2$ point charges to the electric field and potential
at~$P$, using the elementary expressions $\Delta{\bf E}_i = K
\Delta{Q}_i \hat{\bf r}_i / d_i^2$ and $\Delta{V}_i = K \Delta{Q}_i /
d_i$. Here~$i$ is an integer label that runs over all the point
charges~$\Delta{Q}_i$, $d_i$ is the distance between~$P$ and the
$i$th charge, and $\hat{\bf r}_i$ is the unit vector pointing from the
$i$th point charge to~$P$.
\end{enumerate}
We found that a value of~$N \geq 10$ (at least 100~point charges per
face) gave identical results at the level of the figures when compared
to the exact answer. Some small differences between the exact and
numerical results were found near edges (where the electric field
magnitude diverges) and when some point of interest was close to the
discrete grid of point charges (less than a few multiples of the
spacing between the grid points).
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figures/fig-superposition-of-charges-on-cube.eps}
\caption{The electric field or potential at a given point~$P$ in space
is obtained by approximating each face of the charged cubic surface
with a regular square grid of identical point charges and then by
adding up the electric field or potential due to each point charge
at~$P$.}
\label{fig:superposition-over-point-charges}
\end{figure}
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{Qualitative Insights}
\label{sec:qualitative}
\input{qualitative-insights}
\section{Quantitative Insights}
\label{sec:quantitative}
\input{quantitative-insights}
\section{Conclusions}
\label{sec:conclusions}
\input{conclusions}
\subsection{Only the interior electric field is interesting to consider}
\label{subsection-only-interior-field-interesting}
A first point is that only the electric field interior to the
uniformly charged non-conducting cubic surface is interesting to
explore. This is because the electric field exterior to the cubic
surface is qualitatively similar to the familiar electric field of a
positive point charge at the center of the cubic surface: ${\bf E}$
points away from the center of the cubic surface, and it diminishes in
magnitude with increasing distance from the center of the cube.
One way to see this is to observe that, for any point~$P$ outside the
cubic surface, one can find a plane such that the point~$P$ is on one
side of the plane and the entire cubic surface is on the other side.
(For example, the line segment connecting~$P$ and the center~$O$ of
the cubic surface must intersect a face of the cube. Any plane outside
the cubic surface that is parallel to that face and that intersects
the interior part of the line segment~$PO$ will suffice.) Since the
infinitesimal point charges that make up the cubic surface then lie on
one side of the plane, the total electric field~${\bf E}$ at point~$P$
due to these point charges must point away from the cubic surface
(although generally not radially, i.e., not parallel to the line
segment $PO$).
In contrast, the electric field interior to the cubic surface can be
complicated precisely because, at some point~$P$ inside the cubic
surface, there are contributions to the electric field at~$P$ from all
possible directions, associated with the infinitesimal point charges
making up the surface charge density. However, as we now discuss,
these contributions generally do not cancel to give zero.
\subsection{Electric field vectors are parallel to mirror
planes at points on such planes}
\label{subsection-E-parallel-to-mirror-planes}
A next step in our qualitative analysis is to take advantage of the
symmetries of the uniformly charged cubic surface. These symmetries
provide a way to deduce quickly and without calculation some
information about the direction of the electric field on certain
planes and lines, which are then the locations to consider first when
trying to understand the electric field. In
Section~\ref{sec:quantitative}, we will see that these symmetry planes
and lines are also good places to plot quantitative information about
the electric field.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/fig-mirror-plane.eps}
\caption{A charge distribution~$\rho$ (here a sphere filled with a
spherically symmetric charge density~$\rho(r)$) has a mirror
symmetry plane~$\Pi$ if the plane divides the charge distribution
into two distinct sets such that, for every point~$P_1$ in one set,
there is a corresponding point~$P_1'$ in the other set such
that~$P_1$ and~$P_1'$ are mirror images with respect to~$\Pi$. The
mirror symmetry of~$\rho$ implies that electric field~${\bf E}$
created by~$\rho$ has a mirror symmetry. This means that the
electric field vector~${\bf E}(P_2)$ at some point~$P_2$ and the
electric field vector~${\bf E}(P_2')$ at the mirror image
point~$P_2'$ are themselves mirror images of each other as shown. As
the points~$P_2$ and~$P_2'$ approach the symmetry plane~$\Pi$, the
corresponding electric field vectors become identical and parallel
to~$\Pi$.}
\label{fig:mirror-plane-diagram}
\end{figure}
A charge distribution~$\rho(x,y,z)$ is said to have a mirror symmetry
plane (or mirror plane for short) if there is a plane~$\Pi$ that
divides the charge distribution into two distinct parts that are
mirror reflections of each other with respect to~$\Pi$ (see
Fig.~\ref{fig:mirror-plane-diagram}). This means that for every
point~$P_1$ of the charge distribution that lies on one side of the
plane~$\Pi$, there is a corresponding point~$P_1'$ (the mirror image
of~$P_1$ with respect to plane~$\Pi$) on the other side of the plane
such that the plane is perpendicular to and bisects the line
segment~$P_1P_1'$. The points~$P_1$ and~$P_1'$ are related in the same
way that one finds the image~$P_1'$ of a point~$P_1$ with respect to a
planar mirror via geometric optics\cite{Knight2012}.
Most charge distributions discussed in introductory physics courses
have mirror planes, for example a point charge, a uniformly charged
line segment, a cylindrical surface filled with an azimuthally
symmetric charge density, a sphere filled with a spherically symmetric
charge density, and an infinite uniformly charged plane. As shown in
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}, a uniformly
charged cubic surface has nine distinct mirror planes. Three of these
planes (Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(a))
have the property of passing through the center~$O$ of the cube and of
passing through the midpoints of four parallel edges. The six other
mirror planes
(Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(b)) have
the property of passing through the center of the cube and of passing
through two pairs of diagonally opposite vertices.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/fig-symmetry-planes-symmetry-lines.eps}
\caption{(a) The square~IJKL lies in one of three mirror planes of the
cube that passes through the cube's center~$O$ and through the
midpoints of four parallel edges. (b) The rectangle~ACGE lies in one
of six mirror planes that pass through the cube's center~O and
through two pairs of diagonally opposite corners. (c) The line
segment~$M_1OM_2$ is one of three symmetry line segments that pass
through the cube's center and that connects the midpoints~$M_1$
and~$M_2$ of two opposing faces.
(d) The line segment~$AOG$ is one of four symmetry line segments
that passes through~$O$ and through two diagonally opposite corners,
here~$A$ and~$G$.
}
\label{fig:symmetry-planes-and-lines-of-cubic-surface}
\end{figure}
A mirror plane~$\Pi$ of a charge distribution is useful because, at
any point~$P$ on such a plane, the electric field vector~${\bf E}$
at~$P$ is parallel to~$\Pi$. This follows from the experimentally
observed uniqueness of motions governed by Newton's second law $m {\bf
a} = {\bf F}$ when applied to a point particle of mass~$m$ and
charge~$q$ in an electric field so that the force is given by~${\bf F}
= q {\bf E}$. If a charged point particle is placed at some
point~$P_2$ in some electric field with zero initial velocity (see
Fig.~\ref{fig:mirror-plane-diagram}), the particle is always
observed to move along a unique path, which implies that there can
only be one force vector at~$P_2$ and so only one electric field
vector~${\bf E} = {\bf F}/q$. It then follows that the electric field
at any point on a mirror plane cannot have a component perpendicular
to the plane (i.e., ${\bf E}$ must be parallel to~$\Pi$). Otherwise,
the mirror symmetry of the charge distribution would imply that there
must be two distinct electric field vectors at such a point that have equal and
opposite components perpendicular to the plane, a contradiction.
Further, for any point~$P$ that lies on a line~$L$ that is the
intersection of two non-parallel mirror planes of the same charge
distribution, the electric field at~$P$ must be parallel to the
line~$L$. This follows since the electric field vector at any such
point has to be parallel to two non-parallel planes at the same time,
which is possible only if the vector is parallel to their line of
intersection. For example, in
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(c), the
mirror plane~$y=0$ that contains the green square intersects the
mirror plane~$x=0$ that contains the purple square at the line
corresponding to the $z$~axis, and so the electric field anywhere on
the $z$ axis is parallel to~$\hat{\bf z}$, and so has the form ${\bf
E}(0,0,z) = (0,0,E_z)$. Similarly, in
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(d), we see
that the mirror plane containing the blue rectangle~$ACGE$ and the
mirror plane containing the orange rectangle~$ADGF$ intersect at a
line that contains the diagonal line segment~$AOG$, and so the
electric field anywhere on the line passing through~$AOG$ is parallel
to this line.
If a charge distribution has three mutually non-parallel mirror planes
that intersect at some point~$P$, the electric field must be zero
at~$P$. This follows since the electric field at~$P$ has to be
parallel to three distinct mirror planes, which is only possible if
the vector is the zero vector.
For example, we see in
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(c) that the
center of the cubic surface~$O$ lies at the intersection of the mirror
planes~$x=0$, $y=0$, and~$z=0$ and so the electric field at~$O$ lies
entirely within each of these planes simultaneously which requires all
three of its components to be zero.
A last observation is that, because an electric field varies
continuously except where the charge density changes discontinuously
(for the charged cubic surface, ${\bf E}$ is continuous along any path
that does not cross the surface), the internal electric field of the
charged cubic surface close to a mirror plane must be almost parallel
to that plane. Similarly, the electric field close to the intersection
of two mirror planes must be nearly parallel to their line of
intersection, and the electric field near a point that is common to
three distinct mirror planes must be close to zero in magnitude. So
symmetry and continuity substantially constrain the qualitative
properties of the electric field near regions of symmetry associated
with the cubic surface.
\subsection{The interior electric field near the midpoints of faces
points inwards, towards the cube's center}
\label{subsection-E-points-inwards-near-midpoints-of-faces}
Now that we understand that the electric field at a point on a mirror
plane is parallel to the mirror plane, we begin to obtain a
qualitative understanding of the electric field inside a uniformly
charged cubic surface. We claim that the internal electric field close
to any midpoint of a face must point inwards towards the center~$O$ of
the cubic surface. When this insight is combined with Gauss's law in
the next subsection, we will see that there must be places inside the
cubic surface where the electric field points outwards, from the
cube's center towards the cube's edges and corners.
To simplify some estimates that we make in the following discussion,
we assume that the cubic surface has unit length and unit surface
charge density in SI~units so that $L=1\,\rm m$, and~$\sigma = 1\,\rm
C/m^2$. With these values, the total charge on any face of the cubic
surface is~$Q=\sigma L^2 = 1\,\rm C$ and the total charge of the cubic
surface is\cite{LargeChargeComment}~$Q_{\rm tot}= 6Q = 6\,\rm C$. We
will also occasionally not indicate the physical units when referring
to spatial coordinates, which should be understood to always be in
units of meters.
To understand why the internal electric field must point inwards near
the center of any face, we use our knowledge of the electric field of
a point charge and of the electric field of an infinite uniformly
charged plane\cite{Knight2012}. In
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(c), let us
consider a point~$P$ that lies just above the bottom midpoint~$M_2 =
(0,0,-1/2)$ on the line segment~$M_1OM_2$. If~$P$ is sufficiently
close to~$M_2$, the electric field at~$P$ due to the bottom
face~$EFGH$ will approximately be equal to the electric field of an
infinite plane with uniform charge density~$\sigma$. (See for example
page~762 of Ref.~\onlinecite{Knight2012}, where this is shown
analytically to be the case for a point sufficiently close to and
above the center of a uniformly charged disk.) Since the charge
density~$\sigma$ is positive, we conclude that the electric
field~${\bf E}(P)_{EFGH}$ at~$P$ due to the face~$EFGH$ is given
approximately by
\begin{equation}
\label{eq-E-EFGH}
{\bf E}(P)_{EFGH} \approx {\sigma \over 2 \epsilon_0} \, \hat{\bf z}
\approx 2\pi K\, \hat{\bf z} ,
\end{equation}
where we used the fact that the vacuum permittivity~$\epsilon_0$ and
Coulomb's constant~$K$ are related by~$1/\epsilon_0 = 4 \pi K$. To one
significant digit in multiples of~$K$, the electric field at~$P$ due
to the bottom face has magnitude~$E \approx 6K$ and points in the
positive $z$-direction, away from the charged face and towards~$O$.
Now let us consider the contribution~${\bf E}(P)_{ABCD}$ to the
electric field at~$P$ from the top face~$ABCD$ in
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}(c). Since~$P$
lies on the intersection of two distinct mirror planes of the
face~$ABCD$, the electric field at~$P$ due to~$ABCD$ must be parallel
to the $z$ axis. Further, since~$ABCD$ is covered with a positive
charge density, we conclude that ${\bf E}(P)_{ABCD}$ must point in the
$-\hat{\bf z}$ direction. Now if square~$ABCD$ were extended to be an
infinite uniformly charged plane~$z=1/2$ with density~$\sigma$ that
contains square~$ABCD$, then the electric field at~$P$ due to this
plane would be exactly opposite in direction and equal in magnitude to
the electric field due to the bottom face, and the electric field
at~$P$ would be approximately zero. But since~$ABCD$ is a finite
portion of an infinite charged plane, the electric field at~$P$ due to
$ABCD$ must be smaller in magnitude than the electric field
magnitude~$2\pi K$ of an infinite plane with the same charge
density. We conclude that the total electric field at a point~$P$ that
is sufficiently close to~$M_2$ and that is due to the top and bottom
faces of the cube must point upwards, towards the cube's center~$O$.
We can get a quick estimate of the approximate magnitude of ${\bf
E}(P)_{ABCD}$ by approximating the distributed charge on the
face~$ABCD$ with a single point charge with total charge~$Q=1\,\rm C$
at the center~$M_1$ of this face. Coulomb's law then tells us that the
magnitude of the electric field at~$P$ due to~$ABCD$ is
\begin{equation}
\label{eq-E-from-M1-point-charge}
E(P)_{ABCD}
\approx {K Q \over L^2}
= { K \left[L^2 \sigma\right] \over L^2 }
= K \sigma
= K ,
\end{equation}
since we have assumed~$\sigma = 1\,\rm C/m^2$.
This simple estimate is too big for two reasons. (And indeed, the
quantitative calculations of Section~\ref{sec:quantitative} show that
the value of the electric field at~$M_2$ due to the top face is~$0.8K$
to one digit, so that the estimate
Eq.~(\ref{eq-E-from-M1-point-charge}) is too large by about 20\%.)
First, we can think of approximating the face~$ABCD$ with a single
point charge~$Q=1\,\rm C$ at its center as being achieved by
relocating each infinitesimal point charge~$dq$ on that face in turn
to the center of that face. But changing the position of a point
charge~$dq$ to the face's center moves the charge closer to~$P$
because the center of that face is closer to~$P$ than all the other
points of the face. Since smaller distances~$d$ in Coulomb's law ${\bf
E} \propto 1/d^2$ imply bigger electric fields, relocating all the
point charges on a face to its center makes the estimated total
electric field at point~$P$ larger than the actual value. Relocating a
point charge~$dq$ on~$ABCD$ to the face's center also changes the
orientation of the electric field vector at~$P$ due to~$dq$ to be more
parallel to the $z$-axis, which increases the component along the $z$
axis compared to its original component.
We now know that the total field at a point~$P$ near~$M_2$ due to only
the top and bottom faces gives an upwards vertical electric field of
approximate magnitude~$6K - 1K \approx 5K$. However, since a
point~$P$ near~$M_2$ lies below the symmetry plane~$z=0$ that divides
the cubic surface horizontally in half, there are more infinitesimal
charges on each vertical side that lie above~$P$ than below~$P$. This
implies that the net electric field at~$P$ due to the four side faces
must have a net downwards $z$-component that also needs to be
considered when estimating the total electric field vector~${\bf
E}(P)$ at~$P$.
We can again obtain a quick estimate by replacing each vertical side
face with a single point particle of charge~$Q=1\,\rm C$ at its center
as shown in
Fig.~\ref{fig:approximating-side-faces-with-point-charges}(a) and by
adding up the four electric field vectors at~$P$ due to these four
point charges. It is harder now to determine whether this
single-point-charge approximation will cause an overestimate or
underestimate of the exact value~${\bf E}(P)_{\rm side-faces}$ since,
upon relocating an infinitesimal charge~$dq$ on a side face to the
center of that side face, the distance to~$P$ can become larger or
smaller, and the electric field vector at~$P$ due to~$dq$ can become
more or less parallel to the $z$ axis.
In any case, by approximating the four side faces with point
charges~$Q=1\,\rm C$ at their centers, by defining~${\bf X}_P =
(0,0,-1/2)$ to be the position vector of point~$P$ at the bottom
midface and ${\bf X}_{F_1} = (1/2,0,0)$ to be the position vector of
the point charge at the front midface of the cubic surface, by
defining~$d_{PF_1}=\|{\bf X}_P - {\bf X}_{F_1}\| = 1/\sqrt{2}$ to be
the distance between the two vectors, by defining $\hat{\bf r}_{PF_1}
= ({\bf X}_P - {\bf X}_{F_1})/d_{PF_1} = (-1/\sqrt{2},0,-1/\sqrt{2})$
to be the unit vector that points from the front midface to~$P$, and
finally by observing that, by symmetry, the final $z$-component of the
electric field at~$P$ due to the four sides is four times the $z$
component from any one side, we find that
\begin{equation}
\label{eq-E-from-four-point-charges-on-side-faces}
{\bf E}(P)_{\rm side-faces}
\approx 4 \left(
\hat{\bf z} \cdot { K Q \over d_{PF_1}^2 } \hat{\bf r}_{PF_1} \right)
\approx - 4 \sqrt{2} K \, \hat{\bf z}
\approx - 5.7 K \, \hat{\bf z} .
\end{equation}
So by approximating the bottom face with an infinite plane and the
other five faces with point charges~$Q=1 \,\rm C$ at their
centers, we get a total electric field vector near~$M_2$ with
$z$-component~$(2\pi - 1.0 - 5.7)K \approx -0.4K$, i.e., pointing
downwards. This suggests that the electric field just inside the cubic
surface and near the midpoint of a face points outwards, away from the
origin~$O$.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/fig-replacing-side-faces-with-point-charges.eps}
\caption{The electric field~${\bf E}$ at a point~$P$ at the midpoint
of a charged cubic surface (here the top) due to a side face (here
the front face) can be estimated by replacing the distributed charge
of the front face with one or more point charges. In (a), a single
point charge with~$Q=1\,\rm C$ at the center of the face is
used. Panels~(b) and~(c) show two ways that the charge on the front
panel could be approximated by two equal point charges with
charge~$Q=1/2\,\rm C$, by dividing the face into two equal smaller
rectangles and replacing the charges on each rectangle by a point
charge at its midpoint. }
\label{fig:approximating-side-faces-with-point-charges}
\end{figure}
However, since the estimate~$E_z \approx (2\pi - 1.0 - 5.7)K \approx
-0.4K$ involves cancellations of estimates bigger than~$K$ in
magnitude to give a final answer smaller than~$K$ in magnitude, we
need to worry about whether using a point charge to approximate the
electric field of a charged face is sufficiently accurate. Since the
four side faces contribute the second largest amount to the total
electric field at~$P$, it would be useful to explore whether a more
careful approximation of the electric field from each side face might
affect the sign of~$E_z$. A next step that is easy to compute would be
to approximate the surface charge of each side face with two equal
point charges with values~$q=Q/2=1/2\,\rm C$ as shown in panels~(b)
and~(c) of
Fig.~\ref{fig:approximating-side-faces-with-point-charges}. For panel
Fig.~\ref{fig:approximating-side-faces-with-point-charges}(b), the
total electric field at~$P$ due to the eight point charges of the
sides is given by
\begin{equation}
\label{eq-estimate-two-charges-per-side-face}
E_z(P)_{\rm 8-point-charges}
\approx 4 \, \hat{\bf z} \cdot \left(
{ K (Q/2) \over d_{PF_1}^2 } \hat{\bf r}_{PF_1} +
{ K (Q/2) \over d_{PF_2}^2 } \hat{\bf r}_{PF_2}
\right)
\approx - 4.9 K ,
\end{equation}
to two digits. Here we define~${\bf X}_{F_1}=(1/2,0,1/4)$ and~${\bf
X}_{F_2} = (1/2,0,-1/4)$ to be the position vectors of the two point
charges with charge~$Q/2 = (1/2)\,\rm C$ on the front face in
Fig.~\ref{fig:approximating-side-faces-with-point-charges}(b), and
then, as before, we define~$d_{PF_1} = \|{\bf X}_P - {\bf X}_{F_1}\|$,
$\hat{\bf r}_{PF_1} = ({\bf X}_P - {\bf X}_{F_1})/d_{PF_1}$, and
similarly for $d_{PF_2}$ and~$\hat{\bf r}_{PF_2}$. A calculation
similar to Eq.~(\ref{eq-estimate-two-charges-per-side-face}) for
Fig.~\ref{fig:approximating-side-faces-with-point-charges}(c) gives a
value of $E_z \approx -4.7K$ so both arrangements of two equal charges
per side lead to the same conclusion, that the magnitude of~${\bf
E}(P)$ due to all six faces is about $(2\pi - 1.0 - 4.9)K \approx
0.4K$ pointing in the $+\hat{\bf z}$ direction, which represents a
reversal in direction of the electric field near~$M_2$ compared to a
one-charge-per-face approximation.
One would presume that this new estimate using two charges per face is
more accurate than an estimate based on one charge per face since two
point charges per face should do a better job of getting the
magnitude and direction of the face's electric field at~$M_2$ correct.
Other calculations using more point charges per face confirm
that the electric field near the midpoint of a face indeed points
towards the center. It is possible to calculate the exact total
electric field at a point~$P$ quite close to~$M_2$ (see
Appendix~\ref{appendix:exact-electric-field}) and one finds that~${\bf
E}(P) \approx 1.9K \hat{\bf z}$ to two digits. The exact
contribution to the electric field at~$M_2$ due to the side faces is
therefore $3.6K$ in the $-\hat{\bf z}$ direction, compared to the
estimates of~$4.9K$ or~$4.7K$ for panels~(b) and~(c) of
Fig.~\ref{fig:approximating-side-faces-with-point-charges}. So two
point charges per side face get the sign right but produce an error of
about 40\% in the magnitude of the side-face contribution to the field at~$P$.
We conclude that the internal electric field points towards the center
of the cubic surface for points sufficiently close to the midpoint of
any face. From this, we can deduce what is the qualitative form of the
electric field~${\bf E}$ along the entire line segment~$M_1OM_2$. (By
symmetry, this will also be the qualitative form of the electric field
along the other two line segments connecting midpoints of opposite
faces.) The symmetry arguments of
Sec.~\ref{subsection-E-parallel-to-mirror-planes} imply that~${\bf E}$
at any point on~$M_1OM_2$ is parallel to~$M_1OM_2$ and so~${\bf E}$
must have the form~$(0,0,E_z)$ on this line segment. The
$z$-component~$E_z(z)$ is positive near~$M_2$ and, by symmetry, must
be negative near~$M_1$ and we further know that~$E_z(0)=0$ since the
center of the cube lies at the intersection of at least three distinct
mirror planes and so~${\bf E}$ must vanish there. We thus expect
$E_z(z)$ to be a smoothly varying odd function of~$z$, $E_z(-z) = -
E_z(z)$, that is positive for~$z < 0$ and that decreases through zero
to negative values for~$z > 0$. Further, the magnitude of~$E_z$
along~$M_1OM_2$ must always be less than the magnitude~$2\pi K \approx
6K$ of the electric field produced by an infinite plane with surface
charge density~$\sigma =1\,\rm C/m^2$. The quantitative calculation
Fig.~\ref{fig:E-V-along-symmetry-lines}(a) of
Section~\ref{sec:quantitative} over the range $-1/2 \le z \le 1/2$
shows that this qualitative thinking is correct.
\subsection{Gauss's law implies that there are places where the
interior electric field points outwards, towards edges and towards
vertices}
\label{subsection-application-of-gauss-law}
We now combine the insight that the internal electric field points
inwards near the center of faces with a qualitative application of
Gauss's law to deduce that there have to be locations inside the
charged cubic surface where the electric field points away from the
cube's center~$O$. Consider a Gaussian cubic
surface~$S=A'B'C'D'E'F'G'H'$ that is concentric with and lies just
inside the charged cubic surface $ABCDEFGH$ as shown in
Fig.~\ref{fig:cubic-Gaussian-surface}. Because there is no charge
inside the surface~$S$, Gauss's law gives:
\begin{equation}
\label{eq-Gauss-law-applied-to-cubic-surface}
{ Q_{\rm enclosed} \over \epsilon_0 }
= 0
= \Phi_{\rm total}
= \int_S {\bf E} \cdot d{\bf A}
= 6 \int_\Box {\bf E} \cdot d{\bf A}
= 6 \Phi_\Box ,
\end{equation}
so the flux~$\Phi_\Box$ through any face~$\Box$ of the cubic
surface~$S$ is zero. Here we have used the symmetry of the cube to
deduce that the flux~$\Phi_\Box$ through any face must be the same so
the flux integral~$\Phi_{\rm total}$ over the surface~$S$ is six times
the flux~$\Phi_\Box$ through any one face.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figures/fig-cubic-Gaussian-surface.eps}
\caption{ (a) The cubic surface $S=A'B'C'D'E'F'G'H'$ is a Gaussian
surface that is concentric with the charged cubic surface $ABCDEFGH$
and that lies just within the charged cubic surface. The total
charge enclosed by~$S$ is zero which implies, by the symmetry of the
cubic surface, that the flux through each face such as $A'B'F'E'$ is
zero. }
\label{fig:cubic-Gaussian-surface}
\end{figure}
If we consider a particular face of the surface~$S$, say the front
face~$A'B'F'E'$ that lies at a coordinate~$x_0$ that is just less
than~$x=(1/2)\,\rm m$, the flux integral for that face becomes a
two-dimensional integral of the $x$-component~$E_x(x_0,y,z)$ of the
electric field vector since the face~$A'B'F'E'$ is perpendicular to
the $x$~axis:
\begin{equation}
\label{eq-flux-integral-front-face}
0 = \Phi_\Box
= \Phi_{A'B'F'E'}
= \int_{A'B'F'E'} {\bf E} \cdot d{\bf A}
= \int_{A'B'F'E'} {\bf E} \cdot \left( dA \, \hat{\bf x} \right)
= \int_{A'B'F'E'} E_x \, dy\,dz .
\end{equation}
The last integral $\int E_x \, dy \, dz$ can be thought of as the
limit of a finite sum of values $E_x(x_0,y_i,z_i) \Delta{A}_i$ over
some fine uniform grid of tiny identical square areas~$\Delta A_i$
that all have the same area $\Delta{A}_i = \Delta{y}\,\Delta{z} =
\Delta{A}$. So the flux integral
Eq.~(\ref{eq-flux-integral-front-face}) can be thought of as
approximately equal to~$\Delta{A} \sum_i E_x(x_0,y_i,z_i)$, i.e., it
is proportional to the sum of the $x$-components of the electric field
values over the face~$A'B'F'E'$. Since this sum is zero by
Eq.~(\ref{eq-flux-integral-front-face}), we conclude that the values
of~$E_x$ cannot be everywhere positive or everywhere negative, else
the sum $\sum_i E_x(x_0,y_i,z_i)$ would be respectively positive and
negative, a contradiction.
But we know from
Sec.~\ref{subsection-E-points-inwards-near-midpoints-of-faces} that,
near the center of the face~$ABFE$ of the charged cubic surface, the
interior electric field points inwards towards the origin~$O$. Since
the face~$ABFE$ is perpendicular to the $x$-axis and lies at the
coordinate~$x=1/2\,\rm m$, this specifically implies that $E_x$ must
be negative near the center of~$ABFE$. But the electric field~${\bf
E}$ varies continuously everywhere inside the cubic surface. (Only
along a path that crosses the charged surface would ${\bf E}$ change
discontinuously.) Provided that the face~$A'B'F'E'$ is sufficiently
close to~$ABFE$, continuity of~$E_x$ implies that~$E_x$ must also be
negative over some finite region near the center of the
face~$A'B'F'E'$. But then the only way that
Eq.~(\ref{eq-flux-integral-front-face}) can hold is for~$E_x$ to be
positive on $A'B'F'E'$ in regions away from the middle of the face so
that the negative and positive values of~$E_x$ over the entire face
add to zero. We conclude that there must be points inside the charged
cubic surface where the electric field points away from the center of
the cube. This immediately implies that the interior electric field
must have a complicated structure, pointing inwards in some locations
(near the middle of each face) and outwards in other locations.
A simple way that~$E_x$ could be negative near the middle
of~$A'B'F'E'$ and positive away from the middle consistent with the
symmetry of a cube would be for~$E_x$ to be negative in some
square-like region near the face center and positive elsewhere on the
face. The quantitative calculations of Sec.~\ref{sec:quantitative}
show that this simplest case is what actually occurs, see
Fig.~\ref{fig:flux-through-front-face} below.
If we assume this simplest case, then we can understand qualitatively
that the electric field inside the cubic surface must actually point
towards edges or towards corners for points near edges or near
corners. For example, for interior points close to the corners~$A'$,
$B'$, $F'$, and~$E'$ in Fig.~\ref{fig:cubic-Gaussian-surface}, the same
argument applied to the three Gaussian faces meeting at each corner
shows that~${\bf E}$ has outward components along all three coordinate
directions. This tells us that the electric field
vectors near these points must point outwards towards the
corners. These qualitative insights are confirmed by our quantitative
discussion in Section~\ref{sec:quantitative}.
The qualitative arguments in this section only hold for interior
Gaussian cubic surfaces~$A'B'C'D'E'F'G'H'$ whose faces are
sufficiently close to the faces of the charged cubic surface. How the
electric field varies more deeply in the cube's interior cannot be
worked out qualitatively and one has to turn to quantitative
calculations to understand the bigger picture. The quantitative
calculations show that the arguments of this subsection hold generally
for all interior cubic Gaussian surfaces: everywhere in the interior,
the electric field points inwards along the faces of a Gaussian cube
and outwards near the edges and corners of the Gaussian cube.
\subsection{A qualitative comparison of the non-conducting cubic surface with three
similar problems}
\label{subsect-related-systems-with-zero-field}
Before discussing our quantitative results, we compare our qualitative
conclusions about the nonzero electric field inside a uniformly
charged non-conducting cubic surface with three related problems for
which the electric field inside some interior region of a symmetric
charge distribution is zero. This comparison helps to clarify why the
electric field inside the non-conducting charged cubic surface is
nonzero.
First we ask: why is the electric field nonzero inside the charge
distribution~$\sigma_{\rm cube}$ consisting of a uniformly charged
cubic surface while the electric field is zero everywhere inside the
charge distribution~$\sigma_{\rm sphere}$ consisting of a uniformly
charged spherical surface? At first glance, these seem to be similar
problems since the charge distributions are both highly symmetric.
One answer is that a spherical surface has many more symmetries than a
cubic surface. For example, instead of having nine mirror planes
like~$\sigma_{\rm cube}$ (see
Fig.~\ref{fig:symmetry-planes-and-lines-of-cubic-surface}), the
distribution~$\sigma_{\rm sphere}$ has infinitely many mirror planes
since any plane passing through the center of the spherical surface is
a mirror plane of~$\sigma_{\rm sphere}$. These infinitely many mirror
planes allow one to show\cite{EisFnOfRadius} that the electric
field~${\bf E}$ inside~$\sigma_{\rm sphere}$ is radial, ${\bf E}=E
\hat{\bf r}$, and further that the electric field magnitude~$E$
depends only on radius,~$E=E(r)$. These two facts then lead to the
usual argument given in introductory physics
textbooks\cite{Knight2012}, that the flux integral~$\Phi = \int {\bf
E} \cdot d{\bf A}$ in Gauss's law, applied to a spherical Gaussian
surface of radius~$r$ concentric with and inside the uniformly charged
spherical surface, becomes a simple product~$\Phi = E(r) A$ and so the
vanishing of the flux (since there is no charge inside the spherical
Gaussian surface) implies the vanishing of the electric field. In
contrast, the charge distribution~$\sigma_{\rm cube}$ does not have
enough symmetry for one to conclude that the interior electric field
is radial (which it isn't), so the flux integral cannot be written as
the product of some area times some constant electric field magnitude.
We next consider an empty cubic region that lies within a symmetric
charge distribution that consists of three identical parallel pairs of
uniformly charged infinite non-conducting planes, see
Fig.~\ref{fig:cube-formed-by-three-pairs-charged-planes}. This figure
can be obtained by extending each face of the cubic surface in
Fig.~\ref{fig:charged-cubic-surface} to an infinite plane that
contains that face and that has the same surface charge
density~$\sigma$.
Since the electric field due to an infinite charged plane is uniform
on a given side of the plane and points away from the plane if~$\sigma
> 0$, we conclude that the electric field~${\bf E}$ must be zero
everywhere in the cubic region of
Fig.~\ref{fig:cube-formed-by-three-pairs-charged-planes} since, at any
point~$P$ inside the cube, the electric field vectors from opposing
planes are equal and opposite and so cancel exactly. The nonzero
internal electric field of Fig.~\ref{fig:charged-cubic-surface}
therefore arises from the finite size of the faces of the cubic
surface, which in turn implies that each face produces a nonuniform
electric field that decreases in magnitude with increasing distance
from the face. Since a planar charge distribution in introductory
physics textbooks is often represented visually by a finite rectangle
contained in the plane, it is easy for students to conclude
incorrectly that the electric field inside the cubic surface of
Fig.~\ref{fig:charged-cubic-surface} must be the same as that of
Fig.~\ref{fig:cube-formed-by-three-pairs-charged-planes}.
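To make the cancellation explicit: an infinite plane produces a field
of magnitude $2\pi K\sigma$ that is independent of the distance from
the plane, so at any interior point~$P$ the pair of planes
perpendicular to the $x$-axis contributes
\begin{equation*}
{\bf E}_{\rm pair}(P)=2\pi K\sigma\,(-\hat{\bf x})+2\pi K\sigma\,(+\hat{\bf x})={\bf 0},
\end{equation*}
and similarly for the other two pairs; it is precisely this
distance-independence that a finite face lacks.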
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figures/fig-cube-made-of-four-planes.eps}
\caption{A charge distribution consisting of three identical pairs of
parallel planes, each with a uniform surface charge
density~$\sigma$. The front and rear pair of planes are not shown to
make the diagram easier to understand. The electric field is zero
everywhere inside the central cubic region since the electric fields
of opposing planes cancel exactly at any interior point.}
\label{fig:cube-formed-by-three-pairs-charged-planes}
\end{figure}
The third example that we consider is the electric field inside a
charged {\em conducting} cubic surface that has the same size and same
total charge as the charged non-conducting surface of
Fig.~\ref{fig:charged-cubic-surface}. Introductory physics textbooks
discuss the fact that the electric field everywhere inside a hollow
conductor must be zero if that conductor, charged or not, is in
electrostatic
equilibrium\cite{Tipler2007,Serway2011,Young2011,Knight2012,Cutnell2014}.
So we have another situation that is confusing to students who are
learning about electrostatics: how can one have two identical cubic
surfaces with identical total charges, and yet one surface has a
nonzero interior electric field while the other surface has a zero
interior electric field?
The key insight here is that, on a conducting cubic surface, the
charges are mobile and move about (because of mutual repulsion) until,
in electrostatic equilibrium, the surface charge density~$\sigma$~is
nonuniform in just such a way that the surface and interior are all
equipotential\cite{Knight2012,Griffiths2012}. If there are no other
charges inside the surface, this implies a zero interior electric
field and that the external electric field is everywhere locally
perpendicular to the cubic surface (except at edges and
corners). Introductory physics textbooks mention briefly and
qualitatively\cite{Tipler2007,Knight2012,Young2011} that charged
non-spherical three-dimensional conductors in equilibrium have
nonuniform surface charge densities, and that these densities are
larger where the radius of curvature of the surface is smaller. But
these books and even the commonly used upper-level books on
electricity and magnetism\cite{Griffiths2012,Purcell2013} do not
discuss quantitative examples of nonuniform charge densities.
There is a complementary insight, which is that the surface of a
uniformly charged non-conductor of non-spherical shape cannot be
equipotential. This follows because an equipotential non-spherical
surface requires a nonuniform surface charge density: if a uniform
density did make the surface equipotential, that same density would be
a valid equilibrium distribution for a conductor of this shape,
contradicting the nonuniform equilibrium density discussed above.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/fig-surface-charge-density-vs-surface-potential-conducting-vs-nonconducting.eps}
\caption{(a) Color density plot of the nonuniform surface charge
density~$\sigma$ on the faces of a conducting equipotential cubic
surface with potential~$V= 1\,\rm V$, as calculated using a
commercial computer code\cite{ComsolCalculation}. The density is
approximately constant in the blue regions with value $\sigma
\approx 8 \times 10^{-11}\,\rm C/m^2$ and increases near the edges
(red-brown regions) by a factor of about five. (b) The
non-conducting uniformly charged cubic surface is not equipotential,
as shown by the surface plot of the potential~$V(0.499,y,z)$ of
Eq.~(\ref{eq-V-cubic-surface-exact}) on the $yz$-face of a uniformly
charged non-conducting cubic surface. }
\label{fig:sigma-versus-V-nonconducting-vs-conducting-surfaces}
\end{figure}
Figure~\ref{fig:sigma-versus-V-nonconducting-vs-conducting-surfaces}
clarifies these two insights in the context of a cubic surface.
Fig.~\ref{fig:sigma-versus-V-nonconducting-vs-conducting-surfaces}(a)
shows a numerical approximation to the nonuniform surface charge
density~$\sigma$ for a charged conducting equipotential cubic surface
in electrostatic equilibrium\cite{ComsolCalculation}. (Note that
this~$\sigma$ is what the uniform charge density of
Fig.~\ref{fig:charged-cubic-surface} would evolve into if the
uniformly charged non-conducting cubic surface were to become
conducting.) For this calculation with a spatial resolution of
$120\times 120$ grid points per face, about 10\% of the area of each
face contains about 70\% of the total charge per face, so most of the
surface charge ends up near the edges. As the spatial resolution
becomes finer, the surface charge density on the edges increases,
corresponding to the fact that the surface charge density
mathematically diverges to infinity where sharp edges occur on a
charged conductor\cite{Jackson1998}.
Fig.~\ref{fig:sigma-versus-V-nonconducting-vs-conducting-surfaces}(b)
shows the complementary result of how the potential~$V$ is nonuniform
over one face of the uniformly charged non-conducting cube. (The
potential~$V$ is easily evaluated numerically using the analytical
expressions Eqs.~(\ref{eq-V-analytical-expression-for-rectangle})
and~(\ref{eq-V-cubic-surface-exact}) of
Appendix~\ref{appendix:exact-electric-field}.) The potential~$V$
varies modestly in magnitude by about 30~percent, from about~$7.7\,\rm
V$ at the corners to about $9.8\,\rm V$ at the face's center.
However, it is the gradient of~$V$ that determines the electric field
via ${\bf E} = -\nabla{V}$ and it is not apparent in
Fig.~\ref{fig:sigma-versus-V-nonconducting-vs-conducting-surfaces}(b)
that the local slope (the magnitude of the gradient) becomes vertical,
so that the electric field diverges at the corners of the
non-conducting cubic
surface. This divergence is not easy to understand qualitatively, and
we demonstrate this fact with a short Mathematica\cite{Mathematica15}
calculation in Appendix~\ref{appendix:exact-electric-field}.
We note that the calculation of the surface charge density on a
charged cubic conductor is a difficult calculation that is not
discussed even in graduate textbooks on electricity and magnetism like
Jackson\cite{Jackson1998}. The authors only know of numerical
calculations using specialized computer codes that have been carried
out mainly by electrical engineers who have been interested in the
capacitance of a cube-shaped
capacitor\cite{Reitan1951,Hwang2004,Velickovic2004}. Nevertheless, the
fact that the surface charge density on a conducting cubic surface in
electrostatic equilibrium is not uniform is easily understood by
undergraduates and they should be familiar with at least one
quantitative example like the one discussed in this paper.
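Although a production-quality calculation requires the specialized
codes just mentioned, the basic idea is simple enough to sketch. The
following short Python fragment is our own minimal method-of-moments
construction (with an assumed resolution~$n$; it is not the commercial
code of Ref.~\cite{ComsolCalculation}): each face of a unit cube is
tiled with $n\times n$ patches carrying unknown uniform charges,
$V=1\,\rm V$ is enforced at every patch center (using the exact
self-potential at the center of a uniformly charged square), and the
resulting dense linear system is solved. The total charge should come
out near the value $\approx 0.66$ (in units of $4\pi\epsilon_0 L$)
quoted in the capacitance literature cited above, and the recovered
charge density shows the edge enhancement of panel~(a).
\begin{verbatim}
import numpy as np

n, L = 12, 1.0
a = L / n                                 # patch side
s = (np.arange(n) + 0.5) * a - L / 2      # patch-centre offsets

centres = []
for u in s:
    for v in s:
        centres += [( L/2, u, v), (-L/2, u, v),
                    (u,  L/2, v), (u, -L/2, v),
                    (u, v,  L/2), (u, v, -L/2)]
r = np.array(centres)

# G[i, j]: potential at centre i per unit charge on patch j (K = 1)
d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)               # diagonal handled separately
G = 1.0 / d
# exact self term: potential at the centre of a uniformly charged square
np.fill_diagonal(G, 4.0 * np.log(1.0 + np.sqrt(2.0)) / a)

q = np.linalg.solve(G, np.ones(len(r)))   # enforce V = 1 on every patch
sigma = q / a**2
print("total charge (capacitance in units of 4 pi eps0 L):", q.sum())
print("edge-to-centre density ratio:", sigma.max() / sigma.min())
\end{verbatim}
As the resolution~$n$ is increased, the edge-to-center ratio keeps
growing, in accord with the divergence of the edge charge density
noted above.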
\subsection{Confirmation of the qualitative insights of
Sect.~\ref{sec:qualitative}}
\label{subsec-confirmation-qualitative-insights}
We begin our quantitative discussion by using the exact expression
for~${\bf E}(x,y,z)$ in Appendix~\ref{appendix:exact-electric-field}
to show in Fig.~\ref{fig:flux-through-front-face} how the local
electric flux~$d\Phi(x_0,y,z) = {\bf E}(x_0,y,z)\cdot d{\bf
A}(x_0,y,z) = E_x(x_0,y,z) \, \Delta{y} \, \Delta{z}$ varies over
the front face~$A'B'F'E'$ of an interior cubic Gaussian surface (see
Fig.~\ref{fig:cubic-Gaussian-surface}) whose sides have length
$0.8\,\rm m$. Figure~\ref{fig:flux-through-front-face} shows a surface
plot of~$E_x(x_0,y,z)$ over the region $|y|, |z| \le 0.4\,\rm m$. The
middle bulge above the orange plane where~$E_x=0$ denotes where the
electric field points inwards (where~$E_x$ is negative).
This figure confirms the qualitative conclusions of
Sects.~\ref{subsection-E-points-inwards-near-midpoints-of-faces} and
of~\ref{subsection-application-of-gauss-law} that the interior
electric field near the midpoints of the faces points inwards. But
now we understand quantitatively how the flux is zero over this face:
the component~$E_x$ is negative in a larger central area but with
smaller magnitude~$|E_x|$, while the component~$E_x$ is positive in a
smaller area but with a larger magnitude outside the central
region. The larger number of smaller negative values balance the
smaller number of larger positive values, giving a net flux of zero.
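This balance is straightforward to verify numerically. The minimal
Python sketch below is our own construction (with $K=\sigma=1$ and
$L=1\,\rm m$, used here in place of the exact expressions of
Appendix~\ref{appendix:exact-electric-field}): it builds
$E_x$ by direct superposition of the six charged faces and sums the
midpoint-rule flux over the front face, which should come out near
zero.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

def Ex(x, y, z):
    """x-component of E at (x,y,z); K = sigma = 1, cube side L = 1."""
    total = 0.0
    for xf in (0.5, -0.5):      # the two faces perpendicular to x
        total += dblquad(lambda zp, yp: (x - xf) /
                         ((x - xf)**2 + (y - yp)**2 + (z - zp)**2)**1.5,
                         -0.5, 0.5, -0.5, 0.5)[0]
    for yf in (0.5, -0.5):      # the four faces parallel to x
        total += dblquad(lambda zp, xp: (x - xp) /
                         ((x - xp)**2 + (y - yf)**2 + (z - zp)**2)**1.5,
                         -0.5, 0.5, -0.5, 0.5)[0]
    for zf in (0.5, -0.5):
        total += dblquad(lambda yp, xp: (x - xp) /
                         ((x - xp)**2 + (y - yp)**2 + (z - zf)**2)**1.5,
                         -0.5, 0.5, -0.5, 0.5)[0]
    return total

# midpoint-rule flux through the front face x0 = 0.4 of the Gaussian cube
n, x0, h = 6, 0.4, 0.4
mid = (np.arange(n) + 0.5) / n * (2*h) - h   # midpoints in (-0.4, 0.4)
flux = sum(Ex(x0, yi, zi) for yi in mid for zi in mid) * (2*h/n)**2
print(flux)   # small: inward and outward contributions cancel
\end{verbatim}
Any quadrature of comparable accuracy works here; the point is only
that the negative and positive contributions visible in
Fig.~\ref{fig:flux-through-front-face} cancel.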
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/fig-flux-through-front-face.eps}
\caption{Surface plot of the $x$-component~$E_x(x_0,y,z)$ of the
electric field vector on the front face~$x_0=0.4L$ of a Gaussian
cubic surface centered on the origin. This plot is proportional to
the local flux~$d\Phi = E_x \Delta{A}$. The orange plane indicates
where~$E_x$ has the value zero, so the surface above the plane in
the middle is where the flux is negative (the electric field points
into the Gaussian surface), and the surface below the plane is where
the flux is positive (electric field points out of the Gaussian
surface). The total flux through this front face is zero. }
\label{fig:flux-through-front-face}
\end{figure}
Panels~(a) and~(b) of Figure~\ref{fig:E-V-along-symmetry-lines} next
show how the electric field varies quantitatively along two symmetry
lines of the charged cubic
surface. Fig.~\ref{fig:E-V-along-symmetry-lines}(a) shows how~${\bf E}
= E_x \hat{\bf x}$ varies along the line that passes through the two
opposing midpoints~$(x,y,z)=(1/2,0,0)$ and~$(-1/2,0,0)$. This plot
confirms the earlier qualitative conclusion that the electric field
points inwards near the midpoints of faces and vanishes at the
center~$O$, and shows further that the electric field has a small
magnitude over much of the interior of cubic surface. (Note the flat
approximately zero behavior of~$E_x$ for $|x| \lesssim 0.3$.) The
magnitude of the interior electric field on this symmetry line is
everywhere smaller than the electric field magnitude~$2\pi K
\sigma \approx 6.3K$ of an infinite plane with the same charge density
(the two horizontal tick marks on the $x=0$ vertical axis denote this
magnitude). As one proceeds along this symmetry line from just inside
to just outside the cubic surface (so $|x|$ increases from just less
than~1/2 to just greater than~1/2), the electric field magnitude~$E$
changes discontinuously to a finite value that is larger than the
electric field magnitude of an infinite plane of the same charge
density. That the electric field magnitude is larger just outside the
surface was explained in
Sect.~\ref{subsection-only-interior-field-interesting}.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/fig-E-V-along-x-along-diagonal.eps}
\caption{ (a) Plot of $E_x(x,0,0)/K$ versus~$x$ along the $x$-axis
where~$K$ is Coulomb's constant. The two thin black curves are the
electric field component $E_x/K = \pm 6/x^2$ at location~$(x,0,0)$
for a point charge~$Q=6\,\rm C$ at the origin. The two horizontal
tick marks at $E/K = \pm 2 \pi \approx \pm 6.3$ on the~$x=0$
vertical line denote the magnitude of the electric field for an
infinite plane with the same charge density~$\sigma =1$. (b) Plot of
the electric field component~$E_{\rm diag} / K = [{\bf E}(u,u,u)
\cdot \hat{\bf n}]/K$ parallel to the cube diagonal~${\bf x}(s) =
s \hat{\bf n}$ where~$\hat{\bf n} = (1,1,1)/\sqrt{3}$. (c)
Potential~$V(x,0,0)$ plotted along the $x$-axis. The thin black
lines correspond to the potential~$V = 6/x$ associated with a point
charge~$Q=6\,\rm C$ placed at the center of the cubic surface. (d)
Potential~$V(u,u,u)$ plotted along the same diagonal as in
panel~(b). }
\label{fig:E-V-along-symmetry-lines}
\end{figure}
Figure~\ref{fig:E-V-along-symmetry-lines}(b) shows a similar plot
except along a symmetry line ${\bf x}(u) = (u,u,u)$ that
connects the diagonally opposite corners $(-1/2,-1/2,-1/2)$
and~$(1/2,1/2,1/2)$. From the sign of the electric field
component~$E_{\rm diag} = {\bf E} \cdot \hat{\bf n}$ along this line,
we see that, inside the cubic surface, the electric field everywhere
except at the origin points {\em outwards} towards the corners, in
agreement with our qualitative discussion in
Sect.~\ref{subsection-application-of-gauss-law}. We further see that
the electric field magnitude diverges to infinity at the corners,
which we explain briefly mathematically in
Sect.~\ref{subect-log-divergence} of the appendix. In contrast, the
electric field is finite in magnitude at the midpoints of the cubic
surface (Fig.~\ref{fig:E-V-along-symmetry-lines}(a)).
In panels~(a) and~(b) of Fig.~\ref{fig:E-V-along-symmetry-lines}, we
also show by thin black curves the electric field corresponding to a
point particle at~$O$ with the same total charge~$Q=6\,\rm C$ as the
cubic surface. We expect the external electric field of the cubic
surface to converge to that of the point charge for distances
sufficiently far from the origin, but the quantitative calculation
shows the surprising result that the external field already acts
accurately like that of a point charge for distances that are as close
as one cube side~$L$ from the center.
These quantitative observations are reinforced by
Figs.~\ref{fig:vector-streamline-plot-x=0-plane},
\ref{fig:vector-streamline-plot-diagonal-plane},
and~\ref{fig:vector-plot-E-inside-and-outside-x=0-plane}, which show
how the electric field vectors vary over two-dimensional regions that
lie within mirror planes of the charged cubic
surface. Fig.~\ref{fig:vector-streamline-plot-x=0-plane} plots the
electric vector field over a square~$MNOP$ that lies within the mirror
plane~$x=0$. In agreement with
Sect.~\ref{subsection-application-of-gauss-law} and with
Fig.~\ref{fig:flux-through-front-face}, the vector field plot
Fig.~\ref{fig:vector-streamline-plot-x=0-plane}(b) shows that the
electric field points inwards near the midpoints~$M$ and~$P$, and that
the electric field points outwards towards the edge at~$N$.
Because the interior electric field magnitude increases as one
approaches the cubic surface and diverges at the edges, the vectors
near the center of the cubic surface are not visible when the vectors
near the surface are displayed with moderate lengths. This difficulty
can be avoided by using a so-called streamline plot, which consists of
displaying unit electric field vectors~$\hat{\bf E}$ on some fine
regular mesh of spatial points, and then by drawing continuous curves
(the streamlines) that are locally tangent to these unit vectors, see
Fig.~\ref{fig:vector-streamline-plot-x=0-plane}(c). The streamline
plot shows everywhere in the interior how the electric field points
inwards near midpoints of faces and then gradually changes orientation
to point outwards towards the edge at point~$N$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/fig-vector-plot-x=0.eps}
\caption{(a) Schematic showing the relation of the plotting region,
square~$MNOP$ in the mirror plane~$x=0$, to the charged cubic
surface. (b) Vector plot of the electric field~${\bf E}(0,y,z) =
(0,E_y,E_z)$ over the square region~$MNOP$ defined by~$x=0$ and by
$0 \le y < 1/2$, $0 \le z < 1/2$. The electric field points inwards
near the centers~$M$ and~$P$ of the top and side faces, and points
outwards towards the edge that passes perpendicularly through the
point~$N$. (c) A streamline plot of~${\bf E}$ over the same region
reveals the geometry of~${\bf E}$ everywhere in the interior. This
plot was created using the Mathematica command {\tt StreamPlot}.}
\label{fig:vector-streamline-plot-x=0-plane}
\end{figure}
Finally, Fig.~\ref{fig:vector-plot-E-inside-and-outside-x=0-plane}
shows a vector plot of the electric field over a square region $TUVO$
in the mirror plane~$x=0$ that includes the field external to the
charged cubic surface. As was already shown in
Fig.~\ref{fig:E-V-along-symmetry-lines}(a), the magnitude of the electric
field in the plane~$x=0$ is substantially larger just outside the
cubic surface than anywhere inside; consequently, on a scale chosen so
that the vectors just outside the surface have modest lengths, the
interior electric field vectors are barely visible. The vector plot
confirms the discussion of
Sect.~\ref{subsection-only-interior-field-interesting} in that the
external electric field is qualitatively similar to that of a point
charge at the center~$O$ of the cubic surface: the vectors point
roughly away from the center (though the field is not radial) and
decrease in magnitude as one moves further away from the cubic
surface.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{figures/fig-vector-plots-diagonal.eps}
\caption{ Plots similar to
Fig.~\ref{fig:vector-streamline-plot-x=0-plane} but now over the
rectangle~$QRSO$ that lies in a mirror plane that contains the
diagonal line~${\bf x}(u) = (u,u,u)$. The plots were generated by
plotting the quantity ${\bf E}(u,u,z)$ over the ranges $0 \le u \le
0.7$ and $0 \le z \le 0.46$. The vector field ${\bf E}$ now points
outward from~$O$ to the edge at~$S$ and outward to the corner
at~$R$. }
\label{fig:vector-streamline-plot-diagonal-plane}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/fig-vector-plot-E-in-and-out.eps}
\caption{(a) Relation of the plotting region, square~$TUVO$, to the
charged cubic surface. The gray area is the set of points~$x=0$,
$0.25 \le y < 0.75$, and $0.25 \le z < 0.75$. (b) Vector plot of the
electric field~${\bf E}(0,y,z)$ over the square~$TUVO$. We see
that, in the plane~$x=0$, the external electric field is
substantially stronger than the internal field, that the electric
field is particularly large near an edge, and that the external
electric field is approximately radial.}
\label{fig:vector-plot-E-inside-and-outside-x=0-plane}
\end{figure}
\subsection{The electric potential~$V$ associated with the charged
non-conducting cubic surface}
\label{subsect-properties-of-V}
In this subsection, we briefly discuss some properties of the electric
potential~$V$ associated with the uniformly charged cubic surface,
using the analytical expression for~$V$ given in
Appendix~\ref{appendix:exact-electric-field}. We conclude that for
the electric field inside the charged cubic surface, plotting the
vector electric field is more insightful than plotting equipotential
surfaces of the scalar potential~$V$. This seems to contradict the
discussions of introductory physics
textbooks\cite{Knight2012,Tipler2007,Young2011}, but most examples
considered in those books involve just one or two point charges, or
one or two conductors of simple geometry such that the electric field
can be mainly understood by looking at equipotential contours in a
single plane.
As was the case for the electric field (see
subsection~\ref{subsection-only-interior-field-interesting}), only the
potential~$V$ inside the charged cubic surface requires understanding
since the potential outside the surface is qualitatively similar to
that of a point charge at the center~$O$ of the surface, as shown in
Fig.~\ref{fig:V-contourplot-external-x=0-plane}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/fig-V-ext-contour-plot-x=0-plane.eps}
\caption{Equipotential contours of values~$V=9, 8, 7, 6, 5, 4,
3\,\rm V$ of the potential~$V(0,y,z)$ in the plane~$x=0$ external to
the charged cubic surface of Fig.~\ref{fig:charged-cubic-surface}.
The contours change smoothly from squarish contours just outside the
charged cubic surface ($V=9$) to the circular contours of a point
charge located at the center~$O$ of the cubic surface ($V=3$).}
\label{fig:V-contourplot-external-x=0-plane}
\end{figure}
In panels~(c) and~(d) of Fig.~\ref{fig:E-V-along-symmetry-lines}, we
plot the potential~$V$ along the same two symmetry lines as in
panels~(a) and~(b) of the same figure. Because the electric field is
parallel to these symmetry lines for points on these lines (see
subsection~\ref{subsection-E-parallel-to-mirror-planes}), the negative
of the local slope of~$V$ in these plots directly gives the component
of~${\bf E}$ parallel to the symmetry line. In both cases, the
potential~$V$ asymptotes to the potential~$KQ/d$ of a point charge
at~$O$ (thin black curves in panels~(c) and~(d)) at distances that are
close to the surface of the cube.
From Fig.~\ref{fig:E-V-along-symmetry-lines}(c), we see that the
potential inside the cubic surface is nonuniform and varies over the
modest range $9.6 < V < 9.8$, but this modest variation in value is
misleading since it is the slope of this curve that determines the
magnitude of the electric field. The flat central portion of the curve
near~$x=0$ implies a zero slope and hence a small electric field
component, which is consistent with the flat central region of
Fig.~\ref{fig:E-V-along-symmetry-lines}(a). The discontinuous changes
in~$E_x$ from negative to positive finite values at $x=\pm L/2$, which
are evident in Fig.~\ref{fig:E-V-along-symmetry-lines}(a), are
difficult to infer from the potential plot of
Fig.~\ref{fig:E-V-along-symmetry-lines}(c), so plotting the
electric field component provides more insight here than plotting~$V$.
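These values are easy to reproduce by direct numerical integration;
the short Python sketch below is our own (with $K=\sigma=1$ so that
$Q=6\,\rm C$; the analytical expression
Eq.~(\ref{eq-V-cubic-surface-exact}) is of course faster) and
integrates $1/r$ over the six faces.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

def V(x, y, z):
    """Potential of the uniformly charged cubic surface; K = sigma = 1."""
    val = 0.0
    for xf in (0.5, -0.5):      # faces perpendicular to x
        val += dblquad(lambda zp, yp: 1.0 /
                       np.sqrt((x - xf)**2 + (y - yp)**2 + (z - zp)**2),
                       -0.5, 0.5, -0.5, 0.5)[0]
    for yf in (0.5, -0.5):      # faces perpendicular to y
        val += dblquad(lambda zp, xp: 1.0 /
                       np.sqrt((x - xp)**2 + (y - yf)**2 + (z - zp)**2),
                       -0.5, 0.5, -0.5, 0.5)[0]
    for zf in (0.5, -0.5):      # faces perpendicular to z
        val += dblquad(lambda yp, xp: 1.0 /
                       np.sqrt((x - xp)**2 + (y - yp)**2 + (z - zf)**2),
                       -0.5, 0.5, -0.5, 0.5)[0]
    return val

for x in (0.0, 0.25, 0.45):     # nearly flat interior values
    print(x, V(x, 0.0, 0.0))
print(V(2.0, 0.0, 0.0), 6/2.0)  # exterior value vs point-charge KQ/d
\end{verbatim}
The interior values vary only weakly, consistent with the flat central
portion of panel~(c), while the exterior value is already close to the
point-charge potential $KQ/d$.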
Fig.~\ref{fig:E-V-along-symmetry-lines}(d) shows how the potential
varies along the symmetry line~${\bf x}(u) = (u,u,u)$ that passes
through the diagonally opposite vertices $\pm (1/2,1/2,1/2)$. The
range of~$V$ inside the cubic surface along this line is somewhat
broader than in panel~(c), $7.5 \le V \le 9.2$. The potential in the
middle region has again an approximately zero slope, consistent with
the small values of~${\bf E}$ near the center of the cubic
surface. Barely visible at the coordinate values $u = \pm 1/2$ is the
fact that the slope of~$V$ becomes vertical, corresponding to the
logarithmic divergence to infinity of the electric field magnitude at
a vertex of the cubic surface (see
Appendix~\ref{subect-log-divergence}).
Panels~(a) and~(b) of
Fig.~\ref{fig:V-surfaceplots-on-two-interior-planes} are surface plots
of~$V$ over the same symmetry regions as
Figs.~\ref{fig:vector-streamline-plot-x=0-plane}(a)
and~\ref{fig:vector-streamline-plot-diagonal-plane}(a), respectively.
and streamline plots of~${\bf E}$ clearly provide more insight about
the magnitude and direction of the interior electric field than what
is provided from the corresponding surface plots of~$V$. We observe
that, since the electric field is parallel to these symmetry
rectangles at points on these rectangles, the two-dimensional gradient
$(-\partial_y V, -\partial_z V)$ for panel~(a) or $(-\partial_u V,
-\partial_z V)$ for panel~(b) gives the full gradient of~$V$. This
means that the direction and magnitude of the electric field are
determined from just these surface plots (which would not be true for
a surface plot on a rectangle that does not lie within a mirror
plane).
For example, in
Fig.~\ref{fig:V-surfaceplots-on-two-interior-planes}(a) and in
Fig.~\ref{fig:V-surfaceplots-on-two-interior-planes}(b), the surface
is approximately flat near the origin, which implies that the gradient
and so the electric field~$-\nabla{V}$ has a small magnitude near the
center of the cubic surface. The subtle change in concavity of~$V$ in
Fig.~\ref{fig:V-surfaceplots-on-two-interior-planes}(a) corresponds to
the electric field pointing inwards at the face centers~$M$ and~$P$ and
outwards towards the edge at~$N$. For example, the surface $V(0,y,z)$
has a negative slope from~$y=0$ to $y=1/2$ along the axis~$z=1/2$,
corresponding to the electric field pointing towards the edge at~$N$,
while the same surface~$V(0,y,z)$ has a positive slope from~$z=0$
to~$z=1/2$ along~$y=0$, corresponding to the electric field pointing
inwards from the face's midpoint~$M$. In contrast, in
Fig.~\ref{fig:V-surfaceplots-on-two-interior-planes}(b), $V(u,u,z)$
decreases as~$u$ increases along any constant~$z$, corresponding to
the electric field pointing outwards towards points~$R$ and~$S$ as
shown more clearly in the electric field vector plot
Fig.~\ref{fig:vector-streamline-plot-diagonal-plane}. These surface
plots of the potential simply do not provide as much insight as the
plots of the electric field vectors.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/fig-V-surfaceplots-on-two-interior-planes.eps}
\caption{ (a) Plot of the potential~$V(0,y,z)$ over the square~$MNOP$
of Fig.~\ref{fig:vector-streamline-plot-x=0-plane}(a). The negative
local gradient~$-\nabla{V}$ of this surface, which is difficult to
determine visually from this plot, corresponds to the directions of
the electric field in
Fig.~\ref{fig:vector-streamline-plot-x=0-plane}. (b) Plot of the
potential~$V(u,u,z)$ over the rectangle~$QRSO$ defined by
Fig.~\ref{fig:vector-streamline-plot-diagonal-plane}(a). }
\label{fig:V-surfaceplots-on-two-interior-planes}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/fig-two-equipotential-surfaces.eps}
\caption{ Panels (a) and (b) are equipotential surfaces for
respectively~$V=9.6$ and~$V=9.4$ inside the cube $|x|, |y|, |z| <
0.45$ as computed via the Mathematica function {\tt ContourPlot3D}
using the exact expression Eq.~(\ref{eq-V-cubic-surface-exact}) for
the potential~$V(x,y,z)$. At each point on these equipotential
surfaces, the electric field~${\bf E} = -\nabla V$ is perpendicular
to the surface and points inwards in~(a) and outwards towards edges
and vertices in~(b). }
\label{fig:two-equipotential-surfaces}
\end{figure}
Finally, Fig.~\ref{fig:two-equipotential-surfaces} shows two
three-dimensional equipotential surfaces for the potential
values~$V=9.6$ and~$V=9.4$ inside the charged cubic surface. (These
were calculated using the Mathematica command \verb|ContourPlot3D|.)
For~$V=9.6$ in Fig.~\ref{fig:two-equipotential-surfaces}(a), the
equipotential surface consists of six disconnected roughly spherical
caps near the midpoint of each face. Since the electric field is
perpendicular to an equipotential surface at any point on that
surface\cite{Knight2012}, Fig.~\ref{fig:two-equipotential-surfaces}(a)
tells us that, near the center of each face, the internal electric
field points inward but over a spread of angles, in agreement with
Fig.~\ref{fig:vector-streamline-plot-x=0-plane}(c) near the
point~$M$. For $V=9.4$ in
Fig.~\ref{fig:two-equipotential-surfaces}(b), the equipotential
surface is geometrically interesting and complex, with the local
normals to this surface being consistent with, but less easy to
understand physically than, the vector plots of
Fig.~\ref{fig:vector-streamline-plot-x=0-plane}(c) and
Fig.~\ref{fig:vector-streamline-plot-diagonal-plane}(c). However,
Fig.~\ref{fig:two-equipotential-surfaces}(b) does help one to
appreciate visually the complexity of the electric field inside the
charged cubic surface.
\section{Introduction}
Particle reinforced composite materials, such as polymer composites, concretes and short-fiber reinforced composites, have been widely used in a variety of engineering and industrial products.
The safety and reliability of composite structures are closely related to the mechanical properties of materials.
Macroscopically, the nonlinear mechanical behaviors such as fracture of composite structures depend on the microscopic heterogeneity of materials \cite{shackelford2016introduction,tang2013study}, including the shape, spatial distribution and volume fraction of particles.
Accurate modeling and efficient simulation of fracture in particle reinforced composite structures by considering the microstructure are of great significance for the optimal design of composite structures with increased fracture toughness.
Current techniques either assume a homogeneous model, ignoring the microstructure characteristics, or consider a micro-mechanical model, incurring intractable computational costs, especially for large-scale structures such as concrete dams.
Therefore, it is still challenging to analyze the fracture and failure in particle reinforced composite structures from a numerical point of view.
Different techniques have been proposed to address the discontinuities of fracture. Several methods involve discontinuities of particle reinforced composites within the local continuum
framework where partial differential equations are used as the governing equations, including the finite element method \cite{hillerborg1976analysis,kassam1995finite,ayyar2006microstructure}, extended finite element method \cite{sukumar2001modeling,huynh2009extended,rostami2019xfem,spangenberger2021extended}, cohesive zone model \cite{sun2006modeling,de2013mode,sun2020prediction,quintanas2020phase}, meshfree method \cite{ghosh2013computational,bui2018analysis} and phase-field model \cite{nguyen2015phase,kuhn2016discussion,zhang2020modelling,xia2021mesoscopic}.
However, since the mesh size must be smaller than the size of the particles in composite structures when employing the
above numerical methods, simulating the detailed microstructure at the structural scale is not computationally feasible.
A possible way to balance accuracy and efficiency is the multiscale analysis framework, which has proved effective for linear and continuous deformation problems of composites \cite{yang2020stochastic,shu2020multiscale,yang2020novel,yang2022efficient,yang2022second,yang2017high,ma2018multi,dong2022stochastic}, while still taking the effect of the microstructure on composite structures into account.
As for the fracture analysis, attempts inspired by the multiscale framework have been made to combine a homogenized model with a small area of explicit microstructure geometry representation near the location where cracks initiate and grow \cite{canal2012intraply}.
Nevertheless, the extension to the non-linear regime, and particularly to situations involving strain localization and fracture, is much more complex, and the correctness of the results is not always guaranteed.
All the above methods treat discontinuities within the local continuum framework. However, the difficulty in handling discontinuities mainly arises from their basic
incompatibility with the partial differential equations that serve as the governing equations of the local continuum framework.
Peridynamics, which was proposed by Silling in 2000 \cite{silling2000reformulation}, is a recently developed nonlocal theory to redefine mechanical
problems by replacing the partial differential equations with integral equations, which are mathematically compatible with discontinuities.
In peridynamics, a material point is assumed to interact with surrounding points in a certain neighborhood region.
A peridynamic bond is defined between a pair of points in the neighborhood. A bond breaks irreversibly when it is stretched beyond the critical stretch \cite{silling2005meshfree}, and cracks are represented by
domains that are crossed by the broken bonds. Therefore, peridynamics can effectively simulate crack nucleation and propagation.
Some works have studied the fracture in composites with peridynamics, including particle reinforced composites.
The homogenized peridynamic models, which ignored microstructure characteristics and heterogeneities of composite materials, have been applied to predict the fracture and failure in composites.
For example, Hu et al. \cite{hu2012peridynamic} developed a homogenized peridynamics description of fiber-reinforced composites, and studied the dynamic brittle fracture and damage.
Zhou et al. \cite{zhou2017analyzing} put forth a bond-based peridynamic model to study in-plane dynamic fracture process in orthotropic composites.
Sau et al. \cite{sau2019peridynamic} introduced a numerical scheme to compute the micropolar peridynamic stress, and the stress tensor can be analyzed at the points in failure zones.
Since the influence of microstructure on nonlinear mechanics behaviors of composites cannot be ignored, peridynamic models considering the explicit representation of the microstructure of composites have been proposed.
A mesoscopic peridynamic model was proposed in \cite{li2018meso} for meso-fracture simulation of cracking process in concrete.
Dong et al. \cite{dong2021improved} proposed an improved ordinary state-based peridynamic model and employed it to study mesoscale crack initiation and propagation of concrete under uniaxial tension.
Peng et al. \cite{peng2021application} established a micro-calculation peridynamic model of concrete by the MATLAB-ABAQUS co-simulation method and calculated the mode-I fracture test and mixed-mode fracture test.
In order to reduce the cumbersome microstructural characterization and huge computation while considering the microstructures
and heterogeneities of materials as much as possible, Chen et al. \cite{mehrmashhadi2019stochastically,wu2021stochastically} proposed the intermediately
homogenized peridynamic model, which took into account some microscale information of composites, i.e., the volume fraction of particles, ignoring
the topology of component phases in microstructure, such as the shape and distribution of particles.
An effective peridynamic model considering the multiscale structure of particle reinforced composite materials to simulate fracture and failure behaviors of composite structures with high efficiency is still lacking.
This paper aims to propose a new peridynamics-based statistical multiscale framework.
It sequentially couples the peridynamics at macroscale with the peridynamics at microscale based on statistical homogenization theory to simulate the macroscopic fracture of random particle reinforced composite structures.
In the newly proposed framework (see \autoref{Fig:sec2.2-1}), the heterogeneities of composites are described by representative volume elements (RVEs) (see \autoref{Fig:sec2.1-1}), and the impact of the RVEs, including the shape, spatial distribution and volume fraction of particles, on structural failure is extracted as two types of peridynamic parameters, namely, the statistical critical stretch and the equivalent micromodulus.
The peridynamics-based statistical multiscale framework proposed in this paper has the potential to simulate fracture in random particle reinforced composite structures with sufficient accuracy and less computational cost, especially for large-scale structures.
The remainder of this paper is outlined as follows.
In Section 2, the multiscale representation of randomly distributed composites and the peridynamics-based statistical multiscale framework are described.
Section 3 is devoted to the proposed peridynamics-based statistical multiscale method.
Section 4 describes the algorithm framework developed for the proposed approach, and a flowchart of the numerical algorithm is used to illustrate its implementation.
In Section 5, two and three dimensional numerical examples are given to discuss the feasibility and effectiveness of the proposed method.
Finally, concluding remarks are presented in Section 6.
\section{Multiscale fracture of randomly distributed composites}
\subsection{Multiscale representation of randomly distributed composites}
In this study, we consider composite structures made from a matrix and reinforcing particles. For the three-dimensional (3D) case, all the particles are considered as ellipsoids or as polyhedrons inscribed inside the ellipsoids.
The size of each particle is denoted by the long axis $a$ of corresponding ellipsoid.
For this kind of composite structure, a multiscale representation \cite{han2010statistical}, as shown in \autoref{Fig:sec2.1-1}, is introduced as follows:
1) There exists a constant $\varepsilon$ satisfying $a\ll\varepsilon\ll L$, where $L$ denotes the size of the investigated composite domain $\Omega$ at the macroscale. Thus, the structure made from composite materials can be regarded as a set of RVEs at the microscale with the same size $\varepsilon$. The composite structure is supposed to be stationary random \cite{jikov2012homogenization}, i.e., all the RVEs have the same probability distribution of particles. Then, the composite structure has periodically random distribution of particles, and can be represented by the probability distribution $P$ inside a typical RVE with $\varepsilon$-size.
2) Each ellipsoid can be defined by nine random parameters: the size parameters, i.e., the length $a$, $b$ and $c$ of the ellipsoid in three axes, the location parameters, i.e., the coordinates $(x_{1}^0, x_{2}^0, x_{3}^0)$ of central point, the orientation parameters, i.e., the Euler angles $(\psi_1, \psi_2, \psi_3)$ of the rotations. Suppose that there are $N$ ellipsoids inside a RVE $\varepsilon Y^m$, then we can define a sample $\omega^m$ of particle distribution in a normalized RVE $Y^m$ as $\omega^m=(\bm{I}_1, \bm{I}_2\cdots,\bm{I}_N)\in P$, where $m =1,2,3, \ldots$ denotes the index of samples, and $\bm{I}_j$ denotes a sample of nine random parameters for the $j$-th ellipsoid, generated by their probability density functions.
For the two-dimensional (2D) case, all the particles are considered as ellipses defined by five random parameters, including the length $a$ and $b$ of the two axes, the coordinates $(x_{1}^0, x_{2}^0)$ of the central point, and the orientation parameter $\psi$. A sample of the particle distribution for a 2D RVE can be obtained similarly.
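For concreteness, a sample $\omega^m$ for a 2D RVE can be drawn as in
the short Python sketch below. The uniform ranges used here are
hypothetical placeholders for the actual probability density functions
of $P$, and a practical generator would also reject overlapping or
boundary-crossing ellipses.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_rve_2d(N, a_rng=(0.02, 0.05), ratio_rng=(0.5, 1.0)):
    """One sample omega^m: an (N, 5) array of (a, b, x0, y0, psi)."""
    a   = rng.uniform(*a_rng, N)            # long semi-axis
    b   = a * rng.uniform(*ratio_rng, N)    # short semi-axis
    x0  = rng.uniform(0.0, 1.0, N)          # centre coordinates in Y^m
    y0  = rng.uniform(0.0, 1.0, N)
    psi = rng.uniform(0.0, np.pi, N)        # orientation angle
    return np.column_stack([a, b, x0, y0, psi])

omega_m = sample_rve_2d(N=20)               # one realization of P
\end{verbatim}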
By the above multiscale representation, the investigated structure $\Omega$ is logically
composed of $\varepsilon$-size cells subject to the same probability
distribution model $P$, i.e., $\Omega=\bigcup_{(\omega^m,t\in Z)}\varepsilon(Y^m+t)$.
For the whole structure $\Omega$, define
$\omega=\{\omega^m|\bm x\in \varepsilon Y^m \subset \Omega\}$. Thus the elastic
tensor of the materials can be periodically expressed as $\mathbb{A}(\frac{\bm x}{\varepsilon},\omega\bigr)=\{a_{ijkl}\bigl(\frac{\bm x}{\varepsilon},\omega\bigr)\}$, and for a given sample $\omega^m$, the elastic tensor can be defined as follows:
\begin{equation}\label{eq:coeff}
a_{ijkl}\bigl(\frac{\bm x}{\varepsilon},\omega^m\bigr)=
\left\{
\begin{aligned}
& a^1_{ijkl}, \quad \text{if}\,\,\, \bm x\in \bigcup_{i=1}^{N}e_i, \\
& a^2_{ijkl}, \quad \text{if}\,\,\, \bm x\in \varepsilon Y^m-\bigcup_{i=1}^{N}e_i,
\end{aligned}
\right.
\end{equation}
where $\varepsilon Y^m$ denotes the domain of an RVE belonging to $\Omega$, $e_i$
is the $i$-th particle in $\varepsilon Y^m$, and $a^1_{ijkl}$ and $a^2_{ijkl}$ are the
elastic coefficients of the particles and the matrix, respectively.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.55]{RVE.pdf}
\caption{Composite structures $\Omega$ (left) and the microscopic RVE (right).}\label{Fig:sec2.1-1}
\end{figure}
We introduce a variable $\bm y=\bm x/\varepsilon \in Y^m$ which denotes local coordinates on the normalized RVE $Y^m$, and then the elastic tensor can be expressed as
\begin{equation}\label{eq.coeff1}
\mathbb{A}(\frac{\bm x}{\varepsilon},\omega\bigr) =\mathbb{A}(\bm y,\omega\bigr).
\end{equation}
\subsection{Peridynamics-based statistical multiscale (PSM) methodological framework}
A composite structure with randomly distributed particles macroscopically exhibits nonlinear mechanical behavior and size effects, which originate from its microscopic heterogeneity.
In order to explain the cracking behavior in the composite structure more reasonably, the microstructural characteristics and heterogeneities cannot be ignored. However, this will inevitably lead to a huge amount of computation, especially for the large structures made from composite materials with randomly distributed particles.
Based on the multiscale representation of randomly distributed composites and inspired by the statistical homogenization theory, an efficient PSM framework is proposed to analyze the fracture of random composite structure.
In the newly proposed framework, as shown in \autoref{Fig:sec2.2-1}, the heterogeneities of composites, including the shape, spatial distribution and volume fraction of particles, are characterized within the RVEs, and their impacts on the macroscopic structure failure are extracted as two types of peridynamic parameters, namely, statistical critical stretch $\bar{s}_0$ and equivalent micromodulus $\bar{c}_0$.
Since the fracture of the composite structure occurs at some local RVEs and is related to the loads on those RVEs, micromechanical computation is applied to analyze the fracture in the RVEs to obtain the statistical critical stretch $\bar{s}_0$.
In order to estimate the critical stretch of the macroscopic homogenized material, the fracture of the RVEs of composites is simulated by the bond-based peridynamic (BPD) model, which has advantages in simulating fractures. For the fracture of the RVEs, the mechanical response near the interfaces between particles and matrix is critical since cracks often initiate near the composite interfaces. However, the nonlocal PD bond model fails to accurately characterize the properties of the materials on both sides of the composite interfaces, so an elastic solution consistent with the local classical continuum mechanics (CCM) model cannot be obtained, resulting in inaccurate fracture simulation in the RVEs. An alternative approach is to modify the mechanical parameters of the PD bonds near the composite interfaces based on the elastic solution of the CCM model and the pointwise elastic energy density equivalence between the CCM and BPD models, so as to obtain solutions consistent with the local CCM model. Consequently, the corrected BPD model can better simulate the fracture of the RVEs \cite{YANG2022CICP}.
On the other hand, the above correction method destroys the stationary randomness of PD micromodulus distribution in the RVEs of composites, since the PD micromodulus correction is carried out one by one for the RVEs with specific random samples. Therefore, the homogenization strategy for the BPD model, i.e., upscaling the microscale PD micromodulus on the RVEs into the macroscale effective elastic tensor \cite{madenci2020peridynamic}, will no longer be applicable. Fortunately, estimating the effective elastic tensor of composites does not involve discontinuous deformation, and the elastic solution obtained from the CCM model still works. Therefore, the traditional homogenization method based on the CCM model can be used to obtain the effective elastic tensor of macroscale homogenized material. Then the equivalent PD micromodulus can be derived based on the elastic energy density equivalence between CCM and BPD models. Specifically, the PSM methodology framework proposed in this paper includes following three steps:
\begin{itemize}
\item At the microscale level, an energy-based correction method will be introduced to define the micromodulus of interface bonds in RVEs, which makes sure that for the elastic deformation of RVEs, the elastic energy density of CCM and BPD models will be equivalent.
And an improved microscale BPD model is obtained to analyze the fracture of RVEs. We can further define the effective
critical stretch $\hat{s}_{0i}^{\omega^m}(i=1,2,3)$ along different stretching directions as the critical tensile deformation when the RVE breaks completely.
According to the probability theory, we establish the computational model of statistical critical stretch $\bar{s}_0$ related to $\hat{s}_{0i}^{\omega^m}$ with different samples $\omega^m(m=1,2,\ldots,M)$.
\item At the microscale level, the statistical homogenization approach is introduced to solve the CCM model of the composite materials, and we build the computational model of homogenized elastic coefficients $\hat{\mathbb{A}}(\omega^m)$ defined as integral average over the RVEs.
According to the probability theory, we give the computational models of effective elastic tensor $\bar{\mathbb{A}}$ related to $\hat{\mathbb{A}}(\omega^m)$ with different samples $\omega^m(m=1,2,\ldots,M)$.
Further, according to the energy density equivalence between CCM and BPD models of macroscopic homogenized materials, it is able to derive the equivalent micromodulus $\bar{c}_0$ from the effective elastic tensor $\bar{\mathbb{A}}$.
\item At the macroscale level, based on the above homogenization models, the macroscale BPD model with the statistical critical stretch $\bar{s}_0$ and the equivalent micromodulus $\bar{c}_0$ is then constructed to analyze the fracture of the macroscopic homogenized structure $\bar{\Omega}$.
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.65]{Methodology.pdf}
\caption{Methodological framework.}\label{Fig:sec2.2-1}
\end{figure}
\section{Peridynamics-based statistical multiscale (PSM) method}
\subsection{Microscopic peridynamic modeling for RVEs of composites}
\subsubsection{Micromodulus tensors for the RVEs}
For the two-phase composite materials, there exist three types of PD bonds $\bm \zeta$ in the RVE, including the particle bonds, matrix bonds and interface bonds, as shown in \autoref{Fig:sec3.2.1-1}. For a specified sample $\omega^m$, the PD bonds in the RVE can be defined as $\bm \zeta=\bm y'-\bm y(\bm y,\bm y'\in Y^m)$,
and corresponding micromodulus can be expressed by $c_0(\bm y,\bm \zeta,\omega^m)$.
Since the composite interfaces play an important role on the material failure, it is very important to accurately define the micromodulus of interface bonds in the BPD model.
In our previous work \cite{YANG2022CICP}, an energy-based micromodulus correction method was proposed to determine the mechanical properties of interface bonds in the BPD model for the composite structure with randomly distributed particles.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{bond.pdf}
\caption{Three kinds of PD bonds in the RVE $Y^m$.}\label{Fig:sec3.2.1-1}
\end{figure}
A stretch boundary condition along the positive direction of $y_i$-axis ($i=1,2,3$) is specified on one side of the RVE $Y^m$ with the other sides being simply supported and a middle point being fixed (see \autoref{Fig:sec3.2.2-1}(a) for the example of $y_1$-axis). According to the CCM theory, the displacement field $\bm u^i_{\text{ccm}}(\bm y,\omega^m)(i=1,2,3)$ can be obtained through the equations of elasticity defined on the RVE $Y^m$.
For the infinitesimal elastic deformation, the elastic energy density of CCM and BPD models with the same distribution of displacement field should be equivalent.
Thus, the elastic energy density of CCM and BPD models with the displacement field $\bm u^i_{\text{ccm}}(\bm y,\omega^m)(i=1,2,3)$, i.e., $W^{i}_{\text{ccm}}(\bm y,\omega^m)$ and $W^i_{\text{pd}}(\bm y,\omega^m)(i=1,2,3)$, can be further obtained as
\begin{equation}\label{eq3.1.1}
W^{i}_{\text{ccm}}(\bm y,\omega^m)=\frac{1}{2}\bm \epsilon^i(\bm y,\omega^m):\mathbb{A}(\bm y,\omega^m):\bm \epsilon^i(\bm y,\omega^m),
\end{equation}
\begin{equation}\label{eq3.1.2}
W^i_{\text{pd}}(\bm y,\omega^m)=\frac{1}{4}\int_{\mathcal{H}_{\delta}(\bm y)} \frac{c_0(\bm y,\bm \zeta,\omega^m)+c_0(\bm y',\bm \zeta,\omega^m)}{2}\bigl(\bm \zeta\cdot(\bm u^i_{\text{ccm}}(\bm y,\omega^m)-\bm u^i_{\text{ccm}}(\bm y',\omega^m)) \bigr)^2dV_{\bm y'},
\end{equation}
where $\bm \epsilon^i(\bm y,\omega^m)(i=1,2,3)$ denotes the strain tensor defined as
\begin{equation*}
\bm \epsilon^i(\bm y,\omega^m)=\frac{1}{2}\bigl(\nabla\bm u^i_{\text{ccm}}(\bm y,\omega^m)+\nabla ^\text{T} \bm u^i_{\text{ccm}}(\bm y,\omega^m)\bigr).
\end{equation*}
Based on the energy equivalence between the CCM and BPD models for elastic continuous deformation of the RVE, the correction parameters $\bm \alpha(\bm y)=(\alpha_1(\bm y),\alpha_2(\bm y),\alpha_3(\bm y))$ and $\bm \beta(\bm y')=(\beta_1(\bm y'),\beta_2(\bm y'),\beta_3(\bm y'))$ are defined to modify the micromodulus at the points $\bm y$ and $\bm y'$ in the RVE $Y^m$, respectively
\begin{equation}\label{eq3.1.3}
\alpha_i(\bm{y}) = \frac{W^i_{\text{ccm}}(\bm{y},\omega^m)}{W^i_{\text{pd}}(\bm{y},\omega^m)},\quad \beta_i(\bm{y}') = \frac{W^i_{\text{ccm}}(\bm{y}',\omega^m)}{W^i_{\text{pd}}(\bm{y}',\omega^m)}.
\end{equation}
Thus, the micromodulus at the points $\bm y$ and $\bm y'$ is modified as
\begin{equation}\label{eq3.1.4}
\hat{c}_i^{\omega^m}(\bm{y},\bm \zeta) = \alpha_i(\bm y)c_0(\bm y,\bm \zeta,\omega^m),\quad \hat{c}_i^{\omega^m}(\bm{y}',\bm \zeta) =\beta_i(\bm y')c_0(\bm y',\bm \zeta,\omega^m).
\end{equation}
Further, the micromodulus of a bond $\bm \zeta$ can be defined by the harmonic averaging of $\hat{c}_i^{\omega^m}(\bm y,\bm \zeta)$ for $\bm y$ and $\hat{c}_i^{\omega^m}(\bm{y}',\bm \zeta)$ for $\bm y'$, namely
\begin{equation}\label{eq3.1.5}
k_i^{\omega^m}(\bm{y},\bm \zeta) = \frac{2\hat{c}_i^{\omega^m}(\bm{y},\bm \zeta)\hat{c}_i^{\omega^m}(\bm{y}',\bm \zeta)}{\hat{c}_i^{\omega^m}(\bm{y},\bm \zeta)+\hat{c}_i^{\omega^m}(\bm{y}',\bm \zeta)}.
\end{equation}
Finally, the scalar scaling is applied to define the micromodulus for the bond $\bm \zeta$
\begin{equation}\label{eq3.1.8-c}
\tilde{c}_0^{\omega^m}(\bm{y}, \bm \zeta) = \frac{1}{\sqrt{\bigl(\frac{n_1}{k_1^{\omega^m}(\bm{y},\bm \zeta)}\bigr)^{2} + \bigl(\frac{n_2}{k_2^{\omega^m}(\bm{y},\bm \zeta)}\bigr)^{2} + \bigl(\frac{n_3}{k_3^{\omega^m}(\bm{y},\bm \zeta)}\bigr)^{2}}},
\end{equation}
where $\bm \zeta/ |\bm \zeta| = (n_1, n_2, n_3)^{\text{T}}$ denotes the unit direction vector of bond $\bm \zeta$ in the RVE.
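The correction chain of Eqs.~\eqref{eq3.1.3}--\eqref{eq3.1.8-c} is
summarized by the following minimal Python sketch; the inputs (the six
energy densities and the two uncorrected micromodulus values) are
assumed to be supplied by the CCM and BPD evaluations described above.
\begin{verbatim}
import numpy as np

def corrected_micromodulus(W_ccm_y, W_pd_y, W_ccm_yp, W_pd_yp,
                           c0_y, c0_yp, zeta):
    """Corrected micromodulus of the bond zeta = y' - y.

    W_*_y, W_*_yp : length-3 arrays, elastic energy densities at y, y'
                    for the three axis-aligned stretch load cases.
    c0_y, c0_yp   : uncorrected micromodulus values at y and y'.
    """
    alpha = np.asarray(W_ccm_y) / np.asarray(W_pd_y)     # energy ratios
    beta  = np.asarray(W_ccm_yp) / np.asarray(W_pd_yp)
    c_hat_y, c_hat_yp = alpha * c0_y, beta * c0_yp       # pointwise fix
    k = 2.0 * c_hat_y * c_hat_yp / (c_hat_y + c_hat_yp)  # harmonic average
    nvec = np.asarray(zeta) / np.linalg.norm(zeta)       # bond direction
    return 1.0 / np.sqrt(np.sum((nvec / k) ** 2))        # scalar scaling
\end{verbatim}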
\subsubsection{Statistical critical stretch: microscopic peridynamic simulation}
Based on the definition of the micromodulus by the energy-based correction method,
the improved BPD equilibrium equation, defined on the RVE $Y^m$, is written as follows:
\begin{equation}\label{eq:sec3.2.2-1}
\int_{\mathcal{H}_{\delta}(\bm y)} \bm f^{\omega^m}(\bm y', \bm y) dV_{\bm y'} + \bm b(\bm y) = 0, \quad \bm y,\bm y'\in Y^m.
\end{equation}
We denote by $\mathcal{H}_{\delta}(\bm y)$ the neighborhood in the RVE, and the pairwise force function $\bm f^{\omega^m}(\bm y', \bm y)$ can be defined as
\begin{equation}\label{eq:sec3.2.2-2}
\bm f^{\omega^m}(\bm y', \bm y) =\frac12 \tilde{c}_0^{\omega^m}(\bm y, \bm \zeta) \frac{\bm \zeta \otimes \bm \zeta}{|\bm \zeta|^2}\cdot\bigl( \bm u(\bm y') - \bm u(\bm y) \bigr)\hat{\kappa}(t,\bm \zeta),
\end{equation}
where $\bm u$ denotes the displacement in the RVE, and $\hat{\kappa}(t,\bm \zeta)$ denotes the history-dependent scalar-valued function defined by Eq. \eqref{eq:sec3.2.2-5}.
The bond stretch $s_{\bm \zeta}$ in the RVE $Y^m$ can be defined as follows
\begin{align}\label{eq:sec3.2.2-4}
s_{\bm \zeta}=\frac{|\bm \zeta+\boldsymbol{u}(\boldsymbol{y}+\bm \zeta)-\boldsymbol{u}(\boldsymbol{y})|-|\bm \zeta|}{|\bm \zeta|}.
\end{align}
In this study, since two-phase particle reinforced composite structures are considered, the failure law of the RVE $Y^m$ in the composite structure is implemented by defining different history-dependent scalar-valued functions $\hat{\kappa}$ for the matrix bonds, particle bonds and interface bonds as follows
\begin{align}\label{eq:sec3.2.2-5}
\hat{\kappa}(t,\bm \zeta)=
\left\{
\begin{array}{lr}
1, \, \text{if}\,\,\, s_{\bm \zeta}<s_0(\bm y,\bm \zeta) \quad \forall \ 0\leq t'\leq t, \\
0, \, \text{otherwise},
\end{array}
\right.
\end{align}
where $t'$ and $t$ denote the computational steps, $\bm \zeta$ denotes the three kinds of PD bonds, and $s_0(\bm y,\bm \zeta)$ denotes critical stretch value corresponding to the particle bonds, matrix bonds and interface bonds.
A stretch boundary condition along the positive direction of the $y_i$-axis ($i=1,2,3$) is specified on one side of the RVE $Y^m$, with the other sides being simply supported and a middle point being fixed (see \autoref{Fig:sec3.2.2-1}(a) for the example of the $y_1$-axis).
According to the improved BPD model and the failure criterion of PD bonds, the tensile displacement along the $y_i$-axis $(i=1,2,3)$ is increased until the RVE breaks completely, which defines the critical tensile displacement $\hat{u}_{0i}^{\omega^m}$ (see \autoref{Fig:sec3.2.2-1}(b)).
Since the RVE corresponds to a material point $\bm x$ in the macro-structure,
the effective critical bond stretch $\hat{s}_{0i}^{\omega^m}(i=1,2,3)$ along the $x_i$-direction $(i=1,2,3)$ can be defined as follows
\begin{equation}\label{eq:sec3.2.2-6}
\hat{s}_{0i}^{\omega^m}=\frac{(\varepsilon+\hat{u}_{0i}^{\omega^m})-\varepsilon}{\varepsilon}=\frac{\hat{u}_{0i}^{\omega^m}}{\varepsilon}.
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{critical-stretch.pdf}
\caption{The critical bond stretch of the RVE $Y^m$ with a stretch boundary condition along the positive direction of $y_1$-axis.}\label{Fig:sec3.2.2-1}
\end{figure}
From the Kolmogorov strong law of large numbers, we can evaluate the expected critical stretch $\bar{s}_{0i}(i=1,2,3)$ along the $x_i$-direction by taking different samples $\omega^{m}(m=1,2,\cdots,M)$
\begin{equation}\label{eq:s0i}
\bar{s}_{0i}=\lim_{M\rightarrow\infty}\frac{\displaystyle\sum_{m=1}^{M}\hat{s}_{0i}^{\omega^{m}}}{M}.
\end{equation}
The scalar scaling is then applied to define the statistical critical stretch $\bar{s}_0$ corresponding to the bond $\bm \xi$ as follows
\begin{equation}\label{eq:s0}
\bar{s}_0(\bm \xi)= \frac{1}{\sqrt{\bigl(\frac{v_1}{\bar{s}_{01}}\bigr)^{2} + \bigl(\frac{v_2}{\bar{s}_{02}}\bigr)^{2} + \bigl(\frac{v_3}{\bar{s}_{03}}\bigr)^{2}}},
\end{equation}
where $\bm \xi/ |\bm \xi| = (v_1, v_2, v_3)^{\text{T}}$ denotes the unit direction vector of the bond $\bm \xi=\bm x'-\bm x\,(\bm x,\bm x'\in \bar{\Omega})$ in the macroscale homogenized structure $\bar{\Omega}$, which will be discussed in Section 3.2.
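In practice $M$ is finite, and Eqs.~\eqref{eq:s0i}--\eqref{eq:s0}
reduce to a Monte Carlo average followed by the directional scaling,
as in the minimal Python sketch below (the array layout is our
assumption).
\begin{verbatim}
import numpy as np

def statistical_critical_stretch(s_hat, xi):
    """s_hat: (M, 3) effective critical stretches, one row per sample;
    xi: macroscale bond vector.  Returns the scaled critical stretch."""
    s_bar = s_hat.mean(axis=0)                  # finite-M estimate
    v = xi / np.linalg.norm(xi)                 # unit bond direction
    return 1.0 / np.sqrt(np.sum((v / s_bar) ** 2))
\end{verbatim}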
\subsubsection{Equivalent micromodulus: statistical homogenization approach}
The CCM model describing the elastic mechanical behavior of the randomly distributed composite structure $\Omega$ is given by
\begin{eqnarray}\label{eq:sec3.2.3-1}
\vspace{2mm}\displaystyle -\frac{\partial}{\partial
x_j}\left[a_{ijkl}({\frac{\bm x}{\varepsilon},\omega})\frac{1}{2}\left(\frac{\partial u^{\varepsilon}_k}{\partial
x_l}+\frac{\partial u^{\varepsilon}_l}{\partial x_k}\right)\right]=q_i(\bm x),\ \ \text{in}\ \Omega,
\end{eqnarray}
where $\bm {u}^{\varepsilon}({\bm x},\omega)$ is the displacement vector.
As mentioned in Section 2.1, $\bm x$ and $\bm y$ denote the macroscale coordinates defined on the structure $\Omega$ and the microscale coordinates defined on the normalized RVE, respectively; they are related by $\bm y=\bm x/\varepsilon$.
Since $\bm x$ and $\bm y$ are treated as independent variables, differentiation with respect to $\bm x$ obeys the two-scale chain rule
\begin{eqnarray}\label{eq:sec3.2.3-3}
\frac{\partial }{\partial x_i}\rightarrow\frac{\partial }{\partial
x_i} + \varepsilon^{-1} \frac{\partial }{\partial y_i},\
(i=1,2,3).
\end{eqnarray}
Inspired by the classical mathematical homogenization theory, one supposes that $\bm {u}^{\varepsilon}$ can be expressed in the following asymptotic form for a specified sample $\omega^m$ \cite{yang2020stochastic}
\begin{eqnarray}\label{eq:sec3.2.3-4}
\bm u^{\varepsilon}(\bm x,\omega)=\bm u^{0}(\bm x,\bm y,\omega^m)+\varepsilon \bm{u}^{1}(\bm x,\bm y,\omega^m)+\varepsilon^2 \bm u^{2}(\bm x,\bm y,\omega^m)+ \cdots.
\end{eqnarray}
Introducing Eqs. \eqref{eq:sec3.2.3-3} and \eqref{eq:sec3.2.3-4} into Eq. \eqref{eq:sec3.2.3-1} and equating the coefficients of like powers $\varepsilon^i(i=-2,-1,0,\ldots)$ on both sides, a series of equations is obtained.
From the coefficients of $\varepsilon^{-2}$, it follows that the leading-order term is independent of the microscale coordinate, i.e., $\bm u^{0}=\bm u^{0}(\bm x)$.
From the coefficients of $\varepsilon^{-1}$, the first-order term takes the form
\begin{eqnarray*}\label{eq:sec3.3.1-8}
\bm {u}^{1}(\bm x,\bm y,\omega^m)= \bm N_{\alpha}(\bm y,\omega^m)\frac{\partial \bm {u}^0}{\partial x_{\alpha}}(\bm x)+\bm {\breve{u}}^1(\bm x),
\end{eqnarray*}
where $\bm {\breve{u}}^1$ is a function independent of $\bm y$, and ${\bm N}_{\alpha}(\bm y,\omega^m)$ is a matrix-valued function defined in the RVE $Y^m$ as follows
\begin{equation}\label{eq:sec3.2.3-5}
\frac{\partial }{{\partial {y_j}}}\left[ {{a_{ijkl}}(\bm y,\omega^m)\frac{1}{2}\left( {\frac{{\partial {N_{\alpha km}}}}{{\partial {y_l}}} + \frac{{\partial {N_{\alpha lm}}}}{{\partial {y_k}}}} \right)} \right] = - \frac{{\partial {a_{ijm\alpha }}(\bm y,\omega^m)}}{{\partial {y_j}}},\quad \bm y \in Y^m,
\end{equation}
with following boundary condition
\begin{equation}\label{eq:sec3.2.3-6}
\bm {N}_{\alpha m}(\bm y,\omega^m)=0, \quad \forall \bm y\in\partial Y^m.
\end{equation}
From the coefficients of $\varepsilon^{0}$, the following homogenized equation, defined on the macroscale homogenized structure $\hat{\Omega}$ corresponding to the sample $\omega^m$, can be obtained
\begin{eqnarray}\label{eq:sec3.2.3-7}
- \frac{\partial }{{\partial {x_j}}}\left( {\hat{a}_{ijkl}(\omega^m)\frac{1}{2}\left( {\frac{{\partial u_k^{\text{0}}({\bm x})}}{{\partial {x_l}}} + \frac{{\partial u_l^{\text{0}}({\bm x})}}{{\partial {x_k}}}} \right)} \right) = {q_i}({\bm x}),\quad \bm x \in \hat{\Omega},
\end{eqnarray}
where $\hat{a}_{ijkl}(\omega^m)$ is the component of the homogenized elastic tensor $\hat{\mathbb{A}}(\omega^m)$ defined as
\begin{equation}\label{eq:sec3.2.3-8}
\hat{a}_{ijkl}(\omega^m) = \frac{1}{{\left| Y^m \right|}}\int_{Y^m} {\left( {a_{ijkl}^{}({\bm y},\omega^m) + a_{ijpq}^{}({\bm y},\omega^m)\frac{1}{2}\left( {\frac{{\partial N_{kpl}^{}({\bm y},\omega^m)}}{{\partial {y_q}}} + \frac{{\partial N_{kql}^{}({\bm y},\omega^m)}}{{\partial {y_p}}}} \right)} \right)} dy.
\end{equation}
By the Kolmogorov strong law of large numbers, the effective elastic tensor $\bar{a}_{ijkl}$ can be evaluated by averaging over different samples $\omega^{m}\ (m=1,2,\cdots,M)$
\begin{equation}\label{eq:aijkl}
\bar{a}_{ijkl} =\lim_{M\rightarrow\infty}\frac{\displaystyle\sum_{m=1}^{M}\hat{a}_{ijkl}(\omega^m)}{M}.
\end{equation}
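To illustrate how Eq. \eqref{eq:aijkl} is used in practice, the following Python sketch accumulates the ensemble average of per-sample homogenized tensors and stops once the running mean stabilizes. The callable \texttt{homogenize\_sample}, which would solve the cell problem Eqs. \eqref{eq:sec3.2.3-5}--\eqref{eq:sec3.2.3-6} and evaluate Eq. \eqref{eq:sec3.2.3-8} for one RVE realization, is a hypothetical placeholder rather than a description of our implementation.
\begin{verbatim}
import numpy as np

def effective_tensor(homogenize_sample, M, tol=1e-3, seed=0):
    """Monte Carlo estimate of the effective elastic tensor.

    homogenize_sample(rng) returns the (3,3,3,3) homogenized tensor
    a_hat_{ijkl}(omega^m) for one RVE sample; the running mean is
    monitored, mimicking the strong law of large numbers.
    """
    rng = np.random.default_rng(seed)
    running_sum = np.zeros((3, 3, 3, 3))
    prev_mean = None
    for m in range(1, M + 1):
        running_sum += homogenize_sample(rng)
        mean = running_sum / m
        if prev_mean is not None and np.max(np.abs(mean - prev_mean)) < tol:
            return mean, m   # converged before exhausting the sample budget
        prev_mean = mean
    return prev_mean, M
\end{verbatim}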
The equivalent micromodulus $\bar c_0(\bm x,\bm \xi)$ for the BPD model on the homogenized structure $\bar{\Omega}$ can be defined through the effective elastic tensor $\bar{\mathbb{A}}=\{\bar{a}_{ijkl}\}$ as follows \cite{azdoud2013morphing}
\begin{equation}\label{eq:eff-microc}
\bar{\mathbb{A}}=\frac{1}{2}\int_{\mathcal{H}_{\delta}(\bm x)} \bar c_0(\bm x,\bm \xi) \frac{\bm \xi \otimes \bm \xi \otimes \bm \xi \otimes \bm \xi}{|\bm \xi|^2} dV_{\bm x'}.
\end{equation}
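As a minimal illustration of inverting Eq. \eqref{eq:eff-microc}, assume a 2D isotropic exponential kernel $\bar c_0(\bm\xi) = a_0 e^{-|\bm\xi|/l}$ (the form later adopted in Section 5 with $\bar a_1 = \bar a_2 = 0$). The $\bar a_{1111}$ component then fixes the amplitude $a_0$ through a purely geometric integral over the horizon, evaluated below in polar coordinates; the function names and quadrature resolution are illustrative assumptions.
\begin{verbatim}
import numpy as np

def kernel_moment_1111(delta, l, nr=400, nt=400):
    """Integral of exp(-r/l) * xi1^4 / |xi|^2 over a 2D disk of radius delta.

    With xi = r (cos t, sin t), the integrand including the area element
    r dr dt reduces to exp(-r/l) * r^3 * cos(t)^4.
    """
    r = np.linspace(0.0, delta, nr)
    t = np.linspace(0.0, 2.0 * np.pi, nt)
    dr, dt = r[1] - r[0], t[1] - t[0]
    rr, tt = np.meshgrid(r, t, indexing="ij")
    integrand = np.exp(-rr / l) * rr**3 * np.cos(tt) ** 4
    return integrand.sum() * dr * dt   # simple rectangle-rule quadrature

def amplitude_from_A1111(A1111, delta, l):
    # Eq. (eff-microc): A1111 = 0.5 * a0 * moment  =>  a0 = 2 A1111 / moment.
    return 2.0 * A1111 / kernel_moment_1111(delta, l)
\end{verbatim}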
\subsection{Statistical peridynamic modeling for macroscopic homogenized structure}
According to the statistical homogenization theory, the effect of the microstructure, including the shape, spatial distribution and volume fraction of the particles, on structural failure is homogenized into two peridynamic parameters, namely the statistical critical stretch $\bar{s}_0$ and the equivalent micromodulus $\bar{c}_0$. The BPD equilibrium equation, defined on the homogenized structure $\bar{\Omega}$, is therefore written as follows:
\begin{equation}\label{eq:sec3.3-1}
\int_{\mathcal{H}_{\delta}(\bm x)} \bm f(\bm x', \bm x) dV_{\bm x'} + \bm b(\bm x) = 0, \quad \bm x,\bm x'\in \bar{\Omega}.
\end{equation}
Here $\mathcal{H}_{\delta}(\bm x)$ denotes the neighborhood of $\bm x$ in the macroscale homogenized structure $\bar{\Omega}$, and the pairwise force function $\bm f(\bm x', \bm x)$ is defined as
\begin{equation}\label{eq:sec3.3-2}
\bm f (\bm x', \bm x) = \bar{c}_0(\bm x,\bm \xi) \frac{\bm \xi \otimes \bm \xi}{|\bm \xi|^2}\cdot \bigl( \bm u(\bm x') - \bm u(\bm x) \bigr)\bar{\kappa}(t,\boldsymbol{\xi}),
\end{equation}
where $\bm u$ denotes the displacement in the homogenized structure $\bar{\Omega}$, and $\bar{\kappa}(t,\bm \xi)$ denotes the history-dependent scalar-valued function defined by Eq. \eqref{eq:sec3.3-5}.
The bond stretch $s_{\bm \xi}$ in the homogenized structure $\bar{\Omega}$ can be defined as follows
\begin{align}\label{eq:sec3.3-4}
s_{\bm \xi}=\frac{|\bm \xi+\boldsymbol{u}(\boldsymbol{x}+\boldsymbol{\xi})-\boldsymbol{u}(\boldsymbol{x})|-|\boldsymbol{\xi}|}{|\boldsymbol{\xi}|}.
\end{align}
The failure law of the homogenized structure $\bar{\Omega}$ is implemented through the history-dependent scalar-valued function
\begin{align}\label{eq:sec3.3-5}
\bar{\kappa}(t,\boldsymbol{\xi})=
\left\{
\begin{array}{lr}
1, \, \text{if}\,\,\, s_{\bm \xi}(t')<\bar{s}_0(\bm \xi) \quad \forall \ 0\leq t'\leq t, \\
0, \, \text{otherwise},
\end{array}
\right.
\end{align}
where $t'$ and $t$ denote the computational steps.
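For clarity, a small Python sketch of one step of this history-dependent failure law follows, combining Eqs. \eqref{eq:sec3.3-4} and \eqref{eq:sec3.3-5}; the array layout is an assumption made for illustration, not a description of our solver.
\begin{verbatim}
import numpy as np

def update_bonds(xi, u_x, u_xp, kappa, s0):
    """Update bond states after one displacement step.

    xi    : (nb, dim) bond vectors x' - x
    u_x   : (nb, dim) displacements at x; u_xp those at x'
    kappa : (nb,) 0/1 bond states carried over from previous steps
    s0    : (nb,) critical stretches per bond
    """
    length0 = np.linalg.norm(xi, axis=1)
    stretch = (np.linalg.norm(xi + u_xp - u_x, axis=1) - length0) / length0
    kappa = kappa * (stretch < s0)   # once broken, a bond stays broken
    return kappa, stretch
\end{verbatim}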
\section{Peridynamics-based statistical multiscale algorithm}
The proposed PSM framework includes several key models. The energy-based micromodulus correction model consists of Eqs. \eqref{eq3.1.1}-\eqref{eq3.1.8-c}, the statistical critical stretch model of Eqs. \eqref{eq:sec3.2.2-6}-\eqref{eq:s0}, the equivalent micromodulus model of Eqs. \eqref{eq:sec3.2.3-8}-\eqref{eq:eff-microc}, and the statistical peridynamic model for the macroscopic homogenized structure of Eqs. \eqref{eq:sec3.3-1}-\eqref{eq:sec3.3-5}; together they comprise a closed system for the objective simulation of composite structures with randomly distributed particles.
\begin{algorithm}[H]
\renewcommand{\thealgorithm}{of PSM}
\caption{Computation steps of the proposed PSM method}
{\bf Algorithm I: Definition of corrected micromodulus $\tilde{c}_0^{\omega^m}(\bm{y}, \bm \zeta)$}\\
{\bf Input:} geometry of RVE $Y^m$ with sample $\omega^m$, elastic properties of particles and matrix\\
{\bf Output:} Micromodulus $\tilde{c}_0^{\omega^m}(\bm{y}, \bm \zeta)$ of PD bonds in $Y^m$
\begin{algorithmic}[1]
\STATE{Compute displacement field of the RVE through CCM theory.}
\STATE{Compute elastic energy density of the CCM and BPD models by Eqs. \eqref{eq3.1.1} and \eqref{eq3.1.2}.}
\STATE{Compute correction factor along different directions similarly by Eq. \eqref{eq3.1.3}.}
\STATE{Compute $\tilde{c}_0^{\omega^m}(\bm{y}, \bm \zeta)$ by harmonic average and scalar scaling in Eqs. \eqref{eq3.1.5}-\eqref{eq3.1.8-c}.}
\end{algorithmic}
\vspace{6 pt}
{\bf Algorithm II: Computation of statistical critical stretch $\bar{s}_{0}$}\\
{\bf Input:} geometry of RVE $Y^m$, micromodulus $\tilde{c}_0^{\omega^m}(\bm{y}, \bm \zeta)$\\
{\bf Output:} statistical critical stretch $\bar{s}_{0}$
\begin{algorithmic}[1]
\STATE{Compute displacement of the RVE by Eq. \eqref{eq:sec3.2.2-1} with a $y_i$-direction boundary condition.}
\STATE{Compute effective critical stretch $\hat{s}_{0i}^{\omega^m}$ related to sample $\omega^m$ by Eq. \eqref{eq:sec3.2.2-6}.}
\STATE{Compute statistical critical stretch $\bar{s}_{0}(\bm \xi)$ by Eqs. \eqref{eq:s0i} and \eqref{eq:s0}.}
\end{algorithmic}
\vspace{6 pt}
{\bf Algorithm III: Computation of equivalent micromodulus $\bar{c}_0$}\\
{\bf Input:} geometry of RVE $Y^m$, elastic properties of particles and matrix\\
{\bf Output:} equivalent micromodulus $\bar{c}_0$
\begin{algorithmic}[1]
\STATE{Compute cell function ${\bm N}_{\alpha}(\bm y,\omega^m)$ by solving Eqs. \eqref{eq:sec3.2.3-5} and \eqref{eq:sec3.2.3-6}.}
\STATE{Compute homogenized elastic tensor $\{\hat{a}_{ijkl}(\omega^{m})\}$ related to sample $\omega^m$ by Eq. \eqref{eq:sec3.2.3-8}.}
\STATE{Compute effective elastic tensor $\bar{\mathbb{A}}=\{\bar{a}_{ijkl}\}$ by Eq. \eqref{eq:aijkl}.}
\STATE{Compute micromodulus $\bar c_0(\bm x,\bm \xi)$ by Eq. \eqref{eq:eff-microc}.}
\end{algorithmic}
\vspace{6pt}
{\bf Algorithm IV: Computation of displacement field $\bm u(\bm x)$}\\
{\bf Input:} geometry of $\bar{\Omega}$, statistical critical stretch $\bar{s}_{0}$, equivalent micromodulus $\bar{c}_0$\\
{\bf Output:} displacement field $\bm u(\bm x)$ in homogenized structure $\bar{\Omega}$
\begin{algorithmic}[1]
\STATE{Compute displacement of the homogenized structure $\bar{\Omega}$ by Eq. \eqref{eq:sec3.3-1}.}
\STATE{Compute bond stretch by Eq. \eqref{eq:sec3.3-4} and judge bond failure by Eq. \eqref{eq:sec3.3-5}.}
\end{algorithmic}
\end{algorithm}
A detailed implementation procedure of the proposed PSM method for the fracture simulation of composite structure reinforced by randomly distributed particles is described in Algorithm of PSM.
The models are numerically implemented based on finite element discretization, in
which both the continuous elements (CE) and discrete elements (DE) are applied \cite{li2023peridynamics}.
The displacement field in the crack area belongs to $L^2$ space, and for the other area of the structure, the displacement field is in $H^1$ space.
According to the regularity of the structure, we apply the DEs in the crack area and CEs in the remainder of the structure.
Based on the computation of displacement, the bond stretch is obtained. Further, the initiation and propagation of crack are driven by the bond breaking according to bond failure criterion.
The update of the crack area leads to an update of the meshes, which makes some CEs convert to DEs, as sketched below.
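A schematic of this CE-to-DE switch follows; the element bookkeeping (dictionaries keyed by element and bond identifiers) is a hypothetical stand-in for the actual mesh data structures of \cite{li2023peridynamics}.
\begin{verbatim}
def adapt_elements(elements, broken_bonds, bond_to_elements):
    """Convert continuous elements crossed by newly broken bonds to discrete.

    elements         : dict element_id -> "CE" or "DE"
    broken_bonds     : iterable of bond ids that failed in this step
    bond_to_elements : dict bond_id -> element ids crossed by that bond
    """
    for bond in broken_bonds:
        for eid in bond_to_elements[bond]:
            if elements[eid] == "CE":
                elements[eid] = "DE"   # crack area uses discrete (L2) elements
    return elements
\end{verbatim}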
Finally, \autoref{Fig:sec4.3-1} shows a flowchart of the proposed adaptive algorithm for fracture simulation.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Algorithm.pdf}
\caption{Flowchart for computing fracture of random composite structure.}
\label{Fig:sec4.3-1}
\end{figure}
\section{Numerical examples}
The feasibility and effectiveness of the proposed PSM method are illustrated with the 2D and 3D examples in this section.
We consider the brittle composites with Young's modulus of $E_M$ = 71.7 GPa for the matrix, and $E_P$ = 427 GPa for the particles.
The Poisson's ratio of both the matrix and the particles is set to 1/3 in the 2D examples and 1/4 in the 3D examples.
All 2D examples are meshed by quadrilateral elements and 3D cases are meshed by hexahedral elements.
The horizon $\delta$ of the neighborhoods in the PD model is taken as three times the average grid size to ensure the integral accuracy of the nonlocal effects in the numerical computations.
The critical stretches of particle bonds, matrix bonds and interface bonds are set to 0.00338, 0.01161 and 0.007495, respectively, in Sections 5.1, 5.2.1 and 5.2.2; to 0.00338, 0.01161 and 0.00387 in Section 5.2.3; and to 0.01 for all three bond types in Section 5.3.
The 2D and 3D RVEs are chosen as square and cube domains with a side length of 1 mm, respectively, in all the examples.
The micromodulus of both matrix and particles in the RVE with a given sample $\omega^m$ is assumed to be an exponential function \cite{azdoud2013morphing}
\begin{equation}\label{eq:mirco-exa}
c_0(\bm y, \bm \zeta, \omega^m) =\bigl(a_0+a_1\cos(2\theta)+a_2\cos(4\theta)\bigr) e^{-|\bm{\zeta}|/l},
\end{equation}
where $a_0$, $a_1$ and $a_2$ can be estimated from the Poisson's ratio and Young's modulus of the matrix and particles \cite{azdoud2013morphing}, $\theta$ denotes the angle between the bond $\bm \zeta$ and the $y_1$-axis, and $l$ is a characteristic length set to one-third of the average grid size of the RVE.
Similarly, after the statistical homogenization, the equivalent micromodulus can also be defined as
\begin{equation}\label{eq:emirco-exa}
\bar{c}_0(\bm x, \bm \xi) =\bigl(\bar{a}_0+\bar{a}_1\cos(2\varphi)+\bar{a}_2\cos(4\varphi)\bigr) e^{-|\bm{\xi}|/\bar{l}},
\end{equation}
where $\bar{a}_0$, $\bar{a}_1$ and $\bar{a}_2$ can be estimated from the effective Poisson's ratio and Young's modulus of the composite materials, $\varphi$ denotes the angle between the bond $\bm \xi$ and the $x_1$-axis, and $\bar{l}$ is a characteristic length set to one-third of the average grid size of the macroscale homogenized structures.
It should be pointed out that the homogenized materials in Sections 5.2.1 and 5.2.3 are isotropic, so $\bar{a}_1=\bar{a}_2=0$ \cite{azdoud2013morphing}.
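For reference, the micromodulus of Eq. \eqref{eq:mirco-exa} (and, with barred coefficients, Eq. \eqref{eq:emirco-exa}) can be evaluated directly as below; the coefficient values passed in are placeholders to be fitted from the elastic constants following \cite{azdoud2013morphing}.
\begin{verbatim}
import numpy as np

def micromodulus(zeta, a0, a1=0.0, a2=0.0, l=1.0):
    """Exponential micromodulus with angular anisotropy for 2D bonds.

    zeta : (..., 2) bond vectors; theta is the angle to the y1-axis.
    Isotropic homogenized materials take a1 = a2 = 0.
    """
    zeta = np.asarray(zeta, dtype=float)
    r = np.linalg.norm(zeta, axis=-1)
    theta = np.arctan2(zeta[..., 1], zeta[..., 0])
    return (a0 + a1 * np.cos(2 * theta)
               + a2 * np.cos(4 * theta)) * np.exp(-r / l)
\end{verbatim}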
Moreover, our numerical experiments are performed on a desktop workstation with 96 GB of memory and a 2.20 GHz Xeon 5220R CPU.
\subsection{Validation of accuracy and efficiency for the PSM method}
To validate the proposed PSM method, we consider two kinds of periodic composite structures, namely square structures with a side length of 5 mm composed of RVE 1, as illustrated in \autoref{Fig:sec5.1-1}(a), and of RVE 2, as illustrated in \autoref{Fig:sec5.1-1}(b). \autoref{fig:sec5-1} shows the geometry and boundary conditions of the composite structures and the corresponding RVEs.
The biaxial tensile boundary condition with $\tilde{u}_1=0.02$ mm along $y_1$-axis is specified on the left and right sides of the RVEs, and the middle points on the left and right sides of RVEs are fixed along $y_2$-axis. Similar biaxial tensile boundary condition with $\tilde{u}_1=0.03$ mm along $x_1$-axis and fixed boundary condition along $x_2$-axis are specified for the macroscopic structures.
In order to demonstrate the accuracy and efficiency of the present method, we take the single-scale direct BPD fracture analyses of composite structures as reference.
The BPD simulations of the composite structures, RVEs and homogenized structures were implemented through 100 displacement increments (steps).
\autoref{fig:rve_frac} shows the crack paths and the stress in the $y_1$ direction versus displacement of the two RVEs. It can be found from \autoref{fig:rve_frac}(a) and \autoref{fig:rve_frac}(b) that the microstructure can significantly affect the crack propagation path, and that RVE 1 and RVE 2 break completely at steps 14 and 11, respectively. Besides, we can obtain from \autoref{fig:rve_frac}(c) that the effective critical stretches of the two RVEs (see Eq. \eqref{eq:sec3.2.2-6}) are 0.0056 and 0.0044, respectively. Based on the homogenization method, the equivalent Young's moduli (see Eq. \eqref{eq:sec3.2.3-8}) are obtained as 118.286 GPa and 110.354 GPa, respectively. Thus, the fracture of the homogenized macroscopic structure can be simulated by the BPD model with the effective critical bond stretch and the micromodulus calculated from the equivalent Young's modulus.
\autoref{fig:sec5_macro} shows the stress versus displacement for two kinds of composite structures simulated by the PSM method and single-scale direct BPD analyses.
From \autoref{fig:sec5_macro}, it can be seen that the failure process simulated by the PSM method is in good agreement with that by the single-scale direct BPD analyses.
The information of the CE and DE meshes for the composite structures, RVEs and homogenized structures is listed in \autoref{tab:time_compare}, which shows that the proposed PSM method saves almost 80\% of the computational time, an important advantage in engineering computation.
As a result, the PSM method is effective to analyze the fracture behaviors of particle reinforced composite structures.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{example1-1.pdf}
\caption{2D Macroscopic structure and related RVEs.}\label{Fig:sec5.1-1}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.29\textwidth]{./sec5-1-macro_1.png}\label{fig:macro-5-1}}
\subfigure[]{
\includegraphics[width=0.29\textwidth]{./rve1_cell_geo.png}\label{fig:rve1_geo}}
\subfigure[]{
\includegraphics[width=0.29\textwidth]{./rve2_cell_geo.png}\label{fig:rve2_geo}}
\caption{(a) geometry and boundary conditions for the macroscopic structure, (b) and (c) geometry and boundary conditions for RVE 1 and RVE 2.}\label{fig:sec5-1}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./rve1_frac_cell.png}\label{fig:rve1_frac_cell}}
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./rve2_frac_cell.png}\label{fig:rve2_frac_cell}}
\subfigure[]{
\includegraphics[width=0.29\textwidth]{./rve1_2_stress_curve_micro.png}\label{fig:rve1_2_stress_cell}}
\caption{Crack path for (a) 2D RVE 1 and (b) 2D RVE 2 at imposed displacement step 14 and 11, respectively; (c) stress versus imposed displacement for two kinds of RVEs.}\label{fig:rve_frac}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./rve1_stress_curve_macro.png}}
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./rve2_stress_curve_macro.png}}
\caption{Stress versus imposed displacement for composite structure with (a) RVE 1, $\varepsilon=1/5$ and (b) RVE 2, $\varepsilon=1/5$ simulated by PSM method and single-scale direct BPD method, respectively.}\label{fig:sec5_macro}
\end{figure}
\begin{table}[H]
\setlength{\abovecaptionskip}{0cm}
\setlength{\belowcaptionskip}{0.3cm}
\centering
\caption{Comparison of computational time for 2D composite structures, RVEs and homogenized structures.}\label{tab:time_compare}
\scalebox{0.8}{
\begin{tabular}{cccccccc}
\toprule
\multirow{2}*{} & \multicolumn{3}{c}{Composite with RVE 1} & \quad & \multicolumn{3}{c}{Composite with RVE 2} \\ \cline{2-4} \cline{6-8}
& Composite & RVE 1 & Homogenized structure & \quad & Composite & RVE 2 & Homogenized structure \\ \hline
Elements & 77,296 & 5,820 & 28,224 & \quad & 91,128 & 5,770 & 28,224 \\
CE Nodes & 77,857 & 5,973 & 28,561 & \quad & 91,729 & 5,923 & 28,561 \\
DE Nodes & 309,184 & 23,280 & 112,896 & \quad & 364,512 & 23,080 & 112,896 \\
Time(s) & 1,191,714 & 17,047 & 242,212 & \quad & 1,263,596 & 52,318 & 346,950 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Fracture of 2D random composite structures by the PSM method}
\subsubsection{Composite structures with uniform distribution of particles}
We consider the composite structure with a side length of 8 mm reinforced by a uniform distribution of circular particles, as shown in \autoref{fig:sec5.2-1}, where notches with a width of 0.2 mm and a length of 1 mm are preset at the left and right sides of the structure, and shear and tensile boundary conditions with $\tilde{u}_1=\tilde{u}_2=0.1$ mm are applied.
The volume fraction of particles in the RVE is 14\%. The size and boundary condition of the RVE are the same as those in Section 5.1.
The BPD simulations of the RVE and homogenized structures were implemented through 100 displacement increments (steps).
\begin{figure}[H]
\centering
\includegraphics[scale=0.48]{example1-2.pdf}
\caption{Geometry and boundary conditions of the structure with uniform distribution of circular particles.}\label{fig:sec5.2-1}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.4\textwidth]{./uniform_plot_c.png}\label{fig:uniform_plot_c}}
\subfigure[]{
\includegraphics[width=0.4\textwidth]{./uniform_plot_s.png}\label{fig:uniform_plot_t}}
\caption{The expected value of (a) $\bar{a}_0$ in micromodulus defined by Eq. \eqref{eq:emirco-exa}, (b) critical stretch with different number of samples.}\label{fig:uniform_plot}
\end{figure}
\autoref{fig:uniform_plot} displays the expected values of $\bar{a}_0$ in micromodulus defined by Eq. \eqref{eq:emirco-exa} and critical stretch of the random composite material.
Statistically, different samples yield different results, but as the number of samples with the same statistical characteristics increases, the mathematical expectation of the computational results should converge. As shown in \autoref{fig:uniform_plot}, the scatter of the data indeed decreases with an increasing number of samples.
Therefore, 25 samples were taken in this study to keep the scatter of the numerical results acceptable.
\autoref{fig:frac_uniform} shows the crack paths in a specified sample RVE simulated by the microscopic BPD model.
\autoref{fig:frac_biaxial} shows the crack paths for the homogenized structure under biaxial tension and shear simulated by the macroscopic BPD model with the expectations of the micromodulus and critical stretch; the paths are consistent with the experimental results of \cite{nooru1993experimental}.
Based on the proposed PSM framework, a single-scale direct BPD simulation of the composite structure can be replaced by the microscopic BPD simulation of RVEs and the macroscopic BPD simulation of homogenized structure with high efficiency.
\begin{figure}[H]
\centering
\subfigure[Step 12]{
\includegraphics[width=0.32\textwidth]{./frac_12.png}}
\subfigure[Step 16]{
\includegraphics[width=0.32\textwidth]{./frac_50.png}}
\subfigure[Step 100]{
\includegraphics[width=0.32\textwidth]{./frac_100.png}}
\caption{Crack paths of the RVE at different steps.}\label{fig:frac_uniform}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Step 8]{
\includegraphics[width=0.32\textwidth]{./a1_frac_8.png}}
\subfigure[Step 10]{
\includegraphics[width=0.32\textwidth]{./a1_frac_10.png}}
\subfigure[Step 100]{
\includegraphics[width=0.32\textwidth]{./a1_frac_100.png}}
\caption{Crack paths of the homogenized structure under biaxial tension and shear at different steps.}\label{fig:frac_biaxial}
\end{figure}
\subsubsection{Composite structures reinforced by particles with different inclination angles}
We consider the L-shaped composite panel reinforced by uniform distribution of elliptical particles, and the boundary conditions
and dimensions are presented in \autoref{fig:sec5.2-2}.
Five RVEs containing 14\% elliptical particles with different inclination angles between the long axis of the ellipse and the $y_1$-axis, namely $15^\circ$, $30^\circ$, $45^\circ$, $60^\circ$ and $75^\circ$, are taken into account, as shown in \autoref{fig:angle_geo}. The size and boundary condition of the RVEs are the same as those in Section 5.1.
The simulations of the RVEs and homogenized structures were implemented by 100 displacement increments (steps).
\autoref{tab:angle_rve} presents the expected values of micromodulus defined by Eq. \eqref{eq:emirco-exa} and critical stretch of elliptical particle reinforced composites with inclination angle from $15^\circ$ to $75^\circ$.
From \autoref{tab:angle_rve}, it can be seen that the critical stretch $\bar{s}_{01}$ increases with the increase of inclination angle.
Since the biaxial tensile direction is parallel to the $y_1$-axis, the smaller the inclination angle between the long axis of the ellipse and the $y_1$-axis, the more readily stress concentrates around the elliptical particles in the RVE. Therefore, for the inclination angle of $15^\circ$ the cracks initiate most easily, which leads to the smallest critical stretch. Similar results can be obtained for the critical stretch $\bar{s}_{02}$ from \autoref{tab:angle_rve}.
The crack paths of the homogenized L-shaped panels corresponding to the composites with the five RVEs are shown in \autoref{fig:L_frac}, and the loading steps at which the cracks initiate are 61, 67, 65, 65 and 63, respectively, which demonstrates that the shape of the particles in the RVEs has a significant impact on the fracture of the macroscopic structures.
Besides, the crack paths are similar to the results described in \cite{winkler2001experimental,narayan2019gradient}. Therefore, this example illustrates the effectiveness of the
PSM method.
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{L_beam_geo.png}
\caption{Geometry and boundary conditions of the L-shaped composite panel.}\label{fig:sec5.2-2}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$15^\circ$]{
\includegraphics[width=0.3\textwidth]{./angle_15.png}\label{fig:angle_15}}
\subfigure[$45^\circ$]{
\includegraphics[width=0.3\textwidth]{./angle_45.png}\label{fig:angle_45}}
\subfigure[$60^\circ$]{
\includegraphics[width=0.3\textwidth]{./angle_75.png}\label{fig:angle_75}}
\caption{Geometry of RVEs with uniform distribution of ellipsoidal particles with different inclination angles.}\label{fig:angle_geo}
\end{figure}
\begin{table}[H]
\setlength{\abovecaptionskip}{0cm}
\setlength{\belowcaptionskip}{0.3cm}
\centering
\caption{The coefficients of equivalent micromodulus defined by Eq. \eqref{eq:emirco-exa} and critical stretch of ellipsoidal particle reinforced composites with different inclination angles.}\label{tab:angle_rve}
\scalebox{0.8}{
\begin{tabular}{cccccc}
\toprule
& $15^\circ$ & $30^\circ$ & $45^\circ$ & $60^\circ$ & $75^\circ$ \\ \hline
$\bar{a}_{0}$ & $1.91\times 10^{18}$ & $1.88\times 10^{18}$ & $1.86\times 10^{18}$ & $1.84\times 10^{18}$ & $1.82\times 10^{18}$ \\
$\bar{a}_{1}$ & $1.72\times 10^{15}$ & $1.69\times 10^{15}$ & $1.67\times 10^{15}$ & $1.65\times 10^{15}$ & $1.63\times 10^{15}$ \\
$\bar{a}_{2}$ & $-4.38\times 10^{15}$ & $-4.31\times 10^{15}$ & $-4.25\times 10^{15}$ & $-4.21\times 10^{15}$ & $-4.16\times 10^{15}$ \\
$\bar{s}_{01}$ & 0.0050 & 0.0056 & 0.0058 & 0.0062 & 0.0068 \\
$\bar{s}_{02}$ & 0.0070 & 0.0066 & 0.0060 & 0.0056 & 0.0048 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[H]
\centering
\subfigure[$15^\circ$]{
\includegraphics[width=0.32\textwidth]{./L_beam_15_frac.png}\label{fig:L_15}}
\subfigure[$30^\circ$]{
\includegraphics[width=0.32\textwidth]{./L_beam_30_frac.png}\label{fig:L_30}}
\subfigure[$45^\circ$]{
\includegraphics[width=0.32\textwidth]{./L_beam_45_frac.png}\label{fig:L_45}}
\subfigure[$60^\circ$]{
\includegraphics[width=0.32\textwidth]{./L_beam_60_frac.png}\label{fig:L_60}}
\subfigure[$75^\circ$]{
\includegraphics[width=0.32\textwidth]{./L_beam_75_frac.png}\label{fig:L_75}}
\caption{Crack paths of homogenized L-shaped panel structures corresponding to RVEs with different inclination angles.}\label{fig:L_frac}
\end{figure}
\subsubsection{Composite structures with different volume fractions of particles}
We consider the composite structures with the geometry and boundary conditions as shown in \autoref{fig:volume_macro}.
Composite structures reinforced by a uniform distribution of circular particles with a radius of 0.0522 mm are studied for ten particle volume fractions, from 6\% to 24\% in steps of 2\%, as shown in \autoref{fig:volume_geo}. Twenty samples are selected for the RVEs of each volume fraction.
The biaxial tensile boundary condition with $\tilde{u}_1=0.003$ mm along $y_1$-axis is preset on the left and right sides of RVEs, and the middle points on the left and right sides of RVEs are fixed along $y_2$-axis.
The simulations of the RVEs and homogenized structures were implemented by 100 and 166 displacement increments (steps), respectively.
\autoref{fig:volume_curve}(a) displays the expected values of $\bar{a}_0$ in micromodulus defined by Eq. \eqref{eq:emirco-exa} and critical stretch of the RVEs with different volume fractions of particles.
As discussed in Section 5.2.1, the mathematical expectation of the computational results converges as the number of samples with the same statistical characteristics increases; the twenty samples per volume fraction were found sufficient to keep the scatter of the numerical results acceptable.
From \autoref{fig:volume_curve}(a), it can be seen that with the increase of the particle volume fraction, the equivalent micromodulus increases continuously, while the statistical critical stretch first increases slightly and then decreases continuously. This shows that although the particles play a reinforcing role in the composites, more particles do not necessarily mean a greater statistical critical stretch. There exists an optimal reinforcement volume fraction \cite{jamshaid2022natural,ali2014seismic,duxiaoqi}: in this example, the statistical critical stretch reaches its maximum at a particle volume fraction of $10\%$, and beyond $10\%$ the statistical critical stretch decreases as more particles are added.
\autoref{fig:volume_curve}(b) displays the average stress at the load boundary versus imposed displacement curve for the homogenized structures corresponding to the RVEs with different volume fractions of particles.
From \autoref{fig:volume_curve}(b), it can be found that the average stress versus imposed displacement curves for different particle fractions have similar morphology, that is, with the increase of imposed displacement, the stress first increases approximately linearly, and then decreases gradually after reaching the peak value.
The final crack path is shown in \autoref{fig:volume_frac_macro}. It can be found from \autoref{fig:volume_frac_macro} that the crack growth paths of the homogenized structures corresponding to the RVEs with $10\%$ and $24\%$ particle fractions basically remain horizontal, and the crack for the $24\%$ case is longer than that for the $10\%$ case.
By comparing \autoref{fig:volume_curve}(a) and \autoref{fig:volume_curve}(b), it can be found that 10 groups of results can be roughly divided into left and right parts (see \autoref{fig:volume_curve}(a)) and upper and lower parts (see \autoref{fig:volume_curve}(b)), respectively, with the data point in \autoref{fig:volume_curve}(a) or the curve in \autoref{fig:volume_curve}(b) corresponding to $16\%$ particle fraction as the boundary.
From the left part of \autoref{fig:volume_curve}(a), it is easy to see that the curve of the statistical critical stretch lies above that of the equivalent micromodulus when the particle fraction is less than $16\%$, and the opposite holds in the right part of \autoref{fig:volume_curve}(a) for particle fractions greater than $16\%$. Under the same deformation, the smaller the critical stretch, the more easily a bond breaks. On the other hand, under the same bond force, the higher the bond micromodulus, the less the bond elongates and hence the harder it is for the bond to fracture.
Therefore, there exists a competition between the statistical critical stretch and the equivalent micromodulus of composite materials for structural failure.
This conclusion can also be drawn from \autoref{fig:volume_curve}(b). For the cases where the particle fraction is less than $16\%$ (the upper curves), when the statistical critical stretch reaches its maximum (the case of $10\%$ particle fraction), the corresponding curve attains the largest peak stress and imposed displacement.
In addition, the peak stresses of the curves corresponding to $6\%$ and $8\%$ particle fractions are smaller than those of the curves corresponding to $12\%$ and $14\%$ particle fractions, respectively. Although the statistical critical stretch of the former is much larger than that of the latter, the equivalent micromodulus of the former is much smaller, resulting in a lower peak stress. This demonstrates that when the equivalent micromodulus is small enough, even a large statistical critical stretch cannot effectively retard structural damage.
On the other hand, when the particle fraction is greater than $16\%$ (the lower curves), the statistical critical stretch plays a decisive role. It can be seen from \autoref{fig:volume_curve}(b) that even if the equivalent micromodulus gradually increases, the peak stress of the corresponding curve gradually decreases with the gradual decrease of the statistical critical stretch. We can conclude from the above analysis that the equivalent micromodulus and statistical critical stretch compete with each other and jointly affect the fracture of the macroscale structure. Therefore, one can design the microstructure of composite materials to obtain composite structures with specific fracture performance.
\begin{figure}[H]
\centering
\includegraphics[width=0.48\textwidth]{./volume_macro.png}
\caption{Geometry and boundary conditions of the structure with different volume fractions of particles.}\label{fig:volume_macro}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[10\%]{
\includegraphics[width=0.35\textwidth]{./volume_10.png}\label{fig:volume_10}}
\subfigure[24\%]{
\includegraphics[width=0.35\textwidth]{./volume_22.png}\label{fig:volume_22}}
\caption{Geometry of RVEs with uniform distribution of circular particles with different volume fractions.}\label{fig:volume_geo}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.45\textwidth]{./figure_plot_volume.png}}
\subfigure[]{
\includegraphics[width=0.52\textwidth]{./volume_macro_stress_curve.png}}
\caption{(a) The coefficient $\bar{a}_0$ of equivalent micromodulus defined by Eq. \eqref{eq:emirco-exa} and critical stretch versus volume fraction of particles, (b) stress versus imposed displacement for homogenized structures corresponding to RVEs with different volume fractions of particles.}\label{fig:volume_curve}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[10\%]{
\includegraphics[width=0.43\textwidth]{./volume_10_frac_macro.png}\label{fig:volume_10_frac}}
\subfigure[24\%]{
\includegraphics[width=0.43\textwidth]{./volume_24_frac_macro.png}\label{fig:volume_24_frac}}
\caption{Crack paths of homogenized structures corresponding to RVEs with different volume fractions of particles at Step 166.}\label{fig:volume_frac_macro}
\end{figure}
\subsection{Applications of the PSM method to fracture in 3D random composite structures}
In this section, we apply the PSM method to simulate the fracture in 3D random composite structures.
\autoref{Fig:sec5.3-1}(c) shows the geometry and boundary condition with $\tilde{u}_3=0.1$ mm of the composite structures, where a notch with a length of 4 mm, a width of 0.02 mm and a height of 0.8 mm is preset. Two kinds of RVEs reinforced by a uniform distribution of spherical and ellipsoidal particles, whose volume fraction is taken as 7.5\%, are considered, as shown in \autoref{Fig:sec5.3-1}(a).
The biaxial tensile boundary condition with $\tilde{u}_2=0.02$ mm along $y_2$-axis is specified on the left and right surfaces of the RVEs, and the top and bottom surfaces are fixed in the $y_3$ direction, as shown in \autoref{Fig:sec5.3-1}(b).
The BPD simulations of the RVE and homogenized structures were implemented through 50 displacement increments (steps).
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{example3-1.pdf}
\caption{Geometry and boundary conditions of 3D composite structure and related RVEs.}\label{Fig:sec5.3-1}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./frac_3d_1.png}\label{fig:frac_3d_1}}
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./frac_3d_3.png}\label{fig:frac_3d_3}}
\subfigure[]{
\includegraphics[width=0.28\textwidth]{./3d_1_2_stress_curve_micro.png}\label{fig:3d_1_2_curve_mic}}
\caption{Crack path for (a) 3D RVE 1 and (b) 3D RVE 2 at imposed displacement steps 10 and 8, respectively; (c) stress versus imposed displacement for the two kinds of RVEs.}\label{fig:rve_frac_3d}
\end{figure}
Since the 3D composite structure is composed of 160 RVEs, the numbers of elements and nodes of the composite structure are much larger than those of a single RVE and of the homogenized structure, as shown in \autoref{tab:time_compare_3d}. Thus, a single-scale direct BPD fracture simulation of the 3D composite structures is almost impossible.
According to the proposed PSM framework, we first perform the fracture simulation of the RVEs with the microscopic BPD model to determine the equivalent critical stretch. The crack paths and stress versus imposed displacement curves for the two kinds of RVEs are shown in \autoref{fig:rve_frac_3d}. Then, we apply the statistical homogenization method to obtain the equivalent micromodulus.
Finally, the composite structures can be homogenized into macroscopic homogeneous structures, which are simulated by the macroscopic BPD model with the expectations of the micromodulus and critical stretch.
\autoref{fig:frac_3d_point} shows the crack path for the 3D homogenized structures under three-point bending condition, and corresponding dissipative energy \cite{azdoud2014morphing} and stress curves are displayed in \autoref{fig:3d_curve}.
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.42\textwidth]{./3d_point_frac_1.png}}
\subfigure[]{
\includegraphics[width=0.42\textwidth]{./3d_point_frac_2.png}}
\caption{Crack path in the homogenized structures of composites composed of (a) RVE 1 and (b) RVE 2 under three-point bending at step 50.}\label{fig:frac_3d_point}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./3d_energy_curve_macro.png}}
\subfigure[]{
\includegraphics[width=0.33\textwidth]{./3d_stress_curve_macro.png}}
\caption{(a) Dissipative energy and (b) stress versus imposed displacement for homogenized structures of composites composed of RVE 1 and RVE 2.}\label{fig:3d_curve}
\end{figure}
\begin{table}[H]
\setlength{\abovecaptionskip}{0cm}
\setlength{\belowcaptionskip}{0.3cm}
\centering
\caption{Comparison of computational time for 3D composite structures, RVEs and homogenized structures.}\label{tab:time_compare_3d}
\scalebox{0.8}{
\begin{tabular}{cccccccc}
\toprule
\multirow{2}*{} & \multicolumn{3}{c}{Composite with RVE 1} & \quad & \multicolumn{3}{c}{Composite with RVE 2} \\ \cline{2-4} \cline{6-8}
& Composite & RVE 1 & Homogenized structure & \quad & Composite & RVE 2 & Homogenized structure \\ \hline
Elements & 655,360 & 4,096 & 2,744 & \quad & 655,360 & 4,096 & 2,744 \\
CE Nodes & 786,080 & 4,913 & 3,375 & \quad & 786,080 & 4,913 & 3,375 \\
DE Nodes & 5,242,880 & 32,768 & 21,952 & \quad & 5,242,880 & 32,768 & 21,952 \\
Time(s) & - & 560,010 & 367,668 & \quad & - & 571,430 & 351,020 \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Conclusions}
The PD models are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they are still computationally expensive, especially for the fracture simulation of large-scale composite structures with randomly distributed particles. Hence, a novel PSM method has been presented to predict the fracture of composite structures while taking the microscale heterogeneities into account, which makes it possible to efficiently simulate the fracture of composite structures.
A key point of the approach is that the scale separation-based statistical multiscale strategy is used to define a macroscale BPD model with an equivalent micromodulus and a statistical critical stretch that are evaluated based on microscale RVEs.
Consequently, the proposed method can account for the microstructural characteristics of composites when performing macroscopic fracture analysis while at the same time saving computational cost.
The validity and efficiency of the PSM method and its CE/DE numerical algorithm have been verified by comparison with the single-scale direct PD fracture analyses.
Moreover, the equivalent micromodulus and statistical critical stretch compete with each other and jointly impact the fracture performance of macroscale structure.
\section{Acknowledgements}
This research was supported by the National Natural Science Foundation of China (12272082, 51739007), Strategic Priority Research Program of Chinese Academy of Sciences (XDC06030102), Special Scientific Research Program of Shaanxi Provincial Department of Education (22JK0586) and Natural Science Foundation of Chongqing (CSTB2022NSCQ-MSX0296).
\section{Introduction}
\subsection{Cluster Mergers and Their Effect on Member Galaxies}
Mergers between clusters are exciting events, involving around $10^{64}$ ergs of kinetic energy. On cluster-wide scales the effects of the dissipation of this energy are evident in the intracluster medium (ICM). X-ray observations of the ICM in presumed cluster mergers often reveal non-spherical morphologies, sometimes with temperature variations and shocks. Similarly, the presence of diffuse radio emission in the form of relics and halos seems to correlate with known cluster mergers. In such a dynamic environment, what is the effect on individual galaxies?
Any answers are likely complicated by numerous, and often competing, effects. Galaxies in groups which are being accreted by clusters may undergo bursts of star formation as time-varying tidal fields transfer gas to the galaxies' centers \citep{bekki1999}. Similar ``pre-processing'' of galaxies is found in simulations where galaxy-galaxy interactions and tidal perturbations are enhanced in groups and substructures around clusters \citep{gnedin2003}. On a larger scale, when clusters merge with each other they develop shocks in their intracluster gas and interactions between individual galaxies and these shocks may lead to starbursts \citep{roettiger1996}. In the case of individual galaxies already within clusters, the successive influence of numerous high-speed passages may lead to starbursts \citep[``harassment,'' e.g.][]{moore1998}. Gas-rich galaxies entering a cluster would experience ram pressure through interaction with the intracluster gas \citep{gunn1972}, possibly compressing their molecular clouds and initiating bursts of star formation \citep{dressler1983}. However, ram pressure is also a prime candidate for suppressing star formation in clusters, as it can efficiently remove the gas supply of the entering galaxies, thereby extinguishing any star formation \citep[e.g.,][]{fujita1999,quilis2000}. As a result, the issue of whether or not a starburst ever occurs is an open one, with some authors finding evidence for such starburst histories \citep[e.g.,][]{poggianti1999} and others favoring ``strangulation'' of star formation \citep[e.g.,][]{balogh1999}. Lastly, one should bear in mind that all of these arguments may be moot: the member galaxies of any single cluster may already be quiescent and gas poor, leaving little material available for transformation in a cluster merger.
Recently, radio investigations have been used to study galaxy activity in cluster mergers. Radio emission is an excellent indicator of galaxy activity, as it can result from active galactic nuclei (AGN) or star formation \citep[see ][for a review]{condon1992}. In fact, in the absence of any contamination due to an AGN the radio luminosity of a galaxy (typically measured at 1.4~GHz) is directly proportional to its star formation rate \citep[e.g.,][]{yun2001}. Much of the interest in radio investigations of cluster mergers was inspired by A2125, a merging cluster studied in tandem with the relaxed cluster A2645 in \citet{dwar1999} and \citet{owen1999}. The clusters are of the same richness and redshift, yet A2125 has about an order of magnitude more radio galaxies. Subsequent radio studies of presumed cluster mergers include the Shapley supercluster \citep[A3556, A3558, and A3562,][]{venturi1997, venturi2000, giacintucci2004}, A2255 \citep{my2255}, A2256 \citep{my2256}, a large sample of intermediate-redshift clusters \citep{morrison2003}, and a much more comprehensive analysis of A2125 \citep{owen2005a,owen2005b}. The resulting picture may not be so simple, with the fraction of galaxies hosting radio sources increased in some mergers (A2125, A2255), normal in others (A2256), and potentially even decreased in others (A3558). These differences might be explained by noting the examined populations and considering the timing of the merger. The former point is merely that the effects of a cluster merger are likely different on star-forming galaxies than they would be on brighter and more massive AGN, whereas the latter notes that cluster mergers proceed on $\sim$Gyr timescales while the radio emission is associated with events whose duration is about an order of magnitude less. Consequently, there are two clear directives for further observations: 1) to investigate previously-studied clusters to consistent and deep sensitivity limits, and 2) to investigate a larger number of merging clusters in order to better understand the role of merger timing.
\subsection{An Ideal Case Study: The Shapley Supercluster}
The Shapley supercluster represents an ideal testbed for further investigation of the effects of cluster mergers on the evolution of member galaxies. The true extent of the supercluster is not determined, as redshift surveys covering increasing areas of the sky have continued to reveal its enormous scale through interconnected structures \citep[e.g.,][]{drinkwater2004}. At the center of the supercluster lies A3558 (or Shapley 8), a very rich cluster (Abell richness class 4) flanked by A3556 to the west and A3562 as well as a pair of poor clusters identified in both the optical and X-ray \citep[SC1327-312 and SC1329-313, see e.g.,][]{breen1994,bardelli1998b} to the east. This collection of clusters forms the core of the supercluster, and has been studied extensively at a variety of wavelengths. Furthermore, all of these clusters lie in the plane of the sky at $z\sim0.048$ making this a particularly attractive supercluster for a single observational campaign.
Several research collaborations have committed large amounts of optical observing time to the collection of hundreds of galaxy spectra in the Shapley supercluster, and the resulting dynamical assessments depict an active region rich in substructure. The supercluster has been studied on large scales (tens of Mpc) in an attempt to estimate its total mass \citep{quintana1995,quintana2000,reisenegger2000,drinkwater2004} since it lies in the direction of the Local Group's motion relative to the cosmic microwave background. \citet{bardelli1998b} presented a substructure analysis of the core of the supercluster, drawing on the velocities presented in a sequence of works \citep{bardelli1994,bardelli1996,bardelli1998a}. They applied the {\scshape dedica} algorithm to identify up to 21 significant substructures within the supercluster core, and concluded that either of two hypotheses held: 1) the core of the Shapley supercluster represents a cluster-cluster merger viewed shortly after one cluster core has passed the other for the first time, or 2) the entire region is a complex of lesser mergers (such as groups merging with each other and with larger clusters).
Similarly, X-ray observations indicate a dynamically rich system. The cluster known as SC1327-312 was first identified on the basis of {\sl Einstein} data \citep{breen1994}, and subsequent studies with various X-ray observatories have continued to reveal interesting properties: for example, {\sl ROSAT} \citep{kull1999,ettori2000}, {\sl ASCA} \citep{hanami1999}, {\sl Beppo-SAX} \citep{bardelli2002}, and {\sl XMM-Newton} \citep{finoguenov2004}. All of the five clusters are linked by filamentary X-ray structure \citep{kull1999}, and appear to be interacting. \citet{hanami1999} suggested a dynamical sequence for the clusters in the chain including a relaxed poor cluster (SC1327-312), two rich ``developed'' clusters with substructure (A3558 and A3562), an ongoing merger (SC1329-313), and a candidate post-merger (A3556). Their interpretations were largely based on departures from expected relationships between $L_X$, $T$, and $\sigma_v$ (optical velocity dispersion). The ongoing merger in SC1329-313 was deduced from its fitted Fe K$\alpha$ line, indicating the cluster was out of ionization-equilibrium (the fitted energy of the iron line was consistent with the H-like iron K$\alpha$ line instead of the usual He-like line expected for the measured temperature of the cluster). \citet{bardelli2002} were only able to confirm this result for SC1329-313 at the 2$\sigma$ level, and instead argued that the whole system represented the aftermath of a major cluster-cluster merger seen after the first core passage, an interpretation also based on their optical and radio investigations. In this interpretation, the clusters other than A3558 are the remains of the merger. \citet{finoguenov2004} argued that SC1329-313 had recently passed through the northern outskirts of A3562. Despite the varied interpretations of the merger history and partners, it is clear that the core of the Shapley supercluster is a dynamically-rich system.
Prior radio works targeting specific regions within the system also reveal a complex environment. \citet{venturi1997} studied A3556 using the Australia Telescope Compact Array (ATCA) and Molonglo Observatory Synthesis Telescope (MOST), identifying nine cluster radio galaxies including the unusual narrow-angle tail source J1324-3138 \citep{venturi1998}. This appears to be the remnant of a radio galaxy, where the central engine has switched off possibly in response to the cluster merger event. \citet{venturi2000} extended the same general radio survey of the Shapley supercluster to include the cores of the other clusters in the chain, increasing the number of cluster radio galaxies to 28. Using these identifications to construct the radio luminosity function (RLF), they found a deficit of powerful radio galaxies in the system relative to the cluster RLF determined in \citet{ledlow1996}. They argued that cluster mergers may switch off existing radio sources, as appeared to be the case for J1324-3138. \citet{giacintucci2004} continued the overall survey, in particular covering the region around A3562. They identified 33 cluster radio galaxies (26 of which were not in the preceding works), and found that the deficit of powerful radio galaxies noted in \citet{venturi2000} was restricted to A3558. Among these sources were a number of lower radio luminosity sources, presumed to be starbursts.
In this paper, a comprehensive radio and optical view of the entire core of the supercluster is presented. The radio data were obtained with the NRAO Very Large Array (VLA) and consist of a 63-pointing mosaic which covers nearly 7 square degrees at a fairly uniform sensitivity of $\sim$80 $\mu$Jy per 16\arcsec{} beam. This makes it unique in that it covers the entire core of the Shapley supercluster in a single observational campaign, and implies that AGN and galaxies forming stars at rates as low as 1.0 M$_\odot$ year$^{-1}$ are detected. The radio data are complemented by new optical imaging in $B$ and $R_c$. The areal coverage and uniform sensitivity of these data represent the key advantages of this study: populations of active galaxies down to low activity levels may be studied across the full range of environmental conditions within the supercluster. This enables stronger conclusions about the potential correlation of increased activity within individual galaxies to regions of dynamical activity within the clusters which make up the supercluster.
For the sake of consistency with prior papers investigating the radio galaxy populations of nearby clusters, a cosmology with $H_0 = 75$ km~s$^{-1}${} Mpc$^{-1}$ and $q_0 = 0.1$ has been adopted. This yields a luminosity distance of 186.8 Mpc to the supercluster (using $z=0.048$), meaning $1^{\prime\prime} = 0.82$ kpc. For reference, these values become $D_L = 210.3$ Mpc and $1^{\prime\prime} = 0.93$ kpc for the {\it WMAP} cosmology with $H_0 = 71$ km~s$^{-1}${} Mpc$^{-1}$ and $\Omega_M = 0.27$, $\Omega_\Lambda = 0.73$.
\section{Data and Reductions}
\subsection{Radio}
The initial radio observations were performed in 2001 June with the VLA in its CnB configuration. This configuration, observing at a frequency of 1.4~GHz, is well suited to the study of star-forming galaxies in the Shapley supercluster with an angular resolution of $\sim$15\arcsec{} (12.4 kpc), meaning low surface brightness emission spread over the disk of a galaxy will not be missed. The extended north arm of the configuration also results in a more circular beam. The program was scheduled over five days, each consisting of a 6-hour block centered on transit of A3558 (see Table \ref{tbl-radobs} for a listing of all pointings and observation dates). Unfortunately, data were lost due to thunderstorms and strong winds stowing the array on 2001 June 20 and mechanical problems on 2001 June 21. To replace these data, a single additional 6-hour track was scheduled during the subsequent CnB configuration of the VLA in 2002 September. The conditions for these observations were less favorable, as they were performed during daytime where the Sun is a source of additional noise.
The observational strategy consisted of observing the core of the supercluster through a mosaic of individual VLA pointings. These were arranged in a hexagonal grid to provide nearly uniform sensitivity across the entire area, with an 18\arcmin{} spacing between adjacent pointings. This mosaic strategy is the same as that employed by the NRAO VLA Sky Survey \citep[NVSS,][]{condon98} and Faint Images of the Radio Sky at Twenty centimeters \citep[FIRST,][]{becker95} although with a tighter grid spacing. Originally, 70 pointings were planned but this was pared down to 63 due to the lost observing time. The seven pointings removed from the grid were taken from the edges to minimize the effect on the final mosaic coverage. The final 63 pointings are summarized in Table \ref{tbl-radobs}, and a footprint of the covered area may be viewed in Figure \ref{fig-foot}.
The radio data were obtained in line mode at 1.4~GHz, consisting of 28 channels each of 3.125~MHz bandwidth (two sets of seven channels at each polarization). Each pointing center was visited only once to minimize the amount of time spent moving among pointings. To account for the varying system temperature of the VLA at 1.4~GHz as a function of elevation, the dwell times were adjusted in order to achieve a similar sensitivity at each pointing. The minimum dwell time (for sources observed near transit) was 20 minutes, while sources at the beginnings and ends of the runs were observed for 27 minutes. In hindsight, this strategy compromised the quality of the ($u,v$) coverage. For the most part, the pointings were prioritized such that the edges of the supercluster corresponded to those with poorer ($u,v$) coverage. To improve the ($u,v$) coverage of a few central pointings, they were revisited for brief periods ($\sim10$ minutes) during the 29 September 2002 observations. The effect of the ($u,v$) coverage will be addressed in more detail below. Flux calibration was achieved using 3C286, with calibration of phases and bandpasses performed from roughly hourly observations of the nearby calibrator source J1316-336.
The NRAO's Astronomical Image Processing System (AIPS) was used to calibrate and image the data, following the usual procedures for 1.4~GHz data.\footnote{See {\url http://www.vla.nrao.edu/astro/guides/lowfreq/analyses/}} Briefly, the data for each pointing were calibrated (including antenna-specific weights) and reduced individually. Ideally, one would handle the ($u,v$) data for each pointing in a manner which both produces a consistent beam across all pointings and does so with good characteristics (i.e., a nice Gaussian with minimal sidelobes). This is where the effect of moving to each position in the pointing grid only once is most relevant. For several of the pointings at the edges of the pointing grid, the resulting beam is highly elongated. This renders it virtually impossible to obtain a consistent beam across all pointings, necessitating the compromise of choosing parameters which produce beams with the same area and using a circular restoring beam of equal area in the final maps. Table \ref{tbl-radobs} indicates the initial beam sizes. Each pointing was imaged in $\sim10$ fields, with four fields covering the primary beam and additional fields dedicated to outlying bright sources. Iterations of imaging and self-calibration were performed until the final images for each pointing were created. At this point, the cleaned sources were restored with a 16\arcsec{} circular beam for combining into the final mosaic. The rms noise of these individual pointings ranged from about 75 to 125 $\mu$Jy beam$^{-1}$.
The final mosaic was created by combining the reduced images for all 63 pointings. At this stage, the flanking fields dedicated to bright sources were ignored and only the four fields covering the primary beam for each pointing were used. The contribution of each image to the mosaic was properly weighted by its signal-to-noise, as discussed in \citet{condon98}, with the images truncated at the point where the response of the VLA had dropped to $30\%$ of its peak (i.e., each pointing images about a 40\arcmin{} diameter). The noise characteristics of the final mosaic are quite good, with a typical rms of about 80 $\mu$Jy beam$^{-1}$ over the area described as within 2 Mpc of the centers of the individual Abell clusters. The noisiest region within this area has a local noise around 115 $\mu$Jy beam$^{-1}$ and is found to the East of the center of Abell 3558. It is caused by several bright sources, particularly the quasar at J133019.1-312259 \citep[e.g.,][]{hewitt1993}. The cleanest regions have rms noise as low as 62 $\mu$Jy beam$^{-1}$. The final mosaic is available upon request from the author.
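For readers wishing to reproduce the weighting scheme, the sketch below combines primary-beam-corrected pointing images with the signal-to-noise weighting of \citet{condon98}; the array names and the common-grid bookkeeping are illustrative assumptions, not the actual AIPS procedure used.
\begin{verbatim}
import numpy as np

def mosaic(images, beams, sigmas, cutoff=0.3):
    """Signal-to-noise weighted combination of pointing images.

    images : 2D arrays on a common grid, already primary-beam corrected
             (NaN where a pointing does not cover a pixel)
    beams  : primary-beam response arrays (1.0 at each pointing center)
    sigmas : per-pointing rms noise measured at the pointing center
    Pointings are truncated where the response drops below `cutoff`.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, pb, sig in zip(images, beams, sigmas):
        w = (pb / sig) ** 2                       # inverse variance weight
        w = np.where((pb >= cutoff) & np.isfinite(img), w, 0.0)
        num += w * np.nan_to_num(img)
        den += w
    out = np.full_like(num, np.nan)
    np.divide(num, den, out=out, where=den > 0)   # NaN where no coverage
    return out
\end{verbatim}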
The catalog of radio sources was then created using the task SAD, which identifies peaks in a map and fits them with Gaussians. Parameters of importance are saved from these fits, including the source position, peak flux density, integral flux density, major and minor axis, plus position angle. SAD also creates a residual map, which was inspected to identify any sources for which Gaussian fits failed (usually strong extended sources). These were manually added to the radio source catalog. The relative flux scale was found to be consistent with the NVSS via comparison of the fluxes of unresolved sources.
\subsection{Optical}
Optical images in both $B$ and $R_c$ (hereafter referred to simply as ``$R$'') filters were obtained in 2002 March and April using the Cerro Tololo Inter-American Observatory 1.5m telescope with the Cassegrain Focus CCD Imager. Using the $f/7.5$ focus provided a wide field of view of 14\farcm8 with 0\farcs44 pixels. Even so, the large area of the core of the supercluster required numerous individual telescope pointings: 152 fields in each $B$ and $R$, arranged in a simple grid with a 14\arcmin{} spacing. Figure \ref{fig-foot} depicts the coverage of the optical fields in relation to the radio mosaic. Data for this project were collected on the nights of March 24 and 25 (kindly provided by Michael Ledlow during a run for a related program), and March 29, March 31, and April 1. Sky brightness was an issue, as these dates straddled the full Moon. The exposures were prioritized such that the $B$ images were collected during the darker conditions, and exposure times were adjusted slightly for the conditions on a nightly basis. The $R$ exposures ranged from 120 to 150 seconds, while the $B$ exposures were either 180 or 240 seconds. The seeing varied from 1\farcs1 (about the best possible with the $f/7.5$ focus) to 2\farcs0, with most exposures falling in the $1\farcs3 - 1\farcs4$ range.
Data reduction followed the standard reduction steps. The images were bias subtracted and then flat fielded using twilight frames collected during several of the nights. The astrometry was registered using about 20 unsaturated USNO A2.0 stars per field \citep{monet2000}, yielding rms errors of under 0\farcs25. The photometry was set by observations of ``Selected Area'' fields from \citet{landolt1992}, performed nightly and at a range of airmass. The derived relationships were:
\begin{equation}
R = 23.29 - 2.5\log (cts) + 2.5 \log (t) - 0.09X
\end{equation}
\begin{equation}
B = 23.16 - 2.5\log (cts) + 2.5 \log (t) - 0.29X - 0.05(B - R)
\end{equation}
where $cts$ is the background-subtracted counts from the source, $t$ is the integration time measured in seconds, and $X$ is the airmass. The consistency of the photometry was checked using stars at the edges of the science exposures, where the overlap of the pointings produced numerous comparison objects (see below).
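For concreteness, the sketch below applies these equations; since $B$ appears on both sides of Equation (2) through the color term, the relation is solved linearly for $B$ once $R$ is known. Here $\log$ is assumed to be base 10, as is standard in photometry, and the function names are illustrative rather than code from the original reduction.
\begin{verbatim}
import numpy as np

def r_mag(cts, t, X):
    # Equation (1): R from counts, exposure time, and airmass
    return 23.29 - 2.5*np.log10(cts) + 2.5*np.log10(t) - 0.09*X

def b_mag(cts, t, X, R):
    # Equation (2): B = b0 - 0.05*(B - R), which rearranges to
    # B = (b0 + 0.05*R) / 1.05
    b0 = 23.16 - 2.5*np.log10(cts) + 2.5*np.log10(t) - 0.29*X
    return (b0 + 0.05*R) / 1.05
\end{verbatim}
This ordering mirrors the reduction described above, in which the color term was introduced only when the $B$ and $R$ catalogs were merged.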
Lists of optical sources were generated by field using SExtractor \citep{bertin1996}. The most important parameters proved to be those related to the background, because scattered light was an issue, particularly for the $R$ images collected during times of high sky brightness. This produced images with a gradient in the apparent background which often changed sharply along one edge of the CCD. Choosing too large a background mesh size failed to properly respond to this effect, while smaller background mesh sizes performed poorly around the larger real objects. In this latter case, some of the counts corresponding to the bigger elliptical galaxies and brighter saturated stars are included in the background, resulting in extracted magnitudes for these objects that are fainter than their true values. Since one of the main uses of the optical magnitudes is to select galaxies brighter than prescribed limits, the chosen mesh size erred more toward correct handling of the scattered light. This means that the reported magnitudes for the brightest elliptical galaxies ($m_R \lesssim 14.5$) are up to $\sim0.1$ mags fainter than their real values. The derived magnitudes for all fields correspond to the fixed aperture size of 15\farcs9 in radius, which is the Gunn-Oke aperture \citep{gunn1975} at the assumed redshift of the supercluster ($z=0.048$). Initially, the color term in the above photometric equations was ignored and later introduced when the output catalogs were merged. Further, when the source catalogs for the individual fields were merged, the magnitudes were also corrected for Galactic extinction using the values of \citet{schlegel1998}, based on the $A_B$ and $A_R$ values for the center of each 14\farcm8 field. Limiting magnitudes, based on inspection of number count histograms, ranged from 18.5 to 19.5 in $R$ and from 19.5 to 21 in $B$.
The consistency of the photometry from field to field was then checked using the regions of overlap. The magnitude errors reported by SExtractor are based simply on counting statistics, so this step essentially determines the errors resulting from application of the above photometric equations to the collected data. It would also reveal any magnitude zero point shifts as would occur under non-photometric conditions. All designated stars whose photometry was not flagged by SExtractor were used. The consistency of the $B$ photometry was excellent, with a standard deviation under 0.05 mag for objects with magnitudes brighter than $B=17$ and still under 0.15 mag for galaxies as faint as $B=18$. As previously suggested, the $R$ photometry was less consistent with a standard deviation of less than 0.1 mag out to $R=16$ and rising to 0.2 mag at $R=17$. For simplicity, these findings have been generalized and an additional error of 0.05 mag in $B$ and 0.1 mag in $R$ have been included in the reported photometry. Lastly, the merged catalog of all optical sources was edited to remove duplicates (see additional discussion below).
\subsection{Source Identification}\label{sec-id}
The radio and optical source catalogs were then merged to create the final list of radio galaxies. The adopted conventions for creation of the radio galaxy catalog, implemented schematically in the code sketch following the list, were:
\begin{itemize}
\item{The radio peak flux must be greater than 330 $\mu$Jy, chosen as a 5$\sigma$ detection in the regions of the mosaic map with the lowest noise.}
\item{Either the radio peak flux or integral flux must be greater than five times the local rms noise, as evaluated in a 6\arcmin{} box centered on the peak in the radio emission.}
\item{The galaxy must be brighter than $m_R = 17.36$, which corresponds to $M_R = -19$ in the adopted cosmology. This limit is one magnitude fainter than that used in \citet{my2255} and related papers, allowing the present study to sample cluster radio galaxies down to fainter levels. It is also safely brighter than the completeness limit of the worst $R$ band fields. Although radio sources with detected optical counterparts fainter than 17.36 are present in the data, most such radio galaxies may safely be assumed to be background objects.}
\item{The separation between radio and optical position must be less than 7\arcsec{} (except in cases of extended powerful radio galaxies). The probability of chance coincidence of radio and optical objects satisfying the above criteria was evaluated by shifting the optical catalog by arbitrary amounts and re-performing the correlation. From this analysis, it is expected that about 4 sources in the final list are false. However, the false detection probability for sources within 2 Mpc of the cluster centers and with $M_R \leq -20$ and $L_{1.4} \geq 6.8 \times 10^{21}$ W Hz$^{-1}$ is much lower. These are the limits used for the comparison sample in the radio galaxy fraction analysis (see Section \ref{sec-rgfrac}), and the false detection probability corresponding to the 7\arcsec{} separation limit approximately matches that used in construction of that comparison sample.}
\end{itemize}
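These criteria can be expressed as a schematic cross-match, sketched below. The brute-force pairing and record layout are purely illustrative, and the special handling of extended radio galaxies and the visual inspection described next are omitted.
\begin{verbatim}
import numpy as np

def match_radio_optical(radio, optical, sep_max=7.0):
    # radio  : dicts with ra, dec [deg]; peak, integ,
    #          local_rms [uJy or uJy/beam]
    # optical: dicts with ra, dec [deg]; m_R
    matched = []
    for r in radio:
        if r['peak'] <= 330.0:                  # global 5-sigma floor
            continue
        if max(r['peak'], r['integ']) <= 5.0 * r['local_rms']:
            continue                            # local 5-sigma test
        for o in optical:
            if o['m_R'] > 17.36:                # M_R = -19 limit
                continue
            dra = (r['ra'] - o['ra']) * np.cos(np.radians(o['dec']))
            sep = 3600.0 * np.hypot(dra, r['dec'] - o['dec'])
            if sep < sep_max:
                matched.append((r, o, sep))
    return matched
\end{verbatim}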
Perhaps the most critical step in source identification was a thorough visual inspection. The radio contours were overlaid on the optical images along with markers identifying sources in both the radio and optical catalogs. In the case of the radio data, this step allowed for extended radio galaxies to be identified in addition to any other problem cases not fit by SAD. The extended radio sources are often missed because the peak in their emission does not necessarily coincide with the optical location of the galaxy. For the optical data, the visual inspection provided a check on the SExtractor results including the star/galaxy segregation for problem objects such as close pairs of faint stars and galaxies. Such spuriously identified objects were removed from the catalog of optical sources. The use of markers for each of the radio and optical catalogs also enabled the removal of any duplications which were missed in the previous steps. Finally, a small number of optical galaxies in close proximity to bright stars were absent from the SExtractor catalogs. Magnitudes for these galaxies were determined manually by summing counts within irregularly shaped apertures designed to avoid contamination by the nearby stars.
The final list is presented in Table \ref{tbl-radgals}. This table includes the optical position, magnitudes and color, integral radio flux, peak radio flux, a marker indicating whether the radio emission was unresolved, local noise at the location of the radio source, and separation between the optical and radio positions. Those sources for which the AIPS task JMFIT found a minimum size of zero for the major axis were assumed to be unresolved. For such sources, the fitted peak flux density is a better representation of the true flux of the source than the fitted integral radio flux \citep[e.g., see][]{owen2005a}. In addition, the NASA/IPAC Extragalactic Database\footnote{NED is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.} (NED) was searched to determine whether velocity information was available for each of the 210 entries in Table \ref{tbl-radgals}. A total of 123 galaxies had public velocity information, including at least 104 members of the Shapley supercluster (see next Section). This information is also presented in Table \ref{tbl-radgals}. In all subsequent text, the 210 radio-detected galaxies will be referred to simply as ``radio galaxies,'' with the 104 spectroscopically-confirmed members of the Shapley supercluster called ``cluster radio galaxies'' or sometimes ``cluster members.''
\section{Analysis}
\subsection{Overview of Radio Galaxy Population}\label{sec-overview}
To determine which radio galaxies belong to the Shapley supercluster, the individual cluster recession velocities and dispersions ($\sigma_v$) of \citet{bardelli1998a} were used. Allowing galaxies within $\pm 3\sigma_v$ of any cluster/substructure, the minimum and maximum velocities describing the supercluster are 10934 km~s$^{-1}${} and 17684 km~s$^{-1}$. In practice, these limits correspond to those for the brighter galaxies in A3558 \citep[$<v> = 14309$ km~s$^{-1}${} and $\sigma_v = 1125$ km~s$^{-1}$;][]{bardelli1998a}. As noted previously, there are 104 cluster member radio galaxies identified in the present study, nearly doubling the total identified in prior studies. These include all of the cluster radio galaxies reported in the prior radio studies of the supercluster \citep{venturi1997,venturi2000,giacintucci2004}, with the exception of two spirals with low integral flux.\footnote{13:32:05.6 -31:52:30 at 0.49 mJy and 13:30:52.1 -32:08:56 at 0.73 mJy \citep{giacintucci2004}. The local noises at these positions in the mosaic radio map are each about 75 $\mu$Jy beam$^{-1}$ rms.} Of the 17 radio galaxies with velocities formally placing them outside the supercluster, 6 are background radio sources and 11 are foreground. The majority of these foreground objects are likely related to the supercluster, having velocities greater than 9000 km~s$^{-1}${} and typically located in the eastern region of the studied area. The other three foreground galaxies belong to the Hydra-Centaurus wall \citep[$\sim4000$ km~s$^{-1}$, see][]{drinkwater2004}. A further two radio galaxies were associated with galaxy pairs in NED. Although the galaxies hosting the radio emission did not have NED velocities, their companions did have cluster velocities and hence they may tentatively be considered cluster members which would bring the total to 106. There are 87 remaining radio galaxies for which no velocity data are available. Of these, 19 are probable background galaxies on the basis of their $B - R$ colors (see below).
Table \ref{tbl-radgals} includes nine galaxies with radio emission extended beyond their optical sizes, seven of which are cluster members (see Figures \ref{fig-132145} - \ref{fig-133542}). Of the cluster members, three have $L_{1.4} \geq 10^{23}$ W Hz$^{-1}$ and are thus in the realm of classical Fanaroff-Riley objects \citep{fr1974}: J132357-313845, J132802-314521, and J133331-314058. In each case, the radio morphology appears to be ``head-tail'' in nature. J132357-313845, depicted in Figure \ref{fig-132357}, is discussed at length in \citet{venturi1998}. Similarly, J133331-314058 (Figure \ref{fig-133331}) is analyzed in detail in \citet{venturi2003}. The third possible head-tail source, J132802-314521, consists of a strong core centered on the galaxy plus what appears to be a diffuse tail (Figure \ref{fig-132802}). It is possible that these features are unrelated and the potential tail is a background source. However, the luminosity of the core portion of the radio emission is $1.1 \times 10^{23}$ W Hz$^{-1}$. Such high luminosities are usually associated with extended emission, lending credence to the interpretation of the diffuse emission as a low surface brightness extension of the source. With this emission, the luminosity becomes $1.3 \times 10^{23}$ W Hz$^{-1}$. Similarly, the optical magnitude of the galaxy ($M_R = -22.1$) is typical of Fanaroff-Riley sources. A caveat to this interpretation is that the higher resolution observations of \citet{venturi2000} (10\arcsec $\times$ 5\arcsec) did not resolve the core emission, as might be expected if it were a jet associated with the diffuse emission. The detection of up to three head-tail sources across the clusters at the core of the Shapley supercluster is typical of rich clusters in general \citep[e.g., see][]{ledlow1995}. An additional four extended radio galaxies belonging to the cluster have general morphologies which would place them among Fanaroff-Riley objects, although in each case the luminosity is below $10^{23}$ W Hz$^{-1}$. These include: J132145-310300 (Figure \ref{fig-132145}), with emission extended to the north and south suggestive of weak ``jets'' but $L_{1.4}$ of only $3.2 \times 10^{22}$ W Hz$^{-1}$; J132206-314616 \citep[Figure \ref{fig-132206} and see][]{venturi1997}, which has the appearance of a compact double but a luminosity of only $3.0 \times 10^{22}$ W Hz$^{-1}$, much lower than the $L_{1.4} \geq 10^{25}$ W Hz$^{-1}$ typical of such sources; J133048-314325 (Figure \ref{fig-133048}), with $L_{1.4} = 3.1 \times 10^{22}$ W Hz$^{-1}$ and discussed in more detail in Section \ref{ssec-evolution}; and J133542-315354 (Figure \ref{fig-133542}) at $6.0 \times 10^{22}$ W Hz$^{-1}$, which might be contaminated by emission from faint background galaxies to the west. The remaining two extended radio galaxies are strong background AGN: J132210-312805 (Figure \ref{fig-132210}) exhibits the morphology of a strong double-lobed radio source, and J132950-312258 (Figure \ref{fig-132950}) with a NED velocity of 58775 km~s$^{-1}${} implying $L_{1.4} = 9.7 \times 10^{23}$ W Hz$^{-1}$.
\subsection{The Cluster Red Sequence and Radio Galaxy Colors}
An advantage to having optical data in two filters is that the cluster red sequence (also referred to as the E/S0 ridge line) may be identified. This is useful in characterizing the radio galaxies, particularly those for which spectra are not available. Galaxies which lie along the red sequence are most likely supercluster member ellipticals, while galaxies above the red sequence (i.e., larger $B - R$ for a given $R$ magnitude) may safely be assumed to be background objects. Bluer objects may be star-forming galaxies associated with the supercluster, but may also be foreground or background objects. In most cases, NED velocities enable this determination.
The red sequence was fit using the method of \citet{lopezcruz2004}, with a few small variations. Briefly, the red sequence is parametrized as a line such that $y_i = a + bx_i$, where $y_i$ is the $B - R$ color for a given galaxy, $x_i$ is its $R$ magnitude, and $a$ and $b$ are the intercept and slope, respectively. To fit the red sequence, the deviation of each galaxy from a specified $a$ and $b$ is determined. The distribution of these deviations is evaluated using the biweight location and scale \citep{beers1990}, and a range of $a$ and $b$ values is searched to find the pair that minimizes these quantities. All galaxies brighter than $m_R=17.36$ were used in the fits. Instead of using a radial cutoff of 0.2$r_{200}$ as in \citet{lopezcruz2004}, in the present paper the red sequence was fit for galaxies within 2 Mpc of the adopted cluster centers. In addition, the galaxies used in fitting the red sequence were iteratively 3$\sigma$ clipped to remove outliers. This clipping generally only removed very red background galaxies. The resulting fits are depicted in Figure \ref{fig-redseq} and summarized in Table \ref{tbl-rsfit}.
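The essence of the fit is captured by the following sketch, which uses the biweight estimators available in astropy and, as a simplification, minimizes only the biweight scale of the residuals (the location is absorbed into the search over the intercept); the grids and the clipping loop are illustrative assumptions rather than the original implementation.
\begin{verbatim}
import numpy as np
from astropy.stats import biweight_location, biweight_scale

def fit_red_sequence(R, BR, a_grid, b_grid, n_clip=3):
    # Search for BR = a + b*R minimizing the biweight scale of
    # the color residuals, with iterative 3-sigma clipping.
    best = (np.inf, None, None)
    for a in a_grid:
        for b in b_grid:
            keep = np.ones(len(R), dtype=bool)
            for _ in range(n_clip):
                res = BR[keep] - (a + b*R[keep])
                c, s = biweight_location(res), biweight_scale(res)
                keep[keep] = np.abs(res - c) < 3*s
            s = biweight_scale(BR[keep] - (a + b*R[keep]))
            if s < best[0]:
                best = (s, a, b)
    return best        # (scale, intercept a, slope b)
\end{verbatim}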
The fits for the individual clusters have steeper slopes than those of similar redshift clusters reported in \citet{lopezcruz2004}. The relationship derived in that study predicts a slope of $-0.047$, shallower than all of the fitted slopes in the present study. However, this difference is unlikely to be significant. A simple test is to evaluate the colors predicted by the different fits in Table \ref{tbl-rsfit} for representative $R$ magnitudes of cluster galaxies. For galaxies with $m_R$ from 12.7 to 17.3, the fits for the individual clusters produce colors which vary by only 0.1 -- less than the photometric errors for the galaxies. For context, the derived relation for the full galaxy sample regardless of radial separation from the cluster cores has a slope of $-0.054$, comparable to the generalized results of \citet{lopezcruz2004}. As the primary purpose of the red sequence fits for this paper is the identification of background galaxies and categorization of cluster radio galaxies as star-forming or AGN, further analysis of the red sequence fits is deferred.
Colors for the radio galaxies presented in Table \ref{tbl-radgals} are depicted in Figure \ref{fig-colors}. The change in the nature of the galaxy activity is apparent. The optically-brightest galaxies predominantly lie on the cluster red sequence and are presumably powered by AGN. Moving to fainter optical magnitudes, bluer colors are more prevalent as star formation becomes the origin for the radio emission.
\subsection{Radio Galaxy Fractions}\label{sec-rgfrac}
\subsubsection{Analysis by Cluster}
One of the key science drivers for studying the Shapley supercluster is to assess the fraction of active galaxies in relation to other clusters of galaxies. For this purpose, the analysis presented in \citet[][hereafter MO03]{my2255} for A2255 is performed for the individual clusters in the Shapley supercluster using the 18 nearby clusters presented in \citet{miller2001} as a comparison sample. Briefly, the radio galaxy fraction for a given cluster is evaluated as the number of radio galaxies with luminosity greater than a prescribed limit of $6.8 \times 10^{21}$ W Hz$^{-1}$ divided by the total number of galaxies. This radio luminosity is based on the completeness limit of the NVSS at the most distant cluster of the comparison sample, and assumes a spectral index of 0.7 for the k correction (where $S_\nu \propto \nu ^ {-\alpha}$). In the Shapley supercluster, it corresponds to a flux of 1.64 mJy which is well above the lower limits of the full radio mosaic. Velocity information is applied in order to remove foreground and background objects from the radio galaxies which comprise the numerator of the fraction. The allowed velocity ranges for each cluster, taken from \citet{bardelli1998a} and corresponding to $\pm 3\sigma_v$ about each cluster's systemic velocity, were: A3556, 12428 -- 16286 km~s$^{-1}$; A3558, 10934 -- 17391 km~s$^{-1}$; A3562, 11753 -- 17231 km~s$^{-1}$; SC1327-312, 12771 -- 16971 km~s$^{-1}$; and SC1329-313, 12520 -- 15921 km~s$^{-1}$. Complete velocity information is unavailable for the denominator, so the galaxy counts are corrected for contamination as in MO03 by assuming $N = \mathcal{N}10^{0.6m}$, where $\mathcal{N} = 1.26 \times 10^{-5}$ galaxies steradian$^{-1}$. This value of $\mathcal{N}$ was derived directly from regions outside the comparison sample clusters. The analysis is performed for galaxies with $M_R \leq -20$ and within 2 Mpc projected separation of the respective cluster centers. The adopted center positions were (J2000 coordinates): A3556, 13:24:06.2 -31:39:38; A3558, 13:27:54.8 -31:29:32; A3562, 13:33:31.8 -31:40:23; SC1327-312, 13:29:47.0 -31:36:29; and SC1329-313, 13:31:36.0 -31:48:46. As in MO03, the radio galaxy fractions are evaluated for optically bright objects ($M_R \leq -22$), intermediate objects ($-22 < M_R \leq -21$, a range which includes $M^*$), and faint objects ($-21 < M_R \leq -20$). The actual testing is performed using a $\chi ^2$ statistic which evaluates the probability that a given cluster deviates from the fraction predicted by the pooled comparison sample.
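The construction of the fractions can be sketched as follows, reading $N = \mathcal{N}10^{0.6m}$ as the cumulative background count brighter than apparent magnitude $m$ per steradian; the function and variable names are illustrative assumptions.
\begin{verbatim}
import numpy as np

CALN = 1.26e-5      # galaxies per steradian, off-cluster regions

def radio_fraction(n_radio, n_gal_obs, omega_sr, m_faint, m_bright):
    # n_radio   : confirmed members above 6.8e21 W/Hz
    # n_gal_obs : raw counts in the magnitude bin within 2 Mpc
    # omega_sr  : solid angle of the 2 Mpc aperture [sr]
    # m_*       : apparent-magnitude edges of the bin
    background = omega_sr * CALN * (10**(0.6*m_faint)
                                    - 10**(0.6*m_bright))
    return n_radio / (n_gal_obs - background)
\end{verbatim}
The resulting cluster fraction is then compared with the pooled comparison-sample fraction via the $\chi^2$ statistic; a worked numerical example appears in the comparison with G04 below.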
Table \ref{tbl-abs} is provided to assist the reader in identifying which galaxies from Table \ref{tbl-radgals} enter the radio galaxy fraction analysis. In addition to the galaxies' optical positions, it includes their absolute $R$ magnitudes, radio luminosities (computed using integral fluxes for resolved sources and peak fluxes for unresolved sources, as described in Section \ref{sec-id}), and projected distance to each of the five clusters. There are 40 galaxies which meet the requirements outlined above, plus an additional seven for which velocities are not available. Of these seven, two are the aforementioned members of galaxy pairs whose partners have measured velocities placing them within the supercluster. As such, they are probable members of the supercluster. The remaining five galaxies include four potential cluster members and a probable background galaxy, evaluated on the basis of their optical colors. Analysis of radio galaxy fractions has been performed both with and without the six possible supercluster galaxies which lack direct velocity measurements (the presumed background galaxy was removed from consideration). As they are distributed across the supercluster, at most two such galaxies enter the calculation for any individual cluster. Consequently, they have little effect on the calculated fractions (see Tables \ref{tbl-brightfracs} - \ref{tbl-faintfracs}). The latter portion of Table \ref{tbl-abs} includes the same information for cluster radio galaxies which did not enter the radio galaxy fraction analysis.
Results of the radio galaxy fraction analysis are presented in Tables \ref{tbl-brightfracs}, \ref{tbl-intfracs}, and \ref{tbl-faintfracs} for the bright, intermediate, and faint optical magnitude bins. There is frequent evidence for significant variation in the radio galaxy fraction in the Shapley clusters. In general, there appears to be a deficit of radio galaxies in the intermediate optical magnitude bin (Table \ref{tbl-intfracs}). This holds for all five clusters at greater than the 95\% confidence level, although inclusion of the few galaxies without velocities reduces the significance below 95\% for A3562 and SC1329-313. The comparison sample indicates that about 15\% of galaxies in this optical magnitude range should be radio sources, while the values computed for individual clusters in the Shapley supercluster were all under 10\%, even when the radio galaxies lacking velocities were included. To assess the importance of cosmic variance in the computed significance levels, the fractions were recalculated under the assumption that all counted galaxies were cluster members (i.e., no background correction and hence a lower limit to the radio galaxy fraction) and using twice the assumed background correction. The resulting ranges for the calculated significance of any excess or deficit of radio galaxies are included in Tables \ref{tbl-brightfracs} - \ref{tbl-faintfracs}. It can be seen that the background correction does not appear to cause the deficit of intermediate optical magnitude radio galaxies.
The reduction in activity among intermediate optical magnitude galaxies is offset somewhat by an increase in the probability for the brightest galaxies to host radio sources, although number statistics limit the significance of this statement. The clusters generally have about 10 to 20 galaxies in this magnitude range, of which the comparison sample indicates that 24\% should be radio sources. In A3556, more than half of all bright galaxies are radio detections, translating to an increase in activity significant at about 99\% confidence. With the exception of SC1329-313, the other clusters have marginal excesses significant at about the 85\% level. These results are much more dependent on the background correction.
As with A2255, the most significant variation is found in the optically-fainter galaxies. The optical luminosity function ensures that there are many such galaxies per cluster, and in general the fraction of these galaxies which are also radio emitters is quite small -- less than 2\%. Both A3556 and A3558 have radio galaxy fractions consistent with this figure, but A3562 has a very strong excess of radio galaxies among its fainter galaxy population, with nearly 11\% of these galaxies being radio sources. The significance of this excess is well over 99.9\%, regardless of the background correction or whether a single galaxy without measured velocity is included. As would be expected based on the large overlap in areas covered by the 2 Mpc radial sampling limit (e.g., refer to Figure \ref{fig-foot}), SC1327-312 and SC1329-313 also have larger radio galaxy fractions than would be expected based on the comparison sample, at 98.5\% and 99.9\% significance respectively. The significance level for each of these three clusters would be higher if their allowed velocity ranges for radio galaxies were greater (i.e., as less rich systems within the supercluster they have lower $\sigma_v$ and consequently some radio galaxies with velocities consistent with supercluster membership are excluded from consideration as being outside the allowed range of $\pm3 \sigma_v$ for the individual clusters).
\subsubsection{Local Radio Galaxy Fractions Across the Supercluster}
An advantage to performing one study for the entire core of the supercluster is that any variations in radio galaxy fraction across the clusters may be assessed in light of environmental factors. To this end, a map showing the distribution of the radio galaxy fractions was created. At each point in the map the radio galaxy fraction was computed within a 0.5 Mpc radius, using a radio detection threshold of $6.8 \times 10^{21}$ W Hz$^{-1}$ and corrected galaxy counts with magnitudes $M_R \leq -20$ (i.e., luminosity and magnitude limits identical to the preceding analysis). The choice of a 0.5 Mpc sampling region was motivated by the desire to include large enough areas to have meaningful numbers of sources while still avoiding edge effects within the main body of the supercluster. Including regions beyond 2 Mpc of the respective cluster centers adds three additional radio galaxies assumed to be cluster members but for which no velocity information was available, and two further cases where radio detections without velocity measurements were assumed to be background sources on the basis of their red colors. Thus, contamination by foreground/background radio galaxies is still minimal. The resulting map is presented in Figure \ref{fig-fracN}, where the grey-scale indicates the radio galaxy fraction. To help indicate where high fractions result from low number statistics (e.g., only two local galaxies of which one is a radio source), contours of galaxy surface density are provided. The increase in activity in A3562 is readily apparent, particularly in the direction of A3558.
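Schematically, the map is generated as in the sketch below, which assumes projected galaxy positions in Mpc on a common tangent plane and, for brevity, omits the background correction applied to the counts; the names are illustrative.
\begin{verbatim}
import numpy as np

def fraction_map(gal_xy, is_radio, grid_xy, r_s=0.5):
    # gal_xy  : (N, 2) projected positions [Mpc]
    # is_radio: True where L_1.4 >= 6.8e21 W/Hz
    # grid_xy : (M, 2) map positions [Mpc]
    # r_s     : 0.5 Mpc sampling radius
    frac = np.full(len(grid_xy), np.nan)
    for i, p in enumerate(grid_xy):
        sel = np.hypot(*(gal_xy - p).T) < r_s
        if sel.any():
            frac[i] = is_radio[sel].sum() / sel.sum()
    return frac
\end{verbatim}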
The limits of $6.8 \times 10^{21}$ W Hz$^{-1}$ and $M_R \leq -20$ for the fractions analysis were set by the available comparison sample data. The present study includes much deeper data, so it is illustrative to consider fainter sources. Figure \ref{fig-fracT} parallels Figure \ref{fig-fracN} in that it is for sources with $M_R \leq -20$, but in this case radio sources with $2.1 \times 10^{21} \leq L_{1.4} \leq 6.8 \times 10^{21}$ W Hz$^{-1}$ are considered detections. The lower limit on radio luminosity corresponds to 0.5 mJy (i.e., 5$\sigma$ detections in the noisier regions of the mosaic) while the upper limit prevents direct overlap with the radio galaxies used in Figure \ref{fig-fracN}. Figure \ref{fig-fracP} is for the optically faintest galaxies in the present study, those with $-20 < M_R \leq -19$. The radio detection limit is again $2.1 \times 10^{21}$ W Hz$^{-1}$; none of these fainter galaxies have $L_{1.4} > 6.8 \times 10^{21}$ W Hz$^{-1}$. As would be expected, most of the optically-fainter galaxies lack spectroscopic velocity measurements and are evaluated as cluster or non-cluster sources on the basis of $B - R$ color.
\section{Discussion}\label{sec-discuss}
\subsection{Implications of Radio Galaxy Fractions for Evolution}\label{ssec-evolution}
The identification of 210 radio galaxies in the core of the Shapley supercluster provides a good database for evaluation of galaxy activity. Only 34 of these radio galaxies may be removed from consideration as galaxies merely seen in projection on the supercluster, half of which are known foreground/background objects on the basis of existing optical spectroscopy with the other half appearing too red to be consistent with supercluster membership. Of 104 confirmed cluster members, at least 40 may be used for a statistical analysis of radio galaxy fractions which parallels that of MO03.
The analysis of radio galaxy fractions indicates varied activity across the supercluster. In general, there are two broad findings which need to be understood: 1) the decrease in the radio galaxy fraction for galaxies with optical magnitudes near $m^*$, generally seen across the entire core of the supercluster (i.e., the whole area studied); and 2) the strong increase in the radio galaxy fraction for optically-faint galaxies in the vicinity of A3562. These two points will now be discussed in light of environmental effects and prior theoretical and observational findings.
Examination of the locations of the radio galaxies (Figures \ref{fig-fracN} - \ref{fig-fracP}) points to the region around A3562 as one of high activity. This is especially the case for the abundant optically-fainter galaxies, where the increase in the radio galaxy fraction for A3562 was significant at very high confidence. The fact that this increase corresponds to a specific region within the core of the supercluster indicates that there must be some difference in environment intrinsic to this region. In MO03, it was argued that A2255 had a larger active galaxy population as a result of an ongoing cluster-cluster merger. The A2255 merger axis is believed to be perpendicular to the line of sight and viewed very near the time when the cores of the merging partners are coincident. According to the simulations of \citet{roettiger1996}, at this particular time galaxies will cross the shock front formed between the merging clusters and experience a spike in ram pressure, thereby instigating starbursts. Recent X-ray analysis supports a similar picture with regard to the merger history and timing for A3562.
\citet{finoguenov2004} used a 6-pointing {\it XMM} mosaic to study the hydrodynamics of A3562 and the SC1329-313 group. They found an interesting assortment of properties, including temperature and entropy substructure and evidence for disruption of the core of SC1329-313. On $\sim$100 kpc scales, the emission regions of A3562 and SC1329-313 point towards one another, indicating interaction. The properties are well described by a scenario in which SC1329-313 has recently passed to the north of the core of A3562, traveling toward the west, and been deflected to the south and towards the observer. The motion was supersonic, with an implied Mach number $\approx$1.3. This passage would then have created a shock wave, as well as having produced oscillations of the core of A3562.
Examination of Figure \ref{fig-fracN} is qualitatively consistent with this scenario. The region associated with an excess of radio galaxies corresponds to the expected location of the shock associated with the passage of SC1329-313. In the \citet{roettiger1996} model, early in the merger the shock front acts as a buffer for any gas-rich galaxies in SC1329-313, protecting them from ram pressure removal of their gas. However, at a time around core passage the shock front decelerates and the galaxies pass through it, thereby experiencing a sharp increase in ram pressure leading to starbursts. An important consideration to this model for the Shapley supercluster, however, is the pre-merger nature of the active galaxies. Should they have been members of either A3562 or SC1329-313, the high efficiency of ram pressure stripping \citep[e.g.,][]{quilis2000} implies that it would be reasonable to expect that they were already gas poor and not likely to undergo a subsequent starburst. Consequently, the cluster merger itself may be indicative of larger-scale interactions: as the pair of clusters merge along a filament, outlying groups and field galaxies along the filament are also thrown into the mix. It is these galaxies which would cause the increase in the observed radio galaxy fraction. In a similar light, the merger may induce activity in the outlying groups through tidal interactions \citep{bekki1999,gnedin2003}. A picture along these same lines is evoked for the unusually high fraction of radio galaxies in A2125 \citep{owen2005b}, and seems to fit for the curious radio galaxy J133048-314325 in the Shapley supercluster.
Figure \ref{fig-133048} depicts J133048-314325, which superficially has the radio morphology of a wide-angle tail (WAT) but a radio luminosity a factor of ten lower than such sources. Two other galaxies are easily visible near this source, the radio galaxy J133051-314417 and a face-on spiral galaxy (J133047-314339) within the radio contours of J133048-314325 \citep[note that this spiral galaxy was designated the radio counterpart in lower resolution observations by ][]{giacintucci2004}. These three galaxies appear to be a compact group, as indicated by their small spatial and velocity separation (a maximum separation of just 1\arcmin, or about 50 kpc; NED velocities of 13394, 13226, and 13815 km~s$^{-1}${} in order of increasing galaxy RA). A potentially useful analogy for this system is that of Stephan's Quintet, in which a high velocity intruder galaxy (NGC~7318B) has interacted with the intragroup medium (IGM) around NGC~7318A and NGC~7319. Radio continuum emission from this system \citep[e.g.,][]{xu2003} arises from a combination of sources, including the Seyfert nucleus of NGC~7319, star formation, and the large ($\sim40$ kpc) shock caused by the interaction. Much of the star formation, interestingly enough, is within the IGM and not simply associated with individual galaxies of Stephan's Quintet. While this star formation does contribute to the 1.4~GHz radio continuum, the majority of the radio emission is associated with a radio ridge delineating the shock front (about 1.5 mJy for star formation, 35 mJy for the shock, and 29 mJy for the Seyfert galaxy NGC~7319). Inspection of the radio contours of J133048-314325 suggests a similar interaction of the sources, potentially explaining their activity. J133051-314417 would be the intruder galaxy, with support for this interpretation being its $\sim500$ km~s$^{-1}${} velocity offset from the other galaxies and the general shape of the radio emission which is suggestive of J133051-314417 having passed through the IGM in a southeasterly direction. The extended morphology of J133048-314325 would then be the combination of possible AGN emission from the galaxy itself, plus shock and star formation emission. That J133048-314325 might be an AGN is suggested by its optical properties, which include what might be an unresolved optical nucleus coincident with the peak in the radio emission, its blue color $B - R = 1.11$, and an unusually bright absolute magnitude of $M_R = -23.7$. It is possible that the unresolved optical component and bright total magnitude arise from a foreground star, as no emission lines are noted in the spectroscopy reported by \citet{stein1996}, the literature source for the NED velocity. However, it would be a remarkable coincidence if a foreground star were to lie in projection directly on the nucleus of a galaxy which has moderately strong radio emission. The portion of the J133048-314325 emission that would be associated with the shock would then be the ridge to the southeast. Using TVSTAT to evaluate the flux associated with this feature produces a radio luminosity of $\sim1.5 \times 10^{22}$ W Hz$^{-1}$, comparable to that of the shock ridge in Stephan's Quintet ($L_{1.4} = 2.9 \times 10^{22}$ W Hz$^{-1}$). Its size, $\sim$60 kpc, is also similar to that of the shock ridge in Stephan's Quintet. A contribution to the total emission along the northwest edge could then come from star formation in the spiral galaxy J133047-314339.
The entire system lies about 11\arcmin{} to the northwest of the cataloged position of SC1329-313, very near the peak in the galaxy surface density indicated in Figure \ref{fig-fracN}. Thus, it is reasonable to think that the hypothesized interaction akin to that seen in Stephan's Quintet is the result of tidal forces from the larger merger of SC1329-313 with A3562.
This hypothesis can easily be tested by future observations. Additional radio observations at shorter wavelengths would reveal any spectral index variations, as shock emission would have a steeper spectral index \citep[e.g., the shock ridge in Stephan's Quintet has $\alpha=0.93\pm0.13$;][]{xu2003}. Higher resolution observations might also help separate any AGN emission associated with J133048-314325. Finally, infrared observations and long-slit optical spectroscopy, as used by \citet{xu2003}, could distinguish regions associated with shocked gas from those associated with photoionization from star formation.
Returning to our discussion of radio galaxy fractions, there is less direct evidence for the ``preferred'' location of activity around SC1329-313 when examining fractions at fainter optical and radio limits. However, these results may still be interpreted within the framework described above. The inclusion of galaxies with fainter radio luminosities while keeping the optical limit at $M_R \leq -20$ (Figure \ref{fig-fracT}) produces a more uniform distribution of radio galaxy activity within the supercluster. For the faintest optical galaxies studied, those with $M_R$ between -19 and -20, the radio sources are again found mainly within A3562 (Figure \ref{fig-fracP}). In each of these cases, the star formation rates implied by the radio emission are $\approx1 - 4$ M$_\odot$ yr$^{-1}$ \citep{yun2001}. This would be a substantial amount of star formation in the fainter galaxies (i.e., those with $M_R \gtrsim -20$), and a typical star formation rate for normal brighter spirals such as the Milky Way. Thus, it again appears that the optically fainter galaxies are likely to be starbursts, and that these types of galaxies are found preferentially in the vicinity of A3562.
Why would there be different effects on the faint and intermediate magnitude populations? The answer to this question might lie in the different galaxy morphologies that are included within these magnitude ranges. Galaxies in the intermediate magnitude range will be larger and more likely to have significant bulges. It is a common prediction of evolutionary models that bulges stabilize galaxies against transformation \citep[e.g., galaxy-galaxy mergers and harassment,][]{mihos1996,moore1998}. This is because the predicted evolution is usually driven by the rapid inflow of gas to the nuclei of galaxies, where it creates a nuclear starburst. Thus, brighter galaxies with bulges are more resistant to evolutionary changes than fainter, disk-dominated galaxies. Although this can easily explain why more dramatic change is seen in the fainter galaxy population, it is still difficult to understand why there would be a deficit of intermediate magnitude radio galaxies in the Shapley supercluster relative to other clusters. It is possible that the radio galaxies in this magnitude range found in other clusters represent the steady trickle of isolated field galaxies that enter the clusters, and that the dramatic merger environment of the Shapley supercluster precludes such objects.
\subsection{Comparison to Prior Radio Studies}
Are these results consistent with the prior radio studies of the core of the Shapley supercluster? \citet[][hereafter V00]{venturi2000} examined A3558 and argued for a reduction in galaxy activity in this region, and \citet[][G04]{giacintucci2004} studied A3562 and found a level of activity normal for clusters in general.
V00 noted a reduced fraction of radio galaxies in A3558 relative to the cluster survey of \citet[][LO96]{ledlow1996}. In the present study, there is a reduced fraction of radio galaxies in the intermediate optical magnitude range for A3558 while the fractions among the brighter and fainter optical magnitudes are normal. In Figure \ref{fig-rlf}, the cumulative radio luminosity function (RLF) for the Shapley supercluster and that for A3558 alone are plotted against the RLF derived for the comparison sample from \citet{miller2001}. In each case, the RLF is for galaxies with $M_R \leq -20$ and within 2 Mpc projected separation of their cluster centers (hence the A3558 RLF is a subset of the presented Shapley supercluster RLF). The LO96 RLF was constructed using only elliptical galaxies with $M_R \leq -20.5$, so it is not immediately comparable to those in the figure. However, applying a simple scaling which amounts to saying that 13.5\% of all cluster galaxies with $M_R \leq -20$ are ellipticals with $M_R \leq -20.5$ produces an excellent agreement with the bright end of the RLF derived from the \citet{miller2001} comparison sample. Since essentially all radio galaxies with $L_{1.4} \geq 10^{23}$ W Hz$^{-1}$ are bright ellipticals, this agreement is expected.
As can be seen in Figure \ref{fig-rlf}, there does appear to be a deficit of the more luminous radio galaxies in A3558 and in the Shapley supercluster core region in general. This might have been expected based on the result of a lower fraction of intermediate optical magnitude radio galaxies, although that finding was irrespective of radio luminosity and was countered slightly by the increased fraction of optically bright radio galaxies. The difficulty in assessing any deficit of strong radio galaxies relative to the LO96 RLF is that galaxies with such high luminosities are intrinsically rare, so demonstrating a statistically significant absence for a specific cluster is hampered by number statistics. In their analysis, V00 attempted to perform a more direct comparison with the LO96 RLF by correcting total galaxy counts to estimate the number of ellipticals and S0s in the A3558 environment. They arrived at a net of 185 elliptical and S0 galaxies, of which 17 were identified radio galaxies. Integration of the LO96 RLF would predict 26, and thus the deficit appears significant at about 97\% confidence. However, the LO96 RLF was constructed for ellipticals only and LO96 noted that only $\sim$3 of their sample of 188 radio sources were S0 galaxies with the remainder being ellipticals. Should just 24 of the 185 estimated counts used in the V00 analysis be S0 galaxies, the significance of the deficit drops under 90\%.
It is instructive to examine this issue more closely. Following the LO96 prescription, all galaxies within 0.6 Mpc of the center of A3558 were selected (i.e., 0.3$R_{Abell}$). These were further culled to remove galaxies with $M_R > -20.5$ and $B - R < 1.46$, for more direct comparison with the V00 analysis. This produced a list of 43 galaxies, of which three had radio luminosities high enough for consideration. A simple integration of the LO96 RLF would predict about 6 radio galaxies, and thus yields a deficit that would be claimed with 91\% significance. The morphologies of the 43 selected galaxies were assessed visually using the $R$ images along with ellipticities determined by SExtractor. For most of these galaxies, NED also provided a morphology (including confirmation of cluster membership via a published velocity) and the assignments were consistent. Removal of non-ellipticals left only 24 galaxies, including the three radio galaxies. This is almost exactly the fraction predicted by the LO96 RLF. This analysis underscores the uncertainties involved in comparing luminosity functions constructed using small numbers of galaxies. Note that although the preceding discussion was based on photometry, the alternate procedure of V00 using spectroscopy is similarly affected: selecting galaxies on the basis of non-emission line spectra will include S0 galaxies as well as ``passive spirals'' \citep{bekki2002,goto2003} and potentially even normal spirals whose fiber spectra sample just their bulge components. In summary, although there may be a reduction in the fraction of A3558's elliptical galaxies which host radio sources, it is questionable whether this reduction is significant.
G04 noted only marginal evidence for an increase in the radio-emitting faint galaxy population in A3562, whereas in the current work it is reported as highly significant. Similarly, their analysis of the data presented in MO03 suggested that the radio galaxy fraction in A2255 was only marginally greater than the comparison clusters. This difference in interpretation is caused primarily by the adopted statistical methods. Using the same magnitude ranges described above, G04 calculated the fraction of radio galaxies for each cluster. They then calculated the fraction for the full comparison sample as the mean of these values, with the standard deviation representing the error.\footnote{A more appropriate error for the comparison case would be the standard deviation of the mean, i.e. the dispersion divided by the square root of the number of clusters. The error for any individual cluster would be represented by the dispersion of the full sample.} The procedure of MO03 (and used here) treats the comparison sample as a whole. In essence, it is an application of the binomial distribution where the comparison sample is used to determine the expected probability. Using numbers for faint galaxies from Table \ref{tbl-faintfracs}, 20 of 1104.6 comparison sample galaxies were radio detections so the statistical question is: what is the probability that finding 11 of 113.2 galaxies (i.e., the results for A3562) is consistent with this parent distribution? The $\chi ^2$ nature of the actual test takes into account that the ``true fraction,'' i.e., the 20 of 1104.6 from the comparison sample, is not known with certainty as would be assumed if one used the standard binomial probability function. These differences in applied statistical tests are the reason that G04 ascribed only marginal significance to the excess activity in A2255. Note that in the case of A3562, the G04 numbers for radio detections and total galaxies do not produce a significant result regardless of the statistical method used. Presumably this is the result of differences in the areas surveyed.
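The difference between the two approaches is easily reproduced. In the sketch below the binomial call rounds the corrected counts to integers, and the $\chi^2$ version is scipy's generic contingency test rather than the exact MO03 implementation, so the numbers are illustrative.
\begin{verbatim}
from scipy.stats import binom, chi2_contingency

p0 = 20 / 1104.6        # pooled comparison-sample fraction
# Naive binomial tail: chance of >= 11 detections among ~113
# galaxies if A3562 followed the comparison distribution
p_binom = binom.sf(10, 113, p0)
# Chi-square version, which also allows for the uncertainty in p0
chisq, p_chi2, dof, _ = chi2_contingency(
    [[11.0, 113.2 - 11.0], [20.0, 1104.6 - 20.0]])
print(p_binom, p_chi2)
\end{verbatim}
Both versions confirm that the A3562 excess is significant at well over the 99.9\% level.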
\subsection{Initiated Starbursts or Strangled Field Galaxies?}
Although the radio data presented herein identify the region around SC1329-313 to have an excess of active galaxies, they only indirectly address the level of this activity. The important question with respect to cluster evolutionary models, particularly those applied to the Butcher-Oemler effect, is whether the active galaxies are starbursts or more normal star-forming galaxies which have just entered the supercluster. In some studies, clusters of galaxies at moderate redshifts ($z \sim 0.4$) show evidence for starbursts \citep[e.g.,][]{poggianti1999} while other studies suggest star formation is gradually extinguished as stripping removes the gaseous halos from galaxies, thereby removing the reservoir needed for future star formation \citep[``strangulation,'' e.g.,][]{balogh1999}.
Unfortunately, the data presented herein do not allow for straightforward testing of such issues as the radio emission may include an AGN component. In the case of the optically fainter galaxies ($M_R > -20$), a good case can be made for starbursts on the basis of their relatively high radio luminosities and the fact that AGN are rare in low mass galaxies. For the brighter radio galaxies, ideally one would use optical spectroscopy in comparison with models \citep[as done in ][]{poggianti1999} or data at other wavelengths such as the UV in order to quantify the past-to-present star formation and fraction of total stellar mass created in recent starbursts \citep[e.g.,][]{kauffmann2003,salim2005}. As alluded to in the case of the fainter galaxies, one possible available avenue is to rely on the strength of the radio emission to indicate the level of present star formation and the $R$ magnitude as a proxy for total stellar mass. The ratio of these quantities would then be a measure of the relative strength of the current star formation.
The radio-to-optical flux ratio, $r$, was calculated using the definition presented in \citet{machalski1999}: $r = \log (S_{1.4}/\mathnormal{f}_R)$, where $\mathnormal{f}_R$ is the flux density at 6940$\mbox{\AA}$ determined using the $R$ magnitude via $\mathnormal{f}_R = 2.78 \times 10^{6 - 0.4R}$. To reduce contamination due to AGN, galaxies brighter than $M_R = -22$ were removed along with those redder than $B - R = 1.45$ (i.e., remove probable red sequence objects down to the fainter magnitudes studied). In addition, a minimum 1.4~GHz flux of 0.5 mJy was required to remove potential bias caused by variation in the sensitivity of the radio data across the supercluster. After removal of galaxies with published velocities placing them outside the supercluster, this resulted in a set of 83 galaxies for testing (48 of which did have published velocities indicating supercluster membership). Specific regions were compared via a Wilcoxon rank-sum test to determine whether any exhibited evidence for higher $r$ values, and hence likely greater relative levels of star formation. No highly significant variation in $r$ values across the core of the supercluster was found, although there is slight evidence that those in the vicinity of SC1329-313 do have higher $r$. This result was always less than $2\sigma$, and slightly dependent on sub-samples used (e.g., how the regions were defined in RA and Dec, whether or not galaxies without published velocities were included, what optical magnitude ranges were compared, etc.). Hence, this simple statistical analysis is suggestive of increased starburst activity associated with SC1329-313, although this clearly needs to be confirmed through a more direct study.
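A minimal implementation of this test is sketched below; the units (mJy for $S_{1.4}$) and the split into regions are assumptions for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import ranksums

def r_ratio(S14, R):
    # r = log10(S_1.4 / f_R), with f_R = 2.78e6 * 10**(-0.4 R)
    f_R = 2.78 * 10**(6.0 - 0.4*R)
    return np.log10(S14 / f_R)

# r_near: galaxies in the vicinity of SC1329-313
# r_far : galaxies in the rest of the supercluster core
# stat, p = ranksums(r_near, r_far)
\end{verbatim}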
\section{Conclusions}
This paper has presented a comprehensive radio study of the core of the Shapley supercluster. The VLA has been used to map a nearly 7 square degree area through a mosaic of pointings which provide moderately uniform noise characteristics across the entire area. In conjunction with optical imaging, radio detections for 210 galaxies with $m_R \leq 17.36$ were presented. These include 104 galaxies with velocities placing them within the supercluster, which approximately doubles the previously known total of radio galaxies in the core of the Shapley supercluster. In addition, 2 radio galaxies are members of cataloged pairs whose companions have supercluster velocities and 8 radio galaxies are likely associated with the supercluster but are formally placed in the foreground (9000 km~s$^{-1}${} $< cz <$ 10934 km~s$^{-1}$). Of those radio galaxies without velocity measurements, 68 have optical colors which do not rule out supercluster membership.
Across the entire core of the supercluster, intermediate optical magnitude ($-22 < M_R \leq -21$) galaxies appear less likely to host radio sources than their counterparts in a large comparison sample. About 5\% of such galaxies were found to be radio sources with $L_{1.4} \geq 6.8 \times 10^{21}$ W Hz$^{-1}$, whereas 15\% of the comparison sample were. This deficit is offset somewhat by the brighter galaxies ($M_R \leq -22$) of the Shapley supercluster being more likely to be radio sources.
While these results generally pertained to the entire region surveyed, a more dramatic effect was observed in the fainter galaxies ($-21 < M_R \leq -20$) localized around A3562 and SC1329-313. These galaxies were much more likely to be radio sources than anticipated on the basis of comparison clusters, with the high significance of the result being unaffected by changes in the assumed background. On the basis of their blue colors and radio luminosities, these galaxies are presumably starbursts. It is fascinating that this very region is identified by recent X-ray analysis as the location of an ongoing merger of SC1329-313 with A3562 and the rest of the supercluster. The remainder of the supercluster does not exhibit any statistical excess of radio galaxies among its optically fainter population, underscoring the potential importance of cluster mergers in galaxy evolution. This interpretation is consistent with radio studies of other active clusters, notably A2125 and A2255.
Examination of the galaxies with extended radio emission also revealed two interesting candidates for future observation. J132802-314521 is a strong radio source with a luminous core coincident with the galaxy ($L_{1.4} > 10^{23}$ W Hz$^{-1}$). This emission appears to connect to larger-scale diffuse emission, suggesting this source might be another head-tail radio galaxy within the supercluster. Extended radio emission around the galaxy J133048-314325 is a potential analog to the well-known system Stephan's Quintet. In this interpretation, J133051-314417 would have interacted with J133048-314325 and the nearby spiral J133047-314339. The resulting radio emission would be the combination of emission from the individual galaxies as well as shocks in the IGM. Interestingly, this source is near the peak of the galaxy distribution centered on SC1329-313, suggesting that environmental processes in groups involved with larger-scale mergers may be important for galaxy evolution.
\acknowledgments
The author thanks an anonymous referee, whose careful reading of the manuscript led to improvements in the clarity of the statistical analysis and additional insight into the extended radio galaxies. Much of this work was completed while I held a National Research Council Associateship at NASA's Goddard Space Flight Center. I also acknowledge the support of NASA through the American Astronomical Society's Small Research Grant Program, which supplied the computing resources to analyze the radio data. Most significantly, this project benefitted greatly from the assistance of Michael Ledlow. Mike's enthusiastic support of the project included invaluable comments on observing proposals and assistance with the data collection. I am unable to express enough how sad his passing was, and how great a loss it is to his friends and colleagues.
The strong light-matter interaction in semiconductor microcavities gives rise to the formation of composite quasiparticles called exciton polaritons \cite{weisbuch1992}. Being a superposition of excitons and photons, they provide a strong optical non-linearity and are characterized by a very light effective mass. These fascinating properties give polaritons an advantage over cold atoms in demonstrating collective many-body phenomena at high temperatures \cite{christopoulos2007,su2018}. However, the main difference of polaritonic systems from conventional atomic condensates is the strong dissipation stemming from the finite lifetime of microcavity photons, which necessitates external pumping to maintain the polariton population.
The experimental methods for the excitation of coherent polaritons may be divided into two classes: non-resonant and resonant (or quasi-resonant). The first class may also be referred to as incoherent pumping, where the phase of the forming condensate does not depend on the phase of the pump. This regime is frequently realized in polariton lasers \cite{christopoulos2007}, which can be pumped both optically and electrically \cite{schneider2013}. The nonresonant electrical or optical pumping creates a reservoir of incoherent polaritons. If the pumping strength exceeds the threshold value, the polaritons condense into a single quantum state \cite{kasprzak2006,sun2017}. The relaxation process is accelerated due to bosonic stimulation by the occupancy of the final state. Such a spontaneous buildup of quantum coherence is not phase-selective in the sense that the phase of the condensate is chosen spontaneously during the condensation.
When condensation occurs simultaneously in several closely spaced condensation centers pinned to system inhomogeneities, the interaction between the condensates leads to their mutual synchronization \cite{baas2008,wouters2008}. Although the total ensemble of interacting condensates remains invariant to a global phase shift, the phase difference between neighbouring condensates is locked by the coupling \cite{ohadi2016}. Besides, synchronization can also take place between different polarization components of a spinor polariton condensate in the presence of intrinsic Josephson or spin-orbit coupling (caused by the TE-TM splitting) between the polarizations \cite{ohadi2015}.
It is crucial that the coupling between driven-dissipative condensates is inherently complex, i.e., it affects not only the energy of the coupled state, as the conventional conservative (Josephson) coupling typical of atomic condensates does, but also the net losses \cite{aleiner2012}. Since for driven-dissipative systems the coupling is determined self-consistently with the amplitudes of the condensates~\cite{kalinin2018networks}, several coupled condensates are expected to phase-lock in various configurations characterized by different eigenfrequencies and condensation thresholds. The particular state of the ensemble is chosen during condensation according to the selection mechanism, which favors the state with the lowest polariton lasing threshold, allowing it to grow faster than the others \cite{aleiner2012}. Recently it was demonstrated that under certain conditions the steady-state configuration of an ensemble of coupled polariton condensates can be associated with the global minimum of a particular spin Hamiltonian \cite{berloff2017,lagoudakis2017}, assuming that the phases of the condensates are mapped to two-dimensional classical spins. The establishment of a mutually coherent state of several polariton condensates can be regarded as the synchronization of interacting polariton lasers, by analogy with the coherent dynamics of arrays of coupled lasers \cite{Nixon2012,Gaeta2018}.
In contrast to the nonresonant pump, the quasiresonant excitation of polaritons with coherent light provides a reliable tool for controlling their properties. Namely, polaritons are formed in a state that assumes the frequency of the pump and is phase-locked to it.
In this paper we address the problem of synchronization of a coherent polariton state created by nonresonant pumping to coherent light whose frequency is close to the frequency of the condensate. In the simplest case of a single condensate whose eigenfrequency matches the frequency of the laser light, the solution of this problem is trivial: the coherent excitation cancels the invariance to the global phase shift, and thus the condensate is phase locked with the laser light \cite{kalinin2018Ising,caputo2019}. Here we demonstrate that even in the absence of a precise resonance, coherent laser light is capable of imposing its frequency and its phase on the condensate. Drawing an analogy with the synchronization of a dissipative oscillator by a continuous driving force \cite{pikovsky2003}, we consider this problem in terms of the synchronization phenomenon.
The problem under study is relevant to the recent studies of phase-locked polariton condensates aimed at the realization of polariton simulators. The manipulation of the condensate phase by the coherent light can be associated with the action of an effective magnetic field on a particular pseudo-spin. The study of the interaction of the coherent light with an ensemble of coupled condensates is important as a tool of control over phase locking in an $XY$-polariton simulator. Recently the particular case of a coherent pumping at a frequency that is a multiple of the condensate eigenfrequency was considered \cite{kalinin2018Ising}. Here we consider different regimes of the synchronization of the coupled polariton condensates to an external near-resonant coherent drive. Special attention is paid to the symmetry breaking bifurcation and the formation of the synchronized asymmetric states.
The paper is organized as follows. Section \ref{SecII} presents the model system we consider. Section \ref{SecIII} describes the synchronization of a single nonresonantly excited condensate by the coherent pumping and determines the conditions of synchronization. In section \ref{SecIV} we extend the problem to a pair of coupled polariton condensates uniformly illuminated by the coherent light.
Concluding the paper, we discuss possible implementations of the predicted phenomenon.
\section{The system under study}\label{SecII}
We consider an exciton-polariton condensate excited by the incoherent pump in a planar semiconductor microcavity in the presence of the low-intensity quasi-resonant coherent laser light.
The formation of the polariton condensate is described by the widely accepted mean-field model characterizing the polariton system by the complex wave function $\Psi$ of the coherent state and by the density $N_r$ of reservoir excitons \cite{wouters2007}:
\begin{subequations}\label{MainModel}
\begin{eqnarray}
i\hbar \partial_T \Psi &=& \left(- \frac{\hbar^2}{2m} \left(\partial_{XX} + \partial_{YY}\right) + g_c |\Psi|^2 + g_r N_r \right. \label{MainModela} \\
\nonumber & & + \left. \frac{i\hbar}{2}\left(RN_r - \gamma_c\right) \right)\Psi + Fe^{-i\Delta T}, \\
\partial_T N_r &=& P - \left(\gamma_r + R |\Psi|^2\right) N_r.\label{MainModelb}
\end{eqnarray}
\end{subequations}
Here $T$ is time, $m$ is the polariton effective mass, and the coefficients $g_c$ and $g_r$ are responsible for the energy shifts due to the polariton-polariton repulsion and the interaction with the reservoir, respectively. The populations of the condensate and the reservoir dissipate with the rates $\gamma_c$ and $\gamma_r$, respectively. The net dissipation is balanced by the pump $P$ creating the incoherent excitons, which then scatter to the condensate at the rate $RN_r|\Psi|^2$. The last term in the right-hand side of \eqref{MainModela} accounts for the spatially uniform coherent driving whose frequency is detuned from the bottom of the lower polariton branch by $\Delta$.
For the sake of simplicity of the following calculations we rewrite the equations \eqref{MainModela} and \eqref{MainModelb} in a dimensionless form:
\begin{subequations}\label{MainModelNorm}
\begin{eqnarray}
i\partial_t \psi &=& \left( -\frac{1}{2} \left(\partial_{xx} + \partial_{yy}\right) + i(n_r-1) \right. \\
\notag && \left. \vphantom{-\frac{1}{2}} + |\psi|^2 + g n_r \right) \psi + fe^{-i\delta t}, \label{NormEq1} \\
\partial_t n_r &=& p -\left( \gamma + \beta |\psi|^2 \right)n_r, \label{NormEq2}
\end{eqnarray}
\end{subequations}
where we have redefined $t=T\gamma_c/2$, $x=X\sqrt{m\gamma_c\left/2\hbar \right.}$,
$y=Y\sqrt{m\gamma_c\left/2\hbar \right.}$,
$\psi=\Psi\sqrt{{2g_c}\left/{\hbar\gamma_c}\right.}$,
$n_r= RN_r/\gamma_c$,
$g=2g_r\left/\hbar R \right.$,
$\gamma=2\gamma_r\left/\gamma_c\right.$,
$\beta=\hbar R\left/ g_c\right.$,
$p=2RP\left/\gamma_c^2\right.$,
$f=F\sqrt{{8g_c}\left/{\hbar^3\gamma_c^3}\right.} $ and
$\delta=2\Delta\left/\gamma_c \right.$.
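As a consistency check of these rescalings, note that the gain--loss term of \eqref{MainModela} becomes
\[
\frac{i\hbar}{2}\left(RN_r - \gamma_c\right)\Psi = \frac{i\hbar\gamma_c}{2}\left(n_r - 1\right)\Psi,
\]
which, upon dividing the whole equation by the energy scale $\hbar\gamma_c/2$ associated with the dimensionless time $t$, reproduces the term $i(n_r-1)\psi$ in \eqref{NormEq1}.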
\section{Synchronization of a single polariton condensate by a coherent laser light}\label{SecIII}
We start by considering the interaction of a single polariton condensate with the external coherent light. We assume that the nonresonant pump has a Gaussian shape $p(\mathbf{r})=p_0 \exp\left( -\frac{(\mathbf{r} - \mathbf{r}_0)^2}{2l^2} \right)$, where $\mathbf{r}$ is an in-plane vector. In the absence of the resonant excitation, the condensate forms in the steady state $\psi(\mathbf{r},t) = \psi(\mathbf{r})\exp(-i\mu t)$ provided that the incoherent pump amplitude $p_0$ exceeds the threshold value $p_{\rm th}$.
In the framework of the mean-field model \eqref{MainModelNorm}, the spontaneous choice of the condensate phase, which can be parameterized as $\varphi = \rm{arg}\left(\psi(\mathbf{r}=\mathbf{r}_0)\right)$, is modelled by taking a white-noise distribution of the polariton field at the initial moment of time.
As the coherent laser pumping is switched on, it tends to impose its frequency and phase on the polaritons. However, a weak driving is unable to impose its phase on the condensate if the frequency mismatch between the laser and the condensate is large. Instead, it perturbs the condensate steady state, inducing oscillations of the polariton density, see the blue curve in Fig.~\ref{Fig:SingleCondSynch}(a). These oscillations manifest themselves as a frequency comb in the temporal spectrum of the condensate (see Fig.~\ref{Fig:SingleCondSynch}(b)). The oscillations are governed by the nonlinear mixing of the condensate eigenfrequency $\mu$ and the driving frequency $\delta$.
As the driving frequency approaches the frequency of the unperturbed condensate, the steady state is restored, see the red curve in Fig.~\ref{Fig:SingleCondSynch}(a). The condensate frequency merges with the frequency of the coherent pump, $\mu=\delta$, manifesting the synchronization between the condensate and the light, see the red line in Fig.~\ref{Fig:SingleCondSynch}(b). Note that the merging of the spectral lines happens regardless of whether the coherent pump is tuned below or above the condensate frequency. Indeed, the range of driving frequencies providing synchronization is shown in
Fig.~\ref{Fig:SingleCondSynch}(c), which illustrates the evolution of the condensate spectrum as the driving frequency $\delta$ is scanned from the red-detuned to the blue-detuned region, while the driving amplitude $f$ remains fixed. The spectral width of the synchronization domain depends on $f$ as well as on the properties of the condensate. For instance, the variation of the amplitude of the incoherent pump $p_0$, which corresponds to the variation of the condensate eigenfrequency $\mu$, also reveals synchronization in a finite range of pumping amplitudes, see Fig.~\ref{Fig:SingleCondSynch}(d).
\begin{figure}
\includegraphics[width=\linewidth]{f1.pdf}
\caption{Synchronization of a single polariton condensate by an external coherent light. Panels (a) and (b) show the dynamics of the condensate density maximum $|\psi_0|^2=|\psi(\mathbf{r}=\mathbf{r}_0)|^2$ and the corresponding temporal spectra $S(\omega)=\int \psi\left(\mathbf{r}=\mathbf{r}_0,t\right) e^{-i \omega t} dt$ normalized to unity. The red curves correspond to the driving frequency $\delta=1.96$, which is close to the condensate eigenfrequency at $p_0=4$, while the blue curves correspond to the strongly nonresonant driving $\delta=1.8$.
The solid curves were calculated using the full model \eqref{MainModelNorm}. The dashed curves correspond to the predictions of the simplified model \eqref{Eq_toy_1}. (c) The evolution of the condensate spectral density $S(\omega)$ as the driving frequency $\delta$ varies while the amplitude of the nonresonant pump is fixed to $p_0=4$. (d) The same as in (c) but with the amplitude of the incoherent pump $p_0$ varying while $\delta=1.96$. The white dashed lines in (c) and (d) indicate the condensate eigenfrequency $\mu$. For all panels $g=0.46$, $\beta=2.5$, $\gamma=0.4$, $f=0.07$ and $l=0.93$.}\label{Fig:SingleCondSynch}
\end{figure}
To obtain analytical results and to explain the synchronization phenomenon in simple terms, we simplify the model \eqref{MainModelNorm} by replacing it with a single ordinary differential equation for the complex amplitude $A$ of the polariton field:
\begin{eqnarray}\label{Eq_toy_1}
i\partial_t A = \left( \alpha |A|^2 -\tilde \delta +i\Gamma - i\nu |A|^2 \right) A +\tilde f,
\end{eqnarray}
which was written in the reference frame rotating with the effective driving frequency $\tilde\delta$. Here we have introduced the net gain $\Gamma$, the saturation of the gain $\nu|A|^2$ accounting for the effect of the reservoir depletion, the effective nonlinear frequency shift $\alpha$ and the effective driving force $\tilde f$. The correspondence between the parameters of the model \eqref{MainModelNorm} and their counterparts in \eqref{Eq_toy_1} is discussed in Appendix A.
Equation \eqref{Eq_toy_1} is relatively easy to analyze. In the absence of the coherent pump, the steady-state solution reads $A=\sqrt{n_0}\exp\left({-i\mu_0 t}\right)$, where $n_0=\Gamma/\nu$ and $\mu_0 =\alpha n_0 - \tilde\delta$, which in the rotating frame corresponds to the frequency mismatch between the condensate eigenfrequency and the driving force. In the presence of the coherent pumping, the only possible steady state is characterized by an eigenfrequency equal to the driving frequency, $\mu_0=0$. Thus, seeking the solution in the form $A(t)=\sqrt{n}\exp\left[i\varphi(t)\right]$ and assuming that the coherent pump is weak in the sense that it does not affect the condensate amplitude, $n=n_0$, we get
\begin{equation}
\sqrt{n_0}\partial_t \varphi = \left(\tilde \delta - \alpha n_0 \right) \sqrt{n_0} - \tilde f\cos\left( \varphi \right).
\end{equation}
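The phase equation above corresponds to the real part of \eqref{Eq_toy_1} evaluated on this ansatz; for completeness, the imaginary part yields the amplitude equation
\[
\partial_t \sqrt{n} = \left(\Gamma - \nu n\right)\sqrt{n} - \tilde f \sin\varphi,
\]
which shows that for a weak drive the population indeed stays close to $n_0=\Gamma/\nu$, justifying the assumption $n=n_0$.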
A stable stationary solution of the phase equation, $\partial_t \varphi = 0$, corresponds to the establishment of the synchronization regime. It requires
\begin{equation}\label{sync_condition_simple}
\tilde f \geq \left|\tilde \delta- \alpha\Gamma/\nu \right|\sqrt{\Gamma/\nu}.
\end{equation}
This simple synchronization condition defines the critical value of the driving strength necessary to slave the polariton condensate.
It implies that in the case of weak driving the synchronization occurs either in the vicinity of the resonance, $\tilde \delta = \alpha\Gamma/\nu$, or close to the condensation threshold, where the condensate occupancy $\Gamma/\nu$ is low. In the close proximity of the threshold, $\Gamma/\nu \rightarrow0$, the condensate cannot resist synchronization even if the driving is weak and strongly detuned. Equation \eqref{sync_condition_simple} also indicates that the synchronization condition is insensitive to the sign of the frequency mismatch.
A rigorous analysis of the synchronized (steady-state) solution of equation \eqref{Eq_toy_1} yields the following equation for the condensate density:
\begin{equation}\label{SingleCondAmplitude}
\left(\alpha n - \tilde \delta\right)^2 n + \left(\Gamma-\nu n \right)^2n = {\tilde f}^2,
\end{equation}
which may have either one or three real roots. Let us note that this approach accounts for the influence of the coherent pump on the amplitude of the condensate and is thus more general than the approach based on the assumption that only the phase, but not the density, of the condensate is affected by the driving force.
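For reference, equation \eqref{SingleCondAmplitude} follows from \eqref{Eq_toy_1} by substituting the steady-state ansatz $A=\sqrt{n}e^{i\varphi}$, which gives
\[
\left[\alpha n - \tilde\delta + i\left(\Gamma - \nu n\right)\right]\sqrt{n} = -\tilde f e^{-i\varphi},
\]
so that taking the squared modulus of both sides eliminates the phase $\varphi$.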
The typical shape of the $n(\tilde f)$ dependence of the solution is shown in Fig.~\ref{Fig:SynchDomain}(a). Although steady-state solutions exist for any value of the coherent pumping strength, the synchronous regime is established only for those states which are stable against small perturbations. The analysis of the Lyapunov exponents yields that the synchronized solution is stable provided that
\begin{subequations}
\begin{eqnarray}\label{StabConditions}
&2\nu n > \Gamma, \label{StabCondition1}\\
&3\left(\alpha^2+\nu^2\right)n^2-4\left(\Gamma\nu+\alpha\tilde\delta\right)n +\tilde\delta^2+\Gamma^2 >0.\label{StabCondition2}
\end{eqnarray}
\end{subequations}
\noindent These conditions are simultaneously satisfied for the upper branch of the S-shaped curve (see the blue curve in Fig.~\ref{Fig:SynchDomain}(a)). However, at the left folding point $\tilde f=\tilde f_{b2}$ the solution becomes unstable and the synchronized state transforms into a state with two incommensurate frequencies. Mathematically this means that the pair of stable and unstable fixed points corresponding to the upper and the intermediate branches of the S-shaped curve collide at $\tilde f = \tilde f_{b2}$ and disappear, giving birth to a stable limit cycle. The birth of the limit cycle manifests itself in the spectral frequency comb shown in Fig.~\ref{Fig:SingleCondSynch}(b).
Note that the lower branch of the S-shaped curve does not support synchronization for arbitrarily small values of the driving strength, as it is typically unstable, in stark contrast to the conventional optical bistability regime \cite{baas2004} realized in the absence of the incoherent pumping. In particular, the whole lower branch is unstable provided that $\Gamma>2\nu n_{b1}$ (see the condition \eqref{StabCondition1}), where $n_{b1}$ is the condensate density at the right bending point, see Fig.~\ref{Fig:SynchDomain}(a). In this case, the position of the left bending point $\tilde f_{b2}$, which can be easily obtained from \eqref{SingleCondAmplitude} by taking $\partial_n \tilde f=0$, should be considered as the critical value of the coherent pump amplitude above which the synchronization occurs.
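Explicitly, differentiating \eqref{SingleCondAmplitude} with respect to $n$ shows that the bending points are the roots of
\[
3\left(\alpha^2+\nu^2\right)n^2 - 4\left(\Gamma\nu+\alpha\tilde\delta\right)n + \tilde\delta^2 + \Gamma^2 = 0,
\]
i.e. they are located exactly where the inequality \eqref{StabCondition2} turns into an equality.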
If equation \eqref{SingleCondAmplitude} admits a single real solution (see the red curve in Fig.~\ref{Fig:SynchDomain}(a)), the inequality \eqref{StabCondition2} always holds and the stability of the synchronized state is governed by the condition \eqref{StabCondition1}.
Thus, the solution is unstable at weak driving strength, when the steady-state condensate population $n$ is small. However, the condensate becomes stable through a supercritical Hopf bifurcation at $\tilde f \geq \tilde f_s$, manifesting the establishment of the synchronization regime. The critical driving strength $ \tilde f_s \equiv \tilde f(n_s)$ is determined from \eqref{SingleCondAmplitude} by taking
$n=n_s=\Gamma/2\nu$.
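Explicitly, substituting $n_s=\Gamma/2\nu$ into \eqref{SingleCondAmplitude} yields
\[
\tilde f_s = \sqrt{\frac{\Gamma}{2\nu}\left[\left(\frac{\alpha\Gamma}{2\nu}-\tilde\delta\right)^2 + \frac{\Gamma^2}{4}\right]}.
\]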
The inequalities $\tilde f>\tilde f_s(\tilde \delta)$ and $\tilde f > \tilde f_{b2}(\tilde \delta)$ constitute the synchronization conditions, which are shown in the phase diagram in Fig.~\ref{Fig:SynchDomain}(b), plotted on the parameter plane $ (\Gamma, f )$. Note that the simplified condition \eqref{sync_condition_simple} describes the synchronization phenomenon reasonably well, especially in the vicinity of the resonance, $\alpha\Gamma/\nu = \tilde\delta$, see the dash-dotted curve.
\begin{figure}
\includegraphics[width=\linewidth]{f2.pdf}
\caption{(a) The $n(\tilde f)$ dependence of the steady-state solution of \eqref{Eq_toy_1}. The stable states corresponding to the synchronization with the coherent light are shown with solid curves, while the dashed curves indicate unstable solutions. The gain parameter is $\Gamma=0.6$ for the blue curve and $\Gamma=0.24$ for the red curve. (b) The phase diagram for the synchronization of a single polariton condensate with the coherent light. The purple region corresponds to the synchronization domain predicted by the toy model \eqref{Eq_toy_1}. The red region framed by the red circles is obtained from the numerical solution of the full system \eqref{MainModelNorm}. The simplified condition \eqref{sync_condition_simple} is shown with the dash-dotted curve. The yellow squares show the boundary of the synchronization domain under the homogeneous illumination of the symmetric polariton dyad (the interspot distance is $d=17$, see Fig.~\ref{Fig:TwoCondSynch}(a)), while the green diamonds indicate the same for the antisymmetric dyad ($d=14$). For all data $\delta=1.96$. The remaining parameters for both panels are the same as in Fig.~\ref{Fig:SingleCondSynch}.} \label{Fig:SynchDomain}
\end{figure}
For the sake of comparison, the results of numerical simulations of the full 2D model \eqref{MainModelNorm} are also shown in the phase diagram, Fig.~\ref{Fig:SynchDomain}(b), see the red circles indicating the boundary of the synchronization domain. The best agreement between the models is achieved in the vicinity of the resonance. However, the region close to the condensation threshold is described less accurately by the toy model \eqref{Eq_toy_1}, since the latter factors out the reservoir-induced blue shift, which is absorbed into the effective detuning $\tilde \delta$ (see Appendix A).
Far from the threshold this approximation is justified by the depletion of the reservoir. Indeed, if $\beta |\psi|^2 \gg \gamma$, the reservoir density $n_r=p/\left(\gamma+\beta\left|\psi\right|^2\right)$ is strongly depleted. In contrast, if the condensate density $|\psi|^2$ is comparable to or lower than $\gamma/\beta $, the impact of the reservoir-induced blue shift is crucial. As a result, the simplified model overestimates the value of the frequency mismatch $ \alpha\Gamma/\nu - \tilde \delta$ near the threshold, shifting the boundary of the synchronization domain towards higher values of the driving strength, see Fig.~\ref{Fig:SynchDomain}(b).
\section{Polariton dyad in the presence of a coherent laser light}\label{SecIV}
\subsection{Synchronization of the symmetric and antisymmetric polariton dyad to the coherent light}
Another fundamental question is what happens when more than one condensate is illuminated by a coherent laser light.
We proceed by considering the simplest case of a polariton dyad created by two spatially separated spots of the incoherent pump. In particular, we take $p(\mathbf{r})=p_1 \exp\left( -\frac{(\mathbf{r} - \mathbf{r}_1)^2}{2l_1^2} \right) + p_2 \exp\left( -\frac{(\mathbf{r} - \mathbf{r}_2)^2}{2l_2^2} \right)$, with $d=\left|\mathbf{r}_1 -\mathbf{r}_2 \right|$ being the interspot distance. Because of the outflow of polaritons from under the pump spots, the condensates interact with each other. In the absence of the coherent pump this coupling causes the formation of a mutually synchronized (coherent) state. If the pumps are identical ($p_1=p_2$ and $l_1=l_2$), polaritons may condense either in the in-phase (symmetric) or in the anti-phase (antisymmetric) configuration, depending on the mutual coupling strength, which is governed by the interspot distance $d$ and the velocity of the polariton outflow \cite{ohadi2016}. In particular, the symmetric and antisymmetric states alternate as the interspot distance varies while the nonresonant pump power remains fixed, see Fig.~\ref{Fig:TwoCondSynch}(a). Note that in both configurations the condensates are equally populated. For the sake of simplicity we leave out of the scope of this paper the class of symmetry broken solutions which appear in the weak lasing regime \cite{aleiner2012}.
The external coherent driving tends to lock the phases of the condensates. Hence it may alter the structure of the dyad. In this paper we focus on the case of uniform illumination, assuming that the coherent light excites the microcavity at normal incidence. Experimentally this is feasible with a strongly defocused laser beam.
We start analyzing the synchronization scenario in the framework of the full model \eqref{MainModelNorm}. It is anticipated that, by analogy with the case of a single condensate, a weak driving is unable to synchronize the dyad. At the same time, an intense coherent driving should dominate over the inter-condensate coupling, suppressing the intrinsic structure of the dyad and synchronizing both condensates.
The behaviour at intermediate driving strengths, however, is less obvious. Nevertheless, it is clear that the synchronization scenario should depend on whether the dyad is symmetric or antisymmetric in the absence of the coherent light.
When the nonresonant pump provides the in-phase configuration of the polariton dyad, the synchronization scenario is essentially the same as for a single condensate, although the synchronization conditions are slightly affected by the inter-condensate coupling (see the yellow squares in Fig.~\ref{Fig:SynchDomain}(b)). Indeed, in the absence of the coherent pump the coupling alters the frequency of the dyad and the amplitudes of the condensates with respect to the case of a stand-alone polariton condensate. Since the dyad state remains symmetric for any value of the driving amplitude (the condensates are in-phase and equally populated both in the synchronous and asynchronous regimes), the symmetric dyad can be mapped onto a single driven oscillator whose effective frequency and loss rate are modified by the complex inter-condensate coupling.
The case of the condensates phase locked with a $\pi$-phase difference in the absence of the coherent light is much richer. The typical synchronization scenario for this case is illustrated in Fig.~\ref{Fig:TwoCondSynch}. A weak driving induces oscillations of the condensate amplitude, while an increase of the driving strength results in synchronization of the dyad to the coherent pump. From symmetry reasons, it is obvious that the antisymmetric state cannot be synchronized by the homogeneous driving without modification of the entire structure of the solution. Instead, it is anticipated that a pair of identical condensates illuminated by the homogeneous coherent pumping should be synchronized in the symmetric state.
However, it appears that, as the coherent driving strength increases above the critical value $f_1$ (see Fig.~\ref{Fig:TwoCondSynch}), the limit cycle, which corresponds to the oscillating regime, is superseded by a steady state with \textit{broken symmetry}. Despite the fact that the incoherent pump spots are identical, the condensates have different populations and their relative phase equals neither $0$ nor $\pi$ (see Fig.~\ref{Fig:TwoCondSynch}(c) and Fig.~\ref{Fig:TwoCondSynch}(d)).
\begin{figure}
\includegraphics[width=\linewidth]{f3.pdf}
\caption{Synchronization of the polariton dyad with the coherent laser light. (a) Switching between the symmetric and antisymmetric states of the dyad with the growth of the inter-condensate distance $d$ {for the fixed incoherent pump power}. (b) The spectral density $S(\omega)$ calculated at $\mathbf{r}=\mathbf{r}_2$ from the numerical simulations of the dynamics of the full model. Panels (c) and (d) show the dependencies of the steady state population imbalance $\Delta n$ and the phase difference $\Delta \varphi$ on the coherent pump amplitude $f$, respectively. The blue curves correspond to the numerical solution of the full model \eqref{MainModelNorm} while the red curves show the predictions of the equations \eqref{Eq_toy_2_1} and \eqref{Eq_toy_2_2}. (e) The blue line shows the limit cycle trajectory on the $\left(\Delta n, \sin(\Delta \varphi)\right)$ plane in the close proximity of the bifurcation point $f=f_1$. Large red dots correspond to the steady-state solutions exactly at the bifurcation point. Stable symmetry broken states, which appear at $f\in \left[f_1,f_2\right)$, are shown with small dots. The parameters are: $p_0=4$, $\delta=1.96$, $g=0.46$, $\beta=2.5$, $\gamma=0.4$ and $l_1=l_2=0.93$. } \label{Fig:TwoCondSynch}
\end{figure}
With the further increase of the coherent pump amplitude, the degree of asymmetry of the state, i.e. the population imbalance $\Delta n=n_1-n_2$ and the phase difference $\Delta \varphi = \varphi_1 - \varphi_2$, decreases. Here $n_{1,2}\equiv n\left(\mathbf{r}=\mathbf{r}_{1,2}\right)$ and $\varphi_{1,2}\equiv \varphi\left(\mathbf{r}=\mathbf{r}_{1,2}\right)$. Both $\Delta n$ and $\Delta\varphi$ vanish at $f=f_2$, where a second bifurcation occurs and the symmetric steady state forms, see Figs.~\ref{Fig:TwoCondSynch}(c) and \ref{Fig:TwoCondSynch}(d). In this regime, the coherent pump dominates over the inter-condensate interactions and governs the phase configuration of the dyad.
\subsection{Synchronization scenario in terms of the coupled oscillators model}
To study the synchronization scenario in more detail, we again resort to the simplified description, treating the interacting condensates as a pair of linearly coupled oscillators:
\begin{subequations}\label{Eq_toy_2}
\begin{eqnarray}
&&i\partial_t A_1 = \left(\alpha |A_1|^2 -\tilde \delta +i\Gamma - i\nu |A_1|^2 \right) A_1 + \sigma A_2 +\tilde f, \label{Eq_toy_2_1} \\
&&i\partial_t A_2 = \left(\alpha |A_2|^2-\tilde \delta +i\Gamma - i\nu |A_2|^2 \right) A_2 +\sigma A_1 +\tilde f. \label{Eq_toy_2_2}
\end{eqnarray}
\end{subequations}
We assume that the coupling parameter is complex, $\sigma = \sigma_{\rm J}+i\sigma_{\rm d} $. The conservative (Josephson) coupling $\sigma_{\rm J}$ results in the frequency splitting between the symmetric and antisymmetric states. The component $\sigma_{\rm d}$ is responsible for the dissipative coupling, which accounts for the fact that the net losses of the coupled state depend on the relative phase. The numerical values of these parameters are obtained by fitting the results of the simulations of the full 2D model \eqref{MainModelNorm} to those predicted by the ordinary differential equations \eqref{Eq_toy_2_1} and \eqref{Eq_toy_2_2}. The details of this fitting procedure are presented in Appendix A.
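The physical meaning of the two components of $\sigma$ is transparent in the undriven limit: substituting $A_2=\pm A_1$ into \eqref{Eq_toy_2_1} at $\tilde f=0$ gives
\[
i\partial_t A_1 = \left[\alpha |A_1|^2 - \left(\tilde\delta \mp \sigma_{\rm J}\right) + i\left(\Gamma \pm \sigma_{\rm d}\right) - i\nu |A_1|^2\right] A_1,
\]
so the Josephson coupling splits the eigenfrequencies of the symmetric (upper signs) and antisymmetric (lower signs) modes, while the dissipative coupling splits their net gains, selecting the symmetric state at $\sigma_{\rm d}>0$ and the antisymmetric one at $\sigma_{\rm d}<0$.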
In what follows we focus on the case of the antisymmetric polariton dyad, which corresponds to a negative dissipative coupling parameter, $\sigma_{\rm d}<0$. The synchronization scenario for this case, predicted by the model \eqref{Eq_toy_2_1} and \eqref{Eq_toy_2_2}, is shown in Fig.~\ref{Fig:TwoCondSynch} with red lines, while the predictions of the full model \eqref{MainModelNorm} correspond to the blue curves. Although the simplified model fails to predict the position of the bifurcation points $f_{1,2}$ accurately, both the spontaneous symmetry breaking phenomenon and the breakdown of the synchronous regime are reproduced qualitatively correctly. Thus we believe that the analysis of the simplified model is capable of describing the behavior of the full system. The discrepancy between the models should be attributed to the renormalization of the coupling parameter $\sigma$ by the presence of the homogeneous driving. Indeed, the coupling strength is determined by the interference of the overlapping condensate wave functions. An intense homogeneous resonant pumping modifies the distribution of the condensate phase, thereby affecting the overlap between neighbouring condensates and altering the coupling parameter.
The spontaneous symmetry breaking scenario, which is realized as the pump intensity decreases below $f=f_2$, indicates the presence of a supercritical pitchfork bifurcation. This statement can be proved in terms of the coupled oscillators model \eqref{Eq_toy_2_1} and \eqref{Eq_toy_2_2} by applying perturbation theory in the vicinity of the bifurcation point $f_2$. The details of the calculations are given in Appendix B.
In the leading order of approximation, the deviation $\Delta A_{1, \, 2}$ of the symmetry broken solution $(A_1, A_2)$ from the symmetric one can be found in the form $\Delta A_{1, \, 2}=a \left( \xi_{1,3} + i \xi_{2,4} \right)$, where $\xi_{1,2,3,4}$ are real constants defined in Appendix B.
The equation for the amplitude $a$ has the form
\begin{equation}
\partial_t a=\lambda a + \epsilon a^3 \label{norm_form}
\end{equation}
with real coefficients $\lambda$ and $\epsilon$.
Equation \eqref{norm_form} is the normal form of a pitchfork bifurcation.
Calculation of the coefficients entering \eqref{norm_form} yields $\epsilon <0$. Thus, the symmetry breaking indeed proceeds through a supercritical pitchfork bifurcation.
The analysis of the dynamics predicted by the coupled oscillators model reveals that the bifurcation switching the system between the synchronous and asynchronous (oscillating) regimes at $f=f_1$ is an infinite-period bifurcation, which consists in the appearance of two pairs of stable and unstable states on the limit cycle. It destroys the motion around the limit cycle trajectory, and the system switches to one of the two stable fixed points corresponding to the solutions with broken symmetry. The limit cycle trajectory in the vicinity of the bifurcation point calculated with the coupled oscillators model is shown in Fig.~\ref{Fig:TwoCondSynch}(e). The large red dots on the phase trajectory mark the positions of the stable fixed points at $ f=f_1$.
Note that the described synchronization scenario is robust against variations of the inter-condensate distance $d$ within the domain that provides the antisymmetric configuration of the undriven dyad, see Fig.~\ref{Fig:TwoCondSynch}(a). However, the threshold value of the coherent pump intensity corresponding to the establishment of the synchronous regime, $f=f_1$, is sensitive to the parameters of the system, see the green diamonds in Fig.~\ref{Fig:SynchDomain}(b). In particular, it depends on the frequency mismatch between the coherent pump and the condensate and on the inter-condensate distance $d$, which governs the inter-condensate coupling. Besides, in the case of a strong frequency mismatch, the spontaneous symmetry breaking bifurcation and the bifurcation accompanying the formation of the synchronous state merge. In this case the symmetry broken state disappears, and the oscillating regime, typical for weak coherent pumping, is directly superseded by the symmetric state, $\Delta \varphi=0$, as the driving strength increases above $f_2$. In order to determine the conditions of existence of the symmetry broken state, we further simplify the coupled oscillators model, reducing it to a model of symmetrically driven coupled phase oscillators.
\subsection{Coupled phase oscillators model}
By analogy with the case of a single oscillator (see equation \eqref{sync_condition_simple}), we assume that the driving strength is weak and does not affect the condensate populations, $\left|A_1\right|=\left|A_2\right|=\sqrt{\Gamma/\nu}$. Thus, taking $A_{1,2}=\left|A_{1,2}\right| \exp\left[{i\varphi_{1,2}}\right]$, the state of the dyad is described by the coupled equations for the condensate phases:
\begin{eqnarray}\label{Eq_phase_osc}
\partial_t\varphi_{1,2} &=& q - \sigma_{\rm J} \cos\left(\varphi_{2,1} - \varphi_{1,2}\right) \\
\nonumber &+& \sigma_{\rm d} \sin\left(\varphi_{2,1} - \varphi_{1,2}\right) - {f}^\prime\cos \varphi_{1,2},
\end{eqnarray}
where $q= \tilde{\delta} - \alpha \Gamma/\nu$ is an effective detuning and $f^\prime=\tilde{f}\sqrt{\nu/\Gamma} $. In the absence of the driving, $f^\prime=0$, these equations predict the formation of the symmetric state $\varphi_2-\varphi_1=0$ at positive dissipative coupling, $\sigma_{\rm d}>0$, and of the antisymmetric state $\varphi_2-\varphi_1=\pi$ at $\sigma_{\rm d}<0$.
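The latter statement is readily verified: at $f^\prime=0$, subtracting the two equations \eqref{Eq_phase_osc} yields a closed equation for the relative phase $\Delta\varphi=\varphi_2-\varphi_1$,
\[
\partial_t \Delta\varphi = -2\sigma_{\rm d}\sin\Delta\varphi,
\]
whose fixed point $\Delta\varphi=0$ ($\Delta\varphi=\pi$) is stable at $\sigma_{\rm d}>0$ ($\sigma_{\rm d}<0$).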
\begin{figure}
\includegraphics[width=\linewidth]{f4.pdf}
\caption{Bifurcation diagram of the stationary solutions of the coupled phase oscillators model \eqref{Eq_phase_osc}. (a) For $\sigma_{\rm d} =- 1$ and $\sigma_{\rm J} = 0.5$. (b) For $\sigma_{\rm d} = 1$ and $\sigma_{\rm J} = 0.5$. The shaded domains correspond to the stable states: the blue one to the symmetric state and the red one to the symmetry broken states. The red line corresponds to condition \eqref{eq_bs_driving}, the yellow one -- to \eqref{eq_f2}, the black horizontal lines -- to condition \eqref{eq_bs_detuning} and the blue one -- to the boundary given by \eqref{eq_symstate_cond}. The solid segments of the curves indicate the threshold driving required for synchronization. The hatched domains correspond to dynamically unstable states: the diagonal hatch to the unstable symmetric state and the square hatch to the unstable symmetry broken state. } \label{Fig:PhaseModelDiagram}
\end{figure}
Despite its simplicity, model \eqref{Eq_phase_osc} predicts the same synchronization scenario as observed in the 2D case. In particular, equations \eqref{Eq_phase_osc} have both symmetric ($\varphi_1=\varphi_2$) and symmetry broken ($\varphi_1 \neq \varphi_2$) stationary solutions, see Appendix C. The corresponding bifurcation diagram is shown in Fig.~\ref{Fig:PhaseModelDiagram}.
The analysis shows that the solution with broken symmetry appears provided that the coherent driving strength exceeds the threshold value, which is defined as
\begin{equation}\label{eq_bs_driving}
\tilde{ f}_1 = 2 \sqrt{\Gamma/\nu}\left(\sqrt{ \left( \sigma_{\rm J}+q \right)^2 \left( \sigma_{\rm J}^2+\sigma_{\rm d}^2 \right)} - \sigma_{\rm J}\left(\sigma_{\rm J}+q\right) \right)^{1/2}
\end{equation}
for the detuning $q$ belonging to the range
\begin{equation}\label{eq_bs_detuning}
\left|q+\sigma_{\rm J}\right| < 2\sqrt{\sigma_{\rm J}^2 + \sigma_{\rm d}^2}.
\end{equation}
Although the stationary solutions with broken symmetry exist at any value of the complex coupling, for the dyad which is symmetric at zero driving (i.e. at $\sigma_{\rm d}>0$) they are always dynamically unstable, and hence these synchronous states cannot be observed experimentally. In contrast, the antisymmetric dyad ($\sigma_{\rm d}<0$) can transform into stable states with broken symmetry. Besides, it can be synchronized in the symmetry broken state even at vanishing driving strength, provided that the driving frequency matches the eigenfrequency of the stand-alone polariton dyad, namely, at $q=-\sigma_{\rm J}$, see \eqref{eq_bs_driving} and Fig.~\ref{Fig:PhaseModelDiagram}(a). In the latter case the symmetry broken state bifurcates from the antisymmetric solution $\varphi_2-\varphi_1=\pi$ at $f^{\prime}=0$. With the increase of the driving strength, the degree of asymmetry of the state, i.e. the phase difference $\varphi_2-\varphi_1$, decreases until the symmetry broken states disappear at a pitchfork bifurcation (which is of supercritical type at $\sigma_{\rm d}<0$ and of subcritical type in the opposite case) at $f^\prime = f^\prime_2$, where
\begin{equation}\label{eq_f2}
\tilde{f}_2=\sqrt{\Gamma/\nu}\sqrt{4\sigma_{\rm d}^2 + \left(q-\sigma_{\rm J}\right)^2}.
\end{equation}
Note that stable symmetry broken solutions do not appear at any driving strength if condition \eqref{eq_bs_detuning} is violated, i.e. if the laser frequency is strongly detuned from the eigenfrequency of the stand-alone antisymmetric dyad. Other symmetry broken solutions do exist at $f^\prime>f^\prime_2$ for arbitrary detuning, but these solutions are always dynamically unstable (see Appendix C for the details).
Besides the symmetry broken solutions, there are also symmetric states, which appear provided that
\begin{equation}\label{eq_symstate_cond}
\tilde{f}>\left| q-\sigma_{\rm J}\right| \sqrt{\Gamma/\nu}.
\end{equation}
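Condition \eqref{eq_symstate_cond} follows from \eqref{Eq_phase_osc} by setting $\varphi_1=\varphi_2=\varphi$, which reduces the system to the single equation
\[
\partial_t\varphi = q - \sigma_{\rm J} - f^\prime\cos\varphi,
\]
whose stationary solutions exist provided that $f^\prime \geq \left|q-\sigma_{\rm J}\right|$.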
At $\sigma_{\rm d}>0$ a pair of symmetric solutions is always stable. Both of these states have the same frequency, inherited from the coherent driving, although they are characterized by different phase shifts with respect to the phase of the laser light. In this case condition \eqref{eq_symstate_cond} determines the synchronization threshold and is equivalent to \eqref{sync_condition_simple}.
In the case of the dyad which is antisymmetric at zero driving ($\sigma_{\rm d}<0$), the symmetric synchronized states become stable only at $f^\prime>f^\prime_2$, see the blue shaded region in Fig.~\ref{Fig:PhaseModelDiagram}. Thus, for strongly non-resonant driving, the synchronization of the antisymmetric dyad requires a much higher pump power than in the case of the symmetric one.
The domains of existence of the synchronized solutions defined by conditions \eqref{eq_bs_driving}-\eqref{eq_symstate_cond} remain invariant under a change of sign of the dissipative coupling $\sigma_{\rm d}$. However, the stability properties of the solutions are different for the symmetric and the antisymmetric dyads. Besides, a change of sign of the Josephson coupling $\sigma_{\rm J}$ leads to a reflection of the bifurcation diagram about the $q=0$ axis. Note that both the dissipative and the Josephson coupling parameters vary with the inter-condensate distance and the pumping power, see e.g.~\cite{ohadi2016,kalinin2018matter}.
\section{Conclusion and outlook}
We describe a novel mechanism of manipulation of non-equilibrium condensates of exciton polaritons formed in semiconductor microcavities. This mechanism, based upon the very general nonlinear effect of synchronization with an external coherent driving, paves the way to the non-resonant control of the properties of driven-dissipative polariton condensates. In particular, in the synchronization regime the frequency of the polariton condensate does not depend on the intensity of the incoherent pump but is equal to the coherent pump frequency. In this regime the phase of the polariton condensate is locked to the phase of the coherent pump. Thus, this effect allows one to control the phase of the condensate by a detuned external laser beam. A simple model describing the spatially distributed condensate by its amplitude can qualitatively reproduce the observed phenomena.
The interplay of the mutual synchronization of a pair of neighbouring condensates and their synchronization to the external coherent light is also studied. Those condensates which are locked in-phase in the absence of the coherent driving can be easily synchronized to the coherent light. However, the synchronization of the antisymmetric polariton dyad is accompanied by the appearance of a symmetry broken configuration of the neighbouring polariton condensates, which mediates the formation of a mutual coherent state. This intermediate regime is characterized by an imbalance of the condensate populations and corresponds to the spontaneous breaking of the state symmetry. It is superseded by the symmetric (in-phase) configuration of the polariton dyad as the intensity of the coherent light grows. The obtained results can be straightforwardly generalized to the case of an ensemble of coupled polariton condensates using the model of driven coupled dissipative oscillators, which allows reducing the problem to a simplified dynamical model.
In spite of the fact that the properties of the polariton dyad depend on the spatial overlap of the wave functions of the condensates, the model of two coupled nonlinear oscillators driven by an external force is capable of describing all the peculiarities of the synchronization scenario. Thus the model of coupled oscillators represents an effective tool for the investigation of polariton dynamics.
In particular, it assists in the analysis of the bifurcations occurring in polariton systems. In our case the model of coupled oscillators allows reducing the spontaneous symmetry breaking bifurcation to the normal form of a supercritical pitchfork bifurcation.
Synchronization of a single condensate by the coherent light may assist the experimental measurement of the relative phases between the condensates in an ensemble created by multi-spot nonresonant pumping. Fixing the phase of a single condensate, one may use the quasi-resonant laser light as a reference beam.
Besides, it is worth mentioning that the effect of synchronization can potentially be used for the manipulation of spin-polarized polariton condensates, which are created by polarized nonresonant laser pumping. In particular, it has been recently demonstrated \cite{sakaguchi2017} that spin-up and spin-down components having different eigenfrequencies can be mutually synchronized due to the spin-orbit interaction. The approach developed in the present paper can be used to study how these vector polariton states can be synchronized to coherent pumps of different polarizations and frequencies.
We also believe that the effect of synchronization can assist in controlling the distribution of phases and amplitudes in arrays of polariton condensates, which can be useful, for example, for the engineering of polariton lasing systems consisting of polariton laser arrays and for polariton simulators. In particular, since the external coherent pump affects the condensate phase, it can be associated with an effective magnetic field acting either on a single effective spin or on several condensates in the array.
\begin{acknowledgments}
This work is supported by Westlake University (Project No. 041020100118). AVY was financially supported by the Government of the Russian Federation (Grant 074-U01) through ITMO Fellowship schemes. IYC acknowledges the support from RFBR, grants No. 17-52-10006, 17-42-330928 and the Ministry of Education and Science of the Russian Federation, Project No. 16.1123.2017/4.6.
\end{acknowledgments}
\section{Introduction}
There are two ways in which indistinguishable particles can be represented in quantum-mechanical models: the first-quantization formalism models their state vectors as elements of the (anti)symmetric subspace of a multi-particle Hilbert space, whereas the second-quantization formalism takes their state space to be given by a Fock space of modes. Interestingly, the former option represents indistinguishable particles as necessarily entangled, while according to the latter option, the modes' entanglement depends on the particles' past interactions. Since entanglement is a paradigmatic resource for quantum information processing, the question arises whether the entanglement that is apparently intrinsic according to the first-quantization formalism can in principle be accessed and used, or whether it is a mere artefact stemming from the choice of representation: possible answers are still being actively debated \cite{ghirardi2002entanglement, ghirardi2004general, cavalcanti2007useful, franco2018indistinguishability, barghathi2019operationally, morris2020entanglement, johann2021locality, benatti2021entanglement} (for a recent review see \cite{benatti2020entanglement}).\\
In this Letter we corroborate the former take on this question by proposing an information-theoretic task that can be solved with the use of independently prepared indistinguishable particles. We thereby show that the particles' indistinguishability can be used as an information-theoretic resource, and that there is a sense in which the entanglement present according to the first-quantization notation is more than a mathematical artefact.
We will start by providing an informal presentation of our task, before putting it on abstract grounds. We will then present the quantum-mechanical analysis of the task and expand on the requirements necessarily satisfied by any quantum-mechanical protocol that solves it. The last section contains a discussion on the broader merit of our results and an outlook on possible future developments.
\section{Informal presentation}\label{sec1}
Consider the following scenario. An agent named Alice is given a box, which contains a physical object (e.g. an atom) prepared in one of two perfectly distinguishable states, thereby encoding one bit of information. The box is sealed in such a way that Alice does not have the means of opening it and thus cannot access the object directly. Suppose that the only action she can perform on the box is to translate it in space, e.g. she can move the box from some initial position $\vec{x}$ to another position $\vec{x}'$. Alice may also have available various experimental devices and other physical objects (as for example other boxes). Lastly, we will assume that the translation of the box through space leaves the state of the object within it invariant, and that Alice cannot infer the state by merely translating the box (e.g. via some state-dependent back-reaction acting on Alice's experimental devices or additional objects). An example of this scenario, where the additional objects consist of other boxes, is pictured in Fig. \ref{fig1}.\\
\begin{figure}
\includegraphics[width=\linewidth]{fig1.jpg}
\caption{Alice is given a box containing an object prepared in state $k\in \left\{0,1\right\}$. She also has at disposal other boxes, with their pertaining objects set in state $0$. Alice is challenged to learn the value $k$ by solely moving the box that contains the hidden object (without opening it), while also being allowed to implement transformations on the other boxes: for example, as represented in the diagram, she can swap the locations of the first two boxes.}
\centering
\label{fig1}
\end{figure}
Now we are ready to formulate the task: \textit{can Alice learn the state of the object within the box, given the above restrictions on what she is allowed to do?} At the current stage, this question may appear nonsensical: if (i) Alice is only able to translate the box, and (ii) a mere translation of the box cannot reveal the state of the object within it, then it trivially follows that Alice cannot accomplish the task! However, if the box can be treated quantum-mechanically and if Alice has at disposal additional identical quantum systems, then, as we will show in the next section, she can in fact learn the state of the object, while still in some sense only moving the box.\\
Let us now provide a formalized version of the task and the quantum-mechanical protocol that can be used to solve it.
\section{Formalization of the task}\label{sec2}
Here we will put the scenario presented in the previous section on more formal and abstract grounds. Consider that Alice is given a localized physical object, henceforth named $\cal{T}$. $\cal{T}$ can be characterized to have two degrees of freedom (d.o.f.): an intrinsic one, which we will label with $k$, and a position, labelled with $\vec{x}$. We will thus represent the overall state of the object with the ordered pair $(\vec{x},k)_T$. Throughout the paper, we attach only \textit{operational meaning} to states, i.e. the sentence ``the object is in state $(\vec{x},k)_T$'' is hereafter synonymous with ``a position measurement on the object \textit{would} output value $\vec{x}$, and an appropriate measurement of the internal d.o.f. \textit{would} output value $k$ ''. We furthermore assume that $k$ can take only two possible values, 0 or 1. Let us now suppose that Alice is only allowed to translate the object through space, while keeping the internal d.o.f. intact. We will model this restriction by assuming that she has at disposal a device $\cal{D}$ that can implement ``translation maps'' $M: \mathbb{R}^3 \rightarrow \mathbb{R}^3$, the latter being linear operators that map position vectors $\vec{x}$ into position vectors $\vec{x}'=M\vec{x}$. We will associate states $(M)_D$ to the device, where $M$ indicates which map the device is set to implement on a potential target object. The device is constructed in such a way that the dynamical interaction with the localized target object $\cal{T}$, which is initially in state $(\vec{x},k)_T$, is given by
\begin{equation}\label{eq1}
(M)_D(\vec{x},k)_T \rightarrow (M)_D(M\vec{x},k)_T.
\end{equation}
Furthermore, Alice may have at disposal other localized objects, which cannot interact directly with the initial object $\cal{T}$, but can interact with the device $\cal{D}$. Labelling collectively these additional objects with $\cal{A}$, and their pertaining state with $(\alpha)_A$, the latter constraint means that the overall interaction between $\cal{D}$, $\cal{A}$ and $\cal{T}$ can be represented as:
\begin{equation}\label{eq11}
(M)_D(\alpha)_A(\vec{x},k)_T \rightarrow (M)_D(\tilde{\alpha}(M,\alpha))_A(M\vec{x},k)_T,
\end{equation}
where the final state $\tilde{\alpha}(M,\alpha)$ of $\cal{A}$ is in principle any function insensitive to the internal d.o.f. pertaining to $\cal{T}$. Moreover, we assume that if the initial state of $\cal{D}$ is not equal to any $(M)_D$ (but is e.g. a probabilistic mixture or a quantum superposition state), we assume that it is still the case that, were one to find the final state of $\cal{D}$ to be $(M)_D$, one would find the final state of $\cal{T}$ and $\cal{A}$ to be $(\tilde{\alpha}(M,\alpha))_A(M\vec{x},k)_T$. In other words, for any initial state of $\cal{D}$, postselecting the final state on definite states $(M)_D$ of $\cal{D}$ results in the postselected joint state being equal to the RHS of Eq. \eqref{eq11}.\\
Now suppose that Alice is allowed only to interact with her device, i.e. she is solely able to control and to read out the state of $\cal{D}$. She can thus interact only indirectly with objects $\cal{T}$ and $\cal{A}$, via mediation through $\cal{D}$. Furthermore, Alice has perfect knowledge of the initial state $(\alpha)_A$ associated to the additional objects $\cal{A}$, but does not have any prior knowledge of the value $k$ pertaining to $\cal{T}$. The information-theoretic task can now be formulated as follows:\\
\\
\textbf{Task}: \textit{can Alice learn the state $k$ pertaining to $\cal{T}$, solely by manipulating the given device $\cal{D}$?}\\
\\
As can be immediately seen from Eq. \eqref{eq11}, the state of $\cal{D}$ is insensitive to $k$, so the answer to the latter question seems to be an immediate `\textit{No}'. Also, at the current stage, it is admittedly not clear at all what role, if any, the additional objects $\cal{A}$ could play in the accomplishment of the required task. However, in what follows, we will show that if $\cal{D}$, $\cal{A}$ and $\cal{T}$ can be treated as quantum systems, and if Alice is able to coherently manipulate the device $\cal{D}$, then she can in fact learn the required value $k$, given that $\cal{A}$ and $\cal{T}$ can be modelled as indistinguishable quantum particles.
\subsection*{The quantum protocol}
We will now assume that $\cal{D}$, $\cal{A}$ and $\cal{T}$ can be treated as quantum systems that can be modelled via the usual rules of quantum mechanics. Therefore, we can associate a quantum state
\begin{equation}
\ket{\psi} \in \mathcal{H}\equiv \mathcal{H}_D\otimes \mathcal{H}_A \otimes \mathcal{H}_T
\end{equation}
to the joint system comprised of $\cal{D}$, $\cal{A}$ and $\cal{T}$. The Hilbert space $\cal{H}_D$ associated to the device is spanned by vectors $\left\{\ket{M}, \forall M: \mathbb{R}^3 \rightarrow \mathbb{R}^3 \right\}$, whereas the Hilbert space $\cal{H}_T$ associated to $\cal{T}$ is spanned by $\left\{\ket{\vec{x},k} \equiv \ket{\vec{x}}\otimes \ket{k}, \forall \vec{x} \in \mathbb{R}^3, \forall k=0,1 \right\}$. Vectors $\ket{M}$ are eigenstates of the observable $\hat{M}$ that corresponds to the ``measurement'' of the device in the intended classical basis, whereas $\ket{\vec{x}}$ and $\ket{k}$ are respectively eigenstates of observables $\hat{\vec{x}}$ and $\hat{k}$ corresponding to measurements of the position and internal d.o.f. of $\cal{T}$. The structure of the Hilbert space $\cal{H}_A$ pertaining to $\cal{A}$ remains for now unspecified. We now assume, in accord with Eq. \eqref{eq11}, that the interaction between $\cal{D}$, $\cal{A}$ and $\cal{T}$ is given by the following unitary evolution:
\begin{equation}\label{eq2}
\ket{M}_D \ket{\alpha}_A \ket{\vec{x},k}_T \rightarrow \ket{M}_D \ket{\tilde{\alpha}(M,\alpha)}_A\ket{M\vec{x},k}_T.
\end{equation}
Notice that the above corresponds to a control gate, where the device $\cal{D}$ acts as the control system, and the objects $\cal{T}$ and $\cal{A}$ act as targets. We stress that we are assuming that $\cal{D}$, $\cal{A}$ and $\cal{T}$ do not interact with any further environment, i.e. that they constitute an isolated system; in the next subsection we will analyze protocols that violate this assumption.\\
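In operator form, the evolution \eqref{eq2} can be compactly written as a controlled unitary,
\[
\hat U = \sum_{M} \ket{M}_D\bra{M} \otimes \hat U_M,
\]
where each $\hat U_M$ acts on $\cal{A}$ and $\cal{T}$ according to the right-hand side of \eqref{eq2} (the sum over the continuous set of maps $M$ is here schematic).\\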
Let us now specify the additional objects $\cal{A}$ to consist only of \textit{one} object localized at position $\vec{x}_A$, and possessing a binary internal d.o.f., whose value is for simplicity set to $k_A=0$. The Hilbert space associated to $\cal{A}$ is thus isomorphic to the one associated to $\cal{T}$, and the object $\cal{A}$ is assigned state $\ket{\vec{x}_A,0}$. We also assume that the device $\cal{D}$ can be used to translate $\cal{A}$ through space. Consequently, Alice is now able to \textit{swap} the two objects by using her device, by moving $\cal{T}$ to position $\vec{x}_A$, and $\cal{A}$ to position $\vec{x}_T$. In order to simplify the discussion, let us introduce an effective state of $\cal{D}$, which we will label with `$S$', and that is constructed in such a way that it effectively swaps the two objects upon interaction, i.e.
\begin{equation}\label{eq3}
\ket{S}_D \ket{\vec{x}_A,0}_A \ket{\vec{x}_T,k_T}_T \rightarrow \ket{S}_D \ket{\vec{x}_T,0}_A \ket{\vec{x}_A,k_T}_T,
\end{equation}
where we introduced more indices in order to avoid confusion.\\
Under the assumption that $\cal{A}$ and $\cal{T}$ are distinguishable systems, Eq. \eqref{eq2} implies that Alice cannot accomplish the task, because the internal degree of freedom pertaining to $\cal{T}$ is isolated from the other subsystems. Let us now suppose that objects $\cal{A}$ and $\cal{T}$ are \textit{indistinguishable}, i.e. that they can be modelled as indistinguishable quantum particles: here we assume the bosonic case, even though the protocol would be equally valid in the fermionic case as well. This warrants us to introduce the ``second quantization'' notation via the following recipe:
\begin{equation}\label{eq4}
\ket{\vec{x}_A,0}_A \ket{\vec{x}_T,k_T}_T \Rightarrow \ket{0}_{\vec{x}_A}\ket{k_T}_{\vec{x}_T},
\end{equation}
where the latter state means that at location $\vec{x}_A$ there is an object with internal state $0$, and at location $\vec{x}_T$ an object with internal state $k_T$, with no further labels that may distinguish the objects. Mathematically, the quantum state is now an element of $\mathcal{H}_{\vec{x}_T} \otimes \mathcal{H}_{\vec{x}_A}$, where $\mathcal{H}_{\vec{x}_{T/A}}$ is associated to spatial mode $\vec{x}_{T/A}$ and is spanned by vectors $\left\{\ket{0}_{\vec{x}_{T/A}},\ket{1}_{\vec{x}_{T/A}} \right\}$. The interaction between the device $\cal{D}$ set in state `$S$' and the two indistinguishable objects is thus given by:
\begin{equation}\label{eq5}
\ket{S}_D \ket{0}_{\vec{x}_A} \ket{k_T}_{\vec{x}_T} \rightarrow \ket{S}_D \ket{k_T}_{\vec{x}_A} \ket{0}_{\vec{x}_T}.
\end{equation}
Now comes the crucial observation: unlike in the case of distinguishable objects (see Eq. \eqref{eq3}), the overall quantum state in Eq. \eqref{eq5} is invariant upon interaction if and only if $k_T=k_A$, i.e. $k_T=0$. This enables the construction of the following protocol that can be used by Alice to learn $k_T$ with probability higher than $\frac{1}{2}$, given that she knows the value $k_A=0$ of the additional system $\cal{A}$. The procedure goes as follows.\\
(a) Alice prepares the device in state $\ket{\phi} \equiv \frac{1}{\sqrt{2}}\left( \ket{\mathbb{1}}_D + \ket{S}_D \right)$, where `$\mathbb{1}$' is the state of the device that implements the identity transformation.\\
(b) She then lets $\cal{D}$, $\cal{A}$ and $\cal{T}$ interact as:
\begin{equation}\label{eq6}
\begin{split}
&\frac{1}{\sqrt{2}}\left( \ket{\mathbb{1}}_D + \ket{S}_D \right) \ket{0}_{\vec{x}_A} \ket{k_T}_{\vec{x}_T} \rightarrow \\
&\frac{1}{\sqrt{2}}\left( \ket{\mathbb{1}}_D \ket{0}_{\vec{x}_A} \ket{k_T}_{\vec{x}_T} + \ket{S}_D \ket{k_T}_{\vec{x}_A}\ket{0}_{\vec{x}_T} \right).
\end{split}
\end{equation}
After the interaction, the reduced density state $\rho$ associated to $\cal{D}$ is:
\begin{equation}\label{eq7}
\rho = \begin{cases}
\ket{\phi}\bra{\phi} &\text{if $k_T=0$}\\
\frac{1}{2}\mathbb{1}_2 &\text{if $k_T = 1$},
\end{cases}
\end{equation}
where $\mathbb{1}_2$ is the identity operator on $\cal{H}_D$ restricted to the subspace spanned by $\left\{ \ket{\mathbb{1}}_D, \ket{S}_D\right\}$. Notice that, were the objects $\cal{T}$ and $\cal{A}$ distinguishable, the device would end up in a maximally mixed state for all combinations of $k_T,k_A$.\\
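Equation \eqref{eq7} follows by tracing out the two objects in \eqref{eq6}: for $k_T=0$ the two branches of the superposition coincide, so the device remains in the pure state $\ket{\phi}$, whereas for $k_T=1$ the target states $\ket{0}_{\vec{x}_A}\ket{1}_{\vec{x}_T}$ and $\ket{1}_{\vec{x}_A}\ket{0}_{\vec{x}_T}$ are mutually orthogonal, so that the partial trace leaves the device in
\[
\rho = \frac{1}{2}\left(\ket{\mathbb{1}}_D\bra{\mathbb{1}} + \ket{S}_D\bra{S}\right) = \frac{1}{2}\mathbb{1}_2.
\]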
(c) Finally, Alice measures $\cal{D}$ with projectors $\left\{\Pi_0 = \ket{\phi}\bra{\phi}, \Pi_1= \mathbb{1}_2 - \ket{\phi}\bra{\phi} \right\}$. If she obtains outcome `0', she guesses $k_T=0$; conversely, if she obtains `1' she guesses $k_T = 1$: the probability of a correct guess is $\frac{3}{4}$, thereby beating a random guess. Can the probability of success be raised closer to unity? The answer is affirmative, and is given by a straightforward extension of the protocol, where $\cal{A}$ now consists of $(N-1)$ additional identical quantum objects, each of them set in the reference state $k_A=0$. The extended protocol is presented in Appendix \ref{appA}, where it is shown that Alice's probability $P_W$ of correctly guessing the required bit is
\begin{equation}
P_W=1-\frac{1}{2N},
\end{equation}
that asymptotically reaches unity for large $N$. Note that if Alice at the end of the process wants to guess which of the objects is the one containing $k_T$ (i.e. $\cal{T}$), she is able to do so only with probability $\frac{1}{N}$. Alternatively, instead of implementing measurement $\left\{\Pi_0, \Pi_1 \right\}$, she could have measured $\cal{D}$ in the $\left\{\ket{\mathbb{1}}_D, \ket{S}_D\right\}$ basis and found out with certainty the location of $k_T$, which would however not have provided her with any knowledge of the value of $k_T$. There is thus a trade-off between the possibility of acquiring knowledge of the value of $k_T$ and of retaining knowledge of its location, which is inherited from the non-commutativity of the observables on $\cal{D}$ that would correspondingly need to be measured.
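Since the interaction \eqref{eq6} involves only one qubit for the device and one internal qubit per spatial mode, the $N=2$ protocol can be checked numerically in a few lines. The following Python sketch (our illustration, not part of the protocol itself) builds the post-interaction state, traces out the two spatial modes, and recovers the success probability $\frac{3}{4}$:
\begin{verbatim}
import numpy as np

e0, e1 = np.array([1., 0.]), np.array([0., 1.])   # basis kets for each factor
phi = (e0 + e1) / np.sqrt(2)                      # (|1>_D + |S>_D)/sqrt(2), step (a)

def device_after_interaction(k_T):
    """Reduced state of D after the controlled-swap interaction;
    tensor factors: device x (internal state at x_A) x (internal state at x_T)."""
    k = e1 if k_T == 1 else e0
    psi = (np.kron(e0, np.kron(e0, k))            # |1>_D |0>_{x_A} |k_T>_{x_T}
           + np.kron(e1, np.kron(k, e0))) / np.sqrt(2)
    rho = np.outer(psi, psi).reshape(2, 4, 2, 4)
    return np.einsum('imjm->ij', rho)             # partial trace over the two modes

Pi0 = np.outer(phi, phi)                          # projector used in step (c)
p_win = 0.5 * (np.trace(Pi0 @ device_after_interaction(0)).real
               + np.trace((np.eye(2) - Pi0) @ device_after_interaction(1)).real)
print(p_win)                                      # 0.75, beating a random guess
\end{verbatim}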
\subsection*{The necessity of entanglement and indistinguishability}
In the protocol presented in the previous subsection, the interaction between the target objects ($\cal{T}$ and $\cal{A}$) and the device $\cal{D}$ produces a $k_T$-dependent back-reaction on the latter: in particular, the device gets entangled with the target objects if and only if $k_T=1$. This suggests that the possibility of establishing entanglement between the device and the targets, along with the targets being indistinguishable, may be a necessary ingredient for the protocol to work. However, it is not yet clear whether this is the case, since the transformation defined abstractly in Eq. \eqref{eq11} admits other quantum-mechanical realizations besides the ideal quantum control gate that we assumed in Eq. \eqref{eq2}: we thus cannot yet exclude the possibility of there being a noisy control gate that produces a $k_T$-dependent transformation on the device even without the latter getting entangled with the target objects. Nevertheless, as we will sketch here (while leaving the proof for Appendix \ref{appB}), the possibility of establishing entanglement between $\cal{D}$ and the targets turns out after all to be a necessary condition for any quantum-mechanical protocol to solve the task.\\
Let us assume that $N=2$ and that the two target objects are indistinguishable. As before, states $\ket{\mathbb{1}}_D$ and $\ket{S}_D$ pertaining to $\mathcal{D}$ correspond to the `identity' and `swap' operations. The assumption of our task is that the interaction between $\mathcal{D}$ and the targets is such that postselecting the final state on $\ket{\mathbb{1}}_D$ leaves the targets' state invariant, whereas postselecting on $\ket{S}_D$ leaves the targets in a `swapped' state. More precisely, the most general allowed interaction $G$ is a CPTP map that satisfies the following: for any quantum states $\rho^{(D)}$ and $\rho^{(T)}$ of the device $\cal{D}$ and the targets $\cal{T}$ and $\cal{A}$, the post-interaction state $\tilde{\rho} \equiv G(\rho^{(D)} \otimes \rho^{(T)})$ satisfies
\begin{equation}\label{gcontrol}
\begin{split}
\Tr_D \left[ (\Pi_1 \otimes \mathbb{1}) \tilde{\rho}\right]= p_1 \rho^{(T)}\\
\Tr_D \left[ (\Pi_S \otimes \mathbb{1}) \tilde{\rho}\right]= p_S \hat{S}\rho^{(T)}\hat{S}^{\dagger},
\end{split}
\end{equation}
where $\Pi_{1/S}\equiv \ket{\mathbb{1}/S}_D\bra{\mathbb{1}/S}$, and $\hat{S}$ is the `swap' operator acting on the targets. The factors $p_1\equiv \Tr \left(\Pi_1 \rho^{(D)} \right)$ and $p_S\equiv \Tr \left(\Pi_S \rho^{(D)} \right)$ are determined by the requirement on how $G$ acts in the case of the initial state of $\mathcal{D}$ being $\ket{\mathbb{1}}_D$ or $\ket{S}_D$.\\
Let us label with $\rho^{(T)}_{k_T}$ the initial state of the targets when the unknown bit's value is $k_T$. For any gate $G$ that satisfies Eq. \eqref{gcontrol} and any initial state $\rho^{(D)}$ of $\mathcal{D}$, our task can be solved with probability higher than $\frac{1}{2}$ only if the final reduced state of $\mathcal{D}$ depends on $k_T$, i.e.
\begin{equation}\label{gcontrol2}
\Tr_T \left( \tilde{\rho}_0 \right) \neq \Tr_T \left(\tilde{\rho}_1\right),
\end{equation}
where $\tilde{\rho}_{k_T} \equiv G(\rho^{(D)} \otimes \rho^{(T)}_{k_T})$.
The permutation invariance of state $\rho^{(T)}_0$ implies that $\tilde{\rho}_0$ is a separable state. On the other hand, Eq. \eqref{gcontrol2} holds if and only if $\tilde{\rho}_1$ is an entangled state, as we prove in Appendix \ref{appB}, wherein we also prove a generalization of the latter condition for arbitrary $N>2$.
Therefore, the necessary conditions for a quantum-mechanical protocol to outperform a random guess are that (i) \textit{it involves indistinguishable target objects}, and (ii) \textit{the final quantum state of the device and the target objects is entangled if and only if $k_T=1$.}
\section{Discussion}\label{sec3}
We have seen in the previous section that Alice can accomplish her task thanks to the indistinguishability of the target objects and the possibility of establishing entanglement between the latter and the device. We now want to emphasize that the task and its solution are reminiscent of the \textit{swap test} \cite{buhrman2001quantum}, which enables one to check whether two quantum systems are prepared in equal states by performing a control-swap operation on them, with the `swap' operation acting as $\ket{\psi}\ket{\phi} \rightarrow \ket{\phi}\ket{\psi}$. This offers another angle on how to understand the necessity of the targets' indistinguishability for the quantum protocol to solve our task. Namely, when applied to indistinguishable particles, a \textit{spatial swap} acts on spatial modes exactly as the `swap' operation that is required in the swap-test, whereas for distinguishable particles the two operations are strictly different: indistinguishability thus enables the allowed operations to be used to implement a standard swap-test. Therefore our task presents an instance of quantum indistinguishability serving as an information-theoretic resource, which leads us back to the ongoing debate on the possible physical merit of the entanglement apparently intrinsic to indistinguishable particles, as mentioned in the Introduction. Indeed, notice that our task cannot be solved with distinguishable particles under the requirement that the target objects $\cal{T}$ and $\cal{A}$ be prepared independently. However, if we drop this assumption, the task can be equally well solved with mutually entangled distinguishable particles, as can be noticed by a mere re-interpretation of the quantum state associated to indistinguishable particles in the first-quantization notation. Indistinguishability and entanglement thus represent equivalent information-theoretic resources in our task, which may be taken to support the claim that the entanglement that appears in the first-quantization notation is more than a mere representational artefact. However, we do not want to delve here into a more detailed discussion on whether indistinguishable particles need to (or can) be considered as necessarily entangled or not, the answer to which would depend strongly on the particular definitions and measures of entanglement \cite{johann2021locality}: our aim is just to point out that there exists an information-theoretic task in which indistinguishability and entanglement serve as equivalent resources.\\
Let us now comment on possible future developments of our results. In the current manuscript we have provided only a quantum-mechanical analysis of our task; however, we believe that the latter, as presented abstractly at the beginning of Section \ref{sec2}, can be transposed into the framework of generalized probabilistic theories \cite{barrett2007information, plavala2021general}, which would enable the assessment of how much our results hinge on the specificities of quantum theory. We may thereby gain understanding of the general relation between indistinguishability and entanglement, by analyzing questions such as: in which other operational theories are the indistinguishability of the targets and the possibility of entangling the latter and the device, both necessary to solve the task? In which theories do indistinguishability and entanglement of the targets constitute equivalent resources for our task?\\
Finally, moving on to potential practical aspects of our findings, it is commonly expected that, following Moore's law, hardware components used in information processing may soon reach a regime in which quantum-mechanical effects cannot be neglected \cite{powell2008quantum}. When (and if) that becomes the case, our results may prove relevant for hardware-security modules that incorporate physical protection against tampering. Examples of this type of device can already be found in both classical and quantum computing, where the security of one-time programs essentially relies on hardware components, i.e. on \textit{one-time memories} \cite{broadbent2013quantum, roehsner2018quantum}. Further potential applications of our findings may be found in cryptography and hardware security in general.\\
\begin{acknowledgments}
S.H. and B.D. acknowledge financial support from the Austrian Science Fund (FWF) through BeyondC-F7112.
\end{acknowledgments}
\subsection{Motivation and definitions}
The topic of spacing distributions in random matrix ensembles is almost
as old as the introduction of random matrix theory into nuclear physics.
Both events can be traced back to Wigner in the mid 1950's \cite{Wi55,
Wi57}. Thus Wigner introduced the model of a large real symmetric random
matrix, in which the upper triangular elements are independently
distributed with zero mean and constant variance, for purposes of
reproducing the statistical properties of the highly excited energy levels
of heavy nuclei. This was motivated by the gathering of experimental data
on the spectrum of isotopes such as ${}^{238}$U at energy levels beyond
neutron threshold. Wigner hypothesized that the statistical properties of
the highly excited states of complex nuclei would be the same as those of the
eigenvalues of large random real symmetric matrices.
For the random matrix model to be of use at a
quantitative level, it was necessary to deduce analytic forms of
statistics of the eigenvalues which could be compared against statistics
determined from experimental data.
What are natural statistics for a sequence of energy levels, and can these
statistics be computed for the random matrix model? Regarding the first
question, let us think of the sequence as a point process on the line, and
suppose for simplicity that the density of points is uniform and has been
normalized to unity. For any point process in one dimension a fundamental
quantity is the probability density function for the event that given
there is a point at the origin, there is a point in the interval
$[s,s+ds]$, and further there are $n$ points somewhere in between
these points and thus in the interval
$(0,s)$. Let us denote the probability density function
by $p(n;s)$. In the language of energy levels,
this is the spacing distribution between levels $n$ apart.
Another fundamental statistical quantity is the $k$-point distribution
function $\rho_{(k)}(x_1,\dots,x_k)$. This can be defined recursively, starting
with $\rho_{(1)}(x)$, by the requirement that
\begin{equation}\label{r2.a}
\rho_{(k)}(x_1,\dots,x_k)/\rho_{(k-1)}(x_1,\dots,x_{k-1})
\end{equation}
is equal to the density of points at $x_k$, given there are points at
$x_1,\dots, x_{k-1}$. One sees from the definitions that
\begin{equation}\label{r2}
{\rho_{(2)}(0,s) \over \rho_{(1)}(0)} =
\sum_{n=0}^\infty p(n;s).
\end{equation}
From empirical data of a long energy level sequence, the quantity $p(n;s)$
for small values of $n$ at least is readily estimated (the statistical
uncertainty gets worse as $n$ increases). Use of (\ref{r2})
then allows for an estimation of $\rho_{(2)}(0;s)$.
We thus seek the theoretical determination of $p(n;s)$ for matrix
ensembles.
\subsection{Spacing between primes}
Before taking up the problem of determining $p(n;s)$ for matrix ensembles,
which is the theme of these lectures, let
us digress a little and follow the line of introduction to spacing
distributions given by Porter in the review he wrote as part of the book
\cite{Po65}, which collected together the major papers written in the field
up to 1965. Porter's introduction is particularly relevant to the theme
of the present school because it uses the prime numbers as an example of
a deterministic sequence which, like energy levels of heavy nuclei,
nevertheless exhibit pronounced stochastic features.
It turns out the spacing distributions between primes relate to perhaps
the simplest example of a point process. This is when the probability that
there is a point in the interval $[s,s+ds]$ is equal to $ds$, independent
of the location of the other points. This generates the so called Poisson
process with unit density, or in the language of statistical mechanics, a
perfect gas. By definition of the process the ratio (\ref{r2.a}) is unity
for all $k$ and thus
\begin{equation}\label{r3b}
\rho_{(k)}(x_1,\dots,x_k)=1.
\end{equation}
To compute $p(n;s)$, we think of the Poisson process as the $N \to \infty$
limit of a process in which each unit interval on the line is broken up into
$N$ equal sub-intervals, with the probability of there being a particle in
any one of the subintervals equal to $1/N$. Thus
\begin{equation}\label{r3}
p(n;s) = \lim_{N \to \infty} ( 1 - {1 \over N} )^{sN-n} N^{-n}
\Big ( {sN \atop n} \Big ) = {s^n \over n!} e^{-s}.
\end{equation}
In the first equality of (\ref{r3}), the first factor is the probability that
$sN-n$ subintervals do not contain a particle, the second factor is the
probability that $n$ subintervals do contain a particle, while the final factor
is the number of ways of choosing $n$ occupied sites amongst $sN$ sites in
total. The probability density in the final equality of (\ref{r3}) is the
Poisson distribution. Substituting (\ref{r3}) in (\ref{r2}) gives
$\rho_{(2)}(0,x)=1$, as required by (\ref{r3b}).
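The limit in (\ref{r3}) can also be checked by direct simulation: successive gaps of a unit-density Poisson process are independent standard exponentials, so the process is trivial to sample. A short Python sketch (our illustration):
\begin{verbatim}
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
pts = np.cumsum(rng.exponential(1.0, size=200_000))  # unit-density Poisson points

n, s, ds = 1, 1.5, 0.05
gaps = pts[n + 1:] - pts[:-(n + 1)]                  # point to (n+1)-th successor
empirical = np.mean((gaps >= s) & (gaps < s + ds)) / ds
print(empirical, s**n * exp(-s) / factorial(n))      # both near p(1;1.5) ~ 0.335
\end{verbatim}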
The distribution (\ref{r3}) ties in with prime numbers through Cram\'er's
model (see the lectures by Heath-Brown in the present volume). In
this approximation, statistically the primes are regarded as forming a
Poisson process on the positive integer lattice. The probability of
occupation of the $N$th site is taken to equal $1/\log N$, so as to be
consistent with the prime number theorem. Cram\'er's model predicts that as
an approximation
\begin{equation}\label{r4}
p^{(N)}(n;s) = {s^n \over n!} e^{-s}, \qquad s = t/\log N
\end{equation}
where $p^{(N)}(n;s)$ refers to the probability that for primes $p$ in the
neighbourhood of $N$, there is a prime at $p+t$, and furthermore
there are exactly $n$ primes between $p$ and $p+t$.
To compare the prediction (\ref{r4}) against empirical data, we choose a
value of $N$, say $10^9$, and for the subsequent $M$ primes (say
$M=2,000$) record the distance to the following prime (in relation to
$p^{(N)}(0;s)$) and the distance to the second prime after it
(in relation to $p^{(N)}(1;s)$). We form a histogram, with the scale on the
horizontal axis measured in units of $s=t/\log N$, where $t$ is the actual
spacing. The natural units for $t$ are multiples of 2, and this provides
a width for the bars of the histogram. We see from Figure \ref{af.1}
that the general trend of the histograms does indeed follow the
respective Poisson distributions.
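The data for Figure \ref{af.1} is easily regenerated; a Python sketch of ours, using the nextprime function from sympy (building the histograms from s1 and s2 is then routine):
\begin{verbatim}
from math import log
from sympy import nextprime

N, M = 10**9, 2000
primes = [nextprime(N)]                  # 10**9 + 7
for _ in range(M + 1):
    primes.append(nextprime(primes[-1]))

logN = log(N)
s1 = [(primes[i + 1] - primes[i]) / logN for i in range(M)]  # vs p(0;s)=e^{-s}
s2 = [(primes[i + 2] - primes[i]) / logN for i in range(M)]  # vs p(1;s)=s e^{-s}
print(sum(s1) / M, sum(s2) / M)          # mean spacings, approximately 1 and 2
\end{verbatim}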
\vspace{.5cm}
\begin{figure}
\epsfxsize=12cm
\centerline{\epsfbox{doubleprime.eps}}
\caption{\label{af.1} Distribution of the spacing $t$ between primes (leftmost
graph) and the spacing $t$ between every second prime for $2,000$ consecutive
primes starting with $N=10^9+7$. The distributions are given in units of
$s=t/\log N$. The smooth curves are the Poisson distributions $p(0;s) =
e^{-s}$ and $p(1;s)= se^{-s}$. }
\end{figure}
\subsection{Empirical determination of spacing distributions for matrix
ensembles}
Wigner's interest was in the statistical properties of the eigenvalues
of large real symmetric random matrices. More particularly, he sought the
statistical properties of the eigenvalues in what may be termed the bulk of
the spectrum (as opposed to the edge of the spectrum \cite{Fo93a}). The
eigenvalues in this region are characterized by having a uniform
density, which after rescaling (referred to as `unfolding') may be taken as
unity. In distinction to the situation with the sequence of primes, for
random matrices it is not necessary to study the statistical properties of
a large sequence of (unfolded) eigenvalues from a single matrix. Rather
the spacing distributions with respect to the middle eigenvalue (this is
the eigenvalue most in the bulk) in multiple samples from the class
of random matrices in question can be listed, and then this list used to
create a histogram. Moreover, to approximate large matrix size
behaviour, it is only necessary to consider quite small matrix sizes,
say $13 \times 13$.
In Figure \ref{af.2} we have plotted the empirical determination of
$p(0;s)$ and $p(1;s)$ obtained from lists of eigenvalue spacings for
realizations of the so called GOE (Gaussian orthogonal ensemble) eigenvalue
distribution.
As we know from the lectures of Fyodorov in this volume, the
GOE consists of
real symmetric random matrices, with each diagonal element chosen from the
normal distribution N$[0,1]$, and each (strictly) upper triangular element
chosen from the normal distribution N$[0,1/\sqrt{2}]$. For such matrices,
it is well known that to leading order in the matrix rank $N$, the
eigenvalue density is given by the Wigner semi-circle law
$$
\rho_{(1)}(x) = {\sqrt{2N} \over \pi} \sqrt{ 1 - {x^2 \over 2 N} }.
$$
Multiplying the eigenvalues at point $x$ by this factor allows us to unfold
the sequence giving a mean eigenvalue spacing of unity.
A less well known, and much more recent result relating to GOE matrices is
that their spectrum can be realized without having to diagonalize a
matrix \cite{DE02} (see also \cite{FR02b}). Thus one has that the roots
of the random polynomial $P_N(\lambda)$, defined recursively by the
stochastic three term recurrence
\begin{equation}\label{r7}
P_k(\lambda) = (\lambda - a_k) P_{k-1}(\lambda) - b_{k-1}^2 P_{k-2}(\lambda)
\end{equation}
where
$$
a_k \: \sim \: {\rm N}[0,1], \qquad
b_k^2 \: \sim \: {\rm Gamma}[k/2,1],
$$
have the same distribution as the eigenvalues of GOE matrices
(the notation Gamma$[s,\sigma]$ denotes the gamma distribution with
density proportional to $x^{s-1} e^{-x/\sigma}$). Generating
such polynomials and finding their zeros then provides us with a
sequence
distributed as for GOE
eigenvalues, from which we have determined $p(0;s)$ and
$p(1;s)$.
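In practice, since (\ref{r7}) is precisely the three term recurrence satisfied by the characteristic polynomials of the leading minors of a symmetric tridiagonal matrix with diagonal $(a_1,\dots,a_N)$ and off-diagonal $(b_1,\dots,b_{N-1})$, the zeros of $P_N(\lambda)$ are obtained by a fast tridiagonal diagonalization. A Python sketch (ours), including the unfolding by the semicircle density at the origin:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
N, spacings = 13, []
for _ in range(2000):
    a = rng.standard_normal(N)                     # a_k ~ N[0,1]
    b = np.sqrt(rng.gamma(np.arange(1, N) / 2.0))  # b_k^2 ~ Gamma[k/2,1]
    lam = eigh_tridiagonal(a, b)[0]                # zeros of P_N(lambda)
    rho0 = np.sqrt(2 * N) / np.pi                  # semicircle density at x = 0
    spacings.append(rho0 * (lam[N // 2] - lam[N // 2 - 1]))  # unfolded middle gap
print(np.mean(spacings))                           # close to 1 after unfolding
\end{verbatim}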
\vspace{.5cm}
\begin{figure}
\epsfxsize=12cm
\centerline{\epsfbox{double.gue.eps}}
\caption{\label{af.2} Plot of the distribution of the unfolded spacing
between the 6th and 7th, and 7th and 8th eigenvalues
(pooled together) for 2,000 samples from the $13\times 13$ GOE
eigenvalue distribution. The smooth curve is the Wigner surmise
(\ref{ws}). The rightmost graph is the distribution between the 6th and
8th eigenvalues in the same setting, while the smooth curve in this
case is $(1/2) p_4(0;s/2)$ with $p_4$ given by (\ref{2.10b}).
}
\end{figure}
\section{Eigenvalue product formulas for gap probabilities}
\setcounter{equation}{0}
\subsection{Theory relating to $p(n;s)$}
Consider a point process consisting of a total of $N$ points. Let the joint
probability density function of the $N$ points be denoted $p(x_1,\dots,
x_N)$. A quantity closely related to the spacing distribution $p(0;s)$
is the gap probability
\begin{equation}
E^{\rm bulk}(0;s) := \lim_{N \to \infty} a_N^N \int_{\bar{I}} dx_1
\cdots \int_{\bar{I}} dx_N \, p(a_N x_1,\dots, a_N x_N)
\end{equation}
where $\bar{I} = (-\infty,\infty) - (-s/2,s/2)$ and $a_N$ is the leading large
$N$ form of the local density at the origin (and thus the unfolding factor).
Thus it is easy to see that
\begin{equation}\label{pE}
p(0;s) = {d^2 \over d s^2} E^{\rm bulk}(0;s).
\end{equation}
More generally we can define
\begin{equation}
E^{\rm bulk}(n;s) := \lim_{N \to \infty} \Big ( {N \atop n} \Big )
a_N^N \int_{-s/2}^{s/2} dx_1 \cdots \int_{-s/2}^{s/2} dx_n
\int_{\bar{I}} dx_{n+1} \cdots \int_{\bar{I}} dx_N \,
p(a_N x_1,\dots, a_N x_N).
\end{equation}
These quantities can be calculated from the generating function
\begin{equation}\label{ap}
E^{\rm bulk}(s;\xi) := \lim_{N \to \infty} a_N^N
\int_{-\infty}^\infty dx_1 \cdots \int_{-\infty}^\infty dx_N \,
\prod_{l=1}^N (1 - \xi \chi_{(-s/2,s/2)}^{(l)} )
p(a_N x_1,\dots, a_N x_N),
\end{equation}
where $\chi_J^{(l)} = 1$ for $x^{(l)} \in J$ and
$\chi_J^{(l)} = 0$ otherwise,
according to the formula
\begin{equation}\label{2.5x}
E^{\rm bulk}(n;s) = {(-1)^n \over n!} {\partial^n \over \partial \xi^n }
E^{\rm bulk}(s;\xi) \Big |_{\xi = 1}.
\end{equation}
It follows from the definitions that
\begin{equation}
p(n;s) = {d^2 \over ds^2} E^{\rm bulk}(n;s) + 2 p(n-1;s) -
p(n-2;s),
\end{equation}
or equivalently
\begin{equation}
p(n;s) = {d^2 \over ds^2} \sum_{j=0}^n (n-j+1) E^{\rm bulk}(j;s).
\end{equation}
Hence knowledge of $\{E^{\rm bulk}(j;s)\}_{j=0,\dots,n}$ is sufficient for
the calculation of $p(n;s)$.
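As a check, for the Poisson process one has $E^{\rm bulk}(n;s) = p(n;s) = s^n e^{-s}/n!$, and the above relations can then be verified symbolically (a sketch of ours):
\begin{verbatim}
import sympy as sp

s = sp.symbols('s', positive=True)
E = lambda j: s**j * sp.exp(-s) / sp.factorial(j)  # Poisson E(j;s), equal to p(j;s)
for n in range(5):
    lhs = sp.diff(sum((n - j + 1) * E(j) for j in range(n + 1)), s, 2)
    print(n, sp.simplify(lhs - E(n)))              # prints 0 for every n
\end{verbatim}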
It is possible to relate (\ref{ap}) to the $k$-point distribution
functions. In the finite system the latter are given by
\begin{equation}\label{ap1}
\rho_{(k)}^{(N)}(x_1,\dots,x_k) = {N! \over (N-k)!}
\int_{-\infty}^\infty dx_{k+1} \cdots \int_{-\infty}^\infty dx_N \,
p(x_1,\dots,x_N).
\end{equation}
With
$$
\rho_{(k)}^{\rm bulk}(x_1,\dots,x_k) := \lim_{N \to \infty} a_N^k
\rho_{(k)}^{(N)}(a_N x_1,\dots,a_N x_k),
$$
by expanding (\ref{ap}) in a power series in $\xi$ and making use of
(\ref{ap1}) we see that
\begin{equation}\label{ap2}
E^{\rm bulk}(s;\xi) = 1 +
\sum_{k=1}^\infty {(-\xi)^k \over k!}
\int_{-s/2}^{s/2} dx_1 \cdots \int_{-s/2}^{s/2} dx_k \,
\rho_{(k)}^{\rm bulk}(x_1,\dots,x_k).
\end{equation}
For the limiting process to be rigorously justified, because $[-s/2,s/2]$
is a compact interval, it is sufficient that
$\rho_{(k)}^{\rm bulk}(x_1,\dots,x_k)$ be bounded by $M^k$ for some
$M > 0$.
With these basic formulas established, we will now proceed to survey some
of the main results relating to spacing distributions in the bulk of the
various matrix ensembles (orthogonal, unitary and symplectic symmetry
classes).
\subsection{Wigner surmise}
For the Poisson process we have seen that $p(0;s) = e^{-s}$. Thus in this
case the spacing distribution is actually maximum at zero separation
between the points. The opposite feature is expected for $p(0;s)$ in relation
to the eigenvalues of random real symmetric matrices, as can be seen by
examining the $2 \times 2$ case of matrices of the form
$$
A = \left [ \begin{array}{cc} a & b \\ b & c \end{array}
\right ].
$$
This matrix is diagonalized by the decomposition
$A = R {\rm diag}[\lambda_+,\lambda_-] R^T$ where
$$
R = \left [ \begin{array}{cc} \cos \theta & - \sin \theta \\
\sin \theta & \cos \theta \end{array} \right ].
$$
Expressing $a,b,c$ in terms of $\lambda_+,\lambda_-, \theta$ it is simple to
show
\begin{equation}\label{2.10'}
da db dc = |\lambda_+ - \lambda_-| d \lambda_+ d \lambda_- d \theta.
\end{equation}
Thus for small separation $s:= |\lambda_+ - \lambda_-|$ the probability density
function vanishes linearly.
Let $\mu(s)$ denote the small $s$ behaviour of $p(0;s)$. We have seen that
for the Poisson process $\mu(s)=1$, while for the bulk eigenvalues of real
symmetric matrices $\mu(s) \propto s$. Wigner hypothesized \cite{Wi57}
that as with the
Poisson process, $p(0;s)$ for the bulk eigenvalues of random real symmetric
matrices could be deduced from the ansatz
\begin{equation}\label{ws1}
p(0;s) = c_1 \mu(s) \exp \Big ( - c_2 \int_0^s \mu(t) \, dt \Big )
\end{equation}
where the constants $c_1$ and $c_2$ are determined by the normalization
requirements
$$
\int_0^\infty p(0;s) \, ds = 1, \qquad
\int_0^\infty s p(0;s) \, ds = 1
$$
(the second of these says that the mean spacing is unity). Thus one arrives
at the so called Wigner surmise
\begin{equation}\label{ws}
p(0;s) = {\pi \over 2} s e^{- \pi s^2 / 4}
\end{equation}
for the spacing distribution of the bulk eigenvalues of random real symmetric
matrices.
The ansatz (\ref{ws1}) does not apply if instead of real symmetric matrices
one considers complex Hermitian matrices, or Hermitian matrices with real
quaternion elements. Examining the $2 \times 2$ case (see the introductory
article by Porter in \cite{Po65}) one sees that in the analogue of
(\ref{2.10'}), the factor $|\lambda_+-\lambda_-|$ should be replaced by
$|\lambda_+-\lambda_-|^\beta$ with $\beta=2$ (complex elements) or
$\beta=4$ (real quaternion elements). Choosing the elements to be appropriate
Gaussians, one can reclaim (\ref{ws}) and furthermore obtain
\begin{equation}\label{2.10b}
p_2(0;s) = {32 s^2 \over \pi^2} e^{-4 s^2/\pi}, \qquad
p_4(0;s) = {2^{18} s^4 \over 3^6 \pi^3} e^{-64 s^2/9 \pi}
\end{equation}
as approximations to the spacing distributions in the cases $\beta=2$ and
$\beta=4$ respectively.
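The normalization computations behind (\ref{ws}) and (\ref{2.10b}) are conveniently left to a computer algebra system: for the $2 \times 2$ ensembles the spacing density has the exact functional form $c_1 s^\beta e^{-c_2 s^2}$, and the two normalization conditions determine $c_1$ and $c_2$. A sympy sketch (our illustration):
\begin{verbatim}
import sympy as sp

s, c2 = sp.symbols('s c2', positive=True)
for beta in (1, 2, 4):
    f = s**beta * sp.exp(-c2 * s**2)
    c1 = 1 / sp.integrate(f, (s, 0, sp.oo))                # total probability 1
    c2val = sp.solve(sp.Eq(c1 * sp.integrate(s * f, (s, 0, sp.oo)), 1), c2)[0]
    print(beta, sp.simplify((c1 * f).subs(c2, c2val)))     # (ws) and (2.10b)
\end{verbatim}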
\subsection{Fredholm determinant evaluations}
A unitary invariant matrix ensemble of $N \times N$ random complex
Hermitian matrices has as its eigenvalue probability density function
\begin{equation}\label{3.1}
{1 \over C} \prod_{l=1}^N w_2(x_l) \prod_{1 \le j < k \le N}
(x_k - x_j)^2,
\end{equation}
which we will denote by UE${}_N(w_2)$. We know (see the lectures by Fyodorov
in this volume) that the $k$-point distribution function can be expressed
in terms of the monic orthogonal polynomials $\{p_k(x)\}_{k=0,1,\dots}$
associated with the weight function $w_2(x)$,
$$
\int_{-\infty}^\infty w_2(x) p_j(x) p_k(x) \, dx = h_j \delta_{j,k}.
$$
Thus with
\begin{eqnarray}\label{2.13'}
K_N(x,y) & = & (w_2(x) w_2(y) )^{1/2} \sum_{k=0}^{N-1} {p_k(x) p_k(y) \over h_k}
\nonumber \\
& = & (w_2(x) w_2(y) )^{1/2} {p_N(x) p_{N-1}(y) - p_N(y) p_{N-1}(x) \over h_{N-1} (x-y)}
\end{eqnarray}
we have
\begin{equation}\label{3.1a}
\rho_{(k)}^{(N)}(x_1,\dots,x_k) = \det \Big [ K_N(x_j, x_l)
\Big ]_{j,l=1,\dots,k}.
\end{equation}
This structure is significant for the evaluation of the generating function
\begin{equation}\label{3.2}
E_{N,2}(J;\xi;w_2) := \Big \langle \prod_{l=1}^N(1 - \xi \chi_J^{(l)})
\Big \rangle_{{\rm UE}_N(w_2)}
\end{equation}
(the subscript 2 on $E_{N,2}$ indicates the exponent in (\ref{3.1})). Expanding
(\ref{3.2}) in a power series analogous to (\ref{ap2}) we obtain
\begin{equation}\label{3.2a}
E_{N,2}(J;\xi;w_2) = 1 + \sum_{k=1}^N {(-\xi)^k \over k!}
\int_J dx_1 \cdots \int_J dx_k \, \det \Big [ K_N(x_j, x_l)
\Big ]_{j,l=1,\dots,k},
\end{equation}
where use has been made of (\ref{3.1a}). The sum in (\ref{3.2a})
occurs in the theory of
Fredholm integral equations \cite{WW65},
and is in fact an expansion of the determinant
of an integral operator,
\begin{equation}
E_{N,2}(J;\xi;w_2) = \det(1 - \xi K_J)
\end{equation}
where $K_J$ is the integral operator on the interval $J$ with kernel
$K_N(x,y)$,
$$
K_J[f](x) = \int_J K_N(x,y) f(y) \, dy.
$$
It is well known that in the bulk scaling limit, independent of the precise
functional form of $w_2(x)$,
\begin{equation}\label{3.3}
\lim_{N \to \infty} a_N K_N(a_N x, a_N y) =
{\sin \pi (x-y) \over \pi (x-y) } =: K^{\rm bulk}(x,y)
\end{equation}
for a suitable scale factor $a_N$. Thus we have
\begin{equation}\label{3.4}
E_2^{\rm bulk}(J;\xi) = \det (1 - \xi K_J^{\rm bulk})
\end{equation}
where $K_J^{\rm bulk}$ is the integral operator on the interval $J$ with
kernel (\ref{3.3}) (the so called sine kernel). This is a practical formula
for the computation of $E_2^{\rm bulk}$ if we can compute the
eigenvalues $\{ \mu_j \}_{j=0,1,\dots}$ of $K_J^{\rm bulk}$, since we
have
\begin{equation}\label{3.3e}
E_2^{\rm bulk}(J;\xi) = \prod_{j=0}^\infty (1 - \xi \mu_j).
\end{equation}
In fact for $J = (-s,s)$ the eigenvalues can be computed \cite{Ga61}
by relating $K_{(-s,s)}^{\rm bulk}$ to a differential operator which
has the prolate spheroidal functions as its eigenfunctions, and using
previously computed properties of this eigensystem.
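Nowadays a direct numerical route is also available: discretizing the integral operator by Gauss--Legendre quadrature, in the spirit of Bornemann's method for Fredholm determinants, converges spectrally fast. A Python sketch (ours; note that np.sinc$(t) = \sin(\pi t)/(\pi t)$):
\begin{verbatim}
import numpy as np

def sine_kernel_matrix(s, m=40):
    """Gauss-Legendre (Nystrom) discretization of the sine kernel on (-s, s)."""
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = s * x, s * w                          # rescale from (-1,1) to (-s,s)
    K = np.sinc(x[:, None] - x[None, :])         # np.sinc(t) = sin(pi t)/(pi t)
    return np.sqrt(w[:, None] * w[None, :]) * K  # symmetrized w^(1/2) K w^(1/2)

def E2_bulk(s, xi=1.0, m=40):
    A = sine_kernel_matrix(s, m)
    return np.linalg.det(np.eye(m) - xi * A)     # approximates det(1 - xi K)

mu = np.sort(np.linalg.eigvalsh(sine_kernel_matrix(1.0)))[::-1]  # mu_0 > mu_1 > ...
print(E2_bulk(1.0), np.prod(1 - mu))             # agree, per the product formula
\end{verbatim}
Increasing $m$ gives essentially machine precision for moderate $s$, and the same discretization also furnishes the eigenvalues $\mu_j$ required below.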
Wigner's interest was not in complex Hermitian random matrices, but rather
real symmetric random matrices. Orthogonally invariant ensembles of the
latter have an eigenvalue probability density function of the form
\begin{equation}
{1 \over C} \prod_{l=1}^N w_1(x_l) \prod_{1 \le j < k \le N}
|x_k - x_j|,
\end{equation}
to be denoted OE${}_N(w_1)$. For such matrix ensembles, the $k$-point
distribution function can be written as a quaternion determinant (or
equivalently Pfaffian) with an underlying $2 \times 2$ matrix kernel
(see e.g.~\cite[Ch.~5]{Fo02}). From this it is possible to show that
\begin{equation}
\Big ( E_1^{\rm bulk}(J;\xi) \Big )^2 = \det ( 1 - \xi K_{1,J}^{\rm bulk} )
\end{equation}
where $K_{1,J}^{\rm bulk}$ is the integral operator on $J$ with matrix kernel
\begin{equation}
K_1^{\rm bulk}(x,y) =
\left[
\begin{array}{cc}
\displaystyle{{\sin \pi(x-y) \over
\pi(x-y)}} & \displaystyle{{1 \over \pi } \int_0^
{ \pi (x-y)} {\sin t \over t} dt} -
{1 \over 2 }{\rm sgn}(x-y)
\\[.3cm]
\displaystyle{{\partial \over \partial x}
{\sin \pi(x-y) \over \pi
(x-y)}} & \displaystyle{{\sin \pi(x-y) \over \pi(x-y)}}
\end{array} \right].
\end{equation}
However, unlike the result (\ref{3.4}), this form has not been put to any
practical use.
Instead, as discovered by Mehta \cite{Me60}, a tractable formula results
from the scaling limit of an inter-relationship between the generating function
of an orthogonal symmetry gap probability and a unitary symmetry gap
probability. The inter-relationship states
\begin{equation}\label{me}
E_{2N,1}((-t,t);\xi;e^{-x^2/2}) \Big |_{\xi=1} =
E_{N,2}((0,t^2);\xi;y^{-1/2} e^{-y} \chi_{y > 0} ) \Big |_{\xi=1},
\end{equation}
and in the scaling limit leads to the result
\begin{equation}
E_1^{\rm bulk}((-s,s);\xi) \Big |_{\xi=1} =
\det ( 1 - K^{{\rm bulk}+}_{(-s,s)} )
\end{equation}
where $K^{{\rm bulk}+}_{(-s,s)}$ is the integral operator on $(-s,s)$
with kernel
\begin{equation}\label{2.24a}
{1 \over 2} \Big ( {\sin \pi (x-y) \over \pi (x-y) } +
{\sin \pi (x+y) \over \pi (x+y) } \Big ),
\end{equation}
which we recognize as the even part of the sine kernel (\ref{3.3}).
(For future reference we define $K^{{\rm bulk}-}_{(-s,s)}$ analogously,
except that the kernel consists of the difference of the two terms in
(\ref{2.24a}), or equivalently the odd part of the sine kernel
(\ref{3.3}).)
Because the eigenvalues $\mu_{2j}$ of the integral operator on $(-s,s)$
with kernel (\ref{3.3}) correspond to even eigenfunctions, while the
eigenvalues $\mu_{2j+1}$ correspond to odd eigenfunctions, we have that
\begin{equation}\label{ga}
E_1^{\rm bulk}((-s,s);\xi) \Big |_{\xi=1} =
\prod_{l=0}^\infty (1 - \mu_{2l}).
\end{equation}
Gaudin \cite{Ga61} used this formula, together with (\ref{pE}), to tabulate
$p_1^{\rm bulk}(0;s)$ and so test the accuracy of the Wigner surmise
(\ref{ws}). In
fact this confirmed the remarkable precision of the latter, with the
discrepancy between it and the exact value no worse than a few percent.
The case of Hermitian matrices with real quaternion elements and having a
symplectic symmetry remains. The eigenvalue p.d.f.~of the independent
eigenvalues (the spectrum is doubly degenerate) is then
\begin{equation}\label{2.26'}
{1 \over C} \prod_{l=1}^N w_4(x_l) \prod_{1 \le j < k \le N}
(x_k - x_j)^4,
\end{equation}
which we denote by SE${}_N(w_4)$. The computation of the corresponding
bulk gap probability relies on further inter-relationships between
matrix ensembles with different underlying symmetries. These apply to the
eigenvalue probability density function for Dyson's circular ensembles,
$$
{1 \over C} \prod_{1 \le j < k \le N} | e^{i \theta_k} - e^{i \theta_j}
|^\beta,
$$
where $\beta = 1,2$ or 4 according to the underlying symmetry being
orthogonal, unitary or symplectic
respectively. The corresponding matrix ensembles are referred to as the
COE${}_N$, CUE${}_N$ and CSE${}_N$ in order.
In the $N \to \infty$ scaling limit these
ensembles correspond with the bulk of the ensembles OE${}_N(w_1)$,
UE${}_N(w_2)$ and SE${}_N(w_4)$ respectively.
The first of the required inter-relationships was formulated by Dyson
\cite{Dy62} and proved by Gunson \cite{Gu62}. It states that
\begin{equation}\label{2.1a}
{\rm alt}( {\rm COE}_N \cup {\rm COE}_N ) = {\rm CUE}_N
\end{equation}
where the operation ${\rm COE}_N \cup {\rm COE}_N$ refers to the
superposition of two independent realizations of the ${\rm COE}_N$ and
alt refers to the operation of observing only every second member of the
sequence. The second of the required inter-relationships is due to
Dyson and Mehta \cite{DM63}. It states that
\begin{equation}\label{2.1b}
{\rm alt} \, {\rm COE}_{2N} = {\rm CSE}_N.
\end{equation}
(For generalizations of (\ref{2.1a}) and (\ref{2.1b}) to the ensembles
OE${}_N(w_1)$, UE${}_N(w_2)$ and SE${}_N(w_4)$ with particular
$w_1$, $w_2$ and $w_4$ see \cite{FR01}.)
Using (\ref{2.1a}) and (\ref{2.1b}) together one can deduce that in the
scaled limit
\begin{equation}
E_4^{\rm bulk}(0;(-s/2,s/2)) = {1 \over 2} \Big (
E_1^{\rm bulk}(0;(-s,s)) + {E_2^{\rm bulk}(0;(-s,s)) \over
E_1^{\rm bulk}(0;(-s,s)) } \Big ),
\end{equation}
which upon using (\ref{3.3e}) and (\ref{ga}) reads
\begin{equation}\label{ga1}
E_4^{\rm bulk}(0;(-s/2,s/2)) = {1 \over 2} \Big (
\prod_{l=0}^\infty(1 - \mu_{2l}) + \prod_{l=0}^\infty(1 - \mu_{2l+1})
\Big ).
\end{equation}
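Numerically, both products in (\ref{ga1}) are again Fredholm determinants: the even and odd parts of the sine kernel act on $(-s,s)$ with nonzero spectrum $\{\mu_{2l}\}$ and $\{\mu_{2l+1}\}$ respectively (cf.~(\ref{2.24a})). Reusing the quadrature idea above (a sketch of ours):
\begin{verbatim}
import numpy as np

def det_even_odd(s, sign, m=40):
    """det(1 - K) for the even (+1) or odd (-1) part of the sine kernel."""
    x, w = np.polynomial.legendre.leggauss(m)
    x, w = s * x, s * w
    K = 0.5 * (np.sinc(x[:, None] - x[None, :])
               + sign * np.sinc(x[:, None] + x[None, :]))
    return np.linalg.det(np.eye(m) - np.sqrt(w[:, None] * w[None, :]) * K)

s = 0.5
E1 = det_even_odd(s, +1)       # prod (1 - mu_{2l}) = E_1^bulk(0;(-s,s))
Dodd = det_even_odd(s, -1)     # prod (1 - mu_{2l+1})
print(E1, 0.5 * (E1 + Dodd))   # E_1^bulk(0;(-s,s)) and E_4^bulk(0;(-s/2,s/2))
\end{verbatim}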
Another consequence of (\ref{2.1b}) is that
\begin{equation}\label{2.34}
p_4(0;s) = 2 p_1(1;2s).
\end{equation}
It is this relationship, used together with the approximation for
$p_4(0;s)$ in (\ref{2.10b}), which is used to approximate $p(1;s)$ as a
smooth curve in Figure \ref{af.2}.
In summary, as a consequence of the pioneering work of Mehta, Gaudin and
Dyson, computable formulas in terms of the eigenvalues of the integral
operator on $(-s,s)$ with the sine kernel (\ref{3.3}) were obtained for
$$
E_2^{\rm bulk}((-s,s);\xi), \qquad E_1^{\rm bulk}(0;(-s,s)), \qquad
E_4^{\rm bulk}(0;(-s/2,s/2)).
$$
\section{Painlev\'e transcendent evaluations}
\setcounter{equation}{0}
\subsection{The results of Jimbo et al.}
An explicit connection between the multiple interval gap probability
$$
E_2^{\rm bulk}\Big ( \cup_{j=1}^p (a_{2j-1},a_{2j});\xi \Big )
$$
and integrable systems theory --- specifically the theory of
isomonodromic deformations of linear differential equations --- was made
by Jimbo, Miwa, M\^ori and Sato in 1980. Here the endpoints
$a_1, \dots, a_{2p}$ of the gap free intervals become dynamical time like
variables, inducing flows which turn out to be integrable.
As part of this study the quantity
\begin{equation}\label{e2}
E_2^{\rm bulk}((-s,s);\xi) = \det(1 - \xi K_{(-s,s)}^{\rm bulk}) =
\prod_{j=0}^\infty (1 - \xi \mu_j)
\end{equation}
was expressed in terms of the solution of a nonlinear equation. In fact
knowledge of (\ref{e2}) is sufficient to calculate the products appearing
in (\ref{ga}) and (\ref{ga1}). Thus with
$$
D_+(s;\xi) := \prod_{j=0}^\infty(1 - \xi \mu_{2j}), \qquad
D_-(s;\xi) := \prod_{j=0}^\infty(1 - \xi \mu_{2j+1})
$$
Gaudin (see \cite{Me91}) has shown
\begin{equation}\label{y1}
\log D_{\pm}(s;\xi) = {1 \over 2} \log E_2^{\rm bulk}((-s,s);\xi) \pm
{1 \over 2} \int_0^s \sqrt{ - {d^2 \over dx^2} \log
E_2^{\rm bulk}((-x,x);\xi) } \, dx.
\end{equation}
The result of \cite{JMMS80} is that
\begin{equation}\label{jmms}
E_2^{\rm bulk}((-s,s);\xi) = \exp \int_0^{2 \pi s} {\sigma (u;\xi) \over u}
\, du
\end{equation}
where $\sigma(u;\xi)$ satisfies the nonlinear differential equation
\begin{equation}\label{jj1}
(u \sigma'')^2 + 4(u \sigma' - \sigma) ( u \sigma' - \sigma + (\sigma')^2 ) = 0
\end{equation}
subject to the boundary condition
$$
\sigma(u;\xi) \mathop{\sim}\limits_{u \to 0^+} - {\xi u \over \pi}.
$$
In fact the equation (\ref{jj1}) is an example of the so called $\sigma$ form
of a Painlev\'e V equation. In view of this it is appropriate to give some
background into the Painlev\'e theory, following \cite{IKSY91}. First we
remark that the
Painlev\'e differential equations are second order nonlinear equations
isolated as part of the study of Painlev\'e and his students into the
moveable singularities of the solution of such equations.
Earlier Fuchs and Poincar\'e had studied first order differential equations
of the form
\begin{equation}\label{Pp}
P(y',y,t) = 0
\end{equation}
where $P$ is a polynomial in $y', y$ with coefficients meromorphic in $t$.
In contrast to linear differential equations, nonlinear equations have the
property that the position of the singularities of the solution will depend
in general on the initial condition. The singularities are then said to be
moveable. For example
\begin{equation}\label{Pp1}
{dy \over dt} = y^2
\end{equation}
has the general solution $y = 1/(c-t)$, where $c$ determines the
initial condition, and so exhibits a moveable first order pole. The
nonlinear equation
$$
y {d y \over dt} = {1 \over 2}
$$
has the general solution $y = (t - c)^{1/2}$, which exhibits a moveable
branch point. Fuchs and Poincar\'e sought to
classify all equations of the form (\ref{Pp}) which are free of
moveable branch points and essential singularities. They were able to show that up to an
analytic change of variables, or fractional linear transformation, the only
such equations with this property were the differential equation
of the Weierstrass ${\cal P}$-function,
\begin{equation}\label{6.12}
\Big ( {d y \over dt} \Big )^2 = 4y^3 - g_2 y - g_3,
\end{equation}
or the Riccati equation
\begin{equation}\label{6.13}
{dy \over dt} = a(t) y^2 + b(t) y + c(t)
\end{equation}
where $a, b, c$ are analytic in $t$ (note that (\ref{Pp1}) is of the
latter form).
Painlev\'e then took up the same problem as that addressed by Fuchs and
Poincar\'e, but now with respect to second order differential equations
of the form
$$
y'' = R(y',y,t)
$$
where $R$ is a rational function in all arguments. It was found that the only
equations of this form with no moveable branch points or essential singularities were
either reducible to (\ref{6.12}) or (\ref{6.13}), reducible to a linear
differential equation, or were one of six new nonlinear differential
equations, now known as the Painlev\'e equations. As an explicit example
of the latter, we note the Painlev\'e V equation reads
\begin{equation}\label{PV}
y'' = \Big ( {1 \over 2y} + {1 \over y - 1} \Big ) (y')^2
- {1 \over x} y' + {(y-1)^2 \over x^2} \Big ( \alpha y + {\beta \over
y} \Big ) + {\gamma y \over x} + {\delta y (y+1) \over y - 1}
\end{equation}
where $\alpha, \beta, \gamma, \delta$ are parameters.
An immediate question is to how (\ref{PV}) relates to (\ref{jj1}).
For this one must develop a Hamiltonian theory of the Painlev\'e
equations. The idea is to present a
Hamiltonian $H=H(p,q,t;\vec{v})$, where the components of
$\vec{v}$ are parameters, such that after eliminating $p$ in the
Hamilton equations
\begin{equation}\label{6.21}
q ' = {\partial H \over \partial p}, \qquad
p' = - {\partial H \over \partial q},
\end{equation}
$q'$ and $p'$ denoting derivatives with respect to $t$,
the equation in $q$ is the appropriate Painlev\'e equation
(in (\ref{6.21}) the role of $p$ and $q$ is interchanged relative to their
usual meaning of position and momentum in physics; here we are following
the convention of Okamoto). Malmquist \cite{Ma22} was the
first to present such Hamiltonians, although his motivation was not to
further the development of the Painlev\'e theory itself. This was left to
Okamoto in a later era, and it is aspects of his theory we will briefly
present here.
The Hamiltonian for the PV equation as presented by Okamoto \cite{OK87} is
\begin{eqnarray}
t H_V & = & q(q-1)^2p^2 - \{ (v_1 - v_2)(q-1)^2 - 2(v_1 + v_2)q(q-1) + tq
\} p \nonumber \\&& \qquad + (v_3 - v_2)(v_4 - v_2) (q-1),
\end{eqnarray}
where the parameters are constrained by $v_1+v_2+v_3+v_4=0$ and are
further related to those in (\ref{PV}) according to
$$
\alpha = {1 \over 2}(v_3 - v_4)^2, \: \:
\beta = - {1 \over 2} (v_1 - v_2)^2, \:
\gamma = v_1 + 2 v_2 - 1, \: \: \delta = - {1 \over 2}.
$$
It turns out that, as a consequence of the Hamilton equations (\ref{6.21}),
$t H_V$ itself satisfies a nonlinear differential equation. It is this
differential equation which relates to (\ref{jj1}). Okamoto made use
of this equation for the symmetry it exhibits in the parameters
$v_1,\dots,v_4$.
The equation in question, which is fairly straightforward to derive, is
presented for the so called auxiliary Hamiltonian
$$
h_V(t) = tH_V + (v_3 - v_2) (v_4 - v_2) - v_2 t - 2 v_2^2.
$$
Okamoto showed
$$
(th_V'')^2 - (h_V - th_V' + 2 (h_V')^2)^2 + 4
\prod_{k=1}^4(h_V'+v_k) = 0.
$$
Setting
$$
\sigma_{V}(t) = h_V(t) + v_2t + 2v_2^2, \qquad \nu_{j-1} = v_j - v_2
\: \:\: (j=1,\dots,4)
$$
in this one obtains the so called Jimbo-Miwa-Okamoto $\sigma$-form of the
Painlev\'e V equation
\begin{eqnarray}\label{3.12}
&&
(t \sigma_V'')^2 - \Big ( \sigma_V - t \sigma_V'
+ 2 (\sigma_V')^2 + (\nu_0 + \nu_1 + \nu_2 + \nu_3)
\sigma_V' \Big )^2 \nonumber \\
&& \quad + 4 (\nu_0 + \sigma_V')(\nu_1 + \sigma_V')
(\nu_2 + \sigma_V')(\nu_3 + \sigma_V') = 0
\end{eqnarray}
(Jimbo and Miwa \cite{JM81} arrived at (\ref{3.12}) in their study of
isomonodromic deformations of linear differential equations).
We note that (\ref{jj1}) is an example of this equation with
$$
\nu_0 = \nu_1 = \nu_2 = \nu_3 = 0, \qquad t \mapsto - 2 i u.
$$
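For numerical purposes, (\ref{jmms}) and (\ref{jj1}) are used by integrating outwards from small $u$: expanding (\ref{jj1}) about the boundary condition gives $\sigma(u;\xi) = -x - x^2 - x^3 + {\rm O}(x^4)$ with $x = \xi u / \pi$, which seeds the integration, while $\int_0^u \sigma(t;\xi) \, dt/t$ is accumulated along the way. A Python sketch of ours (the square-root branch below is the one consistent with the small-$u$ data):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(u, y):
    sig, dsig, I = y                      # I(u) = int_0^u sigma(t)/t dt
    prod = -(u * dsig - sig) * (u * dsig - sig + dsig**2)
    d2sig = -(2.0 / u) * np.sqrt(max(prod, 0.0))   # branch fixed by small-u data
    return [dsig, d2sig, sig / u]

xi, u0 = 1.0, 1e-4
x = xi * u0 / np.pi
y0 = [-x - x**2 - x**3,                            # series solution near u = 0
      (xi / np.pi) * (-1 - 2 * x - 3 * x**2),
      -x - x**2 / 2 - x**3 / 3]

s = 0.5                                            # gap free interval (-s, s)
sol = solve_ivp(rhs, [u0, 2 * np.pi * s], y0, rtol=1e-10, atol=1e-12)
print(np.exp(sol.y[2, -1]))                        # E_2^bulk((-s,s); 1)
\end{verbatim}
To the accuracy of the integrator this reproduces the quadrature evaluation of $\det(1 - K^{\rm bulk}_{(-s,s)})$ given in Section 2.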
\subsection{Unveiling more structure}
The result of Jimbo et al.~relates to the Fredholm determinant of the
integral operator with the sine kernel. What is special about the sine
kernel that relates it to integrable systems theory? This question was
answered by Its, Izergin, Korepin
and Slanov \cite{IIKS90} who exhibited
integrability features of all kernels of the Christoffel-Darboux
type (recall (\ref{2.13'}) in relation to the latter terminology)
\begin{equation}\label{dR3}
\xi K(x,y) = {\phi(x) \psi(y) - \phi(y) \psi(x) \over x - y},
\end{equation}
the sine kernel being the special case
\begin{equation}\label{sinc}
\phi(x) = \sqrt{\xi} \sin x, \qquad
\psi(y) = \sqrt{\xi} \cos y.
\end{equation}
One of their key results related to the form of the kernel $R(x,y)$
for the so called resolvent operator
$$
R_J := \xi K_J ( 1 - \xi K_J )^{-1}.
$$
With
\begin{equation}\label{3.14'}
Q(x) := (1 - \xi K_J)^{-1} \phi, \qquad P(x) := (1 - \xi K_J)^{-1} \psi
\end{equation}
they showed
\begin{equation}\label{dR1}
R(x,y) = {Q(x) P(y) - P(x) Q(y) \over x - y}.
\end{equation}
The significance of the resolvent kernel is evident from the general formula
\begin{equation}\label{dR}
{\partial \over \partial a_j} \log \det (1 - \xi K_{(a_1,a_2)} ) =
(-1)^{j-1} R(a_j, a_j) \quad (j=1,2).
\end{equation}
To derive this formula, write the kernel of $\xi K_{(a_1,a_2)}$ as
$\xi K(x,y) \chi_{(a_1,a_2)}^{(y)}$, so that
$\partial (\xi K_{(a_1,a_2)}) / \partial a_j$ has kernel
$(-1)^j \xi K(x,y) \delta(y - a_j)$. Differentiating
$\log \det (1 - \xi K_{(a_1,a_2)} ) = {\rm Tr} \,
\log (1 - \xi K_{(a_1,a_2)} )$ then gives
$$
{\partial \over \partial a_j} \log \det (1 - \xi K_{(a_1,a_2)} ) =
- {\rm Tr} \Big ( (1 - \xi K_{(a_1,a_2)})^{-1}
{\partial \over \partial a_j} \xi K_{(a_1,a_2)} \Big ) =
(-1)^{j-1} R(a_j,a_j)
$$
as required.
According to (\ref{dR1})
\begin{equation}\label{dRa}
R(a_j, a_j) = - Q(x) P'(x) + P(x) Q'(x) \Big |_{x=a_j},
\end{equation}
so we see from (\ref{dR}) that the Fredholm determinant is determined by the
quantities (\ref{3.14'}) and their derivatives evaluated at the endpoints of
the interval. Indeed a close examination of the workings of
\cite{JMMS80}, undertaken by Mehta \cite{Me91a}, Dyson \cite{Dy95} and
Tracy and Widom \cite{TW93}, revealed that
the former study indeed proceeds via the equations (\ref{dR}) and
(\ref{dRa}), and in fact $\sigma(t)$ in (\ref{jmms}) is related to the
resolvent kernel evaluated at an endpoint by $\sigma(t) = - t
R(t/2,t/2)$. Moreover it was realized that like (\ref{dR1}) there are
other equations contained in the working of \cite{JMMS80} which apply to all
kernels of the form (\ref{dR3}). However it was also clear that other
equations used in \cite{JMMS80} were specific to the form of
$\phi$ and $\psi$
in (\ref{sinc}).
Tracy and Widom were able to identify these latter properties, which are that
$\phi$ and $\psi$ are related by the coupled first order differential
equations
\begin{eqnarray}\label{7.34}
m(x) \phi'(x) & = & A(x) \phi(x) + B(x) \psi(x) \nonumber \\
m(x) \psi'(x) & = & -C(x) \phi(x) - A(x) \psi(x)
\end{eqnarray}
where $m,A,B,C$ are polynomials. This structure allows the so called
universal equations (independent of the specific form of (\ref{dR3}))
such as (\ref{dRa}) to be supplemented by a number of case specific
equations. For some choices of $\phi$ and $\psi$ in addition to that
corresponding to sine kernel, the resulting system of equations closes.
Examples relevant to spacing distributions at the soft and hard edge of
matrix ensembles with unitary symmetry are
$$
\phi(x) = \sqrt{\xi} {\rm Ai}(x), \: \: \psi(x) = \phi'(x), \qquad \qquad
\phi(x) = \sqrt{\xi} J_a(\sqrt{x}), \: \: \psi(x) = x \phi'(x).
$$
In both these cases it was possible to obtain an evaluation of the generating
function for the corresponding gap probability in a form analogous to
(\ref{jmms}) \cite{TW94a,TW94b}.
We will make note of the hard edge result because it, by virtue of Mehta's
inter-relationship (\ref{me}), relates to the gap probability in the bulk
in the case of an underlying orthogonal symmetry. First, we define the hard
edge gap probability in the case of an underlying
unitary symmetry as the scaled limit of the ensemble (\ref{3.1}) with
$w_2(x) = x^a e^{-x} \chi_{x > 0}$. Explicitly
\begin{equation}\label{u4.1}
E_2^{\rm hard}((0,s);\xi) = \lim_{N \to \infty}
E_2\Big ( (0, {s \over 4N});\xi;x^a e^{-x} \chi_{x>0} \Big ).
\end{equation}
It was shown in \cite{Fo93a} that
\begin{equation}\label{u4.1a}
E_2^{\rm hard}((0,s);\xi) = \det (1 - \xi K^{\rm hard}_{(0,s)})
\end{equation}
where $K^{\rm hard}_{(0,s)}$ is the integral operator on $(0,s)$ with kernel
$$
K^{\rm hard}(x,y) = {J_a(x^{1/2}) y^{1/2} J_a'(y^{1/2}) - x^{1/2}
J_a'(x^{1/2}) J_a(y^{1/2}) \over 2 ( x - y) }.
$$
As part of the study \cite{TW94b} the Fredholm determinant (\ref{u4.1a})
was given the evaluation
\begin{equation}\label{u4.2}
E_2^{\rm hard}((0,s);\xi) = \exp \int_0^s u(t;a;\xi) {dt \over t}
\end{equation}
where $u$ satisfies the differential equation
\begin{equation}\label{6.90}
(t u'')^2 - a^2 (u')^2 - u'(4 u' + 1) (u - tu') = 0
\end{equation}
subject to the boundary condition
$$
u(t;a;\xi)
\: \mathop{\sim}\limits_{t \to 0^+} \: - \xi t K^{\rm hard}(t,t).
$$
The equation (\ref{6.90}) is a special case of the $\sigma$ form of the
Painlev\'e III$'$ system \cite{Ok87a}.
It follows from (\ref{me}), (\ref{u4.1}) and (\ref{u4.2}) that \cite{Fo99a}
\begin{equation}\label{ch.5}
E_1^{\rm bulk}(0;(-s,s)) =
E_2^{\rm hard}(0;(0,\pi^2 s^2)) \Big |_{a=-1/2} =
\exp \int_0^{(\pi s)^2} u(t;a;\xi) \, {dt \over t}
\Big |_{a=-1/2 \atop \xi = 1}.
\end{equation}
This is an alternative Painlev\'e transcendent evaluation to that implied
by (\ref{2.24a}), (\ref{y1}) and (\ref{jmms}). Similarly, by noting that
$$
2 \sqrt{xy} K^{\rm hard}(x^2,y^2) \Big |_{a=1/2} =
{1 \over 2} \Big ( {\sin (x - y) \over x - y} -
{\sin (x + y) \over x + y} \Big )
$$
we see from (\ref{ga1}), (\ref{u4.1a}) and (\ref{u4.2}) that \cite{Fo99a}
\begin{eqnarray}
&& E_4^{\rm bulk}(0;(-s/2,s/2)) \nonumber \\
&& \qquad = {1 \over 2} \Big (
\exp \int_0^{(\pi s)^2} u(t;a;\xi) \, {dt \over t}
\Big |_{a=-1/2 \atop \xi = 1} +
\exp \int_0^{(\pi s)^2} u(t;a;\xi) \, {dt \over t}
\Big |_{a=1/2 \atop \xi = 1} \Big ).
\end{eqnarray}
In summary, the Fredholm determinants in the expressions for the bulk gap
probabilities can each be written in terms of Painlev\'e transcendents. From
a practical viewpoint these expressions are particularly well suited for
generating power series expansions, and also allow for a numerical tabulation
of each of $E_2^{\rm bulk}(0;(-s,s))$, $E_1^{\rm bulk}(0;(-s,s))$ and
$E_4^{\rm bulk}(0;(-s,s))$, as well as $E_2^{\rm bulk}(n;(-s,s))$ for
$n \ge 1$. For the latter quantity, according to (\ref{2.5x}) we must
differentiate $E_2^{\rm bulk}((-s,s);\xi)$ with respect to $\xi$ then set
$\xi = 1$. Doing this in (\ref{jj1}) gives a coupled system of differential
equations for $\partial^j \sigma(u;\xi) / \partial \xi^j |_{\xi = 1}$
$(j=0,\dots,n)$ which is only numerically stable for small values of $n$.
\subsection{Distribution of bulk right or left nearest neighbour
spacings}
The spacing distribution refers to the distribution of the distance between
consecutive points as we move along the line left to right. Another
simple to measure statistic of this type is the distribution of the
smallest of the left neighbour spacing and right neighbour spacing for each
point. Let us denote this by $p_\beta^{\rm n.n.}(s)$ (the superscript
n.n.~stands for nearest neighbour, while the subscript $\beta$ indicates
the symmetry class). Let $E_\beta^{\rm n.n.}(0;(-s,s))$ denote the
probability that about a fixed eigenvalue at the origin, there is no
eigenvalue at distance $s$ either side. Analogous to (\ref{pE}) it is
easy to see that
\begin{equation}\label{fre0}
p_\beta^{\rm n.n.}(s) = - {d \over ds} E_\beta^{\rm n.n.}(0;(-s,s)).
\end{equation}
In the case $\beta = 2$ (unitary symmetry) the generating function
$E_\beta^{\rm n.n.}((-s,s);\xi)$ can be expressed as a Fredholm
determinant
\begin{equation}\label{fre}
E_\beta^{\rm n.n.}((-s,s);\xi) = \det ( 1 - \xi K_{(-s,s)}^{\rm n.n.})
\end{equation}
where $K_{(-s,s)}^{\rm n.n.}$ is the integral operator on $(-s,s)$ with
kernel
\begin{equation}\label{fre1}
K^{\rm n.n.}(x,y) := (\pi x)^{1/2} (\pi y)^{1/2}
{\Big ( J_{a+1/2}(\pi x) J_{a-1/2}(\pi y) -
J_{a+1/2}(\pi y) J_{a-1/2}(\pi x) \Big ) \over 2(x-y)}
\end{equation}
evaluated at $a=1$. Following the strategy which led to (\ref{u4.2}), the
Fredholm determinant (\ref{fre}) for general $a \in \mathbb Z_{\ge 0}$ can be
characterized as the solution of a nonlinear equation. Explicitly
\cite{FO96}
\begin{equation}\label{frb}
E_\beta^{\rm n.n.}((-s,s);\xi) =
\exp \Big ( \int_0^{2 \pi s} {\sigma_a(t;\xi) \over t} \, dt \Big )
\end{equation}
where $\sigma_a$ satisfies the nonlinear equation
\begin{equation}\label{fre2}
(s \sigma_a'')^2 + 4 (-a^2 + s \sigma_a' - \sigma_a)
\Big ( (\sigma_a')^2 - \{ a - (a^2 - s \sigma_a' + \sigma_a)^{1/2} \}^2 \Big )
= 0
\end{equation}
subject to the boundary condition
$$
\sigma_a (s;\xi) \mathop{\sim}\limits_{s \to 0^+}
-\xi {2 (s/4)^{2a + 1} \over \Gamma (1/2 + a)
\Gamma(3/2 + a)}.
$$
In the case $a=0$, (\ref{fre1}) reduces to the sine kernel and the
differential equation (\ref{fre2}) reduces to (\ref{jj1}). For general
$a$ the differential equation (\ref{fre2}) is satisfied by an auxiliary
Hamiltonian for PIII (as distinct from PIII$'$) \cite{Wi03}.
Substituting (\ref{frb}) in (\ref{fre0}) gives
\begin{equation}\label{fre4}
p_2^{\rm n.n.}(s) = - {\sigma_a(2 \pi s;\xi) \over s}
\exp \int_0^{2 \pi s} {\sigma_a(t;\xi) \over t} \, dt
\Big |_{a=\xi=1}.
\end{equation}
An application of this result can be made to the study of the zeros of the
Riemann zeta function on the critical line (Riemann zeros). We recall that
the Montgomery-Odlyzko law states that the statistics of the large
Riemann zeros coincide with the statistics of bulk eigenvalues of an
ensemble of random matrices with unitary symmetry, where both the zeros
and eigenvalues are assumed to be unfolded so as to have mean spacing
unity. As a test of this law, in \cite{FO96} the empirical determination
of $p_2^{\rm n.n.}(s)$ for large sequences of Riemann zeros, starting at
different positions along the critical line, was compared with
(\ref{fre4}). The results, which are consistent with the
Montgomery-Odlyzko law, are reproduced in Figure \ref{g1}. A significant
feature is that the empirical determination of $p_2^{\rm n.n.}(s)$ for the
Riemann zeros is so accurate that it is not possible to compare against
an approximate form of $p_2^{\rm n.n.}(s)$ for the random matrices. Thus
the exact, readily computable,
Painlev\'e evaluation (\ref{fre4}) is of a practical importance.
\vspace{.5cm}
\begin{figure}
\epsfxsize=12cm
\centerline{\epsfbox{nnt.ps}}
\caption{\label{g1} Comparison of $nn(t):= p_2^{\rm n.n.}(t)$
for the matrix ensembles with unitary symmetry in the bulk (continuous
curve) and for $10^6$ consecutive Riemann zeros,
starting near zero number 1 (open circles),
$10^6$ (asterisks) and $10^{20}$ (filled circles). }
\end{figure}
\section{Gap probabilities from the Okamoto $\tau$-function theory}
\setcounter{equation}{0}
\subsection{Other strategies}
The method of Tracy and Widom may be described as being based on function
theoretic properties of Fredholm determinants. Alternative methods which
also lead to the characterization of gap probabilities in terms of the
solution of nonlinear equations have been given by a number of authors.
One alternative method is due to Adler and van Moerbeke \cite{vM01}, who
base their strategy on the fact that for suitable underlying weight
$w_2$, gap probabilities in the case of a unitary symmetry satisfy the
KP hierarchy of partial differential equations known from soliton theory.
The first member of this hierarchy is then used in conjunction with a set
of equations referred to as Virasoro constraints, satisfied by the gap
probabilities as a function of the endpoints of the gap free regions,
to arrive at third order equations for some single interval gap
probabilities. These third order equations are reduced to the
$\sigma$-form of the Painlev\'e theory, making use of results of
Cosgrove \cite{CS93,Co00}.
Borodin and Deift \cite{BD00} have given a method based on the
Riemann-Hilbert formulation of the resolvent kernel (\ref{dR1})
\cite{KH99}. This makes direct contact with the Schlesinger equations
from the theory of the isomonodromic deformation of linear differential
equations, and is thus closely related to the
work of Jimbo et al.~\cite{JMMS80}.
The other approach to be mentioned is due to Forrester and Witte
\cite{FW00}. It is based on Okamoto's development of the Hamiltonian
approach to Painlev\'e systems, and proceeds by inductively constructing
sequences of multi-dimensional integral solutions of the $\sigma$ form of
the Painlev\'e equations, and identifying these solutions with gap
probabilities for certain random matrix ensembles with unitary symmetry.
For detailed accounts of all these methods, see \cite[Ch.~6\&7]{Fo02}.
In the remainder of these lectures
we will restrict ourselves to results from the work of Forrester and
Witte which relate directly to gap probabilities in the bulk.
\subsection{Direct calculation of spacing distributions}
We have taken as our objective the exact evaluation of the bulk spacing
distributions for the three symmetry classes of random matrices. So far
exact
evaluations have been presented not for the spacing distribution itself,
but rather the corresponding gap probability, which is related to the
spacing distribution by (\ref{pE}). It was realized by Forrester and
Witte \cite{FW00e} that in all three cases one of the derivatives could
be performed analytically by using theory relating to the $\sigma$ form
of the Painlev\'e transcendents.
As an explicit example, consider the result (\ref{ch.5}). It was shown in
\cite{FW00e} that
\begin{equation}\label{am.t}
{d \over ds} \exp \int_0^{(\pi s)^2} u(t;a;\xi) \, {dt \over t}
\Big |_{a=-1/2 \atop \xi = 1} = - \exp \Big ( - \int_0^{(\pi s)^2}
\tilde{u}(t) \, {dt \over t} \Big )
\end{equation}
where $\tilde{u}$ satisfies the nonlinear equation
$$
s^2 ( \tilde{u}'')^2 = (4(\tilde{u}')^2 - \tilde{u}')
(s \tilde{u}' - \tilde{u}) + {9 \over 4} ( \tilde{u}')^2 -
{3 \over 2} \tilde{u}' + {1 \over 4}
$$
subject to the boundary condition
$$
\tilde{u}(s) \mathop{\sim}\limits_{s \to 0^+}
{s \over 3} - {s^2 \over 45} + {8 s^{5/2} \over 135 \pi}.
$$
Recalling now (\ref{pE}) we see that
\begin{equation}\label{p1b}
p_1^{\rm bulk}(0;s) = {2 \tilde{u}((\pi s/2)^2) \over s}
\exp \Big ( - \int_0^{(\pi s/2)^2} {\tilde{u}(t) \over t} \, dt \Big )
\end{equation}
(cf.~(\ref{ws})).
The identity (\ref{am.t}) can be understood from the approach to gap
probabilities of Forrester and Witte. The key advance from earlier
studies is that the generating function (\ref{ap}), with $p$ given
by (\ref{3.1}), can be generalized to the quantity
\begin{equation}\label{n.s}
E^{\rm bulk}(s;\mu;\xi) := \lim_{N \to \infty} a_N^N
\int_{-\infty}^\infty dx_1 \cdots \int_{-\infty}^\infty dx_N \,
\prod_{l=1}^N (1 - \xi \chi_{(-s/2,s/2)}^{(l)} )|s/2 - a_N x_l|^\mu
p(a_N x_1,\dots, a_N x_N)
\end{equation}
and still be characterized as the solution of a nonlinear equation.
This is
also true at the hard and soft edges, and in the neighbourhood of a spectrum
singularity (before the generalization the latter is controlled by the
kernel (\ref{fre1})).
It is the generalization in the case of the hard edge which leads to
(\ref{am.t}). The quantity of interest is defined by
\begin{equation}\label{e2h}
E_2^{\rm hard}((0,s);\mu;\xi) = \lim_{N \to \infty}
{I_N(a) \over I_N(a+\mu)}
E_2 \Big ( (0,{s \over 4N});\xi;(x - {s \over 4N})^\mu x^a e^{-x}
\chi_{x>0} \Big )
\end{equation}
where
$$
I_N(a) := \int_0^\infty dx_1 \cdots \int_0^\infty dx_N \,
\prod_{l=1}^N e^{-x_l} x_l^a \prod_{1 \le j < k \le N} (x_k - x_j)^2
$$
(the factor $I_N(a)/I_N(a+\mu)$, which is readily evaluated in terms of
gamma functions, is chosen so that when $s=0$, (\ref{e2h}) is equal to unity).
By using theory from the Okamoto $\tau$ function approach to the
Painlev\'e systems PV and PIII$'$ it is shown in \cite{FW01a} that
$$
\tilde{E}^{\rm hard}_2((0,s);\mu;\xi) = \exp \int_0^s
u^h(t;a,\mu;\xi) \, {dt \over t},
$$
where $u^h$ satisfies the differential equation
\begin{equation}\label{uh}
(tu'')^2 - (\mu + a)^2(u')^2 - u' (4 u' + 1)(u - tu') -
{\mu(\mu+a) \over 2} u' - {\mu^2 \over 4^2} = 0.
\end{equation}
Thus we have
\begin{equation}\label{am.5}
- {1 \over \xi} {d \over ds} \exp \Big ( \int_0^s
u^h(t;a,\mu;\xi) |_{\mu=0} \, {dt \over t} \Big ) =
{s^a \over 2^{2a+2} \Gamma(a+1) \Gamma(a+2) }
\exp \Big ( \int_0^s
u^h(t;a,\mu;\xi) |_{\mu=2} \, {dt \over t} \Big ),
\end{equation}
which in the case $a=-1/2$ reduces to (\ref{am.t}).
We also read off from (\ref{am.5}) that
\begin{equation}\label{vb}
{d \over ds} \exp \int_0^{(\pi s)^2} u(t;a;\xi) {dt \over t}
\Big |_{a=1/2 \atop \xi = 1} =
- {2 \over 3} (\pi s)^2 \exp \Big ( -
\int_0^{(\pi s)^2} \tilde{v}(t) {dt \over t} \Big )
\end{equation}
where $\tilde{v}(t) = - u^h(t;a,\mu;\xi) |_{a=1/2,\mu=2,\xi=1}$ and thus
satisfies (\ref{uh}) appropriately specialized. The boundary condition
consistent with (\ref{vb}) is
\begin{equation}\label{vb1}
\tilde{v}(t) \mathop{\sim}\limits_{t \to 0^+}
{t \over 5} ( 1 + O(t)) + {8 t^{7/2} \over 3^3 \cdot 5^3 \cdot
7 \pi} ( 1 + O(t)).
\end{equation}
Hence, according to (\ref{ga1}) and (\ref{pE}),
\begin{equation}\label{vb2}
p_4^{\rm bulk}(0;s) = 2 p_1^{\rm bulk}(0;2s) +
{2 \pi^2 s \over 3} \Big ( \tilde{v}((\pi s)^2) - 1 \Big )
\exp \Big ( - \int_0^{(\pi s)^2} \tilde{v}(t) {dt \over t} \Big ).
\end{equation}
The Okamoto $\tau$-function theory of PVI and PV allows (\ref{n.s}) to be
computed for general $\mu$, and also its generalization in which there is
a further factor $|-s/2 - a_N x_l|^a$ in the product over $l$ in the
integrand \cite{FW02}. These results allow not only the first derivative
with respect to $s$ of (\ref{jmms}) to be computed by an identity
analogous to (\ref{am.t}), but also the second derivative. In particular,
it is found that
\begin{equation}\label{p2b}
p_2^{\rm bulk}(0;s) = {\pi^2 \over 3} s^2
\exp \int_0^{2\pi s} v(t;\xi) \, {dt \over t}
\end{equation}
where $v$ satisfies the nonlinear equation (which can be identified in
terms of the $\sigma$ form of the PIII$'$ equation)
$$
(sv'')^2 + (v - sv') \{ v - sv' + 4 - 4 (v')^2 \}
- 16 (v')^2 = 0
$$
subject to the boundary condition
$$
v(s;\xi) \mathop{\sim}\limits_{s \to 0}
- {1 \over 15} s^2.
$$
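The same device applies to (\ref{p2b}): the hedged sketch below (continuing the imports and conventions of the previous sketch, with $\xi=1$) differentiates the $\sigma$-PIII$'$ equation for $v$ once and integrates from the series $v \sim -t^2/15$; the $O(t^2)$ tail of the integral below \texttt{t0} is simply dropped.
\begin{verbatim}
# Same device for p_2^{bulk}(0;s): differentiate the sigma-PIII'
# equation for v once and integrate from the series v ~ -t^2/15
# (xi = 1); the O(t^2) tail of the integral below t0 is dropped.
def rhs_v(t, y):
    v, vp, vpp = y
    num = (32*vp*vpp + t*vpp*(v - t*vp + 4 - 4*vp**2)
           + (v - t*vp)*(t*vpp + 8*vp*vpp) - 2*t*vpp**2)
    return [vp, vpp, num/(2*t**2*vpp)]

def p2_bulk(s, t0=1e-4):
    T = 2*np.pi*s
    y0 = [-t0**2/15, -2*t0/15, -2/15]
    sol = solve_ivp(rhs_v, (t0, T), y0, rtol=1e-9,
                    dense_output=True)
    tail = quad(lambda t: sol.sol(t)[0]/t, t0, T)[0]
    return np.pi**2/3*s**2*np.exp(tail)
\end{verbatim}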
The exact evaluations (\ref{p1b}), (\ref{vb2}) and (\ref{p2b}) are perhaps
the most compact Painlev\'e evaluations possible for the bulk spacing
distributions. A striking feature of (\ref{p1b}) and (\ref{p2b}) is that
they are of the functional form $a(s) \exp(- \int_0^s b(t) \, dt)$ and
thus extend the Wigner surmise (\ref{ws}) and its $\beta=2$ analogue in
(\ref{2.10b}) to exact results.
\section*{Acknowledgement}
It is a pleasure to thank the organisers for putting together such
a stimulating workshop, and program in general. Also, the financial
support of the Newton Institute and
the Australian Research Council is
acknowledged.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the CVPR 2017 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for
CVPR 2017.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\cvprfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the CVPR70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
FAQ: Are acknowledgements OK? No. Leave them for the final copy.
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should appear in the footer, centered and .75
inches from the bottom of the page, and should start at your assigned page
number rather than the 4321 in the example. To do this, find the line (around
line 23)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the first page being
empty on line 46
\begin{verbatim}
%\thispagestyle{empty}
\end{verbatim}
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the CVPR 2017 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Video recognition technology is very important in the field of artificial intelligence. It is a challenging task because understanding the context of a given video requires capturing high-level temporal causal relationships among the scenes. In addition, this technology can be applied to a variety of fields, such as activity recognition or scene understanding in videos\cite{DBLP:journals/corr/DonahueHGRVSD14,DBLP:journals/pami/KarpathyF17}, detecting future incidents or identifying criminals by tracking real-time CCTV videos\cite{chen2011face,sankaranarayanan2008object}, and decoding the cognitive processes of a subject by analyzing temporal patterns of brain activity in fMRI images\cite{kamitani2005decoding,kay2008identifying,norman2006beyond}.
Fundamentally, video recognition technology is required to understand the topic of a given video. A video consists of a sequence of images that are correlated with each other. When tagging what information the video contains, it is possible to tag one identical label across multiple frames, to tag multiple labels in one frame, and to leave the rest of the frames untagged. That is, the frames that contain the topics of the video are determined with respect to the distribution of and relations among all the images in the video. The distribution of labels tagged in the frames of one video differs from that of another video. Moreover, the number of frames varies across videos, and the label distribution is variable and unknown. In addition, while the multi-label classification problem can be solved by logistic regression, mixture models, or SVMs, if the dataset to be analyzed is large-scale, batch learning cannot be applied and online learning should be considered.\cite{DBLP:journals/corr/Abu-El-HaijaKLN16}
Despite these challenging factors, we try to approach this classification problem from a different point of view. If we view each label as a word, classifying multiple labels from a video can be turned into a video-to-sentence translation, or video description, problem. Recent advances in generating a scene description from a video can be applied to this problem as is; recent papers have improved the quality of video description technology with the development of neural networks and the powerful combination of CNN and LSTM. We also use an LSTM decoder and transfer learning based on the mean pooling of CNN features. Here, for easy transfer learning, we use the YouTube-8M dataset\cite{Google2017} because it already stores and provides Inception CNN visual features for each frame. Therefore, we focus on finding a better LSTM structure and on improving its generalization performance by using a recent optimization trend called batch normalization.
The contributions of this paper are the following:
\begin{itemize}
\item We suggest an insight that multi-label classification can be transformed into a problem in the video description framework, and we establish a base LSTM model. We also explore different structures of LSTM-based feature extractors.
\item We investigate how to improve the generalization of LSTMs by using batch normalization. We deal with issues that occur when we use BNLSTM as a video description translator, such as the feedback selection issue. We introduce a stochastic gating mechanism to alleviate this issue and determine which structure is better for the feedback loop in the feature extractor.
\item Finally, we report validation results of our models on the large-scale YouTube-8M dataset.
\end{itemize}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{paper_picture_latest.pdf}
\end{center}
\caption{Detailed illustration of our LSTM models for video classification. (a) our base LSTM model, (b) as a variant of (a), the guided LSTM is designed for feature extractor to be used with the following classifier.}
\label{fig:short}
\end{figure*}
\section{Related Work}
Donahue et al.\cite{DBLP:journals/corr/DonahueHGRVSD14} showed that a combination of CNN and LSTM can efficiently perform image captioning, video description and activity recognition, and that the model can learn spatial and temporal compositional representations. Venugopalan et al.\cite{DBLP:journals/corr/VenugopalanXDRMS14} solved the video-to-natural-language translation problem by transfer learning from a CNN structure, performing an LSTM decoding process after mean pooling the CNN features. Video frames can also be compressed into one visual feature vector by an LSTM-based encoding process (Venugopalan et al.\cite{DBLP:journals/corr/VenugopalanRDMD15}) or a 3D-CNN based representation (Yao et al. \cite{Yao15}). On top of LSTM Encoder-Decoder models, Cho et al.\cite{DBLP:journals/corr/ChoCB15} and Xu et al.\cite{xu2015show} extended the model to a visual attentional framework and showed improved performance. It also turns out to be useful to use not only an attentional mechanism but also features from additional information, such as scores for some object classes or optical flow (Rohrbach et al.\cite{DBLP:journals/corr/RohrbachRS15}).
To improve the validation performance of LSTM model, batch normalization method has been applied to LSTM models. Batch normalization uses batch mean and variance of input features for standardization to reduce internal covariate shift issue(Ioffe et al.\cite{DBLP:journals/corr/IoffeS15}). This batch normalization method is powerful and has recently become a trend, because this enables faster learning than dropout, preserving good generalization performance. Laurent et al.\cite{Laurent15} showed that the batch-normalized input-to-hidden transitions can lead to a faster convergence, and Cooijmans et al.\cite{DBLP:journals/corr/CooijmansBLC16} proposed a total reparameterization of LSTM by adding the hidden-to-hidden transitions, which improved generalization.
\section{Approach}
We propose a feature extractor for video classification guided by video description structure.
In general, neural machine translation finds patterns mapping the input sentence of one natural language to the output sentence of another language. This idea has been effectively applied to the field of video description, because the input can be generalized to any sequence of features, including video frames.
In our model, we extend the original classification problem into the concept of video description.
By this change of perspective, we view each target label vector as a set of meaningful words, ``a sentence''. This idea results into the perspective that we can perform a translation from a video to a sequence. During translation process, the feature extractor can obtain aggregated features which are distinct to other sentence labels. This feature extraction process by translation is called ``guidance''. We can expect the final output of guidance can be utilized for video classification.
\subsection{Common structure}
There is a mean pooling layer to aggregate frame-level visual features and the output video-level features are input into all LSTM cell inputs.
To calculate ${loss}_{word}$, we split the learning target label into a set of one-hot vectors, and make a semantic word vector with embedding layer.
This word vector is also concatenated with the visual feature for input of LSTM cells. For guidance process, semantic vectors for the virtual $<$BOS$>$ and $<$EOS$>$ tokens are introduced together.
\subsection{Basic LSTM structure for guidance}
The Long Short Term Memory\cite{hochreiter1997long} is one of the state-of-the-art Recurrent Neural Networks and has been applied to neural machine translation\cite{johnson2016google}, image captioning\cite{DBLP:journals/corr/VinyalsTBE14}, video description\cite{DBLP:journals/corr/RohrbachRS15}, etc. An LSTM memorizes not only the patterns observed until the current time $t$, but also patterns of how to recall and forget correlations among them, based on hidden states $h_t$, internal memory cell states $c_t$ and three gates $i_t$, $o_t$, $f_t$. $g_t$ is a candidate memory cell state computed from the current input and the previous hidden state:
\begin{subequations} \label{eq:lstm_model}
\begin{align}
i_t &= \sigma (W^i (x_t \oplus w_t) + U^i h_{t-1} + b_i) \\
o_t &= \sigma (W^o (x_t \oplus w_t) + U^o h_{t-1} + b_o) \\
f_t &= \sigma (W^f (x_t \oplus w_t) + U^f h_{t-1} + b_f) \\
g_t &= \tanh (W^g (x_t \oplus w_t) + U^g h_{t-1} + b_g) \\
c_t &= f_t\odot c_{t-1} + i_t \odot g_t \\
h_t &= o_t \odot \tanh c_t
\end{align}
\end{subequations}
where $\oplus$ is a vector concatenation operator, $\odot$ is the element-wise multiplication between two vectors, W's are weight matrices from input to hidden states, U's are weight matrices from hidden to hidden. All weight matrices and biases b's are model parameters to be trained.
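To make the cell computation concrete, the following is a minimal NumPy sketch (our actual implementation is in TensorFlow and is not shown here) of one LSTM step following (\ref{eq:lstm_model}); \texttt{Wc}, \texttt{Uc} and \texttt{bc} are assumed names that stack the four gate parameters.
\begin{verbatim}
# Minimal NumPy sketch of one LSTM step following the gate
# equations above; Wc, Uc, bc stack the four gate parameters
# (shapes (4H, dx+dw), (4H, H) and (4H,), respectively).
import numpy as np

def sigmoid(z):
    return 1/(1 + np.exp(-z))

def lstm_step(x, w, h_prev, c_prev, Wc, Uc, bc):
    z = Wc @ np.concatenate([x, w]) + Uc @ h_prev + bc
    i, o, f, g = np.split(z, 4)
    i, o, f, g = sigmoid(i), sigmoid(o), sigmoid(f), np.tanh(g)
    c = f*c_prev + i*g            # new memory cell state
    h = o*np.tanh(c)              # new hidden state
    return h, c
\end{verbatim}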
The input is composed of two parts. The first part can be any form of comprehensive feature that represents the whole frames of a given video.
Here, we set the mean pooled frame feature as the input of our model, including video and audio components of each frame. The YouTube-8M dataset provides it as a ``video-level'' feature. The second part is the word embedding vector for the guidance process. For any given instance, we split one multi-label target vector into many one-hot word vectors $(y_1, ... , y_T)$, where $T$ is the number of tags in the target label vector. Finally, we add an embedding layer to squeeze the high dimensional sparse vectors into lower dimensional dense word vectors $(w_1, ... , w_T)$. Then the averaged frame feature $x$
is duplicated, concatenated with the word vectors $w_t$, and finally input to the LSTM model at each time step, as $(x_1 \oplus w_1, ... , x_T \oplus w_T)$. The intermediate hiddens $(h_1, ... , h_T)$ are the outputs of the LSTM cells, in charge of guiding the memory of the LSTM to converge to the final goal state. These outputs are projected back into the high dimensional space to get a distribution over all of the words in the vocabulary. Then, for each step, our LSTM models estimate the conditional probability:
\begin{equation}
P(y_T, \ldots , y_1 | x_T \oplus w_T, \ldots , x_1 \oplus w_1) = \prod_{1 \le t \le T}{P(y_t | h_{t-1})}
\end{equation}
and minimizes the cross entropy of each word. Finally, the final hidden state $h_T$ is used for video classification.
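A hedged sketch of the guidance loss, reusing \texttt{lstm\_step} from the sketch above: each hidden state is projected to the vocabulary and scored against the next tag with softmax cross entropy, and the final hidden state is returned for the classifier. \texttt{HIDDEN}, \texttt{BOS}, \texttt{embed} and \texttt{vocab\_proj} are assumed names, and the ground-truth feedback shown here corresponds to $\beta=1$, which is relaxed in Section 3.4.
\begin{verbatim}
# Sketch of the guidance loss, reusing lstm_step from above.
# HIDDEN, BOS, embed (vocab x dw) and vocab_proj (vocab x H)
# are assumed names; tags is the sequence of integer tag ids.
def guidance_loss(x, tags, embed, vocab_proj, params):
    h = np.zeros(HIDDEN); c = np.zeros(HIDDEN); loss = 0.0
    w = embed[BOS]                 # <BOS> starts the guidance
    for y in tags:
        h, c = lstm_step(x, w, h, c, *params)
        logits = vocab_proj @ h    # word projection layer
        p = np.exp(logits - logits.max()); p /= p.sum()
        loss += -np.log(p[y])      # softmax cross entropy
        w = embed[y]               # ground-truth feedback (beta=1)
    return loss, h                 # h_T feeds the classifier
\end{verbatim}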
We can benefit from this change of viewpoint in terms of classification performance as well as learning time. Our target dataset, YouTube-8M, contains videos with at most 300 frames, annotated by 3.4 labels on average and at most around 30 labels. Therefore, searching for features in the guidance process takes only about $\frac{1}{10}$ to $\frac{1}{100}$ as many LSTM steps as learning in the time domain.
As depicted in (a) of figure \ref{fig:short}, each LSTM cell output passes through a common word projection layer which maps the input vector into the original target word vector space.
Since this output has a meaning of likelihood distribution of the word vocabulary, we use softmax function as activation to make it a probabilistic distribution.
For training, we calculate cross entropy for each word vector and aggregate them. It means this LSTM structure is guided by word losses.
Besides, to calculate the overall output vector, max pooling layer aggregates all of the distribution outputs.
\subsection{LSTM as feature extractor}
Since the internal dynamics of the LSTM are guided by the sentence learning structure, we hypothesize that the final hidden state of the LSTM can be viewed
as a condensed feature including the sentence inference path from $w_1$ to $w_T$. This idea leads us to design a different LSTM structure as a feature extractor,
which can have a synergetic effect in collaboration with other classifiers. This design is illustrated in (b) of figure \ref{fig:short}.
\subsection{Stochastic Gating Mechanism}
When it comes to input word vectors, many word generation structures utilize ground truth labels as the input sequence in the training phase.
The input is switched to the LSTM cell outputs when performing inference. When we investigate this LSTM structure in detail, we face a critical issue:
if we use the ground truth label embedding vector as $w_t$, it actually leads our model to overfitting. In the training phase, the LSTM seems to learn
not the hidden patterns in $x_t$ but just $w_t$ itself, and seems to bypass $w_1$ to $w_T$ to the final
hidden state $h_T$. We questioned what the real effect of this switching is and how it is related to overfitting within our models.
To figure out the cause, we added a stochastic gate before the input of each LSTM cell as one of the structural variants. We can exploit both the ground truth label and the embedding vector projected from the previous cell output as $w_t$: this gate opens to the ground truth label with probability $\beta$ and to the previous cell output with probability $1-\beta$. With this structure we can avoid the overfitting phenomenon and consider which value of $\beta$ is helpful for learning. This stochastic gating mechanism is illustrated in Figure \ref{fig:long}. We can briefly touch the core concept of SG by using an approximately simplified asymptotic model, also shown in Figure \ref{fig:long}. (a) If $\beta=1$, the model uses only ground truth labels as the word vector $w_t$. This leads the model to learn only $P({correct}_t | {correct}_{t-1})=p_t$ with no concern for $P({correct}_t | {incorrect}_{t-1})=q_t$, which lowers generalization performance. (b) If $\beta=0$, the previous cell output is used for $w_t$. Let $\gamma_t$ be the probability that the tag at time $t$ is correct,
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{paper_picture_sg.pdf}
\end{center}
\caption{Asymptotic explanation of our stochastic gating mechanism, which guides stacked LSTMs by teaching each LSTM cell a
``label'' during the intermediate procedures. (a) This approach faces two possible cases: the input is correct or not. Therefore, our approach balances these two cases by a random gating process. (b) The probability that the gate opens to the ground truth label embedding vector at each time $t$ is $\beta$, which we call the label injection probability. So, with probability $1-\beta$, gates open to
the output embedding vector of the previous cell. During the inference process, $\beta$
is fixed to 0 and the model utilizes only its own cell outputs. This method
improves not only the generalization performance but also the ascending speed of the learning curve.
}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\begin{subequations}
\begin{align}
\gamma_t &= P({correct}_t | {correct}_{t-1}) P({correct}_{t-1}) \nonumber \\
&+ P({correct}_t|{incorrect}_{t-1})P({incorrect}_{t-1}) \label{eq:1}\\
&=p_t \gamma_{t-1} + q_t (1-\gamma_{t-1})
\end{align}
\end{subequations}
If we assume that the learning algorithm converges to an equilibrium state as time $t$ goes to infinity
($\lim_{t\to \infty} \gamma_t = \gamma$, $\lim_{t\to \infty} p_t = p$, $\lim_{t\to \infty} q_t = q$),
\begin{subequations}
\begin{align}
\gamma &= p \gamma + q (1-\gamma) \nonumber \\
\gamma &= \gamma_0 = \frac{q}{1-p+q}
\end{align}
\end{subequations}
Now, (c) let us consider the case $\beta \in (0,1)$. As depicted in the diagram, the probability that the input tag at time $t$ is correct increases because ground truth label injection occurs with probability $\beta$.
\begin{subequations}
\begin{align}
P({correct}_{t-1}) &= \beta + (1-\beta) \gamma_{t-1} \nonumber \\
P({incorrect}_{t-1}) &= (1-\beta) (1-\gamma_{t-1}) \nonumber
\end{align}
\end{subequations}
In this case, the probability of being correct at time $t$ is computed as in equation \ref{eq:1}:
\begin{subequations}
\begin{align}
\gamma_t &= p_t (\beta + (1-\beta) \gamma_{t-1}) + q_t (1-\beta) (1-\gamma_{t-1}) \nonumber \\
\gamma &= \gamma (\beta) = \frac{p\beta + q(1-\beta)}{1 - (1-\beta)(p-q)}
\end{align}
\end{subequations}
Let us compare $\gamma_0$ and $\gamma(\beta)$. If the learning algorithm has trained the model to output the intermediate tags correctly with high probability, it is natural to consider the case $p > q$ first. Since the numerator is a weighted average between $p$ and $q$, it becomes larger than $q$. In addition, the negative term $-(p-q)$ of the denominator shrinks by a factor of $1-\beta$, resulting in an increase of $\gamma(\beta)$. That is, if $p > q$, then $\gamma_0 < \gamma(\beta)$ for $\beta > 0$, which approximately means a higher asymptotic limit of the learning curve than in the $\beta=0$ case.
In the other case, $p < q$ means the learning algorithm has trained the model to find patterns from a previous incorrect tag input to the correct tags better than from correct tags. This case is possible if incorrect input tags are trained on more than correct input tags. This results in the reversed relation: $\gamma_0 > \gamma(\beta)$.
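The gating rule and the fixed point above can be summarized in a short sketch (the names are ours; the numeric example is only a sanity check of the $p<q$ case):
\begin{verbatim}
# Sketch of the stochastic gate and the fixed point gamma(beta).
# With probability beta the gate feeds the ground-truth embedding,
# otherwise the embedding of the previous cell's argmax output.
def gated_word(y_true, logits_prev, embed, beta, rng):
    if rng.random() < beta:
        return embed[y_true]
    return embed[int(np.argmax(logits_prev))]

def gamma(beta, p, q):
    return (p*beta + q*(1 - beta))/(1 - (1 - beta)*(p - q))

# Sanity check of the p < q case: gamma(0, .6, .8) ~ 0.667 exceeds
# gamma(.5, .6, .8) ~ 0.636, i.e. gamma_0 > gamma(beta) as derived.
\end{verbatim}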
\subsection{Batch Normalized LSTM}
To improve our LSTM models, we adopted batch normalization into them. First, we simply added two BN layers: between the classifier and the LSTM final state, and at the output word projection layer, respectively.
This structure does not modify the LSTM itself. The next model, Batch Normalized LSTM (BNLSTM)\cite{DBLP:journals/corr/CooijmansBLC16,Laurent15}, however, has internal batch normalization, reparameterizing the hiddens and cell memory:
\begin{subequations} \label{eq:bnlstm_model}
\begin{align}
\tilde{x}^j_t &= BatchNorm\left(W^j (x_t \oplus w_t)\right) \\
\tilde{h}^j_t &= BatchNorm\left(U^j h_{t-1}\right) \\
k_t &= \sigma ( \tilde{x}^k_t + \tilde{h}^k_t + b_k) \\
g_t &= \tanh ( \tilde{x}^g_t + \tilde{h}^g_t + b_g) \\
c_t &= f_t\odot c_{t-1} + i_t \odot g_t \\
\tilde{c}_t &= BatchNorm(c_t) \\
h_t &= o_t \odot \tanh \tilde{c}_t
\end{align}
\end{subequations}
where $j\in\{i,o,f,g\}$ and $k\in\{i,o,f\}$.
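A minimal batched NumPy sketch of one BNLSTM step per (\ref{eq:bnlstm_model}), reusing \texttt{sigmoid} from the earlier sketch; \texttt{bn} holds assumed per-connection BN parameters, and only training-time batch statistics are shown (the population statistics used at inference are omitted):
\begin{verbatim}
# Batched NumPy sketch of one BNLSTM step; bn holds assumed BN
# parameters per connection ('x', 'h', 'c'), and only training-time
# batch statistics are shown.
def batch_norm(Z, gain, bias, eps=1e-5):       # Z: (batch, dim)
    mu, var = Z.mean(0), Z.var(0)
    return gain*(Z - mu)/np.sqrt(var + eps) + bias

def bnlstm_step(XW, H_prev, C_prev, Wc, Uc, bc, bn):
    zx = batch_norm(XW @ Wc.T, *bn['x'])       # input-to-hidden
    zh = batch_norm(H_prev @ Uc.T, *bn['h'])   # hidden-to-hidden
    i, o, f, g = np.split(zx + zh + bc, 4, axis=1)
    i, o, f = sigmoid(i), sigmoid(o), sigmoid(f)
    C = f*C_prev + i*np.tanh(g)
    H = o*np.tanh(batch_norm(C, *bn['c']))
    return H, C
\end{verbatim}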
\section{Experimental Setup}
This section illustrates the evaluation process for our approach. First,
we explain the YouTube-8M dataset that we worked on.
Second, we describe the evaluation metrics, and lastly,
the implementation details of our models.
\subsection{YouTube-8M dataset}
YouTube-8M dataset\cite{Google2017} is a large-scale video benchmark dataset collected from Google YouTube.
It provides 8 Million video URLs with 4716 classes (video tags). Every video is
tagged by 3.4 labels in average, and maximum number of labels in a video is around 30.
For each video, there are two different levels;
video-level and frame-level. It provides the videos
as not pixel-level raw frames, but feature representation vectors extracted by
Convolutional Neural Network(CNN) such as Inception network. That is, the dataset
already has extracted significant feature vectors with 1024 dimension from videos
by each frame per second. In addition, it also contains audio feature vectors
with 128 dimension synchronized by video features. Each video has at most 300 frames,
which consist of frame-level datasets, and one average pooled frame, which is video-level data.
They are all stored in the form of TensorFlow Record (tfrecord) binary files, to speed
up loading and preprocessing. There are 4096 train tfrecord files,
4096 validation files, and 4096 test files, respectively. The frame-level dataset,
especially the frame-level train dataset, requires a huge amount of storage space (about 1.2TB);
average-pooled video-level (Inception feature + audio feature) datasets are provided
for this reason. The video-level training dataset requires less than 30 GB.
In this paper, we focus on video-level datasets to implement video classifier.
This is possible because recent studies \cite{DBLP:journals/corr/DonahueHGRVSD14,DBLP:journals/corr/VenugopalanXDRMS14} proved that mean pooling layer
can be one of the efficient methods to aggregate frames in a video.
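For concreteness, a hedged sketch of decoding one video-level tfrecord example in the TensorFlow 1.x style; the feature keys follow the starter-code conventions, \texttt{serialized\_example} is assumed to come from a TFRecord reader, and exact API details may differ across versions:
\begin{verbatim}
# Hedged TF 1.x sketch of decoding one video-level example; the
# feature keys follow the starter-code conventions, and
# serialized_example is assumed to come from a TFRecord reader.
import tensorflow as tf

features = tf.parse_single_example(serialized_example, features={
    'video_id':   tf.FixedLenFeature([], tf.string),
    'labels':     tf.VarLenFeature(tf.int64),
    'mean_rgb':   tf.FixedLenFeature([1024], tf.float32),
    'mean_audio': tf.FixedLenFeature([128], tf.float32)})
video_level = tf.concat(
    [features['mean_rgb'], features['mean_audio']], axis=0)
\end{verbatim}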
\subsection{Evaluation Metrics}
For information retrieval, we can measure three different evaluation metrics
for the performance of topic classifiers such as Hit@k, PERR and GAP. \cite{DBLP:journals/corr/Abu-El-HaijaKLN16}
Hit@k is the fraction of retrieved samples that include one or more ground truth labels in the top $k$ predictions.
PERR means Precision at Equal Recall Rate, which measures the average fraction of correct predictions
when the number of retrieved predictions equals the size of the set of ground truth labels, rather than a fixed value $k$. The calculation of both Hit@k and PERR
is based on ranking entity (label) scores from predictions. Finally, GAP comes from the concept of averaged precision.
This GAP is a standard evaluation for YouTube-8M dataset\cite{Kaggle2017}. The detailed definitions of these
metrics can be found in \cite{DBLP:journals/corr/Abu-El-HaijaKLN16, Google2017}. Especially, \cite{Google2017} provides automatic evaluation tools for these metrics,
so we use them for these experiments.
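As an illustration only (GAP additionally pools scored pairs over all videos before computing average precision, so it is omitted), per-video Hit@k and PERR can be sketched as:
\begin{verbatim}
# Per-video Hit@k and PERR from ranked scores; truth is the set of
# ground truth label ids for the video.
def hit_at_k(scores, truth, k=1):
    topk = np.argsort(scores)[::-1][:k]
    return float(any(t in truth for t in topk))

def perr(scores, truth):
    G = len(truth)
    topG = np.argsort(scores)[::-1][:G]
    return sum(t in truth for t in topG)/G
\end{verbatim}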
\subsection{Experimental details of our models}
\textbf{Baseline Description} For multi-label video classification on Youtube-8M dataset,
logistic classifier and mixture of experts model are applied for video-level classification\cite{DBLP:journals/corr/Abu-El-HaijaKLN16}.
We also use them as our baseline models, as provided in the starter code published by Google\cite{Google2017}.
Since our model can be unified with classifiers including these baseline models, we can improve our models by boosting
the classifiers, e.g., by adding dropout layers or extending the dimensions of layers.
In this paper, we do not focus on these classifiers and leave them for future work.
\textbf{Base LSTM model}
We implemented 2 layered standard LSTMs as explained in Section 3.
The size of hidden state in an LSTM cell is 256 and the word embedding layer has 64 dimensional output word vector.
They are initialized to be orthogonal to each other, and the LSTM cells run up to $T$ steps, the number of label entities of a video sample.
A (shared) word projection layer generates a vocabulary distribution vector for each LSTM cell output. Then we calculate ${loss}_{word}$ by using standard softmax cross entropy for each output.
We examine whether this structure can show significant result or not.
\textbf{Guided LSTM with Stochastic Gating Mechanism}
We implemented a different structure of LSTMs guided by video-to-tag translation process. So the guided LSTM can act as a feature extractor for the connected classifier. For this experiment, we figure out which structural feedback variants of guided LSTM can perform better generalization than the base LSTM model. Here, we chose the baseline logistic model as our classifier following LSTMs.
All parameter settings are equal to those of the above base model, except that one additional LSTM step runs to generate the hidden state input to the classifier. In addition, we apply binary cross entropy to ${loss}_{word}$ to achieve much faster learning convergence. We calculate an additional ${loss}_{class}$ by using binary cross entropy for the final prediction, and optimize both ${loss}_{word}$ and ${loss}_{class}$.
\textbf{Adding Batch Normalization layers into guided LSTM}
Since a batch normalization(BN) layer is powerful, it has become a trend to add BN layers to every layer in the structure.
However, our stochastic gating mechanism can distort the distribution of input word vectors. So we attempted to add BN layers gradually.
We first add a BN layer before each loss calculation. This preserves the LSTM structure itself. Secondly, we upgrade the LSTM layer to a BNLSTM layer. Lastly, we exploit both additions to figure out the performance improvement.
\textbf{Extension to other classifiers}
All the above models cooperate with the logistic classifier model. To show that our model can be a collaborative feature extractor with other classifiers, we reconnected it to the MoE model. As further work, this suggests our model can be upgraded by using more competitive classifiers.
\section{Results and Discussion}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Model & Hit@1 & PERR & GAP \\
\hline\hline
Logistic Model & 82.5 & 69.1 & 75.9 \\
Mixture of Experts(MoE) & \textbf{83.9} & \textbf{70.7} & \textbf{78.0} \\
\hline\hline
ours & & & \\
\hline
max pooling & 80.9 & 66.5 & 73.0 \\
guided ($\beta=1.0$) & 81.4 & 67.5 & 74.4 \\
guided ($\beta=0.5$) & 81.8 & 68.0 & 75.1 \\
guided ($\beta=0.0$) & \textbf{82.5} & \textbf{68.9} & \textbf{76.3} \\
\hline
guided ($\beta=0.5$) & & & \\
LSTM + BN layer & 83.0 & 69.4 & 76.9 \\
BNLSTM & \textbf{83.9} & \textbf{70.8} & \textbf{78.3} \\
BNLSTM + BN layer & 83.6 & 70.2 & 77.9 \\
\hline
guided ($\beta=0.0$) & & & \\
LSTM + BN layer & 83.4 & 69.6 & 77.4 \\
BNLSTM & \textbf{84.3} & \textbf{71.2} & \textbf{78.8} \\
\hline
BNLSTM + MoE & \textbf{84.5} & \textbf{71.5} & \textbf{79.1} \\
\hline
\end{tabular}
\end{center}
\caption{Validation results of our models (100k iterations). All values in this table are averaged results and reported in percentage(\%).}
\end{table}
\textbf{Base and guided LSTM models}
We obtained two significant results from the first experiment. First, regardless of $\beta$, the guided LSTM structure performs better
than the base model (GAP 73.0\%). The guidance process makes the LSTM learn dynamics that generate a final hidden state tracked by label
entities. Second, although ground truth label injection ($\beta=1$, 74.4\%) is prevalent in neural machine translation and image description structures,
the validation results show that the metrics improve as $\beta$ decreases ($\beta=0.5$, 75.1\%; $\beta=0.0$, 76.3\%). As described earlier, we can check that $\gamma_0 > \gamma(\beta)$.
In addition, since the activation function after the word projection structure is a softmax, it becomes a bottleneck of the base model.
The learning speed of the guided LSTM, however, is increased by using an individual binary cross entropy for each word projection, which removes the bottleneck.
\textbf{Batch Normalized LSTM and Extention with MoE}
We performed the second experiment for two different ground truth label injection probabilities ($\beta=0.5, 0.0$). As reported in Table 1, the guided LSTM model attains higher evaluation metrics by just adding a BN layer between the guided LSTM and the logistic classifier (76.9\%, 77.4\%). In addition, BNLSTM with the logistic classifier (78.3\%) shows higher performance than the case of adding a BN layer (77.9\%). We had guessed that adding both modifications to the structure could perform better, but its validation results do not show improvement. We concluded that this may be related to the stabilization of BN layers, because each BN layer has its own population mean and variance, accumulated from batch means and variances with an exponential decaying algorithm, and the result varies with these decaying factors. Above all, we obtained higher results (78.8\%) with $\beta=0.0$ than the base mixture model (78.0\%). In the final experiment, we demonstrated the possibility of extending our model by achieving the highest value with a different classifier (79.1\%).
\section{Conclusion}
In this paper, we proposed a method to use LSTMs as a feature extractor for multi-label video classification and investigated how to improve LSTM performance through batch normalization.
For better generalization, we found that the stochastic gating mechanism with $\beta=0.0$ shows better validation results than $\beta>0.0$. This means it is better to use the feedback loop from the previous LSTM cell in both the training and inference phases. In addition, the batch normalization layer improved the performance, but it requires careful consideration of which parts of the structure a BN layer is attached to.
Lastly, mean pooling is known to be an effective aggregation method, but the ordered relational information visible at the frame level may disappear under mean pooling. Therefore, it may be difficult to distinguish videos with the same frames but different meanings depending on the order of the frames. In this paper, transfer learning is performed by using the given CNN features from the database. To deal with frame-level features directly, we can put an LSTM encoder in place of the mean pooling layer. In addition, it could be a possible way to put an attention layer between the encoder and decoder LSTMs to boost the overall metrics.
\section{Acknowledgements}
We thank Dr. Joonoo Kim in Mobile Communications Business of Samsung Electronics, who supported us in performing this research and publishing this work as a project leader, and Dr. Sundo Choi at Samsung Advanced Institute of Technology for his kind advice and helpful discussion.
{\small
\bibliographystyle{ieee}
\section{Introduction}
\label{sec1}
In many applications there is interest in regressing an outcome on exposures observed over a previous time window. This frequently arises in environmental epidemiology applications where either a health outcome on one day is regressed on exposures (e.g. temperature or air pollution) observed on that day and several preceding days, or a birth or children's health outcome is regressed on exposures observed daily or weekly throughout pregnancy \citep{Stieb2012}.
In the context of maternal exposure to air pollution, which we consider in this paper, there are generally two inferential goals. The first is to estimate the critical windows of susceptibility--periods in time during which an exposure can alter a future phenotype. The second goal is to estimate the exposure-time-response function. Recent studies have identified critical windows and associations between maternal exposure to air pollution and several outcomes including preterm birth \citep{Chang2012,Chang2015a}, adiposity \citep{Chiu2017}, asthma and wheeze \citep{Bose2018a,Lee2017}, and neurodevelopment \citep{Chiu2016a}, among other outcomes \citep{Stieb2012,Sram2005}. This includes studies that have found that the linear \citep{Chiu2017, Chang2015a} and nonlinear \citep{Wu2018a} associations vary across weeks of gestation.
A popular approach to estimate the association between maternal exposure to air pollution during pregnancy and a birth outcome is a distributed lag model (DLM) \citep{Schwartz2000a,Zanobetti2000}. In a DLM, the outcome is regressed on the exposures at each of the time points, e.g. mean exposure during each week of pregnancy. Most commonly, the model is constrained so that the exposure effect varies smoothly over time. Constraining the model adds stability to the estimator in the presence of typically high temporal correlation in the exposure. Methods of regularization include penalized spline regression \citep{Zanobetti2000}, Gaussian processes \citep{Warren2012}, principal components or splines \citep{Wilson2017a}. \cite{Wilson2017} showed that a constrained DLM outperforms more naive methods such as using average exposure over each of the trimesters because DLMs adjust for exposures at other time points throughout pregnancy and provide a data driven approach to identify critical windows even when they do not align with clinically defined trimesters.
To extend the DLM to estimate nonlinear associations in the exposure-response function at any given time, a class of distributed lag nonlinear models (DLNMs) has been proposed \citep{Gasparrini2010, Gasparrini2017}. DLNM methods typically operate by cross-basis smoothing with splines or penalized spline regression. This results in a unique nonlinear exposure-response function at each time point that varies smoothly over the lagged exposures.
A consequence of imposing smoothness over time in a DLM or DLNM is that estimates may generalize the critical window(s) to a wider set of times than is appropriate. Critical windows are hypothesized to be defined by biological events in the fetal developmental process that may be altered by environmental exposures. Methods that can adapt to the discrete time spans of these events are needed to better estimate critical windows. Motivated by this, \cite{Warren2019} proposed a hierarchical Bayesian framework to improve critical window characterization for DLMs using a variable selection approach that selected weeks in or out of the critical window. However, there are no DLNM methods that relax the smoothness constraint for a nonlinear exposure-response function.
In this paper, we propose a method for DLNM that relaxes the smoothness assumption and can more precisely identify critical windows. The proposed approach, which we call treed distributed lag nonlinear models (TDLNM), is based on the Bayesian additive regression trees (BART) framework developed by \cite{Chipman2012}. Applied to estimating a distributed lag function, TDLNM treats the time series of exposures as a single multivariate predictor and uses a tree structure to partition the exposure concentration and time dimensions to construct a flexible exposure-time-response surface.
We propose two forms of TDLNM. The first form uses a dichotomous tree structure to form a piecewise constant exposure-time-response function. By using an ensemble of trees, the model can approximate both smooth and non-smooth functions. The second form imposes smoothness only in the exposure-concentration dimension but not over time. This forces smoothness in the exposure-response, while maintaining precision in critical window identification. We also discuss how the smooth version can be used to incorporate exposure uncertainty into the model.
Following development of TDLNM, we perform a simulation study that compares our proposed method to spline-based methods across a variety of settings. These simulations demonstrate that our method excels in the estimation of the exposure-time-response function for non-smooth settings, but also adapts well to estimating scenarios with a smooth exposure-time-response. Importantly, we find that TDLNM more precisely identifies critical windows and has an extremely low rate of critical window misspecification. In addition, simulations show that TDLNM has narrower confidence intervals, especially near the boundaries, while maintaining nominal coverage. Finally, we apply TDLNM to estimating the association between the fine particulate matter (PM$_{2.5}$) experienced by a mother during pregnancy and the resulting birth weight of the child. Software to implement this method is available in the {\tt R} package {\tt dlmtree}.
\section{Data}
We analyze birth records from Colorado, USA, vital statistics data. The data includes live, singleton, full term ($\ge 37$ weeks gestation) births from Colorado with estimated conception dates between 2007 and 2015, inclusive, with no known birth defects. We limited the analysis data to the northern front range counties (those immediately east of the Rocky Mountains, roughly extending from Colorado Springs to the Wyoming border). This area contains the majority of the Colorado population. We further limited the analysis to census tracts with elevation less than 6000 feet above sea level. This both reduces potential confounding by altitude and lessens the impact of mountainous terrain on the exposure data.
The primary outcome of interest in this paper is birth weight for gestational age $z$-score (BWGAZ). We obtained BWGAZ using the Fenton birth charts \citep{Fenton2013a}. BWGAZ measures birth weight as the number of standard deviations above or below the expected birth weight of a child with the same fetal age and sex. The data contain individual level covariate information including mother's age, weight, height, income, education, marital status, prenatal care habits and whether they smoked before or during pregnancy, as well as race and Hispanic designations. We limit the analysis to observations with complete covariate information, resulting in 300,463 births.
We use PM$_{2.5}$ exposure data from the US Environmental Protection Agency fused air quality surface using downscaling data. This data is publicly available at \url{www.epa.gov/hesc/rsig-related-downloadable-data-files}. The statistical methodology for construction of the data files has been described in \cite{Berrocal2009AModels}. We linked the exposure data to the birth records based on the census tract of maternal residence at birth. We then constructed weekly average exposures for each week of gestation. A map detailing the number of births in each county is shown in Supplemental Figure 1.
This study was approved by the Institutional Review Board of Colorado State University.
\section{Methods}
\subsection{DLNM Framework}
Before introducing our proposed method we briefly recap the DLNM framework and standard methodology. Let $y_i$ be the continuous outcome for person $i$ from a sample $i=1,\dots,n$. Let $\mathbf{x}_i=[x_{i1},\ldots,x_{iT}]^T$ denote a vector of exposures observed at equally spaced times $t=1,\ldots,T$. In our case, $y_i$ indicates BWGAZ while $x_{it}$ represents the $i^{th}$ mother's exposure to PM$_{2.5}$ in week $t$ of pregnancy. We control for a vector of covariates, denoted $\mathbf{z}_i$. The Gaussian DLNM model is
\begin{equation}
\label{eq:main}
y_i\sim\mathcal{N}\left[f(\mathbf{x}_i)+\mathbf{z}_i^T\boldsymbol{\gamma}, \sigma^2\right],
\end{equation}
where $f(\mathbf{x}_i)$ is the distributed lag function and $\boldsymbol{\gamma}$ is a vector of regression coefficients.
The distributed lag function $f(\mathbf{x}_i)$ can take several linear as well as nonlinear forms. The DLNM allows a unique nonlinear association between exposures at each time point and the outcome. In general, the distributed lag function is defined as
\begin{equation}\label{eq:dlnm}
f(\mathbf{x}_i)=f(x_{i1},\ldots,x_{iT})=\sum_{t=1}^T w(x_{it},t)
\end{equation}
where $w(x,t)$ is the exposure-response function relating exposure at week $t$ of gestation to the outcome. Existing frameworks for the DLNM \citep{Gasparrini2010,Gasparrini2017} utilize a cross-basis where $w$ is represented as a bivariate basis expansion in the exposure concentration and time dimensions. Penalized spline implementations allow for a range of assumptions to be made regarding the structure of the exposure-time-response. For example, varying ridge penalties target shrinkage at specific times, while varying difference penalties control the smoothness along the curve. Basis expansion methods, such as splines, regularize the model to improve stability of the estimated effect in the presence of multicollinearity in the predictor. However, these methods also impose the assumption of smoothness in the DLNM.
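To fix ideas, a minimal Python sketch of a cross-basis design row follows, with simple polynomial bases standing in for the splines of the DLNM literature (the function name, basis choices and degrees are ours); the distributed lag term then becomes linear in the cross-basis coefficients $\theta$:
\begin{verbatim}
# Illustrative cross-basis design row (NumPy), with polynomial bases
# standing in for splines: w(x,t) = sum_{jk} theta_jk Bx_j(x) Bt_k(t),
# so f(x_i) = row . theta is linear in the coefficients theta.
import numpy as np

def cross_basis_row(x, deg_exp=3, deg_lag=3):
    T = len(x)
    Bx = np.vander(x, deg_exp + 1, increasing=True)        # (T, px)
    Bt = np.vander(np.arange(1, T + 1)/T, deg_lag + 1,
                   increasing=True)                        # (T, pt)
    return np.einsum('tj,tk->jk', Bx, Bt).ravel()          # sum over t
\end{verbatim}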
\subsection{Treed DLNM Approach}\label{sec:treed-dlnm}
We introduce a sum-of-trees model based on the BART framework \citep{Chipman2012} to estimate the exposure-time-response function, $f(\mathbf{x}_i)$. The general approach is to build dichotomous trees that partition the time-varying exposure $\mathbf{x}_i$ in both the exposure concentration and time dimensions. Figure~\ref{fig:dlnm-tree-example} illustrates the approach for a single tree. Figure~\ref{fig:dlnm-tree-example-a} is a diagram of a tree showing binary rules defined on the exposure and time values. These rules divide the exposure-time space into five terminal nodes, denoted $\eta_1,\ldots,\eta_5$. Figure~\ref{fig:dlnm-tree-example-b} shows the exposure-time space partitioned into five regions with each region corresponding to a single terminal node. A tree and corresponding parameters define a piecewise constant exposure-response function,
\begin{equation}
\label{eq:wa_constant}
w(x_{it},t)=\mu_{b} \quad \text{if } (x_{it},t) \in \eta_{b}.
\end{equation}
The distributed lag function for tree $\mathcal{T}$ takes a form similar to that in \eqref{eq:dlnm} and is defined as
\begin{equation}
\label{eq:partial-dlnm}
g(\mathbf{x}_i,\mathcal{T})= \sum_{t=1}^T w(x_{it},t).
\end{equation}
In our TDLNM framework, we consider an ensemble of $A$ regression trees. For tree $\mathcal{T}_a$, $a\in\{1,\ldots,A\}$, denote the $B_a$ terminal nodes as $\eta_{a1},\ldots,\eta_{aB_a}$. Each terminal node $\eta_{ab}$ has a corresponding set of limits in time and exposure concentration, given by the rules defined as splits of the tree, and a corresponding parameter $\mu_{ab}$. Collectively, the terminal nodes of each tree define a partition of the exposure-time space and allow for flexible estimation of the exposure-time surface. As in \eqref{eq:wa_constant}, we define the effect of each exposure-time combination in tree $\mathcal{T}_a$ to be $w_a(x_{it},t)=\mu_{ab}$ if $(x_{it},t)\in\eta_{ab}$. Each regression tree in the ensemble provides a partial estimate of the distributed lag nonlinear function $f$. Formally, the exposure-time-response function for TDLNM is
\begin{equation}
\label{eq:partial-dlf}
f(\mathbf{x}_i)=\sum_{a=1}^A g(\mathbf{x}_i,\mathcal{T}_a),
\end{equation}
where $g(\mathbf{x}_i,\mathcal{T}_a)$ represents the partial estimate contributed by tree $a$ given in \eqref{eq:partial-dlnm}.
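A schematic Python sketch of \eqref{eq:wa_constant}, \eqref{eq:partial-dlnm} and \eqref{eq:partial-dlf} follows; encoding each terminal node by explicit time and exposure limits is a hypothetical simplification of the limits implied by a tree's splitting rules.
\begin{verbatim}
INF = float('inf')

# Hypothetical encoding of one tree's terminal nodes as
# (t_lo, t_hi, x_lo, x_hi, mu).
tree = [( 1, 10, -INF,  INF,  0.00),
        (11, 15, -INF,  7.0,  0.01),
        (11, 15,  7.0,  INF, -0.03),
        (16, 37, -INF,  INF,  0.00)]

def w_tree(x, t, nodes):
    # Piecewise-constant w(x, t): the mu of the node containing (x, t).
    for t_lo, t_hi, x_lo, x_hi, mu in nodes:
        if t_lo <= t <= t_hi and x_lo <= x < x_hi:
            return mu
    return 0.0

def g(x_vec, nodes):
    # Partial distributed lag estimate from a single tree.
    return sum(w_tree(x, t, nodes) for t, x in enumerate(x_vec, start=1))

def f_ensemble(x_vec, trees):
    # TDLNM estimate: sum of the partial estimates over the ensemble.
    return sum(g(x_vec, nodes) for nodes in trees)
\end{verbatim}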
TDLNM forgoes the basis-imposed smoothness assumption. However, when different time and exposure breaks are staggered across trees, the ensemble of trees can approximate smooth functions. Model regularization is a result of the tree prior, which prefers trees having only a few splits. Smaller trees ensure that the model is stable in the presence of temporal correlation because each terminal node averages across multiple time points.
\subsection{Smoothing in exposure concentration} \label{sec:smoothing_exposure_conc}
Most epidemiological studies assume that the exposure-response relationship is smooth in exposure concentration. The TDLNM method presented above assumes a piecewise constant structure that can approximate a smooth function but is never truly smooth. In this subsection we propose a TDLNM model that is truly smooth in exposure (TDLNMse). Importantly, TDLNMse does not force smoothness in time, to allow for accurate critical window estimation.
To allow for smoothing in the exposure-response, we introduce a weight function on the terminal-node specific effects. A similar idea was introduced by \cite{Linero2018}, which assigned a node-specific probability to each observation using a gating function at each dichotomous split on a covariate. TDLNMse differs in that we desire smoothing only in the exposure-concentration dimension. To accomplish this, we define smoothing parameter $\sigma_x$ and modify \eqref{eq:wa_constant} to be
\begin{equation}
\label{eq:wa_smooth}
w_a\left(x_{it},t\right)=
\sum_{b=1}^{B_a} \mu_{ab}\cdot \psi(x_{it}; \eta_{ab},\sigma_x).
\end{equation}
The weight function $\psi(x_{it}; \eta_{ab},\sigma_x)$ allows each observation $x_{it}$ to be distributed across all terminal nodes that contain time point $t$. For the weight function we use a normal kernel with bandwidth $\sigma_x$. Hence, the weight for $x_{it}$ assigned to node $\eta_{ab}$ is
\begin{equation}\label{eq:weight_x_in_eta}
\psi(x_{it}; \eta_{ab},\sigma_x)=
\left\{
\Phi\left(\frac{\lceil x_{ab}\rceil-x_{it}}{\sigma_x}\right)-
\Phi\left(\frac{\lfloor x_{ab}\rfloor-x_{it}}{\sigma_x}\right)
\right\}
\cdot \mathbb{I}(t\in\eta_{ab}),
\end{equation}
where $\lceil x_{ab}\rceil$ and $\lfloor x_{ab}\rfloor$ refer to the maximum and minimum exposure concentration limits of node $\eta_{ab}$, respectively, and $\Phi$ is the standard normal cumulative distribution function. The inclusion of the indicator function allows TDLNMse to retain a piecewise constant effect in time at each exposure concentration value. The kernel smoother requires fewer terminal nodes to estimate a smooth effect in exposure concentration, as observations near the boundary of two terminal nodes will have an estimated effect in between those of observations located centrally in the nodes.
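In code, \eqref{eq:weight_x_in_eta} is a direct difference of normal CDFs; a sketch in Python, again with a hypothetical node encoding as explicit limits:
\begin{verbatim}
from math import erf, sqrt

def Phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def psi(x, node, sigma_x, t):
    # node = (t_lo, t_hi, x_lo, x_hi): time limits and exposure limits
    # (floor and ceiling) of a terminal node.
    t_lo, t_hi, x_lo, x_hi = node
    if not (t_lo <= t <= t_hi):   # indicator I(t in node)
        return 0.0
    return Phi((x_hi - x) / sigma_x) - Phi((x_lo - x) / sigma_x)
\end{verbatim}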
For TDLNMse, we propose fixing the bandwidth $\sigma_x$ a priori. Alternatively, we could assign a prior to $\sigma_x$ and estimate the bandwidth.
\subsection{Incorporating exposure uncertainty with TDLNMse} \label{sec:incorporating_measurement_error}
Uncertainty in the exposure is a situation that has not been addressed in the DLNM literature. Many exposure models, including climate models and spatially kriged exposure models, provide measures of uncertainty. Most commonly these occur in one of two forms---standard errors for the exposure data or multiple realizations from a model, such as draws from a posterior predictive distribution or an ensemble method. This uncertainty is not accommodated in the health effect estimates from standard DLNMs.
Exposure uncertainty can be incorporated into TDLNM by using a weight function to spread the exposure across multiple terminal nodes according to the probability that the exposure lies in each of those nodes. The result is similar to TDLNMse, using a weight function corresponding to the uncertainty in each observation. In the case of reported standard errors for the exposure data, we use \eqref{eq:weight_x_in_eta} with an observation-specific smoothing parameter $\sigma_{xi}$ equal to the standard error for each observation. If instead we have multiple draws of exposures from an ensemble or Bayesian model, we replace $\Phi$ in \eqref{eq:weight_x_in_eta} with the empirical cumulative distribution function.
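For exposure draws, the empirical-CDF weight reduces to the fraction of draws falling inside the node's limits; a minimal sketch:
\begin{verbatim}
import numpy as np

def psi_empirical(draws, node, t):
    # Fraction of the exposure draws for observation (i, t) that fall
    # within the node's exposure limits; zero if t is outside the node.
    t_lo, t_hi, x_lo, x_hi = node
    if not (t_lo <= t <= t_hi):
        return 0.0
    d = np.asarray(draws, dtype=float)
    return np.mean((d >= x_lo) & (d < x_hi))
\end{verbatim}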
\subsection{Interpretation of TDLNM and relation to spline-based DLNMs}
To gain some insight into the exposure-response function characterized by TDLNM we consider the DLNM relation at a single time point. The distributed lag function in TDLNM is given by combining \eqref{eq:partial-dlnm} and \eqref{eq:partial-dlf}, i.e. the sum over trees and the sum of each tree over time. Reversing the order of the summation we get
\begin{equation}
\label{eq:TDLNM-dl-est}
f(\mathbf{x}_i)=\sum_{t=1}^T\sum_{a=1}^A w_a(x_{it},t).
\end{equation}
At time $t$, the exposure-response function, $\sum_{a=1}^A w_a(x_{it},t)$, is equivalent to the BART model with univariate predictor $x_{it}$. In the case of TDLNM this implies a piecewise-constant exposure-response function across the exposure concentration levels at time $t$. For TDLNMse, the weight function $\psi$ acts as a linear smoother over exposure concentration for the exposure-response function at each time.
\section{Prior Specification and Computation}
Our prior specification is based on that of \cite{Chipman2012}; however, some modifications and a different MCMC algorithm are needed to accommodate the multivariate predictor and parametric control for covariates and to improve performance. In this section we specify the key differences in the priors and computational approach from those of \cite{Chipman2012}, including a horseshoe-like shrinkage prior on tree-specific effects and an altered prior for tree splits on a multivariate predictor. Full details on the priors and computation are in the Supplemental Materials, Section B.
\subsection{Prior Specification}
We apply a tree-specific, horseshoe-like prior to the effects at the terminal nodes $\mu_{ab}$ \citep{Carvalho2010}. The prior for terminal node $b$ on tree $a$ is
\begin{equation}\label{eq:dlnm-prior}
\mu_{ab}|\sigma^2,\omega^2,\tau_a^2
\sim
\mathcal{N} \left(0, \sigma^2\omega^2\tau_a^2\right).
\end{equation}
Here $\tau_a\sim\mathcal{C}^+(0,1)$ and $\omega\sim\mathcal{C}^+(0,1)$ define the horseshoe prior on trees. We specify priors $\sigma\sim\mathcal{C}^+(0,1)$ and $\boldsymbol{\gamma}\sim \mathcal{MVN}(\boldsymbol{0},\sigma^2 c\mathbf{I})$, where $c$ is fixed at a large value.
For the half-Cauchy priors on all variance parameters we adopt the hierarchical framework of \cite{Makalic2015}, where $r^2|s\sim \mathcal{IG}(1/2,1/s)$ and $s\sim\mathcal{IG}(1/2,1)$ imply that marginally $r\sim\mathcal{C}^+(0,1)$. This allows for Gibbs sampling of all variance components.
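The mixture representation can be checked by direct Monte Carlo; a Python sketch assuming nothing beyond the inverse-gamma hierarchy above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def inv_gamma(shape, scale, size=None):
    # InverseGamma(shape, scale) via the reciprocal of a Gamma draw.
    return scale / rng.gamma(shape, size=size)

# s ~ IG(1/2, 1), r^2 | s ~ IG(1/2, 1/s)  =>  marginally r ~ C+(0, 1).
s = inv_gamma(0.5, 1.0, size=100_000)
r = np.sqrt(inv_gamma(0.5, 1.0 / s, size=s.shape))
# Sanity check: the median of a standard half-Cauchy is 1.
print(np.median(r))   # approximately 1.0
\end{verbatim}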
The tree-specific shrinkage prior on $\mu_{ab}$ results in better mixing throughout the MCMC sampler. This occurs by allowing shrunken trees with a small variance component and smaller effects $\mu_{a1},\ldots,\mu_{aB_a}$ to more easily explore splitting locations in the exposure-time space. After reconfiguration, these trees have the ability to contribute larger partial estimates.
Our stochastic tree generating process largely follows \cite{Chipman1998}. The probability that a tree splits at node $\eta$ with depth $d_\eta$ equals $p_\text{split}(\eta)=\alpha(1+d_\eta)^{-\beta}$, where the hyperparameters satisfy $\alpha\in (0,1)$ and $\beta\geq 0$. In our data setting the number of potential splits in the time direction is $T-1$, while the number of potential split points in the exposure direction equals the number of unique exposure values minus one, which is substantially larger than $T-1$. To address this imbalance we limit the potential exposure split points a priori and propose an alternative prior on potential split points. By limiting the exposure split points we also avoid situations where a split in one dimension limits future splits in another dimension due to empty nodes. For example, if TDLNM has a tree that splits on an extreme value in the exposure-concentration dimension, it may be unable to further split on the time dimension due to lack of data in one partition of the exposure-time space. We restrict the potential split locations in the exposure dimension to a predefined set of quantiles or values. Specifying the potential splitting values also improves computational efficiency by allowing precalculation of counts or weights for the limited number of potential splits.
We assign prior probabilities uniformly across potential time splits and uniformly across potential exposure splits such that there is a $0.5$ probability of selecting either a time or an exposure split as the first splitting rule in a tree. Hence, for $s_x$ and $s_t$ total potential exposure and time splitting values, respectively, the probability that the first split in a tree is a particular exposure split equals $1/(2s_x)$, or $1/(2s_t)$ for a particular time split. For a splitting decision further down the tree, the splitting rule probability is proportional to the probabilities of the potential remaining splits in the selected node. Following a split in time, there are fewer potential remaining splits in time, increasing the probability that the next split will take place in the exposure dimension.
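For example, with $T=37$ weekly exposures ($s_t=36$) and the $s_x=30$ predefined exposure splitting values used in our analyses, a particular exposure split has prior probability $1/60$ of being chosen as the first rule in a tree, and a particular time split has probability $1/72$.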
\subsection{MCMC Sampler}\label{sec:mcmc}
We estimate TDLNM using MCMC. The MCMC approaches used for BART do not apply to the current model for two reasons. First, the algorithm of \cite{Chipman2012} relied on the fact that any specific vector of predictors $\mathbf{x}_i$ is contained in a single terminal node on each tree, whereas TDLNM divides the exposures related to each observation across the terminal nodes. Second, we modify the algorithm to allow for parametric control of the confounding variables $\mathbf{z}$. Due to these differences we propose an alternative MCMC approach for the TDLNM model. In particular, we integrate out $\boldsymbol{\gamma}$ using standard analytical techniques. Then, we apply Bayesian backfitting \citep{Hastie2000} to simultaneously estimate the effects of the partial exposure-time-response based on the partition defined by each tree, $\mathcal{T}_a$. Full details of the MCMC sampler can be found in Supplemental Materials Section B.
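Stripped of the tree updates themselves, the backfitting logic is easy to sketch; in the toy Python skeleton below each ``learner'' is a stand-in mean-shift, whereas in TDLNM the update would be a Metropolis--Hastings tree move plus Gibbs draws of the terminal-node effects:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Toy Bayesian-backfitting skeleton with A stand-in learners.
n, A = 200, 5
y = rng.normal(loc=3.0, size=n)
fits = np.zeros((A, n))

for sweep in range(10):
    for a in range(A):
        # Residual with all learners except a subtracted out.
        resid = y - (fits.sum(axis=0) - fits[a])
        # Stand-in update; TDLNM would instead propose a tree move and
        # draw the terminal-node effects from their full conditionals.
        fits[a] = resid.mean()
\end{verbatim}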
\subsection{Hyperprior selection and tuning}
\label{sec:prior_setup}
Tree splitting hyperparameters were set to the defaults used in \cite{Chipman2012}, $\alpha=0.95$ and $\beta=2$; different settings did not improve results. Trees in TDLNM explore only two dimensions, which requires fewer trees to adequately explore the predictor space. In preliminary work, we found that 10 to 20 trees were sufficient, and results did not change using more than 20 trees. We used $A=20$ trees for our simulation and data analysis. We assigned the stochastic tree process to grow or prune with probability 0.3 each and change with probability 0.4. The fixed smoothing parameter in TDLNMse, $\sigma_x$, is data dependent: too large and the estimated effect will appear linear; too small and the model reverts to TDLNM (no smooth effect). For our simulation and data analysis, we set $\sigma_x$ to half the standard deviation of the exposure data. We found this setting to balance a smooth effect while also clarifying nonlinearity in the exposure-concentration effects.
\subsection{Estimating the exposure-time-response function}\label{sec:estimating-dlnm}
The distributed lag nonlinear function $f$ includes the model intercept. To ease interpretation we remove the intercept by centering $f$ at a reference exposure value, $x_0$, at each time. As $f$ is estimated as a sum of trees, we center each tree at the reference value and use the centered trees for posterior inference.
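Concretely, one natural way to write the centered contribution of tree $\mathcal{T}_a$ is
\[
\tilde{g}(\mathbf{x}_i,\mathcal{T}_a)=\sum_{t=1}^T\left[w_a(x_{it},t)-w_a(x_0,t)\right],
\]
so that the centered surface is identically zero along the reference exposure profile.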
\section{Simulation}
We conduct a simulation study to compare the empirical performance of TDLNM and TDLNMse to established DLNM methods that use penalized and unpenalized splines. Key to the simulation is that we compare performance on simulation scenarios representing both smooth and non-smooth exposure-time-response functions.
We simulate data according to \eqref{eq:main} and \eqref{eq:dlnm}, using a sample size equivalent to our exposure data ($n=300,463$). To accurately represent the autocorrelation found in air pollution exposure data, we use PM$_{2.5}$ exposures from our data analysis, taking 37 consecutive weeks from each observation to represent a full-term pregnancy. We consider four simulated exposure-time-response functions, each corresponding to a different true model (TDLNM, TDLNMse, smooth DLNM with splines, and linear DLM). The four DLNM scenarios are: A) piecewise constant effect in exposure across weeks $11-15$; B) linear effect in exposure across weeks $11-15$; C) smooth, nonlinear effect (logistic shape) in exposure across weeks $11-15$; D) smooth, nonlinear effect (logistic shape) in exposure with a smooth effect in time peaking at week $13$ and extending approximately five weeks in either direction. We generate the outcomes using log-transformed exposure data. All scenarios are centered at log-exposure value 1. Several cross-sections of the exposure-time surfaces are shown in Figures \ref{fig:sim-slice-exp} and \ref{fig:sim-slice-time}. Algebraic details of the DLNM surface for each scenario and a graphical representation can be found in the Supplemental Materials Section C.1.
We generate a set of covariates (five standard normal, five binomial with probability 0.5) and corresponding coefficients from standard normals. We include a seasonal trend by using ozone data. Specifically, we add a random ozone effect for every $5^\text{th}$ week (5, 10, \ldots, 35), where the ozone measurement at each time is centered to mean zero, scaled to have standard deviation one, and multiplied by a draw from $\mathcal{N}(0,\sigma^2=0.04)$. This allows for a different seasonal trend for each simulated dataset that is correlated with both the exposure, PM$_{2.5}$, and the outcome. We set the error variance $\sigma^2$ such that $\text{Var}[f(\mathbf{x}_i)]/\sigma^2=1/1000$ to represent a realistic signal-to-noise ratio and run 500 simulation replicates in each scenario. The simulation design can be reproduced with the {\tt R} package {\tt dlmtree}.
\subsection{Simulation estimators and comparisons}
TDLNM and TDLNMse used the prior settings described in section \ref{sec:prior_setup}. Thirty evenly spaced values ranging between the 0.01 percentile and the 99.9 percentile of all log-exposure values were designated as potential splits in the exposure dimension. After a burn-in period of 5,000 iterations, we ran each model for 15,000 iterations, thinning to every tenth draw.
We compare TDLNM and TDLNMse to several spline-based penalized and unpenalized DLNM models. The models are described as follows with more detail given in \cite{Gasparrini2017}.
\begin{itemize}
\item GAM: base model defined by penalized cubic $B$-spline smoothers of rank 10 in both exposure and time dimensions, with second-order penalties, estimated with REML;
\item DLM: using GAM with a linear assumption in exposure concentration;
\item GLM-AIC: optimal number of unpenalized, quadratic $B$-splines in both exposure and time dimensions (df 1 to 10) selected by minimizing AIC;
\item GAMcr: defined by replacing the cubic $B$-spline basis in GAM with cubic regression splines and penalties on the second derivatives;
\item GAM-exp: GAM, replacing the second-order penalties with a varying ridge penalty.
\end{itemize}
To assess model performance, we center the DLNM for each model at log-exposure value $1$ and evaluate the estimated DLNM over a grid of points.
In each model we include all 10 simulated covariates as well as indicators for year and month to control for the additional seasonal trend. We log-transform the exposure concentration values to reduce skew in the exposure data and allow for equally spaced knots in the spline basis models. The decision to log-transform the exposure has no impact on TDLNM, as the model produces identical results with or without a log-transform; it does, however, affect the smoothing in TDLNMse.
\subsection{Simulation Results}
Summary measures of model performance are shown in Table \ref{tab:sim-results}.
Here, we compare each model by the root mean square error (RMSE) of the entire exposure-time surface as well as the RMSE within and outside the simulated critical windows. We also show the empirical coverage of 95\% confidence intervals along with the average confidence interval width. In addition, the models are compared on the probability of identifying a non-zero effect across grid points inside the simulated critical window (TP), the probability of incorrectly placing a non-zero effect across grid points outside the simulated critical window (FP), and the precision of correct identification of a non-zero effect: TP/(TP+FP). We designate a non-zero effect in the true exposure-time surface as any effect outside of the interval from $-0.005$ to $0.005$ to account for scenarios B and C, which have a non-zero effect everywhere between weeks 11-15, and scenario D, which has a non-zero effect everywhere. Figures \ref{fig:sim-slice-exp} and \ref{fig:sim-slice-time} show cross-sections of the exposure-time-response surface using estimates from models TDLNM, TDLNMse, and GAMcr. A non-zero estimate in the plots indicates a change in the response for any observation with that particular time and exposure-concentration value.
TDLNM and TDLNMse have overall RMSE as good as or better than the spline-based methods in scenarios A, C and D. In all scenarios, the tree-based methods have the lowest RMSE in areas of zero effect in the exposure-time-response surface. Figure \ref{fig:sim-slice-exp} highlights the ability of TDLNM and TDLNMse to find a sharp distinction between times with and without effects. The shrinkage prior on the tree-specific parameters reduces variance, leading to lower RMSE in areas of no effect. In areas of non-zero effect, our models have lower RMSE than spline-based models in scenario A and are comparable in scenarios C and D. In scenario B the RMSE in areas of non-zero effect is higher for TDLNM and TDLNMse, as the spline-based models do a better job interpolating into the extreme exposure values where few data points reside. Figure \ref{fig:sim-slice-time} contrasts how tree-based models attenuate the effect at the boundaries of exposure values, while GAMcr continues the trend linearly.
The tree-based models have near-nominal coverage, except in scenario B. All models show below-nominal coverage in scenario B; however, TDLNM and TDLNMse perform best, each having 87\% surface coverage. In addition, our models have the smallest average confidence interval width, which is particularly notable at the boundaries in time or extreme exposure concentrations, where the `wiggliness' of spline-based models becomes more pronounced (Figures \ref{fig:sim-slice-exp} and \ref{fig:sim-slice-time}). The lack of `wiggliness' in the tree-based model estimates contributes to narrow confidence intervals as well as decreased RMSE, especially in areas of zero effect. Furthermore, the variation between simulation replicates is much smaller for TDLNM and TDLNMse.
Scenario B, while seemingly natural for a DLM, poses several difficulties. First, a proper estimate by TDLNM would require trees with many breaks spanning the exposure concentrations during the correct critical window. Second, TDLNM attenuates the effect where data are sparse (e.g.\ high and low concentrations in this scenario). Third, at high concentrations there is a jump from zero to a large effect that smooth methods cannot accommodate; in particular, DLM extends the critical window well beyond the true period of effect as a result of the smoothness assumption.
Precision of TDLNM and TDLNMse is the highest across all simulation scenarios (Table \ref{tab:sim-results}). The high precision is a result of near-zero FP, with a tradeoff of lower TP in scenarios B and D. The cross-sectional plots in Figure \ref{fig:sim-slice-exp} show the ability of TDLNM and TDLNMse to adapt to non-smooth exposure-time-response surfaces. Supplemental Figure 3 indicates the probability of detecting a non-zero effect in at least one exposure value in each week. These results show that the spline-based methods have a much higher probability of misclassifying weeks just outside of the true critical windows. On the other hand, the tree-based models adapt to changing smoothness in the exposure-time-response surface and rarely detect non-zero effects outside of the true critical window. The key takeaway is that the critical windows detected by TDLNM and TDLNMse have a high probability of being correct.
\section{Data Analysis}
We use TDLNM and TDLNMse to estimate the relationship between a mother's exposure to PM$_{2.5}$ during the first 37 weeks of pregnancy and child BWGAZ. By using weekly exposures, we limit the temporal resolution at which critical windows can be identified with any method to weeks. For comparison, we also apply DLNM using penalized cubic regression splines (GAMcr) and DLM. We control for maternal baseline characteristics as well as seasonal and long-term trends. The maternal characteristics are: pre-pregnancy age (quadratic fit), weight, smoking (before or during pregnancy), income, education, prenatal care (when first received), race and Hispanic designations, elevation, and county of residence. We do not control for fetal sex or gestational age as the outcome, BWGAZ, is already adjusted for these factors. In addition, we adjust for seasonal effects using indicators for year and month of conception.
For TDLNM and TDLNMse, we use the same hyperparameters as in our simulation, running the models for a burn-in period of $5,000$ iterations followed by $15,000$ iterations retaining every tenth draw from our MCMC sampler. We specify 30 equally spaced potential splits in the exposure dimension ranging from the 0.1 percentile to the 99.9 percentile of log-exposure values. Different numbers of potential splits were considered, but showed no differences in the result. In TDLNMse we set the smoothing parameter $\sigma_x$ equal to half the standard deviation of the log-exposures. Models GAMcr and DLM used the same settings as in simulation. The DLNM estimates for all models are centered at the median exposure value (approximately 7 $\mu$g/m$^3$). Critical windows are defined as any week containing a region in the exposure-time-response where the 95\% confidence interval does not contain zero.
\subsection{DLNM Results}
The posterior mean exposure-time-response estimate for TDLNMse is shown in Figure \ref{fig:data-analysis-surface}. PM$_{2.5}$ exposure below the median is associated with an increase in BWGAZ. Exposure concentrations above the median value indicate a slight decrease in BWGAZ, but the 95\% credible intervals do not give reason to believe this is different from zero. This pattern is present across all gestational weeks. Cross-sections of the exposure-time-response surface at weeks 5, 15, 25, and 35 are shown in Figure \ref{fig:data-analysis-slice} and indicate a critical window spanning the entire pregnancy.
Based on TDLNMse, a change from the median (7.0 $\mu$g/m$^3$) to the 25th percentile of PM$_{2.5}$ exposure (5.89 $\mu$g/m$^3$) across the pregnancy would result in a cumulative mean increase in BWGAZ of 0.0132 (95\% CI $[0.0003, 0.0354]$), or approximately 5.74g (95\% CI $[0.11, 15.41]$) when translated to actual birth weight (this is approximate because BWGAZ accounts for gestational age and fetal sex). The nonlinear association shows that a further decrease in PM$_{2.5}$ exposure to the 10th percentile (5.02 $\mu$g/m$^3$) would result in a 0.055 (95\% CI $[0.016, 0.090]$) mean increase in BWGAZ, or an approximate increase of 24.1g (95\% CI $[7.155, 39.10]$). These results suggest that decreasing PM$_{2.5}$ below the current national ambient air quality standards would result in higher average birth weights in this population.
The mean exposure-time-response estimate for GAMcr, shown in Figure \ref{fig:data-analysis-surface}, closely resembles that of TDLNMse. As in our simulations, we see a difference in the tail behavior. GAMcr continues the trend in the effect, with large intervals: despite the large point estimate at low exposure levels, the wide confidence intervals include zero. In contrast, TDLNMse tapers off and estimates a smaller effect with substantially smaller intervals that do not contain zero. The smaller intervals found in TDLNMse near the boundaries are a result of these boundary regions being grouped into terminal nodes that also contain internal regions and therefore receive the same estimates.
Our findings of an association between increased PM$_{2.5}$ and decreased BWGAZ are consistent with previous literature. A meta-analysis by \cite{Sun2016TheMeta-analysis} found a 10 $\mu$g/$m^3$ increase in PM$_{2.5}$ across pregnancy to be associated with 15.9g decrease in birth weight (95\% CI $[-26.8,-5]$); increased exposures in the second and third trimesters were also determined to have a nonzero negative association with birth weight. \cite{Zhu2015a} reported similar results in a separate meta-analysis. \cite{Strickland2019} found that the magnitude of associations between PM$_{2.5}$ and birth weight increased for higher percentiles of the birth weight distribution across all trimesters. Finally, a study investigating individual chemical components of PM$_{2.5}$ found non-zero increased risk of low birth weight for maternal exposures during each trimester of pregnancy \citep{Ebisu2012}.
\subsection{Comparing less flexible model alternatives}
We also fit TDLNM, a DLM, and several linear models for comparison. Each of these models was consistent with the TDLNMse results. More details on these methods can be found in Supplemental Materials Section D.
\section{Discussion}
In this work we have proposed a tree-based method for a DLNM to estimate the association between a time-resolved series of pollution exposures and a continuous birth outcome. TDLNM eliminates the smoothness assumption in the exposure-time-response surface. TDLNMse imposes smoothness only in the exposure-concentration dimension, not over time. TDLNM also has the potential to account for measurement error within the exposure-response function. By relaxing the smoothness assumption in the time dimension, our new methods more precisely identify critical windows of susceptibility.
TDLNM provides several extensions to tree-based regression models. First, we allow for a multivariate predictor with temporal correlation. Second, we provide a computationally efficient method for estimating a tree-based function while controlling for fixed effects. Finally, we eliminate the need for cross-validation to select variance hyperparameters through the application of a horseshoe-like prior on tree-specific effects.
In simulation scenarios, we show that TDLNM and TDLNMse have a low false positive rate of critical window identification, while spline-based DLNMs have a tendency to over-generalize the time periods containing critical windows. Furthermore, our tree-based methods can approximate both smooth and non-smooth exposure-time-response functions. Although they impose no smoothness assumption in time, TDLNM and TDLNMse allow for information sharing at the same exposure levels across time, so that piecewise constant steps staggered across adjacent times yield near-smooth estimates. The shrinkage priors reduce the variance of estimates, reducing RMSE in areas of no effect and decreasing the rate of false positives. In the presence of a linear trend, DLNM models are overly flexible. While a penalized spline DLNM can revert to an approximately linear model, TDLNM requires a large number of splits in the exposure-concentration dimension to accomplish the same result. As seen in simulation scenario B, TDLNMse attenuated the linear trend in areas with few exposure observations. The simulations indicate that TDLNM and TDLNMse have high precision in identifying critical windows.
We applied TDLNM and TDLNMse to a Colorado birth cohort. We found a nonlinear effect of PM$_{2.5}$ on BWGAZ. Specifically, we found that below median levels of PM$_{2.5}$ throughout pregnancy were associated with higher BWGAZ. We found no change in BWGAZ due to above median PM$_{2.5}$ exposure.
\section*{Supplementary Materials}
The reader is referred to online Supplementary Materials for technical appendices and additional results concerning simulation and data analysis. The {\tt R} package {\tt dlmtree} used in the simulation and data analysis can be found at \url{https://github.com/danielmork/dlmtree}.
\section*{Acknowledgement}
This work was supported by National Institutes of Health grant ES028811.
These data were supplied by the Center for Health and Environmental Data Vital Statistics Program of the Colorado Department of Public Health and Environment, which specifically disclaims responsibility for any analyses, interpretations, or conclusions it has not provided.
This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder and Colorado State University. The RMACC Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.
\section{Introduction}
\setcounter{equation}{0}
\label{sec:intro}
The theory of polymer adsorption \cite{DeBell,Rensburg2000} has a long history \cite{Rubin,Silberberg1962}.
For linear polymers a variety of models have been considered including random walks
\cite{Hammersley1982,Rubin,Binder2012}, directed and partially directed walks
\cite{Forgacs,Rensburg2003,Whittington1998} and self-avoiding walks
\cite{Batchelor1995,Beaton2012,Guim1989,HTW1982,Hegger1994,Rensburg1998,Rensburg2004}. In this paper we shall
be concerned with the self-avoiding walk model for which there are a few rigorous results
\cite{HTW1982,Rensburg1998} as well as extensive numerical investigations (see for
instance \cite{Beaton2012,Guim1989,Hegger1994,Rensburg2004}).
The invention of micro-manipulation techniques such as atomic force
microscopy (AFM) \cite{Zhang2003} which allow adsorbed polymer molecules to be pulled off a surface
\cite{Haupt1999} has led to the development of theories of adsorbed polymers subject
to a force
\cite{Krawczyk2005,Orlandini1999,Owczarek2010,Skvortsov2009,Binder2012}.
Much of this work has focussed on random,
directed and partially directed walk models but there has been some numerical work on the
self-avoiding walk model \cite{Iliev2013,Krawczyk2005,Mishra2005} and a recent rigorous treatment
\cite{JvRW2013} which establishes the existence of a phase boundary between an adsorbed
phase and a ballistic phase when the force is applied normal to the surface. In this paper we use exact enumeration and series analysis
techniques to identify this phase boundary for self-avoiding walks on the square
lattice. We also make precise estimates of the critical points for adsorption with no force and
for the transition to the ballistic phase with no surface interaction, and various
relevant critical exponents. For a brief discussion of critical exponents appearing in this problem
see \cite{Batchelor1995} or \cite{Guim1989}.
\section{Definitions and review of rigorous results}
\setcounter{equation}{0}
\label{sec:defns}
Consider the square lattice ${\mathbb Z}^2$ where the vertices have
integer coordinates. We write $(x_i,y_i)$,
$i=0,1,2, \ldots n$ for the coordinates of the $i$-th vertex of an
$n$-step self-avoiding walk on ${\mathbb Z}^2$.
The number of $n$-step self-avoiding walks from the origin is denoted by
$c_n$. It is known that $\lim_{n\to\infty} n^{-1} \log c_n
= \log \mu$ exists \cite{HM54}, where $\mu$ is the
\emph{growth constant} of self-avoiding walks on this lattice.
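For very small $n$ the counts $c_n$ are easily reproduced by direct enumeration; the following brute-force Python sketch (an illustration only, practical up to $n\approx 20$) counts walks by depth-first extension:
\begin{verbatim}
def count_saws(n, pos=(0, 0), visited=frozenset({(0, 0)})):
    # Number of n-step self-avoiding walks on Z^2 starting at pos.
    if n == 0:
        return 1
    x, y = pos
    return sum(count_saws(n - 1, q, visited | {q})
               for q in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
               if q not in visited)

# c_1, ..., c_6 = 4, 12, 36, 100, 284, 780; crude growth-constant
# estimates follow from c_n**(1/n).
\end{verbatim}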
A \emph{positive walk} is a self-avoiding walk on ${\mathbb Z}^2$ that starts
at the origin and is constrained to have $y_i \ge 0$ for all
$0 \le i \le n$. The number of $n$-step positive walks from
the origin is denoted by $c_n^+$. It is known that
$\lim_{n\to\infty} n^{-1} \log c_n^+ = \log \mu$ \cite{Whittington1975}.
Vertices of a positive walk with $y_i=0$ are \emph{visits} to the surface although,
by convention, the vertex at the origin is not
counted as a visit. We say that the walk has \emph{height} $h$ if its last vertex has
$y_n=h$. The number of positive walks of $n$ steps from the
origin, with $v$ visits and height $h$ is denoted by $c_n^+(v,h)$. The corresponding partition
function is
\begin{equation}
C_n(a,y) = \sum_{v,h} c_n^+(v,h) a^v y^h.
\end{equation}
If $\epsilon$ is the energy associated with a visit and $f$ is the force applied
at the last vertex, normal to the surface,
\begin{equation}
a= \exp[-\epsilon /k_B T] \quad \mbox{and} \quad y=\exp[f/k_B T]
\label{eqn:physvar}
\end{equation}
where $k_B$ is Boltzmann's constant and $T$ is the absolute temperature. If no force is
applied $y=1$ and the appropriate partition function is $C_n(a,1)$ while if there
is no interaction with the surface $a=1$ and the appropriate partition function
is $C_n(1,y)$.
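For small $n$ the partition function can be checked by the same brute-force enumeration (a hedged sketch; the counts to $n=59$ used below of course require the transfer matrix method of Section~\ref{sec:enum}):
\begin{verbatim}
def partition(n, a, y):
    # C_n(a, y): sum over n-step positive walks of a^(visits) y^(height),
    # where visits counts vertices with ordinate 0 other than the origin.
    def rec(pos, visited, steps, visits):
        if steps == n:
            return (a ** visits) * (y ** pos[1])
        x0, y0 = pos
        total = 0.0
        for q in ((x0+1, y0), (x0-1, y0), (x0, y0+1), (x0, y0-1)):
            if q[1] >= 0 and q not in visited:
                total += rec(q, visited | {q}, steps + 1,
                             visits + (q[1] == 0))
        return total
    return rec((0, 0), {(0, 0)}, 0, 0)

# partition(2, 1, 1) = 7: the seven positive 2-step walks.
\end{verbatim}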
It is known \cite{HTW1982} that the limit
\begin{equation}
\lim_{n\to \infty} n^{-1} \log C_n(a,1) \equiv \kappa (a)
\label{eqn:kappa}
\end{equation}
exists and that $\kappa (a) $ is a convex function of $\log a$. There exists a value
of $a = a_c^o > 1$ such that $\kappa (a) = \log \mu $ for $a \le a_c^o$ and $\kappa(a)$
is strictly monotone increasing for $a > a_c^o$.
Therefore the free energy $\kappa (a)$ is non-analytic
at $a=a_c^o$ \cite{HTW1982} and this corresponds to the adsorption transition in the absence
of a force. For $a > a_c^o$, in the adsorbed phase,
\begin{equation}
\lim_{n\to\infty} \frac{\langle v \rangle}{n} > 0
\end{equation}
while, for $a<a_c^o$, $\langle v \rangle = o(n)$. Here $\langle \cdots \rangle$ denotes expectation.
Similarly it is known \cite{Rensburg2009} that the limit
\begin{equation}
\lim_{n\to \infty} n^{-1} \log C_n(1,y) \equiv \lambda (y)
\label{eqn:lambda}
\end{equation}
exists and $\lambda (y) $ is a convex function of $\log y$. There is a critical point
$y_c^o \ge 1$ such that $\lambda (y) = \log \mu$ for $y \le y_c^o$ and $\lambda (y) $ is
strictly monotone increasing for $y > y_c^o$ \cite{Rensburg2009}. The critical point corresponds
to a transition from a free phase where $\langle h \rangle = o(n)$ to a ballistic phase where
\begin{equation}
\lim_{n\to\infty} \frac{ \langle h \rangle }{n} > 0.
\end{equation}
There are good reasons to believe \cite{IoffeVelenik,Rensburg2009} that $y_c^o=1$.
For the full two variable model it has recently been shown \cite{JvRW2013} that the limiting free energy
\begin{equation}
\psi (a,y) = \lim_{n\to\infty} n^{-1} \log C_n(a,y)
\end{equation}
exists. $\psi(a,y)$ is a convex function of $\log a$ and $\log y$ (\emph{i.e.} convex as a surface) and
\begin{equation}
\psi(a,y) = \max[\kappa (a), \lambda (y)].
\label{eqn:psi}
\end{equation}
This implies that there is a \emph{free phase } when $a < a_c^o$ and $y< y_c^o$
where $\langle v \rangle = o(n)$ and $\langle h \rangle = o(n)$ and a strictly monotone curve $y=y_c(a)$ through the point $(a_c^o,y_c^o)$ separating two phases:
\begin{enumerate}
\item
an \emph{adsorbed phase} when $a > a_c^o$ and $y < y_c(a)$, and
\item
a \emph{ballistic phase} when $y >\max[y_c^o, y_c(a)]$.
\end{enumerate}
Moreover, for the square lattice, $y_c(a)$ is asymptotic to $y=a$ as $a \to \infty$.
\section{Exact enumerations \label{sec:enum}}
The algorithm we use to enumerate self-avoiding walks (SAW) on the square lattice builds on the
pioneering work of Enting \cite{Enting80} who enumerated square lattice
self-avoiding polygons (SAP) using the finite lattice method. More specifically
our algorithm is based in large part on the one devised by Conway, Enting and
Guttmann \cite{Conway93a} for the enumeration of SAWs. Many details of our
algorithm can be found in \cite{Jensen04}.
All of the above transfer matrix (TM) algorithms are based on keeping track of the way
partially constructed SAW are connected to the left of a cut-line bisecting
the given finite lattice (rectangles in the case of the square lattice).
Recently Clisby and Jensen \cite{Clisby12} devised
a new and more efficient implementation of the transfer-matrix
algorithm for self-avoiding polygons. In that implementation we took a new approach and instead
kept track of how a partially constructed SAP must connect up to the right of the
cut-line. Jensen extended this approach to the enumeration of SAW \cite{Jensen13}.
Here we briefly describe how this algorithm can be amended to enumerate SAW
configurations for the problem we study in this paper.
The first terms in the series for the SAW generating
function can be calculated using transfer matrix techniques to count
the number of walks in rectangles $W$ unit cells wide and $L$ cells long.
Any walk spanning such a rectangle
has a length of at least $W+L$ steps. By adding the contributions
from all rectangles of width $W \leq W_{\rm max}$ (where the choice of
$W_{\rm max}$ depends on available computational resources) and length
$W \leq L \leq 2W_{\rm max}-W+1$ the number of walks per vertex of an
infinite lattice is obtained correctly up to length $N=2W_{\rm max}+1$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{sawex.eps}
\end{center}
\caption{\label{fig:sawex}
An example of a self-avoiding walk on a $10\times 8$ rectangle. The walk is tethered to the surface,
has the end-point at $h=5$ and four vertices (other than the start-point) in the surface.}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.9]{sawcut.eps}
\end{center}
\caption{\label{fig:sawcut}
Examples of cut-lines through the SAW of Fig.~\ref{fig:sawex} such that
the signature of the yet to be completed section to the right of the cut-line
(black lines) contains, respectively, two, one and no free edges.}
\end{figure}
The basic idea of the algorithm can best be illustrated by
considering the specific example of a SAW given in figure~\ref{fig:sawex}.
Clearly any SAW is topologically equivalent to a line and therefore
has exactly two end-points. If we cut the SAW by a vertical line as shown
in figure~\ref{fig:sawcut} (the dashed line) we see that the SAW is broken into
several pieces to the left and right of the cut-line. On {\em either} side of the cut-line
we have a set of arcs connecting two edges on the cut-line and at most
two line pieces connected to the end-points of the SAW.
As we move the cut-line from left to right we prescribe what must happen
in the future, that is how edges are to be connected to the right of the cut-line so as to
form a valid SAW. Each end of an arc is assigned one of two labels
depending on whether it is the lower or upper end of an arc. Any configuration along the
cut-line can thus be represented by a set of edge states
$\{\sigma_i\}$, where
\begin{equation}\label{eq:states}
\sigma_i = \left\{ \begin{array}{rl}
0 &\;\;\; \mbox{empty edge}, \\
1 &\;\;\; \mbox{lower edge}, \\
2 &\;\;\; \mbox{upper edge}, \\
3 &\;\;\; \mbox{free edge}. \\
\end{array} \right.
\end{equation}
\noindent
Reading from bottom to top, the signatures $S$ along the
cut-lines of the SAW in figure~\ref{fig:sawcut} are, respectively, $S=\{030010230\}$,
$S=\{300000000\}$, and $S=\{102001002\}$.
Since crossings are not permitted this encoding uniquely describes
how the occupied edges are connected.
The most efficient implementation of the TM algorithm generally involves moving
the cut-line in such a way as to build up the lattice vertex by vertex.
The sum over all contributing graphs is calculated as the cut-line is moved through the lattice.
For each configuration of occupied or empty edges along the intersection we maintain a
generating function $G_S$ for partial walks with signature $S$. In exact enumeration studies
such as this $G_S$ is a truncated polynomial $G_S(x,a)$ where $x$ is conjugate to the
number of steps and $a$ to the number of visited vertices in the surface.
In a TM update each source signature $S$ (before the boundary is moved) gives rise
to a few new target signatures $S'$ (after the move of the boundary line)
as $k=0, 1$ or 2 new edges are inserted with $m=0$ or 1 surface visits
leading to the update $G_{S'}(x,a)=G_{S'}(x,a)+x^ka^mG_S(x,a)$. Once a signature $S$
has been processed it can be discarded.
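In code, the update rule amounts to shifting coefficients of truncated polynomials; a schematic Python fragment, with a hypothetical \texttt{moves} function supplying the allowed transitions $S\to S'$, is:
\begin{verbatim}
from collections import defaultdict

N_MAX = 59   # truncation order of the generating functions

def tm_step(G, moves):
    # G maps a signature S to a dict {(steps, visits): coefficient},
    # i.e. a truncated polynomial G_S(x, a).  moves(S) yields triples
    # (S_prime, k, m): target signature, new edges, new surface visits.
    G_new = defaultdict(lambda: defaultdict(int))
    for S, poly in G.items():
        for S_prime, k, m in moves(S):
            for (steps, visits), c in poly.items():
                if steps + k <= N_MAX:
                    G_new[S_prime][(steps + k, visits + m)] += c
    return G_new   # source signatures are discarded after processing
\end{verbatim}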
Some minor changes to the basic algorithm described in
\cite{Jensen13} are required in order to enumerate the SAW configurations for the problem
we study in this paper. Since we are moving the cut-line so as to add one vertex at a time
we have complete control over the placement of the end-points of the SAW.
In particular, grafting the SAW to the surface can be achieved by forcing the SAW to have
a free end (the start-point) on the top of the rectangle. In enumerations of unrestricted SAW one
can use symmetry to restrict the TM calculations to rectangles with $W\leq N/2+1$ and $L\geq W$ by
counting contributions for rectangles with $L>W$ twice. The grafting of the start-point to the wall
breaks the symmetry and we have to consider all rectangles with $W\leq N+1$. The number of
signatures one must consider grows exponentially with $W$. Hence we must minimize the length of
the cut-line to obtain an optimal algorithm. To achieve this the TM calculation on the set of rectangles is broken
into two sub-sets with $L\geq W$ and $L<W$, respectively. The calculations for the
sub-set with $L\geq W$ is done as outlined above. In the calculations for the
sub-set with $L<W$ the boundary line is chosen to be horizontal (rather than vertical) so
it cuts across at most $L+1$ edges. Alternatively, one may view the calculation for the second sub-set
as a TM algorithm for SAW with start-point on the left-most border of the rectangle.
To keep track of the height $h$ of the end-point we simply specify that it must be placed in a
row (or column) $h$ lattice-units from the surface and we then repeat the calculation for all the possible values
of $h$.
We calculated the number of SAW up to length $n=59$. The calculation was
performed in parallel using up to 16 processors, a maximum of some 40GB of memory
and a total of just under 6000 CPU hours (see \cite{Jensen04}
for details of the parallel algorithm).
\section{Results}
\label{sec:results}
In this Section we describe the results from series analysis, chiefly using
differential approximants \cite{GJ09}. We first discuss the $y$-dependence of the free energy
$\lambda (y)$ when there is no surface interaction, then the $a$-dependence of the
free energy $\kappa (a)$ when there is no applied force and finally the two variable
free energy $\psi (a,y)$ when there is both a surface interaction and a force.
\subsection{No surface interaction. $a=1.$}\label{sec:lambda}
\begin{figure}
\centering
\includegraphics[scale =0.5] {lambda-logy.eps}
\caption{The $y$-dependence of the free energy $\lambda (y)$. The straight lines
indicate the exact lower and upper bounds,
where for $\log y < 0$ we know that $\lambda (y) = \log \mu$ while for $ \log y > 0$ we know that
$ \max[\log \mu, \log y] \le \lambda (y) \le \log \mu + \log y$.}
\label{fig:lambda}
\end{figure}
If we write
\begin{equation}
H(x,y) = \sum_n C_n(1,y) x^n=\sum_n e^{\lambda(y) n + o(n)} x^n
\end{equation}
then $H(x,y)$ will be singular at $x=x_c(y) = \exp[-\lambda (y)]$
and, close to this singularity, $H(x,y)$ is expected to behave as
\begin{equation}
H(x,y) \sim \frac{A}{[x_c(y)-x]^{\gamma (y)}}
\end{equation}
where $\gamma(y)$ is a critical exponent whose value depends on the value of $y$.
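Although our analysis uses differential approximants, the idea can be illustrated by the cruder ratio method: the singular behaviour above implies that the coefficients grow roughly as $C_n(1,y)\sim A'\,x_c^{-n}\,n^{\gamma-1}$, so the ratios $r_n=C_n/C_{n-1}$ approach $1/x_c$ linearly in $1/n$. A hedged Python sketch:
\begin{verbatim}
import numpy as np

def ratio_method(coeffs):
    # coeffs = [C_0, C_1, ..., C_N]; fit r_n = C_n/C_{n-1} against 1/n.
    c = np.asarray(coeffs, dtype=float)
    n = np.arange(1, len(c))
    r = c[1:] / c[:-1]
    slope, intercept = np.polyfit(1.0 / n, r, 1)
    mu = intercept                 # estimate of 1/x_c
    gamma = 1.0 + slope / mu       # estimate of the exponent
    return 1.0 / mu, gamma
\end{verbatim}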
In table~\ref{tab:a1} below we give the results of an analysis of the series $H(x,y)$ for various values of $y$.
The resulting estimates of the free energy $\lambda (y) = -\log x_c$ are plotted in figure~\ref{fig:lambda}.
The series were analysed using second and third order differential approximants \cite{GJ09}. At $y=1$ the series is well behaved and has critical point $1/\mu$ with exponent $\gamma_1=61/64,$ the exponent for terminally-attached self-avoiding walks (TASAW), as one expects \cite{Ca84}.
For $y$ just below 1 the series are quite difficult to analyse. Estimates of $x_c$ are close to the known value $1/\mu.$ For $y \le 0.7$ this is clearly evident from the analysis. As $y$ gets smaller still, so that walks ending near the surface are favoured, it is clear that the exponent approaches $\gamma_{1,1}=-3/16=-0.1875$, as expected from the scaling law \cite{BGMTW78} $2\gamma_1-\gamma_{11}=\gamma+\nu.$ This is the exponent appropriate to arches, often called {\em loops} in the literature; these are walks in the half-plane whose origin and end-point both lie in the surface. For $y=0.8$ the series suggests that $x_c = 0.37918 \pm 0.00003$ with an exponent that looks very close to zero. For $y=0.9$ the approximants suggest that $x_c = 0.3792 \pm 0.0002$ (so it could be $1/\mu$), but there is another singularity very close by (at around $0.389$), and it is known that in such situations the estimates of the location and exponent are less trustworthy.
This sort of behaviour is typical in series analysis when one is in the vicinity of a discontinuous change in the critical exponent. The only way a finite series -- that is to say, a polynomial approximation to an infinite series -- can mimic this discontinuity is by shifting the critical point slightly. So our conclusion is that the observed behaviour is consistent with the known result that $x_c = 1/\mu$ for $y \le 1,$ and that the exponent changes discontinuously from $-\gamma_1 = -0.953125$ to $-\gamma_{1,1} = 0.1875$ as $y$ decreases below 1.
For $y >1.04$ the series are beautifully behaved, the singularity is clearly seen to be a simple pole, and we can provide 10 digit (or more) accuracy in estimates of the critical point.
For $1 < y < 1.04$ we get the sort of behaviour we expect with a discontinuous change in exponent as we transition from an exponent $\gamma_1 \ne 1$ to a simple pole.
So, in summary, it appears that for $y < 1$ we have $x_c = 1/\mu$ and exponent $\gamma_{1,1} = -3/16;$ for $y=1$ we have $x_c=1/\mu$ and exponent $\gamma_1 = 61/64$ and for $y > 1$ we have $x_c$ monotonically decreasing as $y$ increases, and with a simple pole singularity.
An interesting and unexpected feature is that for all values of $y$, the location of the {\em antiferromagnetic} singularity -- that is to say, the singularity on the negative real axis, which for unconstrained SAWs is at $x=-1/\mu$ -- is unchanged at $-1/\mu$ with exponent $3/2$\footnote{To be precise, this singularity is less obvious for $y \ge 4.$ This can be understood from the fact that the radius of convergence of the series decreases as $y$ increases. The anti-ferromagnetic singularity lies at a distance more than twice the radius of convergence when $y \ge 4$, so it is increasingly difficult to detect. One would expect that it is there nonetheless.}.
A further bonus is that the series analysis is exquisitely sensitive to the value of $y$ near $y=1.$ This gives us a method for confirming that $y_c=1$ \cite{IoffeVelenik, Rensburg2009}. From the analysis of the series $H(x,1)$ (the $y=1$ row of table~\ref{tab:a1}) we find a value of the critical point very close to $1/\mu.$ The value of $1/\mu$ is 0.379052277751, with uncertainty in the last digit only \cite{Clisby12}. We can vary the value of $y$ until the estimated critical point agrees with $1/\mu,$ and this occurs at $y_c=0.9999995 \pm 0.0000005.$ We know that
$ y_c \ge 1$, so combining our numerical results with this rigorous result, we conclude that $y_c=1$. So it seems that for $y=y_c$ the exponent is given by $\gamma_1,$ and that this changes discontinuously to a simple pole for $y > y_c.$ For $y < y_c$ the evidence strongly suggests that the exponent is given by $\gamma_{1,1}.$
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
$y$ & $x_c$ & Exponent\\
\hline
0.4 & 0.379053 & 0.186 \\
0.5 & 0.379052 & 0.186 \\
0.6 & 0.379052 & 0.187 \\
0.7 & 0.37905 & 0.195 \\
0.8 & 0.37918 & 0.00 \\
0.9 & 0.3792 & -0.3 \\
0.99 & 0.37925 & -0.63 \\
0.999 & 0.3790837 & -0.9328 \\
0.9999 & 0.379055 & -0.950 \\
0.99999 & 0.379052628 & -0.95296\\
0.999999 & 0.37905229 & -0.95307 \\
1.0 & 0.37905225 & -0.95308 \\
1.000001 & 0.37905221 & -0.95309 \\
1.00001 & 0.37905188 & -0.95321 \\
1.0001 & 0.3790488 & -0.9547 \\
1.001 & 0.379019 & -0.970 \\
1.01 & 0.37862 & -1.11 \\
1.02 & 0.37804 & -1.137 \\
1.04 &0.37649 & -1.0 \\
1.06 &0.37463 & -0.99 \\
1.08 &0.37265 & -0.99\\
1.1 &0.370564 & -1 \\
1.2 &0.3592886 & -1 \\
1.3 &0.3475682 & -1 \\
1.5 &0.3249328 & -1 \\
1.75 &0.2995547603 & -1 \\
2.0 &0.2775487 & -1 \\
2.5 & 0.2418862105 & -1 \\
3& 0.214449855 & -1 \\
4 & 0.1751070033 & -1 \\
5 & 0.14820871438 & -1 \\
7 & 0.1136573165016 & -1 \\
10 & 0.084421281924 & -1 \\
20 & 0.045635244067 & -1 \\
40 & 0.02383593409377 & -1 \\
60 & 0.01613729712 & -1 \\
90 & 0.01087210691 & -1 \\
150 & 0.00657951322 & -1 \\
250 & 0.003968378456 & -1 \\
\hline
\end{tabular}
\caption{SAWs at a surface. Estimates of $x_c$ for $a=1$ and various $y$ values.
For $y > 1$ the singularity is a simple pole.
For all $y$ values, there is also an anti-ferromagnetic singularity at $-1/\mu$ with exponent $1.5$.}
\label{tab:a1}
\end{table}
\subsection{No applied force. $y=1.$}\label{sec:kappa}
Define the generating function
\begin{equation}
K(x,a) = \sum_n C_n(a,1) x^n=\sum_n e^{\kappa(a) n + o(n)} x^n.
\end{equation}
$K(x,a)$ will be singular at $x=x_c(a) = \exp[-\kappa (a)]$
and, close to this singularity, $K(x,a)$ should behave as
\begin{equation}
K(x,a) \sim \frac{B}{[x_c(a)-x]^{\gamma (a)}}
\end{equation}
where $\gamma (a)$ is a critical exponent whose value depends on the value of $a$.
\begin{figure}
\centering
\includegraphics[scale =0.5] {kappa-loga.eps}
\caption{The $a$-dependence of the free energy $\kappa (a)$. The straight lines
indicate the exact lower and upper bounds,
where for $\log a < 0$ we know that $\kappa (a) = \log \mu$ while for $ \log a > 0$ we know that
$ \max[\log \mu, \log a] \le \kappa (a) \le \log \mu + \log a$.}
\label{fig:kappa}
\end{figure}
We have analysed the series $K(x,a),$ corresponding to the ``no force'' situation. As in the previous ``no interactions'' case, we find that the series is exquisitely sensitive to the value of $a_c.$ The best current estimate \cite{Beaton2012} is $a_c = 1.77564,$ with errors expected to be confined to the last quoted digit. That estimate is a comparatively recent result which improved dramatically on pre-existing estimates, so improving on it, as we have done, is quite surprising. From the second column of table \ref{tab:y1} below, we find a value of the critical point very close to $1/\mu$. We can vary our estimate of $a_c$ until we get agreement with $1/\mu,$ and this turns out to be at $a_c=1.775615 \pm 0.000005$ with exponent $1.45395,$ which is satisfyingly close to the conjectured exact value \cite{Guim1989, Batchelor1995} $\gamma_1^{sp}=93/64=1.453125,$ where the superscript refers to the ``special'' transition that takes place right at the adsorption temperature \cite{Guim1989}. As $a$ increases, we quickly see a simple pole emerging. So it seems that for $a=a_c$ the singularity is characterised by a (diverging) exponent $93/64,$ and that this changes discontinuously to a simple pole for $a > a_c.$ For $a < a_c$ the exponent is, as we would expect, given by $\gamma_1.$
If we analyse the series with $y=0$ and $a=a_c,$ we can estimate the exponent $\gamma_{11}^{sp}.$ We did this and found $\gamma_{11}^{sp} = 0.816 \pm 0.006,$ in agreement with the expected value $13/16 = 0.8125$ \cite{DS86}.
In figure~\ref{fig:kappa} we give our estimates of the free energy $\kappa (a) = -\log x_c$ as a function of $\log a$.
\begin{table}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
$a$ & $x_c$ & Exponent& $x^*$ & Exponent\\
\hline
1.3 & 0.379052 & -0.952 & -0.37905 & 1.5 \\
1.6 & 0.379058 & -0.96 & -0.3793 & 1 \\
1.75 & 0.37910 &-1.31 & -0.37918 & 0.333 \\
1.77559 & 0.37905237 & -1.4538 & -0.379050 & 0.256\\
1.775615 & 0.37905227 & -1.4539 & -0.379050 & 0.256 \\
1.77564 & 0.37905217 & -1.4541 & -0.37905 & 0.256\\
1.775665 & 0.37905207 & -1.4542 & -0.37905 & 0.256 \\
1.77569 & 0.37905197 & -1.4538 & -0.37905 & 0.256\\
1.8 & 0.37893 & -1.58 & -0.378885 & 0.189 \\
2.0 & 0.37112 & -1.06 & -0.377 & -0.71\\
2.5 & 0.332682 & -1.0008 & -0.365065 & -0.517\\
2.75 & 0.3125387 & -0.999995 & -0.35806 & -0.5002 \\
3.0 & 0.293630848 & -1 & -0.35106 & -0.5005 \\
4.0 & 0.2329152160359 & -1 & -0.325298 & -0.499 \\
5.0 & 0.191211527263626 & -1 & -0.30403 & -0.5005 \\
6.0 & 0.16158981267578 & -1 & -0.28652 & -0.4995 \\
7.5 & 0.130751327296498 & -1 & -0.26538 & -0.4994 \\
10.0 & 0.0989240104593583 & -1 & -0.23912 & -0.4996 \\
15.0 & 0.0663536371608435 & -1 & -0.20474 & -0.496 \\
20.0 & 0.049869446447162 & -1 & -0.18249 & -0.503 \\
40 & 0.0249840050794006 & -1 & -0.1367 & -0.6 \\
60 & 0.01666196255 & -1 & & \\
90 & 0.01110972448 & -1 & & \\
150 & 0.0066663684218 & -1 & & \\
250 & 0.00399993575 & -1 & & \\
\hline
\end{tabular}
\caption{SAWs at a surface. Estimates of $x_c$ for $y=1$ and various $a$ values. For
$a > a_c,$ the singularity is a simple pole.
There is also a second, antiferromagnetic singularity at $x=x^*$ with exponent $-1/2.$}
\label{tab:y1}
\end{table}
The behaviour of the antiferromagnetic singularity is different from that observed in the previous subsection. For $a < a_c$ it seems stable at $-1/\mu,$ with an exponent that is likely to be exactly $1.5,$ as in the case above. For $a > a_c$, however, the anti-ferromagnetic critical point monotonically decreases as $a$ increases, and (conjecturally) has a square root singularity. At $a=a_c$ it looks more like a fourth root branch point, but a zero, not a divergence.\footnote{The estimate of the singularity location is not very precise, so it is entirely possible that the exponent is not exactly $1/4,$ but some fraction of approximately similar value.}
\subsection{Phase diagram calculation}
\begin{figure}
\centering
\includegraphics[scale =0.7] {loga-logy.eps}
\caption{The phase boundary between the adsorbed and ballistic phases in the
$(\log{a},\log{y})$-plane. The blue circles correspond to the data from table~\ref{tab:ay1} while
the red diamonds correspond to the data from table~\ref{tab:ay2}.
The inset is a blow-up of the region near the origin.}
\label{fig:phaseboundary}
\end{figure}
In order to locate the phase boundary between the adsorbed and ballistic
phases, in the $(a,y)$-plane, we make use of (\ref{eqn:psi}). $\psi(a,y)$ is equal to
$\kappa (a)$ throughout the adsorbed phase and to $\lambda (y)$ throughout the
ballistic phase. The phase boundary between the adsorbed and ballistic phases is the
locus of points where $\kappa (a) = \lambda (y)$. For a given value of $a$ we calculated
$\kappa (a)$ as in Section \ref{sec:kappa} and then found the value of $y$ such that
$\lambda (y) = \kappa (a)$ by interpolating the results for $\lambda (y)$ found in
Section \ref{sec:lambda}. More precisely, from table~\ref{tab:a1} we calculated $y=f_1(x_c)$ by using the program Eureqa \cite{Eureqa} on columns 1 and 2 of table~\ref{tab:a1}.
A relevant technical detail is that the fit is better behaved if the variation of the fitted quantity is kept as small as possible.
Accordingly, we sought a fit to the functional form $1/y={\tilde f}(x_c(1)-x)$ where $x_c(1)=1/\mu=0.3790522777\ldots.$
In this way we found an interpolation formula, which we used by
inserting the $x_c$ values from table~\ref{tab:y1}, so as to obtain the $y$ values corresponding to the $a$
values in table~\ref{tab:y1}. In this way we obtained the results shown in table~\ref{tab:ay1}.
As a check, we also calculated points on the curve starting with a
given value of $y$ and reversing the procedure. More precisely, from table~\ref{tab:y1} we calculated $1/a=f_2(x_c)$ also by using Eureqa on columns 1 and 2 of table~\ref{tab:y1}, and found an interpolation formula. We obtained a further set of $(a,y)$ values by substituting the $x_c$ values from table~\ref{tab:a1}, so as to obtain the $a$ values corresponding to the $y$ values in table~\ref{tab:a1}. In this way we obtained the results shown in table~\ref{tab:ay2}.
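For readers wishing to reproduce this construction without access to Eureqa, the essential step is simply a smooth fit of one table column against another, followed by evaluation of the fit at the $x_c$ values of the other table. A minimal sketch, with an ordinary polynomial fit in the variable $x_c(1)-x_c$ standing in for Eureqa's symbolic regression (the degree is an arbitrary choice of ours, and we use the $(y,x_c)$ pairs of table~\ref{tab:ay2} as a proxy for the columns of table~\ref{tab:a1}):
\begin{verbatim}
import numpy as np

XC1 = 0.3790522777     # x_c(1) = 1/mu

y  = np.array([1.0, 1.1, 1.5, 2.0, 3.0, 5.0, 10.0, 20.0])
xc = np.array([0.37905225, 0.370564, 0.3249328, 0.277548710,
               0.214449855, 0.14820871438, 0.08442128192,
               0.04563524407])

# fit 1/y = f(x_c(1) - x_c), then invert the map
f = np.poly1d(np.polyfit(XC1 - xc, 1.0 / y, 5))

# evaluate at the x_c(a) values of table tab:y1 (a = 2 and 3)
xc_a = np.array([0.37112, 0.293630848])
print(1.0 / f(XC1 - xc_a))
\end{verbatim}
The printed values should reproduce the corresponding entries of table~\ref{tab:ay1} ($y\approx 1.095$ and $y\approx 1.814$) to within about a percent.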
Combining the data in these two tables results in the phase boundary shown in
figure~\ref{fig:phaseboundary}, where we have plotted the data from table~\ref{tab:ay1} as blue circles and the data
from table~\ref{tab:ay2} as red diamonds. The close agreement between the two independent analyses
implies that the results are accurate to at least graphical accuracy. The curve passes through the point $(a_c^o,y_c^o)$, is
strictly monotone increasing and asymptotic to $y=a$, as shown in \cite{JvRW2013}. It is
interesting to note that the curve is not concave.
We can switch to physical variables (force and temperature)
using (\ref{eqn:physvar}). Without much loss of generality
we can set $\epsilon = -1$ and work in units where $k_B=1$. The corresponding phase
boundary in the force-temperature plane is given in figure~\ref{fig:ft1}. Notice that
the force at zero $T$ is 1 and the limiting slope at $T=0$ is zero, as predicted in
\cite{JvRW2013}. The curve is monotone decreasing as $T$ increases, with no re-entrance.
See for instance \cite{JvRW2013,Mishra2005,Skvortsov2009} for further discussion.
The force-temperature curve is in semi-quantitative agreement with an earlier numerical study by Mishra \emph{et al}
\cite{Mishra2005}, but is substantially more precise.
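Translating table~\ref{tab:ay1} into the force-temperature plane of figure~\ref{fig:ft1} is then a one-line change of variables; a minimal sketch, using a few representative boundary points:
\begin{verbatim}
import numpy as np

# (a, y) points on the phase boundary, from table tab:ay1
a = np.array([2.0, 2.5, 3.0, 5.0, 10.0, 40.0, 250.0])
y = np.array([1.0953, 1.4293, 1.8137, 3.5399, 8.2816,
              38.118, 247.18])

T = 1.0 / np.log(a)          # with epsilon = -1 and k_B = 1
f = np.log(y) / np.log(a)    # critical force
print(np.column_stack((T, f)))
# f -> 1 as T -> 0, consistent with the zero-temperature force
\end{verbatim}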
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{T-F.eps}
\caption{The phase boundary given as a force-temperature diagram. The horizontal axis is $T=\frac{1}{\log(a)}$, the vertical axis is the force, $f=\frac{\log(y)}{\log(a)}.$}
\label{fig:ft1}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
$a$ & $x_c$ & $y(x_c)$\\
\hline
1.775615 & 0.37905227 & 1 \\
1.77569 & 0.37905197 &1.0001\\
1.8 & 0.37893 & 1.0030 \\
2.0 & 0.37112 & 1.0953\\
2.5 & 0.332682 & 1.4293\\
2.75 & 0.3125387 & 1.6174 \\
3.0 & 0.293630848 &1.8137\\
4.0 & 0.23291521603 &2.6510 \\
5.0 & 0.19121152726 &3.5399 \\
6.0 & 0.16158981268 & 4.4592 \\
7.5 & 0.13075132730 & 5.8729 \\
10.0 & 0.09892401046 & 8.2816 \\
15.0 & 0.06635363716 & 13.189 \\
20.0 & 0.04986944645 & 18.146 \\
40 & 0.02498400508 & 38.118 \\
60 & 0.01666196255 & 58.136 \\
90 & 0.01110972448 & 88.145\\
150 & 0.00666636842 & 148.00 \\
250 & 0.00399993575 & 247.18 \\
\hline
\end{tabular}
\caption{ Phase diagram estimated by calculating $y(x_c)$ from the interpolation formula found by Eureqa.}
\label{tab:ay1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
$y$ & $x_c$ & $a(x_c)$\\
\hline
1.0 & 0.37905225 & 1.775615 \\
1.001 & 0.379019 &1.7854 \\
1.01 & 0.37862 & 1.8220 \\
1.02 & 0.37804 & 1.8456 \\
1.04 &0.37649 & 1.8915 \\
1.06 &0.37463 & 1.9347 \\
1.08 &0.37265 & 1.9733\\
1.1 &0.370564 & 2.0092\\
1.2 &0.3592886 & 2.1693 \\
1.3 &0.3475682 & 2.3161 \\
1.5 &0.3249328 & 2.5934 \\
1.75 &0.2995547603 & 2.9206 \\
2.0 &0.277548710 & 3.2329 \\
2.5 & 0.2418862105 &3.8287 \\
3& 0.214449855 & 4.4003 \\
4 & 0.1751070033 & 5.5029 \\
5 & 0.14820871438 & 6.5747 \\
7 & 0.11365731650 & 8.6708 \\
10 & 0.08442128192 & 11.757 \\
20 & 0.04563524407 & 21.876 \\
40 & 0.02383593409 & 41.923 \\
60 & 0.01666196255 & 61.912\\
90 & 0.01110972448 & 91.864 \\
150 & 0.00666636842 &151.74 \\
250 & 0.00399993575 & 251.50 \\
\hline
\end{tabular}
\caption{Phase diagram estimated by calculating $a(x_c)$ from the interpolation formula found by Eureqa.}
\label{tab:ay2}
\end{table}
\section{The behaviour and nature of the phase transition on the phase boundary}
It is possible to use the results of \cite{JvRW2013} to prove that the phase transition
from the ballistic to the adsorbed phase is first order. We state this as a theorem.
\begin{theo}
The free energy $\psi(a,y)$ is not differentiable at the phase boundary between
the ballistic and adsorbed phases, except perhaps at the point $(a_c^o,y_c^o)$.
\end{theo}
{\it Proof: }
There is a
monotone strictly increasing curve
$y=y_c(a)$ in the $(a,y)$-plane, through the point
$(a_c^o,y_c^o)$, corresponding to the phase boundary between the
ballistic and adsorbed phases. In the ballistic phase $\psi(a,y)=\lambda(y)$
and in the adsorbed phase $\psi(a,y)=\kappa(a)$. The free energy $\kappa (a)$
is a monotone increasing function of $a$, convex in $\log a$. It therefore has
left and right derivatives at every value of $a$.
Throughout the adsorbed phase the left and right derivatives of $\kappa (a)$
are positive. Consider a line of fixed $y=y_1 > y_c^o \ge 1$. The free energy $\psi(a,y_1)=\lambda(y_1)$
for $a \le a_c(y_1)$ and $\psi(a,y_1)=\kappa(a)$ for $a \ge a_c(y_1)$. For $a \le a_c(y_1),$
$\partial \psi(a,y_1) /\partial a = 0$. For $a \ge a_c(y_1)$ the right derivative of
$\kappa (a)$ with respect to $a$ is positive. Therefore the left and right derivatives
of $\psi(a,y)$ with respect to $a$ at $(a_c(y_1),y_1)$ are not equal and the free energy is not differentiable.
\qed
\vspace{4mm}
However it is still of interest to determine how the free energy behaves as we approach the phase boundary. We ``know'' the location of the phase boundary from the results reported in the previous section, at least to graphical accuracy, that is to say, with an accuracy of three to four significant digits. At the phase boundary, we also know the value of the radius of convergence, from the data above. So, by way of example, at $a=2$ the phase boundary is at $y=1.095.$ If we analyse the series at this point (recall this just means substituting the required values of $a$ and $y$ into the three-variable generating function we have, which produces a one-variable generating function, where the expansion variable is conjugate to the length of the walk), we find the critical point is at $x_c = 0.371125$ with an exponent of $-1.995.$ This is exactly the same value of $x_c$ found at $a=2, \,\, y=1,$ though at $(2,1)$ the singularity is a simple pole.
As we increase $y,$ we see exactly the same behaviour as observed in section 1 that allowed us to identify $a_c.$ That is to say, there is a variation in the critical point and critical exponent as the approximants struggle to cope with a discontinuous exponent change. So at $(2,1.05)$ the $(x_c,\mbox{exponent})$ pair is estimated to be $(0.37135, -1.36 \pm 0.3).$ This large error in the exponent estimate is a signature that the analysis method is struggling. At $(2,1.09)$ we find $(0.371325, -1.962 \pm 0.016),$ and at $(2,1.095),$ which is our best estimate of the intersection of the line $a=2$ with the phase boundary, we find for the critical point and exponent $(0.371125,-1.998 \pm 0.013).$ Note that this value of the critical point is exactly that found when $a=2$ and $y=1,$ which is well below the phase boundary.
Suggestive as this is, we note that the value of $a$ chosen is rather close to the point $(a_c,y_c)$
(a bicritical point, see \cite{Klushin1997}), where the behaviour is different, and may still have an effect on the convergence rate of the series. So we take another example, when $a=3,$ which is well away from the point $(a_c,y_c)$. At $(3,1)$ the critical point is $x_c=0.2936308\ldots,$ and the exponent is a simple pole. The phase boundary point at $a=3$ is estimated to be at $y=1.8137.$ Analysis of the series at that point gives $(0.293636, -2.0001 \pm 0.0002),$ rather confirming the double pole. The location is slightly different from that observed at $(a=3, y=1),$ but that is likely ascribable to the estimate of $y$ on the phase boundary being slightly in error. Indeed, if we repeat the analysis with $y=1.8139,$ we find $x_c=0.293627\ldots,$ and the exponent estimate is $-2.0000018 \pm 0.0000022,$ which is rather convincing evidence for a double pole!
So we find that at (or very near to and below) the phase boundary, the series are well-converged, give the estimates of the critical point we expect, and display a double pole singularity. Near to the phase boundary, the estimates are very variable, and behave in exactly the same way as did the series analysed in section \ref{sec:results}, with the approximants seemingly struggling to cope with a discontinuous change in the critical exponent.
It is instructive to consider the simpler case of directed positive walks. We discuss two cases:
\begin{enumerate}
\item
Positive walks with step set $(1,1)$ and $(1,-1)$, which we refer to loosely as Dyck
paths, and
\item
Positive walks with step set $(1,1)$, $(1,-1)$ and $(1,0)$, which we refer to loosely as Motzkin paths.
\end{enumerate}
It is straightforward to solve each of these models exactly, though we don't give
the details here. They each show three phases (a free
phase, an adsorbed phase and a ballistic phase). There is a phase boundary between the
adsorbed and ballistic phases and in both cases these phase boundaries are concave in
the $(\log a, \log y)$-plane. In the Motzkin path case the phase boundary is asymptotic to $y=a$ while
in the Dyck path case it is asymptotic to $y = a^{1/2}$, because a maximum of half the vertices
can be in the surface. In each case the singularity in both the adsorbed and ballistic phases is a
simple pole so the generating function has both of these singularities though one is dominant
in the adsorbed phase and the other is dominant in the ballistic phase. On the phase
boundary between the adsorbed and ballistic phases these two singularities are equal resulting in a
double pole, just as observed numerically for the case of SAWs.
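These statements are also easy to explore numerically. The sketch below is our own illustrative code (not used in the analysis above): it builds the weighted counts of positive walks by propagating the height distribution, with a weight $a$ per vertex in the surface (the initial vertex is not weighted, a convention choice) and $y^h$ for a final height $h$. Ratios of successive coefficients converge to $1/x_c,$ and by varying $(a,y)$ one can watch the dominant singularity switch from $a$-dependent to $y$-dependent across the phase boundary.
\begin{verbatim}
import numpy as np

def coeffs(nmax, a, y, steps=(1, -1)):
    # z[h] = weighted count of n-step positive walks ending at
    # height h; use steps=(1, 0, -1) for Motzkin-type walks
    z = np.zeros(nmax + 2); z[0] = 1.0
    out = []
    for n in range(1, nmax + 1):
        new = np.zeros_like(z)
        for h in np.nonzero(z)[0]:
            for s in steps:
                if h + s >= 0:
                    w = a if h + s == 0 else 1.0
                    new[h + s] += w * z[h]
        z = new
        out.append(np.dot(z[:n + 1], y ** np.arange(n + 1.0)))
    return np.array(out)

c = coeffs(300, a=3.0, y=1.0)
print(c[-1] / c[-2])    # ~ 1/x_c, here in the adsorbed phase
\end{verbatim}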
\section{Discussion}
\label{sec:discuss} \setcounter{equation}{0}
We have considered a self-avoiding walk model of polymer adsorption at an
impenetrable surface where
\begin{enumerate}
\item
the walk is terminally attached to the surface,
\item
the walk interacts with the surface with an attractive potential, and
\item
the walk is subject to a force applied normal to the surface at the last vertex
of the walk.
\end{enumerate}
For the square lattice we have used series analysis techniques to investigate
the phases and phase boundaries for the system. There are three phases, a free
phase where the walk is desorbed but not ballistic, an adsorbed phase where
the walk is adsorbed at the surface and a ballistic phase where the walk
is desorbed but ballistic. We have located the phase boundaries and proved that
the phase transition from the adsorbed to the ballistic phase is first order. In addition
we have very precise values for the critical points for adsorption without a force and
for the free to ballistic transition with no surface interaction.
\section*{Acknowledgements}
The authors would like to acknowledge helpful conversations with Enzo Orlandini,
and help from Jason Whyte with the preparation of the data files.
This research was partially supported by NSERC of Canada.
The computations for this work were supported by an award to IJ under the
Merit Allocation Scheme on the NCI National Facility at the
Australian National University.
IJ and AJG were supported under the Australian Research Council's Discovery Projects
funding scheme by the grants DP120101593 and DP120100939 respectively.
We thank Cornell Creative Machines Lab for making available the program Eureqa \cite{Eureqa}.
\section{Introduction}
Numerical simulations of large-scale structure have met with great success. However, these same simulations fail
to account for several of the observed properties of galaxies. On large
scales, $\sim 0.01-100 \rm\, Mpc$, the ansatz of cold, weakly interacting
dark matter has led to realistic maps of the galaxy distribution, under the
assumptions that light traces mass and that the initial conditions are
provided by the observed temperature fluctuations in the cosmic microwave
background. On smaller scales, light no longer traces mass because of the
complexity of galaxy and star formation. Baryon physics must be added to the
simulations in order to produce realistic galaxies. It is here that the
modelling is still inadequate.
In this review, we will begin with the standard phenomenology of galaxy
formation, then discuss methods and present the recent observational and
modeling advances, finishing with a summary of the numerous outstanding issues in galaxy
formation theory.
\section{Phenomenology}
\subsection{The luminosity function}
\begin{figure}[ht]
\centering
\includegraphics[width=6.3cm,bb=10 175 430 700,angle=-90]{JoeLF3.eps}
\caption{Role of feedback in modifying the galaxy luminosity function}
\label{lfun}
\end{figure}
Theory provides the mass function of dark halos. Observation yields the
luminosity function of galaxies, usually fit by a \cite{Schechter76}
function. Comparison of the two is at first sight disconcerting. One can calculate the $M/L$ ratio required for the two functions to overlap at a single point, at a mass $M^\ast$ corresponding to $L_\ast.$
Define $t_{\rm cool}={3/ 2}nkT/ [n^2 \Lambda(T)]$ and
$ t_{\rm dyn}= {3}/( \sqrt {32\pi G \rho})$. For star formation to occur, cooling is essential, and the condition
$t_{\rm cool}<t_{\rm dyn}$ guarantees cooling in an inhomogeneous galactic halo where gas clouds collide at the virial velocity.
One finds that
$$
M_{\rm cool}^\ast={\alpha^{3}\over \alpha_g^2}\,{m_p\over m_e}\,{t_{\rm cool}\over
t_{\rm dyn}}\, T^{1+2\beta} \ ,$$
where $\alpha=e^2/(\hbar c)$ and $\alpha_g=G m_p^2 /e^2$ are the electromagnetic and gravitational fine structure constants.
For a cooling function $\Lambda(T)\propto T^\beta,$ over the relevant temperature range ($10^5-10^7$ K), one can take
$\beta\approx -1/2$ for a low metallicity plasma \citep{GS07}. The result is
that one finds a characteristic galactic halo mass, in terms of fundamental
constants, to be of order $10^{12} \rm M_\odot$ \citep{Silk77}. The inferred value of the mass-to-light ratio $M/L$ is similar to that observed for $L_\ast$ galaxies. This is a success for theory: dissipation provides a key ingredient in understanding the stellar masses of galaxies, at least for the ``typical'' galaxy.
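As a quick numerical check of this order of magnitude, one can evaluate the combination of fundamental constants directly. The sketch below is a back-of-the-envelope evaluation of ours, reading the formula with $\beta=-1/2$, $t_{\rm cool}\sim t_{\rm dyn}$, and a single proton mass supplying the dimensions (an assumption about the implicit mass scale):
\begin{verbatim}
# cgs units
G, c, hbar = 6.674e-8, 2.998e10, 1.055e-27
e = 4.803e-10                      # esu
m_p, m_e = 1.673e-24, 9.109e-28
M_sun = 1.989e33

alpha   = e**2 / (hbar * c)        # ~ 1/137
alpha_g = G * m_p**2 / e**2        # gravitational analogue

M_cool = (alpha**3 / alpha_g**2) * (m_p / m_e) * m_p
print(M_cool / M_sun)              # ~ 1e12 solar masses
\end{verbatim}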
The characteristic galactic mass is understood by the requirement that
cooling within a dynamical time is a necessary condition for efficient star
formation (Fig.~\ref{lfun}).
However,
the na\"{\i}ve assumption that stellar mass follows halo mass,
leads to too many small galaxies, too many big galaxies in the nearby
universe, too few massive galaxies at high redshift, and too many baryons
within the galaxy halos. In addition there are structural problems: for
example, massive galaxies with thin disks and/or without bulges are missing,
and the concentration and cuspiness of cold dark matter is found to be
excessive in barred galaxies and in dwarfs. The resolution to all of these
difficulties must lie in feedback. There are various flavors of feedback
that span the range of processes including reionization at very high
redshift, supernova (SN) explosions, tidal stripping and input from active
galactic nuclei (AGN). All of these effects no doubt have a role, but we shall see
that what is missing is a robust theory of star formation as well as adequate
numerical resolution to properly model the interactions between baryons,
dynamics and dark matter.
\subsection{Star formation rate and efficiency}
In addressing star-forming galaxies, the problem reduces to our fundamental
ignorance of star formation. Phenomenology is used to address this gap in our
knowledge. Massive star feedback in giant molecular clouds, the seat of most
galactic star formation, implies a star formation efficiency (SFE), defined
as star formation rate (SFR) divided by the ratio of gas mass to dynamical or disk
rotation time, of around 2\%. This is also found to be true globally in the
Milky Way (MW) disk.
Remarkably, a similar SFE is found in nearby star-forming disk galaxies. Indeed, SFRs per unit area in disk galaxies, both near and far, can be described by a simple law,
with SFE being the controlling parameter \citep{Silk97,Elmegreen97}:
\begin{equation}
\rm SFE ={SFR \times DYNAMICAL \, TIME \over GAS \, MASS} \approx 0.02.
\label{sfeeq}
\end{equation}
The motivation comes from the gravitational instability of cold gas-rich
disks, which provides the scaling, although the normalization depends on
feedback physics. For the global law, in terms of SFR and gas
mass per unit area, SN regulation provides the observed efficiency of about
2\%, which fits essentially all local star-forming galaxies. One finds from
simple momentum conservation that ${\rm SFE} = {\sigma_{\rm gas}\, v_{\rm cool}\,
m^\ast_{\rm SN}} / {E_{\rm SN}^{\rm initial}} \approx 0.02$. Here, $v_{\rm
cool}$ is the
SN-driven swept-up shell velocity at which approximate momentum conservation sets in and
$m^\ast_{\rm SN}\approx 150 \rm\, M_\odot$ is the mass formed in stars per SNII, in
this case for a \cite{Chabrier03} initial mass function (IMF).
This is a crude
estimator of the efficiency of SN momentum input into the interstellar
medium, but it reproduces the observed global normalization of the star
formation law.
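Plugging in fiducial numbers (our assumed values: $\sigma_{\rm gas}\sim 10\rm\,km/s$ for the cold interstellar gas, $v_{\rm cool}\sim 400\rm\,km/s$ and $E_{\rm SN}=10^{51}\rm\,erg$) indeed recovers the few-percent efficiency:
\begin{verbatim}
M_sun = 1.989e33              # g
sigma_gas = 1.0e6             # cm/s: cold gas velocity dispersion
v_cool    = 4.0e7             # cm/s: shell speed when momentum
                              #       conservation sets in
m_star_SN = 150.0 * M_sun     # stellar mass formed per SN II
E_SN      = 1.0e51            # erg: initial SN energy

print(sigma_gas * v_cool * m_star_SN / E_SN)   # ~ 0.01-0.02
\end{verbatim}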
The fit applies not only globally but to star formation complexes in
individual galaxies such as M51 and also to starburst galaxies. The star
formation law is known as the Schmidt-Kennicutt law \citep{Kennicutt+07}, and
its application reveals that molecular gas is the controlling gas ingredient.
In the outer parts of galaxies, where the molecular fraction is reduced due
to the ambient UV radiation field and lower surface density, the SFR
per unit gas mass also declines \citep{BLW11}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.77\hsize]{KDM12_Fig3.eps}
\caption{Schmidt-Kennicutt laws on nearby (including Local Group galaxies as
\emph{shaded regions}) and distant galaxies, as well as
Milky Way Giant Molecular Clouds \citep{KDM12}. The solid line is similar
to equation~(\ref{sfeeq}). }
\label{SKlaw}
\end{figure}
For disk instabilities to result in cloud formation, followed by cloud
agglomeration and consequent star formation, one also needs to maintain a
cold disk by accretion of cold gas. There is ample evidence of a supply of
cold gas, for example in the M33 group.
Other spiral galaxies show extensive reservoirs of HI in their outer regions,
for example NGC 6946 \citep{Boomsma+08} and UGC 2082 \citep{Heald+11a}.
Recent data extends the Schmidt-Kennicutt law to $z\sim 2,$ with
a tendency for ultraluminous starbursts at $z\sim 2$ to have somewhat higher
SFE (\citealp{Genzel+10}, see Fig.~\ref{SKlaw}).
A more refined theoretical model needs to take account of star formation in a
multi-phase interstellar medium.
One expects self-regulation to play a role. If the porosity in the form of SN
remnant-driven bubbles is low, there is no venting and the pressure is
enhanced, clouds are squeezed, and SN explosions are triggered by massive
star formation. This is followed by high porosity and blow-out,
and the turbulent pressure drops. Eventually halo infall replenishes the cold
gas, the porosity is lowered and the cycle recommences. Some of this
complexity can be seen in numerical simulations \citep{ATM11}. SNe provide
recirculation and venting of gas into fountains, thereby reducing the SFE and
prolonging the duration of star formation in normal disk galaxies.
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{Baldry+04_bimod.eps}
\caption{Illustration of galaxy bimodality. The contours are the density of SDSS
galaxies in color-luminosity space, after correction for selection effects
\citep{Baldry+04}.}
\label{bimod}
\end{figure}
In fact, galaxy colors illustrate the \emph{bimodality} of SFRs.
\emph{Elliptical and lenticular galaxies are red, spirals are blue.}
This lyric does not hide an underlying continuum of galaxy properties:
most galaxies lie in either the \emph{Red Sequence} or the
\emph{Blue Cloud} (see Fig.~\ref{bimod}).
This suggests that star formation in galaxies is either ongoing or was quenched
several Gyr ago. The small fraction of intermediate population, \emph{Green
Valley} galaxies suggests that some galaxies have experienced a recent quenching of
their star formation.
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm,angle=90]{Schawinski+07_Fig10.eps}
\caption{Ages of galaxies of different activity \citep{Schawinski+07}}
\label{ages}
\end{figure}
Seyfert galaxies have intermediate age stellar populations (\citealp{Schawinski+07}, see
fig.~\ref{ages}) and mostly lie in the Green Valley \citep{Schawinski12}.
This suggests that star formation is quenched by nuclear activity.
\subsection{Scaling relations}
The global properties of early-type galaxies are known to correlate: early
work focussed on $L\sim \sigma_v^4$ \citep{FJ76}.
The early work found a slope of 4 because of the inclusion of bright and
faint galaxies. The modern work finds a slope of 5 for luminous galaxies
($M_B\la-20.5$, core-S\'ersic galaxies) and a slope of 2 for the less
luminous spheroids, and has been distilled into the Fundamental Plane linking
mass, mass-to-light ratio, and mean
surface brightness at the effective radius \citep{BBF92}.
\begin{figure}[ht]
\centering
\includegraphics[width=9.5cm]{TBGW_Fig3.eps}
\caption{3D view of scaling relations of spheroidal systems from globular
clusters (GC) to clusters of galaxies (CSph), via ultra-compact dwarfs
(UCD), dwarf spheroidals (dSph), dwarf ellipticals (dE) and giant
ellipticals (E), where the axes are
half-luminosity, half-luminosity radius and total mass within
half-luminosity radius
\citep{TBGW11}.
The \emph{red} and \emph{blue planes} respectively represent the Fundamental Plane and
the ``virial plane'' of constant $M/L$.}
\label{3dscale}
\end{figure}
Figure~\ref{3dscale} shows a more modern version of the
properties of early-type galaxies, to which are added globular clusters and
clusters of galaxies. It is not yet understood what makes the continuity of
the global properties of massive systems fragment into two branches with
ultra-compact dwarfs and globular clusters on one side and dwarf spheroidals
on the other.
\subsection{Evolution of low mass galaxies}
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{vreionvsz.eps}
\caption{Evolution of circular velocities (at virial radius) for halos with
efficient star formation \citep{MTTC12}. The \emph{smooth curves} indicate the
mean evolution of halos (with final masses $\log h M = 8$ to 12 going
upwards).
The \emph{blue broken line} is a model for the
evolution of the minimum mass for galaxy formation (set by entropy feedback
and related to the temperature of the IGM) and the \emph{cyan shaded
region} represents our ignorance of this parameter. The \emph{magenta curve}
is the maximum circular velocity for efficient gas infall.
The \emph{grey shaded
bands} represent regions of thermal instability. The \emph{dashed curves}
shows 1 and $2\,\sigma$ fluctuations from the $\Lambda$CDM
primordial density fluctuation spectrum.}
\label{vminvsz}
\end{figure}
The accepted solution for gas disruption and dispersal in intermediate mass
and massive dwarfs (halo mass $\sim 10^8 -10^{10}\, \rm M_\odot$) is by SN
feedback. Most gas is ejected by the first generations of SNe in
systems with escape velocity $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 50 \rm\, km/s,$
corresponding to halo masses up to $\sim 10^8\rm\, M_\odot$, leaving behind
dim stellar remnants of dwarf galaxies \citep{DS86}. Presumably the luminous
dwarfs accrete gas at later epochs.
In very
low-mass halos gas cannot even fall in, because its specific entropy is too
high \citep{Rees86}. This \emph{entropy barrier} amounts to a temperature barrier
since the gas density, which to first order is proportional to the total mass
density, is the same in different halos at a given epoch.
Only halos of mass $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^5\rm \,M_\odot$ trap baryons that are able to
undergo early $\rm H_2$ cooling and eventually form stars.
Hydrodynamical simulations indicate that this lower limit is sharp \citep{Gnedin00,OGT08}.
Reionization
reinforces this limit by heating the intergalactic gas to high entropy,
hence suppressing subsequent
star formation (see Fig.~\ref{vminvsz}).
The abrupt increase of the sound speed to $\sim 10-20 \rm\, km/s$
at $z\sim 10$ means that dwarfs of halo mass $\sim 10^6-10^7\rm \,M_\odot,$ which
have not yet collapsed and fragmented into stars, will be disrupted. However
massive dwarfs are unaffected, as are the high $\sigma$ peaks that develop
into early collapsing, but rare, low mass dwarfs.
\subsection{Specific SFR}
Other serious, not unrelated, problems arise with low mass galaxies. In the hierarchical
approach, these generically form early. Theoretical models, both SAMs and
hydrodynamical, appear to fail to account for the observed specific star
formation rates (SFR per unit stellar mass or SSFR, \citealp{Weinmann+12}), producing too little star formation at
late times. Metallicity-dependent star formation alleviates the high
redshift problem, reducing the stellar mass that is in place early and
enhancing the SSFR as needed \citep{KD12}. However, it leads to inconsistency
at low redshift, because the change in metallicity
and the gas fraction anti-correlate, hence
leading to too little evolution in the SSFR.
As shown in Fig.~\ref{ssfr},
the star formation time-scale (or 1/SSFR) goes from the MW
value of $\sim 10\,\rm Gyr$ at low redshift to $\sim 0.5\,\rm Gyr$ at $z\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 2$. This
result suggests two distinct feedback-regulated
modes of star formation: at low redshift via SNe and without AGN, and
at high redshift with, most plausibly, quenching and possibly triggering
by AGN playing a central role. One would expect a transition between these
two modes as the AGN duty cycle becomes shorter beyond $z\sim 1.$
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\hsize]{WND11_Fig1.eps} \\
\includegraphics[width=0.8\hsize]{Rodighiero+11_Fig1.eps}
\caption{Evolution of the specific SFR (SSFR) of galaxies of
stellar mass $0.2-1\times 10^{10} \rm \,M_\odot$ (\citealp{WND11},
\emph{top}); SFR for galaxies from different samples (different color
symbols), highlighting the ``Main Sequence'' and a population of
starbursts, with
SSFR mass-dependence inset (\citealp{Rodighiero+11}, \emph{bottom}).
}
\label{ssfr}
\end{figure}
A related triggering mechanism appeals to enhanced rate of merging at high $z$
\citep{KS11}. Alternatively, it has been argued that intensified halo cold
gas accretion at early epochs may account for all but the most extreme
SFRs at high $z$, although this may require an implausibly
high SFE \citep{Dekel+09}.
\subsection{Spheroidal galaxies}
The baryon fraction is far from its primordial value in all systems other than massive galaxy clusters. SNe cannot eject significant amounts of gas from massive galaxies.
Baryons continue to be accreted over a Hubble time and the stellar mass grows. One consequence is that massive galaxies are overproduced in the models, and that the massive galaxies are also too blue.
Galaxies like the MW have peanut-shaped pseudobulges, in contrast with
the classical bulges of more massive spirals. If formed by secular gas-rich disk instabilities, they
should have an age distribution similar to that of the old disk. However the
formation time would be at least $\sim 1 \rm\, Gyr.$ The elevated $\alpha/[\rm Fe]$
ratio of our bulge favors a shorter formation time. This would be more
consistent with an early disk instability phase reminiscent of that
associated with clumpy gas-rich galaxies observed at $z\sim 2.$ Massive clump
merging provides a possible solution for forming bulges at a relatively late
epoch \citep{CDB10}.
However the time-scale (several Gyr) is too long to
result in the enhanced $\alpha/[\rm Fe] $ ratios characteristic of massive
spheroids, or even the less extreme enhancement in the MW bulge.
The shorter time-scales required arise in more plausible cosmological
initial conditions that result in a redshift $z>2$ for pseudobulge formation
\citep{Okamoto12}.
\subsection{The role of AGN }
SNe have little impact on the formation of massive galaxies. Feedback
from SN explosions fails to stop the streaming of cold flows towards
the centre~\citep{PSD11}. The SN ejecta tends to be driven out with
only modest interaction with, and entrainment of, cold infalling gas. A more
coherent and effective interaction is provided by AGN feedback from
supermassive black holes (SMBH).
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\hsize]{McConnell+11_Fig3a.pdf}
\caption{Black hole mass versus spheroid velocity dispersion
(luminosity-weighted within one effective radius), from
\cite{McConnell+11}}
\label{smbhscale}
\end{figure}
A clue towards a solution for these dilemmas comes from the accepted
explanation of the \citeauthor{Magorrian+98} relation, which relates SMBH
mass to spheroid mass \citep{Magorrian+98} and velocity dispersion
(\citealp{FM00}, see Fig.~\ref{smbhscale}).
This requires collusion
between black hole growth and the initial gas content of the galaxy when the
old stellar spheroid formed. One conventionally appeals to outflows from the
central black hole that deliver momentum to the protogalactic gas. When the
black hole is sufficiently massive, the Eddington luminosity is high enough
that residual gas is ejected. An estimate of the available momentum supply
come from equating the Eddington momentum with self-gravity on circumgalactic
gas shells, $L_{\rm Edd}/c=4\pi G M/\kappa = GMM_{\rm gas}/r^2$, where
$\kappa$ is the opacity. Blowout occurs and star
formation terminates when the SMBH--$\sigma_v$ relation saturates. This
occurs for $M_{\rm BH}\propto\sigma_v^{4}$, close to the observed slope of
$\ga 5$
\citep{GOAC11}, and gives the correct
normalization of the relation, at least in order of magnitude. This is the early feedback quasar mode.
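To make the scaling explicit, consider the standard momentum-driven estimate (a sketch, assuming an isothermal potential with gas fraction $f_{\rm g}$): with $M(r)=2\sigma_v^2 r/G$ and $M_{\rm gas}=f_{\rm g}M(r)$, the weight of the gas,
$$
{G M M_{\rm gas}\over r^2} = {4 f_{\rm g}\,\sigma_v^4\over G} \ ,
$$
is independent of radius, so equating it with $L_{\rm Edd}/c=4\pi G M_{\rm BH}/\kappa$ gives
$$
M_{\rm BH} = {f_{\rm g}\,\kappa \over \pi G^2}\,\sigma_v^4
\approx 4\times 10^{8}
\left({\sigma_v \over 200\rm\, km/s}\right)^{4} \rm M_\odot
$$
for $f_{\rm g}\approx 0.16$ and electron-scattering opacity, in order-of-magnitude agreement with figure~\ref{smbhscale}.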
There is also a role for AGN feedback at late epochs, when the AGN radio mode
drives jets and cocoons that heat halo gas, inhibit cooling, resolve the
galaxy luminosity function bright end problem and account for the red colors
of massive early-type galaxies. AGN feedback in the radio mode may also
account for the suppression in numbers of intermediate mass and satellite
galaxies (e.g., \citealp{Cattaneo+09} and references therein).
Feedback from AGN in the host galaxies also preheats the halo gas
that otherwise would be captured by satellites.
\subsection{Galaxies downsize}
Our understanding of galaxy formation is driven by observations.
Prior to 2000 or so, it was accepted that hierarchical galaxy
formation predicted that small galaxies form prior to massive
galaxies.
The first indications that this was in error came from the recognition that
more massive early-type galaxies have redder colors \citep{deVaucouleurs61},
higher metallicities \citep{Faber73} and enhanced
$[\alpha]/[\rm Fe]$ metallicity ratios \citep{Ziegler+05}, indicative of an older
stellar population with a shorter star formation
time (see Fig.~\ref{figdownsize}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\hsize]{Thomas+10_fig3a.eps}
\includegraphics[width=0.49\hsize]{Thomas+10_fig6.eps} \\
\includegraphics[width=0.52\hsize]{Thomas+10_fig9.eps}
\caption{Metallicity ratio and age versus galaxy velocity dispersion
(i.e. mass) and history of star formation \citep{Thomas+10}}
\label{figdownsize}
\end{figure}
This effect is called \emph{downsizing}, as
the most massive galaxies have their stellar populations in place early.
In effect, we have a cosmic clock: incorporation into stars of debris
from SNe~II ($ \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 10^8$ yr) versus SNe~I ($\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^{9}$ yr) provides a
means of dating the duration of star formation.
This result was soon
followed by infrared observations that showed that stellar mass assembly
favored more massive systems at earlier epochs \citep{Gonzalez+11}.
\subsection{Morphological evolution}
We cannot do justice in this review to a largely phenomenological discussion
of morphological evolution of both disk and irregular galaxies. Here,
observations are far ahead of theory. However, there are strong arguments to
support a continuous sequence between dwarf spheroidal galaxies and
S0 galaxies
\citep{KB12}. The transformation
applies to the disk components and may involve ram pressure stripping of cold
gas \citep{GG72} as well as galaxy harassment \citep{MLK98}. This sequence seems to act
in parallel to the pseudobulges or bulges of S0 galaxies being generated via
stripped/harassed or simply starved disk galaxies \citep{KB12}.
\section{Methods}
\subsection{Observational surveys}
The fundamental driver of progress in astronomy is through
observations. The advent of large galaxy surveys, either wide spectroscopic surveys
probing the nearby Universe (e.g., SDSS) or narrower surveys using
photometric redshifts and often in the infrared domain (e.g., with Spitzer and
Herschel) to probe distant
galaxies in the optical and near-infrared domains, has led to formidable
progress in understanding galaxy formation.
Nevertheless, it is difficult to link the galaxies we see at high redshift
with the ones we see in local Universe, and one is prone to Malmquist bias,
as well as aperture and other selection effects.
\subsection{Semi-Analytical Models}
Several simulation techniques have been developed to be able to link galaxies
from the past to the present, and to obtain a statistical view of the variety
of the evolution histories of galaxies, in terms of star formation, stellar
mass assembly and halo mass assembly.
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm]{LC93mergertree.eps}
\caption{Illustration of halo merger tree \citep{LC93} showing the
progenitors of a halo selected at time $t_0$}
\label{tree}
\end{figure}
Given current computational constraints, it is impossible to achieve the
sub-parsec or finer resolution needed to adequately model star formation and
accretion onto black holes in
a cosmological simulation. Theorists have invented a swindle, wherein the
complex processes of star formation and accretion onto SMBH
are hidden inside a black box called
``sub-grid physics'' that can be tagged onto a large-scale simulation.
In SAMs, galaxies are ``painted'' on halos built from halo merger trees or
detected in cosmological dissipationless (dark matter only) simulations.
The former (see Fig.~\ref{tree}) produce the mass assembly history (MAH) of halos with the
condition that they end up in a halo of mass $M_0$ at epoch $z_0$ (usually
$z_0=0$). The branches are drawn from random samplings given the known
conditional probabilities arising from extensions \citep{Bower91,LC93} and
modifications of the \cite{PS74} formalism. Halo merger trees have the
advantage of being rapid to compute, but lack positional
information. Cosmological simulations, with
up to $10^8$ particles, are becoming increasingly common, but are intensive to
process, in particular to detect halos (\citealp{Knebe+11} and references
therein) and subhalos (\citealp{Onions+12} and references therein) and build halo merger trees.
Once the halo MAH is identified, one follows the branches of the tree from
past to present, to build galaxies. The galaxy formation recipe includes
several ingredients:
\begin{enumerate}
\item
The gas cooling time must be short for the gas to
dissipatively cool into a disk.
In particular, gas cannot fall onto low-mass halos because of the cooling
barrier
and falls less efficiently onto
high-mass halos because of a virial shock, whereas gas can infall along cold
filaments on lower mass halos.
\item Star formation occurs at a rate $\dot m = {\rm cst}\, m_{\rm gas}/t_{\rm
  dyn}$, where $t_{\rm dyn}$ is a measure of the dynamical time of the
  galaxy (a toy numerical sketch of this gas cycle is given after this list).
\item Feedback from SNe and from the relativistic jets arising from
central SMBHs hiding as AGN
heats up the surrounding gas.
\item While the gas settles into disks, major mergers of galaxies cause disks
to transform into ellipticals, and after subsequent disk build-up, the
merger remnant is identified to a bulge inside a spiral galaxy. The bulge
can also be built-up by repeated minor mergers, as well as starbursts and
secular evolution of the disk.
\item The SAM keeps track of star formation times and predicts galaxy
luminosities in different wavebands using population synthesis codes.
\item When a smaller halo enters a larger one, it becomes a subhalo, its
galaxy becomes a satellite, and
usually, the gas that attempts to fall onto the subhalo is now directed
towards the central galaxy of the halo.
\item Satellite galaxy trajectories are assumed to be those of the
subhalos they belong to, and when they are no longer resolved in the
cosmological simulation, or if one is only using a halo merger tree, the
galaxies are merged with the central galaxy on a dynamical friction timescale
(calibrated on simulations, e.g. \citealp{Jiang+08}).
\end{enumerate}
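To fix ideas, the skeleton of such a recipe can be written in a few lines. The following toy sketch is purely illustrative (the parameter names and values are ours, and real SAMs track many more channels, including accretion, mergers, AGN and chemical enrichment); it advances the hot gas, cold gas and stellar masses of a single galaxy over one timestep:
\begin{verbatim}
def sam_step(hot, cold, stars, dt, t_cool, t_dyn,
             eps_sf=0.02, eta_sn=1.0):
    # hot -> cold at rate hot/t_cool        (cooling)
    # cold -> stars at eps_sf*cold/t_dyn    (star formation)
    # cold -> hot at eta_sn * SFR           (SN reheating)
    # assumes dt << t_cool and dt << t_dyn/eps_sf
    cooled = hot * dt / t_cool
    formed = eps_sf * cold / t_dyn * dt
    reheat = eta_sn * formed
    hot   += reheat - cooled
    cold  += cooled - formed - reheat
    stars += formed
    return hot, cold, stars
\end{verbatim}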
An advantage of SAMs is that it is easy to gauge the importance of various
physical processes by seeing how the outcome is changed when a process is
turned off in the SAM.
Present-day SAMs are increasingly complex, and SAMs can include up to $10^5$
lines of code. The popularity of SAMs has increased with public-domain outputs
\citep{Bower+06,Croton+06,DLB07,Guo+11} and codes \citep{Benson12}.
\subsection{Hydrodynamical simulations}
The weakness of SAMs is that much of the physics is controlled by hand (except for
gravity, when the SAMs are directly applied to cosmological N-body
simulations of the dark matter component). Hydrodynamical simulations provide
the means to treat hydrodynamical processes in a much more self-consistent
manner, and cosmological hydrodynamical simulations have been run for nearly
25 years (starting from \citealp{Evrard88}).
These simulations come in two flavors: cell-based and
particle-based. It was rapidly realized that cell-based methods could not
resolve at the same time the very large cosmological scales and the small
scales within galaxies, and the early progress in the field was
driven by the Smooth Particle Hydrodynamics (SPH) method
\citep{GM77,Monaghan92,Springel10_ARAA}, in which the
diffuse gas is treated as a collection of particles, whereas the physical
properties (temperature, metal content, etc.) are smoothed over neighboring
particles using a given SPH-smoothing kernel.
Despite early successes (comparisons of different hydrodynamical codes by
\citealp{Frenk+99}), SPH methods fail to
resolve shocks as well as Rayleigh-Taylor and Kelvin-Helmholtz
instabilities \citep{Scannapieco+12}.
This has brought renewed popularity to cell-based methods,
with the major improvement of resolution within the Adaptive Mesh Refinement
(AMR) scheme \citep{KKK97,OShea+04,Teyssier02}, where cells can be refined into
smaller cells following a condition on density or any other physical
property.
Moreover, schemes with deformable cells (that do not follow the Cartesian
grid) were developed 20 years ago, and are now becoming more widely used
(e.g., AREPO, \citealp{Springel10}).
In these hydrodynamical codes, stars can be formed when the gas is
sufficiently dense, with a convergent flow and a short cooling time.
Current codes do not have sufficient mass resolution
to resolve individual stars, so the star particles made from the gas are
really collections of stars, with an initial mass function. One can therefore
predict how many core-collapse SNe will explode after the star
particle forms, and the very considerable SN energy is usually
redeposited into the gas by adding velocity kicks to the neighboring gas
particles and possibly also thermal energy. Similarly, AGN can be implemented,
for example by forcing a Magorrian type of relation between the SMBH
mass and the spheroidal mass of the galaxy, and feedback from AGN jets can be
implemented in a similar fashion as is feedback from SNe.
Hydrodynamical codes are therefore not fully self-consistent, as they
include semi-analytical recipes for the
formation of stars and the feedback from SNe and AGN.
\subsection{New methods}
\subsubsection{Analytical models}
The growing complexity of SAM codes (e.g., the publicly-available GALACTICUS
code of \citealp{Benson12} contains over 120$\,$000 lines of code and
involves over 30 non-cosmological parameters with non-trivial values) has led
some to seek simpler descriptions of galaxy formation.
One level of simplicity is to parameterize the time derivatives of the
different components of galaxies (stars, cold gas, hot gas, dark
matter) as linear combinations of these parameters \citep{NW10}. But one can
go to an even simpler level and characterize the fraction of gas that can cool
\citep{BVM92} or the mass in stars \citep{CMWK11} as a function of halo mass
and epoch. Although such approaches are much too simple to be able to capture
the details of galaxy formation, they are sufficient to study simple
questions.
Assuming that all the gas in the range of temperatures between $10^4\,\rm K$
and the maximum where the cooling time is shorter than the age of the
Universe (or the dynamical time) effectively cools, and that the gas is
replenished on the timescale over which halos grow, \cite{BVM92} showed that
nearly all the baryons should have converted to stars by $z=0$. Since this is
not observed, this simple calculation shows that feedback mechanisms are
required to prevent excessive star formation.
\begin{figure}[ht]
\centering
\includegraphics[width=10cm,angle=-90]{CMWK11_Fig4.eps}
\caption{Stellar versus host halo mass from analytical model by \cite{CMWK11} run
on dark matter simulation. The shaded regions are results obtained from the
conditional luminosity function (\emph{blue}, \citealp{YMvdB09}) and
abundance matching (\emph{gold}, \citealp{GWLB10}).
The \emph{large symbols} denote galaxies that have acquired most of their stellar mass
through mergers (rather than smooth gas accretion). The \emph{green curves}
show the galaxy formation model at $z=0$ and $z=3$.
}
\label{mvsMCattaneo}
\end{figure}
\cite{CMWK11}
apply their simple galaxy formation prescription onto the halos
of a high-resolution cosmological $N$ body simulation and
reproduce the $z=0$ observed stellar mass function with only four parameters,
despite the overly simplistic model. They find a fairly narrow
stellar versus halo mass relation for the dominant (``central'') galaxies in
halos and a gap between their stellar masses and those of the satellites, in
very good agreement with the relations obtained by \cite{YMvdB09} from the
SDSS using conditional stellar mass functions (Fig.~\ref{mvsMCattaneo}). This gap is remarkable, as it
is less built into
\citeauthor{CMWK11}'s method than it is in SAMs: for example, halos with two dominant galaxies
(such as observed in the Coma cluster) are allowed.
Similar ``successful'' analytical models have been proposed by \cite{Peng+10} and \cite{Bouche+10}.
\subsubsection{Halo Occupation Distribution}
A simple way to statistically populate galaxies inside halos, called
\emph{Halo Occupation Distribution} (HOD), is to assume a
functional form for some galaxy statistic in terms of the halo
mass.
\begin{figure}[ht]
\centering
\includegraphics[width=0.42\hsize]{CWK06_Fig5b.eps}
\includegraphics[width=0.42\hsize]{CWK06_Fig5a.eps}\\
\includegraphics[width=0.6\hsize,angle=-90]{BWC12_Fig14.eps}
\caption{\emph{Top left}:
Illustration of HOD models of multiplicity functions (per halo) obtained from
abundance matching \citep{CWK06}.
\emph{Top right}:
Abundance matching prediction on the galaxy correlation function compared to
SDSS observations (\emph{symbols}), while the halo-halo correlation function
is shown as \emph{dotted lines} \citep{CWK06}.
\emph{Bottom}:
Halo mass for given stellar mass obtained by abundance matching (AM), HOD,
conditional luminosity function (CLF) and group catalogs (CL) at $z=0.1$ \citep{BWC12}.
The \emph{shaded region} shows the AM analysis of \cite{BCW10}.
}
\label{AMHOD}
\end{figure}
The galaxy statistic can be the multiplicity function (for galaxies
more massive or more luminous than some threshold, \citealp{BW02}, see upper
left panel of Fig.~\ref{AMHOD}),
the luminosity or
stellar mass function, generically denoted CLF for conditional luminosity
function \citep{YMvdB03}.
Although these HOD methods have no underlying
physics, they are a very useful tool to derive galaxy trends with halo mass,
or, in other words to find the effects of the global environment on galaxies.
\subsubsection{Abundance Matching}
An offshoot of HOD models is to link the mean trend of some galaxy property
in terms of the mass of its halo, using so-called Abundance Matching (AM).
The idea is to solve $N(>x) = N(>M_{\rm halo})$, i.e. matching cumulative
distributions of the
observed galaxy property, $x$, with the predicted one for halo masses, determined
either from theory \citep{PS74,SMT01} or from cosmological $N$ body
simulations \citep{WAHT06,Tinker+08,CFCG10,Courtin+11}.
Common uses of AM involve one-to-one
correspondences between
1) stellar and halo mass for central galaxies in halos,
2) total stellar mass and halo mass in halos, and
3) stellar and subhalo mass in galaxies.
\cite{MH02} performed the first such AM analysis to determine $M_{\rm
halo}/L$ versus $L$; they first had to determine the observed cosmic stellar
mass function, not counting the galaxies within groups, but only the groups
themselves \citep{MHG02}.
\cite{GWLB10} used this third approach (called subhalo abundance matching or
SHAM, and pioneered independently by \citealp{VO06} and \citealp{CWK06}),
to determine the galaxy
formation efficiency $m_{\rm stars}/M_{\rm halo}$ as a function of $M_{\rm
halo}$, by matching the observed stellar mass function with the subhalo mass
function that they determined in the Millennium \citep{Springel+05} and
Millennium-II \citep{BoylanKolchin+09} simulations.
Although AM methods are based upon a fine relation between stellar and halo
mass, they can easily be adapted to finite dispersion in this relation \citep{BCW10}.
Not only is AM very useful to determine, \emph{without free parameters}, the
relation of stellar to halo mass (lower panel of Fig.~\ref{AMHOD}), but it
superbly predicts the galaxy correlation function of SDSS galaxies
(\citealp{CWK06}, upper
right panel of Fig.~\ref{AMHOD}).
The drawback of AM methods is that they do not clarify the underlying physics
of galaxy formation.
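Operationally, AM is little more than a rank-ordered matching of two cumulative mass functions. A minimal sketch (assuming equal survey and simulation volumes and, for simplicity, zero scatter; scatter can be added afterwards by convolution, as in \citealp{BCW10}):
\begin{verbatim}
import numpy as np

def abundance_match(m_star, m_halo):
    # solve N(>m_star) = N(>M_halo) by rank ordering: the i-th
    # most massive (sub)halo hosts the i-th most massive galaxy
    ms = np.sort(np.asarray(m_star))[::-1]
    mh = np.sort(np.asarray(m_halo))[::-1]
    n = min(ms.size, mh.size)
    return mh[:n], ms[:n]       # paired (M_halo, m_star)
\end{verbatim}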
\section{Results from numerical simulations}
\subsection{General results from semi-analytical models of galaxy formation}
SAMs have been remarkably successful in constructing mock catalogs of
galaxies at different epochs and are used in motivating and in interpreting
the large surveys of galaxies.
\begin{figure}[ht]
\centering
\includegraphics[width=0.52\hsize]{Guo+11_Fig7.eps}
\includegraphics[width=0.47\hsize]{Guo+11_Fig14.eps}\\
\includegraphics[width=0.5\hsize]{Guo+11_Fig22.eps}
\includegraphics[width=0.49\hsize]{Koposov+09_Fig12a.eps}
\caption{Illustrations of predictions of SAMs at $z=0$.
\emph{Upper left}: Stellar mass functions \citep{Guo+11}: symbols are from SDSS \citep{LW09},
while curves are from a SAM run on both wide and low-resolution Millennium
Simulation and on the higher but smaller MS-II simulation.
\emph{Upper right}: Galaxy correlation functions \citep{Guo+11}.
\emph{Bottom left}: Evolution of the cosmic SFR
\citep{Guo+11}.
\emph{Bottom right}: Very low end of the galaxy luminosity function \citep{Koposov+09}.
}
\label{GuoSAMz0}
\end{figure}
For example, they reproduce very well the
$z=0$ stellar mass function and correlation function (see Fig.~\ref{GuoSAMz0}).
\begin{figure}[ht]
\centering
\includegraphics[width=\hsize]{Guo+11_Fig23_cut.pdf}
\caption{Evolution of stellar mass functions predicted by \cite{Guo+11}.
\emph{Open triangles} and \emph{red circles} represent observations by
\cite{PerezGonzalez+08} and \cite{Marchesini+09}, respectively.
\emph{Black} and \emph{green curves} represent the predicted stellar mass functions
of galaxies, respectively before and after convolving the stellar masses by
0.25 dex measurement errors.}
\label{GuoSAMhiz}
\end{figure}
However, attempts to solve the problems of high redshift galaxies have so far
been woefully inadequate. For example, they cannot reproduce the rapid
decrease in the cosmic SFR since $z=1$ (see Fig.~\ref{GuoSAMhiz}).
The early SAM feedback models used AGN
quenching, and required excessive dust in early types in the nearby universe
\citep{Bower+06}. Refinements to high redshift attempted to account
simultaneously for galaxy and AGN accounts, and only succeeded by requiring
excessive amounts of dust in order to hide most of the AGNs seen in deep
X-ray surveys \citep{Fanidakis+11}.
An early indication
that SAMs were entering uncertain territory can be seen in the early
predictions of the cosmic star formation history: as numerical resolution was increased, the predicted SFR
increased without limit \citep{SH03b}.
This makes one begin to doubt the predictive power
of SAMs.
Clearly, baryon physics is far more complicated than assumed in the early
SAMs of the 1990s. In fact, we still lack an adequate
explanation for the evolution of the stellar mass function.
Attempts to patch up the problem at low redshift, to avoid an
excess of massive galaxies, exacerbate the inadequacy of the predicted
numbers of massive galaxies at high redshift \citep{Fontanot+09b}. One
attempt to correct the problem at large redshift incorporates for the first
time thermally pulsing AGB (or carbon) stars in the models, and the extra NIR
luminosity reduces the inferred galaxy masses \citep{Henriques+11}. However
the price is that the lower redshift galaxy count predictions no longer fit
the models.
\subsection{Feedback and dwarfs}
Dwarf spheroidal galaxies are dark matter laboratories, dominated by dark
matter. However the numbers defy interpretation. Feedback is readily adjusted
to reduce the numbers of low mass dwarfs \citep{Koposov+09}, but the most
massive dwarfs predicted by $\Lambda$CDM simulations are not observed \citep{BBK12}.
This may be a function of the neglect of baryons in the Aquarius simulations: inclusion
of baryons reduces the central densities of massive dwarfs \citep{Zolotov+12}.
Unorthodox feedback (AGN) may also be a solution \citep{BBK11}.
Moreover, most low-mass dwarfs have cores rather than the cusps predicted by CDM-only simulations.
Baryonic feedback may reconcile data on dwarf core profiles with simulations that include star formation and gas cooling (\citealp{Oh+11, Governato+12}), who find that
SN-driven outflows help flatten dark matter central density cusps. As mentioned earlier, enhanced early star formation and SN production creates strong tensions with
the need for strong late low mass galaxy evolution.
SN feedback at later epochs may turn cusps into cores by sloshing of more recently accreted gas clouds \citep{MCW06},
more recently addressed in \cite{PG12}, who consider bulk gas motions and require short intense bursts of star formation. There may be evidence for such phenomena in dwarf galaxies \citep{Weisz+12}.
Multiphase simulations \citep{PSD11} confirm the
effectiveness of SN-driven winds, but find that they do not lead to baryon ejection. In a multi-phase medium
with more realistic filamentary accretion, outflows are only typically $\la 10\%$ of the
gas accretion rate.
It is not clear whether SN feedback may still
provide enough momentum to yield an acceptable fit to the low mass end
of the galaxy luminosity function for the classical dwarfs. Ram pressure
stripping \citep{GG72,Mayer+07} remains an alternative or complementary mechanism,
and morphological transformation of disks into dwarf spheroidals may be
accomplished by repeated rapid encounters, i.e. ``harassment'' \citep{MLK98}
or gravitationally-induced
resonances \citep{DBCH09}.
SN feedback enables present day disk galaxy properties to be reproduced,
including the Tully-Fisher relation and sizes, except for massive
disks. More energetic feedback, from an AGN phase, is envisaged as a
possible solution \citep{McCarthy+12}.
Many galaxies, including early types, have extended star formation
histories. Minor mergers provide an adequate gas supply to account for these
\citep{Kaviraj+09}. However, hydrodynamical studies of the baryonic evolution
and SFR in low mass galaxies disagree about whether or not
one can reproduce their observed properties, including dark matter cores and
baryon fraction. Outflows may reproduce the observed
cores \citep{Governato+12} if the SFE is high at early
epochs, but such models fail to result in the strong evolution observed at
low redshift \citep{Weinmann+12}.
Tidal disruption also plays a role in disrupting satellites whose orbits
intersect the disk or bulge. Dramatic discoveries due to deep imaging of
nearby galaxies with very small, wide field of view, telescopes confirm the
ubiquity of tidal tails that trace dwarf disruption in the remote past
\citep{MartinezDelgado+10}. Simulations provide a convincing demonstration
that we are seeing tidal disruption in action \citep{Cooper+10}. An
independent confirmation of disruption in action comes from studies of the
tidal tails around the outermost MW globular star clusters such as
Pal~13.
Gaps in the tails \citep{Grillmair09} indicate the presence of dark
satellites. Numerical simulations \citep{YJH11} find that high $M/L$
satellites of mass $\sim 10^7\rm\, M_\odot$ are required, again a prediction of
the CDM model.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{BTFR_wHall_wSN_AM.eps}
\caption{Baryonic Tully-Fisher relation (Mamon \& Silk, in prep.).
\emph{Symbols} are from HI measurements, where the velocity is the flat part of the
rotation curve (\emph{green}, from \citealp{McGaugh12}) or from line-widths
(\emph{black}, \citealp{Gurovich+10}; \emph{magenta}, \citealp{Hall+11}).
The \emph{grey} line is the na\"{\i}ve $\Lambda$CDM (slope 3) prediction with no
feedback, while the \emph{brown dashed line} is the (slope 4) prediction from
MOND.
Note that the inclination of Ho~II is uncertain \citep{Gentile+12}.
}
\label{BTFR}
\end{figure}
At $z=0$, it is possible that SN feedback at intermediate and low masses
combines with entropy feedback from photoionization at low masses to conspire
to give a linear baryonic Tully-Fisher relation (BTFR), as observed (see
Fig.~\ref{BTFR}). This is an important issue as the normalization, slope and
linearity of the BTFR have been used as evidence for MOdified Newtonian
Dynamics (MOND, \citealp{Milgrom83}) and against $\Lambda$CDM. Indeed,
\cite{McGaugh11} has pointed out that the observations of baryonic mass
(stars plus cold gas) as a function of the velocity of the flat part of the
rotation curve is very well matched by the MOND prediction (with no free
parameters). He argues that considerable fine-tuning is
required to bring the na\"{\i}ve $\Lambda$CDM (slope 3) prediction with no
feedback to match the data. Our best-fit model matches the data equally well
(with three free parameters), but the entropy feedback (photoionization)
implies that the relation should curve at low masses, except if one considers
galaxies in which the bulk of the stars formed before the reionization epoch
($z>6$). \cite{Dutton12} also matched the BTFR data with a SAM. Moreover,
two of the lowest mass galaxies in the \citeauthor{McGaugh12} sample have
rotation velocities corrected for asymmetric drift (see \citealp{BC04});
the rotation curves of these galaxies rise roughly linearly with
radius, so the adopted velocity depends on the last data point obtained with
radio observations, in contradiction with the flat part of the rotation curve sought
by \citeauthor{McGaugh12}.
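The na\"{\i}ve slope-3 prediction follows from virial scalings alone (a sketch, assuming all baryons are retained, $m_{\rm b}=f_{\rm b}M_{\rm halo}$, and that the observed velocity traces the halo virial velocity): since $M_{\rm halo}\simeq V^3/(10\,G\,H_0)$ for a virial overdensity of order 200 times critical, one has
$$
m_{\rm b} \simeq {f_{\rm b}\over 10\,G\,H_0}\, V^3 \ ,
$$
so feedback must both lower the normalization and steepen the slope of this relation to approach the observed BTFR.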
\begin{figure}[ht]
\centering
\includegraphics[width=0.33\hsize]{Bradley+12_Fig7.pdf}
\includegraphics[width=0.33\hsize]{Bouwens+12_Fig1bis.eps}
\includegraphics[width=0.33\hsize]{Bradley+12_Fig9.pdf}
\caption{\emph{Left}: Galaxy luminosity function at $z=8$ \citep{Bradley+12}.
\emph{Middle} and \emph{right}: evolution of the galaxy luminosity function
\citep{Bouwens+12} and of its faint-end slope \citep{Bradley+12}.}
\label{lfevol}
\end{figure}
Intermediate-mass dwarfs are present at high redshift and have a steep
luminosity function (\citealp{Bradley+12}, see Fig.~\ref{lfevol}).
They may contribute significantly to the reionization of the Universe.
\subsection{Gas accretion versus mergers}
Star formation seems to be too complex to be simply gravity-induced. Mergers
and AGN are possible culprits for triggering it. What seems to be
progressively clear is that there are two distinct modes of star
formation. One mode occurs without any intervention from AGN and is
characteristic of disk galaxies such as the MW, on a time-scale of order at
least several galactic rotation times. Another mode is more intense,
occurring on a relatively rapid time-scale, and involves the intervention of
AGN, at least for quenching and possibly for enhancement or even triggering.
The most important aspect of star formation is the role of the raw material,
cold gas. There are two modes of gas accretion, which may be classified as
cold flows/minor mergers and major mergers/cooling flows. The former provide supplies of cold gas along filaments, the latter
a source of hot gas which may cool and feed star formation.
\begin{figure}[ht]
\centering
\includegraphics[width=9cm]{Dekel+09_Fig1b.eps}
\caption{Mass flux map of a $M_v=10^{12}\,\rm M_\odot$ halo at $z=2.5$ from a
hydrodynamical simulation \citep{Dekel+09}. The \emph{circle} denotes the
virial radius.}
\label{fluxmap}
\end{figure}
The cold flows occur
in filamentary streams that follow the cosmic web of large-scale structure
(see Fig.~\ref{fluxmap}),
and include minor mergers via the dwarf galaxies that similarly trace the web
\citep{Dekel+09}. Theory suggests that, at low redshift, gas accretion by cold
streams is important, and that the cold streams are invariably clumpy and
essentially indistinguishable from minor mergers of gas-rich dwarfs. Major
galaxy mergers account for the observed morphological distortions that are
more common at high $z$,
and generally lead to cloud agglomeration, angular momentum loss and
cooling flows that feed star formation \citep{BPCT11}.
Observationally, cold flows are rarely, if ever,
detected. This is presumably because of the small covering factor of the
filaments \citep{Stewart+11_apjl, FK11}. Indirect evidence in favor of cold
accretion comes from studies of star formation in dwarfs. The best example
may be the Carina dwarf where three distinct episodes of star formation are
found \citep{THT09}. However at high redshift, major mergers between galaxies
are common. Indeed, Ultra-Luminous Infrared Galaxies (ULIRGs), whose SFRs are
huge, are invariably undergoing major, often multiple, gas-rich mergers
\citep{BBLC00}
and dominate the cosmic SFR history at $z \ga 2$, whereas
normal star-forming galaxies predominate at low ($z \la 2$) redshift
\citep{LBEOP09}.
This certainly favors the idea of massive spheroid
formation by major mergers.
Using their analytical model of galaxy formation on top of
a high-resolution cosmological simulation, \cite{CMWK11} show that only
in massive galaxies ($m_{\rm stars} >10^{11} h^{-1}\, \rm M_\odot$) do galaxy
mergers contribute to the bulk of the stellar mass growth (see also
\citealp{GW08}, who analyzed a simulation with 11 times worse mass resolution)
and these mergers
are mainly `dry' (gas-poor). As one goes to
lower stellar masses (down to their simulation's effective resolution limit of $10^{10.6}
h^{-1}\, \rm M_\odot$) the role of mergers sharply diminishes, suggesting, by
extrapolation, that mergers are, in general, unimportant for the mass growth of both these
intermediate-mass galaxies and low-mass galaxies, for which
the bulk of the growth
must be by gas accretion. Nevertheless, among those rare intermediate-mass
galaxies built by mergers, the growth in mass is mostly in `wet' (gas-rich) and
minor mergers.
In particular, the non-dominant cluster galaxies, known to be mostly dwarf
ellipticals, are rarely built by mergers.
The sudden dominance of major mergers at high galaxy masses is confirmed by
trends with stellar mass of the colors, color gradients and elongations of
SDSS galaxies (\citealp{Tremonti+04,Bernardi+10}, see also
\citealp{vanderWel+09,Thomas+10}).
At lower masses (and low redshift),
minor mergers are required to account for sizes and masses
\citep{Mclure+12, LopezSanjuan+12}.
Herschel observations of the Main Sequence of galaxy formation
(SFR versus stellar mass) suggest that
starbursts, commonly associated with major mergers, are displaced to higher
mass and SFR, but only account for 10\% of the SFR density at $z=2$
\citep{Rodighiero+11}.
However, this conclusion depends critically on the $\sim 100\,\rm Myr$ timescale assumed
for the starbursts, since the inferred contribution to the SFR density scales
roughly inversely with the assumed burst duration. If the starbursts had a
shorter duration, say 20 Myr, given the effective observation time of
$\sim 1\,\rm Gyr$, they would account for as much as 50\% of the star
formation at $z\sim 2$. It is difficult
to obtain independent estimates of starburst age, but for example the UV
continuum flattening observed at high $z$ for luminous star-forming galaxies
favors a younger starburst age \citep{Gonzalez+12} as would possible SED corrections for nebular emission.
\subsection{Initial stellar mass function}
The IMF of stars forming in galaxies
is usually treated as universal in galaxy formation modelling. There
has been a recent flurry of papers finding evidence for a systematic
steepening, from the \cite{Chabrier03} to \cite{Salpeter55} IMFs,
in massive early type galaxies. From
a spectral absorption line analysis, a correlation of IMF steepening with
enhanced velocity dispersion, [Mg/Fe] and sodium abundance is reported by
\cite{CvD12}.
A similar result is reported for stacked massive galaxy spectra
\citep{Ferreras+12}.
\begin{figure}[ht]
\centering
\includegraphics[width=10.5cm]{Cappellari+12_Fig2.eps}
\caption{Stellar mass-to-light ratio inferred from kinematical modeling
(after subtracting off the contribution of the DM component),
normalized to the Salpeter ratio inferred from stellar populations, versus
stellar $M/L$ from kinematical modeling, for six DM models \citep{Cappellari+12}.
}
\label{nonunivIMF}
\end{figure}
The modeling of the internal kinematics of early-type galaxies using
integral field spectroscopy
provides evidence for steeper IMFs (regardless of many plausible assumptions
on the DM) in increasingly more massive galaxies
(\citealp{Cappellari+12}, see Fig.~\ref{nonunivIMF}).
Lensing plus gas kinematics provides evidence for a Salpeter-like IMF in
several massive ellipticals \citep{Dutton+12}. There may also be a
correlation of a steeper IMF with the densest massive galaxies
\citep{DMS12}.
All of these studies report increasing $M/L$ with increasing spheroid velocity
dispersion and $[\alpha/{\rm Fe}]$.
The possible degeneracy between IMF and DM fraction and shape
is a concern because the DM profile steepens as a
consequence of adiabatic contraction.
While \cite{Cappellari+12} tried a variety of DM models that do not
significantly
influence their result (since they only
probed the region where dark matter accounts for at best 20\% of the mass),
only one study \citep{Sonnenfeld+12} so far has cleanly broken the degeneracy
with the dark matter profile:
by using a double Einstein ring,
\citeauthor{Sonnenfeld+12} found a strong case for a Salpeter IMF.
The adiabatic contraction of the DM is within the range found by \cite{GKKN04}.
The implications of a steeper IMF in massive galaxies for galaxy formation
models remain to be explored. The increased efficiency of star formation
required at early epochs will certainly provide further tensions with the
need to leave a substantial gas supply at late epochs for the observed late
evolution observed for low mass galaxies, as discussed below.
\subsection{Feedback and AGN}
Quenching of
star formation has been largely motivated by the apparent success of SMBH
feedback in reproducing the scaling and normalization of the black hole
mass--spheroid velocity dispersion ($M_{\rm BH}-\sigma_v$) relation, as first proposed
by \cite{SR98}.
SAMs indeed
demonstrate that AGN feedback is able to quench star formation in massive
elliptical galaxies \citep{Croton+06,Bower+06,Cattaneo+06,Somerville+08}.
One can reproduce the fairly sharp cut-off in the bright end of
the galaxy luminosity function~\citep{Bell+03, PJHC07}.
These SAMs do not require ``quasar mode'' AGN feedback with Eddington
luminosities.
\begin{figure}[ht]
\centering
\includegraphics[width=0.35\hsize]{GKKS12_Fig1a.eps}
\includegraphics[width=0.35\hsize]{GKKS12_Fig1b.eps} \\
\includegraphics[width=0.35\hsize]{GKKS12_Fig2a.eps}
\includegraphics[width=0.35\hsize]{GKKS12_Fig2b.eps}
\caption{Simulations (in 32 kpc box) of AGN feedback at 14 (\emph{left}) and
22 Myr (\emph{right}) after
the onset of the jet, in edge-on (\emph{top}) and face-on (\emph{bottom})
views of log density \citep{GKKS11}.}
\label{AGNsim}
\end{figure}
High resolution hydrodynamical cosmological simulations indeed show that while cold
streams initially feed the black hole, transferring angular momentum to
produce central disks \citep{DDST12} that become gravitationally
unstable and feed the compact bulge through migration of
clumps (see \citealp{bournaud+11}), the cold flows are eventually interrupted
by AGN-driven super-winds \citep{Dubois+12_blowout}.
However, the physics driving SMBH outflows is still not well understood. One issue
is that momentum-driven winds fail to account for the normalization of the
$M_{\rm BH}-\sigma_v$ relation~\citep{SN10, DQM12}, with the shortfall being
about a factor of 10. This momentum deficit can be supplied by radio
jet-driven outflows \citep{WB11}, which also account for the observed high
velocities of entrained cold gas \citep{WBU12}. Alternative or complementary
possibilities, possibly more relevant to radio-quiet quasars, include
positive feedback from outflow-triggered star formation (\citealp{SilkNorman09,
GKKS11}, see Fig.~\ref{AGNsim}) and energy-driven outflows
\citep{FQ12}. Nearby AGN show dense molecular rings surrounding circumnuclear
rings of star formation \citep{Sani+12}, reminiscent of the simulated
triggering of star formation \citep{GKKS11}.
SMBHs are generally found to correlate with bulges rather than with disks, pseudobulges or dark halos
\citep{Ho07,KBC11,KB11}, although disk galaxies appear to follow a similar $M_{\rm BH}-\sigma_v$
relation, albeit with more scatter \citep{GOAC11}.
This would simplify formation mechanisms, suggesting that bulges and SMBH grow together, perhaps self-regulating each other.
Massive black hole growth at early epochs seems to be (just) achievable by gas accretion.
Large cosmological simulations~(\citealp{DiMatteo+12}, see
also~\citealp{Li+07, SSH09, Khandai+12}) have shown that primordial massive
BHs can grow by cold filamentary infall, and acquire masses of up to several
billion solar masses by $z=6$ in the most massive halos ($M_{\rm
vir}\simeq10^{12-13}\, \rm \,M_\odot$).
Insight into black hole growth is provided by looking for extreme deviations
in the $M_{\rm BH}-\sigma_v$ relation. Massive black holes seem to be in place at
high redshift before spheroids \citep{Wang+11}. This is also the case for a
nearby starburst galaxy containing an AGN but without any matching
spheroid or indeed massive stellar component
\citep{RD12}.
On the
other hand, SMGs seem to contain relatively low mass black holes for their
stellar content \citep{Alexander+08}.
\section {Future prospects in observations}
A clue as to the nature of a possible solution may come from the fact that
quasars also reveal luminosity downsizing. This translates into downsizing
of central SMBH mass. One might be able to connect the two
phenomena if feedback from AGN were initially positive and also a strongly
nonlinear function of SMBH mass. Predictions of positive feedback include
circumnuclear rings on 10--100 pc scales in star-forming AGN. These should be
resolvable with ALMA, via both molecular lines that probe pressurized
molecular gas and FIR fine-structure lines that probe the interplay of
intense FUV radiation fields with photodissociation regions (PDRs and XDRs).
More conventionally, the evidence for AGN quenching of star formation seems
strong.
Superwinds driven by AGN are capable of depleting the
reservoir of star-forming gas over relatively short time-scales. However,
questions remain as to the relative roles of AGN winds, jet-driven bubbles,
SNe, and radiation pressure especially from OB star clusters. No doubt, JWST,
as well as 30+meter telescopes such as ELT
will complement HST by producing spectacular IR images of star formation, AGN
and outflows at work.
Accretion of neutral gas will be studied at
high sensitivity by SKA. Ultimately one needs a spectroscopic survey akin to
SDSS at $z=1-2$ and this will be provided by the Subaru Prime Focus
Spectrograph with optical and NIR capability. The next decade should bring a
vast increase in our phenomenological understanding of the basic processes at
play in galaxy formation and evolution.
\section {Future prospects in astrophysical theory}
Theory lacks adequate resolution and physics. Of course these issues are
intricately connected. One needs to tackle baryon physics and the associated
possibilities for feedback. Today, state-of-the-art cosmological
simulations of the MW with gas and star formation, such as the ERIS simulation \citep{GCMM11},
provide only $\approx 100\,\rm pc$ resolution. Hence, in current simulations, the
gas and star formation physics is included in an ad hoc way, because of the
resolution limitation. For example, while stars are known to form in the dense
cores --- of density $\ga 10^5\,\rm cm^{-3}$ --- of Giant Molecular Clouds, the
current hydrodynamical simulations adopt SF thresholds of typically $1\,\rm
cm^{-3}$ and always $\la 10^2\,\rm cm^{-3}$. Sharp increases of the SF
density threshold result in moving the SF regions outside of the nucleus
\citep{TCB10}.
However,
in reality, it is the unresolved subgrid physics that determines the actual
threshold, if one even exists. Mastery of the required subparsec-scale
physics will take time, but there is no obvious reason why we cannot achieve
this goal with orders of
magnitude improvement in computing power.
For the moment, phenomenology drives all modelling. This is true especially
for local star formation. A serious consequence is that physics honed on
local star-forming regions, where one has high resolution probes of
star-forming clouds and of ongoing feedback, may not necessarily apply in the
more extreme conditions of the early universe.
One issue that arises frequently is whether the perceived challenges to
$\Lambda$CDM justify a new theory of gravity. From MOND \citep{Milgrom83}
onwards, there are
any number of alternative theories that are designed to explain certain
observations. However, none can explain the ensemble of observations any
better than $\Lambda$CDM, nor do they rely on solid physical grounds.
But to the extent that any unexplained
anomalies exist, these are invariably at no more than the $2\,\sigma$ level of
significance. It seems that such ``evidence'' is not adequate motivation
for abandoning Einstein-Newton gravity. Indeed, while it is overwhelmingly clear
that there are many potential discrepancies with $\Lambda$CDM, we have
certainly not developed the optimal $\Lambda$CDM theory of galaxy
formation: the current models do not adequately include the baryons nor do we
reliably understand star formation, let alone feedback.
Other MOND-related issues are reviewed in \cite{FM11}, including challenges raised by the
apparent emptiness of local voids and satellite phase space correlations.
However, we regard these as more a matter of absorbing the significance of ever deeper galaxy and 21 cm surveys, on the one hand
(for example, deep blind HI surveys show that gas-rich galaxies are the least clustered of any galaxy population, \citealp{MGHG12}),
and on the other hand, of
questioning the details of hitherto inadequately modelled baryonic physics,
as developed for example in \cite{Zolotov+12}.
Whether appeal to alternative gravity is justified by inadequate baryonic
physics is a question of judgement at this point. Below we summarize many of
these failures: key reasons why $\Lambda$CDM does not yet provide a robust
explanation of the observations, and examples that represent challenges for
theorists.
\begin{enumerate}
\item
Massive bulgeless galaxies with thin disks are reasonably common
\citep{KDBC10}. Simulations invariably make thick disks and bulges. Indeed,
the bulges are typically overly massive relative to the disks for all
galaxies other than S0s. Massive thin disks are especially hard to simulate
unless very fine-tuned feedback is applied. A consensus is that the feedback
prescriptions are far from unique \citep{Scannapieco+12}. One appealing
solution involves SN feedback. This drives a galactic fountain that
feeds the bulge. A wind is driven from the bulge where star formation is
largely suppressed for sufficiently high feedback \citep{Brook+12}.
Another proposal includes radiation pressure from massive stars as well as SNe. The combined feedback helps drive
halo expansion, thereby limiting dynamical friction and bulge formation \citep{Maccio+12}.
\item
Dark matter cores are generally inferred in dwarf spheroidal galaxies,
whereas $\Lambda$CDM theory predicts a cusp, the NFW profile. Strong
SN feedback can eject enough baryons from the innermost region to
create a core \citep{Governato+10, PG12}, but this requires high early
SN feedback or a series of implausibly short bursts of star formation.
\item
The excessive predicted numbers of dwarf galaxies are one of the most cited
problems with $\Lambda$CDM. The discrepancy amounts to two orders of
magnitude. The issue of dwarf visibility is addressed by feedback that
ejects most of the baryons and thereby renders the dwarfs invisible, at least
in the optical bands. There are three commonly discussed mechanisms for
dwarf galaxy feedback: reionization of the universe at early epochs, SNe, and (ram
pressure and tidal) stripping. AGN-driven outflows via intermediate mass
black holes provide another alternative to which relatively little attention
has been paid \citep{SN10}.
None of these have so far been demonstrated to provide definitive solutions.
Reionization only works for the lowest mass dwarfs. The ultrafaint dwarfs in
the MW may be fossils of these first galaxies (as checked by detailed models,
\citealp{Koposov+09,SF09,BR11b}).
It is argued that SN feedback solves the problem for the more massive dwarfs \citep{Maccio+10}.
However, this conclusion is disputed by \cite{BBK11}, who use the Aquarius
simulations \citep{Springel+08}
to predict more massive dwarfs in dark-matter-only simulations than are observed. These authors
argue that the relatively massive dwarfs should form stars, and we see no
counterparts of these systems, apart possibly from rare massive dwarfs such
as the Magellanic Clouds. We have previously remarked that omission of
baryonic physics biases the dark matter-only simulations to an overstatement
of the problem by overpredicting dwarf central densities
\citep{Zolotov+12}.
\item
The SFE in dwarfs is highly debated. Let us put aside
the high SFE at early epochs that is required to obtain
strong feedback in order to generate cores. For example, it is possible that
intermediate mass black holes could be invoked to solve this problem and
simultaneously generate the required low baryon fraction \citep{PJSP12}.
In order to obtain the required late epoch evolution \citep{Weinmann+12}, one
might appeal to a lower SFE in dwarfs, plausibly
associated with low metallicities and hence low dust and $\rm H_2$ content.
Models based on metallicity-regulated star formation can account for the
numbers and radial distribution of the dwarfs by a decreasing SFE
\citep{Kravtsov10}. This explanation is
disputed by
\cite{BBK11}, who infer a range in SFEs for the dwarfs of some two orders of magnitude.
A similar result arises from varying the halo mass threshold below which star formation must be suppressed to reproduce
the dwarf luminosity function: the stellar masses of many observed dwarfs violate this condition \citep{Ferrero+11}.
Finally, tidal stripping may provide a solution \citep{Nickerson+11}, at least for the inner dwarfs.
\item
Another long-standing problem relates to downsizing. Massive galaxies are
in place before lower mass galaxies as measured by stellar mass assembly, and
their star formation time-scales and chemical evolution time-scales at their
formation/assembly epoch are shorter. One popular explanation
\citep{CDFG08} is that
galaxies cannot accrete/retain cold gas in massive halos, either because of
AGN feedback or because of virial shocks that prevent the gas supply of the
disk in cold filaments \citep{BD03}.
\item
It is possible to develop galaxy
formation models with suitable degrees and modes of feedback that address
many of these issues. However, a major difficulty confronted by all SAMs
is that the evolution of the galaxy luminosity function
contradicts the data, either at high or at low redshift. The SAMs that are
normalized to low redshift and tuned to account for the properties of local
galaxies fail at high redshift by generating too many red galaxies
\citep{Fontanot+09b}. Too few blue galaxies are predicted at $z=0.3.$
This problem has been addressed by including AGB stars in the stellar
populations. This fix results in a more rapid reddening time-scale by
speeding up the evolution of the rest-frame near-infrared galaxy luminosity
function \citep{Henriques+11}. There is a price to be paid however: now there
are excess numbers of blue galaxies predicted at $z=0.5$.
\item
There is a well-known difficulty in matching both the galaxy luminosity function and
Tully-Fisher scaling relation, even at $z=0$.
Reconciliation of the Tully-Fisher zero point with the galaxy
luminosity function requires too high an efficiency of star formation
\citep{GWLB10}.
In fact, the problem is even worse: the models of massive spirals tuned to
fit the Tully-Fisher relation are too concentrated \citep{McCarthy+12}.
This is a reflection of the over-massive bulge problem in disk galaxies that
simply refuses to go away \citep{NS00,ANSE03}.
\item
The luminosity function problem is most likely related to another
unexplained property of high redshift galaxies. The SSFR evolution at high $z$
is very different from that at low $z$. Essentially, it saturates. One finds
an infrared Main Sequence of galactic SFRs: SFR versus
$M_*$ \citep{Elbaz+11}. Neither the slope nor the scatter is adequately
understood. Starburst galaxies lie above the Main Sequence, but the fraction of cosmic star formation in these systems depends on inadequately justified assumptions about starburst duration.
For example, nebular emission and dust extinction affect inferred ages, and one cannot easily understand the blue continuum slopes observed at high redshift and lower UV luminosities \citep{Bouwens+12_UV}.
\item The observed rapid growth of early-type galaxy sizes since $z=2$ for fixed stellar
mass cannot be reproduced in SAMs or analytical models \citep{CNC12}: at
$z=2$ galaxies are too compact.
\item
Much has been made of nearby rotation curve wiggles that trace similar
dips in the stellar surface density that seemingly reduce the significance of
any dark matter contribution. Maximum disks optimize the contribution of
stars to the rotation curve, and these wiggles are most likely associated
with spiral density waves. A similar result may be true for low surface
brightness gas-rich dwarf galaxies \citep{SSvAvdH11}.
\item
High mass-to-light
ratios are sometimes required for maximum disk models of spiral galaxy
rotation curves, but these are easily accommodated if the IMF
is somewhat bottom-heavy. The case for IMF variations has been made for
several data sets, primarily for early-type galaxies (e.g., see
\citealp{vDC11}). The LSB dwarfs are plausible relics of the building blocks
expected in hierarchical formation theories.
\item
Spiral arms are seen in the HI distribution in the outer regions of some disks. This tells us that significant
angular momentum transfer is helping feed the optical inner disk. The baryon self-gravity is large enough that one does not for example need to appeal to a flattened halo, which might otherwise be problematic for the DM model
\citep{BA10}.
\item
The slope and normalization of the baryon Tully-Fisher relation do not
agree with the simplest $\Lambda$CDM prediction. The observed slope is
approximately 4, similar to what is found for MOND \citep{Milgrom83},
whereas $\Lambda$CDM (without feedback) gives a slope of 3 \citep{McGaugh11,McGaugh12}, but fails to account for the observed dispersion and curvature.
\item
The baryon fraction in galaxies is some 50\% of the primordial value predicted by light element nucleosynthesis. These baryons are not in hot gaseous halos
\citep{AB10}. Convergence to the universal value on cluster scales is controversial: convergence to the WMAP value is seen for X-ray clusters above a temperature of 5 keV \citep{DBKR10},
but the baryon shortfall could be as large as 30\% even for massive clusters \citep{Andreon10, Scannapieco+12}.
If the latter discrepancy were to be confirmed, one would need significant bias of baryons relative to dark matter, presumably due to feedback, on unprecedentedly large scales.
\item The distribution of the MW satellite galaxies in a great circle
\citep{LyndenBell82} is unexpected in the $\Lambda$CDM context
\citep{KTB05}. However, infall onto halos is not spherically symmetric
\citep{APC04}, and subhalos tend to lie in a plane \citep{Libeskind+05}.
The details of the thickness of this plane remain to be settled (e.g.,
\citealp{Kroupa+10} versus \citealp{Libeskind+11}).
\item There is a significant lack of galaxies in comparison with standard
expectations
in the Local Void close to the
Local Group \citep{Peebles07,TK09}. But it is not yet clear whether this
fairly low galactic latitude region has been surveyed as closely as
other regions.
\item
Bulk flows are found over 100 Mpc scales that are about two standard deviations
larger than expected in $\Lambda$CDM \citep{FWH10}. The technique primarily
uses Tully-Fisher and Fundamental Plane galaxy calibrators of the distance
scale. An X-ray approach, calibrating via the kinetic \cite{SZ72} effect
(kSZE), claims the existence of a bulk flow out to 800 Mpc
\citep{Kashlinsky+10}. However the discrepancies with $\Lambda$CDM are
controversial because of possible systematics. A recent detection of kSZE
confirms pairwise bulk flows of clusters at $4\,\sigma$ and is consistent
with $\Lambda$CDM \citep{Hand+12}.
\end{enumerate}
Several of these issues may be linked. For example, the analysis of
\cite{Cappellari+12} that the IMF is
non-universal, with shallower (top-heavy) IMFs for galaxies of lower velocity
dispersion, can be linked with the known relations between velocity dispersion
and metallicity (e.g., \citealp{AHSL09}) to produce a relation between IMF
and metallicity, which goes in the right direction: low-metallicity systems
have top-heavy IMFs. Until now, observers assumed a universal IMF when
deriving stellar masses. They have therefore overestimated the stellar masses of
low-metallicity systems. We would like to think that this overestimation of
$M_*$ might explain at the same time the evolution of the cosmic SSFR and
that of galaxy sizes. Indeed, at high redshift, galaxies are expected to be
more metal-poor, and the overestimate of their typical stellar masses will
lead to an underestimate of their SSFRs, relative to those of lower-redshift
galaxies. Therefore, the cosmic SSFR may not saturate at high redshift, which
will make it easier to fit to models. At the
same time, if high redshift galaxies have lower stellar masses than inferred
from a universal IMF, then for a given stellar mass, they have larger sizes
than inferred, and the too rapid evolution of galaxy sizes (relative to
models) might disappear.
We propose that observers replace stellar mass by $K$-band rest-frame luminosity, which,
if properly measured, can serve as a useful proxy for stellar
mass, independently of any assumed IMF.
In summary, it is clear that many problems await refinements in theoretical understanding.
No doubt, these will come about eventually as numerical simulations of galaxy formation are refined to tackle subparsec scales.
We are grateful to A.~Cattaneo, B.~Famaey, A.~Graham, J.~Kormendy, P.~Kroupa, S.~McGaugh,
A.~Pontzen and A.~Tutukov for very useful comments.
\section{Introduction}
\label{sec:Intro}
Black hole thermodynamics has a variety of applications, from the insights of quantum gravity Gedankenexperiments to practical calculations in heavy ion collisions and condensed matter physics. And there are just as many formalisms for describing the thermodynamic behavior: the classical four laws of black hole mechanics, quantum fields on curved backgrounds, microstate counting by virtue of the Cardy formula, and the Euclidean path integral formulation, to name a few. Each approach has its benefits, but the path integral approach has proven especially useful for practical calculations at 0- and 1-loop level, particularly in the context of gauge/gravity dualities. This is due, in part, to its direct connection with the classical formulation of the gravitational theory. Given an action $I_{E}$ for the theory, every relevant field configuration contributes with weight $\exp(-I_{E})$. Of course, the identification and enumeration of the `relevant field configurations' may present a challenge.
Already in the early days of path integrals it was discovered that the most relevant class of paths behave like the `Weierstrass monster': they are continuous but not differentiable at any point. In a simple system, like a point particle in certain potentials, the contributions from these paths can be accounted for. But the prospect of summing contributions from non-differentiable field configurations seems overwhelming for a theory as complex as gravity. Instead, calculations that employ the gravitational path integral will often focus exclusively on the contributions from geometries that satisfy some smoothness conditions. This assumption may be justified by physically reasonable results, but the fact remains that smooth metrics account for only a fraction of the support of the path integral measure. The alternative is to require only continuity of the metric when performing path integral calculations, while relaxing the requirement of differentiability.
In this paper we make some modest progress in this direction by including metrics with conical singularities in the Euclidean path integral for a number of gravitational theories. Specifically, we consider the Euclidean partition function for a canonical ensemble defined inside a finite cavity. Field configurations in the ensemble satisfy certain boundary conditions at the wall of the cavity, where the system is coupled to an external thermal reservoir. In some cases (theories with asymptotically anti-de Sitter boundary conditions, for instance) the walls of the box can be removed to an asymptotic region without compromising the existence of the ensemble, but this is not always possible. Usually one computes the partition function for this ensemble by summing contributions from metrics that are regular everywhere. We will relax this condition and include certain configurations with a conical singularity. This ensemble could be referred to as the \emph{conical ensemble}, to distinguish it from the usual canonical ensemble. We then pose the following questions:
\begin{quote}
How do configurations with conical singularities contribute to the partition function? Can the ground state of the ensemble have a conical singularity?
\end{quote}
These questions can be answered quite generally for two-dimensional dilaton gravity. This class of models includes the spherically symmetric reduction of higher-dimensional theories that admit Schwarzschild, Schwarzschild-AdS, Reissner-Nordstr\"om, and BTZ black holes as solutions, as well as target space actions associated with certain string theory black holes.
How are conical singularities incorporated into the ensemble? In the semiclassical approximation the partition function is dominated by solutions of the classical equations of motion, with sub-leading contributions coming from smooth fluctuations of the fields around these configurations. We now wish to take into account geometries that are regular everywhere except for a single point. The dominant contributions would seem to come from configurations that ``almost'' extremize the action: they satisfy the equations of motion everywhere except for the singular point, similar to the `off-shell black holes' considered by Carlip and Teitelboim \cite{Carlip:1993sa}. The assumptions of our framework then require this point to sit either at the center of the cavity or at the horizon of a black hole. From the point of view of a higher dimensional model, a conical singularity at any point other than the center of the cavity would not be consistent with the spherically symmetric reduction of the action. In the context of two-dimensional dilaton gravity, the fact that the configurations satisfy the equations of motion everywhere except at a single point implies the existence of a certain Killing vector that forces the singularity to sit at the center of the cavity. Thus, the dominant contributions to the partition function from configurations with a conical singularity correspond to solutions of the equations of motion that break down at the horizon, where there is a delta-function singularity in the curvature.
In the canonical ensemble, one finds that the stable ground state of the system is typically either `hot empty space'\,\footnote{In the present context, `hot empty space' refers to a space (with asymptotics appropriate to the model) that does not contain a black hole, but nevertheless has a finite period for the Euclidean time such that the boundary conditions of the ensemble are satisfied. Familiar examples include `hot flat space' \cite{Gross:1982cv} and thermal AdS \cite{Hawking:1982dh}.}, a regular black hole, or some superposition of the two, depending on the boundary conditions. Even when it is not the ground state a black hole may exist for certain boundary conditions as a local minimum of the free energy, stable against small thermodynamic fluctuations. We find that the inclusion of conical singularities in the ensemble does not change either of these statements. When the boundary conditions allow a thermodynamically stable black hole in the canonical ensemble, the addition of a small conical defect -- a conical singularity with deficit or surplus angle $|\alpha| \ll 1$ -- may result in a lower internal energy or a higher density of states. In other words, small conical defects may be energetically or entropically favored over their smooth counterparts in the conical ensemble.
\begin{align}
E(\textrm{conical})-E(\textrm{smooth}) < 0 \textrm{\;is\;possible} \label{eq:intro1} \\
S(\textrm{conical})-S(\textrm{smooth}) > 0 \textrm{\;is\;possible} \label{eq:intro2}
\end{align}
These results are somewhat surprising because they seem to contradict our intuition that smooth black holes should be the most stable configurations. However, \eqref{eq:intro1} and \eqref{eq:intro2} are not possible simultaneously, and in fact the presence of a small conical defect always increases the free energy:
\eq{
F(\textrm{conical})-F(\textrm{smooth}) \sim \alpha^2 > 0\,, \,\,\, \textrm{\;for\;} \,\,|\alpha| \ll 1 ~.
}{eq:intro3}
We conclude that black holes that are stable against Gaussian fluctuations in the canonical ensemble are also stable against the nucleation of a small conical defect. This perturbative stability generalizes to a non-perturbative statement about the ground state of the system. With a few caveats, it appears that the ground state of the conical ensemble is always a smooth field configuration\,\footnote{For certain theories we find specific classes of solutions that are only marginally stable against the decay into conical defects, but these examples (the so-called `constant dilaton vacua') tend to suffer from other problems.}. In fact, the relationship between the ground state and the boundary conditions is the same as in the canonical ensemble. We provide general arguments as well as several examples demonstrating these features.
In the semiclassical approximation the partition function is dominated by the ground state of the system, which is a smooth geometry. Corrections to this leading behavior from small, smooth fluctuations around the ground state are well-understood and can be evaluated using a variety of techniques. Since configurations with a conical singularity do not dominate the partition function, it is natural to ask how their contributions compare to the corrections from smooth fluctuations. The action is generally a complicated function of the fields, so contributions from conical singularities cannot be evaluated analytically except in special cases. However, it is possible to evaluate the contributions numerically, and in the semiclassical limit the results are approximated to high precision by relatively simple functions of the boundary conditions that define the ensemble. When the ground state is a black hole, we find that the contributions to the partition function from configurations with a conical singularity are comparable to the contributions from Gaussian fluctuations. This suggests that, even in the semiclassical approximation, non-smooth field configurations make important sub-leading contributions to thermodynamic quantities.
It is important to point out that we view these contributions as logically distinct from the ``mass fluctuations'' that are already studied in the literature, see e.g.~\cite{Das:2001ic}, or from 1-loop quantum corrections derived by taking into account conical defects \cite{Solodukhin:1994yz}. Indeed, the reader may wonder whether the configurations we consider are not already included in those calculations. In the canonical ensemble the proper temperature is held fixed at the wall of the cavity, and hence black holes with a regular horizon are only present in the ensemble for isolated values of the mass (if they are present at all). Shifting the black hole mass away from these values necessarily introduces a conical singularity at the horizon, and this must be addressed before the role of these configurations in the ensemble can be understood. Thus, we assume that ``mass fluctuations'' in previous calculations like \cite{Das:2001ic} refer only to small, smooth fluctuations of the fields around a black hole background with a fixed horizon, as opposed to an actual variation in the mass of the black hole. On the other hand, 1-loop corrections depend explicitly on the precise matter content coupled to the gravitational theory, so they are not exclusively an intrinsic property of the black hole. See for instance \cite{Sen:2012dw}, Eqs.~(1.1), (1.2), and Refs.~therein. Thus, our approach of considering non-smooth geometric configurations that are on-shell everywhere except at the horizon is logically distinct from these two approaches. Nevertheless, as we will show, the leading corrections to quantities like the entropy take essentially the same form as they do in other approaches.
This paper is organized as follows. In section \ref{sec:1} we recall some basic results of two-dimensional dilaton gravity and black hole thermodynamics in the canonical ensemble. In section \ref{sec:2} we study black holes with a conical defect and evaluate the conical ensemble partition function in the semiclassical approximation. In section \ref{sec:2a} we address general features of thermodynamical observables, discuss stability issues, and give approximate expressions for the contributions to the partition function. In section \ref{sec:3} we provide several explicit examples, and in section \ref{sec:4} we point to applications and open problems.
Before proceeding we point out a few important conventions, which are the same as \cite{Grumiller:2007ju} in most cases. Euclidean signature is employed throughout. Nevertheless, terms appropriate to Lorentzian signature such as `Minkowski' and `horizon' are used when the meaning is clear. We use natural units with $G_{d+1} = c = \hbar = k_{B} = 1$, and the dimensionless Newton's constant in two dimensions is set to $8\pi G_2 = 1$. These constants are restored in certain expressions, when necessary.
\section{Preliminaries}
\label{sec:1}
In this section we recapitulate some of the main results of \cite{Grumiller:2007ju}. A reader familiar with these results and the notation may skip this section.
\subsection{Two-dimensional dilaton gravity action}
In this paper we study black hole (BH) thermodynamics in two-dimensional models of dilaton gravity. Dilaton gravity in two dimensions is conventionally described by the Euclidean action
\begin{align}\label{Action}
\Gamma[g,X] = & \,\, - \frac{1}{2} \int_{\MM} \nts \nts \dd^{\,2}x \,\sqrt{g}\, \left( \,X\,R - U(X)\,\left(\nabla X\right)^2 - 2 \, V(X) \, \right) \\ \nonumber
& \,\, - \int_{\dM} \bns \dd x \,\sqrt{\gamma} \, X\,K
+ \int_{\dM} \bns \dd x \,\sqrt{\gamma} \, \sqrt{w(X)e^{-Q(X)}} ~.
\end{align}
The dilaton $X$ is defined in terms of its coupling to the Ricci scalar, which takes the form $X R$. Different models are distinguished by the kinetic and potential functions $U(X)$ and $V(X)$, cf.~e.g.~\cite{Grumiller:2002nm,Grumiller:2006rc} for reviews. The bulk terms in the first line of \eqref{Action} are supplemented by boundary terms in the second line. The first boundary term is the analog of the Gibbons-Hawking-York surface integral \cite{York:1972sj, Gibbons:1976ue}, where $\gamma_{ab}$ is the induced metric on the boundary and $K$ is the trace of the extrinsic curvature. Including this term in the action ensures a Dirichlet boundary problem. The second boundary term is the holographic counterterm derived in \cite{Grumiller:2007ju}. It ensures a well-defined variational principle, so that the first variation of the action vanishes on solutions of the equations of motion for all variations that preserve the boundary conditions\,\footnote{On non-compact spaces, the first variation of the action \eqref{Action} without the holographic counterterm vanishes only for field variations with compact support. It is worth mentioning that the specific combination of $w(X)$ and $Q(X)$ appearing in the counterterm is the supergravity pre-potential \cite{Grumiller:2007cj,Grumiller:2009dx}.}. The functions $w(X)$ and $Q(X)$, which depend on $U(X)$ and $V(X)$, are defined below.
\subsection{Equations of motion and all classical solutions}
\label{subsec:EOMandCS}
The equations of motion are obtained by extremizing the action \eqref{Action} with respect to small variations of the fields that preserve the boundary conditions. This yields
\begin{gather} \label{MetricEOM}
U(X)\,\nabla_{\mu}X \nabla_{\nu}X - \frac{1}{2}\,g_{\mu\nu} U(X) (\nabla X)^2
- g_{\mu\nu} V(X) + \nabla_{\mu} \nabla_{\nu} X - g_{\mu\nu} \nabla^{2} X = 0 \\ \label{XEOM}
R + \partial_X U(X) (\nabla X)^2 + 2 \,U(X) \nabla^{2} X - 2 \,\partial_X V(X) = 0 ~.
\end{gather}
Solutions of these equations always possess at least one Killing vector $\partial_{\tau}$ with orbits that are curves of constant $X$ \cite{Schmidt:1989ws, Banks:1990mk}. Fixing the gauge so that the metric is diagonal, the solutions take the form
\eq{
X = X(r) \qquad \dd s^2 = \xi(X) \,\dd\tau^2 + \frac{1}{\xi(X)}\,\dd r^2
}{metric}
with
\begin{align}
\partial_r X = & \,\, e^{-Q(X)} \label{XPrimeDef} \\ \label{xiDef}
\xi(X) = & \,\, w(X) \, e^{Q(X)}\,\left( 1 - \frac{2\,M}{w(X)} \right) ~.
\end{align}
The solutions depend on an integration constant $M$, as well as two model-dependent functions $Q(X)$ and $w(X)$ that are given by integrals of $U(X)$ and $V(X)$
\begin{eqnarray}\label{QDef}
Q(X) & \defeq & Q_0 \, + \int^{X} \bns \dd\tilde{X} \, U(\tilde{X}) \\ \label{wDef}
w(X) & \defeq & w_0 -2 \, \int^{X} \bns \dd\tilde{X} \, V(\tilde{X}) \, e^{Q(\tilde{X})} ~.
\end{eqnarray}
The integrals are evaluated at $X$, with constants of integration $Q_0$ and $w_0$. Notice that $w_0$ and $M$ contribute to $\xi(X)$ in the same manner; together they represent a single parameter that has been partially incorporated into the definition of $w(X)$. By definition they transform as $w_0 \to e^{\Delta Q_0} w_0$ and $M \to e^{\Delta Q_0} M$ under the shift $Q_0 \to Q_0 + \Delta Q_0$. This ensures that the functions \eqref{XPrimeDef} and \eqref{xiDef} transform homogeneously, allowing $Q_0$ to be absorbed into a rescaling of the coordinates\,\footnote{Note that the counterterm in the action \eqref{Action} depends on $w_0$ but not on $Q_0$.}. Therefore, the solution depends on a single constant $w_0 + M$.
With an appropriate choice of $w_0$ we can restrict $M$ to take values in the range $M \geq 0$ for physical solutions. As evident from \eqref{metric} the norm of the Killing vector $\partial_{\tau}$ is $\sqrt{\xi(X)}$. If it vanishes we encounter a Killing horizon. Solutions with $M > 0$, which exhibit horizons, will be referred to as BHs.
If the function $V(X)$ happens to have a zero at $X_\ts{CDV}$ then there is a second, inequivalent family of solutions that also have the form \eqref{metric}. The dilaton and metric for these solutions are given by
\begin{eqnarray}
X & = & X_\ts{CDV} \label{XCDV} \\ \label{xiCDV}
\xi & = & \hat c + \hat a\, r - V'(X_\ts{CDV})\,r^2 ~,
\end{eqnarray}
where $\hat{c}$ and $\hat{a}$ are arbitrary constants. One can check directly that \eqref{XCDV} and \eqref{xiCDV} solve the equations of motion: for constant $X$, \eqref{MetricEOM} reduces to $V(X_\ts{CDV})=0$, while \eqref{XEOM} requires $R = 2\,\partial_X V(X_\ts{CDV})$, which holds because $R = -\partial_r^2\,\xi$ for metrics of the form \eqref{metric}. In most applications these solutions, which are characterized by a constant dilaton and Ricci scalar, are not relevant. We will generally ignore them, so references to ``generic solutions'' or ``all solutions'' should be understood to mean the solutions \eqref{XPrimeDef} and \eqref{xiDef}, parametrized by the mass $M$.
\subsection{Smooth Black Holes}
In the models we consider, solutions have a non-negative metric function $\xi(X)$ over a semi-infinite interval
\begin{equation}\label{Interval}
X_\text{min} \leq X < \infty ~,
\end{equation}
with the lower end of this interval corresponding to either the origin or a horizon, and the upper end corresponding to the asymptotic region of the space-time. At the upper end of the interval the function $w(X)$ generally diverges
\begin{equation}\label{wAsymptotic}
\lim_{X \to \infty} w(X) \to \infty~,
\end{equation}
so the asymptotic behavior of the metric is characterized by the solution with $M=0$.
If the metric function is strictly positive then the lower end of the interval \eqref{Interval} is just the value of the dilaton at the origin. But if $\xi(X)$ vanishes for some value of the dilaton, $X=X_h$, then the lower bound is a Killing horizon. Assuming that the function $e^{Q(X)}$ is non-zero for finite values of $X$, the location of the horizon is related to the parameter $M$ by
\begin{equation}\label{Horizon}
w(X_h) = 2M ~.
\end{equation}
If this condition admits multiple solutions then $X_h$ is always taken to be the outermost horizon, so that $w(X) > 2M$ for $X > X_h$.
For a field configuration to extremize the action it should satisfy the equations of motion at all points, and this imposes certain differentiability conditions on solutions. In particular, for solutions with $M \neq 0$ the horizon must be a regular point of the geometry. This fixes the periodicity $\tau \sim \tau + \beta$ of the Euclidean time,
which is given by \cite{Gegenberg:1994pv}
\begin{equation}\label{beta}
\beta = \left. \frac{4\pi}{\partial_r \xi}\, \right|_{r_h}
= \left. \frac{4\pi}{w'(X)} \,\right|_{X_h} ~.
\end{equation}
The inverse periodicity is related to the surface gravity of the BH by $2\pi \beta^{-1} = \kappa$. In an asymptotically flat space-time $\beta^{-1}$ is also the temperature measured by an observer at infinity, so we denote this quantity by $T$
\begin{equation}\label{T}
T = \frac{1}{\beta} = \left. \frac{w'(X)}{4\,\pi} \right|_{X_h} ~.
\end{equation}
This slight abuse of notation should not be confused with the proper local temperature $T(X)$, which is related to $\beta^{-1}$ by a dilaton-dependent `Tolman factor' \cite{Tolman:1934}
\begin{equation}\label{Tc}
T(X) = \frac{1}{\beta\,\sqrt{\xi(X)}} ~.
\end{equation}
The proper temperature at infinity coincides with \eqref{T} only if $\xi(X) \to 1$ as $X \to \infty$.
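It is instructive to keep a concrete example in mind. For the spherical reduction of four-dimensional Einstein gravity (a standard example in the literature; the normalizations $V(X)=-1/4$ and $Q_0 = w_0 = 0$ are our convenient choices) one finds
\begin{align}
U(X) = -\frac{1}{2X}\,, \quad V(X) = -\frac14 \quad & \Rightarrow \quad
e^{Q(X)} = \frac{1}{\sqrt{X}}\,, \quad w(X) = \sqrt{X}\,, \nonumber \\
\partial_r X = \sqrt{X} \;\Rightarrow\; X = \frac{r^2}{4} \quad & \Rightarrow \quad
\xi = 1 - \frac{2M}{\sqrt{X}} = 1 - \frac{4M}{r}\,, \nonumber
\end{align}
i.e.~Euclidean Schwarzschild with horizon radius $r_h = 4M$ (so $M$ is proportional to, but not identical with, the ADM mass in these units). Then \eqref{Horizon} gives $\sqrt{X_h} = 2M$, the smoothness condition \eqref{beta} yields $\beta = 4\pi/w'(X_h) = 4\pi r_h$, and \eqref{Tc} reproduces the familiar blue-shifted local temperature $T(X) = \beta^{-1}\left(1-r_h/r\right)^{-1/2}$, which coincides with \eqref{T} as $r \to \infty$ since $\xi \to 1$ there.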
A solution with $M=0$ has no horizon, and is therefore regular everywhere without having to place any conditions on the period of the Euclidean time. However, as we will see below, the boundary conditions of the canonical ensemble determine a unique, non-zero value for the otherwise arbitrary period. We will therefore refer to this solution, which does not contain a black hole but has a non-zero temperature, as `Hot Empty Space' (HES).
\subsection{Thermodynamics in the Canonical Ensemble}
To describe a consistent BH thermodynamics we must specify an ensemble and construct the appropriate partition function. Motivated by York's path integral formulation of the canonical ensemble \cite{York:1986it}, we introduce an upper bound $X_c$ on the interval \eqref{Interval}. This constrains the dilaton to a `cavity' $X \leq X_c$ whose wall is the dilaton isosurface $X=X_c$. Quantities evaluated at the wall or with an explicit dependence on $X_c$ will carry a subscript `$c$'.
Boundary conditions for the canonical ensemble are imposed by coupling the system to a thermal reservoir, which fixes a dilaton charge $D_c$ and the proper local temperature $T_c$ at the wall of the cavity\,\footnote{There is no unique dilaton charge in two dimensions: given any function of $X$, one can construct a current that yields that function as its conserved charge. For simplicity, we take the dilaton charge at the wall to be $D_c = X_c$, and refer to this boundary condition henceforth as fixing $X_c$. A detailed discussion can be found in \cite{Gibbons:1992rh}, or in section 3.1 of \cite{Grumiller:2007ju}.}. It is convenient to think of the boundary condition on the temperature as fixing the proper local period of the Euclidean time, which is just the inverse $\beta_c = T_{c}^{\,-1}$ of the proper local temperature. The proper local period is related to the period $\tau \sim \tau + \beta$ by
\begin{gather}\label{periodBC}
\beta_c := \beta\,\sqrt{\xi_{c}} ~.
\end{gather}
When combined with the smoothness condition \eqref{T} this becomes
\begin{gather}\label{SmoothPeriodBC}
\beta_c = \frac{4\pi}{w'(X_h)}\,\sqrt{\xi_{c}}~.
\end{gather}
This model-dependent (and often complicated) equation, which may or may not have solutions $M >0$, determines whether there are smooth black holes in the ensemble for given boundary conditions $\beta_c$ and $X_c$. Not all solutions of this equation are relevant: the upper bound on the dilaton implies that only solutions with $X_{h}(M) < X_c$ `fit' inside the cavity. Thus, any solutions $M$ of \eqref{SmoothPeriodBC} that lie in the range $0 \leq M < M^\ts{max}$ are elements of the canonical ensemble, where
\begin{gather}
M^\ts{max} = \frac{1}{2}\,w(X_c)
\end{gather}
corresponds to a black hole with horizon located at the wall of the cavity. One solution that almost always appears in the canonical ensemble is HES ($M=0$) with period fixed by the boundary condition \eqref{periodBC}
\begin{gather}\label{HESperiod}
\beta_\ts{HES} = \frac{\beta_c}{\sqrt{e^{Q_c}\,w_c}} ~.
\end{gather}
In most models the HES solution dominates the ensemble for at least some range of boundary conditions \cite{York:1986it, Hawking:1982dh}.
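For the Schwarzschild example above, the condition \eqref{SmoothPeriodBC} reads $\beta_c = 4\pi r_h \sqrt{1 - r_h/r_c}$ with $r_c = 2\sqrt{X_c}$, which admits two roots whenever $\beta_c < \beta_c^{\rm max} = 8\pi r_c/(3\sqrt{3})$, the maximum being attained at $r_h = 2r_c/3$. A minimal numerical sketch (Python, with our assumed normalizations) that recovers both roots is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def beta_c(rh, rc):
    # proper periodicity at the cavity wall, eq. (SmoothPeriodBC),
    # for the Schwarzschild model: beta*sqrt(xi_c) = 4*pi*rh*sqrt(1-rh/rc)
    return 4.0 * np.pi * rh * np.sqrt(1.0 - rh / rc)

rc = 10.0                            # cavity radius (arbitrary units)
rh_turn = 2.0 * rc / 3.0             # turning point of beta_c(rh)
target = 0.9 * beta_c(rh_turn, rc)   # a temperature admitting two black holes

# small and large black hole branches; both fit in the cavity (r_h < r_c)
rh_small = brentq(lambda rh: beta_c(rh, rc) - target, 1e-9, rh_turn)
rh_large = brentq(lambda rh: beta_c(rh, rc) - target, rh_turn, rc * (1.0 - 1e-12))
print(rh_small, rh_large)
\end{verbatim}
Only the larger root turns out to be locally stable, cf.~the discussion of the specific heat below.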
With these boundary conditions the partition function of the canonical ensemble is given by the Euclidean path integral
\begin{equation}\label{PartitionFunction}
\ZZ = \int_{_{X_c, T_c}} \bns \bns \CurlD g \CurlD X \, \exp\left(-\Gamma[g,X] \right) ~.
\end{equation}
For now we will take the path integral to include all smooth spaces $(\MM,g)$ and dilaton configurations $X$ that satisfy the boundary conditions, but we will relax the smoothness requirement in the next section. In the semi-classical limit the dominant contribution to the Euclidean path integral comes from the minimum of the action. The minimum is of course a stationary point of the action -- either a black hole with $M > 0$ satisfying \eqref{SmoothPeriodBC} (if such a solution exists), or HES with period \eqref{HESperiod}. So the action for smooth field configurations near the minimum can be written as
\begin{align}\label{Min}
\Gamma[g_\ts{min} + \delta g, X_\ts{min} + \delta X] = & \,\, \Gamma[g_\ts{min},X_\ts{min}] + \delta \Gamma[g_\ts{min}, X_\ts{min}; \delta g, \delta X] \\ \nonumber
& \,\, + \frac12 \,\delta^2\Gamma[g_\ts{min}, X_\ts{min}; \delta g, \delta X] + \ldots
\end{align}
where $\delta \Gamma$ and $\delta^2 \Gamma$ are the linear and quadratic terms in the Taylor expansion. In \cite{Grumiller:2007ju} it was shown that the leading term is finite and the linear term vanishes for all field variations $\delta g$, $\delta X$ consistent with the boundary conditions. Since the solution is assumed to be a minimum (as opposed to a saddle point) the quadratic term is positive definite and the semi-classical approximation of the path integral is given by
\begin{equation}\label{ApproxPF}
\ZZ \approx \exp\left(-\Gamma[g_{\ts{min}},X_{\ts{min}}]\right) \times (\text{Quadratic Corrections})~,
\end{equation}
where the second factor comes from performing the (Gaussian) integral over the quadratic terms in \eqref{Min}.
The partition function exhibits qualitatively different behavior depending on whether the minimum of the action is HES or a black hole. For a particular model the ground state can be readily determined from the values of the boundary conditions: one simply identifies the solutions of the equations of motion that belong to the ensemble, and then determines which solution has the smallest action for the given values $\beta_c$ and $X_c$. Evaluating the action \eqref{Action} for the solutions \eqref{XPrimeDef} and \eqref{xiDef} gives
\eq{
\Gamma_c(M)=\beta_c \sqrt{w_ce^{-Q_c}}\left(1-\sqrt{1-\frac{2M}{w_c}}\,\right)-2\pi\,X_h(M) ~,
}{eq:gammaM}
which is bounded below for finite $X_c$.
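In the Schwarzschild example (with the normalizations chosen earlier, which differ from the four-dimensional conventions of \cite{York:1986it} by overall factors) this takes the explicit form
\begin{equation}
\Gamma_c(M) = \frac{\beta_c\, r_c}{2}\left(1 - \sqrt{1 - \frac{r_h}{r_c}}\,\right) - \frac{\pi\, r_h^{\,2}}{2}\,,
\qquad r_h = 4M\,, \quad r_c = 2\sqrt{X_c}\,.
\end{equation}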
For HES with $M = 0$, the on-shell action \eqref{eq:gammaM} becomes
\begin{gather}\label{HESaction}
\Gamma_c(0) = -2\pi\,X_0 ~,
\end{gather}
where $X_0$ is the value of the dilaton at the origin (in most of the models we consider, $X_0 = 0$). If \eqref{HESaction} is less than \eqref{eq:gammaM} for all relevant solutions of \eqref{SmoothPeriodBC}, then HES is the ground state of the ensemble. On the other hand, if there is a solution of \eqref{SmoothPeriodBC} such that \eqref{eq:gammaM} is less than \eqref{HESaction}, then the ground state of the ensemble is a black hole. If the values of $\beta_c$ and $X_c$ are changed then the ground state of the ensemble may change as well, in which case the system will undergo a phase transition involving the nucleation of the new ground state -- either a stable black hole or HES. Assuming that there is a single minimum of the action\,\footnote{For special values of the boundary conditions there may be multiple values of $M$ that minimize the action. For instance, it may be possible to tune $\beta_c$ and $X_c$ so that HES and a black hole both have the same action.}, the dominant semiclassical contribution to the free energy $F_c = - T_c \,\ln{\ZZ}$ is given by
\begin{equation}\label{ZFRelation}
F_c (T_c, X_c) \simeq T_c \, \Gamma_c(M) = \sqrt{w_c\,e^{-Q_c}}\,\left(1 - \sqrt{1-\frac{2\,M}{w_c}}\right)- 2\pi\,X_{h}(M) \, T_c ~.
\end{equation}
From this result it is possible to derive all thermodynamical properties of interest, like the entropy and internal energy
\begin{align}
S := & \, - \left(\frac{\partial F_{c}}{\partial T_c}\right)_{X_c} = 2\pi X_{h}(M) \label{eq:entropy} \\
E_{c} := & \, F_{c} + T_{c}\,S = \sqrt{w_{c}\,e^{-Q_{c}}}\,\left(1 - \sqrt{1-\frac{2\,M}{w_{c}}}\,\right) ~.
\end{align}
A comprehensive discussion of thermodynamical properties is provided in \cite{Grumiller:2007ju}.
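As a quick consistency check (our addition), these expressions satisfy the first law $\dd E_c = T_c\, \dd S$ at fixed $X_c$:
\begin{equation}
\frac{\partial E_c}{\partial M} = \sqrt{\frac{e^{-Q_c}}{w_c}}\left(1 - \frac{2M}{w_c}\right)^{-1/2} = \frac{1}{\sqrt{\xi_c}} = T_c\,\frac{\partial S}{\partial M}\,,
\end{equation}
where the last equality uses $\partial_M X_h = 2/w'(X_h)$ from \eqref{Horizon}, together with $T_c = \beta_c^{-1}$ and \eqref{SmoothPeriodBC}.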
A key assumption in the derivation of \eqref{ApproxPF} is that the quadratic term in \eqref{Min} must be positive definite. This is just the thermodynamic stability condition that the ground state have positive specific heat at constant dilaton charge. The specific heat at constant $X_c$ is
\begin{gather}
C_{c} \defeq \frac{\partial E_c}{\partial T_c}\bigg|_{X_c} = T_c \, \frac{\partial S}{\partial T_c}\bigg|_{X_c}
\qquad \textrm{with} \quad E_c = F_c + T_c \, S ~,
\end{gather}
which yields
\begin{gather}\label{CD}
C_{c} = \frac{4\pi\,w_{h}'\,(w_c - 2M)}{2\,w_{h}''\,(w_c - 2M) + (w_{h}')^2} ~.
\end{gather}
Thus, given boundary conditions $X_c$ and $T_c$, a canonical ensemble dominated by a black hole exists if the minimum of the action is a solution $0 < M < M^\ts{max}$ of \eqref{SmoothPeriodBC}, and
\begin{gather}\label{CDinequality}
w_{h}'' > - \frac{(w_{h}')^2}{2\,(w_c - 2 M)}
\end{gather}
so that the specific heat \eqref{CD} is positive. An important point is that for some theories this inequality can only be realized for finite $X_c$. A theory whose boundary conditions and solutions have $w''(X_h) < 0$ will not have positive specific heat as $X_c \to \infty$, since $w_c \to \infty$ in this limit. In that case a finite cavity is required for the existence of the canonical ensemble. The classic example of this phenomenon is the Schwarzschild black hole in a theory with asymptotically flat boundary conditions \cite{York:1986it}. On the other hand, the right-hand side of \eqref{CDinequality} is strictly negative, so a finite cavity is not required for a theory with boundary conditions and solutions such that $w''(X_h) > 0$. In that case the specific heat remains positive as $X_c \to \infty$. If the action of the black hole is less than the action for HES in this limit, then there is a canonical ensemble with black hole ground state that does not require an external thermal reservoir. The expression \eqref{CD} for the specific heat will play an important role in later sections.
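The stability criterion is equally easy to check numerically. The sketch below evaluates \eqref{CD} for the same illustrative model $w(X) = 2\sqrt{X}$ (our choice, not the text's); since $w'' < 0$ here, only the larger of the two smooth masses found above is stable, in line with the finite-cavity discussion.
\begin{verbatim}
import numpy as np

def C_c(M, Xc):
    # Specific heat at constant dilaton charge, eq. (CD), for w(X) = 2 sqrt(X)
    Xh  = M**2                      # X_h = w^{-1}(2M)
    wp  = Xh**(-0.5)                # w'(X_h)
    wpp = -0.5*Xh**(-1.5)           # w''(X_h)
    wc  = 2.0*np.sqrt(Xc)
    return 4*np.pi*wp*(wc - 2*M)/(2*wpp*(wc - 2*M) + wp**2)

# the two smooth masses for beta_c = 2, X_c = 100 (cf. the previous sketch):
print(C_c(9.99747, 100.0) > 0, C_c(0.15916, 100.0) > 0)   # True False
\end{verbatim}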
This concludes our review of thermodynamics in the Euclidean path integral formalism for two-dimensional dilaton gravity. In the rest of the paper we weaken the assumption of differentiability by considering continuous metrics that no longer satisfy the condition \eqref{beta}.
\section{Black holes with conical defect}
\label{sec:2}
Let us now reconsider the Euclidean partition function \eqref{PartitionFunction}. In the previous section we included contributions from smooth field configurations that satisfy the boundary conditions. Then the leading term in the semi-classical approximation of the canonical partition function is
\eq{
\ZZ \approx \exp\Big(-\Gamma_{c}(M)\Big) ~,
}{eq:con101}
with $M$ being the mass of the smooth solution that minimizes the Euclidean action and $\Gamma_c$ the on-shell action \eqref{eq:gammaM}. We assume as before that the absolute minimum of the action occurs for a single value of $M$; otherwise \eqref{eq:con101} contains a sum over all values of $M$ that minimize the action. We may rewrite \eqref{eq:con101} as
\eq{
\ZZ \sim \int\limits_{0}^{M^\ts{max}} \nts \dd\hat{M} \,\delta(\hat{M} - M)\,\exp\Big(-\Gamma_c(\hat{M})\Big)
}{eq:con102}
where $0 \leq \hat{M} < M^\ts{max}$ runs over the physically allowed values of the mass -- subject to the condition $X_{h}(\hat{M}) < X_c$ -- and the delta-function picks out the value $\hat{M} = M$ of the smooth solution that minimizes the action.
Now suppose that we enlarge the class of field configurations that contribute to $\ZZ$ by relaxing the assumption of smoothness. Instead of imposing the condition \eqref{beta} for the period, we allow for metrics that are continuous but not differentiable at the center of the cavity. In the semiclassical limit, the largest contributions to the partition function from this sector are expected to come from configurations that `almost' meet the conditions for stationary points of the action: they satisfy the equations of motion at all points except for the horizon, where there is a conical singularity. Assuming the action is well-defined for these backgrounds, the contributions to $\ZZ$ take the form \eqref{eq:con102} without the delta-function\,\footnote{In later sections we will refine the measure of this integral, expressing the contributions to $\ZZ$ as an integral over internal energy $\dd E_{c}$ with the weight given by $\exp(-\beta_{c}\,F_{c}(\hat{M})) = \exp(S)\,\exp(-\beta_{c}\,E_{c})$.}
\eq{
\ZZ \approx \int\limits_{0}^{M^\ts{max}} \nts \dd\hat{M} \,\exp\Big(-\Gamma_c(\hat{M})\Big) ~.
}{eq:con102b}
In the rest of this section we will consider the properties of the `conical defect' black holes that contribute to this integral, and evaluate the action $\Gamma_{c}(\hat{M})$ that appears in the exponent.
\subsection{Classical black hole solutions with conical defect}
Black hole field configurations that satisfy the boundary conditions but \emph{not} the smoothness condition \eqref{beta} have the same general form as before
\eq{
X = X(r)\,, \qquad \dd s^2 = \xi(\hat{M}, X) \,\dd\tau^2 + \frac{1}{\xi(\hat{M},X)}\,\dd r^2\,,
}{metricc}
with period $\tau \sim \tau + \tilde{\beta}$, and the functions $X$ and $\xi$ given by
\begin{eqnarray}
\partial_r X & = & e^{-Q(X)} \label{XPrimeDefc} \\ \label{xiDefc}
\xi(\hat{M},X) & = & w(X) \, e^{Q(X)}\,\left( 1 - \frac{2\,\hat M}{w(X)} \right) ~.
\end{eqnarray}
The location of the horizon, $X_{h}(\hat{M}) < X_c$, is determined as before:
\eq{
X_{h}(\hat{M}) = w^{-1}(2\hat M) ~.
}{eq:con202}
We will sometimes denote a function's dependence on $\hat{M}$ (as opposed to the values $M$ that satisfy \eqref{SmoothPeriodBC}, which typically comprise a discrete set) with a `hat', abbreviating $X_{h}(\hat{M})$ as $\hat{X}_{h}$ and $\xi(\hat{M},X)$ as $\hat{\xi}(X)$.
By assumption, most values of the parameter $\hat{M}$ do not correspond to smooth black holes -- the condition \eqref{SmoothPeriodBC} is not satisfied. This means that the periodicity of the Euclidean time is not equal to the period required for regularity at the horizon. Instead, the period $\tau\sim\tau+\tilde\beta$ is determined by the boundary condition \eqref{periodBC} and the parameter $\hat M$,
\eq{
\tilde\beta = \frac{\beta_{c}}{\sqrt{\hat{\xi}_{c}}} ~.
}{eq:con201}
In other words, the period $\tilde\beta$ does not agree with $\hat\beta$, defined as
\begin{gather}
\hat{\beta} := \frac{4\pi}{w'(X)}\bigg|_{\hat{X}_{h}} ~.
\end{gather}
As a result, these spaces exhibit a conical singularity. The deficit (or surplus) angle $\alpha$ associated with the defect is
\eq{
\alpha \,:=\, 2\pi \,\frac{\hat\beta-\tilde\beta}{\hat\beta} \, = \, 2\pi\,\left(1 - \frac{\beta_c}{\hat\beta_c}\,\right) ~,
}{eq:con203}
where $\hat\beta_c := \hat\beta\,\sqrt{\hat\xi_c}$\,. If $\hat\beta > \tilde\beta$ then $\alpha$ is positive, and it represents a true deficit in the period of Euclidean time. Otherwise, if $\alpha$ is negative there is a surplus in the period. For convenience we will always refer to $\alpha$ as the `deficit angle', though it may be positive or negative.
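For a quick numerical feel for \eqref{eq:con203}, the sketch below (same illustrative model and parameters as in the previous sketches, all our own choices) evaluates $\alpha$ for a few values of $\hat{M}$; it vanishes only at the smooth masses.
\begin{verbatim}
import numpy as np

def alpha(M_hat, beta_c, Xc):
    # Deficit angle, eq. (con203); for w(X) = 2 sqrt(X) one has
    # beta_hat_c = (4 pi/w'(X_h)) sqrt(xi_c) = 4 pi M_hat sqrt(1 - 2 M_hat/w_c)
    wc = 2.0*np.sqrt(Xc)
    beta_hat_c = 4*np.pi*M_hat*np.sqrt(1 - 2*M_hat/wc)
    return 2*np.pi*(1 - beta_c/beta_hat_c)

print(alpha(9.99747, 2.0, 100.0))   # ~0: smooth solution, no defect
print(alpha(5.0, 2.0, 100.0))       # positive: a genuine deficit
\end{verbatim}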
An important distinction between spaces with a conical defect and smooth solutions of the equations of motion is that $\hat{M}$ is a continuous parameter that is independent of the boundary conditions $\beta_c$ and $X_c$. The only conditions on $\hat{M}$ are that it should lie in the range associated with physical solutions, $\hat{M} \geq 0$, and the horizon should fit inside the cavity, $X_{h}(\hat{M}) < X_c$. The discrete set of masses $M$ that correspond to smooth solutions are determined by the condition \eqref{SmoothPeriodBC}, which implies a (potentially complicated) dependence on $\beta_c$ and $X_c$.
\begin{figure}
\centering
\includegraphics{plot.pdf}
\caption{Euclidean BH geometries with positive (outer), vanishing (middle), and negative (inner) deficit angles.}
\end{figure}
\subsection{Euclidean action}
In the presence of a conical singularity the action must be evaluated carefully, taking into account the behavior of the metric at the horizon. This is accomplished using a smoothing procedure that regulates the defect \cite{Farhi:1989yr,Hayward:1993my,Brill:1994mb,Fursaev:1995ef}. When the action is evaluated for a black hole with conical singularity the result is independent of the details of the smoothing. This suggests that the action \eqref{Action}, without any modifications, is appropriate for weighting contributions to the partition function from these spaces, as in \eqref{eq:con102b}.
For our purposes, a conical singularity at a point $p$ on the interior of $\MM$ can be thought of as introducing a delta-function in the curvature, in the sense that the integral of the Ricci scalar over $\MM$ is
\begin{gather}
\int_{\MM} \nts \dd^{2}x \,\sqrt{g} \, R = 2 \,\alpha + \int_{\MM/p} \bns\nts \dd^{2}x \,\sqrt{g}\, R ~,
\end{gather}
where $\alpha$ is the deficit angle and $\MM/p$ is the manifold $\MM$ with the point $p$ removed.
The spaces described in the previous section have a conical defect at the horizon $\hat{X}_h$, so we can write the action as the usual functional on $\hat{\MM}$ -- the manifold $\MM$ with the singular point removed -- plus the contribution from the defect
\begin{align}\label{Actionc}
\Gamma[g,X] = & \,\, - \frac{1}{2}\,\int_{\hat\MM} \nts \dd^{\,2}x \,\sqrt{g}\, \left[ X\,R - U(X)\,\left(\nabla X\right)^2 - 2 \, V(X) \right] \\ \nonumber
& \,\, - \int_{\dM} \bns \dd x \,\sqrt{\gamma} \, X\,K + \int_{\dM} \bns \dd x \,\sqrt{\gamma} \, \sqrt{w(X)e^{-Q(X)}} - \hat{X}_h\,\alpha ~.
\end{align}
For $V(X)=0$ and constant dilaton the resulting functional of the metric is, up to an overall factor of $-2\pi X$, the Gauss-Bonnet formula for a compact Euclidean manifold $\MM$ with boundary $\dM$ and a deficit angle $\alpha$.
With the results above the on-shell action for a black hole with a conical singularity at the horizon is
\eq{
\Gamma_c(\hat M)=\beta_{c}\,\sqrt{w_{c}\,e^{-Q_c}}\,\Bigg(\,1-\sqrt{1-\frac{2\hat M}{w_c}}\,\Bigg)-2\pi\,X_{h}(\hat{M}) ~.
}{eq:gammac}
The free energy, which encodes all thermodynamical properties of interest, is then given by
\eq{
F_{c}(\hat{M})\, = \, T_c \, \Gamma_{c}(\hat{M}) \, = \, \sqrt{w_{c}\,e^{-Q_c}}\,\Bigg(\,1-\sqrt{1-\frac{2\hat M}{w_c}}\,\Bigg)-2\pi\,X_{h}(\hat{M})\, T_c ~.
}{eq:F}
When considering the role of black holes with a conical singularity in the ensemble it is useful to compare \eqref{eq:gammac} to the action for a smooth solution of the equations of motion. The difference between the actions is given by
\begin{align}\label{eq:deGa}
\Delta\Gamma := &\, \Gamma_c(\hat M)-\Gamma_c(M) \\
= &\, \beta_c \, \sqrt{w_{c}\,e^{-Q_c}}\,\left(\,\sqrt{1 - \frac{2\,M \vphantom{\hat{M}}}{w_c}} - \sqrt{1-\frac{2\,\hat{M}}{w_{c}}} \,\right) + 2 \pi \big(X_{h}(M) - X_{h}(\hat{M})\big)
\end{align}
This result will be useful in the next section.
\section{Thermodynamics and stability}\label{sec:2a}
We investigate now the thermodynamical properties of field configurations with a conical defect, and compare their role in the conical ensemble to that of smooth solutions. For now we assume that the ensemble contains among the smooth field configurations a black hole with mass $M$, horizon $X_{h}(M)$, and positive specific heat. Thus, the black hole is at least a local minimum of the action among smooth spaces, though it need not be the absolute minimum of the action. It is useful to define a quantity $\delta$ that relates the dilaton at the conical defect to the dilaton at the horizon of the smooth solution
\eq{
X_{h}(\hat{M}) = X_{h}(M) +\delta ~.
}{eq:con12}
After deriving expressions for the entropy and internal energy, we consider the case
\eq{
|\delta| \ll X_h ~,
}{eq:pert}
where the thermodynamic properties of conical defects can be analyzed perturbatively. Later in this section we derive results that are valid non-perturbatively, in particular concerning the stability of smooth configurations and the ground state of the ensemble.
\subsection{Entropy and Internal Energy}
In two-dimensional dilaton gravity the entropy of smooth black holes takes the universal form \eqref{eq:entropy}. This result, which is independent of both the details of the theory and the size of the cavity, generalizes to black holes with a conical singularity
\begin{gather}\label{eq:con20}
S(\hat{M}) := - \left(\frac{\partial F_{c}(\hat{M})}{\partial T_c}\right)_{X_c} = \,2\pi\,X_{h}(\hat{M}) ~.
\end{gather}
In terms of the parameter $\delta$ this becomes
\begin{gather}\label{Sdelta}
S(\hat{M}) = S(M) + 2\pi\,\delta ~,
\end{gather}
and we see that the entropy may either be greater than or less than the entropy of the smooth black hole, depending on the sign of $\delta$.
The internal energy is related to the entropy, temperature, and free energy by $E_{c} = F_{c} + T_{c}\,S$. The subscript `c' is retained here to emphasize that, unlike the entropy, the internal energy depends explicitly on the size of the cavity. Applying the results for the entropy \eqref{eq:con20} and the free energy \eqref{eq:F} gives
\begin{gather}\label{InternalEnergy}
E_{c}(\hat{M}) = \sqrt{w_{c}\,e^{-Q_{c}}}\,\left( 1 - \sqrt{1 - \frac{2\,\hat{M}}{w_c}}\,\right) = \sqrt{w_{c}\,e^{-Q_{c}}}\,\left( 1 - \sqrt{1 - \frac{w(X_{h} + \delta)}{w_c}}\,\right) ~.
\end{gather}
Like the entropy, the internal energy of a black hole with a conical singularity may be either greater than or less than that of the smooth black hole.
\subsection{Perturbative stability}
Based on the results above, a black hole with a conical singularity can have higher entropy or lower internal energy than a smooth black hole. However, to determine whether these black holes are favored in the conical ensemble one must consider the free energy. We will first address this for small $\delta$; i.e., $|\delta| \ll X_h$. In this limit the deficit angle is
\begin{gather}\label{SmallDefect}
\alpha = - \frac{4\pi^{2}}{C_{c}}\,\delta + \mathcal{O}(\delta^{2}) ~,
\end{gather}
so positive $\delta$ corresponds to a surplus, and negative $\delta$ represents a deficit.
The expression \eqref{eq:con20} for the entropy is linear in $X_{h}(\hat{M})$, so no expansions are needed when $\delta$ is small. On the other hand, the internal energy is a non-linear and potentially complicated function of $X_{h}(\hat{M})$. Expanding \eqref{InternalEnergy} for small $\delta$ gives
\begin{gather}\label{Edelta}
E_{c}(\hat{M}) \simeq E_{c}(M) + 2\pi\,T_c\,\delta + \frac{2\pi^{2}\,T_c}{C_{c}}\,\delta^{2} + \mathcal{O}(\delta^{3}) ~.
\end{gather}
So the internal energy of the conical defect may be greater or less than the internal energy of the smooth black hole, and at leading order this is controlled by the sign of $\delta$. Notice, however, that the term at order $\mathcal{O}(\delta^{2})$ is strictly positive. This is crucially important when we consider the free energy $F_{c}(\hat{M})$. Using \eqref{Sdelta} and the expansion \eqref{Edelta} we obtain
\begin{align}
F_{c}(\hat{M}) = E_{c}(\hat{M}) - T_{c}\,S(\hat{M}) \simeq F_{c}(M) + \delta^{2}\,\frac{2\pi^{2} T_{c}}{C_{c}} + \mathcal{O}(\delta^3) ~.
\end{align}
Expressed in terms of the deficit angle this is
\begin{gather}
F_{c}(\hat{M}) - F_{c}(M) \,\simeq\, T_{c}\,\frac{C_{c}}{8 \pi^{2}}\,\alpha^{2} \qquad |\alpha| \ll 1 ~.
\end{gather}
Thus, the free energy of a smooth black hole with $C_c > 0$ is always smaller than the free energy of a nearby ($|\delta| \ll X_{h}(M)$) black hole with conical singularity. In terms of the internal energy and entropy this implies
\begin{gather}
E_{c}(\hat{M}) - E_{c}(M) \,\ge\, T_c\, \left( S(\hat{M}) - S(M) \right) ~.
\end{gather}
In other words, the presence of a small conical defect can lower the internal energy compared to a smooth black hole, but the corresponding decrease in the density of states prevents the ensemble from favoring such configurations. Likewise, a conical defect can have a larger entropy than a smooth black hole, but the cost in internal energy is too high for these configurations to be favored by the ensemble.
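The perturbative statement is easy to verify numerically. A minimal sketch (again the illustrative $w(X) = 2\sqrt{X}$ model with our parameter choices) compares $F_{c}(\hat{M}) - F_{c}(M)$ against $T_{c}\,C_{c}\,\alpha^{2}/8\pi^{2}$ for a small defect:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

beta_c, Xc = 2.0, 100.0
Tc, wc = 1/beta_c, 2*np.sqrt(Xc)                 # w(X) = 2 sqrt(X)

F = lambda Mh: 2*np.sqrt(Xc)*(1 - np.sqrt(1 - 2*Mh/wc)) - 2*np.pi*Mh**2*Tc
M = brentq(lambda M: beta_c - 4*np.pi*M*np.sqrt(1 - 2*M/wc), 8.0, 10.0 - 1e-12)

wp, wpp = 1/M, -0.5/M**3                         # w', w'' at X_h = M**2
Cc = 4*np.pi*wp*(wc - 2*M)/(2*wpp*(wc - 2*M) + wp**2)

delta = 1e-3
M_hat = np.sqrt(M**2 + delta)                    # X_h(M_hat) = X_h(M) + delta
a = 2*np.pi*(1 - beta_c/(4*np.pi*M_hat*np.sqrt(1 - 2*M_hat/wc)))
print(F(M_hat) - F(M), Tc*Cc*a**2/(8*np.pi**2))  # agree at leading order
\end{verbatim}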
\subsection{Non-perturbative stability}
The previous section considered black holes with a small conical defect. In this section the assumption $|\delta| \ll X_{h}$ is dropped, which means that quantities like $\Delta \Gamma_c$ cannot be evaluated perturbatively. Nevertheless, it is still possible to show that the minimum of the action does not have a conical defect. If the absolute minimum of the action among smooth spaces is a solution with mass $M$, then $\Gamma_{c}(\hat{M}) > \Gamma_{c}(M)$ for any $\hat{M}$ with a conical singularity. The ensemble always has a smooth ground state that is stable against decay into a space with conical defect.
First let us illustrate our reasoning with a simple class of examples: theories that allow the $X_c \to \infty$ limit. The existence of the ensemble in this limit is addressed in the next section; for now let us assume that we are working with a model where taking $X_c \to \infty$ is allowed. Then as the cavity wall is removed \eqref{eq:deGa} becomes
\eq{
\lim_{X_c\to\infty}\Delta\Gamma = 2\pi\, \frac{\delta}{w^{\prime}(X_{h})} \left(\frac{w(X_h+\delta)-w(X_h)}{\delta}-w^{\prime}(X_{h})\right) ~.
}{eq:con21}
The condition $\Delta\Gamma\geq 0$ becomes a convexity condition on the function $w$. An even stronger statement is obtained by considering extrema of $\Delta\Gamma$. For very large $X_c$ the condition
\eq{
\frac{\dd\Delta\Gamma}{\dd\,\delta}=0
}{eq:con15}
simplifies to
\eq{
w^\prime(X_h) = w^\prime(X_h+\delta)
}{eq:con16}
But this implies that any extremum of $\Delta\Gamma$ has to have the same periodicity as the configuration without conical defect. While more than one such extremum may exist (cf.~the discussion about how to extract the mass $M$ from $T_c$ and $X_c$ in \cite{Grumiller:2007ju}), none of them exhibits a conical singularity.
For finite values of $X_c$ it is easier to work directly with the action \eqref{eq:gammac} for a space with conical singularity. Extremizing the action with respect to $\hat{M}$ gives
\begin{gather}
\beta_{c} = \frac{4\pi}{w'(\hat{X}_{h})}\,\sqrt{\hat{\xi}_{c}} ~,
\end{gather}
which means that extrema occur at precisely those values of $\hat{M}$ that correspond to smooth solutions of the equations of motion. Of course, it is possible that these extrema are local, and the absolute minimum of the action occurs at one of the endpoints of the interval $0 \leq \hat{M} \leq M^\ts{max}$. Indeed, in most models there is a range of boundary conditions where the minimum of the action occurs at the lower limit. But $\hat{M} = 0$ is just HES, which is a smooth solution of the equations of motion. The other possibility -- that the absolute minimum of the action occurs at $M^\ts{max} = w(X_c)/2$ -- can be ruled out quite generally. Consider the derivative $\partial \Gamma_{c}(\hat{M})/\partial \hat{M}$ as $\hat{M} \to M^\ts{max}$ from below
\begin{gather}\label{extrema1}
\frac{\partial\,\Gamma_{c}(\hat{M})}{\partial\,\hat{M}} = \frac{\beta_{c}}{\sqrt{\hat{\xi}_{c}}} - \frac{4\pi}{w'(\hat{X}_{h})} ~.
\end{gather}
For non-zero $\beta_c$ the first term is positive and diverges as $(M^\ts{max} - \hat{M})^{-1/2}$. Unless $w'(X)$ happens to have a zero at $X_c$, the first term dominates and the action is \emph{increasing} as $\hat{M}$ approaches $M^\ts{max}$. We conclude that the action is always minimized by a smooth solution of the equations of motion: either HES or a smooth black hole.
Note that it is possible for a conical singularity to have a smaller action than a smooth black hole, as long as that black hole is not the ground state of the ensemble. This includes black holes that are thermodynamically stable ($C_{c}>0$), but only a local minimum of the action. In that case there will necessarily be conical singularities close to the ground state that have a smaller action than any local minimum. But a smooth black hole (or HES, for that matter) will never tunnel quantum mechanically to a final configuration with a defect, because the ground state of the ensemble is necessarily smooth.
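The same conclusion can be seen by brute force: scanning $\Gamma_{c}(\hat{M})$ over the full mass range and checking the smoothness condition at the minimum. The sketch below does this for the illustrative model used above (all choices ours).
\begin{verbatim}
import numpy as np

beta_c, Xc = 2.0, 100.0
wc = 2*np.sqrt(Xc)                               # w(X) = 2 sqrt(X)
Gamma = lambda Mh: beta_c*2*np.sqrt(Xc)*(1 - np.sqrt(1 - 2*Mh/wc)) \
                   - 2*np.pi*Mh**2               # eq. (gammac), X_h = Mh**2

Mh = np.linspace(0.0, wc/2*(1 - 1e-9), 200001)
i = np.argmin(Gamma(Mh))
defect = beta_c - 4*np.pi*Mh[i]*np.sqrt(1 - 2*Mh[i]/wc)
print(Mh[i], defect)   # defect ~ 0 up to grid spacing: the minimum is smooth
\end{verbatim}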
\subsection{Constant dilaton vacua}
The discussion up to this point has involved generic solutions of the equations of motion, but neglected the constant dilaton vacua (CDV) \eqref{XCDV}-\eqref{xiCDV} that may exist for some dilaton gravity models. These isolated solutions occupy a different superselection sector of the theory, so there is no perturbative channel for a BH -- with or without a deficit angle -- to decay into a CDV. However, in cases where the boundary conditions happen to coincide with a zero of the dilaton potential, $V(X_c) = 0$, tunneling between the two types of solutions is possible. A detailed discussion can be found in \cite{Grumiller:2007ju}.
Since we have extended the class of BH solutions to configurations with a deficit angle, it is appropriate to do the same for CDVs. Here we discuss these solutions and evaluate their free energy. The on-shell action can be calculated using \eqref{Actionc}, which gives
\eq{
\hat{\Gamma}_{CDV} = -2\pi X_0 + \tilde{\beta}\,\sqrt{\hat{\xi}_c} \, \sqrt{e^{-Q(X_0)}w(X_0)} ~.
}{eq:con30}
(If we drop the assumption that spacetime is topologically a disk the first term in \eqref{eq:con30} is multiplied by the Euler characteristic of the manifold.)
The difference between this action and the action for a smooth CDV solution is
\eq{
\Delta\Gamma_{CDV} = \left( \tilde{\beta}\sqrt{\hat{\xi}_c} - \beta \sqrt{\xi_c} \right) \sqrt{e^{-Q(X_0)} w(X_0)} = 0 ~,
}{eq:con31}
which always vanishes because both configurations satisfy the same boundary conditions
\eq{
\tilde{\beta} \sqrt{\hat{\xi}_c} = \beta \sqrt{\xi_c} = \beta_c ~.
}{eq:con32}
Therefore, all CDV solutions with given $X_0$ and $\lambda$ have the same free energy. It follows that the regular CDV solution is only marginally stable against decay into a CDV with conical defect, and vice-versa.
\subsection{Contributions to the Partition Function}
\label{sec:PartitionFunction}
The dominant contributions to the semiclassical partition function from spaces with a conical singularity are given by an integral like \eqref{eq:con102b}. But for systems with a finite cavity the measure in this integral should be treated more carefully. In the canonical ensemble the partition function is expressed as a sum or integral over different internal energies of the system, weighted by the density of states $\exp(S)$ and the Boltzmann factor $\exp(-\beta\,E)$. The internal energy for an ensemble with finite $X_c$ is given by \eqref{InternalEnergy}, which suggests that the appropriate measure is proportional to
\begin{gather}
\dd E_{c}(\hat{M}) = \frac{\dd \hat{M}}{\sqrt{\hat{\xi}_{c}}}
\end{gather}
rather than $\dd \hat{M}$. Thus, the semiclassical approximation for the partition function, including contributions from spaces with a conical singularity\,\footnote{This does not include the contributions from smooth fluctuations of the fields around the ground state of the system, which are equally important.}, is
\begin{gather}\label{CanonZ}
\ZZ \sim \int\limits_{0}^{M^\ts{max}} \dd \hat{M}\,\frac{1}{\sqrt{\hat{\xi}_{c}}} \, \exp\big( - \Gamma_{c}(\hat{M})\big) ~.
\end{gather}
The additional factor of $(\hat{\xi}_{c})^{-1/2}$ in the measure proves to be relevant when computing sub-leading corrections to thermodynamical quantities like the entropy. The functional form of the integrand in \eqref{CanonZ} almost always prevents the direct evaluation of this integral in closed form, but standard semiclassical approximation techniques are very accurate when compared with numerical results.
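As an indication of how such comparisons work in practice, the following sketch evaluates \eqref{CanonZ} by direct quadrature for the illustrative $w(X) = 2\sqrt{X}$ model (our choice) and compares $\log\ZZ$ with the saddle-point expression \eqref{ZfiniteApprox} derived below; the action is rescaled by its minimum to avoid numerical overflow.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

beta_c, Xc = 2.0, 100.0
wc = 2*np.sqrt(Xc)                                # w(X) = 2 sqrt(X)
Gamma = lambda Mh: beta_c*2*np.sqrt(Xc)*(1 - np.sqrt(1 - 2*Mh/wc)) \
                   - 2*np.pi*Mh**2

Mstar = brentq(lambda M: beta_c - 4*np.pi*M*np.sqrt(1 - 2*M/wc),
               8.0, 10.0 - 1e-12)                 # smooth ground state
Gmin = Gamma(Mstar)

# integrand of (CanonZ), rescaled by exp(Gmin); the 1/sqrt(xi_c)
# singularity at M_max is integrable
f = lambda Mh: np.exp(-(Gamma(Mh) - Gmin))/np.sqrt(1 - 2*Mh/wc)
I, _ = quad(f, 0.0, wc/2*(1 - 1e-12), points=[Mstar], limit=200)

Cc = 4*np.pi*(1/Mstar)*(wc - 2*Mstar) \
     /(2*(-0.5/Mstar**3)*(wc - 2*Mstar) + 1/Mstar**2)
print(-Gmin + np.log(I),                          # numerical log Z
      -Gmin + np.log(np.sqrt(2*np.pi*Cc)/beta_c)) # eq. (ZfiniteApprox)
\end{verbatim}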
\subsubsection{The $X_c \to \infty$ limit}
For some theories the conical ensemble exists even as the system is decoupled from the external thermal reservoir. Taking the $X_{c} \to \infty$ limit, or `removing the cavity wall', usually implies $w_c \to \infty$, and in this case the integral \eqref{CanonZ} simplifies.
Provided the limit $X_c \to \infty$ commutes with the integral over $\hat{M}$, the contributions to the semiclassical partition function becomes
\eq{
\ZZ_{\infty} := \lim_{X_c\to\infty} \ZZ \approx \int\limits_0^\infty \dd\hat{M}\,\lim_{X_c\to\infty} \frac{1}{\sqrt{w_c\,e^{Q_c}}} \exp\Big(\,2\pi\,X_{h}(\hat{M}) - \frac{\beta_{c}}{\sqrt{w_c\,e^{Q_c}}}\,\hat{M}\,\Big) ~.
}{eq:con104}
Assuming there are no obstructions, we can use $w(\hat{X}_{h}) = 2\hat{M}$ to convert this into an integral over $\hat{X}_h$
\eq{
\ZZ_{\infty} \approx \frac12 \int\limits_{\hat{X}_{0}}^\infty \dd\hat{X}_h \, w^\prime(\hat{X}_h)\, \exp\left(2 \pi \hat{X}_{h} - \frac{1}{2}\, w(\hat{X}_h)\, \lim_{X_c\to\infty} \frac{\beta_{c}}{\sqrt{w_c\,e^{Q_c}}}\right) ~,
}{eq:con106}
where the lower bound is set by the condition $w(\hat{X}_{0}) = 0$, and we have absorbed the factor $(w_c\,e^{Q_c})^{-1/2}$ into the normalization of $\mathcal{Z}_{\infty}$\,\footnote{This may seem odd, since we \emph{just} introduced the factor of $\hat{\xi}_{c}^{-1/2}$ in the measure in \eqref{CanonZ}. The point is that this factor is important for finite $X_c$, but it becomes state-independent, and hence irrelevant, in the $X_c \to \infty$ limit.}.
Before going further it is important to ask: if $X_c \to \infty$, what is being fixed in defining the ensemble? If $w_c\,e^{Q_c}$ is finite and non-zero in this limit then we may continue to express the action as a function of the same $\beta_c$ that is held fixed at finite $X_c$. But if $w_c\,e^{Q_c}$ diverges then we must take $\beta_c \to \infty$ while keeping the ratio $\beta_c/\sqrt{w_c\,e^{Q_c}}$ finite. In either case, the boundary conditions of the ensemble are specified by fixing a finite, non-zero value for the quantity
\begin{gather}\label{BetaInfty}
\beta_{\infty} := \lim_{X_c \to \infty} \frac{\beta_{c}}{\sqrt{w_c \, e^{Q_c}}}~.
\end{gather}
In other words, when $X_c \to \infty$ the ensemble is defined by fixing the value of the period, rather than the proper local period at the cavity wall. The contributions to the partition function from conical singularities are then given by
\begin{gather}\label{Zinfty}
\ZZ_{\infty} \simeq \frac{1}{2}\,\int\limits_{\hat{X}_{0}}^\infty \dd\hat{X}_h \, w^\prime(\hat{X}_h)\, \exp\left(2 \pi \hat{X}_{h} - \frac{1}{2}\,\beta_{\infty}\, w(\hat{X}_h)\right)~,
\end{gather}
which can also be expressed in the familiar form
\begin{gather}
\ZZ_{\infty} \simeq \int\limits_{0}^{\infty} \dd\hat{M}\, \exp\left(S(\hat{M})\right)\,\exp\left(- \beta_{\infty}\, \hat{M}\right) ~.
\end{gather}
Of course, the ensemble only exists if this integral is defined, which requires that $w(X)$ grows sufficiently fast at large values of $X$,
\begin{gather}\label{ExistenceCriteria}
\lim_{X \to \infty} \frac{w(X)}{X} > \frac{4\pi}{\beta_{\infty}} ~.
\end{gather}
If this condition is satisfied then the ensemble exists. For example, the $X_c \to \infty$ limit is not defined for the Schwarzschild model, which has $w(X) \sim \sqrt{X}$, but it is defined for the Jackiw-Teitelboim model, which has $w(X) \sim X^{2}$. If the large $X$ behavior of $w(X)$ is linear, so that $w(X) = w_{1}\,X + \ldots$ for large $X$, then the ensemble exists only if $\beta_{\infty} > 4\pi/w_{1}$, which corresponds to a Hagedorn temperature $T_{H} = w_{1}/4\pi$. This is especially relevant for the stringy black holes considered in section \ref{subsec:stringy}.
In most cases the integral \eqref{Zinfty} is easier to work with than the integral \eqref{CanonZ}, though approximation methods or numerical techniques are usually still required. The behavior of this integral depends on the ground state of the system, which is determined as in the finite $X_c$ case. The condition for extremizing the action in the $X_c \to \infty$ limit is
\begin{gather}\label{SmoothInfinity}
\frac{\partial \Gamma_{\infty}(\hat{X}_{h})}{\partial \hat{X}_{h}} = \frac{1}{2}\,\beta_{\infty}\,w'(\hat{X}_{h}) - 2\pi = 0 ~,
\end{gather}
where $\Gamma_{\infty}(\hat{X}_{h}) = \beta_{\infty}\,w(\hat{X}_{h})/2 - 2\pi\hat{X}_{h}$ is the action in the exponent of \eqref{Zinfty}. As before, this is the usual smoothness condition, so the ground state of the system will either be a smooth black hole or the $\hat{M}=0$ HES solution\,\footnote{Unlike the finite $X_c$ case, where the upper limit $M^\ts{max}$ had to be considered when determining the ground state, the upper limit $\hat{M} \to \infty$ is ruled out by the condition \eqref{ExistenceCriteria}.}. A solution $X_{h}$ of \eqref{SmoothInfinity} is a minimum if
\begin{gather}\label{MinInfinity}
\frac{\partial^{2} \Gamma_{\infty}(\hat{X}_{h})}{\partial \hat{X}_{h}^{\,2}}\bigg|_{X_{h}} = \frac{1}{2}\,\beta_{\infty}\,w''(X_{h}) > 0 ~,
\end{gather}
which is equivalent to the $X_c \to \infty$ limit of the condition \eqref{CDinequality} for positivity of the specific heat.
If a minimum exists, it must be compared to the action of HES, $\Gamma_{\infty}(X_0) = -2\pi X_{0}$, to determine the ground state. Thus, the ground state of the ensemble is a smooth black hole if there is a solution of \eqref{SmoothInfinity} that satisfies \eqref{MinInfinity} and
\begin{gather}
\Gamma_{\infty}(X_h) - \Gamma_{\infty}(X_0) < 0 \quad \Rightarrow \quad \frac{w(X_h)}{w'(X_h)} < X_{h} - X_{0} ~.
\end{gather}
Otherwise, the ground state is HES.
When the ground state of the ensemble is a black hole, the integral \eqref{Zinfty} is comparable to contributions to the partition function from smooth Gaussian fluctuations. To see why this is the case, define a new variable $Y = \hat{X}_{h} - X_{h}$ where $X_h$ is the dilaton at the horizon for the black hole ground state. In the semiclassical approximation the main contributions to \eqref{Zinfty} come from configurations close to the ground state, so the integral may be approximated as
\begin{gather}
\ZZ_{\infty} \simeq \frac{1}{2}\,\exp(-\Gamma_{\infty}(M)) \int\limits_{-\infty}^{\infty} \ns \dd Y\,w'(X_{h})\,\exp\left( - \frac{2\pi^{2}}{C_{\infty}}\,Y^2 \right) ~,
\end{gather}
where $C_{\infty} = 2\pi w'(X_h)/w''(X_h)$ is the $X_c \to \infty$ limit of the specific heat \eqref{CD}. Evaluating the integral gives
\begin{gather}\label{ZinftyApprox}
\ZZ_{\infty} \simeq \exp(-\Gamma_{\infty}(M))\,\frac{\sqrt{2\pi C_{\infty}}}{\beta_{\infty}} ~.
\end{gather}
This is comparable to the contributions from \emph{smooth} Gaussian fluctuations around the black hole ground state, because in both cases the coefficient of the quadratic term in the expansion of the action around the minimum is proportional to $C_{\infty}^{-1}$.
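For a model where the $X_c \to \infty$ limit exists this comparison can be made explicit. The sketch below takes a Jackiw-Teitelboim-like choice $w(X) = X^{2}$ (in AdS units; a simplification of ours, not a model treated in the text) and checks \eqref{Zinfty} against \eqref{ZinftyApprox}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta = 1.0                            # beta_infinity
Xh = 2*np.pi/beta                     # smooth saddle: w'(X_h) = 4 pi/beta
Gmin = beta*Xh**2/2 - 2*np.pi*Xh      # on-shell action (negative: BH phase)

# eq. (Zinfty) with (1/2) w'(X) = X, rescaled by exp(Gmin)
f = lambda X: X*np.exp(-(beta*X**2/2 - 2*np.pi*X) + Gmin)
I, _ = quad(f, 0.0, 50.0)

C_inf = 2*np.pi*Xh                    # C = 2 pi w'/w'' for w = X**2
print(I, np.sqrt(2*np.pi*C_inf)/beta) # nearly identical
\end{verbatim}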
If the ground state of the ensemble is HES, then there are two possible approximations for the integral \eqref{Zinfty}. As before, configurations close to the ground state dominate the integral in the semiclassical approximation. But their contributions depend on the behavior of the function $w(\hat{X}_{h})$ in this region. We will assume, for convenience, that $X_{0} = w^{-1}(0) = 0$. Then if the ground state is HES, the conditions $\Gamma_{\infty}(0) = 0$ and $\Gamma_{\infty}(\hat{M}) > 0$ imply that $w(\hat{X}_{h}) \sim \hat{X}_{h}^{\,\gamma}$ near $\hat{X}_{h} = 0$, with $0 \leq \gamma \leq 1$. For $\gamma<1$, the $-\beta_{\infty} w(\hat{X}_{h})/2$ term dominates the action in this region, and the integral is well-approximated as
\begin{gather}\label{HESApprox1}
\ZZ_{\infty} \simeq \int\limits_{0}^{\infty} \ns \dd\hat{M}\,\exp(-\beta_{\infty}\,\hat{M}) = \frac{1}{\beta_{\infty}} ~.
\end{gather}
On the other hand, if $w(\hat{X}_{h})$ is approximately linear ($\gamma=1$) near $\hat{X}_{h}=0$ then both terms in the action are relevant, and \eqref{Zinfty} is approximated by
\begin{gather}\label{HESApprox2}
\ZZ_{\infty} \simeq \int\limits_{0}^{\infty} \ns \dd\hat{X}_{h} \,\frac{w'(0)}{2} \,\exp\left(-\left(\frac{1}{2}\,\beta_{\infty}\,w'(0) - 2\pi \right)\,\hat{X}_{h}\right) = \frac{w'(0)}{\beta_{\infty}\,w'(0) - 4\pi} ~.
\end{gather}
In both cases the contributions to the partition function are generally much smaller than other corrections (for instance, from radiation) for a HES ground state.
\subsubsection{Ensembles with finite $X_c$}
Though the analysis is a bit more complicated for finite $X_c$, the same approach yields reliable approximations for \eqref{CanonZ}. When the ground state of the ensemble is a black hole, expanding the action around its minimum gives
\begin{gather}
\Gamma_{c}(\hat{M}) = \Gamma_{c}(M) + \frac{2\pi^{2}}{C_{c}}\,Y^{2} + \mathcal{O}(Y^3) ~,
\end{gather}
where $Y = \hat{X}_{h} - X_{h}$, and $C_{c}$ is the specific heat at finite $X_c$ \eqref{CD}. The semiclassical limit implies that the integral is dominated by configurations near the minimum, and their contributions may be approximated as
\begin{gather}
\ZZ \simeq \exp(-\Gamma_{c}(M)) \int\limits_{-\infty}^{\infty} \ns \dd Y\,\frac{w'(X_{h})}{2\,\sqrt{\xi_{c}}}\,\exp\left( - \frac{2\pi^{2}}{C_{c}}\,Y^2 \right) ~.
\end{gather}
This gives essentially the same result as \eqref{ZinftyApprox}
\begin{gather}\label{ZfiniteApprox}
\ZZ \simeq \exp(-\Gamma_{c}(M))\,\frac{\sqrt{2\pi C_{c}}}{\beta_{c}} ~,
\end{gather}
but expressed in terms of the relevant quantities evaluated at $X_c$. When the ground state is HES, the analysis is very similar to the $X_c \to \infty$ case. Configurations near $\hat{M}=0$ dominate the integral, and depending on the behavior of $w(\hat{X}_{h})$ in this region \eqref{CanonZ} is approximated by either \eqref{HESApprox1} or \eqref{HESApprox2}, with $\beta_{\infty}$ replaced by $\beta_{c}$.
For comparison with other approaches that calculate corrections to free energy and entropy it is useful to represent the results above as
\eq{
F_c = -T_c \log \ZZ = T_c\,\Gamma_{c}(M) - T_{c}\,\log\left(T_{c}\,\sqrt{2\pi C_{c}}\right) ~.
}{eq:angelinajolie}
Then the entropy takes the form $S = S^{(0)} + S^{(1)}$, where $S^{(0)} = 2\pi X_{h}$ is the contribution from the leading term in the free energy, and
\begin{align}\label{eq:lalapetz}
S^{(1)} = &\,\,\log\left(T_{c}\sqrt{2\pi C_{c}}\right) + \frac{1}{2}\,\left(\frac{\partial \log C_{c}}{\partial \log T_{c}}\right)_{X_c} + 1 \\
\simeq & \,\, \frac{1}{2}\,\log\left(C_{c} T_{c}^2\right) + \dots\,\,~.
\end{align}
The second line gives the leading behavior of $S^{(1)}$ in our semiclassical calculations, which takes the same form as corrections from thermal fluctuations \cite{lali:stat1}. These results also apply to ensembles with $X_c \to \infty$, after making the appropriate replacements of quantities evaluated at the cavity wall.
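As a consistency check of \eqref{eq:lalapetz}, one can differentiate $F = -T\,\log\ZZ$ numerically. For the $w(X) = X^{2}$ model used above (our illustrative choice) one finds $C = 4\pi^{2}T$, so $S^{(1)} = \tfrac{3}{2}\log(2\pi T) + \tfrac{3}{2}$, and the sketch below reproduces $S^{(0)} + S^{(1)}$ to high accuracy:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def logZ(T):
    # w(X) = X**2, X_c -> infinity; rescale by the saddle-point action
    Gmin = -2*np.pi**2*T
    f = lambda X: X*np.exp(-(X**2/(2*T) - 2*np.pi*X) + Gmin)
    I, _ = quad(f, 0.0, 100.0*T + 50.0)
    return -Gmin + np.log(I)

T, h = 2.0, 1e-5
S_num = ((T + h)*logZ(T + h) - (T - h)*logZ(T - h))/(2*h)  # S = d(T log Z)/dT
S0 = 4*np.pi**2*T                     # 2 pi X_h with X_h = 2 pi T
S1 = 1.5*np.log(2*np.pi*T) + 1.5      # log(T sqrt(2 pi C)) + 1/2 + 1
print(S_num, S0 + S1)                 # agree to high accuracy
\end{verbatim}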
\section{Black hole examples}\label{sec:3}
So far the discussion has allowed for arbitrary functions $U(X)$ and $V(X)$ in the action \eqref{Actionc}. In this section we apply our results to specific models, discussing their thermodynamic properties and determining the leading contributions to the partition function from configurations with conical singularities.
\subsection{Schwarzschild}
The Schwarzschild models, which belong to the so-called `$ab$-family' of dilaton gravities \cite{Katanaev:1997ni}, are motivated by a spherically symmetric reduction of gravity with asymptotically flat boundary conditions from $d+1 \geq 4$ dimensions down to two dimensions. The functions $w(X)$ and $e^{Q(X)}$ for these models take the form
\begin{gather}\label{Schwarzw}
w(X) = (d-1)\,\Upsilon^{\frac{1}{d-1}}\,X^{\frac{d-2}{d-1}} \\ \label{SchwarzQ}
e^{Q(X)} = \frac{1}{d-1}\,\Upsilon^{-\frac{1}{d-1}}\,X^{ - \frac{d-2}{d-1}} ~.
\end{gather}
The constant $\Upsilon$ is given by
\begin{gather}
\Upsilon = \frac{A_{d-1}}{8\pi\,G_{d+1}} ~,
\end{gather}
where $G_{d+1}$ is the $d+1$-dimensional Newton's constant and $A_{d-1}$ is the area of the unit sphere $S^{d-1}$. It will be convenient to retain factors of $\Upsilon$, even though they could be absorbed into a rescaling of the coordinates. Since the growth of $w(X)$ is sub-linear for large $X$, the existence condition \eqref{ExistenceCriteria} implies that the $X_c \to \infty$ limit is not possible for the Schwarzschild model. Thus, in order to work in the canonical ensemble one must couple the system to a heat bath at finite $X_c$.
Setting \eqref{Schwarzw} equal to $2\,\hat{M}$ gives the value of the dilaton at the horizon,
which can then be applied to \eqref{eq:con20} to obtain the entropy of a configuration with conical singularity
\begin{align}\label{SchwarzS}
S(\hat{M}) = & \,\, 2\pi\,\left(\frac{2\,\hat{M}}{(d-1)\,\Upsilon^{\frac{1}{d-1}}}\right)^{\frac{d-1}{d-2}} ~.
\end{align}
This result takes a more familiar form if we solve \eqref{XPrimeDef} for the dilaton as a function of the coordinate $r$
\begin{gather}\label{SchwarzDilaton}
X(r) = \Upsilon\,r^{d-1} = \frac{A_{d-1}}{8\pi\,G_{d+1}}\,r^{d-1}~.
\end{gather}
The expression for the entropy becomes one-quarter of the horizon area in Planck units, even for configurations with a conical singularity
\begin{gather}
S(\hat{M}) = \frac{A_{d-1}}{4\,G_{d+1}}\,r_{h}(\hat{M})^{d-1} ~.
\end{gather}
The internal energy of these configurations is obtained from \eqref{InternalEnergy}. Expressed in terms of the boundary conditions and mass parameter $\hat{M}$, it is
\begin{gather}\label{SchwarzEc}
E_{c}(\hat{M}) = (d-1)\,\Upsilon^{\frac{1}{d-1}}\,X_{c}^{\frac{d-2}{d-1}}\,\left(1 - \sqrt{1 - \frac{2\,\hat{M}}{d-1}\,\Upsilon^{-\frac{1}{d-1}}\,X_{c}^{-\frac{d-2}{d-1}}} \, \right) ~.
\end{gather}
For the HES solution $\hat{M} = 0$ -- the `hot flat space' of \cite{Gross:1982cv} -- the internal energy is zero. For non-zero values of the mass parameter the result \eqref{SchwarzEc} can be inverted to give $\hat{M}$ as a function of the internal energy in the region $X \leq X_c$
\begin{gather}
\hat{M} = \hat{E}_{c} - \frac{\hat{E}_{c}^{\,2}}{2\,w_{c}} ~,
\end{gather}
which relates the ADM mass to the internal energy $\hat{E}_{c} = E_{c}(\hat{M})$ and the gravitational binding energy $-\hat{E}_{c}^{\,2}/2\,w_{c}$ in the cavity \cite{Grumiller:2007ju}. (Here $\sqrt{w_{c}\,e^{-Q_{c}}} = w_{c}$, since $e^{-Q(X)} = w(X)$ for this family of models.)
In the rest of this section we consider the phase structure of the Schwarzschild model, and the dominant contributions to the Euclidean partition function coming from black holes with a conical singularity. It is tempting to focus on the familiar example $d+1=4$, but as we will see this is a special case that exhibits qualitatively different behavior than models based on the reduction from $d+1 \geq 5$ dimensions. The analysis is simpler when quantities are expressed as functions of $\hat{X}_{h}$ rather than the mass $\hat{M}$. In that case the action \eqref{eq:gammac} is
\begin{gather}\label{ConActionSchwarz}
\Gamma_{c}(\hat{X}_{h}) = (d-1)\,\beta_c \,\Upsilon^{\frac{1}{d-1}}\, X_{c}^{\,\frac{d-2}{d-1}} \, \left( 1 - \sqrt{ 1 - \left(\frac{\hat{X}_{h}}{X_c}\right)^{\nts\frac{d-2}{d-1}}} \, \right) - 2\pi\,\hat{X}_{h}~.
\end{gather}
In $d+1=4$ dimensions this is precisely $\beta_c$ times the ``generalized free energy'' obtained by York in \cite{York:1986it}. The action is plotted in figure \ref{fig:Action1} as a function of $\hat{X}_{h}$, for representative values of the boundary conditions. The minimum of the action is either HES or a smooth black hole, depending on the values of $\beta_c$ and $X_c$ fixed by the boundary conditions. The HES solution with $\hat{M}=0$ is always present, but smooth black holes exist in the ensemble only when the smoothness and boundary conditions are both met. This occurs at isolated values of $X_{h}$ that satisfy
\begin{gather}\label{SchwarzSmoothBC}
\beta_c = \frac{4\pi\,X_{c}^{\frac{1}{d-1}}}{(d-2)\,\Upsilon^{\frac{1}{d-1}}} \, \left(\frac{X_{h}}{X_{c}}\right)^{\frac{1}{d-1}} \,\sqrt{1 - \left(\frac{X_{h}}{X_{c}}\right)^{\frac{d-2}{d-1}}} ~.
\end{gather}
\begin{figure}
\centering
\includegraphics[width=4in]{SchwarzschildAction.pdf}
\caption{The Schwarzschild model action $\Gamma_{c}(\hat{X}_{h})$ for $0 \leq \hat{X}_{h} \leq X_c$. Local extrema appear only for $B_c$ below a certain value $B_{c}^{*}$. }\label{fig:Action1}
\end{figure}
To analyze this condition it is convenient to define the variables
\begin{gather}\label{SchwarzVariables}
\lambda := \left(\frac{X_{h}}{X_{c}}\right)^{\frac{1}{d-1}} \quad \quad \quad B_c := \frac{(d-2)\Upsilon^{\frac{1}{d-1}}}{4\pi}\,\beta_{c}\,X_{c}^{-\frac{1}{d-1}} ~,
\end{gather}
so that \eqref{SchwarzSmoothBC} takes the form
\begin{gather}\label{SchwarzSmooth}
B_c = \lambda\,\sqrt{1-\lambda^{d-2}} ~.
\end{gather}
There are no real solutions of \eqref{SchwarzSmooth}, and hence no smooth black holes in the ensemble, for $B_c > B_{c}^{*}$ with
\begin{gather}
B_{c}^{*} = \left(\frac{2}{d}\right)^{\frac{1}{d-2}}\sqrt{\frac{d-2}{d}} ~.
\end{gather}
In this case the action \eqref{ConActionSchwarz} is strictly non-negative and the ground state is HES with $\Gamma_c(0) = 0$. But if $B_c < B_{c}^{*}$ there are two black holes in the ensemble, corresponding to two real solutions $0 < \lambda_{-} < \lambda_{+} < 1$ of \eqref{SchwarzSmooth}. The smaller of the two black holes is a local maximum of the action, and the larger black hole is a local minimum. This minimum is positive when $B_c$ is greater than a critical value given by
\begin{gather}
B_{c}^{\ts{crit}} = \left(\frac{d-2}{d}\right)\,\left(\frac{4\,(d-1)}{d^{2}}\right)^{\frac{1}{d-2}} ~,
\end{gather}
so that the ground state of the ensemble remains HES for $B_{c}^{\ts{crit}} < B_{c} < B_{c}^{*}$\,. But for boundary conditions that satisfy $0 < B_c < B_{c}^{\ts{crit}}$, the minimum of the action is negative and the ground state of the ensemble is the large black hole.
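A short sketch (our own, using the dimensionless variables \eqref{SchwarzVariables}) finds the two roots and the special values $B_{c}^{*}$ and $B_{c}^{\ts{crit}}$ for the first few dimensions:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def roots(Bc, d):
    # lambda_- < lambda_+ solving B_c = lam*sqrt(1 - lam**(d-2)),
    # eq. (SchwarzSmooth)
    f = lambda lam: lam*np.sqrt(1 - lam**(d - 2)) - Bc
    lam_star = (2/d)**(1/(d - 2))     # peak of the right-hand side
    return brentq(f, 1e-12, lam_star), brentq(f, lam_star, 1 - 1e-12)

for d in (3, 4):                      # i.e. d+1 = 4, 5
    B_star = (2/d)**(1/(d - 2))*np.sqrt((d - 2)/d)
    B_crit = ((d - 2)/d)*(4*(d - 1)/d**2)**(1/(d - 2))
    lm, lp = roots(0.9*B_crit, d)     # a point in the high-temperature regime
    print(d + 1, B_star, B_crit, lm, lp)
\end{verbatim}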
Using the definition \eqref{SchwarzVariables} and expressing $X_c$ in terms of the radius of the cavity $r_c$, the three regimes of the Schwarzschild model can be described in terms of more conventional variables.
\begin{center}
\begin{tabular}{c|c|c}
Boundary Conditions & \,\, Smooth Black Hole? \,\, & \,\, Ground State\,\,\\
\hline
$T_c < \frac{\sqrt{d(d-2)}}{4\pi\,r_c}\,\left(\frac{d}{2}\right)^{\frac{1}{d-2}}$ & Does not exist & HES \vphantom{\bigg|}\\
$\frac{\sqrt{d(d-2)}}{4\pi\,r_c}\,\left(\frac{d}{2}\right)^{\frac{1}{d-2}} < T_c < \frac{d}{4\pi\,r_c}\,\left(\frac{d^2}{4(d-1)}\right)^{\frac{1}{d-2}}$ & Local minimum & HES \vphantom{\bigg|}\\
$T_c > \frac{d}{4\pi\,r_c}\,\left(\frac{d^2}{4(d-1)}\right)^{\frac{1}{d-2}}$ & Global minimum & SBH \vphantom{\bigg|}\\
\end{tabular}
\end{center}
The Schwarzschild model has a ``low temperature'' phase, set by the size of the cavity, where smooth black holes do not exist at all -- black holes in the ensemble necessarily exhibit a conical singularity in this regime. Two smooth black holes appear in the ensemble as the temperature is increased at fixed cavity size. One of the black holes is stable against small fluctuations (i.e., $C_{c}>0$), but at intermediate temperatures the system will eventually tunnel from this state to the HES ground state. Finally, a ``high temperature'' phase occurs above a critical temperature that is also set by the size of the cavity
\begin{gather}
T_{c}^{\ts{crit}} = \frac{d}{4\pi\,r_{c}}\,\left(\frac{d^2}{4(d-1)}\right)^{\frac{1}{d-2}} ~.
\end{gather}
For $T_{c} > T_{c}^{\ts{crit}}$ the ground state of the ensemble is a smooth black hole.
It is worth taking a moment to consider a Gedankenexperiment that examines the phases described above in a ``real world'' setting. Suppose we construct a cavity of macroscopic size in the lab, removing all matter from the interior and holding the walls at a constant temperature. Assuming that gravity is described by the usual Einstein-Hilbert action (and neglecting all physics besides gravity and radiation), what is the relevant ground state for the ensemble? Restoring dimensionful constants, the condition \eqref{SchwarzSmoothBC} becomes
\begin{gather}
\frac{\hbar\,c}{k_{B}\,T_c} = 4\pi\,r_{h}\,\sqrt{1 - \frac{r_{h}}{r_{c}}} ~.
\end{gather}
The two solutions for a cavity of radius $r_{c} = 0.1\,\text{m}$ held at temperature $T_{c} = 10^{3}\,\text{K}$ are
\begin{gather}
\frac{r_{h}^\ms{(-)}}{r_c} = 1.8 \times 10^{-6} \quad \quad \quad \frac{r_{h}^\ms{(+)}}{r_c} = 1 - 3 \times 10^{-12} ~.
\end{gather}
The larger solution describes a stable black hole with its horizon about a third of a \emph{picometer} from the wall of the cavity; reasonable laboratory conditions are apparently far into the high-temperature regime! A quick calculation shows that the stable black hole is the ground state of the ensemble, with a free energy of $-10^{48}\,\text{J}$ and an entropy of $10^{68}$. Yet the interior of the cavity remains HES, with a free energy of about $-10^{-4}\,\text{J}$ (from radiation). What prevents the system from tunneling to the overwhelmingly favorable black hole ground state? Recall from the analysis of \cite{Gross:1982cv} and \cite{York:1986it} that the rate of tunneling is approximately $\exp(-\Gamma_{c}(r_{h}^\ms{(-)})/\hbar)$. The action of the unstable black hole is enormous, $\Gamma_{c}(r_{h}^\ms{(-)}) \sim 10^{56}\,\hbar$\,, so the probability of a tunneling event is, for all intents and purposes, zero\,\footnote{Perhaps a better strategy for studying a black hole of this size is to find one that already exists, build a cavity around it, and then couple the system to a thermal reservoir. However, $r_{h} \simeq 0.1\,\text{m}$ corresponds to a mass well below the Chandrasekhar limit, so the chances of finding one are not good.}.
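The numbers quoted in this Gedankenexperiment follow from a one-line root-finding problem; a sketch (standard CODATA constants, script and bracketing intervals our own):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

hbar, c, kB = 1.054571817e-34, 2.99792458e8, 1.380649e-23
rc, Tc = 0.1, 1.0e3                  # 0.1 m cavity held at 1000 K
L = hbar*c/(kB*Tc)                   # inverse temperature as a length, ~2.3e-6 m

f = lambda rh: 4*np.pi*rh*np.sqrt(1 - rh/rc) - L
r_minus = brentq(f, 1e-12, 0.5*rc)
r_plus  = brentq(f, 0.5*rc, rc*(1 - 1e-15))
print(r_minus/rc, 1 - r_plus/rc)     # ~1.8e-6 and ~3e-12, as quoted
\end{verbatim}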
Contributions to the partition function from field configurations with a conical singularity are approximated to high accuracy by relatively simple functions of the boundary conditions, as described in section \ref{sec:PartitionFunction}. For the semiclassical approximation to be valid the action must be very large in units of $\hbar$, which requires $2\pi X_c \gg 1$ in natural units. In the low and intermediate temperature regimes the ground state is HES, and the contributions to the partition function are approximately
\begin{gather}\label{SchwarzCanonZLowT}
\ZZ(B_c > B_{c}^{\ts{crit}} ) \,\, \simeq \,\, \frac{1}{\beta_c} \, = \,T_c ~.
\end{gather}
This approximation can be compared with a numerical evaluation of \eqref{CanonZ}. The fractional error, defined as $f = (\ZZ - \ZZ_{\rm num})/\ZZ_{\rm num}$\,, is shown in figure \ref{fig:FELowT} for the case $d+1=4$, with $10^{4} < 2\pi X_c < 10^{5}$ and different values of $B_{c}$. In the low temperature regime the error is typically much less than $10^{-4}$, while in the intermediate temperature regime it is between $10^{-4}$ and $10^{-3}$ for $B_c$ not too close to $B_{c}^\ts{crit}$. The behavior at the critical temperature is described below.
\begin{figure}
\centering
\includegraphics[width=4.05in]{FE-Schwarz-LowTemp2.pdf}\,\,\,
\includegraphics[width=4in]{FE-Schwarz-IntTemp2.pdf}
\caption{The fractional error $f = (\ZZ - \ZZ_{\rm num})/\ZZ_{\rm num}$ as a function of $2\pi X_c$, for different values of $B_c$ in the low ($B_c > B_{c}^{*}$) and intermediate ($B_{c}^{\ts{crit}} < B_c < B_{c}^{*}$) temperature regimes.}\label{fig:FELowT}
\end{figure}
In the high temperature phase the behavior of $\ZZ$ is qualitatively different. The approximation \eqref{ZfiniteApprox} for the contributions to the partition function gives
\begin{gather}\label{SchwarzCanonZHighT}
\ZZ(B_c < B_{c}^{\ts{crit}}) \,\, \simeq \,\, \exp\big(-\Gamma_{c}(\lambda_{+})\big) \, \frac{(d-2)\sqrt{d-1}\,\lambda_{+}^{\frac{d-3}{2}}\, X_{c}^{\frac{d-3}{2(d-1)}} }{\sqrt{2\,d\,\lambda_{+}^{d-2} - 4}} ~.
\end{gather}
The fractional error for this approximation is shown for the $d+1=4$ model in figure \ref{fig:FEHighT}, with different values of $2\pi X_{c}$ and $\lambda_{+}$.
\begin{figure}
\centering
\includegraphics[width=4in]{FE-Schwarz-HighTemp2.pdf}
\caption{Fractional error in the approximation for $\ZZ$ as a function of $2\pi X_c$, for different values of $\lambda_{+}$ in the high temperature regime.}\label{fig:FEHighT}
\end{figure}
In the high temperature regime $\lambda_{+}$ takes values in the range
\begin{gather}\label{SchwarzLambda}
\left(\frac{4\,(d-1)}{d^2}\right)^{\frac{1}{d-2}} < \lambda_{+} < 1 ~,
\end{gather}
which becomes $8/9 < \lambda_{+} < 1$ when $d+1=4$. For $2\pi X_c > 10^4$ and $\lambda_{+}>8/9$, the error is typically below about $10^{-4}$. But at $\lambda_{+} = 8/9$ (when $B_c = B_{c}^\ts{crit}$) the error jumps by 1-2 orders of magnitude. This makes sense; at the lower end of \eqref{SchwarzLambda} the smooth black hole has action $\Gamma_{c}=0$, so the ground state of the ensemble is a superposition of the black hole and HES. A better approximation for \eqref{CanonZ} at this transitional value of $\lambda_{+}$ is given by the sum of \eqref{SchwarzCanonZLowT} and \eqref{SchwarzCanonZHighT}. For the $d+1=4$ model this results in a fractional error below $10^{-5}$.
The dominant contribution to the partition function in the high temperature phase is the overall factor of $\exp(-\Gamma_{c}(\lambda_{+}))$. This gives the leading term in the free energy as $F_{c}^{(0)} = \beta_{c}\,\Gamma_{c}(\lambda_{+})$, and the resulting contribution to the entropy for the smooth black hole is
\begin{gather}
S^{(0)} = 2\pi\,X_{h} = 2\pi\,X_c\,\lambda_{+}^{\,d-1} ~.
\end{gather}
The contributions to $\ZZ$ from configurations with conical singularities give corrections to $F_{c}$ and hence to $S$. The free energy $-T_{c} \log \ZZ$ obtained from \eqref{SchwarzCanonZHighT} is
\begin{gather}
F_{c} = F_{c}^{(0)} - \frac{(d-2)\Upsilon^{\frac{1}{d-1}}}{4\pi\,X_{c}^{\frac{1}{d-1}}\,\lambda_{+}\sqrt{1-\lambda_{+}^{d-2}}}\,\log\left(\frac{(d-2)\sqrt{d-1}\,\lambda_{+}^{\frac{d-3}{2}} X_{c}^{\frac{d-3}{2(d-1)}} }{\sqrt{2\,d\,\lambda_{+}^{d-2} - 4\,}}\right)
\end{gather}
which results in an entropy
\begin{align}
S = & \,\, S^{(0)} + \frac{1}{2} \left(\frac{d-3}{d-1}\right) \log S^{(0)} + \frac{(\lambda_{+}^{d-2}-1)(d\,\lambda_{+}^{d-2} + 2\,(d-3))}{(d\,\lambda_{+}^{d-2} - 2)^{2}} \\ \nonumber
& \quad + \log\left(\frac{d-2}{(2\pi)^{\frac{d-3}{2(d-1)}}}\,\sqrt{\frac{d-1}{2\,d\,\lambda_{+}^{d-2}-4}}\right) ~.
\end{align}
The last two terms combine to give an $\mathcal{O}(1)$ contribution for all $\lambda_{+}$ in the range \eqref{SchwarzLambda}. Thus, the entropy for the $d+1$-dimensional Schwarzschild model with corrections from conical singularities takes the form
\begin{gather}\label{SchwarzS1}
S = S^{(0)} + \frac{1}{2} \left(\frac{d-3}{d-1}\right) \log S^{(0)} + \mathcal{O}(1) ~.
\end{gather}
In $d+1=4$ dimensions there is no $\log S^{(0)}$ correction; its absence can be traced back to the dependence of various quantities on the boundary condition $X_c$. In an ensemble that contains smooth black holes, the boundary conditions must satisfy \eqref{SchwarzSmoothBC}, so for fixed $\lambda_{+}$ we have $\beta_{c} \sim X_{c}^{\frac{1}{d-1}}$. The specific heat, on the other hand, scales linearly with $X_c$ at fixed $\lambda_{+}$
\begin{gather}
C_{c} = \frac{4\pi\,(d-1)\,X_{c}\,\lambda_{+}^{d-1}\,(1- \lambda_{+}^{d-2})}{d\,\lambda_{+}^{d-2} - 2} ~.
\end{gather}
The contributions to $\mathcal{Z}$ involve the factor $\sqrt{C_{c}}/\beta_c$, and for fixed $\lambda_{+}$ this is independent of $X_c$ when $d+1=4$. As a result, the corrections to the free energy and the entropy in that case are $\mathcal{O}(1)$.
It is important to remember that the canonical partition function for the Schwarzschild model is not defined as $X_c \to \infty$, despite the fact that some of the results in this section seem well-behaved in that limit. In the next few sections we will consider models that \emph{do} admit an $X_c \to \infty$ limit. In that case results analogous to \eqref{SchwarzS1} simplify quite a bit, and are easier to interpret.
\subsection{Black holes in AdS}
The spherically symmetric reduction of gravity with a negative cosmological constant gives the AdS-Schwarzschild models. The functions $e^{Q}$ and $w$ in this case are
\begin{align}
w(X) = &\,\, (d-1)\,\Upsilon^{\frac{1}{d-1}}\,X^{\frac{d-2}{d-1}} + \frac{(d-1)}{\ell^{\,2}}\,\Upsilon^{-\frac{1}{d-1}}\,X^{\frac{d}{d-1}} \\
e^{Q(X)} = &\,\, \frac{1}{d-1}\,\Upsilon^{-\frac{1}{d-1}}\,X^{-\frac{d-2}{d-1}} ~,
\end{align}
where $\ell$ is the AdS length scale and $\Upsilon= A_{d-1}/8\pi G_{d+1}$. Since $w(X)/X \sim X^{\frac{1}{d-1}}$ at large $X$, this model satisfies the condition \eqref{ExistenceCriteria} for the existence of the partition function in the $X_c \to \infty$ limit. Rather than considering the ensemble with finite $X_c$, we will work with ensembles where the cavity wall is removed\,\footnote{A very nice treatment of the ensemble with finite $X_c$ may be found in \cite{Akbar:2004ke}.}.
Following the discussion in section \ref{sec:PartitionFunction}, the ensemble is defined by fixing the period $\beta_{\infty}$. The action for the theory with the cut-off removed is
\begin{gather}\label{AdSAction}
\Gamma_{\infty}(\hat{M}) = \frac{1}{2}\,\beta_{\infty}\,w(\hat{X}_{h}) - 2\pi \hat{X}_{h} ~.
\end{gather}
Figure \ref{fig:AdSAction} shows plots of the action for representative values of $\beta_{\infty}$.
\begin{figure}
\centering
\includegraphics[width=5in]{AdSAction2.pdf}
\caption{The action for the AdS-Schwarzschild model with the cavity wall removed.}\label{fig:AdSAction}
\end{figure}
The action is extremized by black holes with horizon $X_{h}(M)$ that satisfy the smoothness condition
\begin{gather}
\beta_{\infty} = \frac{4\pi}{w'(X_{h})} = \frac{4\pi \ell^{\,2}\,\Upsilon^{\frac{1}{d-1}} \, X_{h}^{\frac{1}{d-1}}}{d\,X_{h}^{\frac{2}{d-1}} + (d-2)\,\ell^{\,2}\,\Upsilon^{\frac{2}{d-1}}} ~.
\end{gather}
Expressed as a function of $\beta_{\infty}$, the possible smooth black hole horizons $X_{h}$ are
\begin{gather}\label{AdSHorizon}
X_{h} = \left(\frac{2\pi \ell^{\,2}}{d\,\beta_{\infty}}\right)^{d-1} \Upsilon \, \left( 1 \pm \sqrt{1 - d(d-2)\,\left(\frac{\beta_{\infty}}{2 \pi \,\ell}\right)^{2}}\right)^{d-1} ~.
\end{gather}
This presents three different scenarios for smooth black holes in the ensemble, assuming $d+1 \geq 4$ (the case $d+1=3$, the BTZ black hole, will be discussed separately). If $\beta_{\infty}>\frac{2\pi\ell}{\sqrt{d(d-2)}}$ then there are no real solutions of \eqref{AdSHorizon}, and all black holes in the ensemble have a conical singularity at the horizon. But if $\beta_{\infty}<\frac{2\pi\ell}{\sqrt{d(d-2)}}$ then there are two smooth black holes, with horizons $0<X_{h}^{-} < X_{h}^{+}$, which correspond to a local maximum ($X_{h}^{-}$, with $C_{\infty}<0$) and minimum ($X_{h}^{+}$, with $C_{\infty}>0$) of the action. The action at the smooth local minimum $X_{h}^{+}$ is
\begin{gather}\label{SmoothAdSAction}
\Gamma_{\infty}(M) = \frac{2\pi X_{h}^{+}}{d\,(X_{h}^{+})^{\frac{2}{d-1}} + (d-2)\,\ell^{\,2}\,\Upsilon^{\frac{2}{d-1}}}\,\left(\ell^{\,2}\,\Upsilon^{\frac{2}{d-1}} - (X_{h}^{+})^{\frac{2}{d-1}} \right) ~.
\end{gather}
The ground state of the ensemble is easily determined by comparing this to the action $\Gamma_{\infty}(0) = 0$ for the HES solution (the reduction of `thermal AdS'). When $X_{h}^{+} < \ell^{\,d-1}\,\Upsilon$ the action \eqref{SmoothAdSAction} is positive and HES is the ground state of the ensemble; when $X_{h}^{+} > \ell^{\,d-1}\,\Upsilon$ the action is negative and the ground state is the smooth black hole. The transition between these phases occurs at a critical value of the period, $\beta_{\infty}^\ts{crit} = \frac{2\pi\ell}{d-1}$, which corresponds to a Hawking temperature $T_{\infty} = \frac{d-1}{2\pi\ell}$. Thus, there are two phases in the model, which can be divided into three distinct temperature regimes. At low temperatures, $T_{\infty} < \frac{\sqrt{d(d-2)}}{2\pi\ell}$, there are no smooth black holes in the ensemble and the ground state is HES. For an intermediate range of temperatures, $\frac{\sqrt{d(d-2)}}{2\pi\ell} < T_{\infty} < \frac{d-1}{2\pi\ell}$, two smooth black holes appear in the ensemble but the ground state remains HES. And finally, at high temperatures, $T_{\infty} > \frac{d-1}{2\pi\ell}$, the ensemble is dominated by the larger of the two smooth black hole solutions.
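The sketch below (units $\ell = \Upsilon = 1$, our choice, for $d+1 = 5$) classifies the ground state directly from \eqref{AdSHorizon} and \eqref{AdSAction}:
\begin{verbatim}
import numpy as np

d = 4                                 # d+1 = 5
w = lambda X: (d - 1)*(X**((d - 2)/(d - 1)) + X**(d/(d - 1)))

def ground_state(beta):
    disc = 1 - d*(d - 2)*(beta/(2*np.pi))**2
    if disc < 0:
        return "HES (no smooth black holes)"
    Xp = (2*np.pi/(d*beta))**(d - 1)*(1 + np.sqrt(disc))**(d - 1)
    Gp = beta*w(Xp)/2 - 2*np.pi*Xp    # eq. (AdSAction) at the + branch
    return "smooth black hole" if Gp < 0 else "HES"

for beta in (2.5, 2.2, 1.5):          # thresholds: 2 pi/sqrt(8) ~ 2.22, 2 pi/3 ~ 2.09
    print(beta, ground_state(beta))
\end{verbatim}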
The approximations derived in section \ref{sec:PartitionFunction} for the contributions to the partition function should be accurate as long as $\ell\,\Upsilon^{\frac{1}{d-1}} \sim \ell/\ell_{pl} \gg 1$. In the low temperature regime where HES dominates the ensemble, $\beta_{\infty} > \frac{2\pi\ell}{\sqrt{d(d-2)}}$\,, the integral \eqref{Zinfty} is approximately
\begin{gather}
\ZZ_{\infty} \simeq \ell\,T_{\infty}~,
\end{gather}
independent of the dimension $d+1$ of the original model. The fractional error associated with this approximation is very small (of order $10^{-4}$ or less for $\ell/\ell_{pl} \sim 10^{3}$) and decreases for lower temperatures and larger values of $\ell/\ell_{pl}$. In the phase of the theory dominated by the smooth black hole, $0 < \beta_{\infty} < \frac{2\pi\ell}{d-1}$, the integral \eqref{Zinfty} behaves as
\begin{gather}\label{SBHContributionsAdS}
\ZZ_{\infty} \simeq \exp\left(-\Gamma_{\infty}(M)\right)\,\left(\frac{\ell}{\ell_{pl}}\right)^{\frac{d-1}{2}} \, \left(\ell\,T_{\infty}\right)^{\frac{d+1}{2}} ~,
\end{gather}
where the factor in the exponential is given in \eqref{SmoothAdSAction}. At temperatures less than about twice the critical temperature the fractional error in this approximation can be as large as $5\%$ for $\ell/\ell_{pl} \sim 10^{3}$, but at higher temperatures or larger values of $\ell/\ell_{pl}$ this rapidly drops to a fraction of a percent.
In the high temperature phase the contributions \eqref{SBHContributionsAdS} to the partition function are comparable to corrections from (smooth) quadratic fluctuations around the ground state. The free energy is $F_{\infty} = - T_{\infty} \, \log \ZZ_{\infty}$, so
\begin{gather}\label{SBHEntropyAdS1}
S = - \frac{\partial F_{\infty}}{\partial T_{\infty}} = 2\pi\,X_{h} + \frac{d+1}{2}\,\log(\ell\,T_{\infty}) + \ldots ~,
\end{gather}
where `$\ldots$' is a constant. At high temperatures, $T_{\infty} \gg \frac{d-1}{2\pi\ell}$, the relation \eqref{AdSHorizon} between the horizon and the temperature is
\begin{gather}
X_{h} \propto (\ell\,T_{\infty})^{d-1} ~.
\end{gather}
Thus, at high temperatures, the entropy of the smooth black hole can be expressed as
\begin{gather}
S = S^{(0)} + \frac{d+1}{2(d-1)}\,\log S^{(0)} ~,
\label{AdSentropy}
\end{gather}
where $S^{(0)} = 2\pi\,X_{h}$ is the dominant contribution to the entropy coming from the free energy of the smooth black hole.
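To get a feeling for the size of the correction in \eqref{AdSentropy}, one can evaluate it with the high-temperature relation $r_{h} \simeq 4\pi\ell^{2}T_{\infty}/d$ that follows from the smoothness condition at large $r_{h}$; the sketch below does this for illustrative values of $\Upsilon \sim (\ell/\ell_{pl})^{d-1}$. The logarithm is a minute fraction of $S^{(0)}$ for any macroscopic horizon.
\begin{verbatim}
# Size of the log correction at high temperature (illustration only).
from math import log, pi

d, ell, Y = 3, 1.0, 1.0e6        # Y ~ (ell/l_pl)^(d-1), ell/l_pl ~ 10^3
for lT in (2.0, 10.0, 50.0):     # ell*T, well above the critical value
    S0 = 2*pi*Y*(4*pi*ell*lT/d)**(d - 1)
    print(lT, S0, (d + 1)/(2.0*(d - 1))*log(S0))
\end{verbatim}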
\subsection{Exact Results for the BTZ Black Hole}
The AdS model with $d+1=3$ is an intriguing example where the contributions to the partition function \eqref{Zinfty} can be computed in terms of elementary functions when $X_c \to \infty$. The condition \eqref{AdSHorizon} has a non-zero solution, the BTZ black hole, with horizon
\begin{gather}\label{BTZHorizon}
X_{h} = \frac{\pi \ell^{\,2}}{2\,G_{3}\,\beta_{\infty}} ~,
\end{gather}
where we have used $\Upsilon = (4 G_{3})^{-1}$ for $d=2$. Thus, unlike higher dimensional models, a smooth black hole exists for ensembles with any value of $\beta_{\infty}$. The action \eqref{AdSAction} for the BTZ black hole is
\begin{gather}
\Gamma_{\infty}(M) = \frac{1}{8 G_{3} \beta_{\infty}}\,\left(\beta_{\infty}^{\,2} - (2\pi \ell)^{2}\right) ~,
\end{gather}
so HES is the ground state of the ensemble for $\beta_{\infty}$ greater than the critical value $\beta_{\infty}^\ts{crit} = 2\pi\ell$, and BTZ is the ground state for $\beta_{\infty} < 2\pi\ell$~.
The action \eqref{AdSAction} for a black hole with conical singularity in this model is
\begin{gather}
\Gamma_{\infty}(\hat{M}) = \frac{1}{8 G_{3}}\,\beta_{\infty}\,\left(1 + \left(\frac{4 G_{3}}{\ell}\right)^{2}\,\hat{X}_{h}^{\,2}\right) - 2\pi \hat{X}_{h} ~.
\end{gather}
This appears in the integral \eqref{Zinfty} with the measure $\dd \hat{M}$, which can be rewritten using the condition $w(\hat{X}_{h}) = 2\hat{M}$ to give
\begin{gather}
\dd\hat{M} = \frac{4 G_{3}}{\ell^{2}}\,\hat{X}_{h}\, \dd \hat{X}_{h} ~.
\end{gather}
The integral for contributions to the partition function then takes a form that can be evaluated directly, without the need for approximations
\begin{gather}
\ZZ \simeq \frac{4 G_{3}}{\ell^{\,2}}\,\exp\left(-\frac{\beta_{\infty}}{8 G_{3}}\right)\,\int\limits_{0}^{\infty} \ns \dd\hat{X}_{h}\,\hat{X}_{h}\,\exp\left(-\frac{2 G_{3}}{\ell^{\,2}}\,\beta_{\infty}\,\hat{X}_{h}^{\,2} + 2\pi \hat{X}_{h} \right) ~.
\end{gather}
Evaluating the integral gives
\begin{gather}
\ZZ \simeq \frac{1}{\beta_{\infty}}\,\exp\left(-\frac{\beta_{\infty}}{8 G_{3}}\right) + \exp\big(-\Gamma_{\infty}(M)\big)\,\sqrt{\frac{\pi^{3} \ell^{\,2}}{2 G_{3}\,\beta_{\infty}^{\,3}}}\,\left(1 + \text{Erf}\left(\sqrt{\frac{\pi^{2} \ell^{\,2}}{2 G_{3} \beta_{\infty}}}\,\right)\right) ~.
\end{gather}
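The closed form can be checked against direct numerical quadrature of the integral above; a minimal sketch (illustrative values of $G_{3}$, $\ell$ and $\beta_{\infty}$, and requiring SciPy) is:
\begin{verbatim}
# Closed-form BTZ result vs. direct quadrature (illustrative values).
from math import erf, exp, pi, sqrt
from scipy.integrate import quad

G3, ell, beta = 0.05, 1.0, 4.0

num = (4*G3/ell**2)*exp(-beta/(8*G3))*quad(
    lambda X: X*exp(-2*G3*beta*X**2/ell**2 + 2*pi*X), 0, 200)[0]

Gamma = (beta**2 - (2*pi*ell)**2)/(8*G3*beta)
closed = (exp(-beta/(8*G3))/beta
          + exp(-Gamma)*sqrt(pi**3*ell**2/(2*G3*beta**3))
            *(1 + erf(sqrt(pi**2*ell**2/(2*G3*beta)))))

print(num, closed)   # the two agree to quadrature accuracy
\end{verbatim}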
In the high temperature regime, $T_{\infty} > (2\pi\ell)^{-1}$, the factor of $\exp(-\Gamma_{\infty})$ dominates and $\ZZ$ is
\begin{gather}
\ZZ_{\infty} \simeq (\ell\,T_{\infty})^{\,\frac{3}{2}}\,\exp\left(-\Gamma_{\infty}(M)\right) ~.
\end{gather}
The entropy of the BTZ black hole then consists of the usual leading term and a sub-leading logarithmic correction with coefficient $3/2$
\begin{gather}\label{BTZentropy}
S = \frac{\pi^{2} \ell^{\,2}}{G_{3}}\,T_{\infty} + \frac{3}{2}\,\log\left(\frac{\pi^{2} \ell^{\,2}}{G_{3}}T_{\infty}\right) + \ldots
\end{gather}
As with the higher-dimensional AdS models, this is precisely the sort of correction that is obtained when smooth fluctuations around the ground state are included in the path integral. It also agrees with one-loop calculations; see \cite{Sen:2012dw} and references therein.
\subsection{The Jackiw-Teitelboim Model}
Another example that can be treated in great detail is the Jackiw-Teitelboim model \cite{Jackiw:1984,Teitelboim:1984}, defined by the functions
\begin{gather}
w(X) = X^2 \quad \quad \quad e^{Q(X)} = 1 ~.
\end{gather}
It is convenient to work with the location of the horizon, rather than the mass parameter $\hat{M}$. Then the metric function for a black hole with conical singularity is $\hat{\xi} = X^{2} - \hat{X}_{h}^{\,2}$, and the action for such a configuration is
\begin{gather}\label{JTAction}
\Gamma_{c}(\hat{X}_{h}) = \beta_{c}\,X_c\,\left(1 - \sqrt{1- \frac{\hat{X}_{h}^{\,2}}{X_{c}^{\,2}}}\right) - 2\pi \hat{X}_{h} ~.
\end{gather}
The smoothness condition that extremizes the action yields
\begin{gather}
\beta_{c} = \frac{2\pi}{X_{h}}\,\sqrt{X_{c}^{\,2} - X_{h}^{\,2}}~.
\end{gather}
Inverting this expression for $X_{h}$ identifies a single smooth black hole that is present in the ensemble for all values of $\beta_c$ and $X_c$
\begin{gather}
X_{h} = \frac{2\pi \,X_c}{\sqrt{4 \pi^{2} + \beta_{c}^{\,2}}} ~,
\end{gather}
with action
\begin{gather}\label{JTSmoothAction}
\Gamma_{c}(X_{h}) = X_c\,\left(\beta_{c} - \sqrt{\beta_{c}^{\,2} + 4\pi^{2}}\right)~.
\end{gather}
This expression is always negative, which means that the smooth black hole dominates HES (with action $\Gamma_{c}(0) = 0$). Unlike the previous examples, the conical ensemble for the Jackiw-Teitelboim model always has a smooth black hole as its ground state.
The function $w(X)$ for the Jackiw-Teitelboim model satisfies the condition \eqref{ExistenceCriteria}, which means that the $X_c \to \infty$ limit of the ensemble exists. Before considering the contributions to the partition function for the ensemble with finite $X_c$, let us examine this simpler case. The action for the ensemble with the cavity wall removed is
\begin{gather}
\Gamma_{\infty}(\hat{X}_{h}) = \frac{1}{2}\,\beta_{\infty}\,\hat{X}_{h}^{\,2} - 2\pi\,\hat{X}_{h} ~,
\end{gather}
which is minimized by a smooth black hole with $X_{h} =2\pi/\beta_{\infty}$. The action at this minimum is $\Gamma_{\infty}(X_{h}) = - 2\pi^{2}/\beta_{\infty} = -2\pi^{2} T_{\infty}$, and the contributions \eqref{Zinfty} to the partition function are
\begin{gather}
\ZZ_{\infty} \simeq \int\limits_{0}^{\infty} \ns \dd\hat{X}_{h}\,\hat{X}_{h}\exp(2\pi^{2} T_{\infty}) \,\exp\left(-\frac{1}{2\,T_{\infty}}\,\hat{X}_{h}^{\,2} + 2\pi \hat{X}_{h} - 2\pi^{2} T_{\infty}\right) ~.
\end{gather}
As with the BTZ black hole, this integral can be evaluated in closed form to give
\begin{gather}\label{ZinftyJT}
\ZZ_{\infty} = \big(2\pi T_{\infty}\big)^\frac{3}{2}\,\exp(2\pi^{2}T_{\infty})\,\left(\frac{1 + \text{Erf}(\sqrt{2\pi^2 T_{\infty}})}{2}\right) + T_{\infty} ~.
\end{gather}
For the semiclassical approximation to hold, the action of the minimum should be large in units of $\hbar$, which requires $T_{\infty} \gg 1$ in natural units. In that case the entropy obtained from \eqref{ZinftyJT} is
\begin{gather}
S = 4\pi^{2}\,T_{\infty} + \frac{3}{2}\,\log(4\pi^{2} T_{\infty}) ~.
\end{gather}
The first term in this expression is the leading contribution to the entropy of the smooth black hole, and the second term represents a correction due to the contributions from conical singularities. The correction once again coincides with the result \eqref{PartitionFunction} when smooth quadratic fluctuations around the regular ground state are taken into account \cite{Grumiller:2005vy}.
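As a cross-check, the entropy can be computed numerically from \eqref{ZinftyJT} by differentiating $T_{\infty}\log \ZZ_{\infty}$; the sketch below (sample temperatures only) reproduces the leading term and the $\frac{3}{2}\log$ correction up to a $T_{\infty}$-independent constant.
\begin{verbatim}
# Entropy from the exact JT partition function vs. the quoted form.
from math import erf, exp, log, pi, sqrt

def logZ(T):        # log of Eq. (ZinftyJT), organised to avoid overflow
    a = 2*pi**2*T
    m = a + 1.5*log(2*pi*T) + log(0.5*(1 + erf(sqrt(a))))
    return m + log(1 + T*exp(-m))    # HES piece, negligible for T >> 1

def S_exact(T, h=1e-5):              # S = d(T log Z)/dT
    return ((T + h)*logZ(T + h) - (T - h)*logZ(T - h))/(2*h)

for T in (5.0, 20.0, 100.0):
    print(T, S_exact(T), 4*pi**2*T + 1.5*log(4*pi**2*T))
    # the two columns agree up to a T-independent O(1) shift
\end{verbatim}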
For finite $X_c$ the contributions \eqref{CanonZ} to the semiclassical partition function cannot be evaluated exactly, but they are approximated to a high degree of accuracy by a relatively simple function of the boundary conditions. We find
\begin{gather}\label{JTZ1}
\ZZ_{c} \simeq \exp\big(-\Gamma_{c}(X_{h})\big)\,\frac{1}{(2\pi)^\frac{3}{2} X_c}\,\left(\frac{4\pi^{2}\,X_c\,T_c}{\sqrt{1 + 4\pi^{2} T_{c}^{\,2}}}\right)^{\frac{3}{2}} ~,
\end{gather}
where $\Gamma_{c}(X_{h})$ is the action \eqref{JTSmoothAction} for the smooth black hole. The fractional error for this approximation, compared to a numerical evaluation of \eqref{CanonZ}, is shown in figure \ref{fig:JTFE}.
\begin{figure}
\centering
\includegraphics[width=4.5in]{FE-JT-FiniteXc2.pdf}
\caption{The fractional error in the approximation for $\ZZ$, as a function of $2\pi X_c$, for different ratios of horizon to cavity size. Each curve is labeled by the value of $X_h/X_c$.} \label{fig:JTFE}
\end{figure}
The entropy calculated using \eqref{JTZ1} consists of a leading term and corrections
\begin{align}\nonumber
S = &\,\,\left(\frac{\partial}{\partial T_{c}}(T_{c}\,\log\ZZ)\right)_{X_c} \\ \label{JTEntropyXc}
= &\,\, \frac{4\pi^{2}X_c\,T_c}{\sqrt{1 + 4\pi^{2} T_{c}^{\,2}}} + \frac{3}{2}\,\log\left(\frac{4\pi^{2}X_c\,T_c}{\sqrt{1 + 4\pi^{2} T_{c}^{\,2}}}\right) + \frac{3}{2\,\left(1 + 4 \pi^{2} T_{c}^{\,2}\right)} +\ldots ~,
\end{align}
where `$\ldots$' indicates terms that are independent of $T_{c}$. The first term in the entropy is the leading behavior associated with the smooth black hole, and the next two terms are the corrections from conical singularity contributions. For small values of $T_c$ \eqref{JTEntropyXc} is approximately
\begin{gather}
S \simeq \frac{4\pi^{2}X_c\,T_c}{\sqrt{1 + 4\pi^{2} T_{c}^{\,2}}} + \frac{3}{2}\,\log\left(\frac{4\pi^{2}X_c\,T_c}{\sqrt{1 + 4\pi^{2} T_{c}^{\,2}}}\right) ~,
\end{gather}
which exhibits the same form $S^{(0)} + \frac{3}{2}\log S^{(0)}$ as the ensemble with the cavity wall removed.
This is not too surprising, since $\beta_{c}/\sqrt{w_c\,e^{Q_c}} = (T_c\,X_c)^{-1}$ must be held fixed in the $X_c \to \infty$ limit, which implies $T_c \to 0$.
\subsection{Stringy black holes}
\label{subsec:stringy}
In this section we consider three models related to black holes that arise in string theory, either as solutions of the $\beta$-functions at lowest order in $\alpha'$, or as exact solutions that incorporate corrections at all orders in $\alpha'$. There is an important difference between this section and the previous ones: the stringy models must be considered with the cavity wall removed. In string theory one cannot introduce new degrees of freedom in an arbitrary manner (as we have done with the thermal reservoir), and \emph{ad hoc} cut-offs on spacetime fields (the restriction $X \leq X_c$ on the dilaton) are usually equivalent to a truncation of states on the worldsheet that spoils the consistency of the theory.
Since the models we consider all have non-compact target spaces with $X \to \infty$ asymptotically, the $X_c \to \infty$ limit is required for any calculations that are meant to be interpreted in the context of string theory. Additional discussion can be found in \cite{Grumiller:2007ju}.
It is interesting that there is an intrinsic way to identify string-like behavior within the zoo of two-dimensional dilaton gravity models. Namely, a universal property of all stringy models is that the Weyl-invariant function $w$ is linear in the dilaton field $X$ for large values of the dilaton (which means weak coupling from a target-space perspective). Linearity of $w$ is not just a technical curiosity, but has important physical implications: by virtue of the inequality \eqref{ExistenceCriteria} stringy models always exhibit a Hagedorn temperature. The existence of a Hagedorn temperature (or, equivalently, asymptotic linearity of $w$) therefore can be considered as a defining property of stringy models.
\subsubsection{Witten Black Hole}
The Witten Black Hole \cite{Witten:1991yr, Gibbons:1992rh, Nappi:1992as} is obtained from a solution of bosonic string theory with the worldsheet dynamics described by an $SL(2,\mathbb{R})/U(1)$ coset model. When the level of the worldsheet current algebra is taken to be large, the tree-level $\beta$-functions at lowest order in $\alpha'$ have a black hole solution of the form \eqref{metric}. The equations may be obtained from an action with $U(X)$ and $V(X)$ such that
\eq{
w(X)=\lambda X \qquad e^{Q(X)}=\frac{1}{\lambda\,X} \,,
}{eq:con39}
with the positive parameter $\lambda$ related to the string scale as $\lambda \sim 1/\sqrt{\alpha'}$.
The condition \eqref{ExistenceCriteria} implies that the $X_c \to \infty$ limit is only defined for this model if $\beta_{\infty} > 4\pi/\lambda$. In this limit the action is
\begin{gather}
\Gamma_{\infty}(\hat{X}_h) = 2\pi\,\hat{X}_{h}\,\left(\frac{\lambda\,\beta_{\infty}}{4\pi} - 1 \right)~.
\end{gather}
The smoothness condition that extremizes the action gives a single value of $\beta_{\infty}$ for which a smooth black hole exists:
\begin{gather}
\beta_{\infty} = \frac{4\pi}{\lambda} ~.
\end{gather}
This corresponds to the Hagedorn temperature for the model, at which point contributions from states with a conical singularity at the horizon cause the partition function to diverge. Therefore the $X_c \to \infty$ limit of this model does not admit an ensemble containing a smooth black hole. However, one can see from \eqref{eq:con39} that black hole solutions would have scalar curvature of order $1/\alpha'$ near the horizon, indicating that $\alpha'$ corrections are important. In the next section we will consider a model that takes these corrections into account and always has a black hole ground state.
Before moving on it is worth considering two more aspects of the model \eqref{eq:con39}. First, the partition function for this model is well-defined in the $X_c \to \infty$ limit as long as $\beta_{\infty}>4\pi/\lambda$, in which case the ground state is HES. The integral \eqref{Zinfty} can be directly evaluated, and gives
\begin{gather}\label{WittenLowTemp}
\ZZ_{\infty} = \frac{1}{\beta_{\infty}\,\lambda - 4\pi} ~,
\end{gather}
where the pole at $\beta_{\infty} = 4\pi/\lambda = 1/T_{H}$ reflects the Hagedorn temperature $T_{H} = \lambda/4\pi$. Second, despite problems with implementing a finite $X_c$ cut-off in string theory, one might consider this model as an example of a dilaton gravity where the contributions to the partition function can be calculated exactly for finite $X_c$. The action is
\begin{gather}\label{WittenFiniteXcAction}
\Gamma_{c}(\hat{X}_{h}) = \beta_{c}\,\lambda\,X_{c}\,\left(1 - \sqrt{1- \frac{\hat{X}_{h}}{X_c}}\,\right) - 2\pi \hat{X}_{h} ~,
\end{gather}
with local extrema $X_{h}$ given by the smoothness condition
\begin{gather}
\beta_{c} = \frac{4\pi}{\lambda}\,\sqrt{1- \frac{X_{h}}{X_c}} ~.
\end{gather}
The ensemble contains a single smooth black hole if $0<\beta_c<4\pi/\lambda$, and the action for this configuration is always negative. Thus, there are two phases: a HES ground state for $\beta_c>4\pi/\lambda$, and a smooth black hole ground state for $0<\beta_c<4\pi/\lambda$. The contributions to the partition function from conical singularities can be calculated exactly in either phase by expressing the action in terms of $\beta_c$ and $\hat{E}_{c}$
\begin{gather}
\Gamma_{c}(\hat{X}_{h}) = \left(\beta_c - \frac{4\pi}{\lambda}\right)\,\hat{E}_{c} + \frac{2\pi}{\lambda^{2}\,X_c}\,\hat{E}_{c}^{\,2} ~.
\end{gather}
Then the integral \eqref{CanonZ} is given in terms of exponentials and error functions. The exact form is not especially enlightening, but the behavior simplifies when $X_c \gg 1$. In the low temperature phase $\beta_c>4\pi/\lambda$ we obtain essentially the same result as \eqref{WittenLowTemp}
\begin{gather}
\ZZ_{c} \simeq \frac{1}{\beta_{c}\,\lambda - 4\pi} \quad \quad (X_c \gg 1)~.
\end{gather}
At high temperatures, $0< \beta_c < 4\pi/\lambda$, the smooth black hole dominates and the contributions to \eqref{CanonZ} are approximately
\begin{gather}\label{ZcWitten}
\ZZ_{c} \simeq \frac{\sqrt{X_c}}{2\sqrt{2}}\,\exp\left(2\pi X_c\,\left(1- \frac{\beta_{c}\,\lambda}{4\pi}\right)^{2}\right) \quad \quad (X_c \gg 1) ~,
\end{gather}
where the factor in the exponential is minus the action for the smooth black hole. The free energy is $-T_{c}\,\log \ZZ_{c}$, and the resulting entropy is
\begin{gather}\label{FiniteWittenEntropy}
S = 2\pi X_{h} + \log \left(\frac{\sqrt{X_c}}{2\sqrt{2}}\right) ~.
\end{gather}
Unlike the previous examples, the correction does not appear to be proportional to $\log S^{(0)}$. However, the approximate result \eqref{ZcWitten} assumes $X_c \gg 1$, and this assumption must be treated carefully since the model does not have an $X_c \to \infty$ limit above the Hagedorn temperature $\lambda/4\pi$. The large-$X_c$ result \eqref{FiniteWittenEntropy} can only be trusted if $\beta_c$ remains much less than $4\pi/\lambda$, which implies that $X_h/X_c$ must be close to $1$. Then \eqref{FiniteWittenEntropy} becomes
\begin{gather}\label{FiniteWittenEntropy2}
S \simeq 2\pi X_{h} + \frac{1}{2}\log (2\pi X_h) + \mathcal{O}(1)~.
\end{gather}
This is, in a sense, the expected result: the functions \eqref{eq:con39} that define this model may also be thought of as the $d\to\infty$ limit of the Schwarzschild model, and \eqref{FiniteWittenEntropy2} is indeed the $d\to\infty$ limit of \eqref{SchwarzS1}. Note, though, that the inverse specific heat for the Schwarzschild model goes to zero in this limit, so perhaps a better interpretation of \eqref{FiniteWittenEntropy2} is as the leading term in a $1/d$ expansion for large but finite $d$.
\subsubsection{Exact string black hole}
The black hole background with $\alpha'$ corrections taken into account was studied in \cite{Dijkgraaf:1992ba}. The worldsheet theory is described by an $SL(2,\mathbb{R})/U(1)$ gauged WZW model with level $k>2$. As shown in \cite{Grumiller:2005sq}, the exact string black hole corresponds to a dilaton gravity model with
\begin{gather}
w(X) = 2b\,\left(\sqrt{1 + \gamma^{2}} - 1 \right) \quad \quad e^{Q(X)} = \frac{1}{2b\,\left(\sqrt{1 + \gamma^{2}} + 1 \right)} ~,
\end{gather}
where the field $\gamma$ is related to the conventionally defined dilaton $X$ by
\begin{gather}
X = \gamma + \sinh^{-1}\gamma ~.
\end{gather}
The parameter $b$ depends on both the level of the model and the string tension
\begin{gather}
b = \frac{1}{\sqrt{\alpha'}\sqrt{k-2}} ~.
\end{gather}
For a critical string theory with a target space of dimension $D$, it satisfies the condition
\begin{gather}\label{CentralChargeCondition}
D - 26 + 6\,\alpha'\,b^{2} = 0 ~.
\end{gather}
Normally this would fix $k$ at a specific value ($k_\ts{crit}=\frac{9}{4}$ for a critical string theory in two dimensions), but as in \cite{Kazakov:2001pj} we will assume that extra matter fields are present that contribute to the central charge. This modifies the condition \eqref{CentralChargeCondition}, which has the effect of allowing us to consider other values of $k$. In practice, $b$ is treated as a fixed parameter and the level takes values in the range $2 < k < \infty$.
The value of the field $\gamma$ at the horizon is related to the level of the CFT by
\begin{gather}
\gamma_{h} = \sqrt{k(k-2)} \quad \rightarrow \quad X_{h} = \sqrt{k(k-2)} + \sinh^{-1}\big(\sqrt{k(k-2)}\,\big) ~.
\end{gather}
The smoothness condition with the cavity wall removed has solutions for any value of $k>2$
\begin{gather}\label{BetaESBH}
\beta_{\infty} = \frac{2\pi}{b}\,\sqrt{\frac{k}{k-2}} ~,
\end{gather}
corresponding to a black hole of mass
\begin{gather}
M = b\,(k-2) ~.
\end{gather}
As expected, this black hole is always the ground state of the theory. The on-shell action for the black hole is explicitly negative for all $k>2$
\begin{gather}\label{ESBHaction}
\Gamma_{\infty}(M) = -\frac{1}{4 G_2}\,\sinh^{-1}\big(\sqrt{k(k-2)}\,\big)~,
\end{gather}
where we have restored factors of the two-dimensional Newton's constant $G_2$. Computing the entropy from the leading term in the free energy $F_{\infty}^{(0)} = T_{\infty}\,\Gamma_{\infty}$ gives
\begin{gather}\label{ESBHentropy0}
S^{(0)} = \frac{X_{h}}{4 G_2} = \frac{1}{4 G_2}\,\sqrt{k(k-2)} + \frac{1}{4 G_2}\,\sinh^{-1}\big(\sqrt{k(k-2)}\,\big) ~.
\end{gather}
Now, according to the approximation in section \ref{sec:PartitionFunction}, the contributions to the partition function from configurations with a conical singularity give
\begin{gather}
\ZZ_{\infty} \simeq \exp(-\Gamma_{\infty}(M))\,\frac{\sqrt{2 \pi \, C_{\infty}}}{\beta_{\infty}} ~,
\end{gather}
which suggests a sub-leading correction to the free energy of the form
\begin{gather}
F_{\infty}^{(1)} \simeq -T_{\infty}\,\log\left(T_{\infty}\,\sqrt{2 \pi\, C_{\infty}}\right)
= - T_{\infty}\,\log\left( k^{\frac{1}{4}}\,(k-2)^{\frac{3}{4}}\right) ~,
\end{gather}
and consequently a correction to the entropy of the form
\begin{gather}
S^{(1)} = \log\left(k^{\frac{1}{4}}\,(k-2)^{\frac{3}{4}}\right) + k - \frac{1}{2} ~.
\end{gather}
Since we are considering a solution of the genus-zero $\beta$-functions, we must take $G_2 \ll 1$ to ensure that the string coupling is small for any value of $k$. Then, in the semiclassical limit $k \gg 1$ the results for $S^{(0)}$ and $S^{(1)}$ simplify
\begin{align}
S^{(0)} \simeq & \,\, \frac{1}{4\,G_2}\,\big(\,k + \log k\,\big) + \mathcal{O}\left(\frac{1}{G_2}\right)\\
S^{(1)} \simeq & \,\, k + \log k + \mathcal{O}(1)
\end{align}
We arrive at an interesting result: the corrected entropy takes the form
\begin{gather}
S \simeq \left(\frac{1}{4\,G_2} + 1\right)\,\left(\vphantom{\Big|} k + \log k + \mathcal{O}(1)\right) ~.
\end{gather}
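The $k \gg 1$ behaviour used above is easy to verify; the following sketch shows that $X_{h} - (k + \log k)$ approaches the constant $\log 2 - 1$, so that $X_{h} \simeq k + \log k + \mathcal{O}(1)$ as claimed.
\begin{verbatim}
# Asymptotics of the exact string black hole horizon (sketch).
from math import asinh, log, sqrt

for k in (10.0, 100.0, 1000.0, 10000.0):
    Xh = sqrt(k*(k - 2)) + asinh(sqrt(k*(k - 2)))
    print(k, Xh - (k + log(k)))   # tends to log(2) - 1 ~ -0.307
\end{verbatim}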
\subsubsection{2D type 0A black holes}
The genus-zero beta functions of type 0A string theory have a solution at leading order in $\alpha'$ that describes a two-dimensional black hole with constant Ramond-Ramond flux \cite{Berkovits:2001tg}. This can be viewed as a solution of a model with functions $w$ and $e^{Q}$ given by
\begin{gather}
w(X) = \lambda\,X - \lambda\,q^{2}\,\log X \quad \quad e^{Q(X)} = \frac{1}{\lambda\,X} ~,
\end{gather}
where $\lambda \sim 1/\sqrt{\alpha'}$ is positive, and $q$ is proportional to the flux of each of the two Ramond-Ramond gauge fields\,\footnote{Both Ramond-Ramond gauge fields have the same flux $q_\ts{R}/2\pi\alpha'$, in the conventions of \cite{Douglas:2003up}. This is related to the parameter in $w(X)$ by $q = q_\ts{R}/\sqrt{16\pi}$. }.
The ratio $w(X)/X$ approaches $\lambda$ in the limit $X \to \infty$, indicating that the ensemble is only defined at temperatures below $T_{H} = \lambda/4\pi$ when the cavity wall is removed. This is the same Hagedorn temperature as the Witten black hole model, but unlike that model the ensemble now contains a smooth black hole. The smoothness condition gives
\begin{gather}\label{0ATemp}
T_{\infty} = T_{H}\,\frac{X_{h} - q^{2}}{X_{h}} ~,
\end{gather}
which identifies a single black hole $X_h$ for any $T_{\infty}$ below the Hagedorn temperature
\begin{gather}\label{0Ahorizon}
X_{h} = \frac{q^{2}\,T_{H}}{T_{H}- T_{\infty}} ~.
\end{gather}
Combined with the upper limit on the temperature, this result implies that the dilaton at the horizon satisfies $X_h > q^{2}$. Indeed, it turns out that $q^2$ sets a lower bound on the dilaton at the horizon for any configuration in the ensemble, with or without a conical singularity. This is due to the fact that $w(X)$ has a minimum at $X = q^{2}$; the requirement $\hat{\xi}(X) > 0$ for $X > \hat{X}_{h}$ then implies $\hat{X}_{h} \geq q^{2}$.
The action for the smooth black hole is \cite{Davis:2004xi}
\begin{gather}\label{0Aaction}
\Gamma_{\infty}(X_{h}) = 2\pi\,q^{2}\,\frac{T_{H}}{T_{\infty}}\,\left(1 - \log q^{2} + \log\left(1 - \frac{T_{\infty}}{T_{H}}\right) \right) ~,
\end{gather}
and the action for a configuration with a conical singularity is
\begin{gather}
\Gamma_{\infty}(\hat{X}_{h}) = 2\pi \hat{X}_{h}\,\frac{T_{H}}{T_{\infty}}\,\left(1 - \frac{T_{\infty}}{T_{H}}\right) - 2\pi\,q^{2}\,\frac{T_{H}}{T_{\infty}} \, \log \hat{X}_{h} ~.
\end{gather}
To determine the ground state, we must compare the action for the smooth black hole to the action for the configuration with $\hat{X}_{h}$ taking the minimum value $\hat{X}_{h} = q^{2}$. Their difference is
\begin{gather}
\Gamma_{\infty}(X_h) - \Gamma_{\infty}(\hat{X}_h=q^{2}) = 2\pi q^{2}\,\frac{T_{H}}{T_{\infty}}\,\left(\log\left(1 - \frac{T_{\infty}}{T_{H}}\right) + \frac{T_{\infty}}{T_{H}}\right) ~,
\end{gather}
which is always negative, since the ratio $T_{\infty}/T_{H}$ is less than one. Thus, the ground state of the ensemble is always the smooth black hole \eqref{0Ahorizon}.
Given the action \eqref{0Aaction} for the smooth black hole, the leading contribution to the free energy is $F_{\infty}^{(0)} = T_{\infty}\,\Gamma_{\infty}(X_h)$. The resulting entropy is
\begin{gather}
S^{(0)} = 2\pi X_{h} = \frac{2\pi q^{2}\,T_{H}}{T_{H} - T_{\infty}} ~.
\end{gather}
We may now consider corrections from configurations with conical singularities. The integral \eqref{Zinfty} can be evaluated exactly using incomplete gamma functions, but for our purposes the approximation \eqref{ZinftyApprox} is sufficient
\begin{gather}
\ZZ_{\infty} \simeq \exp(-\Gamma_{\infty}(X_h)) \, \frac{2\pi q\,T_{\infty}^{\,\frac{3}{2}}T_{H}^{\,\frac{1}{2}}}{T_{H} - T_{\infty}} ~.
\end{gather}
The contribution to the free energy is
\begin{gather}
F_{\infty}^{(1)} = - T_{\infty}\,\log\left(\frac{2\pi q\,T_{\infty}^{\,\frac{3}{2}}T_{H}^{\,\frac{1}{2}}}{T_{H} - T_{\infty}}\right)~,
\end{gather}
and the contribution to the entropy is
\begin{gather}
S^{(1)} = - \frac{\partial F_{\infty}^{(1)}}{\partial T_{\infty}} = \log\left(\frac{2\pi q\,T_{\infty}^{\,\frac{3}{2}}T_{H}^{\,\frac{1}{2}}}{T_{H} - T_{\infty}}\right) + \frac{T_{\infty}}{T_{H} - T_{\infty}} + \frac{3}{2} ~.
\end{gather}
As in the previous two examples, the relation between $S^{(1)}$ and $S^{(0)}$ is not clear until we consider the conditions for the semiclassical approximation. With the cavity wall removed, the semiclassical limit is obtained by taking $q^{2} \gg 1$ with $X_{h}$ near its lower bound $q^{2}$ (i.e.\ at temperatures $T_{\infty} \ll T_{H}$), consistent with the bound $X_{h} \geq q^{2}$ derived above. In that case $S^{(1)}$ becomes
\begin{gather}
S^{(1)} \simeq \frac{1}{2}\,\log S^{(0)} + \mathcal{O}\left(\frac{X_{h}}{q^{2}}\right) ~,
\end{gather}
which is the same general form as the Witten model at finite $X_c$.
\section{Discussion and outlook}
\label{sec:4}
We have considered black holes in an unsuitable box -- a cavity coupled to a thermal reservoir at a temperature that is in general different from the Hawking temperature -- and studied the thermodynamics of a `conical ensemble' that includes these spaces alongside the more conventional smooth field configurations. The focus was on black holes that allow an effective description in terms of 2-dimensional dilaton gravity, including Schwarzschild, Schwarzschild-AdS, Jackiw-Teitelboim and various stringy black holes. We demonstrated that smooth solutions of the equations of motion are locally (perturbatively) stable against singular configurations with a small angular deficit or surplus, proved that the ground state of the conical ensemble never exhibits a conical singularity, and calculated corrections to the entropy and free energy for several pertinent examples.
In all the examples we considered, configurations with a conical singularity result in corrections to the entropy that take the same form as generic logarithmic corrections from thermal (and in some cases also quantum) fluctuations. In fact, our results can be compared with previous results for entropy corrections from these sorts of fluctuations if the following caveats are taken into account:
\begin{enumerate}
\item In many cases existing results have been obtained in the microcanonical ensemble. Translating our results, derived in the canonical ensemble, into corrections for the microcanonical entropy changes the sign of the coefficient of the logarithmic term.
\item Matter interactions and non-spherical excitations have been neglected, so naturally we can compare only with results where these contributions are switched off.
\item In the Schwarzschild model the partition function does not exist in the $X_c \to \infty$ limit, so our results are only meaningful for a finite cavity.
\end{enumerate}
With these caveats in mind, we can present the Schwarzschild result \eqref{SchwarzS1} also as a correction to the microcanonical entropy (which coincides at leading order with the canonical entropy $S^{(0)}$)
\eq{
S_{\ts{mc}}^{\ts{Schwarzschild}} = S^{(0)} + \frac{1}{d-1}\,\log S^{(0)}\,\big(C_{\ts{local}} - \tfrac12\,(d-3) + C_{U(1)}\big) ~.
}{eq:Smc}
Here $C_{\ts{local}}$ refers to all matter fields and graviton excitations (basically their contributions to the trace anomaly) and $C_{U(1)}$ is a separate contribution from $U(1)$ gauge fields; our simple approach is not sensitive to either of these contributions. The result \eqref{eq:Smc} agrees precisely with Eq.~(1.4) in \cite{Sen:2012dw} when matter fields are switched off and the graviton excitations are frozen\,\footnote{The ensemble in which that result was calculated is called `mixed ensemble' in the notation of \cite{Sen:2012dw}, but it really corresponds to what we call here `microcanonical', since by construction we neglect angular momentum and are thus only sensitive to the s-wave (or $J=0$) contributions. Therefore, this is the appropriate ensemble to compare with. The 4-dimensional microcanonical result contains an additional contribution coming from the Cartan generators of the rotation group, to which our analysis naturally is blind.}. Likewise, the microcanonical analogs of the entropy corrections \eqref{AdSentropy} for the AdS-Schwarzschild model and \eqref{BTZentropy} for the BTZ black hole are the same as the results obtained in \cite{Das:2001ic}. We consider this to be a consequence of the semiclassical approximation, where the leading corrections to the partition function are given by a Gaussian integral. In both cases -- conical singularities and thermal fluctuations -- the coefficient of the quadratic term in the exponent is proportional to $1/C_c$ (or $1/C_\infty$), leading to similar corrections.
In our analysis we have only taken into account configurations with a single conical defect, located at the black hole horizon. A possible generalization is the inclusion of multiple conical defects, which are not necessarily located at the horizon. This is challenging for at least two reasons. First, the existence and description of multiple conical singularities on a given space is an open problem that depends on curvature bounds and other model-specific quantities \cite{troyanov1991prescribing,Troyanov2007}. Second, the action \eqref{Action} is not suitable for this purpose. We have assumed that field configurations in the ensemble exhibit the same symmetries as the cavity. From the point of view of a higher dimensional theory, the cavity is spherical and elements of the ensemble are spherically symmetric. Including less symmetric configurations in the ensemble requires a more general action that contains terms not present in \eqref{Action}, and ignoring the contributions from these terms leads to nonsensical results. For instance, it is tempting to try and study a conical singularity somewhere between the horizon of a smooth black hole and the cavity wall by replacing $-\hat{X}_{h}\,\alpha$ in \eqref{Actionc} with $-X_{d}\,\alpha$, for some $X_{h} < X_d < X_c$. In any model that admits an $X_c \to \infty$ limit, this gives an action that is unbounded below for $\alpha>0$. But this pathological behavior is simply the result of neglecting important contributions to the action; there is no catastrophic instability that suddenly produces conical singularities at large $X_d$\,.
One obvious shortcoming of our analysis is that it applies only to black holes that are symmetric enough to allow an effective 2-dimensional description in terms of a dilaton gravity model with action \eqref{Action}. It would be of interest to lift our results to higher dimensions, particularly to three and four dimensions. As an example of what one could learn from such a generalization let us focus on three dimensions. A few years ago the interest in 3-dimensional (quantum) gravity was rekindled, see e.g.~\cite{Witten:2007kt,Li:2008dq,Carlip:2008jk,Grumiller:2008qz}. In particular, Maloney and Witten showed that the Euclidean partition function of pure Einstein gravity with a negative cosmological constant is not a sensible CFT partition function and does not factorize holomorphically \cite{Maloney:2007ud}. They arrived at their result by taking into account all known contributions to the Euclidean partition function on the gravity side, assuming smooth metrics, and speculated (among other logical possibilities) that the partition function could be made sensible by taking into account configurations with a conical defect. Given the results of the present work this option does not seem likely: we have demonstrated in all explicit examples that the leading contribution from conical defects to the partition function behaves in the same way as the leading contributions from thermal or quantum fluctuations. If it remains true in the presence of conical defects that the partition function of 3-dimensional Einstein gravity is 1-loop exact, this means that the qualitative features of the partition function are unlikely to change dramatically upon inclusion of conical defects. It would be of interest to demonstrate this explicitly by lifting our results to three dimensions.
\acknowledgements
We thank Roman Jackiw, Rob Myers and Rafael Sorkin for useful discussions and feedback on an early version of this work. DG is supported by the START project Y435-N16 of the Austrian Science Fund (FWF). RM is supported by a Faculty Research Stipend from Loyola University Chicago. SZ benefits from a PhD research grant of the Institut Interuniversitaire des Sciences Nucl\'eaires (IISN, Belgium); his work is supported by the Belgian Federal Office for Scientific, Technical and Cultural Affairs through the Interuniversity Attraction Pole P6/11. DG and RM thank the Perimeter Institute and the Center for Theoretical Physics at MIT for hospitality and support during the early stages of this work. Finally, RM would like to acknowledge the birth of his wonderful daughter Willa. Her arrival in February 2012 provided him with his best excuse yet for not completing a project on schedule.
\bibliographystyle{apsrev}
\section{Introduction}
The discovery of the Higgs boson at the LHC~\cite{Aad:2012tfa,Chatrchyan:2012xdj} completed the observation of all fundamental degrees of freedom predicted by the Standard Model (SM) of particle interactions.
Nevertheless, it is widely believed that the SM suffers from a series of shortcomings, related to the stability of the electroweak scale and the absence of a candidate for the dark matter (DM) of the Universe, for instance. Solutions to these problems require extending the present theoretical framework to include new degrees of freedom, possibly relevant at
the energy scales probed by colliders in the near future.
The LHC Run 2 at 13 TeV collision energy provides the potential to probe physics at shorter distances compared to LHC Run 1 at 7~TeV and 8~TeV.
The exploration has just started with about 4 fb$^{-1}$ of integrated luminosity delivered to the ATLAS and CMS experiments, beginning in June 2015.
Searching for two-particle resonances is an especially adequate way to look for new physics manifestations when new thresholds in collision energies are reached.
Both ATLAS and CMS are presently analysing the new data sets, and trying to get the most out of the very first run at 13 TeV.
In a recent CERN seminar both ATLAS~\cite{ATLAS} and CMS~\cite{CMS} presented a photon pair excess with an invariant mass at about 750 GeV, with a {\it local} significance varying (depending on the narrow- or wide-width assumption) in the range 2.6 to 3.9~$\sigma$.
The signal also exhibits some compatibility with the photon-pair studies of Run 1 data by both ATLAS and CMS.
Assuming that the observed diphoton excess is due to a new resonance, CMS provides a combination of Run 1 plus Run 2 data for its production cross section
times branching fraction into photons to be $4.5\pm 1.9$~fb~\cite{CMS}. The corresponding ATLAS result for 13~TeV was estimated to be
$10.6\pm2.9$~fb~\cite{DiChiara:2015vdm}.
While further data will be needed in order to clarify whether the observed excess is robust, it is exciting to assume that the di-photon excess is really pointing to
the existence of new physics below a scale of 1~TeV, and to try to determine which kind of SM extension can predict such an effect.
Presently, no anomaly in any other final state has been detected~\cite{ATLAS,CMS}, which severely restricts any realistic explanation of the excess.
The most natural interpretation of the observed diphoton excess is due to the decays of a singlet scalar $S$ into photons,
$S\to\gamma\gamma$~\cite{DiChiara:2015vdm,Franceschini:2015kwy,Knapen:2015dap,Buttazzo:2015txu,Backovic:2015fnp,Mambrini:2015wyu,Gupta:2015zzs,Ellis:2015oso,McDermott:2015sck,Dutta:2015wqh,Cao:2015pto,Kobakhidze:2015ldh,Martinez:2015kmn,Bian:2015kjt,Chakrabortty:2015hff,Falkowski:2015swt,Bai:2015nbs}.\footnote{Solutions with pseudoscalars have also been considered in~\cite{DiChiara:2015vdm,Pilaftsis:2015ycr,Low:2015qep,Higaki:2015jag,Molinaro:2015cwg,Becirevic:2015fmu}.}
The existence of light scalars much below the cut-off scale of the SM (such as the Planck scale) requires some mechanism to protect their masses
against radiative corrections from the cut-off scale. By far the most popular solution to the hierarchy problem is supersymmetry.
However, in recent years supersymmetry has lost some of its appeal because of the severe experimental constraints from the LHC~\cite{ATLAS,CMS}.
In the context of the diphoton excess the conventional supersymmetric models, such as the Minimal Supersymmetric Standard Model (MSSM), have several shortcomings.
Firstly, the excess cannot be explained within the MSSM alone~\cite{DiChiara:2015vdm,Buttazzo:2015txu,Angelescu:2015uiz,Gupta:2015zzs},
and the framework must be extended to accommodate the new signal. This points into the direction of the Next-to-Minimal Supersymmetric Standard Model (NMSSM)
rather than the MSSM. Secondly, most of the diphoton excess studies so far have assumed the existence of heavy coloured vectorlike fermions that, at one loop, induce singlet scalar couplings to gluons and photons~\cite{Franceschini:2015kwy,Knapen:2015dap,Pilaftsis:2015ycr,Buttazzo:2015txu,Angelescu:2015uiz,Gupta:2015zzs,Ellis:2015oso,McDermott:2015sck,Kobakhidze:2015ldh,Martinez:2015kmn,No:2015bsn,Chao:2015ttq,Fichet:2015vvy,Curtin:2015jcv,Falkowski:2015swt}.
Such coloured fermions, however, do not exist in the particle content of any supersymmetric extension
of the SM. In addition, new coloured fermions are severely constrained by LHC searches and must be very heavy~\cite{ATLAS,CMS}.
Therefore, extending the non-minimal supersymmetric
models further with charged and coloured vectorlike fermions implies that the model that is supersymmetrised is not the SM. The model becomes unnecessarily
complicated without any obvious need for these specific new particles.
In this work we argue that the diphoton excess hints at the existence of relatively light coloured and charged scalars.
First, these particles, {\it the squarks}, do exist in any supersymmetric extension of the SM and there is no need to
extend the model with new {\it ad hoc} particles.
Second, the LHC constraints on coloured scalar masses are much less stringent than on coloured fermions, such as gluinos.\footnote{The only exception is strongly coupled scalar diquarks~\cite{Ma:1998pi}, exotic scalars coupled to two valence quarks, which should produce quark-quark resonances at the LHC and
whose masses are, therefore, constrained to be above 6~TeV~\cite{Khachatryan:2015dcf}. The models with coloured scalars in the loop
presented in Ref.~\cite{Knapen:2015dap} are based on diquarks that are not superpartners of quarks. Also, their model does not contain dark matter candidates.}
These arguments allow for relatively light squarks in the loops generating $gg\to S\to\gamma\gamma$ that are potentially observable
at the LHC in coming years, rendering this scenario directly testable.
Third, one of the favourable features of supersymmetric models is the existence of dark matter (DM) that comes for free as the lightest neutral superpartner of
gauge bosons and scalars. We use these arguments to address the LHC diphoton excess.
Motivated by the above mentioned good features of supersymmetric theories, we propose a supersymmetry inspired
simplified model that is able to explain the diphoton excess consistently with all other LHC results and with the existence of DM.
Although minimal by construction, and therefore not supersymmetric by itself, this model uses the particle content of the NMSSM
and can be embedded into the latter.\footnote{Alternatively, the singlet could be
a sgoldstino~\cite{Petersson:2015mkr,Bellazzini:2015nxw,Demidov:2015zqn}, the superpartner of supersymmetry breaking goldstone.}
Therefore, the mass spectrum of this type of supersymmetric models must be very different from the ones predicted by simple supersymmetry
breaking scenarios.
We study this effective model carefully and show that the requirement of a physical, charge and colour conserving vacuum constrains the allowed mass parameters from above, rendering the model testable or falsifiable by collider experiments. In the context of the simplified model this statement means that the effective theory
breaks down and the new supersymmetric degrees of freedom must appear to cure the model. Therefore, if verified, our framework {\it predicts} the existence
of new supersymmetric particles at the reach of next collider runs.
Thus the di-photon excess may change our present understanding of the supersymmetry breaking patterns and the role
of scalars in supersymmetric models.
\begin{figure}[t]
\begin{center}
\includegraphics{Stofinal}
\caption{Leading order contributions to the main decay modes of $S$.}
\label{fig:Sdec}
\end{center}
\end{figure}
\section{The SUSY Inspired Simplified Model}
We construct a supersymmetry inspired simplified model that produces a narrow scalar resonance $gg \to S \to \gamma \gamma$. This resonance is necessarily a singlet under the SM gauge group. As shown in Fig.~\ref{fig:Sdec}, its interactions with photons and gluons are therefore induced at loop level by another scalar field $\tilde{Q}$ that is coloured and carries hypercharge, which we assume to be $q_{\tilde{Q}} = 2/3$. $\tilde{Q}$ is therefore identical to the well known right-handed up-type squark that transforms in the fundamental representation of $SU(3)$ but is a singlet under $SU(2)$.
To avoid any conflict with LHC phenomenology and cosmological and astrophysical observations, the squark $\tilde{Q}$ is required to be unstable.
As in the supersymmetric extension of the SM, we take it to decay into a quark and a neutralino-like fermion $\chi_{0}$, which is the dark matter candidate in our scenario.
Thus we consider a minimal extension of the SM with the real singlet scalar field $S$, three generations of `squarks' $\tilde{Q_{i}}$ and the `neutralino' $\chi_{0}$. Obviously, the model with this particle-content is by itself not supersymmetric, but requires embedding into a supersymmetric theory. The general Lagrangian for the given particle sector contains the following terms
\begin{align}
{\cal L}_{\rm kin} & = |D_{\mu} \tilde{Q}_{i}|^{2} + \frac{1}{2}(\partial_{\mu} S) (\partial^{\mu} S)
\\
& - M^2_{\tilde{Q}} |\tilde{Q_{i}}|^{2} - \frac{1}{2} M^2_{S0} S^{2}
+ \frac{1}{2} \bar{\chi}_{0}(\slashed{\partial} - m_{\chi_{0}})\chi_{0}, \notag
\\
{\cal L}_{\rm dec}
& = \frac{1}{2} y_{\chi} S \chi^{T}_{0}\chi_{0}
+ (y_{i} \tilde{Q_{i}}^{\dagger}\chi^{T}_{0} U_{iR} + \text{h.c.}),
\\
\label{Lag_scalar}
{\cal L}_{\rm scalar}
& =
- \mu_{\tilde{Q}} S |\tilde{Q_{i}}|^{2}
- \frac{\mu_{S}}{3} S^{3}
- \frac{\lambda_{S}}{4} S^{4}
\\
&
- \lambda_{S\tilde{Q}} S^{2} |\tilde{Q_{i}}|^{2}
- \lambda_{\tilde{Q}} |\tilde{Q_{i}}|^{2} |\tilde{Q_{j}}|^{2} \notag
\\
& - \lambda'_{\tilde{Q}} (\tilde{Q}_{i}^{\dag} \tilde{Q}_{j}) (\tilde{Q}_{j}^{\dag} \tilde{Q}_{i}), \notag
\\
{\cal L}_{H}
& =
- (\mu_{H} S + \lambda_{HS} S^{2}) H^{\dag} H \\
& - \lambda_{\tilde{HQ}} (\tilde{Q_{i}}^{\dag} \tilde{Q_{i}}) (H^{\dag} H), \notag
\end{align}
with the covariant derivative $D_{\mu}=\partial_{\mu}+g_sT^a G^a_{\mu} +e q_{\tilde{Q}} A_{\mu}$, where $G^a_{\mu}$ is the gluon and $A_{\mu}$ is the photon field, and we sum over the generation indices $i$ for $\tilde{Q}_{i}$. We assume a flavour symmetry to forbid any other terms involving $\tilde{Q}_{i}$. Eq.~\eqref{Lag_scalar} contains the interactions among the two scalars, most importantly the first term with the coupling $\mu_{\tilde{Q}}$ which has dimensions of mass.
We require that $M_{\chi}, M_{\tilde{Q}} > M_{S}/2 $ to forbid tree level decays of the $S$ resonance. Also, instability of $\tilde{Q}$ dictates that $M_{\tilde{Q}} > M_{\chi}$. This choice has the benefit of providing a dark matter candidate -- the neutralino $\chi_{0}$.
It has recently been shown~\cite{Backovic:2015fnp,Mambrini:2015wyu}
that in such a setup the observed amount of DM can be produced from thermal freeze-out analogously to the MSSM.
Thus the DM in our scenario is a thermal relic in the form of the stable neutralino $\chi_{0}$. To satisfy the observation $\Omega_{\rm DM} h^{2}\sim 0.1$,
the neutralino mass must be ${\cal O}(300)$~GeV~\cite{Backovic:2015fnp,Mambrini:2015wyu}, implying a somewhat compressed spectrum.
The latter implies that the model is not severely constrained by the LHC searches for squark pair production for the final state of two jets and missing energy~\cite{Aad:2015iea}.
As we have already commented, this model does not fit into the MSSM but requires some extended supersymmetric model, the NMSSM being
the simplest of them. We note that the mass spectrum of such a model must feature light scalars while the gluino must be heavy to comply with the LHC bound.
Since our study is phenomenological, we just assume this supersymmetry breaking pattern.
\section{Conditions for a Physical Vacuum}
We consider the conditions for the vacuum of the model not to break colour and electric charge. We need to ensure the following:
\begin{enumerate}
\item The potential is bounded from below in the limit of large field values.
\item The squarks $\tilde{Q}_{i}$ do not get VEVs, which would break colour and electric charge. The true vacuum should be at $S = 0$ and $\tilde{Q} = 0$, therefore the potential has to be positive everywhere else,\footnote{The Higgs portal couplings are already strongly constrained~\cite{Falkowski:2015swt} and we neglect them for phenomenological reasons. We also assume that $\mu_{H} \simeq 0$ to prevent a large decay width of $S$ into Higgs bosons.}
\begin{equation}
V(S \neq 0, \tilde{Q} \neq 0) > 0.
\end{equation}
\item $S$ does not get a VEV: a non-zero VEV for $S$ would shift the mass of $S$ away from $M_{S}$.\footnote{A VEV for $S$ would also generate large contribution to the mass of the squark, which would need to be fine-tuned.}
\end{enumerate}
The potential must be bounded from below in order for a finite minimum of potential energy to exist. In the limit of large field values, we can ignore the dimensionful terms in the scalar potential. The full bounded-from-below conditions can be found via co-positivity constraints on the quartic part of the scalar potential \cite{Kannike:2012pe}:
\begin{align}
& \lambda_{S} > 0, \quad \lambda_{\tilde{Q}} + \theta(-\lambda'_{\tilde{Q}}) \lambda'_{\tilde{Q}} > 0, \quad \lambda_{H} > 0, \\
& \bar{\lambda}_{SQ} \equiv 2 \sqrt{\lambda_{S} [\lambda_{\tilde{Q}} + \theta(-\lambda'_{\tilde{Q}}) \lambda'_{\tilde{Q}}]} + \lambda_{S\tilde{Q}} > 0, \\
& \bar{\lambda}_{HQ} \equiv 2 \sqrt{\lambda_{H} [\lambda_{\tilde{Q}} + \theta(-\lambda'_{\tilde{Q}}) \lambda'_{\tilde{Q}}]} + \lambda_{H\tilde{Q}} > 0, \\
& \bar{\lambda}_{HS} \equiv 2 \sqrt{\lambda_{H} \lambda_{S}} + \lambda_{HS} > 0, \\
& \lambda_{HS} \sqrt{\lambda_{\tilde{Q}} + \theta(-\lambda'_{\tilde{Q}}) \lambda'_{\tilde{Q}}} + \lambda_{H\tilde{Q}} \sqrt{\lambda_{S}} + \lambda_{S\tilde{Q}} \sqrt{\lambda_{H}}
\notag \\
& + 2 \sqrt{[\lambda_{\tilde{Q}} + \theta(-\lambda'_{\tilde{Q}}) \lambda'_{\tilde{Q}}] \lambda_{S} \lambda_{H}}
\\
& + \sqrt{\bar{\lambda}_{SQ} \bar{\lambda}_{HQ} \bar{\lambda}_{HS}} > 0, \notag
\end{align}
where $\theta$ is the Heaviside step function. The conditions can be satisfied by taking $\lambda_{S} \geq 0$, $\lambda_{\tilde{Q}} \geq 0$, $\lambda'_{\tilde{Q}} \geq 0$, $\lambda_{S\tilde{Q}} \geq 0$, $\lambda_{HS} \geq 0$, $\lambda_{H\tilde{Q}} \geq 0$.
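These conditions are straightforward to implement and scan; a minimal sketch (the coupling values below are arbitrary sample points, not fits) is:
\begin{verbatim}
# Bounded-from-below test for the quartic potential (sketch).
from math import sqrt

def bounded_below(lS, lQ, lQp, lH, lSQ, lHQ, lHS):
    lQe = lQ + (lQp if lQp < 0 else 0.0)   # lQ + theta(-lQ')*lQ'
    if not (lS > 0 and lQe > 0 and lH > 0):
        return False
    bSQ = 2*sqrt(lS*lQe) + lSQ
    bHQ = 2*sqrt(lH*lQe) + lHQ
    bHS = 2*sqrt(lH*lS) + lHS
    if not (bSQ > 0 and bHQ > 0 and bHS > 0):
        return False
    return (lHS*sqrt(lQe) + lHQ*sqrt(lS) + lSQ*sqrt(lH)
            + 2*sqrt(lQe*lS*lH) + sqrt(bSQ*bHQ*bHS)) > 0

print(bounded_below(0.5, 0.5, 0.1, 0.13, 0.2, 0.1, 0.05))   # True
print(bounded_below(0.5, 0.1, -0.2, 0.13, 0.2, 0.1, 0.05))  # False
\end{verbatim}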
The stationary point equations for the new particles are then
\begin{align}
0 &= \mu_{\tilde{Q}} |\tilde{Q}_{i}|^{2} + (M_{S}^{2} + 2 \lambda_{S\tilde{Q}} |\tilde{Q}_{i}|^{2}) S \label{eq:min:S} \\
& + \mu_{S} S^{2} + \lambda_{S} S^{3}, \notag \\
0 &= |\tilde{Q}_{i}| (M_{\tilde{Q}}^{2} + 2 \lambda_{\tilde{Q}} |\tilde{Q}_{i}|^{2} + \mu_{\tilde{Q}} S + \lambda_{S\tilde{Q}} S^{2}). \label{eq:min:Q}
\end{align}
If $\tilde{Q}_{i} = 0$, we need
\begin{equation}
\mu_{S}^{2} < 4 \lambda_{S} M_{S}^{2}
\end{equation}
for $S$ not to get a VEV. We will take $\mu_{S} \simeq 0$ to get the largest allowed parameter space for the diphoton signal. However, we note that a small but non-zero $\mu_{S}$ could always be generated at two loops.
$S$ and $\tilde{Q}_{i}$ could also get non-zero VEVs simultaneously. We need to forbid this to prevent a coloured vacuum. The forbidden part of the parameter space is found by requiring that the vacua where $S$ and $\tilde{Q}_{i}$ have non-zero VEV -- if they exist -- are local minima of the potential, that is, $V > 0$.
Note that the bound does not depend on the number of flavours, which cancels out in the minimisation equations. In particular, the $\lambda'_{\tilde{Q}}$ term does not affect the result, since its minimal value is zero. Also, it is plausible that the bound will not be weakened by much if the flavour symmetry is abandoned.
To fit the diphoton signal we need a large $\mu_{\tilde{Q}}$ that tends to destabilise the SM vacuum. This effect can be countered with large quartic couplings. In Fig.~\ref{fig:XSecPlot} we show the forbidden region on the $\mu_{\tilde{Q}}$ vs.\ $M_{\tilde{Q}}^{2}$ plane with gray colour for the least constraining choice $\lambda_{\tilde{Q}} = \lambda_{S} = \lambda_{S\tilde{Q}} = 4 \pi$. In the context of this effective model, which must be embedded into a supersymmetric model, the appearance of non-perturbative
couplings signals the breakdown of the effective model. This implies that the supersymmetric particles of the full model must appear below the scale given by this constraint.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{XsecPlot}
\caption{$\sigma(pp \to S) \times \text{BR}(S \to \gamma \gamma)$ at the 13 TeV LHC. The colored regions correspond to 4.5$\pm$1.9 fb (inner region) and 4.5$\pm$ 3.8 fb (outer region) corresponding to the $1\sigma$ and $2\sigma$ regions for $N_f = 3$ degenerate squark generations. The horizontal axis shows the mass of the colored scalar particle $\tilde{Q}$ and the vertical axis the trilinear $S\tilde{Q}\tilde{Q}$ coupling. The grey shaded region is forbidden by the presence of colour symmetry breaking assuming $\lambda_{\tilde{Q}} = \lambda_{S} = \lambda_{S\tilde{Q}} = 4 \pi$.}
\label{fig:XSecPlot}
\end{center}
\end{figure}
\section{Event Rates}
We choose the mass of the singlet to be on the resonance, $M_{S} = 750$ GeV. At this energy scale $\alpha_s(M_{S}) = 0.0894(31)$~\cite{Chatrchyan:2013txa}, while for the electromagnetic coupling we use the zero-momentum value $\alpha = 1/137$.
From the CMS~\cite{CMS} we know that $\sigma (pp \to S \to \gamma \gamma) \simeq 4.5~\text{fb}$.
The partial decay widths of the singlet $S$ into two photons and into two gluons are \cite{Gunion:1989we,Djouadi:2005gj}
\begin{align}\label{eq:partial_G}
\Gamma (S \to \gamma \gamma)
&= \frac{ \alpha^{2} M_{S}^{3}\mu_{\tilde{Q}}^{2}}{1024 \pi^{3}M_{\tilde{Q}}^{4}} N_{f}^{2} N_{c}^{2} q_{\tilde{Q}}^{4} \left| A_{0}(\tau) \right|^{2},
\\
\Gamma (S \to g g)
&= \frac{ \alpha_{s}^{2} M_{S}^{3} \mu_{\tilde{Q}}^{2}}{512 \pi^{3} M_{\tilde{Q}}^{4}} N_{f}^{2} \left| A_{0}(\tau) \right|^{2},
\end{align}
respectively. $N_c = 3$ denotes the dimension of the representation for the squarks and $N_f$ the number of squark flavors.
The scalar loop function is given by~\cite{Gunion:1989we}
\begin{equation}
A_{0}(\tau) = \tau(1 - \tau f(\tau)),
\end{equation}
with $\tau = 4 M_{\tilde{Q}}^{2}/M_{S}^{2}$, and the universal scaling function is
\begin{equation}
\! f(\tau) =
\begin{cases}
\arcsin^{2} \sqrt{1/\tau}
& \tau \geq 1 ,
\\
-\left( {\rm arccosh} \sqrt{1/\tau} - i \pi/2 \right)^{2}
& \tau < 1.
\end{cases}
\end{equation}
The cross section for producing the diphoton signal via the decay of $S$ in the narrow width approximation is
\begin{equation}
\sigma(gg \to S \to \gamma \gamma) = \sigma (gg \to S) \, \text{BR}(S \to \gamma \gamma),
\end{equation}
where the production cross section is related to the decay width into gluons by
\begin{equation}\label{eq:partonicGGF}
\sigma(gg \to S) = \frac{\pi^{2}}{8 M_{S}} \Gamma (S \to g g) \delta(\hat s - M_{S}^{2}).
\end{equation}
Taking into account that $\Gamma (S \to \gamma \gamma) \ll \Gamma (S \to g g) \simeq \Gamma_{S}$, the branching ratio reads
\begin{equation}
\text{BR}(S \to \gamma \gamma)
\simeq \frac{1}{2} \frac{\alpha^{2}}{\alpha_{s}^{2}} N_{c}^{2}q_{\tilde{Q}}^{4}
\simeq 0.58\%,
\end{equation}
where we used $\alpha_s(M_{S}) \simeq 0.09$, $\alpha \simeq 1/137$ and assumed the up-type squark with the charge $q_{\tilde{Q}} = 2/3$. We remark that if the dominant decay mode of $S$ is $S \to g g$ as assumed here, the cross section for the resonant production of diphotons by gluon-gluon fusion is approximately independent of the details of the strong interaction since
\begin{equation}\label{eq:partonicGGFapprox}
\sigma(gg \to S \to \gamma \gamma)
\simeq \frac{\pi^{2}}{8 M_{S}} \Gamma (S \to \gamma\gamma)\delta(\hat s - M_{S}^{2}).
\end{equation}
At the level of precision considered here, we assume that this cancellation also holds if higher-order corrections in $\alpha_s$ are taken into account.
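The quoted branching ratio amounts to a one-line arithmetic check (with $\alpha_s \simeq 0.09$ as used above):
\begin{verbatim}
# BR = (1/2) (alpha/alpha_s)^2 Nc^2 q^4 for the up-type squark charge.
alpha, alpha_s, Nc, q = 1/137.0, 0.09, 3, 2/3.0
print(0.5*(alpha/alpha_s)**2*Nc**2*q**4)   # ~ 0.0058
\end{verbatim}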
To calculate the $S$ resonance production cross section at the LHC, we integrate Eq.~\eqref{eq:partonicGGFapprox} numerically using the MSTW parton distribution function (pdf) set \cite{Martin:2009iq}
\begin{align}
\sigma(gg \to S \to \gamma \gamma) = \frac{\pi^{2}}{8 M_{S}^3} I_{\rm pdf} \Gamma (S \to \gamma\gamma) ,
\end{align}
where $\sqrt{s} = 13\,\rm{\,TeV}$ is the center of mass energy of LHC proton-proton collisions, and
\begin{align}
I_{\rm pdf}
= \int_{M_S^2/s}^1 \; \frac{\mathrm{d}x}{x} \; \bar{g}(x) \; \bar{g}\left(\frac{M_S^2}{sx}\right)
\approx 5.8,
\end{align}
is the dimensionless pdf integral evaluated at $\sqrt{s} = 13\,\rm{\,TeV}$. Here $g(x, M_S) = \bar{g}(x, M_S)/x$ is the pdf of the gluon at momentum fraction $x$ evaluated at the scale $M_S=750$~GeV.
To reproduce the observed signal, we find that the partial decay width to photons is
\[
\Gamma(S \to \gamma \gamma) \approx (0.68 \pm 0.28) \mbox{ MeV}.
\]
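This number follows by inverting $\sigma = \pi^{2} I_{\rm pdf}\,\Gamma(S\to\gamma\gamma)/(8 M_{S}^{3})$; a short sketch of the extraction (using the standard conversion $1~\text{GeV}^{-2} = 0.3894\times 10^{12}$~fb) is:
\begin{verbatim}
# Width extracted from the measured cross section (sketch).
from math import pi

GEV2_TO_FB = 0.3894e12          # 1 GeV^-2 in femtobarns
MS, Ipdf = 750.0, 5.8
for sigma_fb in (2.6, 4.5, 6.4):
    Gamma = 8*MS**3*(sigma_fb/GEV2_TO_FB)/(pi**2*Ipdf)
    print(sigma_fb, 1e3*Gamma, "MeV")   # ~ 0.40, 0.68, 0.97 MeV
\end{verbatim}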
The parameter space that reproduces the observed decay width for $N_f = 3$ generations is depicted in Fig.~\ref{fig:XSecPlot}. Accounting for unitarity and preserving colour and charge symmetries, it follows that within the $1\sigma$ band the data favours $N_f \geq 2$ generations of light squarks with masses below $M_{\tilde{Q}} \lesssim 800\rm{\,GeV}$ and a relatively large coupling to the scalar $S$ of $\mu_{\tilde{Q}} \gtrsim 2\rm{\,TeV}$. As was also noted in \cite{Gupta:2015zzs} in the context of a different model, we similarly find that the signal cannot be reproduced by a single generation of light squarks within $1\sigma$.
The most important result evident in Fig.~\ref{fig:XSecPlot} is that the allowed parameter space of this effective model
is bounded to a small region by the di-photon excess and by the
consistency of the effective model. This implies that new particles must be present in Nature at the scale ${\cal O}(1)$~TeV.
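To make the favoured region more concrete, one can invert \eqref{eq:partial_G} for the trilinear coupling at fixed squark mass. The sketch below (sample masses only, $N_f = 3$, $q_{\tilde{Q}} = 2/3$) illustrates the trend visible in Fig.~\ref{fig:XSecPlot}: the required $\mu_{\tilde{Q}}$ lies in the multi-TeV range and grows rapidly with $M_{\tilde{Q}}$.
\begin{verbatim}
# mu_Q needed for Gamma(S -> gamma gamma) = 0.68 MeV (sketch).
from math import asin, pi, sqrt

alpha, MS, Nf, Nc, q = 1/137.0, 750.0, 3, 3, 2/3.0
Gamma_target = 0.68e-3              # GeV

def A0(MQ):                         # loop function, valid for MQ > MS/2
    tau = 4*MQ**2/MS**2
    return tau*(1 - tau*asin(sqrt(1/tau))**2)

for MQ in (400.0, 600.0, 800.0):
    pref = (alpha**2*MS**3*Nf**2*Nc**2*q**4*A0(MQ)**2
            /(1024*pi**3*MQ**4))    # Gamma/mu^2 from Eq. (partial_G)
    print(MQ, sqrt(Gamma_target/pref)/1e3, "TeV")
\end{verbatim}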
\section{Effective Field Theory Approach}
We turn to analyse our scenario in terms of the effective Lagrangian approach.
In the case the squark is heavier than the singlet $S$, the latter can acquire effective couplings with photons and gluons by integrating out the squark field.
For generic squarks this corresponds to an effective Lagrangian
\begin{equation}
{\mathcal{L}}_{\rm eff}
= \frac{1}{\Lambda_{\gamma}} S \,F^{\mu \nu}F_{\mu \nu} + \frac{1}{\Lambda_{G}} S \,G^{a \mu \nu}G^a_{\mu \nu},
\label{Leff}
\end{equation}
where $F_{\mu \nu}$ and $G^a_{\mu \nu}$ are the field strengths of the SM gauge fields, while $\Lambda_{i}$ denote the effective scale of the non-renormalizable interaction. In the simplified model considered above, the condition $M_{\tilde{Q}} \gg M_S$ cannot hold for physically allowed parameters, see Fig.~\ref{fig:XSecPlot}.
Thus the rates obtained by using the effective Lagrangian need to explicitly account for the loop function $A_{0}$ (i.e. non-trivial scaling of $\Lambda_{i}$) to get accurate results, even if the expansion $E/\Lambda_{i}$ naively seems to be well defined.
Nevertheless, in this context the formalism of the effective Lagrangian approach
is very useful, since it allows one to capture in a model-independent way the crucial information
concerning the underlying dynamics responsible for generating the effective coupling.
If the effective operator is generated perturbatively by integrating out particles running in the loops, its coefficient has the general form
\begin{align}
\label{eff_scale}
\frac{1}{\Lambda_{i}} = \frac{\alpha_{i}}{4\pi} \frac{N_{e}\, g_{S}}{m_{\tilde{Q}}} C_{i},
\end{align}
where $C_{i}$ is an $\mathcal{O}(1)$ factor originating from loop integrals, $g_{S}$ denotes the effective coupling between $S$ and the mediators, and $N_{e}$ is the effective number of degrees of freedom running in the loops.
The cross section obtained from the effective Lagrangian is roughly
\begin{equation}
\sigma(pp \to S \to \gamma \gamma)
\approx \frac{\alpha^2 N_e^{2}g_{S}^{2}}{512\, m_{\tilde{Q}}^{2}}.
\end{equation}
Fixing the cross section to $5~\text{fb}$ suggests that, in order to reproduce the observed diphoton excess, the effective coupling defined by Eq.~\eqref{eff_scale} should satisfy
\begin{align}
N_e g_{S} \approx 70 \times \frac{m_{\tilde{Q}}}{M_{S}}\, .
\end{align}
As we can see, this necessarily requires $g_{S} \simeq \mathcal{O}(10)$
if $N_e\sim {\cal O}(1)$. From these results
one may naively conclude that such a large value of $g_{S}$
points towards either strong dynamics or a relatively large number of degrees of freedom in the loops.
This is indeed a justified conclusion if one considers vector-like fermions running in the loop~\cite{Franceschini:2015kwy,Knapen:2015dap,Pilaftsis:2015ycr,Buttazzo:2015txu,Angelescu:2015uiz,Gupta:2015zzs,Ellis:2015oso,McDermott:2015sck,Kobakhidze:2015ldh,Martinez:2015kmn,No:2015bsn,Chao:2015ttq,Fichet:2015vvy,Curtin:2015jcv,Falkowski:2015swt,Aloni:2015mxa}, where
$g_S$ coincides with the corresponding fermion Yukawa coupling to the scalar resonance. In this respect, we qualitatively agree with the
effective model approach conclusions of Ref.~\cite{Aloni:2015mxa} on the large production rates.
However, as we have shown with the present simplified model, when scalar fields are propagating in the loop, the above conclusions do not hold anymore.
The coupling $g_S$ can be made naturally (and consistently) very large, even in the framework of weakly coupled field theories,
since it is related to the ratio $g_{S} = \mu_{\tilde{Q}}/m_{\tilde{Q}} \sim\mathcal{O}(10)$. This is the advantage of having a {\it soft} coupling $\mu_{\tilde{Q}}$
in theories with scalars. However, this requires that the scalar resonance does not acquire a VEV, or equivalently
that it is not a Higgs-like particle. We have seen that constraints from colour- and charge-breaking minima could limit the ratio $\mu_{\tilde{Q}}/m_{\tilde{Q}}$.
Beyond that bound the present simplified model breaks down and Eq.~\eqref{eff_scale} no longer corresponds to the scale in Eq.~\eqref{Leff}; the correct interpretation then requires knowledge of the full supersymmetric theory.
\section{Discussion and Conclusions}
We have shown that the recently claimed evidence for the 750~GeV diphoton excess at the LHC
can actually, contrary to the general opinion, favour supersymmetry in Nature. However, the corresponding supersymmetric theory
must contain a singlet in addition to the SM particle content, and the mass spectrum of the sparticles must be rather unusual, featuring
several light scalars, while the gluino must be heavy to satisfy the LHC constraints.
To study the diphoton excess we have presented a simplified model that captures the required properties of the supersymmetric
theory it is to be embedded in. As a result, we have shown that an NMSSM-like particle content is sufficient to generate large enough $gg\to S$ and $S\to \gamma\gamma$ rates at loop level to explain the observations. In particular, the coloured scalars in the loops have an advantage over fermions in producing the
needed large signal because of the possibly large dimensionful coupling $\mu_{\tilde{Q}}$.
We have also shown that the requirement of a colour and charge conserving vacuum constrains the parameter space of this scenario
so that the model is testable. In the context of the simplified model, which by itself is not supersymmetric, this implies that the model breaks
down at a rather low energy scale, where the superpartners of the complete supersymmetric model must appear to restore consistency.
The concrete prediction of our scenario is the existence of relatively light squarks which should be searched for
at the LHC.
We conclude that, if this scenario turns out to be the explanation of the diphoton excess, supersymmetry, indeed, was `just around the corner.'
However, studying the full model and its precise properties would require further discoveries at the LHC or at a future 100~TeV collider.
\section*{Acknowledgments}
The authors thank Luca Marzola and Stefano Di Chiara for useful discussions.
This work was supported by the grants IUT23-6, PUT716, PUT799 and by the EU through the ERDF CoE program.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
In this work we will study the following type of boundary layer problem in dimension $d \geq 2$
\begin{equation}\label{e.cellmain}
\left\{\begin{aligned}
&- \nabla \cdot a(y,\nabla v_n^s) = 0
&\text{ in }& P_n^s = \{ y \cdot n >s\} ,\\
& v_n^s(y)=\varphi(y) &\text{ on }& \partial P_n^s.
\end{aligned}
\right.
\end{equation}
Here $n \in S^{d-1}$ is a unit vector, $s \in \mathbb{R}$, $\varphi$ is continuous and $\mathbb{Z}^d$ periodic, and the operator $a$ is also $\mathbb{Z}^d$ periodic in $y$ and will satisfy a uniform ellipticity assumption. In this work we will consider both the case of nonlinear scalar equations and that of linear systems, so, for now, we do not specify the assumptions on $a$ any further.
The \emph{boundary layer limit} of the system \eref{cellmain} is defined by
\[ \varphi_*(n,s) := \lim_{R \to \infty} v_n^s(Rn+y) \hbox{ if the limit exists and is independent of $y \in \partial P_n^s$.}\]
Typically $\varphi_*$ is independent of $s$ for irrational directions $n$, and we write $\varphi_*(n)$, while for rational directions $n \in \mathbb{R} \mathbb{Z}^d$ the limits above exist but depend on $s$. If, additionally, the boundary layer limit is independent of $s$, then we say that the cell problem \eref{cellmain} homogenizes. The focus of this article is on the continuity (or lack thereof) of $\varphi_*$ as a function of the normal direction. The continuity of $\varphi_*$ is intrinsically linked with the linearity of the operator $a(x,p)$. In the case of a linear system we show continuity of $\varphi_*$, while in the case of nonlinear scalar equations we give an example where $\varphi_*$ is discontinuous; this indicates that discontinuity is generic for nonlinear equations.
Before we continue with our discussion of the continuity properties of $\varphi_*$, we give a brief explanation of where the problem \eref{cellmain} arises and why one should be interested in the continuity of $\varphi_*$. Let $\Omega\subset \mathbb{R}^d$ be a bounded domain and consider the homogenization problem with oscillating Dirichlet boundary data,
\begin{equation}\label{e.maineqn}
\left\{\begin{aligned}
&- \nabla \cdot (a(\tfrac{x}{\varepsilon},\nabla u^\varepsilon)) =0
&\text{ in }& \Omega ,\\
& u^\varepsilon(x)=g(x,\tfrac{x}{\varepsilon}) &\text{ on }& \partial\Omega
\end{aligned}
\right.
\end{equation}
where $\varepsilon>0$ is a small parameter and $g(x,y)$ is continuous in $x,y$ and $\mathbb{Z}^d$ periodic in $y$. This system is natural to consider in its own right, but it also arises in the study of homogenization with non-oscillatory Dirichlet data when one considers the higher-order terms in the asymptotic expansion; see \cite{Gerard-Varet:2012aa} where this is explained.
The interest in studying \eref{maineqn} is the asymptotic behavior of the $u^\varepsilon$ solutions as $\varepsilon \to 0$. This problem has been studied recently by a number of authors starting with the work of Gerard-Varet and Masmoudi \cite{Gerard-Varet:2012aa,Gerard-Varet:2011aa} and followed by \cite{Aleksanyan:2015aa,Aleksanyan:2017aa, Choi:2014aa,Feldman:2015aa,Feldman:2014aa,Prange:2013aa,Zhang:2017aa, Armstrong:2017aa, Guillen:2016aa}. It has been established that solutions $u^\varepsilon$ converge, at least in $L^2(\Omega)$, to some $u^0$ which is a unique solution to
\begin{equation}
\left\{\begin{aligned}
&-\nabla \cdot (a^0( \nabla u^0)) =0
&\text{ in }& \Omega,\\
& u^0(x)=\varphi^0(x) &\text{ on }& \partial\Omega
\end{aligned}
\right.
\end{equation}
where $a^0$ and $\varphi^0(x)$ are called respectively the homogenized operator and homogenized boundary data. The identification of the homogenized operator $a^0$ is a classical topic. The homogenized boundary $\varphi^0$ is determined by the boundary layer problem \eref{cellmain},
\[ \varphi^0(x) = \varphi_*(n_x) \ \hbox{ when $n_x$ is the inward unit normal to $\Omega$ and } \ \varphi(y) = g(x,y).\]
That is, \eref{cellmain} can be viewed as a kind of cell problem associated with the homogenization of \eref{maineqn}. At least for linear equations this definition makes sense as long as the set of boundary points of $\partial \Omega$ where \eref{cellmain} does not homogenize, i.e. those with rational normal, has zero harmonic measure. The convergence of $u^\varepsilon$ to $u^0$ has been established rigorously for linear systems by G\'{e}rard-Varet and Masmoudi~\cite{Gerard-Varet:2012aa}, and further investigations have yielded optimal rates of convergence, see Armstrong, Kuusi, Mourrat and Prange \cite{Armstrong:2017aa} and Shen and Zhuge \cite{Shen:2016aa}. For nonlinear divergence form equations, to our knowledge, the problem has not been studied yet. This is the source of our interest in the continuity properties of $\varphi_*$.
The main result which we establish in this paper is that the directional limits of $\varphi_*$ at a rational direction are determined by a ``second cell problem" for the homogenized operator $a^0$. Let us take $\xi \in \mathbb{Z}^d \setminus \{0\}$ to be an irreducible lattice vector and $\hat\xi$ to be the corresponding rational unit vector in the same direction. Then the cell problem \eref{cellmain} solution $v_{\xi}^s$ exists for each $s \in \mathbb{R}$ and has a boundary layer limit,
\[ \varphi_*(\xi,s) := \lim_{R \to \infty} v_{\xi}^s(R\xi),\]
but that limit typically is not independent of the translation $s$ applied to the half-space domain $P_\xi$. We will see that $\varphi_*(\xi,s)$ is a $1/|\xi|$-periodic function on $\mathbb{R}$; indeed, translating by a lattice vector $z \in \mathbb{Z}^d$ identifies $v_\xi^s$ with $v_\xi^{s+\hat\xi \cdot z}$, and $\{\hat\xi \cdot z : z \in \mathbb{Z}^d\} = \tfrac{1}{|\xi|}\mathbb{Z}$ since $\xi$ is irreducible. Now suppose that we have a sequence of directions $n_k \to \hat\xi$ such that,
\[ \frac{\hat\xi-n_k}{|\hat \xi-n_k |} \to \eta \ \hbox{ a unit vector with } \ \eta \perp \xi.\]
Call $\eta$ the approach direction of the sequence $n_k$ to $\xi$. We will show that the limit of $\varphi_*(n_k)$ is determined by the following boundary layer problem. Define
\begin{equation}\label{e.cell2}
\left\{
\begin{array}{ll}
- \nabla \cdot a^0(\nabla w_{\xi,\eta}) = 0 & \hbox{ in } P_\xi \vspace{1.5mm}\\
w_{\xi,\eta} = \varphi_*(\xi,x \cdot \eta) & \hbox{ on } \partial P_\xi
\end{array}
\right.
\end{equation}
then it holds
\[ \lim_{k \to \infty} \varphi_*(n_k) = \lim_{R \to \infty} w_{\xi,\eta}(R\xi).\]
Thus the directional limits of $\varphi_*$ at $\xi$ are determined by the boundary layer limit of a half-space problem for the homogenized operator. This limit structure was first observed in Choi and Kim~\cite{Choi:2014aa} and developed further by the first author and Kim~\cite{Feldman:2015aa}, both papers studied non-divergence form and possibly nonlinear equations. We will explain in this paper how the second cell problem follows purely from \emph{qualitative} features which are shared by a wide class of elliptic equations, including divergence form linear systems, and both divergence and non-divergence form nonlinear equations. We are somewhat vague about the hypotheses, which will be explained in detail in Sections \ref{sec: linear background} and \ref{sec: nonlinear background}.
\begin{thm}\label{main1}
The limit characterization \eref{cell2} holds for divergence form linear systems and nonlinear equations satisfying a uniform ellipticity condition.
\end{thm}
Once we have established \eref{cell2}, the question of qualitative continuity/discontinuity of $\varphi_*$ is reduced to a much simpler problem. For linear equations the homogenized operator $a^0$ is linear and translation invariant and so a straightforward argument, for example by the Riesz representation theorem, shows that,
\[ \lim_{R \to \infty} w_{\xi,\eta}(R\xi) = |\xi| \int_0^{1/|\xi|} \varphi_*(\xi,s) \ ds,\]
i.e. it is the average over a period of $\varphi_*(\xi,\cdot)$. Evidently this does not depend on the approach direction $\eta$. Thus qualitative continuity of $\varphi_*$ for linear problems follows easily once we establish \eref{cell2}. Our arguments to derive \eref{cell2} can be quantified to obtain a modulus of continuity, which we make explicit below; however, so far we cannot push the method to obtain the optimal modulus of continuity. The recent work of Shen and Zhuge~\cite{Shen:2017aa} obtains an almost Lipschitz modulus of continuity by a different method; we will compare their approach with ours below.
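Returning to the averaging formula, it can be checked by hand in the simplest constant-coefficient scalar case (a sketch for the Laplacian only): if $-\Delta w = 0$ in $\{x_d > 0\}$ with bounded, $\mathbb{Z}^{d-1}$-periodic boundary data $\psi$, then expanding in Fourier series,
\[ w(x',x_d) = \sum_{k \in \mathbb{Z}^{d-1}} \hat\psi_k\, e^{2\pi i k \cdot x'}\, e^{-2\pi |k| x_d} \ \longrightarrow\ \hat\psi_0 = \int_{[0,1]^{d-1}} \psi \quad \hbox{ as } \ x_d \to \infty,\]
so the boundary layer limit is exactly the average of the data, attained at an exponential rate.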
\begin{thm}\label{main2}
For elliptic linear systems, $d \geq 2$, for any $0 < \alpha < 1/d$ there is a constant $C\geq1$ depending on $\alpha$ as well as universal parameters associated with the system (see Section~\ref{sec: linear background}) such that, for any $n_1,n_2$ irrational,
\[ |\varphi_*(n_1) - \varphi_*(n_2) | \leq C \|\varphi\|_{C^5} |n_1 - n_2|^{\alpha}.\]
\end{thm}
For nonlinear problems we expect that $\varphi_*$ is discontinuous at rational directions, at least for generic boundary data and operators. A result of this kind was established for non-divergence form equations in \cite{Feldman:2015aa}. We have not been able to prove such a general result for divergence form nonlinear equations, but we have constructed an explicit example showing that discontinuity is possible.
\begin{thm}\label{main3}
For $d \geq 3$ there exist smooth boundary data $\varphi$ and uniformly elliptic, positively $1$-homogeneous, nonlinear operators $a(x,p)$ such that $\varphi_*$ is discontinuous at some rational direction.
\end{thm}
We compare with the work of Shen and Zhuge~\cite{Shen:2017aa}, which studies continuity properties of $\varphi_*$ for linear divergence form systems. They show, in the linear systems case, that $\varphi_*$ is in $W^{1,p}$ for every $p<\infty$. Their work relies on a formula for the homogenized boundary condition from Armstrong, Kuusi, Mourrat and Prange \cite{Armstrong:2017aa}, which in turn relies on the result of Kenig, Lin and Shen~\cite{Kenig:2014aa} giving precise asymptotics for the Green's function and Poisson kernel associated with the equation \eref{maineqn}. For this reason their result seems to be restricted to the linear setting. Our approach, although it does not yet yield an optimal quantitative estimate, does not rely on formulas for the Poisson kernel asymptotics and applies to both linear and nonlinear equations (including both divergence form, as established here, and non-divergence form, as established previously in \cite{Feldman:2015aa}). We note that in the course of proving Theorem~\ref{main2} we actually show H\"{o}lder regularity for every $0 < \alpha < 1$ at each lattice direction $\xi \in \mathbb{Z}^d \setminus \{0\}$; the modulus of continuity, however, depends on the rational direction and degenerates as $|\xi| \to \infty$. This is why we only end up with (almost) H\"{o}lder-$\frac{1}{d}$ continuity in the end.
\subsection{Notation} We go over some of the notations and terminology used in the paper. We will refer to constants which depend only on the dimension or fundamental parameters associated with the operator $a(x,p)$ (to be made specific below), e.g. ellipticity ratio or smoothness norm, as universal constants. We will write $C$ or $c$ for universal constants which may change from line to line. Given some quantities $A,B$ we write $A \lesssim B$ if $A \leq CB$ for a universal constant $C$. If the constants depend on an additional non-universal parameter $\alpha$ we may write $A \lesssim_\alpha B$.
We will use various standard $L^p$ and H\"{o}lder $C^{k,\alpha}$ norms. For H\"{o}lder semi-norms, which omit the zeroth order $\sup$ norm term, we write $[f]_{C^{k,\alpha}}$. Given a measurable set $E \subset \mathbb{R}^d$ we will also use the $L^p_{avg}(E)$ norm which is defined by
\[ \|f\|_{L^p_{avg}(E)} = \left(\frac{1}{|E|}\int_{E} |f|^p \right)^{1/p}.\]
The oscillation is a convenient quantity for us since the solution property for the equations we consider is preserved under addition of constant functions. This is usually defined for a scalar valued function $u : E \to \mathbb{R}$ on a set $E \subset \mathbb{R}^d$ as $\osc_E u=\sup_E u - \inf_E u$. We use a slightly different definition which also makes sense for vector valued $u : E \to \mathbb{R}^N$,
\[ \osc_E u := \inf \{ r >0 : \ \hbox{ there exists } u_0 \in \mathbb{R}^N \hbox{ s.t. } \|u - u_0\|_{L^\infty(E)} \leq r/2 \}.\]
\section{Explanation of the limit structure at rational directions}\label{sec: outline}
We give a high level description of the asymptotics of the boundary layer limit at rational directions. What we would like to emphasize throughout this description is that the argument is basically geometric, and has to do with the way that $\partial P_n$ intersects the unit periodicity cell in the asymptotic limit as $n$ approaches a rational direction. This calculation relies only on certain qualitative features of Dirichlet problems for elliptic equations which hold for both divergence and non-divergence form equations, both linear (including systems) and nonlinear. To emphasize the level of abstraction we will write the boundary layer problem in the following form
\begin{equation}\label{e.abstractcell}
\left\{
\begin{array}{ll}
F[v_n,x] = 0 & \hbox{ in } P_n := \{ x \cdot n >0\} \vspace{1.5mm}\\
v_n = \varphi & \hbox{ on } \partial P_n
\end{array}\right.
\end{equation}
Always $F$ and $\varphi$ will share $\mathbb{Z}^d$ periodicity in the $x$ variable. In order to carry out the heuristic argument we will need the following properties of the class of equations/systems. We emphasize that the following properties are not stated very precisely; they are merely meant to be illustrative.
\begin{enumerate}[$(i)$]
\item (Homogenization) There is an elliptic operator $F^0$ in the same class such that if $u^\varepsilon$ is a sequence of solutions of $F[u^\varepsilon,\tfrac{x}{\varepsilon}] = 0$ in a domain $\Omega$ converging to some $u^0$, then $F^0[u^0] = 0$ in $\Omega$.
\item (Continuity with respect to boundary data in $L^\infty$) There exists $C>0$ so that if $n \in S^{d-1}$ and $u_1$, $u_2$ are bounded solutions of \eref{abstractcell} with respective boundary data $\varphi_1$ and $\varphi_2$ then,
\[ \sup_{P_n} |u_1 - u_2| \leq C \sup_{\partial P_n} |\varphi_1 - \varphi_2|.\]
\item (Large scale interior and boundary regularity estimates) There is $\alpha \in (0,1)$ such that for any $r>0$, if $F[u,x] = 0$ in $B_r \cap P_n$ with $u = g$ on $B_r \cap \partial P_n$, where $B_r$ is some ball of radius $r$, then
\[ [u]_{C^{\alpha}(B_{r/2} \cap P_n)} \lesssim r^{-\alpha}\osc_{B_{r} \cap P_n} u + [g]_{C^\alpha( B_{r} \cap \partial P_n)}. \]
\end{enumerate}
The heuristic outline below applies to a wide class of elliptic equations; the arguments were already carried out rigorously for non-divergence form nonlinear equations by Choi and Kim~\cite{Choi:2014aa} and the first author and Kim~\cite{Feldman:2015aa}, and similar ideas were used for parabolic equations in moving domains by the second author in~\cite{Zhang:2017aa}. Here we will be studying divergence form equations, linear systems and nonlinear scalar equations.
To begin we need to understand the boundary layer limit at a rational direction. Let $\xi \in \mathbb{Z}^d \setminus \{0\}$ and consider the solution $v^s_\xi(x)$ of,
\begin{equation}\label{e.abstractcellrat}
\left\{
\begin{array}{ll}
F[v^s_\xi,x] = 0 & \hbox{ in } P_\xi^s = \{ x \cdot n > s\} \vspace{1.5mm}\\
v^s_\xi = \varphi & \hbox{ on } \partial P_\xi^s.
\end{array}\right.
\end{equation}
Translating the half-space, by changing $s$, changes the part of the data $\varphi$ seen by the boundary condition. Thus the boundary layer limit of $v^s_\xi$ can depend on the parameter $s$, we define
\[ \varphi_*(\xi,s) = \lim_{R \to \infty} v^s_\xi(R\xi).\]
As will become clear, this particular parametrization of the boundary layer limits is naturally associated with the asymptotic structure of the boundary layer limits for directions $n$ near $\xi$.
The next step is to understand the geometry near $\xi$. Let $n \in S^{d-1}$ be a direction near $\xi$ and $v_n$ be the corresponding half-space solution. We can write,
\[ n = (\cos \varepsilon)\hat\xi -(\sin \varepsilon) \eta \ \hbox{ for some small angle $\varepsilon$ and a unit vector } \ \eta \perp \xi.\]
We obtain an asymptotic for $v_n$ at an intermediate length scale.
Let $x \in \partial P_n$. Since $n$ is close to $\hat\xi$, $\partial P_n$ is close to $\partial P_\xi^{s}$ with $s = x \cdot \hat \xi$ in a large neighborhood of $x$, at any scale $o(1/\varepsilon)$. By using the local up to the boundary regularity we see that $v_n$ and $v^{s}_\xi$ are close on the boundary of their common domain, at least in this $o(1/\varepsilon)$ neighborhood of $x$. Now $v_\xi^s$ has a boundary layer limit $\varphi_*(\xi,s)$, and the length scale $|\xi|$ associated with the boundary layer depends on $\xi$, but not on $\varepsilon$. Thus for $\varepsilon$ small and $ |\xi| \ll R \ll 1/\varepsilon$
\[ v_n(x+Rn) = \varphi_*(\xi,x \cdot \hat \xi) + o_\varepsilon(1) = \varphi_*(\xi,\tan \varepsilon (x \cdot \eta)) + o_\varepsilon(1).\]
This is one of the main places where we use the large scale boundary regularity estimates, property $(iii)$ above. Thus, moving into the domain by $Rn$ and rescaling to the scale $1/\tan \varepsilon$, i.e. letting $w^\varepsilon(x)\sim v_n(Rn+\tfrac{x}{\tan \varepsilon})$, we find that the boundary layer limit is well approximated by the boundary layer limit of
\begin{equation}\label{e.cell2ep}
\left\{
\begin{array}{ll}
F[w^\varepsilon,\tfrac{x}{\tan\varepsilon}] = 0 & \hbox{ in } P_\xi \vspace{1.5mm}\\
w^\varepsilon = \varphi_*(\xi,x \cdot \eta) & \hbox{ on } \partial P_\xi
\end{array}\right.
\end{equation}
in the limit as $\varepsilon \to 0$. Now taking the limit as $\varepsilon \to 0$ in \eref{cell2ep} we find the ``second cell problem''
\begin{equation}\label{e.cell2F}
\left\{
\begin{array}{ll}
F^0[w_{\xi,\eta}] = 0 & \hbox{ in } P_\xi \vspace{1.5mm}\\
w_{\xi,\eta} = \varphi_*(\xi,x \cdot \eta) & \hbox{ on } \partial P_\xi.
\end{array}\right.
\end{equation}
Thus we characterize the directional limits at the rational direction $\xi$ as the boundary layer limits of the associated second cell problem
\[ \lim_{k \to \infty} \varphi_*(n_k) = \lim_{R \to \infty} w_{\xi,\eta}(R\xi) \ \hbox{ if } \ \frac{\hat\xi -n_k}{| \hat\xi-n_k|} \to \eta.\]
With this characterization the \emph{qualitative} continuity and discontinuity of $\varphi_*$ can be investigated solely by studying the problem \eref{cell2F}.
In the following, Section~\ref{sec: linear background} and Section~\ref{sec: nonlinear background}, we will explain background regularity results for linear systems and nonlinear divergence form equations and the well-posedness of Dirichlet problems in half-spaces. In particular we will prove that properties we used in the heuristic arguments above do hold for the type of equations/systems we consider. In Section~\ref{sec: boundary layers} we will go into more detail about the boundary layer problem \eref{cellmain} in rational and irrational half-spaces. In Section~\ref{sec: asymptotics} we will make rigorous the above outline obtaining intermediate scale asymptotics which lead to the second cell problem \eref{cell2F}. In Section~\ref{sec: continuity} we show how to derive continuity of $\varphi_*$ from the second cell problem for linear problems, and in Section~\ref{sec: discontinuity} we show how nonlinearity can cause discontinuity of $\varphi_*$.
\section{Linear Systems Background Results}\label{sec: linear background}
In this section we will recall some results about divergence form linear systems. Let $\Omega$ be a domain of $\mathbb{R}^d$ and $N \geq 1$; we consider solutions of the following elliptic linear system:
\[ - \nabla \cdot (A(x) \nabla u) =0 \ \hbox{ in } \ \Omega\]
where $u \in H^1(\Omega;\mathbb{R}^N)$ is at least a weak solution. Here we use the notation $A = (A_{ij}^{\alpha\beta}(x))$ for $1 \leq \alpha,\beta \leq d$ and $1 \leq i,j \leq N$ defined for $x \in \mathbb{R}^d$, where we mean, using summation convention,
\[\left(\nabla \cdot (A(x) \nabla u)\right)_i = \partial_{x_\alpha}(A^{\alpha\beta}_{ij}(x)\partial_{x_\beta} u_j).\]
We assume that $A$ satisfies the following hypotheses:
\begin{enumerate}[(i)]
\item Periodicity:
\begin{equation}
A(x+z) = A(x) \ \hbox{ for all } \ x \in \mathbb{R}^d, z \in \mathbb{Z}^d.
\end{equation}
\item Ellipticity: for some $\lambda>0$ and all $\xi \in \mathbb{R}^{d \times N}$,
\begin{equation}
\lambda \xi^i_\alpha\xi^i_\alpha \leq A^{\alpha\beta}_{ij}\xi^i_\alpha\xi^j_\beta \leq \xi^i_\alpha\xi^i_\alpha.
\end{equation}
\item Regularity: for some $M>0$,
\begin{equation}
\|A\|_{C^{5}(\mathbb{R}^d)} \leq M.
\end{equation}
\end{enumerate}
We remark that the regularity on $A$ is far more than is necessary for most of the results below. When we say that $C$ is a universal constant below we mean that it depends only on the parameters $d,N,\lambda,M$.
\subsection{Integral Representation}
Consider the following boundary layer problem, which will be the main object of our study,
\begin{equation}\label{e.linear hs}
\left\{\begin{aligned}
& -\nabla \cdot (A(x) \nabla u) = \nabla \cdot f +g &\text{ in }& P_n,\\
& u(x)=\varphi(x) &\text{ on }& \partial P_n.
\end{aligned}
\right.
\end{equation}
for $f,g$ smooth vector valued functions with compact support and $\varphi$ continuous and bounded. A solution is given by the Green's function formula
\[u(x)=\int_{P_n} \nabla G(x,y)\cdot f(y)dy + \int_{P_n}G(x,y)g(y)dy +\int_{ \partial P_n}P(x,y)\varphi(y)dy.\]
Here $G,P$ are Green matrix and Poisson kernel corresponding to our operator. For $y \in P_n$, $G$ solves
\begin{equation}
\left\{\begin{aligned}
& -\nabla_x \cdot (A(x) \nabla_x G(x,y)) = \delta(x-y)I_N &\text{ in }& P_n,\\
& G(x,y)=0 &\text{ on }& \partial P_n
\end{aligned}
\right.
\end{equation}
and the Poisson kernel is given, for $x\in P_n$ and $y \in \partial P_n$, by
\[P(x,y) = - n \cdot (A^t(y) \nabla_y G(x,y)) \ \hbox{ i.e. } \ P_{ij}(x,y) = - n_\alpha A^{\beta \alpha}_{ki}(y) \partial_{y_\beta}G_{kj}(x,y).\]
Following from the work of Avellaneda-Lin~\cite{Avellaneda:1991aa}, and exactly stated in \cite{Gerard-Varet:2012aa} (Proposition 5), $G$ and $P$ satisfy the same bounds as for a constant coefficient operator:
\begin{thm}\label{integral}
Call $\delta(y):= \textup{dist}(y,\partial P_n)$. For all $x\ne y$ in $ P_n$, one has
\begin{align*}
|G(x,y)|&\leq\frac{C}{|x-y|^{d-2}} \quad \text{ for }d\geq 3,\\
|G(x,y)|&\leq {C}(|\log |x-y||+1) \quad \text{ for }d=2,\\
|G(x,y)|&\leq \frac{C\delta(x)\delta(y)}{|x-y|^{d}} \quad \text{ for all }d,\\
|\nabla_x G(x,y)| &\leq \frac{C}{|x-y|^{d-1}} \quad \text{ for all }d,\\
|\nabla_x G(x,y)| &\leq C(\frac{\delta(y)}{|x-y|^{d}}+
\frac{\delta(x)\delta(y)}{|x-y|^{d+1}} )
\quad \text{ for all }d.
\end{align*}
For all $x\in P_n$ and $y\in \partial P_n $, one has
\begin{eqnarray*}
&&|P(x,y)|\leq\frac{C\delta (x)}{|x-y|^{d}},\\
&&|\nabla P(x,y)|\leq C(\frac{1}{|x-y|^{d}}+
\frac{\delta(x)}{|x-y|^{d+1}} ).
\end{eqnarray*}
\end{thm}
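To fix ideas, in the constant-coefficient scalar case $A = I_d$, $N = 1$, the kernel is explicit,
\[ P(x,y) = \frac{\Gamma(d/2)}{\pi^{d/2}}\, \frac{\delta(x)}{|x-y|^{d}},\]
so the first bound on $P$ above is sharp.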
Although it is not precisely stated there, the methods of Avellaneda-Lin~\cite{Avellaneda:1991aa} can also achieve the same bounds for the Green's function and Poisson kernel associated with the operator $- \nabla \cdot( A(x) \nabla )$ in the strip type domains
\[ \Pi_n(0,R) := \{ 0 < x \cdot n < R \}, \]
with constants independent of $R$. This will be useful later.
From the Poisson kernel bounds we can derive the $L^\infty$ estimate which replaces the maximum principle for linear systems.
\begin{lem}\label{lem: system max}
Suppose that $u_1,u_2$ are bounded solutions of \eref{linear hs} with respective boundary data $\varphi_1,\varphi_2$ and zero right hand side. Then,
\[ \sup_{P_n} |u_1 - u_2| \leq C\| \varphi_1 - \varphi_2 \|_{L^\infty(\partial P_n)}\]
where $C$ is a universal constant. The same holds for solutions in $\Pi_n(0,R)$.
\end{lem}
For the solutions given by the Poisson kernel representation formula, the result of Lemma~\ref{lem: system max} follows from a standard calculation using Theorem~\ref{integral}. There is some subtlety in showing uniqueness; see \cite{Gerard-Varet:2012aa} (Section 2.2) for a proof.
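For the reader's convenience, the calculation in question reduces to the uniform kernel bound: using the first Poisson kernel estimate and $|x-y| \gtrsim \delta(x) + |y - x'|$ for $y \in \partial P_n$, where $x'$ is the projection of $x$ onto $\partial P_n$,
\[ \int_{\partial P_n} |P(x,y)| \, dy \ \lesssim\ \int_{\mathbb{R}^{d-1}} \frac{\delta(x)}{(\delta(x)+|y'|)^{d}}\, dy' \ =\ \int_{\mathbb{R}^{d-1}} \frac{dw}{(1+|w|)^{d}} \ \lesssim\ 1,\]
which is then applied to the Poisson kernel representation of $u_1 - u_2$.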
\subsection{Large scale boundary regularity} In this section we consider the large scale boundary regularity used in the heuristic argument of Section~\ref{sec: outline} for linear elliptic systems. We will need a boundary regularity result from Avellaneda-Lin~\cite{Avellaneda:1987aa} (Theorem 1). For the below we assume $\Omega$ is some domain with $0 \in \partial \Omega$ and that $u^\varepsilon$ solves
\[ - \nabla \cdot (A(\tfrac{x}{\varepsilon}) \nabla u^\varepsilon ) = 0 \ \hbox{ in } \ \Omega \cap B_1 \ \hbox{ and } \ u^\varepsilon = g \ \hbox{ on } \ \partial \Omega \cap B_1. \]
\begin{lem}\label{lem: flat reg}
For every $0<\alpha <1$ there is a constant $C$ depending on $\alpha$ and universal quantities such that, if $\Omega = \{x_d >0 \} \cap B_1 =: B_1^+$,
\[ [u^\varepsilon]_{C^\alpha(B_{1/2}^+)} \leq C( \| \nabla g\|_{L^\infty(\{x_d = 0\} \cap B_1)} + \|u^\varepsilon - g(0)\|_{L^2(B_1^+)}) ,\]
and for every $\nu >0$
\[ \|\nabla u^\varepsilon\|_{L^\infty(B_{1/2}^+)} \leq C( \| \nabla g\|_{C^{0,\nu}(\{x_d = 0\} \cap B_1)} + \|u^\varepsilon - g(0)\|_{L^2(B_1^+)}). \]
\end{lem}
We need the H\"{o}lder regularity result in cone type domains which are the intersection of two half-spaces with normal directions $n_1,n_2$ very close to each other. We will consider the more general class of domains $\Omega$ which are a Lipschitz graph over $\mathbb{R}^{d-1}$ with small Lipschitz constant. In particular we assume that there is an $f : \mathbb{R}^{d-1} \to \mathbb{R}$ Lipschitz with $f(0) = 0$ such that,
\[ \Omega \cap B_1 = \{(x',x_d): x_d > f(x') \} \cap B_1. \]
\begin{lem}\label{lem: bdry cont linear}
For every $0<\alpha <1$ there is a $\delta(\alpha)>0$ universal such that, if $\Omega$ as above with $\|\nabla f\|_{\infty} \leq \delta$, then
\[ [u^\varepsilon]_{C^\alpha(\Omega \cap B_{1/2})} \leq C( \| \nabla g\|_{L^\infty(\partial \Omega \cap B_1)} + \|u^\varepsilon - g(0)\|_{L^2(\Omega \cap B_1)}) .\]
\end{lem}
The proof is by compactness; we postpone it to Appendix~\ref{sec: A}.
\subsection{Poisson kernel in half-space intersection} From the regularity estimates of the previous subsection we can derive estimates on the Poisson Kernel in the intersection of nearby half-space domains. Consider two unit vectors $n_1,n_2$ with $|n_1 - n_2| \sim \varepsilon$ small. For simplicity we suppose that,
\[ n_j = (\cos \varepsilon) e_d +(-1)^{j}(\sin \varepsilon) e_1.\]
Call
\[ K= P_{n_1} \cap P_{n_2}.\]
Call $G_K(x,y)$ to be the Green's matrix. Although the domain is Lipschitz $G_K$ still satisfies the bound (via Avellaneda-Lin~\cite{Avellaneda:1987aa}), in $ d \geq 3$,
\[ |G_K(x,y)| \lesssim \frac{1}{|x-y|^{d-2}}.\]
We call $P_K(x,y)$, for $x \in K$ and $y \in \partial K$, to be the Poisson kernel for $K$, which is well-defined as long as $y_1 \neq 0$. Call $\delta(x)= \textup{dist}(x,\partial K)$.
\begin{lem}\label{lem: PK bounds 1}
For any $\alpha \in (0,1)$ and $\varepsilon$ sufficiently small depending on $\alpha$ and universal quantities,
\[
|P_K(x,y)| \lesssim_\alpha
\left\{
\begin{array}{lll}
\frac{\delta(x)^\alpha}{|x-y|^{d-1+\alpha}} & \hbox{ for } & |y_1| \geq \tfrac{1}{2}|x-y| \vspace{1.5mm}\\
\frac{1}{|y_1|}\frac{\delta(x)^\alpha}{|x-y|^{d-2+\alpha}} & \hbox{ for } & |y_1| \leq \tfrac{1}{2}|x-y|.
\end{array}\right.
\]
\end{lem}
The proof is postponed to Appendix~\ref{sec: A}; here we show how the estimates are used. Suppose $\psi : \partial K \to \mathbb{R}^N$ satisfies,
\[ |\psi(x)| \leq \min\{|x_1|,1\}.\]
We consider the Poisson kernel solution of the Dirichlet problem,
\[ u(x) = \int_{\partial K} P_K(x,y) \psi(y) \ dy.\]
In particular we are interested in the continuity at $0$; we really only consider $x = te_d$ for some $t>0$ (or $x = tn_1$ or $tn_2$, but this is basically the same), so we restrict to that case. Now for $y \in \partial K$, $|x-y| \sim t + |y|$, and so $|x-y| \gtrsim |y_1|$ and the first bound in Lemma~\ref{lem: PK bounds 1} implies the second. Thus we can compute
\begin{align*}
|u(te_d)| &\lesssim \int_{\partial K} \frac{1}{|y_1|}\frac{t^\alpha}{(t+|y|)^{d-2+\alpha}}\min\{|y_1|,1\} \ dy \\
&\lesssim \int_{ \partial K }\frac{t^\alpha}{(t+|y|)^{d-2+\alpha}}\min\{1,\frac{1}{|y_1|}\} \ dy \\
&\lesssim \int_{\mathbb{R}}\int_{ \mathbb{R}^{d-2} }\min\{1,\frac{1}{|y_1|}\}\frac{t^\alpha}{(t+|y_1| +|z|)^{d-2+\alpha}} \ dzdy_1
\end{align*}
Computing the inner integrals
\[\int_{ \mathbb{R}^{d-2} }\frac{1}{(t+|y_1| +|z|)^{d-2+\alpha}} \ dz = \frac{1}{(t+|y_1|)^{\alpha}} \int_{\mathbb{R}^{d-2} }\frac{1}{(1+|w|)^{d-2+\alpha}} \ dw \lesssim \frac{1}{(t+|y_1|)^{\alpha}}. \]
Then
\[ u(te_d) \lesssim \int_{\mathbb{R}} \min\{1,\frac{1}{|y_1|}\}\frac{t^{\alpha}}{(t+|y_1|)^{\alpha}} dy_1 \lesssim t^\alpha \ \hbox{ for } \ t \leq 1. \]
We state the result of a slight generalization of this calculation as a Lemma.
\begin{lem}\label{lem: PK bounds 2}
Suppose that $K = P_{n_1} \cap P_{n_2}$, $\alpha \in (0,1)$, and $\varepsilon = |n_1 - n_2|$ is sufficiently small so that the estimates of Lemma~\ref{lem: PK bounds 1} hold, and suppose $\psi : \partial K \to \mathbb{R}$ is smooth and satisfies the bound $|\psi(x)| \leq \min\{ \delta^\beta |x \cdot (n_1 - n_2)|^\beta,1\}$ for some $\delta >0$ and $1 \geq \beta >\alpha$. Then for any bounded solution $u$ of
\[ - \nabla \cdot( A(x) \nabla u) = 0 \ \hbox{ in } \ K \ \ \hbox{ with } \ u = \psi \ \hbox{ on } \ \partial K. \]
it holds
\[ |u(te_d)| \lesssim \delta^\alpha t^\alpha \ \hbox{ for } \ t \leq 1/\delta.\]
\end{lem}
There is an additional subtlety, which is the uniqueness of the bounded solution of the Dirichlet problem in $K$; the argument is the same as in the half-space case, see \cite{Gerard-Varet:2012aa}. To derive Lemma~\ref{lem: PK bounds 2} from the previous calculation one simply rescales to $u(\cdot/\delta)$; the domain $K$ is scaling invariant and the Poisson kernel associated with $A(\cdot/\delta)$ satisfies the same bounds as that for $A$.
\section{Nonlinear Equations Background Results}\label{sec: nonlinear background}
In this section we consider the boundary layer problem for nonlinear operators. To explain the assumptions we write out the problem in a general domain
\begin{equation}\label{e.nonlinearhomgen}
\left\{\begin{array}{ll}
- \nabla \cdot a(\tfrac{x}{\varepsilon},\nabla u^\varepsilon) =0
&\text{ in } \Omega ,\vspace{1.5mm}\\
u^\varepsilon(x)=g(x,\frac{x}{\varepsilon}) &\text{ on } \partial\Omega.
\end{array}
\right.
\end{equation}
This type of equation would arise as the Euler-Lagrange equation of a variational problem,
\[ \hbox{ minimize } \ E(u) = \int_{\Omega} F(\tfrac{x}{\varepsilon},Du) \ dx \ \hbox{ over } \ u \in H^1_0(\Omega) + g(\cdot,\tfrac{\cdot}{\varepsilon}).\]
A natural uniform ellipticity assumption on the functional $F$ is
\[ \hbox{$F$ is convex with $1 \geq D^2F \geq \lambda>0$}.\]
Then $a = DF$ is $1$-Lipschitz continuous in $p$ and has the monotonicity property
\[ (a(x,p) - a(x,q))\cdot(p-q) \geq \lambda|p-q|^2 \ \hbox{ for all } \ p,q \in \mathbb{R}^d. \]
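Both properties follow from the ellipticity of $F$ by integrating along the segment between $p$ and $q$ (a one-line check):
\[ (a(x,p) - a(x,q))\cdot(p-q) = \int_0^1 (p-q)^{T} D^2F\big(x, q + s(p-q)\big)(p-q)\, ds \ \geq\ \lambda |p-q|^2,\]
and the Lipschitz bound follows in the same way from $D^2F \leq 1$.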
Now we consider how to determine the effective boundary conditions for the homogenization problem \eref{nonlinearhomgen}. We zoom in at a boundary point $x_0 \in \partial \Omega$ defining,
\[ v^\varepsilon(y) = u^\varepsilon(x_0+\varepsilon y) \ \hbox{ which solves } \
\left\{
\begin{array}{ll}
- \nabla \cdot a(y+\tfrac{x_0}{\varepsilon},\tfrac{1}{\varepsilon}Dv^\varepsilon) = 0 & \hbox{ in } \tfrac{1}{\varepsilon}(\Omega-x_0) \vspace{1.5mm}\\
v^\varepsilon(y) = g(x_0 + \varepsilon y,y+\tfrac{x_0}{\varepsilon}) & \hbox{ on } \tfrac{1}{\varepsilon}\partial(\Omega-x_0)
\end{array}\right.
\]
Now in order to have a unique equation in the limit $\varepsilon \to 0$ the following limit needs to exist,
\[ a_* (y,p) = \lim_{t \to 0} ta(y,t^{-1}p).\]
Note that, if said limit exists, it is always $1$-homogeneous in $p$: substituting $s = \lambda^{-1}t$,
\[ a_*(y,\lambda p) = \lim_{t \to 0} t\,a(y,t^{-1}\lambda p) = \lambda \lim_{s \to 0} s\, a(y,s^{-1} p) = \lambda a_*(y,p).\]
In other words we need $a$ to be $1$-homogeneous in $p$ at $\infty$; the operator $a_*$ is then this limiting homogeneous profile of $a$ at $x_0$.
The above discussion motivates our assumption on the operators we study in the half-space problem.
\begin{enumerate}[(i)]
\item Periodicity:
\begin{equation}
a(x+z,p) = a(x,p) \ \hbox{ for all } \ x \in \mathbb{R}^d, z \in \mathbb{Z}^d, p \in \mathbb{R}^d.
\end{equation}
\item Ellipticity: for some $\lambda>0$ and all $p,q \in \mathbb{R}^{d}$
\begin{equation}
(a(x,p) - a(x,q))\cdot(p-q) \geq \lambda|p-q|^2 \ \hbox{ and } \ |a(x,p) - a(x,q)| \leq |p-q|.
\end{equation}
\item Positive Homogeneity: for all $x,p$ and $t>0$,
\begin{equation}
a(x,tp) = ta(x,p)
\end{equation}
\end{enumerate}
For convenience we will also assume $a(x,p)$ is $C^1$ in $x$ so that, by the De Giorgi regularity theorem, solutions are locally $C^{1,\alpha}$ for some universal $\alpha >0$.
\subsection{Regularity estimates for nonlinear equations}
In this section we explain the regularity estimates which we use to obtain $(1)$ existence of boundary layer limits and $(2)$ the characterization of limits at rational directions. For both results we need the De~Giorgi estimates, respectively for the interior and the boundary. As is the usual approach to regularity for nonlinear equations, we can reduce to considering the regularity of linear equations with merely bounded measurable coefficients.
For what follows we will take $A : \mathbb{R}^d \to M_{d \times d}$ to be measurable and elliptic,
\[ \lambda \leq A(x) \leq 1.\]
Recall that results for bounded measurable coefficients imply results for solutions of nonlinear uniformly elliptic equations and for the difference of two solutions. If $u_1,u_2 \in H^{1}_{\textup{loc}}(\Omega)$ solve
\[ - \nabla \cdot a(x,\nabla u_j) = 0 \ \hbox{ in } \Omega\]
then $w = u_1 - u_2$ solves
\begin{equation}\label{e.diffeqn}
-\nabla \cdot (A(x) \nabla w) = 0 \ \hbox{ in } \Omega \ \hbox{ with } \ A(x) = \int_0^1 D_p a(x,s\nabla u_1 + (1-s)\nabla u_2) \ ds,
\end{equation}
and one can easily check that $\lambda\leq A(x) \leq 1$.
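The identity behind \eref{diffeqn} is simply the fundamental theorem of calculus in the gradient variable:
\[ a(x,\nabla u_1) - a(x,\nabla u_2) = \int_0^1 \frac{d}{ds}\, a\big(x, s \nabla u_1 + (1-s)\nabla u_2\big)\, ds = A(x) \nabla w,\]
so that $-\nabla \cdot (A(x)\nabla w) = -\nabla \cdot a(x,\nabla u_1) + \nabla \cdot a(x,\nabla u_2) = 0$ in the weak sense.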
We emphasize that, despite the overlap of notation, the results in this section apply to solutions of scalar equations, not systems.
\begin{thm}[De Giorgi]\label{thm: degiorgi}
There is an $\alpha \in (0,1)$ and $C>0$ depending on $d,\lambda$ so that if $u$ solves,
\[ - \nabla \cdot (A(x) \nabla u) =0 \ \hbox{ in } \ B_1\]
then,
\[ [u]_{C^\alpha(B_{1/2})} \leq C\osc_{B_1} u\]
\end{thm}
A similar result holds up to the boundary for regular domains. We say that $\Omega$ is a regular domain of $\mathbb{R}^d$ if there are $r_0,\mu>0$ so that for every $x \in \partial \Omega$ and every $0<r < r_0$,
\[ |\Omega^C \cap B_{r}(x)| \geq \mu |B_r|.\]
\begin{lem}\label{lem: bdry cont nonlinear}
Suppose that $\Omega$ is a regular domain, $r_0 \geq 1$ and $0 \in \partial \Omega$, and $\varphi \in C^{\beta}$. There is an $\alpha_0(d,\lambda,\mu) \in (0,1)$ such that for $0<\alpha < \min\{\alpha_0,\beta\}$ there is $C(d,\lambda,\mu,\alpha)>0$ so that if $u$ solves,
\[ - \nabla \cdot (A(x) \nabla u) =0 \ \hbox{ in } \ B_1 \cap \Omega , \ \hbox{ with } \ u = \varphi \ \hbox{ on } \ \partial \Omega\]
then for every $r \leq 1$,
\[ \osc_{B_r} u \leq C([ \varphi]_{C^{\beta}(B_1)}+\osc_{B_1} u)r^\alpha\]
\end{lem}
The proof is postponed to Appendix~\ref{sec: A}. We make a remark on the optimality of this estimate. Using these results one can show local $C^{1,\alpha}$ estimates for solutions of non-linear uniformly elliptic equations. Large scale $C^{1,\alpha}$ estimates are not possible due to the $x$-dependence, but in the spirit of Avellaneda-Lin~\cite{Avellaneda:1991aa} one can likely prove large scale Lipschitz estimates. See Armstrong-Smart~\cite{Armstrong:2016aa} for the (more difficult) stochastic case, we are not aware of a citation for the periodic case. These estimates however are for \emph{solutions}, we seem to require the result of Lemma~\ref{lem: bdry cont nonlinear} for \emph{differences} of solutions (i.e. basically it is a $C^\alpha$ estimate of a derivative). It is not clear, therefore, whether we can do better than Lemma~\ref{lem: bdry cont nonlinear}.
\subsection{Half-space problem} We consider the basic well-posedness results for nonlinear problems set in half-spaces. Consider
\begin{equation}\label{hsnonlinear}
\left\{
\begin{array}{ll}
- \nabla \cdot a(x,Du) = 0 & \hbox{ in } P_n \vspace{1.5mm}\\
u = \varphi(x) & \hbox{ on } \partial P_n.
\end{array}
\right.
\end{equation}
Then the maximum principle holds.
\begin{lem}\label{lem: hs comparison nonlinear}
Suppose $u_1$ and $u_2$ are respectively bounded subsolutions and supersolutions of \eqref{hsnonlinear} with boundary data $\varphi_1 \leq \varphi_2$ on $\partial P_n$, then,
\[ u_1 \leq u_2 \ \hbox{ in } \ P_n.\]
\end{lem}
\noindent The result follows from the maximum principle in bounded domains and Lemma~\ref{lem: bdry cont nonlinear}.
\section{Boundary layers limits}\label{sec: boundary layers}
In this section we will discuss the boundary layer problem for divergence form elliptic problems in rational and irrational half-spaces. The results that we need for this paper are valid for both nonlinear scalar equations and linear systems and the proofs have only minor differences. For that reason, in this section and the next, we will discuss both types of equations in a unified way. We use the nonlinear notation for the PDE. We consider the cell problem,
\begin{equation}\label{e.cellbll}
\left\{
\begin{array}{ll}
- \nabla \cdot a(y,\nabla v^s_n) = 0 & \hbox{ in } P_n^s \vspace{1.5mm}\\
v^s_n = \varphi(y) & \hbox{ on } \partial P_n^s.
\end{array}
\right.
\end{equation}
We will first consider the case when $n \in S^{d-1} \setminus \mathbb{R} \mathbb{Z}^d$ is irrational.
\subsection{Irrational half-spaces} For linear systems the problem \eref{cellbll} in irrational half-spaces has been much studied \cite{Gerard-Varet:2012aa,Gerard-Varet:2011aa,Aleksanyan:2015aa,Aleksanyan:2017aa,Armstrong:2017aa,Prange:2013aa,Shen:2017aa}. Typically the focus has been on the Diophantine irrational directions. We do not give the definition, since it is not needed for our work, but basically the Diophantine condition is a quantification of the irrationality. Under this assumption strong quantitative results can be derived for the convergence to the boundary layer limit.
For the purposes of this paper we are only interested in the qualitative result, the existence of a boundary layer limit for \eref{cellbll} in a generic irrational half-space (no Diophantine assumption). The existence of a boundary layer tail in general irrational half-spaces was originally proven by Prange \cite{Prange:2013aa} for divergence form linear systems, and for nonlinear non-divergence form equations by the first author in \cite{Feldman:2015aa} (following the work of Choi-Kim \cite{Choi:2014aa} on the Neumann problem). To our knowledge the case of nonlinear divergence form equations has not been studied yet.
What we would like to explain here is that the proof of \cite{Feldman:2015aa} applies also to the problems we consider in this paper; careful inspection shows that it only required the interior regularity, continuity up to the boundary (small scale), and the $L^\infty$ estimate (or maximum principle) with respect to the boundary data.
\begin{thm}
Suppose that $n \in S^{d-1} \setminus \mathbb{R} \mathbb{Z}^d$. Then there exists $\varphi_*(n)$ such that,
\[ \sup_{s} \sup_{y \in \partial P_n} |v^s_n(y+Rn) - \varphi_*(n)| \to 0 \ \hbox{ as } \ R \to \infty.\]
\end{thm}
One consequence of this theorem is that, for irrational directions, we can just study $v_n = v^0_n$. We give a sketch of the proof following \cite{Feldman:2014aa}.
\begin{proof}(Sketch) The boundary data, and hence the solution $v^s_n$ as well, satisfies an almost periodicity property in the directions parallel to $\partial P_n$. Given $N \geq 1$ there is a modulus $\omega_n(N) \to 0$ as $N \to \infty$ (this uses that $n$ is irrational) so that for any $s$ and any $y \in \partial P_n$ there is a lattice vector $z \in \mathbb{Z}^d$ with $|z-y| \leq N$ and $|z \cdot n - s| \leq \omega_n(N)$, see \cite{Feldman:2015aa} Lemma 2.3.
Since $v_n^0(\cdot + z)$ solves the same equation in $P_n^{z \cdot n}$ we can use the up to the boundary H\"{o}lder continuity and the $L^\infty$ estimate (or maximum principle) to see that
\[ \|v^s_n(\cdot) - v^0_n(\cdot + z)\|_{L^\infty(P_n^s \cap P_n^{z \cdot n})} \lesssim \| \nabla \varphi\|_\infty \omega_n(N)^\alpha. \]
Sending $N$ to $\infty$ we see that if $v^0_n$ has a boundary layer limit then so does $v^s_n$ and they have the same value.
Then we just need to argue for $v^0_n$. Given $y \in \partial P_n$ the same argument as above gives a lattice vector $\bar{z} \in \mathbb{Z}^d$ with $|\bar{z}-y| \leq N$, $|\bar{z} \cdot n| \leq \omega_n(N)$, and
\[ |v^0_n(\cdot) - v^0_n(\cdot + \bar{z})| \lesssim \| \nabla \varphi\|_\infty \omega_n(N)^\alpha.\]
Then using the $L^\infty$ estimate Lemma~\ref{lem: system max} (or the maximum principle) and the large scale interior regularity estimates, Theorem~\ref{thm: degiorgi} above for the nonlinear case or Lemma 9 in \cite{Avellaneda:1987aa} for the linear systems case,
\begin{align*}
\osc_{y \cdot n \geq R} v^0_n(y) &\lesssim \osc_{y \cdot n = R} v^0_n(y) \\
&\leq \osc_{y \in B_N(0) \cap \partial P_n} v^0_n(y+Rn) + C\| \nabla \varphi\|_\infty \omega_n(N)^\alpha \\
&\lesssim \| \nabla \varphi\|_{\infty}( (N/R)^\alpha + \omega_n(N)^\alpha).
\end{align*}
Choosing $N$ large first to make $\omega_n(N)$ small and then $R \gg N$ gets the existence of a boundary layer limit.
\end{proof}
\subsection{Rational half-spaces} Next we consider the case of a rational half-space. Let $\xi \in \mathbb{Z}^d \setminus \{0\}$ be an irreducible lattice direction, and $v^s_\xi$ be the corresponding half-space problem solution. In this case $\varphi$ is periodic with respect to a $d-1$-dimensional lattice parallel to $\partial P_\xi$. There exist $\ell_1,\dots,\ell_{d-1}$ with $\ell_j \perp \xi$ and $|\ell_j| \leq |\xi|$ which are periods of $\varphi$. Then by uniqueness $\ell_j$ are also periods of $v^s_\xi$. In this special situation it is possible to show that there is a boundary layer limit with an exponential rate of convergence.
We give a general set-up. We consider the half-space problem,
\begin{equation}
\left\{
\begin{array}{ll}
- \nabla \cdot a(x, \nabla v) = \nabla \cdot f & \hbox{ in } \mathbb{R}^d_+ \vspace{1.5mm}\\
v = \psi(x') & \hbox{ on } \partial \mathbb{R}^d_+.
\end{array}
\right.
\end{equation}
where $\psi : \partial \mathbb{R}^d_+ \to \mathbb{R}$ and $f$ are smooth, and $\psi$, $f$, and $a(\cdot,p)$ all share $d-1$ linearly independent periods $\ell_1,\dots, \ell_{d-1} \in \partial \mathbb{R}^d_+$ such that,
\[ \max_{1 \leq j \leq d-1} |\ell_j| \leq M.\]
The operators $a$, as always, will also satisfy the assumptions of either Section~\ref{sec: linear background} or Section~\ref{sec: nonlinear background}. For now we will take $f=0$; this covers most of the situations we will run into in this paper. Then $v$ has a boundary layer limit with an exponential rate of convergence.
\begin{lem}\label{lem: exp tail nonlinear}
There exists a value $c_*(\psi)$ such that,
\[ \sup_{y \in \partial \mathbb{R}^d_+} |v(y+Re_d) - c_*| \leq C(\osc \psi) e^{-cR/M},\]
with $C,c>0$ depending only on $\lambda ,d$.
\end{lem}
The proof of this result is the same as the proof of the analogous result, Lemma 3.1, in \cite{Feldman:2015aa}, so we only include a sketch. The only tools necessary are the maximum principle (or $L^\infty$ estimate Lemma~\ref{lem: system max}) and the large scale interior H\"{o}lder estimates via De Giorgi-Nash-Moser for nonlinear equations (Theorem~\ref{thm: degiorgi} here) or Avellaneda-Lin for linear systems (Lemma 9 in \cite{Avellaneda:1987aa}).
\begin{proof}(Sketch) Let $L \geq 1$ be a constant to be chosen and let $Q$ be the unit periodicity cell of $\psi$, which has diameter at most $\sim M$. Apply the De Giorgi interior H\"{o}lder estimates or the Avellaneda-Lin large scale H\"{o}lder estimates to find,
\[ \osc_{ \partial \mathbb{R}^d_+ + LMe_d} v= \osc_{y \in Q} v( y + LMe_d) \leq C L^{-\alpha}\osc_{\mathbb{R}^d_+} v \leq C L^{-\alpha} \osc \psi \leq \frac{1}{2} \osc \psi .\]
The second inequality is by the maximum principle or the $L^\infty$ estimate Lemma~\ref{lem: system max}; for the third inequality we have chosen $L \geq 1$ universal to make $CL^{-\alpha} \leq 1/2$. Then iterate the argument with the new boundary data on $\partial \mathbb{R}^d_+ + LMe_d$, whose oscillation has decayed by a factor of $1/2$.
\end{proof}
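Quantitatively, the iteration gives, for every integer $k \geq 0$,
\[ \osc_{\{y \cdot e_d \geq k L M\}} v \ \leq\ 2^{-k} \osc \psi, \]
so the nested oscillations converge to a limit value $c_*$, and taking $k = \lfloor R/(LM) \rfloor$ yields the stated exponential bound with $c = (\log 2)/L$ (a sketch; the constants are not optimized).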
We will also need a slight variant of the above result when the operator $a$ does not share the same periodicity as the boundary data, but instead has oscillations at a much smaller scale. We assume that $\psi$ has periods $\ell_1,\dots,\ell_{d-1}$ as before, and now we also assume that there are $e_1,\dots, e_d$ which are periods of $a$ and,
\[ \max_{1 \leq j \leq d } |e_j| \leq \varepsilon.\]
For example this is the case with $a(\frac{x}{\varepsilon},p)$ when $a(\cdot,p)$ is $\mathbb{Z}^d$-periodic. In this situation we do not quite have a boundary layer limit with exponential rate, but at least there is an exponential decay of the oscillation down to a scale $\sim \varepsilon^\alpha$.
\begin{lem}\label{lem: exp tail nonlinear variant}
There exists a value $c_*(\psi)$ such that, for some universal $\alpha \in (0,1)$ (nonlinear case) or for every $\alpha \in (0,1)$ (linear case),
\[ \sup_{y \in \partial \mathbb{R}^d_+} |v(y+Re_d) - c_*| \leq C(\osc \psi) e^{-cR/M}+C\|\nabla \psi\|_\infty \varepsilon^\alpha ,\]
with $c,C>0$ universal and $C$ depending on $\alpha$ as well.
\end{lem}
\noindent Again the proof of this result mirrors the proof of Lemma~3.2 in \cite{Feldman:2015aa} and we do not include it. Briefly, the idea is the same as in Lemma~\ref{lem: exp tail nonlinear}, except that the lattice vectors generated by $\ell_1,\dots,\ell_{d-1}$ are no longer periods of $v$; instead, for each lattice vector there is a nearby vector (within distance $\varepsilon$) which is a period of the operator. This vector will almost be a period of $v$, with an error of order $\varepsilon^\alpha$ coming from the boundary continuity estimate Lemma~\ref{lem: bdry cont nonlinear} (nonlinear) or Lemma~\ref{lem: bdry cont linear} (linear system).
Finally we discuss the boundary layer problem \eref{cellbll} with non-zero right hand side $f$. We will restrict to the case of linear systems. We need to put a decay assumption on $f$ to guarantee even the existence of a solution. We will assume that there are $K,b > 0 $ so that,
\begin{equation}\label{e.fnorm}
\sup_{y_d \geq R} |f(y)| \leq \frac{K}{R} e^{-bR/M}.
\end{equation}
Such an assumption arises naturally; it is exactly the decay obtained for $\nabla v$ when $v$ solves \eref{cellbll} with $f = 0$. The $1/R$ polynomial decay is important since we will care about the dependence on $M \gg 1$: the exponential does not take effect until $R \gg M$, while the $1/R$ decay begins at the unit scale.
\begin{lem}\label{lem: bdry layer rhs}
Suppose that $f$ satisfies the bound \eref{fnorm} and $v$ is the solution of the half-space problem above for a linear system satisfying the standard assumptions of Section~\ref{sec: linear background}. Then there exists $c_*(\psi,f)$ such that,
\[ \sup_{y \in \partial \mathbb{R}^d_+} |v(y+Re_d) - c_*| \leq C ((\osc \psi) +K \log M)e^{-b_0R/M} \]
the constants $C$ and $b_0$ depend on universal parameters as well as $b$ from \eref{fnorm}.
\end{lem}
See the appendix and Lemma A.4 of \cite{Feldman:2015aa} for more details.
\subsection{Interior homogenization of a boundary layer problem} In this section we will consider the \emph{interior} homogenization of half-space problems with periodic boundary data; as explained in Section~\ref{sec: outline}, such a problem arises in the course of computing the directional limits of $\varphi_*$ at a rational direction.
\begin{equation}\label{e.half-space hom}
\left\{
\begin{array}{ll}
- \nabla \cdot a(\tfrac{x}{\varepsilon},\nabla u^\varepsilon) = 0 & \hbox{ in } P_n \vspace{1.5mm}\\
u^\varepsilon = \psi(x) & \hbox{ on } \partial P_n
\end{array}
\right.
\end{equation}
homogenizing to
\begin{equation}\label{e.half-space hom2}
\left\{
\begin{array}{ll}
- \nabla \cdot a^0(\nabla u^0) = 0 & \hbox{ in } P_n \vspace{1.5mm}\\
u^0 = \psi(x) & \hbox{ on } \partial P_n.
\end{array}
\right.
\end{equation}
Here $\psi : \partial P_n \to \mathbb{R}^N$, as in the previous section, will be smooth and periodic with respect to $d-1$ linearly independent translations parallel to $\partial P_n$, which we call $\ell_1,\dots,\ell_{d-1} \in \partial P_n$. As before we set $M = \max_{j} |\ell_j|$; expecting homogenization we assume $M \gg \varepsilon$, and for convenience we take $M = 1$, since general results can be derived by scaling.
This problem is quite similar to the standard homogenization problem for Dirichlet boundary data, the unboundedness of the domain is compensated by the periodicity of the boundary data and by the existence of a boundary layer limit which is a kind of (free) boundary condition at infinity. The main result of this section is the \emph{uniform} convergence of $u^\varepsilon$ to $u^0$, and hence also (importantly for us) the convergence of the boundary layer limits,
\begin{prop}\label{prop: half-space hom}
Homogenization holds for \eref{half-space hom} with estimates:
\begin{enumerate}[$(i)$]
\item \textup{(nonlinear equations)} For every $\beta \in (0,1)$, $\varepsilon \leq 1/2$, there exists $\alpha(\beta,\lambda,d)$ such that,
\[ \sup_{P_n} |u^\varepsilon - u^0| \leq C [ \psi]_{C^\beta} \varepsilon^\alpha.\]
\item \textup{(linear systems)} For every $\varepsilon \leq 1/2$,
\[ \sup_{P_n} |u^\varepsilon - u^0| \leq C [ \psi]_{C^4} \varepsilon (\log \tfrac{1}{\varepsilon})^3.\]
\end{enumerate}
\end{prop}
\noindent We will follow the idea of \cite{Feldman:2015aa} Lemma~4.5; there is a slight additional difficulty, since for divergence form nonlinear problems it is not possible to add a linear function $n \cdot x$ and preserve the solution property, even for the homogenized problem. The $C^4$ norm we require for $\psi$ in the linear systems case is more than necessary.
The proof will use known results about homogenization of Dirichlet boundary value problems in bounded domains, specifically we consider the problem in a strip type domain,
\begin{equation}
\left\{
\begin{array}{ll}
- \nabla \cdot a(\tfrac{x}{\varepsilon},\nabla u^\varepsilon_R) = 0 & \hbox{ in } \Pi_n(0,R) = \{ 0 < x \cdot n < R\} \vspace{1.5mm}\\
u^\varepsilon_R = \psi(x) & \hbox{ on } \partial \Pi_n(0,R)= \{ x \cdot n \in \{0,R\} \},
\end{array}
\right.
\end{equation}
where we make some choice to extend $\psi$ to $x \cdot n = R$. The solution of the homogenized problem $u^0_R$ is defined analogously.
For linear systems we have the following rate for convergence, for $R \geq 1$,
\begin{equation}\label{e.hom est lin}
\sup_{\Pi_n(0,R)} |u^\varepsilon_R - u^0_R| \leq CR^4\| \psi\|_{C^{4}}(R^{-1}\varepsilon),
\end{equation}
which can be derived from the rate of convergence proved in Avellaneda-Lin \cite{Avellaneda:1991aa} by scaling. The $C^4$ regularity on $\psi$ is sufficient; we did not state the precise regularity requirement on $\psi$, which can be found in \cite{Avellaneda:1991aa}. With less regularity on $\psi$ one can also obtain an algebraic rate of convergence $O(\varepsilon^\alpha)$.
For nonlinear equations there is an algebraic rate of convergence, for any $ \beta \in (0,1)$
\begin{equation}\label{e.hom est nonlin}
\sup_{\Pi_n(0,R)}| u^\varepsilon_R - u^0_R| \leq CR^\beta\|\psi\|_{C^{0,\beta}} (R^{-1} \varepsilon)^\alpha
\end{equation}
with some $\alpha = \alpha(\beta) \in (0,1)$ universal.
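Although the setting above is $d \geq 2$, the flavor of these error estimates is easy to see numerically in a one-dimensional toy problem, where everything is explicit (a self-contained sketch, not part of the argument: the exact solution is $u^\varepsilon(x) = F(x)/F(1)$ with $F(x) = \int_0^x a(t/\varepsilon)^{-1}\, dt$, the homogenized coefficient is the harmonic mean, and the error decays linearly in $\varepsilon$):
\begin{verbatim}
# 1D toy homogenization: -(a(x/eps) u')' = 0 on (0,1), u(0)=0, u(1)=1.
import numpy as np

def a(y):
    return 2.0 + np.sin(2 * np.pi * y)    # 1-periodic, elliptic: 1 <= a <= 3

for eps in [1e-1, 1e-2, 1e-3]:
    x = np.linspace(0.0, 1.0, 200001)
    inv = 1.0 / a(x / eps)
    # trapezoid rule for F(x) = int_0^x dt / a(t/eps)
    F = np.concatenate([[0.0],
                        np.cumsum(0.5 * (inv[1:] + inv[:-1]) * np.diff(x))])
    u = F / F[-1]                          # exact solution u^eps
    print(eps, np.abs(u - x).max())        # homogenized limit is u0(x) = x
\end{verbatim}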
\begin{proof}[Proof of Proposition~\ref{prop: half-space hom}]
We define the boundary layer limits of, respectively, the $\varepsilon$-problem and the homogenized problem in \eref{half-space hom}. We have not proven that the $\varepsilon$-problem has a boundary layer limit; however, Lemma~\ref{lem: exp tail nonlinear variant} gives that the limit values are concentrated in a set of diameter $o_\varepsilon(1)$. So we define,
\[ \mu^\varepsilon \in \lim_{R \to \infty} u^\varepsilon(Rn) \ \hbox{ and } \ \mu^0 = \lim_{R \to \infty} u^0(Rn),\]
where $\mu^\varepsilon$ can be any subsequential limit and satisfies, again via Lemma~\ref{lem: exp tail nonlinear variant},
\begin{equation}\label{e.nonlinear mu est}
|\mu^\varepsilon - u^\varepsilon(Rn)| \leq C\|\nabla \psi\|_\infty (\varepsilon^\alpha + e^{-cR}) \ \hbox{ (nonlinear case) }
\end{equation}
and
\begin{equation}\label{e.linear mu est}
|\mu^\varepsilon - u^\varepsilon(Rn)| \leq C\|\nabla \psi\|_{C^{0,\nu}}( \varepsilon + C e^{-cR}) \ \hbox{ (linear system case).}
\end{equation}
Instead of arguing directly with $u^\varepsilon$ and $u^0$ we consider,
\begin{equation}\label{e.R probs}
\left\{
\begin{array}{ll}
-\nabla \cdot a(\tfrac{x}{\varepsilon}, \nabla u^{\varepsilon}_{R})=0 & \hbox{ in } \Pi_n(0,R) ,\vspace{1.5mm}\\
u_{R}^{\varepsilon} =\psi(x) & \hbox{ on } x\cdot n = 0, \vspace{1.5mm}\\
u_R^{\varepsilon} = \mu^\varepsilon & \hbox{ on } x \cdot n = R.
\end{array}
\right.
\end{equation}
and, for $j \in \{0,\varepsilon\}$
\begin{equation}
\left\{
\begin{array}{ll}
-\nabla \cdot a^0( \nabla u^{0}_{R,j})=0 & \hbox{ in } \Pi_n(0,R) ,\vspace{1.5mm}\\
u_{R,j}^{0} =\psi(x) & \hbox{ on } x\cdot n = 0, \vspace{1.5mm}\\
u_{R,j}^{0} = \mu^j & \hbox{ on } x \cdot n = R.
\end{array}
\right.
\end{equation}
We will choose $R = R(\varepsilon)$ below to balance the various errors. The error in replacing $u^\varepsilon$ by $u^\varepsilon_R$,
\[ |u^\varepsilon(x) - u^\varepsilon_R(x)| \leq C\|\nabla \psi\|_\infty (\varepsilon^\alpha + e^{-cR}) \ \hbox{ for } \ x \in \Pi_n(0,R),\]
and replacing $u^0$ by $u^0_{R,0}$,
\[ |u^0(x) - u^0_{R,0}(x)| \leq C(\osc \psi) e^{-cR} \ \hbox{ for } \ x \in \Pi_n(0,R), \]
these estimates hold on $\partial \Pi_n(0,R)$ by \eref{nonlinear mu est} (or, for linear systems, by \eref{linear mu est}), and therefore by the maximum principle (or by Lemma~\ref{lem: system max} for linear systems) they hold in the interior as well. To estimate the error in replacing $u^0_{R,0}$ by $u^0_{R,\varepsilon}$ we need to estimate the difference $\mu^\varepsilon - \mu^0$, which is basically the goal of the proof; this will be achieved below.
By Lemma~\ref{lem: bdry cont nonlinear} (or Lemma~\ref{lem: flat reg} in the linear systems case) there exists a universal $\delta_0(\lambda,d)>0$ so that if $B$ is uniformly elliptic and $q$ solves,
\begin{equation}\label{e.q unif}
\left\{
\begin{array}{ll}
-\nabla \cdot (B(x) \nabla q)=0 & \hbox{ in } \Pi_n(0,1) ,\vspace{1.5mm}\\
q =0 & \hbox{ on } x\cdot n = 0, \vspace{1.5mm}\\
|q| =1 & \hbox{ on } x \cdot n = 1,
\end{array}
\right. \ \hbox{ then } \ |q(x)| \leq \frac{1}{2} \ \hbox{ for } \ x \cdot n \leq \delta_0.
\end{equation}
Now call,
\[ q^\varepsilon = u^0_{R,0} - u^{0}_{R,\varepsilon} \ \hbox{ which solves } \ \left\{
\begin{array}{ll}
-\nabla \cdot (B(x) \nabla q^\varepsilon)=0 & \hbox{ in } 0<x\cdot n<R ,\vspace{1.5mm}\\
q^\varepsilon =0 & \hbox{ on } x\cdot n = 0, \vspace{1.5mm}\\
q^\varepsilon = \mu^0 - \mu^\varepsilon & \hbox{ on } x \cdot n= R
\end{array}
\right.
\]
with $B(x) = A^0$ in the linear case or,
\[ B(x) = \int_0^1 Da^0( t \nabla u^0_{R,0}(x) + (1-t) \nabla u^0_{R,\varepsilon}(x)) \ dt \ \hbox{ uniformly elliptic,}\]
in the nonlinear case. Now $\frac{1}{|\mu^0 - \mu^\varepsilon|}q^\varepsilon(Rx)$ solves an equation of the type \eref{q unif} and so,
\[ |q^\varepsilon(\delta_0 Rn)| \leq \tfrac{1}{2}|\mu^0 - \mu^\varepsilon|.\]
Now we apply the homogenization error estimates \eref{hom est nonlin} and \eref{hom est lin} for the domain $\Pi_n(0,R)$ to \eref{R probs}; writing $\gamma$ for the exponent $\alpha(\beta)$ from \eref{hom est nonlin} (renamed to avoid a clash with the exponent in \eref{nonlinear mu est}), we obtain
\[ |u^{0}_{R,\varepsilon} - u^\varepsilon_R| \leq CR \|\nabla \psi\|_{\infty}(R^{-1}\varepsilon)^\gamma\]
or respectively in the linear system case,
\[ |u^{0}_{R,\varepsilon} - u^\varepsilon_R| \leq CR^4 \|\psi\|_{C^4}(R^{-1}\varepsilon). \]
Now we estimate the error in $\mu^\varepsilon - \mu^0$, for the nonlinear case,
\begin{align*}
| \mu^\varepsilon - \mu^0 | &\leq |u^\varepsilon(\delta_0 Rn) - u^0(\delta_0 Rn)| + C\|\nabla \psi\|_\infty(\varepsilon^\alpha + e^{-cR}) \\
&\leq |u^\varepsilon_R(\delta_0 Rn) - u^{0}_{R,\varepsilon}(\delta_0 Rn)| + |q^\varepsilon(\delta_0Rn)|+C\|\nabla \psi\|_\infty(\varepsilon^\alpha + e^{-cR}) \\
&\leq CR \|\nabla \psi\|_{\infty}(R^{-1}\varepsilon)^\gamma + \tfrac{1}{2} |\mu^\varepsilon - \mu^0|+ C\|\nabla \psi\|_\infty(\varepsilon^\alpha + e^{-cR}).
\end{align*}
Moving the middle term above to the left hand side we find,
\[ | \mu^\varepsilon - \mu^0 | \leq C \|\nabla \psi\|_\infty( R(R^{-1}\varepsilon)^\gamma + \varepsilon^\alpha + e^{-cR}) \leq C \|\nabla \psi\|_\infty \varepsilon^{\alpha'} \]
where finally we have chosen $R = C \log \frac{1}{\varepsilon}$ and $\alpha' < \min\{ \alpha, \gamma \}$. The same argument in the linear case yields,
\[ | \mu^\varepsilon - \mu^0 | \leq C [\psi]_{C^{4}}( R^4(R^{-1}\varepsilon) + \varepsilon + e^{-cR}) \leq C[\psi]_{C^{4}} \varepsilon (\log \tfrac{1}{\varepsilon})^3.\]
\end{proof}
\section{Asymptotics near a rational direction}\label{sec: asymptotics}
We study asymptotic behaviour of the cell problems as $n \in S^{d-1}$ approaches a rational direction $\xi\in \mathbb{Z}^d\backslash\{0\}$. We recall $v^s_{\xi}$ the solution of the cell problem,
\begin{equation}\label{cell1}
\left\{\begin{aligned}
& -\nabla \cdot a(x+s\xi, \nabla v^s_{\xi}) =0 &\text{ in }& P_\xi ,\\
& v^s_{\xi}(x)=\varphi(x+s\xi) &\text{ on }& \partial P_\xi .
\end{aligned}
\right.\end{equation}
The boundary layer limit of the above cell problem depends on the parameter $s$ and we define,
\begin{equation}\label{e.phi*}
\varphi_{*}(\xi,s):=\lim_{R\rightarrow \infty}v^s_{\xi}(x+R\xi)
\end{equation}
which is well-defined and the limit is independent of $x$, see Lemma~\ref{lem: exp tail nonlinear}. It follows from Bezout's identity that $\varphi_*$ is a $1/|\xi|$ periodic function on $\mathbb{R}$, see Lemma 2.9 in \cite{Feldman:2015aa}. Wherever possible we will combine the arguments for linear systems and nonlinear equations.
\subsection{Regularity of \texorpdfstring{$\varphi_*(\xi,\cdot)$}{Lg}}
To begin we need to establish some regularity of $\varphi_*(\xi,\cdot)$. For quantitative purposes it is important to control the dependence of the regularity on $|\xi|$. We just state the results, postponing the proofs till the end of the section. A modulus of continuity for $\varphi_*(\xi,\cdot)$ which is uniform in $|\xi|$ is not difficult to establish. This follows from the continuity up to the boundary Lemma~\ref{lem: bdry cont nonlinear} (or Lemma~\ref{lem: flat reg}) and the maximum principle Lemma~\ref{lem: hs comparison nonlinear} (or the $L^\infty$ estimate Lemma~\ref{lem: system max}).
\begin{lem}\label{lem: reg holder}
The boundary layer limits $\varphi_*(\xi,s)$ are continuous in $s$.
\begin{enumerate}[(i)]
\item \textup{(Nonlinear equations)}
\[ [ \varphi_*(\xi,\cdot)]_{C^\alpha} \leq C \| \nabla \varphi\|_\infty ,\]
which holds for some universal $C \geq 1$ and $\alpha \in (0,1)$.
\item \textup{(Linear systems)} H\"{o}lder estimates as above hold for all $\alpha \in (0,1)$ and moreover,
\[ \| \frac{d}{ds}\varphi_*(\xi,\cdot)\|_\infty \leq C \| \nabla \varphi\|_{C^{0,\nu}} \ \hbox{ for any } \ 0 < \nu \leq 1. \]
\end{enumerate}
\end{lem}
To optimize our estimates, in the linear case, we will also need higher regularity of $\varphi_*$ which is (almost) uniform in $|\xi|$, this is somewhat harder to establish.
\begin{lem}\label{lem: reg smooth}
\textup{(Linear systems)} For any $\xi \in \mathbb{Z}^d \setminus \{0\}$, suppose $\varphi_{*}(\xi,s)$ is defined as above. Then for all $j\in\mathbb{N}$ and any $\nu >0$ there exists a universal constant $C_j$ such that,
\[\sup_s |\frac{d^j}{d s^j} \varphi_{*}(\xi,s)|\leq C_j\|\varphi\|_{C^{j,\nu}}\log^{j} (1+|\xi|).\]
\end{lem}
Note that Lemma~\ref{lem: reg smooth} is a bit weaker than Lemma~\ref{lem: reg holder} in the case $j=1$; this is because we take a different approach which is suboptimal when $j=1$, and it is not clear if the logarithmic terms are necessary when $j >1$. The proof is similar to Lemma 7.2 in \cite{Feldman:2015aa}: we take the derivative of $v^s_\xi$ with respect to $s$ and estimate based on the PDE. Probably more precise Sobolev estimates are possible but we did not pursue this.
\subsection{Intermediate scale asymptotics}\label{sec3}
Consider an irrational direction $n$ close to a lattice direction $\xi \in \mathbb{Z}^d \setminus \{0\}$. Let $\varepsilon>0$ be small and write,
\[ n = (\cos\varepsilon)\hat\xi - (\sin\varepsilon)\eta \ \hbox{ for some } \ \xi \in \mathbb{Z}^d \setminus \{0\} \ \hbox{ and a unit vector } \ \eta \perp \xi.\]
We will assume below that $|\varepsilon| \leq \pi/6$. We consider the cell problem in $P_n$
\begin{equation}\label{e.vn}
\left\{\begin{aligned}
& -\nabla \cdot a(y,\nabla v_n) =0 &\text{ in }& P_{n} ,\\
& v_n=\varphi( {y}) &\text{ on }& \partial P_{n}.
\end{aligned}
\right.\end{equation}
The first step of the argument is to show, with error estimate, that the boundary layer limit of $v_n$ is close to the boundary layer limit of the problem
\begin{equation}\label{e.vI}
\left\{\begin{aligned}
& -\nabla \cdot a(\tfrac{y}{\tan \varepsilon},\nabla v^I_n) =0 &\text{ in }& P_{n} ,\\
& v_n^I=\varphi_*(\xi, y \cdot \eta) &\text{ on }& \partial P_{n}.
\end{aligned}
\right.\end{equation}
The solution $v_n^I$ approximates $v_n$, asymptotically as $\varepsilon \to 0$, starting at an intermediate scale $1 \ll R \ll 1/\varepsilon$ away from $\partial P_n$. The argument is by direct comparison of $v_n$ with $v_\xi^s$ in their common domain.
Since the problem \eref{vI} has a boundary layer of size uniform in $\varepsilon$ we can replace it, again with small error, by a problem in a fixed domain
\begin{equation}\label{e.we}
\left\{\begin{aligned}
& -\nabla \cdot a(\tfrac{y}{\tan\varepsilon},\nabla w_{\xi,\eta}^\varepsilon) =0 &\text{ in }& P_{\xi} ,\\
& w_{\xi,\eta}^\varepsilon=\varphi_*(\xi, y \cdot \eta) &\text{ on }& \partial P_{\xi}.
\end{aligned}
\right.\end{equation}
We remark that for both \eref{vI} and \eref{we} we have not proven the existence of a boundary layer limit, rather we use Lemma~\ref{lem: exp tail nonlinear variant}. For convenience we will state estimates on $\lim_{R\to\infty} v_n^I(Rn)$ or on $\lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)$, but technically we will mean that the estimate holds for every sub-sequential limit.
\begin{prop}\label{prop: int scale}
Let $\xi \in \mathbb{Z}^d \setminus \{0\}$ and $n = (\cos\varepsilon)\hat\xi - (\sin\varepsilon)\eta$ with $\varepsilon >0$ small and a unit vector $\eta \perp \xi$.
\begin{enumerate}[$(i)$]
\item \textup{(Nonlinear equations)} There is universal $\alpha \in (0,1)$ such that
\[ |\varphi_*(n) - \lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)| \lesssim \|\nabla \varphi\|_\infty |\xi|^\alpha \varepsilon ^\alpha,\]
where we mean that the estimate holds for any sub-sequential limit of $w_{\xi,\eta}^\varepsilon(R\hat\xi)$ as $R \to \infty$.
\item \textup{(Linear systems)} For every $\alpha \in (0,1)$ and any $\nu>0$
\[ |\varphi_*(n) - \lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)| \lesssim_{\alpha,\nu} [\varphi]_{C^{1,\nu}} |\xi|^\alpha \varepsilon ^\alpha,\]
where again we mean that the estimate holds for any sub-sequential limit of $w_{\xi,\eta}^\varepsilon(R\hat\xi)$ as $R \to \infty$.
\end{enumerate}
\end{prop}
The first step is to compare the boundary layer limits of \eref{vn} and \eref{vI}.
\begin{lem}\label{lem: vn vI}
Fix any $x\in \partial P_{n }$, $1 \leq R \leq 1/\varepsilon$ and let $s=x\cdot\eta\tan \varepsilon$.
\begin{enumerate}[$(i)$]
\item \textup{(Nonlinear equations)} There is universal $\alpha \in (0,1)$ such that
\[|v_n-v_\xi^{s}|(x+Rn)\lesssim \| \nabla \varphi\|_\infty(R\varepsilon)^\alpha.\]
\item \textup{(Linear systems)} For every $\alpha \in (0,1)$
\[ |v_n-v_\xi^{s}|(x+Rn)\lesssim_\alpha\| \nabla \varphi\|_{\infty}(R\varepsilon)^\alpha. \]
\end{enumerate}
\end{lem}
Before we go to the proof let us derive some consequences of the Lemma. Let us assume that $\|\nabla \varphi\|_\infty \leq 1$ to simplify the exposition; the general inequalities can of course be derived by rescaling. Combining Lemma~\ref{lem: exp tail nonlinear} with Lemma~\ref{lem: vn vI} we find that for any $ R \geq 1$,
\[|v_n(x+Rn)-\varphi_*(\xi, x \cdot \eta \tan \varepsilon)|\lesssim \left[(R\varepsilon)^\alpha+e^{-cR/|\xi|}\right] \ \hbox{ for } \ x \in \partial P_n.\]
Choosing $R = |\xi| \log \frac{1}{\varepsilon}$ we obtain,
\[ |v_n(x+Rn)-\varphi_*(\xi, x \cdot \eta \tan \varepsilon)|\lesssim |\xi|^\alpha \varepsilon^\alpha \ \hbox{ for } \ x \in \partial P_n,\]
either for a slightly smaller universal $\alpha$ in the nonlinear case, or again for every $\alpha \in (0,1)$ in the case of linear systems.
Now consider the rescaling
\[ \tilde{v}_n^I(y) = v_n(Rn + \tfrac{y}{\tan \varepsilon}) \ \hbox{ defined for } \ y \in P_n\]
which solves
\begin{equation}\label{e.vI2}
\left\{\begin{aligned}
& -\nabla \cdot a(Rn+\tfrac{y}{\tan\varepsilon},\nabla \tilde{v}^I_n) =0 &\text{ in }& P_{n} ,\\
& |\tilde{v}_n^I - \varphi_*(\xi, y \cdot \eta)| \leq C |\xi|^\alpha \varepsilon^\alpha &\text{ on }& \partial P_{n}.
\end{aligned}
\right.\end{equation}
This is almost the same as equation \eref{vI} solved by $v_n^I$. First assume $Rn = 0 \bmod \mathbb{Z}^d$ to make things simple, then the $L^\infty$-estimate Lemma~\ref{lem: system max} (or the maximum principle) implies that
\begin{equation}\label{e.vvtildeest}
\sup_{P_{n}}|v_n^I - \tilde{v}_n^I| \lesssim |\xi|^\alpha \varepsilon^\alpha.
\end{equation}
To be precise we should consider instead $\tilde{v}^I_n(Rn - [Rn] + \frac{y}{\tan \varepsilon})$ where $[Rn]$ is the representative of $Rn\bmod\mathbb{Z}^d$ in $[0,1)^d$. Then we would instead have,
\[ |\tilde{v}_n^I(y) - \varphi_*(\xi, y' \cdot \eta)| \lesssim |\xi|^\alpha \varepsilon^\alpha \ \text{ on } \ \partial P_{n} \]
for some $|y' - y| \leq \sqrt{d} \tan \varepsilon$. Then applying the regularity of $\varphi_*$ from Lemma~\ref{lem: reg holder} we get the same estimate as before \eref{vvtildeest}.
\begin{proof}[Proof of Lemma~\ref{lem: vn vI}]
Let us call the cone domains,
\[K(x):=(P_\xi+x)\cap P_{n} \ \hbox{ and } \ K_R(x)=K(x)\cap B_R(x), \]
we may simply write $K,K_R$ if $x=0$.
Let $x_0 \in \partial P_n$; we compute, using $n \cdot x_0 = 0$ and $n = (\cos \varepsilon )\hat\xi - (\sin \varepsilon) \eta$, that
\[ x_0 \cdot \hat \xi = (x_0 \cdot \eta)\tan \varepsilon . \]
Let $x \in \partial K(x_0)$, then $x \in \partial P_n$ (or $x \in \partial P_\xi + x_0$) and there exists $y \in \partial P_\xi + x_0$ (or respectively $\partial P_n$) with,
\[ |x-y| \leq |x-x_0|\sin \varepsilon \leq \varepsilon |x-x_0|.\]
{\it Nonlinear equations:} Applying the De Giorgi boundary continuity estimates Lemma~\ref{lem: bdry cont nonlinear} for small enough $\alpha \in (0,1)$ universal, for all $x \in \partial K(x_0)$,
\[ |v_\xi^s(x) - v_n(x)| \leq |v_\xi^s(x) - \varphi(y)|+|\varphi(y) - v_n(x)| \lesssim \|\nabla \varphi\|_{\infty}\varepsilon^\alpha|x-x_0|^\alpha.\]
Now since $v_\xi^s(x) - v_n(x)$ is a difference of solutions we can apply the boundary continuity estimate from Lemma~\ref{lem: bdry cont nonlinear} again,
\[ |v_\xi^s(x) - v_n(x)| \lesssim \|\nabla \varphi\|_{\infty}\varepsilon^\alpha|x-x_0|^\alpha \ \hbox{ for } \ x \in K(x_0)\]
with perhaps a slightly smaller $\alpha(d,\lambda)$.
\medskip
\noindent {\it Linear systems:}
We have, by almost the same argument as above now using instead Lemma~\ref{lem: flat reg}, for any $\alpha \in (0,1)$
\[ |v_\xi^s(x) - v_n(x)| \lesssim \|\nabla \varphi\|_{\infty}\varepsilon^\alpha|x-x_0|^\alpha \ \hbox{ on } \ \partial K(x_0). \]
Now by the Poisson kernel bounds in $K(x_0)$, Lemma~\ref{lem: PK bounds 1} and Lemma~\ref{lem: PK bounds 2}, for a slightly smaller $\alpha$ and $\varepsilon$ sufficiently small depending on $\alpha$
\[ |v_\xi^s(x) - v_n(x)| \lesssim \|\nabla \varphi\|_{\infty} \varepsilon^\alpha |x-x_0|^\alpha \ \hbox{ for } \ x \in K(x_0). \]
The remainder of the proof is the same as the case of scalar equations.
\end{proof}
To complete the proof of Proposition~\ref{prop: int scale} we just need to compare the solutions of \eref{vI} and \eref{we}. The width of the boundary layer is now of uniform size in $\varepsilon$, so this is not a problem; we will just need the boundary continuity estimates (Lemmas \ref{lem: bdry cont linear} and \ref{lem: bdry cont nonlinear}) and the continuity estimate of $\varphi_*(\xi,\cdot)$, Lemma~\ref{lem: reg holder}.
\begin{lem}\label{lem: vI we}
The following estimates hold for the boundary layers of $v_n^I$ and $w^\varepsilon_{\xi,\eta}$.
\begin{enumerate}[$(i)$]
\item \textup{(Nonlinear equations)} There is $\alpha \in (0,1)$ universal such that
\[ |\lim_{R \to \infty} v_n^I(Rn) - \lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)| \lesssim \|\nabla \varphi\|_\infty |\xi|^\alpha \varepsilon^\alpha ,\]
where technically we mean that the estimate holds for any pair of sub-sequential limits.
\item \textup{(Linear systems)} For every $\alpha \in (0,1)$ and any $\nu>0$
\[ |\lim_{R \to \infty} v_n^I(Rn) - \lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)| \lesssim_{\alpha,\nu} [\varphi]_{C^{1,\nu}} |\xi|^\alpha \varepsilon^\alpha ,\]
where technically we mean that the estimate holds for any pair of sub-sequential limits.
\end{enumerate}
\end{lem}
\begin{proof}
We compare the two solutions in their common domain. Call, as before, $K = P_n \cap P_\xi$ and,
\[ u = v_n^I - w_{\xi,\eta}^\varepsilon.\]
\medskip
\noindent {\it Nonlinear equations:} We have that
\[ - \nabla \cdot( A(x) \nabla u )= 0 \ \hbox{ in } \ K \ \hbox{ with some $\lambda \leq A(x) \leq 1$ as in \eref{diffeqn}}.\]
We compute the error on $\partial K$ in the same way that we did in Lemma~\ref{lem: vn vI}. Using Lemma~\ref{lem: bdry cont nonlinear} we find for $x \in \partial K$,
\[ |u(x)| = |v_n^I(x) - w_{\xi,\eta}^\varepsilon(x)| \lesssim \|\varphi_*(\xi,\cdot)\|_{C^{\alpha'}}\varepsilon^{\alpha}|x|^\alpha \lesssim \|\nabla \varphi\|_\infty \varepsilon^\alpha |x|^\alpha,\]
where $\alpha'$ is the universal continuity modulus from Lemma~\ref{lem: reg holder} and $\alpha < \alpha'$. Next we use the De Giorgi boundary continuity estimate, Lemma~\ref{lem: bdry cont nonlinear}, to obtain, again with a slightly smaller $\alpha$,
\begin{equation}\label{e.bc we}
|u(x)|\lesssim \|\nabla \varphi\|_\infty \varepsilon^\alpha |x|^\alpha \ \hbox{ for } \ x \in K.
\end{equation}
Next we use that the size of the boundary layers for $v_n^I$ and $w_{\xi,\eta}^\varepsilon$ is uniformly bounded in $\varepsilon$, via Lemma~\ref{lem: exp tail nonlinear variant}, to find for all $R_0 \geq 1$,
\[ \sup_{y \in \partial P_n}|v_n^I(y+R_0n) - \lim_{R \to \infty} v_n^I(Rn)| \lesssim \|\varphi_*(\xi,\cdot)\|_{C^{\alpha'}} \varepsilon^\alpha + (\osc \varphi_*) e^{-R_0/|\xi|},\]
where again we mean that the estimate holds for any sub-sequential limit of $v_n^I(Rn)$. An analogous estimate holds for $w_{\xi,\eta}^\varepsilon$ replacing $R n$ with $R\hat\xi$. Using our assumption that $\varepsilon \leq \pi /6$ we have $n \cdot \hat \xi = \cos\varepsilon \geq 1/\sqrt{2}$ and so we have,
\begin{align}\label{e.bdry layer n xi}
\max \{ |v_n^I(R_0\hat\xi) - \lim_{R \to \infty} v_n^I(Rn)|,&|w_{\xi,\eta}^\varepsilon(R_0\hat\xi) - \lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)|\} \lesssim \\
&\|\varphi_*(\xi,\cdot)\|_{C^{\alpha'}} \varepsilon^\alpha + (\osc \varphi_*) e^{-R_0/|\xi|}. \notag
\end{align}
Finally we combine \eref{bc we} with \eref{bdry layer n xi}, choosing $R_0 = |\xi| \log \frac{1}{|\xi|\varepsilon}$, to find,
\begin{align*}
|\lim_{R \to \infty} v_n^I(Rn) - \lim_{R \to \infty} w_{\xi,\eta}^\varepsilon(R\hat\xi)| &\leq |v_n^I(R_0\hat\xi) - w_{\xi,\eta}^\varepsilon(R_0\hat\xi)| + C\|\nabla \varphi\|_\infty |\xi|^\alpha\varepsilon^\alpha\\
&\lesssim \|\nabla \varphi\|_\infty \varepsilon^\alpha R_0^\alpha \\
& \lesssim \|\nabla \varphi\|_\infty |\xi|^\alpha \varepsilon^\alpha (\log\tfrac{1}{|\xi|\varepsilon})^\alpha.
\end{align*}
Making $\alpha$ slightly smaller we can remove the logarithmic term.
\medskip
\noindent {\it Linear systems:} We have that
\[ - \nabla \cdot( A(x) \nabla u )= 0 \ \hbox{ in } \ K.\]
Using Lemma~\ref{lem: flat reg} we find, for $x \in \partial K$ and any $\nu >0$,
\[ |u(x)| = |v_n^I(x) - w_{\xi,\eta}^\varepsilon(x)| \lesssim_\alpha \|\nabla \varphi_*(\xi,\cdot)\|_{\infty}\varepsilon^\alpha |x|^\alpha \lesssim_\nu \|\nabla \varphi\|_{C^{0,\nu}} \varepsilon^\alpha |x|^\alpha.\]
By the Poisson kernel bounds in $K$, Lemma~\ref{lem: PK bounds 1} and Lemma~\ref{lem: PK bounds 2}, we have for a slightly smaller $\alpha \in (0,1)$ and $\varepsilon$ sufficiently small depending on $\alpha$
\[ |u(x)| \lesssim_\alpha [\varphi]_{C^{1,\nu}}\varepsilon^\alpha |x|^\alpha \ \hbox{ for } \ x \in K. \]
The remainder of the proof is the same as the case of scalar equations.
\end{proof}
\subsection{Interior homogenization of the intermediate scale problem}\label{subsect homo inter scale} We take $\varepsilon\rightarrow 0$ in \eref{we} and derive the second cell problem,
\begin{equation}\label{cell2e}
\left\{\begin{aligned}
& -\nabla \cdot a( \tfrac{x}{\tan\varepsilon},\nabla w_{\xi,\eta}^\varepsilon)=0 &\text{ in }& P_{\xi},\\
& w_{\xi,\eta}^\varepsilon(x)=\varphi_{*}(\xi,x\cdot\eta) &\text{ on }& \partial P_{\xi}
\end{aligned}
\right.
\end{equation}
which homogenizes to
\begin{equation}\label{cell2}
\left\{\begin{aligned}
& -\nabla \cdot a^0( \nabla w_{\xi,\eta})=0 &\text{ in }& P_{\xi},\\
& w_{\xi,\eta}(x)=\varphi_{*}(\xi,x\cdot\eta) &\text{ on }& \partial P_{\xi} ,
\end{aligned}
\right.
\end{equation}
where $a^0$ is the homogenized operator associated with $a(\tfrac{x}{\varepsilon},\cdot)$.
We make the definition
\[L(\xi,\eta)=\lim_{R\rightarrow \infty} w_{\xi,\eta}(x+R\xi).\]
As we will show below $L(\xi,\cdot)$ is the limiting $0$-homogeneous profile of $\varphi_*$ at the direction $\xi$,
\[ \lim_{n \to \hat\xi} \varphi_*(n) = L(\xi,\eta) \ \hbox{ for irrational directions } \ n \to \hat \xi \ \hbox{ with } \ \frac{ \hat\xi - n}{|\hat \xi - n|} \to \eta.\]
This characterization is the first main result of the paper Theorem~\ref{main1}.
We make a further remark about the second cell problem in \eref{cell2}. It is straightforward to see that $w_{\xi,\eta}$ is actually a function of the two variables $x \cdot \xi$ and $x \cdot \eta$ only. The boundary data $\varphi_*(\xi,x \cdot \eta)$ is invariant with respect to translations which are perpendicular to both $\xi$ and $\eta$, and so by uniqueness the solution $w_{\xi,\eta}$ is invariant in those directions as well. Note that we are using the spatial homogeneity of the operator here; the same is not true of $w^\varepsilon_{\xi,\eta}$. This property was useful in \cite{Feldman:2015aa}, since solutions of nonlinear non-divergence form elliptic problems in dimension $d=2$ have better regularity properties. Although we do not use this in a significant way here, we point it out since it could potentially be useful in the future.
Now we state and prove the quantitative version of Theorem~\ref{main1}:
\begin{thm}\label{mainineq}
Let $\xi \in \mathbb{Z}^d \setminus \{0\}$ be irreducible and let $n = (\cos \varepsilon ) \hat \xi - (\sin \varepsilon ) \eta$ be an irrational direction. Then,
\begin{enumerate}[$(i)$]
\item (Nonlinear equations) There is a universal $\alpha \in (0,1)$ such that
\[|\varphi_*(n)-L(\xi,\eta)|\lesssim \| \nabla \varphi\|_{\infty}|\xi|^\alpha\varepsilon^\alpha. \]
\item (Linear systems) For every $\alpha \in (0,1)$
\[ |\varphi_*(n)-L(\xi,\eta)|\lesssim_\alpha [ \varphi]_{C^5}|\xi|^\alpha\varepsilon^\alpha. \]
\end{enumerate}
\end{thm}
\begin{proof}
The ingredients have all been established elsewhere, we just need to combine them. By Proposition~\ref{prop: half-space hom}, homogenization of problems in half-space type domains, for nonlinear equations,
\[ \sup_{P_\xi} |w_{\xi,\eta} - w^\varepsilon_{\xi,\eta}| \lesssim [\varphi _*]_{C^\beta} \varepsilon^\alpha \lesssim \| \nabla \varphi\|_{\infty} \varepsilon^\alpha \ \hbox{ for some universal $\beta,\alpha \in (0,1)$,} \]
or in the linear systems case,
\[ \sup_{P_\xi} |w_{\xi,\eta} - w^\varepsilon_{\xi,\eta}| \lesssim_\alpha [ \varphi_*]_{C^4} \varepsilon^\alpha \lesssim_\alpha [\varphi]_{C^{5}} \log^4(1+|\xi|)\, \varepsilon^\alpha \ \hbox{ for every $\alpha \in (0,1)$.} \]
We have used Lemmas~\ref{lem: reg holder} and \ref{lem: reg smooth} to obtain the necessary regularity estimates of $\varphi_*(\xi,\cdot)$. The factors of $\log(1+|\xi|)$ in Lemma~\ref{lem: reg smooth} can be absorbed by making $\alpha$ slightly smaller. Combining these bounds with Proposition~\ref{prop: int scale}, which compares $\varphi_*(n)$ with the boundary layer limit of $w^\varepsilon_{\xi,\eta}$, yields the stated estimates.
\end{proof}
\subsection{Proofs of regularity estimates of \texorpdfstring{$\varphi_*$}{Lg}} We return to prove the regularity estimates of $\varphi_*$, Lemma~\ref{lem: reg holder} and Lemma~\ref{lem: reg smooth}. The H\"{o}lder regularity Lemma~\ref{lem: reg holder} is relatively straightforward, while the higher regularity Lemma~\ref{lem: reg smooth} requires some more careful estimates.
\begin{proof}[Proof of Lemma~\ref{lem: reg holder}]
We will show an upper bound for $|\varphi_*(\xi,h) - \varphi_*(\xi,0)|$ with $h<0$; the proof works also for nonzero $s$ and $h \in \mathbb{R}$. Consider $v_\xi^0$ a solution in $P_\xi$ and $v_\xi^h$ a solution in $P_\xi + h \hat \xi \supset P_\xi$. By the boundary continuity estimates for $v_\xi^h$, for every $y \in \partial P_\xi$,
\[ |v_\xi^h(y) - v_\xi^0(y)| = |v_\xi^h(y) - \varphi(y)| \leq |v_\xi^h(y) - \varphi(y - h \hat \xi)| + \| \nabla \varphi\|_\infty |h| \leq C\| \nabla\varphi\|_\infty |h|^\alpha,\]
for some $\alpha \in (0,1)$ by Lemma~\ref{lem: bdry cont nonlinear}. For the case of linear systems we have similarly,
\[ |v_\xi^h(y) - v_\xi^0(y)| = |v_\xi^h(y) - \varphi(y)| \leq C [ \varphi]_{C^{1,\nu}} |h| \]
for any $\nu>0$ by the boundary gradient estimates for smooth coefficient linear systems. Then the maximum principle, or respectively the $L^\infty$ estimate for systems Lemma~\ref{lem: system max}, implies the same bound holds in all of $P_\xi$ and therefore also for the boundary layer limits.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: reg smooth}] In order to get estimates on higher derivatives of $v^s_\xi$ in $s$ the method of Lemma~\ref{lem: reg holder} does not work; we need to differentiate the equation. Since we consider only one normal direction $\xi \in \mathbb{Z}^d \setminus \{0\}$ we drop the dependence of $v^s = v^s_\xi$ on $\xi$. We denote derivatives with respect to $s$ by $\partial$ and then,
\begin{equation}\label{eqn: s deriv}
\left\{
\begin{array}{ll}
- \nabla \cdot (A(x + s \hat \xi) \nabla \partial^k v^s) = \nabla \cdot f & \hbox{ in } \ P_\xi \vspace{1.5mm}\\
\partial^kv^s = ( \hat \xi \cdot \nabla)^k \varphi(x + s \hat \xi) & \hbox{ on } \ \partial P_\xi,
\end{array}
\right.
\end{equation}
where $f$ involves derivatives $\partial^jv$ for $j <k$,
\[ f = \sum_{j=0}^{k-1} { k \choose j}(\hat \xi \cdot \nabla)^{k-j}A(x + s \hat \xi) \nabla \partial^jv^s.\]
Let $p>d$ be arbitrary but fixed. We will suppose, inductively, that we can prove for any $R > 0$ and every $j <k$,
\[ \sup_{y \in \partial P_\xi, R' \geq R}\|\nabla \partial^j v^s\|_{L^p_{avg}(B_{R'/2}(y+R'\hat\xi))} \leq C_j [ \varphi ]_{C^{j+1,\nu}}\frac{1}{R}\log^j(1+|\xi|) e^{-c_jR/|\xi|} .\]
The constants depend on $j$, $[A]_{C^j}$ and universal parameters. The case $R \leq 1$ corresponds basically to an $L^\infty$ bound on $P_\xi$.
Then by Lemma~\ref{lem: rhs infty}
\begin{equation}\label{e.induct Linfty}
\|\partial^kv^s\|_{L^\infty(P_\xi)} \leq C\|( \hat \xi \cdot \nabla)^k \varphi\|_{\infty} + C\log^k(1+|\xi|)[\varphi]_{C^{k,\nu}}.
\end{equation}
Furthermore, by Lemma~\ref{lem: layer limit rhs}, $\partial^kv^s$ has a boundary layer limit $\mu_k = \frac{d^k}{ds^k}\varphi_*(\xi,s)$ with,
\[|\partial^kv^s - \mu_k| \leq C\log^k(1+|\xi|)[\varphi]_{C^{k,\nu}}e^{-cR/|\xi|}.\]
Now we aim to establish the inductive hypothesis; the following argument will also establish the base case $j=0$. First consider the case $R \leq 1$: this follows from \eref{induct Linfty} and the up-to-the-boundary gradient estimates (Lemma~\ref{lem: flat reg}),
\[ \|\nabla \partial^kv^s\|_{L^\infty(P_\xi)} \leq C\|( \hat \xi \cdot \nabla)^k \varphi\|_{C^{1,\nu}} +C \log^k(1+|\xi|)[\varphi]_{C^{k,\nu}} \leq C\log^k(1+|\xi|)[\varphi]_{C^{k+1,\nu}} \]
In the case $R \geq 1$, by the Avellaneda-Lin large scale interior $W^{1,p}$ estimates and the inductive hypothesis,
\begin{align*}
\|\nabla \partial^kv^s\|_{L^p_{avg}(B_{R/2}(y+R\hat\xi))} &\leq C\frac{1}{R} \osc_{B_{3R/4}(y+R\hat\xi)}\partial^kv^s + \|f\|_{L^p_{avg}(B_{3R/4}(y+R\hat\xi))} \\
&\leq C\frac{1}{R}\log^k(1+|\xi|)[\varphi]_{C^{k,\nu}}e^{-cR/|\xi|}.
\end{align*}
Combining the cases $R\leq 1$ and $R \geq 1$ establishes the inductive hypothesis for $j=k$. The bound on $\|\partial^kv^s\|_{L^\infty}$ and hence on the boundary layer limit $\mu_k$, which is also a consequence of the induction, is the desired result.
\end{proof}
\section{Continuity Estimate for Homogenized Boundary Data Associated with Linear Systems}\label{sec: continuity}
In this section we use the limiting structure at rational directions established above to prove that the homogenized boundary condition associated with a linear system is continuous. We recall the second cell problem: let $\xi \in \mathbb{Z}^d \setminus \{0\}$ be a rational direction and suppose that we have a sequence of directions $n_k \to \hat\xi$ such that,
\[ \frac{\hat\xi-n_k}{|\hat \xi-n_k |} \to \eta \ \hbox{ a unit vector with } \ \eta \perp \xi.\]
Then the limit of $\varphi_*(n_k)$ is determined by the following second cell problem,
\begin{equation}
\left\{
\begin{array}{ll}
- \nabla \cdot (A^0\nabla w_{\xi,\eta}) = 0 & \hbox{ in } P_\xi \vspace{1.5mm}\\
w_{\xi,\eta} = \varphi_*(\xi,x \cdot \eta) & \hbox{ on } \partial P_\xi,
\end{array}
\right. \ \hbox{ then } \ \lim_{k \to \infty} \varphi_*(n_k) = \lim_{R \to \infty} w_{\xi,\eta}(R\xi).
\end{equation}
Here $A^0$, constant, is the homogenized matrix associated with $A(\frac{\cdot}{\varepsilon})$, and $\varphi_*(\xi,\cdot)$, defined in \eref{phi*}, is a $1/|\xi|$ periodic function on $\mathbb{R}$ (see Lemma 2.9 in \cite{Feldman:2015aa}, where the period of $\varphi_*$ is explained).
First we state the qualitative result, identifying the limit and showing continuity at rational directions. Continuity of $\varphi_*$ at the irrational directions has been established, for example in Prange \cite{Prange:2013aa}. Combining those results shows that $\varphi_*$ extends to a continuous function on $S^{d-1}$.
\begin{lem}\label{lem rational dir limit}
Let $\xi \in \mathbb{Z}^d \setminus \{0\}$ then for any sequence $n_k \to \hat \xi$,
\[ \lim_{k \to \infty} \varphi_*(n_k) = |\xi| \int_0^{1/|\xi|} \varphi_*(\xi,t) \ dt.\]
\end{lem}
From this we know that $L(\xi,\eta)$, defined in Section \ref{subsect homo inter scale}, is independent of $\eta$ in the linear case, and we will simply write $L(\xi)=L(\xi,\eta)$.
\begin{proof}
By rotation and rescaling we can reduce to proving that the boundary layer limit associated with the half-space problem,
\begin{equation}
\left\{
\begin{array}{ll}
- \nabla \cdot (A^0\nabla v) = 0 & \hbox{ in } \mathbb{R}^d_+ \vspace{1.5mm}\\
v = g(x_1,\dots,x_{d-1}) & \hbox{ on } \partial \mathbb{R}^d_+,
\end{array}
\right.
\end{equation}
where $A^0$ is constant and uniformly elliptic and $g : \mathbb{R}^{d-1} \to \mathbb{R}^N$ is a $\mathbb{Z}^{d-1}$-periodic continuous function, is,
\[ \lim_{R \to \infty} v(Re_d) = \int_{[0,1]^{d-1}} g(x) \ dx.\]
Consider the (linear) map $T : C(\mathbb{T}^{d-1}) \to \mathbb{R}^N$ mapping $g \mapsto \lim_{R \to \infty} v(Re_d)$. The $L^\infty$ estimates Lemma~\ref{lem: system max} imply that $T$ is continuous. Since $A^0$ is constant translating $g$ parallel to $\partial \mathbb{R}^d_+$ just translates the solution $v$ and so we also get translation invariance, for any $y \in \mathbb{T}^{d-1}$,
\[ T g(\cdot - y) = Tg.\]
The Riesz representation theorem implies that $T g = \int_{\mathbb{T}^{d-1}} g(x) \ d\mu(x)$ for some (vector-valued) measure $\mu$, then by the translation invariance, uniqueness of Haar measure, and that $T 1 = 1$ we obtain the result.
\end{proof}
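For the special case of the Laplacian ($A^0 = I$, $N=1$) the conclusion of Lemma~\ref{lem rational dir limit} can be seen directly from the Fourier expansion of $g$: the bounded harmonic extension is $v(x',t) = \sum_k \hat g(k) e^{-2\pi |k| t} e^{2\pi i k \cdot x'}$, so that $v(Re_d) \to \hat g(0)$, the mean of $g$. The following numerical sketch (an illustration only, for this special case; the grid size and sample data are arbitrary choices) confirms the convergence:
\begin{verbatim}
import numpy as np

# Boundary layer limit for the Laplacian in a half space with periodic
# Dirichlet data g, illustrated for d = 3 (so x' lives on the 2-torus).
# The bounded harmonic extension is
#   v(x', t) = sum_k ghat(k) exp(-2 pi |k| t) exp(2 pi i k.x'),
# so v(R e_d) -> ghat(0) = mean of g as R -> infinity.
M = 64
x = np.arange(M) / M
X, Y = np.meshgrid(x, x, indexing='ij')
g = 1/3 + np.cos(2*np.pi*X) * np.sin(4*np.pi*Y) + 0.2*np.sin(2*np.pi*Y)

ghat = np.fft.fft2(g) / M**2            # Fourier coefficients ghat(k)
k = np.fft.fftfreq(M, d=1.0/M)          # integer frequencies
KX, KY = np.meshgrid(k, k, indexing='ij')
absk = np.sqrt(KX**2 + KY**2)

for R in [0.5, 1.0, 2.0, 4.0]:
    vR = np.real(np.sum(ghat * np.exp(-2*np.pi*absk*R)))  # v at x'=0, height R
    print(R, vR)
print("mean of g:", g.mean())           # the boundary layer limit
\end{verbatim}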
The next result is quantitative; the argument uses the Dirichlet approximation theorem as in \cite{Feldman:2015aa}.
\begin{thm}
Let $\varphi_*(\cdot)$ be the boundary layer limit associated with \eref{cellmain}, defined for $n \in S^{d-1} \setminus \mathbb{R} \mathbb{Z}^d$. Then for every $\alpha < 1/d$ and all $n_1,n_2 \in S^{d-1} \setminus \mathbb{R} \mathbb{Z}^d$,
\[ |\varphi_*(n_1) - \varphi_*(n_2)| \lesssim_\alpha \|\varphi\|_{C^5}|n_1 - n_2|^{\alpha}.\]
\end{thm}
\begin{proof}
Let $n_1,n_2$ be any pair of irrational unit vectors and call $\varepsilon=|n_1-n_2|$. Let $M=\varepsilon^{-\frac{s}{s+1}}$ with $s=d-1$. By Dirichlet's Approximation Theorem (Lemma 2.11 in \cite{Feldman:2015aa}), there exist $\xi \in \mathbb{Z}^{d} \setminus \{0\}$ and $k\in \mathbb{Z}$ with $1\leq k \leq M$ such that
\[|n_1-k^{-1}\xi |\leq Ck^{-1}M^{-{1/s}}.\]
Also
\[|n_2-k^{-1}\xi |\leq \varepsilon+Ck^{-1}M^{-{1/s}}.\]
Note $|\xi |\lesssim k$, so
\[|\xi|\left(\varepsilon+Ck^{-1}M^{-{1/s}}\right) \lesssim \varepsilon^\frac{1}{s+1}.\]
Applying Theorem \ref{mainineq}, for any $0<\alpha <1$ we have,
\begin{align*}
|\varphi_*(n_1)-\varphi_*(n_2)|&\leq |\varphi_*(n_1)-L(\xi)|+|L(\xi)-\varphi_*(n_2)| \\
&\lesssim \varepsilon^\frac{\alpha}{1+s} = \varepsilon^\frac{\alpha}{d}.
\end{align*}
\end{proof}
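To illustrate the approximation step concretely, here is a small brute-force search; the helper \texttt{dirichlet\_approx} is hypothetical and purely expository (the lemma guarantees that a good witness exists, the search merely finds the best one):
\begin{verbatim}
import numpy as np

# Given a unit vector n in R^d and M >= 1, search over 1 <= k <= M for
# the lattice point xi in Z^d minimizing |n - xi/k|; Dirichlet's theorem
# guarantees some k with error of order k^{-1} M^{-1/s}, s = d-1.
def dirichlet_approx(n, M):
    n = np.asarray(n, dtype=float)
    best = None
    for k in range(1, int(M) + 1):
        xi = np.round(k * n)                 # nearest lattice point to k*n
        err = np.linalg.norm(n - xi / k)
        if best is None or err < best[2]:
            best = (k, xi.astype(int), err)
    return best

d, s = 3, 2
n = np.array([1.0, np.sqrt(2.0), np.pi])     # an irrational direction
n /= np.linalg.norm(n)
for M in [10, 100, 1000]:
    k, xi, err = dirichlet_approx(n, M)
    print(M, k, xi, err, "vs k^-1 M^-1/s =", 1.0 / (k * M**(1.0/s)))
\end{verbatim}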
\section{A Nonlinear Equation with Discontinuous Homogenized Boundary Data}\label{sec: discontinuity}
In this final section we study the second cell problem \eref{cell2} for nonlinear equations. We give an example of a nonlinear divergence form equation, with smooth boundary condition, for which the boundary layer limit of \eref{cell2} depends on the approach direction $\eta$.
We consider the nonlinear operator
\[a(p_1,p_2,p_3)=\left(p_1,p_2,p_3+f(p_1,p_3)\right)^t\]
where
\[
f(p_1,p_3):=\frac{1}{8}\left( \sqrt{8p_1^2+9p_3^2}+p_3\right).
\]
Here $f$ is a solution of
\[
8f^2-2p_3 f-(p_1^2+p_3^2)=0.\]
It is easy to check that $f$ is positively $1$-homogeneous and Lipschitz, and that the resulting operator $a$ is uniformly elliptic.
We will take $\xi = e_3$ and $\eta= e_1$ or $e_2$ and we will call $(x_1,x_2,x_3) = (x,y,z)$. For the boundary condition we choose,
\[ \varphi(y) = \frac{1}{3}+ \cos(y \cdot \xi) \ \hbox{ so that } \ \varphi_*(\xi,s)=\frac{1}{3}+\cos(s).\]
It is worthwhile to note that an arbitrary profile $\varphi_*(\xi,s)$ can be achieved by choosing $\varphi(y) = \varphi_*(\xi,y \cdot \xi)$. We aim to compute $L(\xi,\eta)$.
If $\eta=e_1$ then \eref{cell2} becomes (note that since $a$ does not depend on $x$, the homogenized operator is simply $a^0 = a$)
\begin{equation}\label{counter1}
\left\{
\begin{array}{ll}
-\nabla\cdot \left(u_x,u_y,u_z+f(u_x,u_z)\right)=0 & \hbox{ in } \mathbb{R}^3_+, \vspace{1.5mm}\\
u(x,y,0) =\frac{1}{3}+\cos x & \hbox{ on } \partial \mathbb{R}^3_+.
\end{array}\right.
\end{equation}
The operator and boundary data were chosen to make the solution
\[u(x,y,z)=(\frac{1}{3}+\cos x)e^{-z}.\]
Note that
\[f(u_x,u_z)=\frac{1}{3}e^{-z}\]
and so
\[(u_x,u_y,u_z+f(u_x,u_z))=(-\sin x\; e^{-z},\;0,\;-\cos x\; e^{-z})\]
from which it is easy to verify that $u$ solves \eqref{counter1}. The boundary layer limit in this case is $0$ and so, by its definition, $L(\xi,e_1) = 0$.
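The verification can also be done symbolically; the following sketch (using \texttt{sympy}) checks $\sqrt{8u_x^2 + 9u_z^2} = (3+\cos x)e^{-z}$ by squaring (both sides are positive), hence $f(u_x,u_z) = \frac{1}{3}e^{-z}$, and then that the flux is divergence free:
\begin{verbatim}
import sympy as sp

# Symbolic check that u = (1/3 + cos x) e^{-z} solves the equation.
x, y, z = sp.symbols('x y z', real=True)
u = (sp.Rational(1, 3) + sp.cos(x)) * sp.exp(-z)
ux, uz = sp.diff(u, x), sp.diff(u, z)

# sqrt(8 ux^2 + 9 uz^2) = (3 + cos x) e^{-z}, checked by squaring:
print(sp.simplify(8*ux**2 + 9*uz**2 - ((3 + sp.cos(x))*sp.exp(-z))**2))  # 0

f = ((3 + sp.cos(x))*sp.exp(-z) + uz) / 8
print(sp.simplify(f - sp.exp(-z)/3))                                     # 0

flux = [ux, sp.diff(u, y), uz + f]
div = sp.diff(flux[0], x) + sp.diff(flux[1], y) + sp.diff(flux[2], z)
print(sp.simplify(div))                                                  # 0
\end{verbatim}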
If $\eta = e_2$ then the equation becomes
\begin{equation}
\left\{
\begin{array}{ll}
-\nabla\cdot \left(u_x,u_y,u_z+f(u_x,u_z)\right)=0 & \hbox{ in } \mathbb{R}^3_+, \vspace{1.5mm}\\
u(x,y,0) =\frac{1}{3}+\cos y & \hbox{ on } \partial \mathbb{R}^3_+.
\end{array}\right.
\end{equation}
This reduces to the following two-dimensional problem for $v(y,z) = u(x,y,z)$
\begin{equation}\label{counter2}
\left\{
\begin{array}{ll}
-\nabla\cdot \left(v_y,\frac{9}{8}v_z+\frac{3}{8}|v_z|\right)=0 & \hbox{ in } \mathbb{R}^2_+ \vspace{1.5mm}\\
v(y,0) =\frac{1}{3}+\cos y & \hbox{ on } \partial \mathbb{R}^2_+.
\end{array}\right.
\end{equation}
Let $v$ be the solution of \eqref{counter2}. Consider $w(y,z):=\left(\frac{1}{3}+\cos y\right)e^{-z}$, the analogue of the solution found in the previous case; we compute
\begin{align}
-\nabla\cdot \left(w_y,\tfrac{9}{8}w_z+\tfrac{3}{8}|w_z|\right) &=\left[\tfrac{1}{4}(\cos y - 1){\bf 1}_{\{\cos y \geq -\frac{1}{3}\}} - \tfrac{1}{2}(1+\cos y){\bf 1}_{\{ \cos y < -\frac{1}{3}\}}\right]e^{-z} \notag\\
\label{subsolnw} &\leq 0.
\end{align}
Thus $w$ is a subsolution of \eqref{counter2}; by Lemma~\ref{lem: hs comparison nonlinear} we have $w \leq v$.
The operator $(v_y,\frac{9}{8}v_z+\frac{3}{8}|v_z|)$ is uniformly elliptic and Lipschitz continuous. We use a strong maximum principle of Serrin~\cite{serrin1970strong} (see Theorem $1'$ there): in any bounded domain, we either have $w\equiv v$ or $w<v$. Since the inequality in \eqref{subsolnw} is strict except when $y = 0$ or $y = \pi \bmod 2\pi$, we must have $w < v$. Since both $w$ and $v$ are $2\pi$-periodic in the $y$ direction, restricting to the set $z=1$ gives $w(y,1)\leq v(y,1)-\delta$ for some $\delta>0$. Then by comparing $w$ and $v-\delta$ on $z\geq 1$, again using Lemma~\ref{lem: hs comparison nonlinear}, we deduce that $w\leq v-\delta$ there; in particular
\[\lim_{z\to\infty}v\geq \lim_{z\to\infty}w+\delta=\delta.\]
Thus $L(\xi,e_2) \geq \delta > 0 = L(\xi,e_1)$, and so the limiting profile $\varphi_*(n)$ is discontinuous at the direction $e_3$.
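As a numerical sanity check on the computation \eqref{subsolnw}, the following sketch verifies the piecewise identity and the sign condition; all $z$-dependence enters through the factor $e^{-z}$, so we may set $z=0$:
\begin{verbatim}
import numpy as np

# Check of the identity for w = (1/3 + cos y) e^{-z} at z = 0:
#   -div( w_y, (9/8) w_z + (3/8)|w_z| )
#     = (1/4)(cos y - 1)   on {cos y >= -1/3},
#       -(1/2)(1 + cos y)  on {cos y <  -1/3}.
ys = np.linspace(0.0, 2.0*np.pi, 2001)
c = np.cos(ys)
# w_yy = -cos y,  d/dz[(9/8) w_z] = (9/8)(1/3 + cos y),
# d/dz[(3/8)|w_z|] = -(3/8)|1/3 + cos y|
lhs = -(-c + (9/8)*(1/3 + c) - (3/8)*np.abs(1/3 + c))
rhs = np.where(c >= -1/3, (1/4)*(c - 1), -(1/2)*(1 + c))
print(np.abs(lhs - rhs).max())   # ~ 0 (machine precision)
print(lhs.max())                 # <= 0, vanishing only at cos y = 1, -1
\end{verbatim}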
A domain $U$ in $\mathbb{C}$ is {\it simply connected} if any curve in $U$ can be homotopically deformed to a point. Equivalently, $U$ is simply connected if $U^c$ is connected, where the complement is taken in the Riemann sphere, $\hat{\mathbb{C}}=\mathbb{C} \cup \{\infty\}$, which is the one-point compactification of $\mathbb{C}$. Such domains are of paramount importance in complex analysis, due in large part to the Riemann Mapping Theorem, which states that for any simply connected domain $U \subsetneq \mathbb{C}$ and point $z \in U$ there is a conformal map $f$ from $\mathbb{D}$ onto $U$ with $f(0)=z$. The {\it Schlicht class} ${\cal S}$ is the set of all conformal functions $f$ on $\mathbb{D}$ normalized so that $f(0)=0, f'(0)=1$. If a domain $U$ is equal to $f(\mathbb{D})$ for some $f\in {\cal S}$, we will call $U$ a {\it Schlicht domain}. Clearly, if $U$ is any simply connected domain smaller than $\mathbb{C}$ itself and $z \in U$, we may find a linear map sending $z$ to 0 and $U$ to a Schlicht domain. In this way the study of ${\cal S}$ is no less general than the study of arbitrary simply connected domains. In a number of different ways, the identity function $I(z)=z$ with image $I(\mathbb{D})=\mathbb{D}$ is considered to be the smallest in ${\cal S}$, and the Koebe function $K(z) = \frac{z}{(1-z)^2}$ with image $K(\mathbb{D})=\mathbb{C} \backslash (-\infty, -1/4]$ is considered to be the largest. We may view this intuitively as follows. The normalization creates an equivalence between the notion of ``size'' and the notion of ``closeness of $0$ to the boundary''. The disc is the only domain in which $0$ is equally close to every boundary point. In that sense, $0$ is closer to the boundary in the unit disc than in any other Schlicht domain. On the other hand, the smallest nonempty complement of a simply connected domain seems likely to be a half-line $(-\infty,0]$, and a point $x$ on $(0,\infty)$ should be farther from the boundary than $e^{i \theta} x$, $\theta \in (0,\pi)$, for instance. Arguing in this manner indicates that $K(\mathbb{D}) = \mathbb{C} \backslash (-\infty, -1/4]$ should be the Schlicht domain for which $0$ is farthest from the boundary. This intuition may be useful as far as it goes, but a far more sophisticated statement is the following celebrated theorem of de Branges, which was originally conjectured by Bieberbach.
\begin{theorem} \label{deb}
If $f \in {\cal S}$ with $f(z) = z + \sum_{n=2}^{\infty} a_n z^n$, then $|a_n| \leq n$ for all $n \geq 2$. If for any such $n$ we have $|a_n|=n$, then $f$ is of the form $e^{-i \alpha} K(e^{i \alpha}z)$ for real $\alpha$.
\end{theorem}
Thus, out of all functions in ${\cal S}$, the Koebe function and its rotations have the largest derivatives at $0$. The highly nontrivial proof of Theorem \ref{deb}, which follows decades of work by analysts, can be found in \cite{deb}.
\vspace{12pt}
These concepts can be connected to probabilistic considerations in the following way. The expected amount of time that it takes a Brownian motion starting at a point $z$ to leave a domain $U$ gives some sort of measure of the size of $U$ and the distance between $z$ and $U^c$. This was perhaps first noted by Davis in \cite{davis}. In that paper the question was raised of finding a way in which Brownian motion identifies the Koebe domain as the largest Schlicht domain and the unit disk as the smallest. Initially we may hope that the expected amount of time that it takes for a Brownian motion starting at $0$ to leave a Schlicht domain is bounded above and below by the corresponding times for the Koebe domain and unit disc, respectively. This turns out to be correct for the unit disc, and we may say more. In \cite{mac}, McConnell proved the following.
\begin{theorem} \label{}
Let $\Phi$ be a nonnegative, nondecreasing convex or concave function, where if $\Phi$ is concave then we also require that $\Phi(e^{2x})$ is convex. Then, for any $f \in {\cal S}$ we have
\begin{equation} \label{}
E_0[\Phi(\tau(\mathbb{D}))] \leq E_0[\Phi(\tau(f(\mathbb{D})))]
\end{equation}
In particular, $E_0[\tau(\mathbb{D})^p] \leq E_0[\tau(f(\mathbb{D}))^p]$ for $0<p<\infty$.
\end{theorem}
We will prove the following addition to this result.
\begin{theorem} \label{add}
Let $\Phi$ be a nonnegative, strictly increasing convex or strictly concave function, where if $\Phi$ is strictly concave then we also require that $\Phi(e^{2x})$ is convex. Assume further that $E_0[\Phi(\tau(\mathbb{D}))]<\infty$. Then, for any $f \in {\cal S}, f \neq I$ we have
\begin{equation} \label{}
E_0[\Phi(\tau(\mathbb{D}))] < E_0[\Phi(\tau(f(\mathbb{D})))]
\end{equation}
In particular, $E_0[\tau(\mathbb{D})^p] < E_0[\tau(f(\mathbb{D}))^p]$ for $0<p<\infty$.
\end{theorem}
Such a statement for $K(\mathbb{D})$ is harder to formulate and prove. To begin with, many of the moments of $\tau(K(\mathbb{D}),0)$ are infinite. It is a consequence of \cite[Thm. 4.1]{burk} that $E[\tau(K(\mathbb{D}),0)^p]<\infty$ if and only if $p<1/4$, and it seems likely that $E[\tau(K(\mathbb{D}),0)^p] > E[\tau(f(\mathbb{D}),0)^p]$ for all $p<1/4$ and $f \in {\cal S} \backslash {\cal K}$, where we let ${\cal K}$ denote the set consisting of the Koebe function and its rotations, that is, ${\cal K} = \{e^{-i \alpha} K(e^{i \alpha}z) : \alpha \in {\mathbb R}\}$. Unfortunately, this statement, if true, remains unproved. We consider a different approach to the problem for $K(\mathbb{D})$. We will use $f(r\mathbb{D})$ as a bounded approximation to $f(\mathbb{D})$ for all $f \in {\cal S}$. This is justified by the fact that $f(r\mathbb{D}) \longrightarrow f(\mathbb{D})$ in the sense of Carath\'eodory as $r \longrightarrow 1$ (see \cite[Ch. 3]{dur}), and the approximation is in some sense uniform between domains. We will then show that Brownian motion leaves $K(r\mathbb{D})$ more slowly than $f(r\mathbb{D})$, where $f$ is any function in ${\cal S} \backslash {\cal K}$.
\begin{theorem} \label{greg}
If $f \in {\cal S}$ and $r \leq 1$, then
\begin{equation} \label{}
E_{0}[\tau(I(r\mathbb{D}))] \leq E_{0}[\tau(f(r\mathbb{D}))] \leq E_{0}[\tau(K(r\mathbb{D}))]
\end{equation}
If $E_{0}[\tau(f(r\mathbb{D}))] = E_{0}[\tau(I(r\mathbb{D}))]$ for some $r \leq 1$ then $f(z)=I(z)=z$, and if $E_{0}[\tau(f(r\mathbb{D}))] = E_{0}[\tau(K(r\mathbb{D}))]$ for some $r<1$ then $f\in {\cal K}$.
\end{theorem}
In this sense, Brownian motion sees $K(r\mathbb{D})$ as larger than $f(r\mathbb{D})$ for any $f \in {\cal S} \backslash {\cal K}$. We will also show
\begin{theorem} \label{kris}
If $f \in {\cal S}, f \notin {\cal K}$, then $E_{0}[\tau(K(r\mathbb{D}))] - E_{0}[\tau(f(r\mathbb{D}))]$ is an increasing function of $r$ which is positive for $r>0$.
\end{theorem}
We may intuitively rephrase this result as "Brownian motion would take longer to exit $K(\mathbb{D})$ than any other Schlicht domain were it not that many of the expected exit times are infinite".
\section{Proofs of Theorems \ref{add}, \ref{greg}, and \ref{kris}}
Any analytic function can be expressed locally as a Taylor series (see \cite{rud}, \cite{ahl}, or \cite{schaum}). The Parseval Identity for analytic functions (see \cite[Thm. 10.22]{rud}) states that
\begin{equation} \label{par}
\frac{1}{2\pi} \int_{0}^{2\pi}|g(s e^{i \theta})|^2 d\theta = \sum_{n=0}^{\infty} |b_n|^2 s^{2n} ,
\end{equation}
for any $g(z) = \sum_{n=0}^{\infty} b_n z^n$ analytic on an open set containing the closed disc $\overline{s\mathbb{D}}$. This equality will be the key for the proof of all three theorems.
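As a quick numerical illustration of \rrr{par}, the following sketch compares the two sides for $g(z) = e^z$, so $b_n = 1/n!$ (the value of $s$ and the truncation are arbitrary choices):
\begin{verbatim}
import numpy as np
from math import factorial

# Parseval for g(z) = e^z: the circle average of |g|^2 at radius s
# should equal sum_n s^{2n} / (n!)^2.
s = 0.7
t = np.linspace(0.0, 2.0*np.pi, 20000, endpoint=False)
lhs = np.mean(np.abs(np.exp(s * np.exp(1j*t)))**2)
rhs = sum(s**(2*k) / factorial(k)**2 for k in range(40))
print(lhs, rhs)   # the two values agree
\end{verbatim}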
\subsection*{Proof of Theorem \ref{add}}
Assume first that $\Phi$ is convex. We will follow the method given in \cite{mac}. Assume $f \in {\cal S}, f \neq I$. $f(B_t)$ is a time-changed Brownian motion (see \cite{durBM} or \cite{revyor}); that is, there is a complex Brownian motion $\hat{B}_t$ such that $f(B_t) = \hat{B}_{\sigma_t}$, where $\sigma_t = \int_{0}^{t}|f'(B_s)|^2 ds$. It follows that
\begin{equation} \label{}
E_{0}[\tau(f(\mathbb{D}))] = E \int_{0}^{\tau(\mathbb{D},0)} |f'(B_s)|^2 ds
\end{equation}
The process $e^{i\theta}B_s$ is again a planar Brownian motion for real $\theta$. Let $f(z)$ have the Taylor expansion $\sum_{n=1}^\infty a_n z^n$. The assumptions on $f$ imply $a_1=1, a_m \neq 0$ for some $m\geq 2$. We therefore have by Jensen's inequality
\begin{equation} \label{}
\begin{split}
E_0[\Phi(\tau(f(\mathbb{D})))] &= \frac{1}{2\pi} \int_{0}^{2\pi} E \Phi \Big( \int_0^{\tau(\mathbb{D},0)} |f'(e^{i\theta}B_s)|^2 ds\Big)d\theta \\
& \geq E \Phi \Big( \int_0^{\tau(\mathbb{D},0)}\frac{1}{2\pi} \int_{0}^{2\pi} |f'(e^{i\theta}B_s)|^2 d\theta ds \Big) \\
& = E \Phi \Big( \int_0^{\tau(\mathbb{D},0)} \sum_{n=1}^{\infty} n^2 |a_n|^2 |B_s|^{2n-2} ds\Big) \\
& > E \Phi \Big( \int_0^{\tau(\mathbb{D},0)} |a_1|^2 ds \Big) \\
& = E \Phi \Big( \int_0^{\tau(\mathbb{D},0)} \! \! \! \! \! ds \Big) = E \Phi(\tau(\mathbb{D},0)) = E_0[\Phi(\tau(\mathbb{D}))]
\end{split}
\end{equation}
where Parseval's Identity \rrr{par} was applied to $f'(z) = \sum_{n=1}^\infty n a_n z^{n-1}$.
\vspace{12pt}
Suppose now that $\Phi$ is strictly concave and $\Phi(e^{2x})$ is convex. Examining the proof corresponding to this statement in \cite{mac}, we see that the following consequence of Jensen's inequality for concave functions was used.
\begin{equation} \label{}
\Phi \Big( \int_{0}^{\tau} |f'(e^{i \theta}B_s)|^2 ds \Big) \geq \int_{0}^{\tau} \Phi(\tau |f'(e^{i \theta}B_s)|^2) \frac{ds}{\tau}
\end{equation}
However, with strictly concave $\Phi$, Jensen's inequality is easily seen to be strict when $\tau|f'(e^{i \theta}B_s)|^2$ is nonconstant and the integrals in question are finite. We see that if $E_0[\Phi(\tau(f(\mathbb{D})))]< \infty$ then McConnell's proof in \cite{mac} is sufficient to prove our result. Clearly, if $E_0[\Phi(\tau(f(\mathbb{D})))]= \infty$ then the result holds as well.
\vspace{12pt}
It remains only to show that $E[\tau(\mathbb{D})^p] < \infty$ for all $p>0$. There must surely be many proofs of this fact, but we will be content to state it as an immediate consequence of \cite[Thm. 4.1]{burk}. {\hfill $\Box$ \bigskip}
\subsection*{Proof of Theorems \ref{greg} and \ref{kris}}
Both theorems are a direct consequence of Lemma \ref{bigguy} below. Due to the simplicity of the proof, it seems almost certain that this lemma has been noted before. Nevertheless, a literature search has failed to yield a suitable reference. We therefore include a proof.
\begin{lemma} \label{bigguy}
Suppose $f(z) = \sum_{n=0}^{\infty} a_n z^n$ is conformal on $\mathbb{D}$. Then, for $0<r \leq 1$ we have
\begin{equation} \label{re}
E_{f(0)}[\tau(f(r\mathbb{D}))] = \frac{1}{2}\sum_{n=1}^{\infty} |a_n|^2 r^{2n}
\end{equation}
In particular,
\begin{equation} \label{re2}
E_{f(0)}[\tau(f(\mathbb{D}))] = \frac{1}{2}\sum_{n=1}^{\infty} |a_n|^2
\end{equation}
\end{lemma}
{\bf Proof of lemma:} Assume first that $r<1$. As before, due to the fact that $f(B_t)$ is a time-changed Brownian motion we have
\begin{equation} \label{}
E_{f(0)}[\tau(f(r\mathbb{D}))] = E \int_{0}^{\tau(r\mathbb{D},0)} |f'(B_s)|^2 ds
\end{equation}
Set $\sigma_{t} = \int_{0}^{t} |f'(B_s)|^2 ds$. Applying the Optional Stopping Theorem to the martingale $|f(B_t)|^2 - 2 \sigma_t$, which starts at $|f(0)|^2 = |a_0|^2$, gives
\begin{equation} \label{}
\begin{split}
2 E_{f(0)}[\tau(f(r\mathbb{D}))] & = 2 E[\sigma_{\tau(r\mathbb{D},0)}] \\
& = E[|f(B_{\tau(r\mathbb{D},0)})|^2] - |a_0|^2 \\ &= \frac{1}{2\pi} \int_{0}^{2\pi} |f(re^{i\theta})|^2 d\theta - |a_0|^2 \\ &= \sum_{n=1}^{\infty} |a_n|^2r^{2n}
\end{split}
\end{equation}
In order to set $r=1$ to obtain the final statement, we note first that if $\sum_{n=1}^{\infty} |a_n|^2 = \infty$ then $E_{f(0)}[\tau(f(r\mathbb{D}))] \longrightarrow \infty$ as $r \longrightarrow 1$, so that \rrr{re2} holds. On the other hand, if $\sum_{n=1}^{\infty} |a_n|^2 < \infty$ then it is known that $f$ can be extended to a radial limit function $f(e^{i\theta})$ on $\partial \mathbb{D}$ for a.e. $\theta$ (see \cite[Thm. 17.10]{rud} or \cite[Sec. 6.5]{durBM}) such that $\lim_{r \nearrow 1} \int_{0}^{2\pi} |f(re^{i\theta})-f(e^{i\theta})|^2d\theta = 0$ and $\sum_{n=0}^{\infty} |a_n|^2 = \frac{1}{2\pi}\int_{0}^{2\pi} |f(e^{i\theta})|^2 d\theta$. The result follows. An alternate proof of the lemma can be given using the Green's function $G_r(z,0) = \frac{1}{\pi} \log\frac{r}{|z|}$, which satisfies the following ``expected occupation time'' formula (see \cite[Sec.'s 1.8,1.9]{durBM})
\begin{equation} \label{}
E\int_{0}^{\tau(r\mathbb{D},0)} g(B_t)dt = \int_{r\mathbb{D}}g(z) G_r(z,0) dA(z)
\end{equation}
for any measurable, nonnegative function $g$. We have for $r \leq 1$
\begin{equation} \label{}
\begin{split}
E \int_{0}^{\tau(r\mathbb{D},0)} |f'(B_s)|^2 ds & = \int_{r\mathbb{D}} |f'(z)|^2 G_r(z,0)dA(z) \\
& = \frac{1}{\pi}\int_{0}^{r} s \log \frac{r}{s} \int_{0}^{2\pi} |f'(se^{i\theta})|^2 d\theta ds \\
& = 2 \int_{0}^{r} s \log \frac{r}{s} \sum_{n=1}^{\infty} n^2 |a_n|^2 s^{2n-2} ds \\
& = 2 \sum_{n=1}^{\infty} n^2 |a_n|^2 \int_{0}^{r} s^{2n-1} \log\frac{r}{s} ds \\
& = 2 \sum_{n=1}^{\infty} n^2 |a_n|^2 r^{2n} \int_{0}^{1} s^{2n-1} \log \frac{1}{s} ds \\
& = 2 \sum_{n=1}^{\infty} n^2 |a_n|^2 r^{2n} (\frac{1}{4n^2}) = \frac{1}{2}\sum_{n=1}^{\infty} |a_n|^2 r^{2n}
\end{split}
\end{equation} {\hfill $\Box$ \bigskip}
Both theorems now follow. Theorem \ref{kris} and the upper bound in Theorem \ref{greg} are immediate from de Branges' Theorem because the coefficients of the Taylor series of any $f \in {\cal S}$ are dominated by those of $K(z)$. The lower bound in Theorem \ref{greg} is also clear since $I(z)$ is the only function in ${\cal S}$ with $a_n = 0$ for $n \geq 2$.
\vspace{12pt}
{\bf Remarks:} The expression \rrr{re2} is half of the square of the $H^2$ Hardy norm of $f$, written $||f||^2_{H^2}$, when $f(0)=0$; see \cite{rud} for more details. In \cite{burk}, the $H^p$ norm of $f$ is used to study the finiteness of the $p/2$ moment of $\tau(f(\mathbb{D}),f(0))$. The upper bound of Theorem \ref{greg} can also be taken to be the special case $p=2$ of the following result of Baernstein in \cite{burn}.
\begin{theorem} \label{just}
If $f\in {\cal S}, f \notin {\cal K}$, $r \in (0,1)$ then
\begin{equation} \label{}
\int_{0}^{2\pi} |f(re^{i\theta})|^p d\theta < \int_{0}^{2\pi} |K(re^{i\theta})|^p d\theta
\end{equation}
for any $p \in (0,\infty)$.
\end{theorem}
The proof of Theorem \ref{just} given in \cite{burn}, however, is quite involved, so it seemed as well to give a simple proof of the $p=2$ case based on de Branges' Theorem (which was not available when \cite{burn} was published), as we have done. We remark that the fact that Baernstein's result can be related to the exit time of Brownian motion was first noted by Betsakos in \cite{bets2}. In light of the connection given by Burkholder in \cite{burk} between Hardy norms of analytic functions and moments of the exit times of Brownian motion, it is tempting to conjecture that the following is a consequence of Theorem \ref{just}.
\begin{conjecture} \label{}
If $f\in {\cal S}, f \notin {\cal K}$, $r \in (0,1)$, then
\begin{equation} \label{}
E_{0}[\tau(f(r\mathbb{D}))^p] < E_{0}[\tau(K(r\mathbb{D}))^p]
\end{equation}
for any $p \in (0,\infty)$. The same holds when $r=1$ and $p \in (0,1/4)$.
\end{conjecture}
\vspace{12pt}
Unfortunately, the inequalities in \cite{burk} do not seem to be sufficient to prove this result.
\section{Further consequences of Lemma \ref{bigguy}} \label{cons}
Lemma \ref{bigguy}, if noticed before, does not seem to have been used in calculating the exit times of domains. In this section we give a few examples where it can be applied with relative ease. We also point out how the lemma can be used in a different application, namely, the summation of certain series. That is, Lemma \ref{bigguy} expresses the expected exit time as a sum. If we have a different way of evaluating the expected exit time, then we have obtained a value for the sum. The following derivation of a result due to Euler is perhaps the simplest example of this idea; see Example \ref{square} below for a more complex instance.
\begin{prop} \label{}
\begin{equation} \label{all}
\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}
\end{equation}
\end{prop}
{\bf Proof:} We will prove the corresponding statement obtained by summing only the odd terms, namely
\begin{equation} \label{odd}
\sum_{n=1}^\infty \frac{1}{(2n-1)^2} = \frac{\pi^2}{8}
\end{equation}
This is equivalent to \rrr{all}, as
\begin{equation} \label{}
\sum_{n=1}^\infty \frac{1}{(2n-1)^2} = \sum_{n=1}^\infty \frac{1}{n^2} - \sum_{n=1}^\infty \frac{1}{(2n)^2} = (1-\frac{1}{4})\sum_{n=1}^\infty \frac{1}{n^2} = \frac{3}{4}\sum_{n=1}^\infty \frac{1}{n^2}
\end{equation}
Let $W_t$ be a one-dimensional Brownian motion with $W_0=0$ a.s., and let $T= \inf \{t>0 : |W_t| = \frac{\pi}{4}\}$. We will calculate $E[T]$ in two different ways. The first way is quite standard (see, for example, \cite[Ex. 7.5]{fima}). We apply the Optional Stopping Theorem to the martingale $W_t^2 - t$ to obtain
\begin{equation} \label{true}
E[T] = E[W_T^2] = \frac{\pi^2}{16}
\end{equation}
We now calculate $E[T]$ using Lemma \ref{bigguy}. $W_t$ may be taken to be the real part of our two dimensional Brownian motion $B_t$, and it is therefore clear that $E[T] = E_0[\tau(U)]$, where $U = \{ \frac{-\pi}{4} < \mbox{Re } z < \frac{\pi}{4} \}$. We need then to find a conformal map $f(z)$ mapping $\mathbb{D}$ onto $U$ with $f(0) = 0$. The function
\begin{equation}
\tan z = \frac{\sin z}{\cos z} = -i \frac{e^{2iz} - 1}{e^{2iz} + 1}
\end{equation}
maps $U$ conformally to $\mathbb{D}$. This can be seen by noting that the function $x+iy \longrightarrow e^{2i(x+iy)} = e^{-2y + 2ix}$ maps $U$ conformally to $\{ \mbox{Re } z > 0\}$, and then that the M\"obius transformation $z \longrightarrow -i(\frac{z-1}{z+1})$ maps $\{ \mbox{Re } z > 0\}$ conformally to $\mathbb{D}$. We conclude that the principal branch of $\tan^{-1}z$ maps $\mathbb{D}$ conformally to $U$. $\tan^{-1}z$ admits the Taylor series expansion
\begin{equation} \label{}
\tan^{-1}z = z - \frac{z^3}{3} + \frac{z^5}{5} - \ldots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} z^{2n-1}}{2n-1}
\end{equation}
Thus, by Lemma \ref{bigguy},
\begin{equation} \label{doubletrue}
E[T] = \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{(2n-1)^2}
\end{equation}
Equating \rrr{true} and \rrr{doubletrue} yields \rrr{odd}. {\hfill $\Box$ \bigskip}
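Both evaluations of $E[T]$ are easy to check numerically; the following Monte Carlo sketch (crude and a bit slow, with a small bias from time discretization, and with arbitrary sample sizes) simulates the exit time and sums the series:
\begin{verbatim}
import numpy as np

# Exit time of 1d Brownian motion from (-pi/4, pi/4):
# the mean is pi^2/16 = (1/2) sum_{n>=1} 1/(2n-1)^2 ~ 0.61685.
rng = np.random.default_rng(1)
n_paths, dt = 5000, 1e-4
w = np.zeros(n_paths)
t = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    w[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    t[alive] += dt
    alive &= np.abs(w) < np.pi / 4
print(t.mean(), np.pi**2 / 16)
n = np.arange(1, 100001)
print(0.5 * (1.0 / (2*n - 1)**2).sum())
\end{verbatim}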
Here are some further examples.
\vspace{12pt}
\ccases{disc}: Consider the disc $r\mathbb{D}$ for any $r > 0$, and let $a \in r\mathbb{D}$. It may be checked that
\begin{equation} \label{}
f(z) = r \frac{z+\frac{a}{r}}{1 + \frac{\bar{a}}{r} z} = r \frac{r z+a}{r + \bar{a} z}
\end{equation}
is a conformal map sending $\mathbb{D}$ to $r\mathbb{D}$ and $0$ to $a$. To find the power series for $f$ we expand
\begin{equation} \label{}
\begin{split}
r \frac{z+\frac{a}{r}}{1 + \frac{\bar{a}}{r} z} & = r (z+\frac{a}{r})(1-\frac{\bar{a}z}{r}+ (\frac{\bar{a}z}{r})^2 - \ldots) \\
& = a + (r^2 - |a|^2) \sum_{n=1}^{\infty} \frac{(-1)^{n-1} \bar{a}^{n-1}z^n}{r^{n}}
\end{split}
\end{equation}
Lemma \ref{bigguy} now gives
\begin{equation*} \label{}
E_a[\tau(r\mathbb{D})] = \frac{1}{2}(r^2 - |a|^2)^2 \sum_{n=1}^{\infty} \frac{|a|^{2n-2}}{r^{2n}} = \frac{(r^2 - |a|^2)^2}{2r^2} \frac{1}{1-\frac{|a|^2}{r^2}} = \frac{1}{2} (r^2 - |a|^2)
\end{equation*}
This is well known; see for example \cite[Ex. 7.4.2]{ox}.
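A quick numerical check of the geometric series summation, with hypothetical sample values $r = 2$ and $a = 0.5 + 0.75i$:
\begin{verbatim}
import numpy as np

# Series vs closed form in the disc example.
r, a = 2.0, 0.5 + 0.75j
n = np.arange(1, 200)
series = 0.5 * (r**2 - abs(a)**2)**2 * np.sum(abs(a)**(2*n - 2) / r**(2*n))
print(series, (r**2 - abs(a)**2) / 2)   # both 1.59375
\end{verbatim}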
\vspace{12pt}
\ccases{hp}: The function $f(z)=\frac{z}{1-z} = z + z^2 + z^3 + \ldots$ maps $\mathbb{D}$ conformally to $\{\mbox{Re }z> -\frac{1}{2}\}$. Lemma \ref{bigguy} gives $E_0[\tau(\{\mbox{Re }z> -\frac{1}{2}\})] = \infty$, and it is easy to see that $E_z[\tau(U)] = \infty$ whenever $U$ is a half-plane and $z \in U$. Letting $W_t = \mbox{Re }B_t$ we recover a known result on one-dimensional Brownian motion: if $T=\inf\{t>0 : W_t = a\}$ with $a \neq 0$ then $E[T] = \infty$.
\vspace{12pt}
\ccases{cardiod}: Let $U$ be the cardioid with boundary defined by the polar equation $r = 2(1+\cos \theta)$.
\hspace{1.2in} \includegraphics[width=70mm,height=70mm]{cardioid.PDF}
This is the conformal image under $z^2$ of the disc $\{|z-1|<1\}$. The conformal map from $\mathbb{D}$ to $U$ mapping $0$ to $1$ is given by $f(z)=(z+1)^2 = 1+2z +z^2$. Applying Lemma \ref{bigguy} we get $E_1[\tau(U)] = \frac{5}{2}$.
\vspace{12pt}
\ccases{logimage}: Let $\gamma$ denote the curve in ${\mathbb R}^2$ defined by $e^x = 2 \cos y$ for $\frac{-\pi}{2} < y < \frac{\pi}{2}$.
\hspace{1.2in} \includegraphics[width=100mm,height=70mm]{catequalstrength.JPG}
This curve has been referred to as the ``catenary of equal resistance''. Let $U$ be the component of $\gamma^c$ which contains 0. Then $U$ is the conformal image of $\mathbb{D}$ under the map
\begin{equation}
f(z) = \log(1+z) = z -\frac{z^2}{2} + \frac{z^3}{3} - \frac{z^4}{4} + \ldots
\end{equation}
Applying Lemma \ref{bigguy} we obtain $E_0[\tau(U)] = \frac{1}{2}\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{12}$.
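Both of the last two examples reduce to a one-line computation once the Taylor coefficients are known; the sketch below (Python with NumPy assumed; illustrative only) evaluates $\frac{1}{2}\sum_{n\geq 1}|a_n|^2$ for the cardioid and the catenary domain:
\begin{verbatim}
# E_{f(0)}[tau(f(D))] = (1/2) sum_{n>=1} |a_n|^2.
import numpy as np

def exit_time(coeffs):
    return 0.5 * np.sum(np.abs(np.asarray(coeffs))**2)

print(exit_time([2, 1]))               # cardioid: 5/2
n = np.arange(1, 10**6)
print(exit_time((-1.0)**(n + 1) / n))  # catenary: ~ pi^2/12
print(np.pi**2 / 12)
\end{verbatim}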
\vspace{12pt}
\ccases{wedge}: Let ${\mathbb H} = \{ \mbox{Re }z > 0\}$ be the right half-plane\footnotemark
\footnotetext{This is at odds with standard practice in complex analysis, where usually ${\mathbb H} = \{ \mbox{Im }z > 0\}$ is the upper half-plane. It is convenient for our purposes, however.}
, and let ${\mathbb H}^p = \{|Arg(z)| < \frac{\pi p}{2}\}$ for $p \leq 1$, where $Arg(z)$ is the principal branch of the argument function taking values in $(-\pi,\pi]$. ${\mathbb H}^p$ is the infinite wedge of width $\pi p$ centered at the positive real axis, and ${\mathbb H}^1 = {\mathbb H}$. It was shown in \cite{spitz}, among other things, that $E_1[\tau({\mathbb H}^p)]< \infty$ if $p < \frac{1}{2}$, and $E_1[\tau({\mathbb H}^p)]= \infty$ if $p \geq \frac{1}{2}$. We will derive this result and find bounds for $E_1[\tau({\mathbb H}^p)]$ using Lemma \ref{bigguy}. The conformal map from $\mathbb{D}$ to ${\mathbb H}^p$ is $g(z) = \frac{(1+z)^p}{(1-z)^p}$, but the Taylor series expansion for $g$ appears to be unwieldy for arbitrary $p$, so we will simplify somewhat. Define the principal branch of $z^p$ on ${\mathbb H}$, where $Arg(z^p) = p Arg(z)$. Then the inverse $z^{1/p}$ is well defined on ${\mathbb H}^p$. Let $\tilde{{\mathbb H}}^p = \{Re(z^{1/p})>1/2\}$. $\tilde{{\mathbb H}}^p$ is then the image of $\{\mbox{Re }z > \frac{1}{2}\}$ under $z^p$. The relationship between ${\mathbb H}^p$ and $\tilde{{\mathbb H}}^p$ is shown below.
\hspace{1.2in} \includegraphics[width=70mm,height=50mm]{Hp2.JPG}
It is clear that $\tilde{{\mathbb H}}^p \subseteq {\mathbb H}^p$, and it is also easy to check that ${\mathbb H}^p \subseteq \tilde{{\mathbb H}}^p_s := \{z : \frac{2^p-1}{2^p} z+\frac{1}{2^p} \in \tilde{{\mathbb H}}^p\}$. The conformal map from $\mathbb{D}$ to $\tilde{{\mathbb H}}^p$ is given by $f(z) = \frac{1}{(1-z)^p}$, and the conformal map from $\mathbb{D}$ to $\tilde{{\mathbb H}}^p_s$ is given by $\frac{2^p}{2^p-1}(f(z) - 1/2^p)$. We have
\begin{equation} \label{}
\frac{d^n}{dz^n} \frac{1}{(1-z)^p} \Big| _{z=0} = p(p+1) \ldots (p+n-1)
\end{equation}
The expression $(p)_n := p(p+1) \ldots (p+n-1)$ with $(p)_0 = 1$ is known as the {\it Pochhammer symbol}, and we see that $f(z) = \sum_{n=0}^\infty \frac{(p)_n z^n}{n!}$. Applying Lemma \ref{bigguy} we obtain
\begin{equation} \label{}
E_1[\tau(\tilde{\mathbb H}^p)] = \frac{1}{2}\sum_{n=1}^{\infty} \frac{(p)_n^2}{(n!)^2}
\end{equation}
This series can be evaluated explicitly. Using the notation in \cite{luke} it is given by $_2 F_1(p,p;1;1)-1$, where $_2 F_1$ refers to the hypergeometric function. Applying Euler's integral formula \cite[Sec. 3.6]{luke} and using the standard definitions for the $\beta$ and $\Gamma$ functions, we calculate
\begin{equation} \label{}
E_1[\tau(\tilde{\mathbb H}^p)] = \frac{1}{2}(_2F_1(p,p;1;1)-1) = \frac{1}{2}\Big(\frac{\Gamma(1)\beta(p,1-2p)}{\Gamma(1-p)\Gamma(p)} - 1 \Big)
\end{equation}
Similarly, we have
\begin{equation} \label{}
E_1[\tau(\tilde{\mathbb H}^p_s)] = \frac{2^{2p}}{2(2^p-1)^2} \Big(\frac{\Gamma(1)\beta(p,1-2p)}{\Gamma(1-p)\Gamma(p)} - 1 \Big)
\end{equation}
Since exit times are monotone with respect to domain inclusion, we obtain
\begin{equation} \label{}
\frac{1}{2}\Big(\frac{\Gamma(1)\beta(p,1-2p)}{\Gamma(1-p)\Gamma(p)} - 1 \Big) < E_1[\tau({\mathbb H}^p)] < \frac{2^{2p}}{2(2^p-1)^2} \Big(\frac{\Gamma(1)\beta(p,1-2p)}{\Gamma(1-p)\Gamma(p)} - 1 \Big)
\end{equation}
This is finite for $p<\frac{1}{2}$, but infinite for $p \geq \frac{1}{2}$, as the integral defining $\beta(p,1-2p)$ diverges at $p=\frac{1}{2}$. This proves the given statement. Note that \cite{burk} also used the Hardy norm of $f$ in a similar manner to deduce a more general result.
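For a numerical illustration of these bounds (a sketch using only the Python standard library; not part of the original argument), one can evaluate the closed form via $\Gamma$ and compare it with a truncation of $\frac{1}{2}\sum_n (p)_n^2/(n!)^2$; the series converges slowly as $p$ approaches $\frac{1}{2}$:
\begin{verbatim}
# Bounds on E_1[tau(H^p)] at p = 1/4 via the Gamma closed form,
# checked against a truncation of (1/2) sum (p)_n^2 / (n!)^2.
from math import gamma

p = 0.25
beta_val = gamma(p) * gamma(1 - 2 * p) / gamma(1 - p)
core = beta_val / (gamma(1 - p) * gamma(p)) - 1  # 2F1(p,p;1;1) - 1
lower = 0.5 * core                               # exit time of H~^p
upper = 2**(2 * p) / (2 * (2**p - 1)**2) * core  # exit time of H~^p_s

s, term = 0.0, 1.0                               # term = (p)_n / n!
for n in range(1, 200000):
    term *= (p + n - 1) / n
    s += term * term
print(lower, 0.5 * s, upper)   # lower and 0.5*s nearly agree
\end{verbatim}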
\vspace{12pt}
\ccases{square}: Let $m$ be an integer greater than 2 and set $\omega = e^{\frac{2 \pi i}{m}}$. Let $U_m$ be the regular $m$-gon with vertices at $1, \omega, \omega^2, \ldots , \omega^{m-1}$. We will calculate $E_0[\tau(U_m)]$. Consider the Schwarz-Christoffel mapping given by
\begin{equation} \label{}
g(z) = \int_0^z \frac{d\zeta}{(1-\zeta)^{2/m}(1-\omega \zeta)^{2/m} \ldots (1-\omega^{m-1} \zeta)^{2/m}} = \int_0^z \frac{d\zeta}{(1-\zeta^m)^{2/m}}
\end{equation}
$g(z)$ is a conformal mapping of the unit disc onto an $m$-gon with all angles $\pi(m-2)/m$ (see \cite{tref} for details), and symmetry arguments show that the image is in fact a regular $m$-gon with vertices $W,\omega W,\ldots, \omega^{m-1}W$ for some $W>0$. The points $1,\omega,\ldots,\omega^{m-1}$ map to these vertices, and we can calculate
\begin{equation} \label{}
\begin{split}
W & = g(1) = \int_0^1 \frac{dx}{(1-x^m)^{2/m}} = \frac{1}{m} \int_0^1 u^{-(m-1)/m}(1-u)^{-2/m}du \\
& = \frac{1}{m} \beta(1/m,(m-2)/m)
\end{split}
\end{equation}
Setting $f(z) = \frac{1}{W} g(z)$, we obtain our conformal map from $\mathbb{D}$ to $U_m$. Recall from Example \ref{wedge} that $\frac{1}{(1-z)^{2/m}} = \sum_{n=0}^\infty \frac{(2/m)_n z^n}{n!}$. We see that
\begin{equation} \label{}
\begin{split}
f(z) & = \frac{1}{W} \int_0^z \sum_{n=0}^\infty \frac{(2/m)_n \zeta^{mn}}{n!}\, d\zeta = \frac{1}{W} \sum_{n=0}^\infty \frac{(2/m)_n z^{mn+1}}{n!(mn+1)} \\
& = \frac{z}{W} \sum_{n=0}^\infty \frac{(1/m)_n(2/m)_n z^{mn}}{((m+1)/m)_nn!} = \frac{z}{W} {}_2F_1(1/m,2/m;(m+1)/m;z^m)
\end{split}
\end{equation}
where we have used the identity $(1/m)_n (mn+1) = ((m+1)/m)_n$. Lemma \ref{bigguy} gives
\begin{equation} \label{rte}
\begin{split}
E_0[\tau(U_m)] & = \frac{1}{2W^2}\sum_{n=0}^\infty \frac{(2/m)^2_n (1/m)^2_n }{(n!)^2((m+1)/m)_n^2} \\
& = \frac{1}{2W^2} {}_4 F_3(1/m,1/m,2/m,2/m;(m+1)/m,(m+1)/m,1;1) \\
& = {}_4 F_3(1/m,1/m,2/m,2/m;(m+1)/m,(m+1)/m,1;1) \\
& \hspace{1.3in} \times \frac{m^2}{2\beta(1/m,(m-2)/m)^2}
\end{split}
\end{equation}
In the cases $m=3,4$ we may compare this with known results. For the equilateral triangle, applying \cite[Thm. 1]{ala} gives $E_0[\tau(U_3)] = 1/6$. Computer approximation shows agreement with \rrr{rte} evaluated at $m=3$. We arrive at the following value for the hypergeometric sum.
\begin{equation} \label{}
{}_4 F_3(1/3,1/3,2/3,2/3;4/3,4/3,1;1) = \frac{\beta(1/3,1/3)^2}{27}
\end{equation}
For the case $m=4$, the square, \rrr{rte} gives
\begin{equation} \label{run}
E_0[\tau(U_4)] = {}_4F_3(1/4,1/4,1/2,1/2;5/4,5/4,1;1) \times \frac{8}{\beta(1/4,1/2)^2} \approx .294685
\end{equation}
This agrees with the approximation given in \cite[Tab. 10]{helm}\footnotemark \footnotetext{The value in \cite[Tab. 10]{helm} must be doubled, as the calculations there were for a square with unit side length rather than our normalization.}. This approximation was based on an explicit expression obtained from \cite{knight}\footnotemark \footnotetext{There is a misprint in \cite{knight}; the correct formula is given in \cite{helm}.}, and equating that expression with \rrr{run} we obtain the following strange identity.
\begin{equation} \label{}
\begin{split}
{}_4F_3(1/4,& 1/4,1/2,1/2;5/4,5/4,1;1) \times \frac{1}{\beta(1/4,1/2)^2} \\ & = \frac{8}{\pi^4} \sum_{n=1}^\infty \sum_{m=1}^{\infty} \frac{(-1)^{m+n}}{(2m-1)(2n-1)((2m-1)^2+(2n-1)^2)}
\end{split}
\end{equation}
The result \rrr{rte} for $m \geq 5$ may be new.
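A numerical check of \rrr{rte} (an illustration assuming the mpmath library; not part of the original text) reproduces $E_0[\tau(U_3)]=1/6$ and the value in \rrr{run}:
\begin{verbatim}
# Evaluate (rte) for m = 3, 4, 5 using mpmath's hypergeometric sum.
from mpmath import mp, hyper, beta, mpf

mp.dps = 20
for m in [3, 4, 5]:
    a = [mpf(1)/m, mpf(1)/m, mpf(2)/m, mpf(2)/m]
    b = [mpf(m + 1)/m, mpf(m + 1)/m, 1]
    F = hyper(a, b, 1)
    E = F * m**2 / (2 * beta(mpf(1)/m, mpf(m - 2)/m)**2)
    print(m, E)   # m=3: 0.1666..., m=4: 0.294685...
\end{verbatim}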
\section{Extension to arbitrary domains}
Although not the main focus of the paper, we would be remiss if we failed to observe that Lemma \ref{bigguy} can be extended to some analytic functions which are not conformal. Examining the proof of the lemma should reveal that the hypothesis of injectivity is not necessary for the statement to hold. Instead, the important property of conformal maps which was used is that $f(B_t)$ leaves $f(\mathbb{D})$ at time $\tau(\mathbb{D}, 0)$. This is not true of arbitrary analytic maps. For example, let $f$ map $\mathbb{D}$ conformally to the rectangle $V=\{-1 < \mbox{Re }z < 1, -2\pi < \mbox{Im }z < 2\pi\}$. Then $g=e^f$ maps $\mathbb{D}$ onto the annulus $\{\frac{1}{e} < |z| < e\}$. The boundary segments $\ell_1=\{-1 < \mbox{Re }z < 1,\mbox{Im }z = 2\pi\}$ and $\ell_2=\{-1 < \mbox{Re }z < 1,\mbox{Im }z = -2\pi\}$ are mapped by $e^z$ to the interior segment $\{\frac{1}{e} < \mbox{Re }z < e,\mbox{Im }z = 0\}$. We see that for any Brownian path $B_t(\omega)$ such that $B_{\tau(\mathbb{D},0)}(\omega) \in f^{-1}(\ell_1 \cup \ell_2)$ we have $g(B_{\tau(\mathbb{D},0)}(\omega)) \in \{\frac{1}{e} < \mbox{Re }z < e,\mbox{Im }z = 0\}$. Since $g(B_t)$ does not leave $g(\mathbb{D})$ with probability 1 at time $\tau(\mathbb{D}, 0)$, Lemma \ref{bigguy} fails to hold.
\vspace{12pt}
With this in mind, let us define an analytic function $f$ on $\mathbb{D}$ to be {\it B-proper} if a.s. $f(B_t)$ leaves every compact subset of $f(\mathbb{D})$ as $t$ increases to $\tau(\mathbb{D},0)$. Let a domain be called {\it B-proper} if it is the image of a B-proper map on $\mathbb{D}$. The reason for this terminology is that analytic functions with the property that $f(z_n)$ leaves every compact set for every sequence $\{z_n\}$ which approaches $\delta \mathbb{D}$ are commonly referred to as {\it proper}. It is easy to see that every conformal map is proper. It is also clear that every proper function is B-proper, but the converse is not true, as the example given below shows. With the same proof as Lemma \ref{bigguy} we have the following.
\begin{lemma} \label{bigguy2}
Suppose $f(z) = \sum_{n=0}^{\infty} a_n z^n$ is B-proper on $\mathbb{D}$. Then
\begin{equation} \label{re3}
E_{f(0)}[\tau(f(\mathbb{D}))] = \frac{1}{2}\sum_{n=1}^{\infty} |a_n|^2
\end{equation}
\end{lemma}
We now give an example showing that a function can fail to be proper but still be B-proper. Let $f(z) = e^{\tan^{-1}z}$. $f$ maps $\mathbb{D}$ analytically onto the annulus $\{e^{\frac{-\pi}{4}} < |z| < e^{\frac{\pi}{4}}\}$. If $z_n$ is a sequence in $\mathbb{D}$ approaching $i$ or $-i$ along the imaginary axis, $f(z_n)$ simply cycles around the circle $\{ |z|=1 \}$. $f$ is, however, B-proper, since $f$ extends continuously to map $\{z=e^{i\theta}; -\frac{\pi}{2}<\theta <\frac{\pi}{2}\}$ to $\{ |z| = e^{\frac{\pi}{4}}\}$ and $\{z=e^{i\theta}; \frac{\pi}{2}<\theta <\frac{3\pi}{2}\}$ to $\{ |z| = e^{\frac{-\pi}{4}}\}$. We see that $f(B_t)$ leaves $f(\mathbb{D})$ with probability 1 as $t$ increases to $\tau(\mathbb{D},0)$.
\section{Acknowledgements}
I would like to thank George Markowsky and Kais Hamza for helpful conversations. I am also grateful for support from Australian Research Council Grant DP0988483.
\def\noopsort#1{} \def\printfirst#1#2{#1} \def\singleletter#1{#1}
\def\switchargs#1#2{#2#1} \def\bibsameauth{\leavevmode\vrule height .1ex
depth 0pt width 2.3em\relax\,}
\makeatletter \renewcommand{\@biblabel}[1]{\hfill#1.}\makeatother
\bibliographystyle{alpha}
\section{Introduction}
The notion of quantum entanglement \cite{NC, BZ, WGE, PKP} refers to a shared existence of particles whose properties are interlinked with one another. An interesting manifestation of entanglement is that the correlation survives even when the particles move far apart after having come into contact. Different aspects of quantum entanglement have been studied and a substantial literature has accumulated on this subject (see, for instance, \cite{SS, Wan}). In the $PT$-symmetry context an early attempt was made in \cite{Wan}. In this article we push the issue a little further by addressing quantum entanglement in the framework of a complex extension of quantum mechanics, concentrating on the special class of complex parity ($P$)-time ($T$)-symmetric Hamiltonians.
\par
Almost two decades ago, Bender and Boettcher \cite{BB1} proposed a special class of non-Hermitian Hamiltonians, manifestly
$PT$-symmetric, that support a real bound-state spectrum. The interplay between the parametric regions where $PT$ is unbroken and
the ones in which it is broken, as signaled by the appearance of complex-conjugate eigenvalues, has also found experimental
support through the observations of a phase transition\footnote{The transition refers to the breaking of $PT$-symmetry when exceptional points appear.} that clearly marks out the separation of these regions (see, for example, \cite{Gan} and earlier references therein): in particular, balancing gain and loss of certain experimental properties has uncovered the relevance of $PT$-symmetric structures in such systems \cite{ZM1, ZM2, KM}.
\par
In standard quantum mechanics ($SQM$), a coherent description of physics is possible when Dirac's norm is employed \cite{Drc1}. Of course there could be other norms (see \cite{CMB} for their physical implications); however, these norms run into one problem or another. In particular, $PTQM$ systems are generally plagued with negative norms. The reason lies in the difference between the definition of the inner product in $SQM$ as introduced by Dirac, namely,
\begin{equation}
(f,g) \equiv \int_\Re dx[Tf(x)]g(x), \quad f,g \in L_2(\Re)
\end{equation}
where $Tf(x)=f^*(x)$, and that of $PTQM$ namely,
\begin{equation}
(f,g)_{PT} \equiv \int_\Re dx[PT f(x)]g(x), \quad f,g \in L_2(\Re)
\end{equation}
where one defines $PTf(x)=[f(-x)]^*$. The above definition of the $PT$-norm very often leads to an indefinite norm, implying that a $PT$-system lacks a viable probabilistic interpretation \cite{Bag1, Jap1}.
\par
However, an introduction of a linear operator $C$ to construct a $CPT$-inner product \cite{BB4} in the following sense
\begin{equation}
\label{eq:3}
(f,g)_{CPT} \equiv \int_\Re dx[CPT f(x)]g(x)
\end{equation}
with the positive-definiteness of the associated norm,
enables one to get rid of this handicap. Note that $C$ commutes with both
the Hamiltonian and the operator $PT$. Further, it squares to the identity and has eigenvalues $\pm 1$.
A $PT$-symmetric system is supposed to evolve in a manner such that the accompanying time evolution of the state vector
is unitary with respect to the $CPT$ inner product. For a plausible construction of the $C$-operator see \cite{Bag2}.
\par
We propose that the no-signaling principle holds for bipartite systems whenever one of the subsystems is $PT$-symmetric. In a different context, such a study \cite{Jap2} has led to the reproduction of the Clauser-Horne-Shimony-Holt (CHSH) inequality in connection with the invariance of the entanglement. Interestingly, an experimental search seems to provide evidence that a simulated $PT$-symmetric subsystem preserves no-signaling \cite{SB4}. However, theoretical results pointing to the contrary have also been noted \cite{SB1,SK1}. Here, we must emphasize that in the bipartite scenario, no-signaling means that, for two observers, say Alice and Bob, whatever Alice does, the outcome probability of any measurement by Bob is unchanged. This is the central assumption\footnote{$PT$-symmetry is well established in the single-party case, but for a multi-partite scenario the problem of finding entangled states is NP-hard \cite{WGE}; this makes it natural to settle the bipartite case first.} that is implicit in our paper. In $SQM$, where the concept of Hermiticity\footnote{Hermiticity is the condition satisfied by an operator associated with an observable.} holds, confirming the reality of the associated energy spectrum, one shows that the outcome probability of any measurement by Bob is determined entirely by his reduced density matrix. Consequently, no-signaling can be proven by showing that the reduced density matrix of Bob is
unchanged whatever operation Alice does.
\par
In the present work, we consider a two-dimensional example of a $PT$-symmetric Hamiltonian and compute the reduced density matrix of one party using the definition of the $CPT$-inner product just given. We then show that the entanglement entropy of the density matrix remains unaltered after applying a time evolution operator. It should be noted that throughout this work we assume an even time-reversal operator, which is valid for bosonic systems \cite{REF1}.
\par
It needs to be mentioned that results similar to ours concerning the no-signaling principle were obtained by the authors of \cite{Jap2}, in which the principle is shown for bipartite systems where either one or both Hamiltonians of the subsystems are non-Hermitian and $PT$-symmetric, as defined in a space of states controlled by a $CPT$ inner product. The present work differs from their approach in its use of modified density matrices, which supplies the reasoning for estimating the reduced density matrices of the parties through the $CPT$ inner product for the finite representation of the subsystems. We further highlight that \cite{Jap2} lacked an argument establishing the reduced density matrices calculated under the modified norm as the appropriate quantities realising the local measurements made by an observer within the $PTQM$ framework; the present work addresses precisely that point.
\section{Prerequisites}
\subsection{Finite representation of unbroken $PT$-symmetric systems}
It needs to be noted that the eigenstates of operators in a $PTQM$ theory are not orthogonal under the standard Dirac inner product. An implication is that the Hamiltonian given by
\begin{equation}
\label{rep}
H = \sum_{i=1}^{N} \lambda_{i}\ket{\psi_{i}}\bra{\phi_{i}}
\end{equation}
with eigenvalues $\{\lambda_{i}|1\leq i \leq N\}$ and eigenstates $\{\ket{\psi_{i}}|1 \leq i \leq N\}$ along with $\sum_{i=1}^{N}\ket{\psi_{i}}\bra{\phi_{i}} = \mathbb{I}$, can
never coincide with the more familiar
\begin{equation}
\label{incompref}
H_{S} = \sum_{i=1}^{N} \lambda_{i}\ket{\psi_{i}}\bra{\psi_{i}}
\end{equation}
which satisfies the Hermiticity condition of $SQM$.
\par
The most we can do is to write $H$ in a factorized form involving $H_S$ as one of the factors
\begin{equation}
\label{repnewform}
H = \left(\sum_{i=1}^{N} \lambda_{i}\ket{\psi_{i}}\bra{\psi_{i}}\right)\left(\sum_{j=1}^{N} \ket{\phi_{j}}\bra{\phi_{j}}\right) = H_{S}\hat{\eta}
\end{equation}
where $\hat{\eta}$ denotes the sum
\begin{equation}
\hat{\eta} =\sum_{j=1}^{N} \ket{\phi_{j}}\bra{\phi_{j}}.
\end{equation}
The feature of $H$ is that it commutes with the $PT$ operator defined by
\begin{equation}
PT = \sum_{i=1}^{N} \alpha_{i}\ket{\psi_{i}}\bra{\phi_{i}}.
\end{equation}
Consider now an operator $A$ admitting factorization
\begin{equation}
A = A_{S}\hat{\eta},\;\;\;\;A_{S}=\sum_{i=1}^{N}\lambda_{i}^{'}\ket{n_{i}}\bra{n_{i}},\;\;\;\;\ket{n_{i}}=\sum_{j=1}^{N}c_{ij}\ket{\psi_{j}}
\end{equation}
where the $\lambda_{i}^{'}$ and $c_{ij}$ are constant coefficients in the expansion. Corresponding to the modified inner product we can then write
\begin{equation}
\label{expect1}
(\psi,A\psi)_{\hat{\eta}} = \braket{\psi|\hat{\eta} A_{S}\hat{\eta}|\psi} = \braket{\phi|A_{S}|\phi}.
\end{equation}
This makes the role of $\hat{\eta}$ clear. With $H$ defining the Hamiltonian of the system, any valid measurement corresponding to the operator $A$ over the energy eigenstates relies on $H$ satisfying the pseudo-hermiticity relationship\cite{AM1, AM2}
\begin{equation}
\label{pseher}
\hat{\eta}H\hat{\eta}^{-1} = H^{\dagger}.
\end{equation}
The expectation value of $A$ over the state $\psi$ in a pseudo-Hermitian framework\cite{AM2} is the same as the expectation value of $A_{S}$ over the state $\phi$ in the standard framework.
\par
As in $SQM$, the outcomes of measurements corresponding to the operator $A_{S}$ depend solely on the state $\ket{\phi}$ and are given by the density matrix $\rho = \ket{\phi}\bra{\phi}$, namely,
\begin{equation}
\braket{\phi|A_{S}|\phi} = \text{Tr}\left(\ket{\phi}\bra{\phi}A_{S}\right).
\end{equation}
Using the cyclic property of the trace, $\text{Tr}\left(\ket{\phi}\bra{\phi}A_{S}\right) = \text{Tr}\left(\hat{\eta}\ket{\psi}\bra{\phi}A_{S}\right) = \text{Tr}\left(\ket{\psi}\bra{\phi}A_{S}\hat{\eta}\right) = \text{Tr}\left(\ket{\psi}\bra{\phi}A\right)$, and also \eqref{expect1}, we can express the $\eta$-inner product as
\begin{equation}
(\psi,A\psi)_{\hat{\eta}} = \text{Tr}\left(\ket{\psi}\bra{\phi}A\right).
\end{equation}
The effective density operator of $\ket{\psi}$ in a pseudo-hermitian framework is thus $\ket{\psi}\bra{\phi} = \ket{\psi}\bra{\psi}\hat{\eta}$. Note that when $\hat{\eta}=\mathbb{I}$ we recover the standard result of $SQM$.
\par
We now provide a scheme to calculate overlaps under a $CPT$-inner product by introducing a generalized inner product restricted to a finite-dimensional Hilbert space. For any operator $\hat{X}$ which shares simultaneous eigenstates with the Hamiltonian $\hat{H}$, the $X$-inner product is defined to be $\braket{-|-}_{X}$ and obeys
\begin{equation}
\label{eq:Xnorm}
\braket{\phi|\psi}_{X} = \hat{X}\ket{\phi}\cdot\ket{\psi}, \quad \bra{\phi}_{X} = (\hat{X}\ket{\phi})^{T}.
\end{equation}
Replacing $\hat{X}$ by the $CPT$-operator, the above equation provides an appropriate way to perform calculations when a $CPT$ inner product is invoked. Indeed, one can derive
\begin{equation}
\braket{\phi|\psi}_{CPT} = CPT\ket{\phi}\cdot\ket{\psi} = \braket{\phi|(CP)^{T}|\psi},
\end{equation}
which implies that the intertwining operator $\hat{\eta}$ for an unbroken $PT$-symmetric system is $(CP)^{T}$. For the density operator $\rho$ in the state $\ket{\psi}$ we deduce easily that
\begin{equation}
\rho = \ket{\psi}\bra{\psi}_{CPT} = \ket{\psi}\bra{\psi}(CP)^{T}.
\end{equation}
With the above results at hand, we analyze the no-signaling principle in a $PT$-symmetric framework in what follows.
\subsection{Entanglement and the no-signaling principle in a $PT$-symmetric framework}
For $N$ quantum subsystems defined over a set of Hilbert spaces $\{\mathcal{H}_{i}|1\leq i\leq N\}$, a composite system generated out of these subsystems will exist in $\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes...\otimes\mathcal{H}_{N}$. Let us define a joint state as the tensor product
\begin{equation}
\ket{\psi} = \ket{\psi_{1}}\otimes\ket{\psi_{2}}\otimes...\otimes\ket{\psi_{N}}\in \mathcal{H},\;\;\;\;\ket{\psi_{i}}\in\mathcal{H}_{i}.
\end{equation}
A pure state $\ket{\psi}$ of $\mathcal{H}$ such as the one given above is said to be separable \cite{Sib}. A state of $\mathcal{H}$ which is non-separable is called an entangled pure state\footnote{This criterion also holds for an infinite representation of subsystems.}.
\par
For $N=2$, which conforms to a bipartite system, a measure of entanglement is provided by the following definition of information entropy
\begin{equation}
E(\psi) = -\mathrm{Tr}_{1}(\rho_{1}\log_{2}\rho_{1}) = -\mathrm{Tr}_{2}(\rho_{2} \log_{2}\rho_{2})
\end{equation}
where $\rho$ is the density matrix corresponding to $\ket{\psi}$ and the reduced density matrices $\rho_{1}$ and $\rho_{2}$ are given respectively by the partial traces of $\rho$: $\rho_{1} = \mathrm{Tr}_{2}(\rho)$ and $\rho_{2} = \mathrm{Tr}_{1}(\rho)$. The entropy $E(\psi)$ is
\begin{equation}
E(\psi) = -\sum_{i} \lambda_{i}\log_{2}{\lambda_{i}}
\end{equation}
where $\lambda_{i}$'s are the eigenvalues of the relevant reduced density matrix. The scheme for calculating the density matrix of the states of the system will, however, vary if we are dealing with pseudo-hermitian subsystems.\footnote{One can equivalently perform the calculation of the density operator by following the scheme for the bra vector as provided in \eqref{eq:Xnorm}.}
\par
Consider $\{\ket{u_{n}}\}$ and $\{\ket{v_{n}}\}$ as basis sets of the respective Hilbert spaces $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$. The basis set of the composite Hilbert space $\mathcal{H}_{1}\otimes\mathcal{H}_{2}$ is then $\{\ket{u_{n}}\otimes\ket{v_{n}}\}$. As such, a (generally entangled) pure bipartite state is given by
\begin{equation}
\label{eq:purebipartite}
\ket{\psi} = \sum_{n,m=1}^{D_{1},D_{2}} C_{nm}\ket{u_{n}}\otimes\ket{v_{m}}, \quad \sum_{n,m} \mid C_{nm}\mid^{2} = 1
\end{equation}
where $D_{1},D_{2}$ are the respective dimensions of the Hilbert spaces and $C_{nm}$ are constants. Since we restrict to bipartite systems only, we take $D_{1}=D_{2}=2$ in what follows.
\subsection{Overview of a $2\times2$ $PT$-Symmetric model}
For calculational simplicity we adopt the following form\footnote{This structure is equivalent to the matrix considered in \cite{BB4} modulo an identity factor.} of a two-level $PT$-symmetric Hamiltonian\cite{Jog}
\begin{equation}
\label{eq:Hamiltonian}
\hat{H}=\begin{pmatrix}
i\gamma & -\zeta\\
-\zeta & -i\gamma
\end{pmatrix}
\end{equation}
where $\gamma >0$ and $\zeta >0$. With the representation of the parity operator being $\hat{P} = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}$ and the time-reversal operation acting as $T:i \rightarrow -i$, the $PT$-symmetric character of $\hat{H}$ is evident.
\par
Because of the underlying $PT$-symmetry, the right and left eigenvectors of $\hat{H}$ are not the same. Specifically, the right eigenvectors (for $\left|\frac{\gamma}{\zeta}\right|\leq1$) read
\begin{equation}
\ket{\psi_{\pm}} = \frac{1}{\sqrt{2\cos{\phi}}}
\begin{pmatrix}1 \\ \mp e^{\mp i\phi}\end{pmatrix}
\end{equation}
\\
where $\sin \phi = \frac{\gamma}{\zeta}$ and the eigenvalues of $\hat{H}$ are
\begin{equation}
\lambda_{\pm} = \pm\sqrt{\zeta^2-\gamma^2}
\end{equation}
These are entirely real if the inequality $\gamma < \zeta$ holds. The eigenvalues become degenerate at $\gamma = \zeta$, while for $\gamma > \zeta$ they turn into a purely imaginary complex-conjugate pair.
\par
Following \cite{BB4}, we adopt, up to a sign, the $\hat{C}$ operator in the form
\begin{equation}
\hat{C} = \begin{bmatrix}-i\tan{\phi}&\sec{\phi}\\\sec{\phi}&i\tan{\phi}\end{bmatrix}.
\end{equation}
\\
It is immediately verified that the actions of $\hat{P}\hat{T}$ and $\hat{C}$ operators on the eigenstates $\ket{\psi_{\pm}}$ work as
\begin{equation}
\begin{split}
\hat{P}T&\ket{\psi_{\pm}} = \frac{\mp e^{\pm i\phi}}{\sqrt{2\cos{\phi}}}\begin{bmatrix}1 \\ \mp e^{\mp i\phi}\end{bmatrix}, \\
\hat{C}&\ket{\psi_{\pm}} = \frac{\mp1}{\sqrt{2\cos{\phi}}}\begin{bmatrix}1 \\ \mp e^{\mp i\phi}\end{bmatrix}.
\end{split}
\end{equation}
\\
These lead to the positive definiteness of the $CPT$-inner product for an arbitrary state $\ket{\psi} = \begin{bmatrix}a\\b\end{bmatrix} = \begin{bmatrix}r_{a}e^{i\theta_{a}}\\r_{b}e^{i\theta_{b}}\end{bmatrix}$ which is given using
\begin{equation}
\hat{C}\hat{P}T\ket{\psi} = \frac{1}{\cos{\phi}}\begin{bmatrix}a^{*}-i b^{*}\sin{\phi}\\b^{*}+i a^{*}\sin{\phi}\end{bmatrix} \\
\end{equation}
as
\begin{equation}
\begin{split}
\braket{\psi|\psi}_{CPT} &= \frac{1}{\cos{\phi}}[aa^{*}+bb^{*} - i(b^{*}a-a^{*}b)\sin{\phi}] \\&= \frac{1}{\cos{\phi}}[r_{a}^{2}+r_{b}^{2}+2r_{a}r_{b}\sin{\phi}\sin{(\theta_{a}-\theta_{b})}]\geq 0,
\end{split}
\end{equation}
consistent with the result obtained in \cite{SB3}.
\par
Finally, we might keep in mind that as $\phi\rightarrow 0$, the framework of $PT$-symmetric quantum mechanics ($PTQM$) reduces to that of $SQM$:
\begin{equation}
\label{PTQMQM}
\begin{split}
& \hat{H}\rightarrow-\zeta\sigma_{x}
\\& \frac{1}{\sqrt{2\cos{\phi}}}
\begin{pmatrix}1 \\ \mp e^{\mp i\phi}\end{pmatrix} \rightarrow \frac{1}{\sqrt{2}}
\begin{pmatrix}1 \\ \mp 1\end{pmatrix}
\\& \pm\sqrt{\zeta^2-\gamma^2} \rightarrow \pm\zeta
\\& \hat{C} = \begin{bmatrix}-i\tan{\phi}&\sec{\phi}\\\sec{\phi}&i\tan{\phi}\end{bmatrix} \rightarrow \begin{bmatrix}0&1\\1&0\end{bmatrix} = \hat{P}
\\& \braket{-|-}_{CPT} \rightarrow \braket{-|-}_{T}
\end{split}
\end{equation}
where we identify $\braket{-|-}_{T}$ as the usual Dirac norm. It is useful to note that the Hamiltonian $\hat{H}$ also affords the following factorized representation
\begin{equation}
\hat{H} = \hat{H}_{QM}\hat{\eta},\;\;\;
\hat{H}_{QM} = -\zeta\cos{\phi}\begin{bmatrix}0&1\\1&0\end{bmatrix},\;\;\;\hat{\eta}=\begin{bmatrix}\sec{\phi}&i\tan{\phi}\\-i\tan{\phi}&\sec{\phi}\end{bmatrix}=(CP)^{T}.
\end{equation}
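The statements above are easy to verify numerically. The following sketch (Python with NumPy assumed; an illustration, not part of the text) checks the reality of the spectrum for $\gamma<\zeta$, the pseudo-hermiticity relation \eqref{pseher} with $\hat{\eta}=(CP)^{T}$, and the positivity of the $CPT$-norm for a sample state:
\begin{verbatim}
# Checks for the 2x2 PT-symmetric model of (eq:Hamiltonian).
import numpy as np

gamma, zeta = 0.4, 1.0
phi = np.arcsin(gamma / zeta)
H = np.array([[1j * gamma, -zeta], [-zeta, -1j * gamma]])
P = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[-1j * np.tan(phi), 1 / np.cos(phi)],
              [1 / np.cos(phi), 1j * np.tan(phi)]])
eta = (C @ P).T

print(np.linalg.eigvals(H))        # +/- sqrt(zeta^2 - gamma^2)
print(np.allclose(eta @ H @ np.linalg.inv(eta), H.conj().T))  # True
psi = np.array([0.3 + 0.1j, -0.7 + 0.2j])
print((C @ P @ psi.conj()) @ psi)  # <psi|psi>_CPT: real, positive
\end{verbatim}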
\section{Entanglement in $PT$-symmetric systems}
In $SQM$, as the density operator of a bipartite system evolves, no-signaling is established by showing the invariance of the entropy between the initial and final entangled states. For an unbroken $PT$-symmetric system, the same can be established by adapting the states to conform to pseudo-hermitian transformations. Specifically, the following steps are followed:
\begin{enumerate}
\item First, for a pure entangled state $\ket{\psi}$, we determine the initial estimate of the quantity $E(\psi_{t=0})$.
\item Then, we apply the time evolution operator on the composite state in the given Hilbert space $\mathcal{H}$ and calculate the reduced density matrix by performing partial traces of the density operator. It puts us in a position to determine the entanglement measure of $\psi_{t}$.
\item Finally, we determine the time-dependent quantity $E(\psi_{t})$ to demonstrate the invariant result $E(\psi_{t=0}) = E(\psi_{t=t'})$.
\end{enumerate}
We now proceed to address the different subsystems as alluded to above.
\subsection{Subsystems governed by $PTQM$}
We focus on two subsystems each controlled by $PTQM$ according to
\begin{equation}
\label{eq:Hamiltonians}
\hat{H}_{1} = \begin{bmatrix}i\gamma&-\zeta \\ -\zeta&-i\gamma\end{bmatrix}, \quad \hat{H}_{2} = \begin{bmatrix}i\gamma'&-\zeta' \\ -\zeta'&-i\gamma'\end{bmatrix}
\\[0.5ex]
\end{equation}
with one Hamiltonian for each subsystem. The associated time evolution operators are $U_{i}(t)=e^{-i\hat{H}_{i}t}$, $i=1,2$. The eigenstates (normalised under the $CPT$-inner product) of $\hat{H}_{i}$, which serve as a basis set of $\mathcal{H}_{i}$, $i=1,2$, are given by
\begin{equation}
\begin{split}
\{\ket{u_{1}},\ket{u_{2}}\}&= \left\{\frac{1}{\sqrt{2\cos{\phi}}}\begin{bmatrix}1\\- e^{- i \phi}\end{bmatrix},\frac{1}{\sqrt{2\cos{\phi}}}\begin{bmatrix}1\\+ e^{+ i \phi}\end{bmatrix}\right\}
\\[0.75ex]\{\ket{v_{1}},\ket{v_{2}}\}&= \left\{\frac{1}{\sqrt{2\cos{\phi'}}}\begin{bmatrix}1\\- e^{- i \phi'}\end{bmatrix},\frac{1}{\sqrt{2\cos{\phi'}}}\begin{bmatrix}1\\+ e^{+ i \phi'}\end{bmatrix}\right\}
\end{split}
\end{equation}
where $\sin{\phi} = \frac{\gamma}{\zeta}$ and $\sin{\phi'} = \frac{\gamma'}{\zeta'}$. We now construct an entangled state and look into its behaviour upon the application of the time evolution operator $\mathbb{I}\otimes U(t)$ where for concreteness we take $U(t) = e^{-i\hat{H_{2}}t}$. To this end, using the definition of \eqref{eq:purebipartite} we first arrive at the form
\begin{equation}
\label{eq:entstate}
\ket{\psi} = \sum_{n,m=1}^{2,2} C_{nm}\ket{u_{n}}\otimes\ket{v_{m}}, \quad \sum_{n,m} \mid C_{nm}\mid^{2} = 1.
\end{equation}
with its bra counterpart reading
\begin{equation}
\bra{\psi}= \left(\hat{C}\hat{P}T\otimes\hat{C}\hat{P}T\ket{\psi}\right)^{T} = \sum_{n,m=1}^{2,2} C_{nm}^{*}\bra{u_{n}}_{CPT}\otimes\bra{v_{m}}_{CPT}.
\end{equation}
Here $X$ has been replaced by $CPT$ in the scheme formulated in \eqref{eq:Xnorm}. Although of no direct concern here, the above notation for the bra would be useful for handling entanglement in multi-partite cases, where the individual subsystems contribute towards defining an overall inner product of the composite system.
\par
The full density matrix of the entangled state reads
\begin{equation}
\label{eq:denfull}
\rho_{1,2} = \sum_{n,m,a,b=1}^{2} C_{ab}C_{nm}^{*}\ket{u_{a}}\bra{u_{n}}_{CPT}\otimes\ket{v_{b}}\bra{v_{m}}_{CPT}.
\end{equation}
Its individual elements are summarized below:
\begin{equation}
\begin{split}
&\braket{u_{i}|u_{j}}_{CPT} = \delta_{ij},\\[0.5ex]
&\ket{u_{1}}\bra{u_{1}}_{CPT} = \frac{1}{2\cos{\phi}}\begin{bmatrix}e^{i\phi}&-1\\-1&e^{-i\phi}\end{bmatrix},\\[0.5ex]
&\ket{u_{2}}\bra{u_{2}}_{CPT} = \frac{1}{2\cos{\phi}}\begin{bmatrix}e^{-i\phi}&1\\1&e^{i\phi}\end{bmatrix},\\[0.5ex]
&\ket{u_{1}}\bra{u_{2}}_{CPT}= \frac{1}{2\cos{\phi}}\begin{bmatrix}e^{-i\phi}&1\\-e^{-2i\phi}&-e^{-i\phi}\end{bmatrix},\\[0.5ex]
&\ket{u_{2}}\bra{u_{1}}_{CPT} = \frac{1}{2\cos{\phi}}\begin{bmatrix}e^{i\phi}&-1\\e^{2i\phi}&-e^{i\phi}\end{bmatrix}.
\end{split}
\end{equation}
These correspond to $\mathcal{H}_{1}$. A similar set can be found for $\mathcal{H}_{2}$ by replacing $u$ by $v$ and $\phi$ by $\phi'$. As for the trace of the density operators, it suffices to mention that it follows the usual results for eigenstates normalized under the appropriate inner product.
\par
Applying the partial trace in $\mathcal{H}_{2}$ gives us the reduced density operator for $\mathcal{H}_{1}$
\begin{equation}
\rho_{1} = \mathrm{Tr}_{2}[\rho_{1,2}] = \sum_{a,b,n=1}^{2} C_{ab}C_{nb}^{*}\ket{u_{a}}\bra{u_{n}}_{CPT}
\end{equation}
where $\rho_{1}$ stands for the matrix
\begin{equation}
\label{eq:den1}
\begin{split}
&\frac{1}{2\cos{\phi}}\begin{bmatrix}
N_{11}& N_{12}\\N_{21}&N_{22}
\end{bmatrix}
\end{split}
\end{equation}
whose elements read explicitly
\begin{equation}
\label{eq:denterm}
\begin{split}
& N_{11} = (\alpha+\gamma)e^{i\phi} + (\beta+\delta)e^{-i\phi},\\
& N_{12} = (\beta+\delta-\alpha-\gamma),\\
& N_{21} = (\delta-\alpha)-\beta e^{-2i\phi} + \gamma e^{2i\phi},\\
& N_{22} = (\delta-\gamma)e^{i\phi}+(\alpha-\beta)e^{-i\phi},\\
& \alpha = C_{11}C_{11}^{*} + C_{12}C_{12}^{*},\\
& \beta = C_{11}C_{21}^{*} + C_{12}C_{22}^{*},\\
& \gamma = C_{21}C_{11}^{*} + C_{22}C_{12}^{*} = \beta^{*},\\
& \delta = C_{21}C_{21}^{*} + C_{22}C_{22}^{*} \;\;\;\;\text{and}\\
& \alpha+\delta=1.
\end{split}
\end{equation}
What happens when we apply\footnote{The application of the time evolution operator here plays the role of a local operation performed on the entangled state by the party whose subsystem evolves.} the time evolution operator to the density matrix of \eqref{eq:entstate}? A straightforward calculation gives
\begin{equation}
\begin{split}
&\ket{\psi_{t}} = \mathbb{I}\otimes e^{-i\hat{H}_{2}t}\ket{\psi} = \sum_{n,m=1}^{2,2} e^{-i\lambda_{m}t}C_{nm}\ket{u_{n}}\otimes\ket{v_{m}}, \\ & \lambda_{1} = \sqrt{\zeta'^2-\gamma'^2},\;\;\;\lambda_{2} = -\sqrt{\zeta'^2-\gamma'^2}
\end{split}
\end{equation}
resulting in the following time-dependent form $\rho_{1,2}$
\begin{equation}
\rho_{1,2}(t) = \sum_{n,m,a,b=1}^{2}e^{i(\lambda_{m}-\lambda_{b})t} C_{ab}C_{nm}^{*}\ket{u_{a}}\bra{u_{n}}_{CPT}\otimes\ket{v_{b}}\bra{v_{m}}_{CPT}.
\end{equation}
In particular $\rho_{1}(t)$ is expressible as
\begin{equation}
\label{eq:den2}
\rho_{1}(t) = \frac{1}{2\cos{\phi}}\begin{bmatrix}
N_{11}&N_{12} \\ N_{21}&N_{22}
\end{bmatrix}
\end{equation}
following the convention set up in \eqref{eq:denterm}.
\par
We therefore obtain the result that \eqref{eq:den1} and \eqref{eq:den2} are the reduced density matrices corresponding to $\mathcal{H}_{1}$ before and after the operation of the time evolution operator, respectively. It shows that the operation performed on the subsystem governed by $\mathcal{H}_{2}$ leaves the measurement statistics of the first subsystem invariant, i.e. $E(\psi)=E(\psi_{t})$. Thus no-signaling is a valid criterion in $PTQM$.
\par
To inquire whether the eigenvalues of the reduced density operators undergo any change if we transform to the standard $QM$ formalism, it is enough to look at the dependence of the eigenvalues ($\omega_{\pm}$) on the parameters of the Hamiltonians \eqref{eq:Hamiltonians}. It is easily seen that
\begin{equation}
\label{eq:eigPT}
\begin{split}
\omega_{\pm} &= \frac{1}{2}\left((\alpha+\delta)\pm\sqrt{1+4(\beta\gamma-\alpha\delta)}\right)
\\&= \frac{1}{2}\left(1\pm\sqrt{1-4\mid C_{11}C_{22}-C_{12}C_{21}\mid^{2}}\right)
\end{split}
\end{equation}
implying that the eigenvalues depend only on the coefficients $C_{nm}$ and stay invariant under the transformation.
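The invariance just described can also be illustrated numerically. The sketch below (Python with NumPy assumed; an illustration, not part of the derivation, with arbitrarily chosen coefficients $C_{nm}$) builds the state \eqref{eq:entstate}, forms $\rho_{1,2}(t)$ with the $CPT$-bras, traces out the second subsystem and confirms that the spectrum of $\rho_{1}(t)$, and hence $E(\psi_{t})$, does not depend on $t$:
\begin{verbatim}
# rho_1(t) from (eq:entstate): spectrum independent of t.
import numpy as np
from itertools import product

def model(gamma, zeta):
    phi = np.arcsin(gamma / zeta)
    nrm = 1 / np.sqrt(2 * np.cos(phi))
    kets = [nrm * np.array([1, -np.exp(-1j * phi)]),
            nrm * np.array([1, np.exp(1j * phi)])]
    lams = [np.sqrt(zeta**2 - gamma**2), -np.sqrt(zeta**2 - gamma**2)]
    P = np.array([[0.0, 1.0], [1.0, 0.0]])
    C = np.array([[-1j * np.tan(phi), 1 / np.cos(phi)],
                  [1 / np.cos(phi), 1j * np.tan(phi)]])
    return kets, lams, C @ P

u, lu, CP1 = model(0.4, 1.0)
v, lv, CP2 = model(0.2, 0.8)
Cnm = np.array([[0.6, 0.2j], [-0.3, 0.7]])
Cnm = Cnm / np.linalg.norm(Cnm)          # sum |C_nm|^2 = 1

def rho1(t):
    r = np.zeros((4, 4), dtype=complex)
    for a, b, n, m in product(range(2), repeat=4):
        ket = np.kron(u[a], v[b])
        bra = np.kron(CP1 @ u[n].conj(), CP2 @ v[m].conj())
        r += (np.exp(1j * (lv[m] - lv[b]) * t)
              * Cnm[a, b] * np.conj(Cnm[n, m]) * np.outer(ket, bra))
    return np.trace(r.reshape(2, 2, 2, 2), axis1=1, axis2=3)

for t in [0.0, 0.7, 2.3]:
    w = np.sort(np.linalg.eigvals(rho1(t)).real)
    print(t, w, -(w * np.log2(w)).sum())  # same spectrum and entropy
\end{verbatim}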
\subsection{Subsystems governed by $PTQM$ and $SQM$}
For concreteness let the subsystem associated with $\mathcal{H}_{1}$ be governed by $PTQM$ while that associated with $\mathcal{H}_{2}$ follows $SQM$. For the latter we choose the Hamiltonian to be $\sigma_{x}$, whose eigenstates, orthonormal under the usual inner product, serve as the basis states:
\begin{equation}
\{\ket{v_{1}},\ket{v_{2}}\} = \{\ket{1},\ket{0}\} = \left\{ \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix},\frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}\right\}.
\end{equation}
The inner product structure, with the usual Dirac conjugation acting on the second factor, gives
\begin{equation}
\begin{split}
\bra{\psi} &= (\hat{C}\hat{P}T\otimes K\ket{\psi})^{T}
\\&= \sum_{n,m=1}^{2,2} C_{nm}^{*}(\hat{C}\hat{P}T\ket{u_{n}})^{T}\otimes(\ket{u_{m}})^{\dagger}
\end{split}
\end{equation}
where $K=T$ mimics the usual complex conjugation.
\par
The initial density matrix of the composite state \eqref{eq:entstate}, in the notation of \eqref{eq:denterm}, is taken as the tensor product
\begin{equation}
\label{eq:denoverall}
\begin{split}
\rho_{1,2} = &\frac{1}{2\cos{\phi}}\begin{bmatrix}
N_{11}& N_{12}\\N_{21}&N_{22}
\end{bmatrix} \otimes \\&\frac{1}{2}\begin{bmatrix}
1+\beta+\beta^{*}& (\delta-\alpha)+(\beta-\beta^{*})\\(\delta-\alpha)-(\beta-\beta^{*})&1-(\beta+\beta^{*})
\end{bmatrix}.
\\
\end{split}
\end{equation}
We immediately infer from \eqref{eq:eigPT} and \eqref{PTQMQM} that the partial trace of $\rho_{1,2}$ in either of the Hilbert spaces retains the same set of eigenvalues. In fact,
if we denote the evolved density matrices by $\rho_{P}(t)$ and $\rho_{S}(t)$ and let the subsystems evolve by means of the operators $U_{1}(t)\otimes\mathbb{I}$ and $\mathbb{I}\otimes U_{2}(t)$ respectively, where $U_{1}(t) = e^{-i\hat{H}_{1}t}$ and $U_{2}(t) = e^{-i\sigma_{x}t}$, then it transpires that for $\rho_{P}(t)$ we have
\begin{equation}
\begin{split}
&\ket{\psi_{t}} = U_{1}(t)\otimes\mathbb{I}\ket{\psi} = \sum_{n,m=1}^{2,2} e^{-i\lambda_{n}t}C_{nm}\ket{u_{n}}\otimes\ket{v_{m}},
\\ & \lambda_{1} = \sqrt{\zeta^2-\gamma^2},\;\;\;\lambda_{2} = -\sqrt{\zeta^2-\gamma^2},
\\&\rho_{P}(t) = \sum_{n,m,a,b=1}^{2}e^{i(\lambda_{n}-\lambda_{a})t} C_{ab}C_{nm}^{*}\ket{u_{a}}\bra{u_{n}}_{CPT}\otimes\ket{v_{b}}\bra{v_{m}}
\end{split}
\end{equation}
while for $\rho_{S}(t)$ the following holds
\begin{equation}
\begin{split}
&\ket{\psi_{t}} = \mathbb{I}\otimes U_{2}(t)\ket{\psi} = \sum_{n,m=1}^{2,2} e^{-i\lambda_{m}t}C_{nm}\ket{u_{n}}\otimes\ket{v_{m}},
\\&\lambda_{1}=-1, \;\;\;\lambda_{2} = 1,
\\&\rho_{S}(t) = \sum_{n,m,a,b=1}^{2}e^{i(\lambda_{m}-\lambda_{b})t} C_{ab}C_{nm}^{*}\ket{u_{a}}\bra{u_{n}}_{CPT}\otimes\ket{v_{b}}\bra{v_{m}}.
\end{split}
\end{equation}
This implies that, in either case, the reduced density matrix of the undisturbed subsystem reflects no change, successfully demonstrating the no-signaling hypothesis.
\section{Concluding remarks}
The problem of preservation of the no-signaling principle is addressed for certain combinations of $PT$-symmetric systems. Since all $PT$-symmetric Hamiltonians are known to belong to the class of pseudo-Hermitian theories, we use the techniques of the latter to establish the result in the affirmative. In this regard, we considered the pair of subsystems governed each by $PTQM$ as one possibility, along with $PTQM$ and $SQM$ as another. The key ingredient that we employed is the notion of the $CPT$-inner product, which is known to admit a probabilistic interpretation for a $PTQM$ system, to establish the invariance of the relevant reduced density matrix before and after the operation of the time evolution operator. Although the results obtained in this work are similar to those of \cite{Jap2}, we highlight the difference in our elaborate use of modified density matrices, a crucial feature missing in the former.
\section{Acknowledgments}
We thank our colleagues for making several constructive criticisms that, in our opinion, led to a substantial improvement of the paper.
\def\sectiono#1{\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}}
\def{\hbox{ 0\kern-1.5mm 0}}{{\hbox{ 0\kern-1.5mm 0}}}
\begin{document}
{}~
{}~
\hfill\vbox{\hbox{hep-th/0601228}}\break
\vskip .6cm
\medskip
\baselineskip 20pt
\begin{center}
{\Large \bf
BTZ Black Hole
with Chern-Simons and Higher Derivative Terms}
\end{center}
\vskip .6cm
\medskip
\vspace*{4.0ex}
\centerline{\large \rm Bindusar Sahoo and Ashoke Sen}
\vspace*{4.0ex}
\centerline{\large \it Harish-Chandra Research Institute}
\centerline{\large \it Chhatnag Road, Jhusi,
Allahabad 211019, INDIA}
\vspace*{1.0ex}
\centerline{E-mail: bindusar@mri.ernet.in, ashoke.sen@cern.ch,
sen@mri.ernet.in}
\vspace*{5.0ex}
\centerline{\bf Abstract} \bigskip
The entropy of a BTZ black hole in the
presence of gravitational Chern-Simons terms has previously
been analyzed using
Euclidean action formalism. In this paper we
treat
the BTZ solution
as a two dimensional black hole by regarding the angular
coordinate as a compact direction, and
use
Wald's Noether charge method to
calculate the entropy of this
black hole in the
presence of higher derivative and
gravitational Chern-Simons terms.
The parameters labelling the black hole solution can be
determined
by extremizing an entropy function whose value at the extremum
gives the entropy of the black hole.
\vfill \eject
\baselineskip 18pt
\tableofcontents
\sectiono{Introduction} \label{s1}
The BTZ solution describes a black hole in a three
dimensional theory of gravity with negative cosmological
constant \cite{9204099} and often
appears as a factor in the near horizon geometry of
higher dimensional black holes in string theory\cite{9712251}.
For this
reason
it has provided us with a useful
tool for relating black hole entropy to the degeneracy of
microstates of the black hole, both in three dimensional
theories of gravity and also in string theory\cite{9712251,brown}.
Initial studies involved relating the Bekenstein-Hawking formula
for BTZ black hole entropy in two derivative theories of gravity
to the Cardy formula
for the degeneracy of states
in the two dimensional conformal field theory
living on the asymptotic boundary.
Later this was
generalized to higher derivative theories of gravity, where the lagrangian
density contains arbitrary powers of Riemann tensor and its covariant
derivatives\cite{9909061}. For computing entropy of black holes in such
theories one can no longer use the area formula. Instead one needs to
use
the Noether charge method developed by
Wald\cite{9307038,9312023,9403028,9502009}.
In three dimensions one can also
add to the action the gravitational
Chern-Simons terms. In this case the
Lagrangian density cannot be written in a
manifestly covariant form and as a result
Wald's formalism cannot be applied
in a straightforward fashion. For this reason
the effect of this term on the black
hole entropy was analyzed in \cite{0506176,0508218}
using the Euclidean action formalism\cite{9804085}. A different
Euclidean method yielding the same result can be found in
\cite{0509148}.
The
goal of this paper is to compute the entropy of BTZ
black holes in the presence of Chern-Simons and higher derivative
terms using Wald's Noether charge method.
In order to do this we regard the BTZ black hole as a two dimensional
configuration by treating the angular coordinate as a
compact direction\cite{9304068}.
The black hole entropy is then calculated using the dimensionally
reduced two dimensional theory. This has the advantage that the
Chern-Simons term, which was not manifestly covariant
in three dimensions,
reduces to a manifestly covariant set of terms in two
dimensions\cite{0305117}.
Hence Wald's formula for the black hole entropy can be applied in a
straightforward fashion. The result agrees with the one calculated using
the Euclidean action formalism.
The rest of the paper is organized as follows. In section \ref{s2} we
discuss the dimensional reduction of a general three dimensional theory of
gravity, including the gravitational Chern-Simons term, to two dimensions
and describe the BTZ solution from two dimensional viewpoint. In section
\ref{s3} we calculate the entropy of extremal BTZ black holes using the
entropy
function formalism\cite{0506177} which is known to be equivalent to Wald's
Noether charge method. In section \ref{s4} we calculate the
entropy of a non-extremal BTZ black hole using Wald's method directly.
Both for the extremal and the non-extremal black holes the parameters
labelling the solution can be obtained by extremizing an
entropy function whose value at the extremum gives the entropy.
\sectiono{The Two Dimensional View} \label{s2}
Let us consider a three dimensional theory of gravity with
metric $G_{MN}$ ($0\le M,N\le 2$) and a general
action of the form:\footnote{We could
add any number of scalar
fields without changing the final result since they must be frozen
to constant values in order to comply with the homogeneity of the
BTZ configuration.}
\begin{equation} \label{e1}
S = \int d^3 x \sqrt{-\det G}\left[ {\cal L}^{(3)}_0
+ {\cal L}^{(3)}_1 \right]\, .
\end{equation}
Here
${\cal L}^{(3)}_0$ denotes an arbitrary scalar constructed out of the metric,
the
Riemann tensor and covariant derivatives of the Riemann tensor. On
the other hand $\sqrt{-\det G}\, {\cal L}^{(3)}_1$ denotes
the gravitational Chern-Simons term:
\begin{equation} \label{e1a}
\sqrt{-\det G}\, {\cal L}^{(3)}_1 = K \, \Omega_3(\widehat\Gamma) \, ,
\end{equation}
where
$K$ is a
constant, $\widehat\Gamma$ is the Christoffel connection constructed out of
the metric $G_{MN}$ and
\begin{equation} \label{e2}
\Omega_3(\widehat\Gamma) = \epsilon^{MNP} \left[{1\over 2}
\widehat\Gamma^R_{MS} \partial_N \widehat\Gamma^S_{PR} + {1\over 3}
\widehat\Gamma^R_{MS} \widehat\Gamma^S_{NT}
\widehat\Gamma^T_{PR}\right]\, .
\end{equation}
$\epsilon$ is the
totally anti-symmetric symbol with $\epsilon^{012}=1$.
We shall consider field configurations where one of the coordinates (say
$y\equiv x^2$) is compact with
period $2\pi$ and the metric is independent
of this compact direction. In this case we can define two
dimensional fields through the relation:
\begin{equation} \label{e3}
G_{MN} dx^M dx^N = \phi \left[ g_{\mu\nu}
dx^\mu dx^\nu + (dy + A_\mu
dx^\mu)^2\right]\, .
\end{equation}
Here $g_{\mu\nu}$ ($0\le \mu,\nu\le 1$) denotes a
two dimensional metric, $A_\mu$ denotes a two dimensional
gauge field and $\phi$ denotes a
two dimensional scalar field. In terms of these
two dimensional fields the
action takes the form:
\begin{equation} \label{e4}
S = \int d^2 x \sqrt{-\det g}\left[ {\cal L}^{(2)}_0 + {\cal L}^{(2)}_1 \right]
\end{equation}
where
\begin{equation} \label{e5}
\sqrt{-\det g} \, {\cal L}^{(2)}_0 = \int dy \sqrt{-\det G} \,
{\cal L}^{(3)}_0
= 2\pi \sqrt{-\det G} \, {\cal L}^{(3)}_0\, ,
\end{equation}
and\cite{0305117}
\begin{equation} \label{e5a}
\sqrt{-\det g} \, {\cal L}^{(2)}_1 = K \, \pi \, \left[
{1\over 2} R \varepsilon^{\mu\nu} F_{\mu\nu}
+{1\over 2} \varepsilon^{\mu\nu} F_{\mu\tau} F^{\tau\sigma}
F_{\sigma\nu} \right]\, .
\end{equation}
Here
$R$ is the scalar curvature of the two dimensional metric
$g_{\mu\nu}$:
\begin{eqnarray}\displaystyle \label{e7}
&& \Gamma^\mu_{\nu\rho} = {1\over 2} g^{\mu\sigma}
\left( \partial_\nu g_{\sigma\rho} + \partial_\rho g_{\sigma\nu}
- \partial_\sigma g_{\nu\rho} \right) \nonumber \\
&& R^\mu_{~\nu\rho\sigma} = \partial_\rho
\Gamma^\mu_{\nu\sigma} - \partial_\sigma
\Gamma^\mu_{\nu\rho} + \Gamma^\mu_{\tau\rho}
\Gamma^\tau_{\nu\sigma} - \Gamma^\mu_{\tau\sigma}
\Gamma^\tau_{\nu\rho} \nonumber \\
&& R_{\nu\sigma} = R^\mu_{~\nu\mu\sigma}, \qquad
R = g^{\nu\sigma} R_{\nu\sigma}\, ,
\end{eqnarray}
$\varepsilon^{\mu\nu}$ is the totally antisymmetric symbol with
$\varepsilon^{01}=1$, and
\begin{equation} \label{e6}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu\, .
\end{equation}
\refb{e5} follows in a
straightforward
fashion from \refb{e1}.
\refb{e5a}
comes from dimensional reduction of the Chern-Simons term after
throwing away total derivative terms and was
worked out in
\cite{0305117}. Note that although the
Chern-Simons term cannot be expressed in a
manifestly covariant form in three dimensions, it does reduce to a
manifestly covariant expression in two dimensions.
A general BTZ black hole in the three dimensional theory
is described by the metric:
\begin{equation} \label{e16}
G_{MN} dx^M dx^N = -{
(\rho^2 - \rho_+^2) (\rho^2 - \rho_-^2)\over l^2 \rho^2} d\tau^2
+ {l^2 \rho^2 \over (\rho^2 - \rho_+^2) (\rho^2 - \rho_-^2)} d\rho^2
+ \rho^2 \left(dy - {\rho_+ \rho_-\over l \rho^2} d\tau\right)^2\, ,
\end{equation}
where $l$, $\rho_+$ and $\rho_-$ are parameters labelling the
solution.\footnote{Since the local geometry of a BTZ black hole is
that of $AdS_3$ which is maximally symmetric, the higher derivative
corrections do not change the structure of the solution. However for a
black hole carrying a given mass and angular momentum, the values of the
parameters $l$, $\rho_+$ and $\rho_-$ depend on the higher derivative
terms.}
Comparing this with \refb{e3} gives\cite{9304068}
\begin{eqnarray}\displaystyle \label{e17}
&& \phi = \rho^2\, , \qquad
A_\mu dx^\mu = - {\rho_+ \rho_-\over l \rho^2} d\tau, \nonumber \\
&& g_{\mu\nu} dx^\mu dx^\nu
= -{
(\rho^2 - \rho_+^2) (\rho^2 - \rho_-^2)\over l^2 \rho^4} d\tau^2
+ {l^2 \over (\rho^2 - \rho_+^2) (\rho^2 - \rho_-^2)} d\rho^2
\, .
\end{eqnarray}
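As a consistency check of this reduction (a symbolic sketch assuming SymPy; not part of the original text), one may substitute \refb{e17} into \refb{e3}, treating the differentials as formal symbols, and recover \refb{e16}:
\begin{verbatim}
# Substitute (e17) into (e3) and compare with the BTZ metric (e16).
import sympy as sp

rho, l, rp, rm = sp.symbols('rho l rp rm', positive=True)
dtau, drho, dy = sp.symbols('dtau drho dy')  # formal differentials

phi = rho**2
A = -rp * rm / (l * rho**2) * dtau
g2 = (-(rho**2 - rp**2) * (rho**2 - rm**2) / (l**2 * rho**4) * dtau**2
      + l**2 / ((rho**2 - rp**2) * (rho**2 - rm**2)) * drho**2)
lhs = phi * (g2 + (dy + A)**2)
btz = (-(rho**2 - rp**2) * (rho**2 - rm**2) / (l**2 * rho**2) * dtau**2
       + l**2 * rho**2 / ((rho**2 - rp**2) * (rho**2 - rm**2)) * drho**2
       + rho**2 * (dy - rp * rm / (l * rho**2) * dtau)**2)
print(sp.simplify(lhs - btz))   # 0
\end{verbatim}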
\sectiono{Extremal BTZ Black Holes} \label{s3}
We shall define
a general extremal black hole in the two dimensional theory
to
be the one whose near horizon geometry is $AdS_2$ and for
which the scalar
field $\phi$ and
the gauge field strength $F_{\mu\nu}$ are invariant under
the $SO(2,1)$ isometry of the $AdS_2$ background\cite{0506177}.
The most general near
horizon background consistent with this requirement is
\begin{equation} \label{e8}
g_{\mu\nu} dx^\mu dx^\nu = v\left(-r^2 dt^2 + {dr^2\over r^2}\right),
\qquad F_{rt} = e, \qquad \phi = u\, ,
\end{equation}
where $v$, $e$ and $u$ are constants.
Following \cite{0506177,0508042} we define
\begin{equation} \label{e9}
f(u, v, e) = \sqrt{-\det g}\, ({\cal L}^{(2)}_0+{\cal L}^{(2)}_1)\, ,
\end{equation}
evaluated in the background \refb{e8}, and
\begin{equation} \label{e10}
{\cal E}(u, v, e, q) = 2\pi (eq - f(u, v, e))\, .
\end{equation}
As was shown in \cite{0506177},
the near horizon values of $u$, $v$ and $e$ for an extremal
black hole with electric charge $q$ is obtained by extremizing the
`entropy
function' ${\cal E}$ with respect to these variables. Furthermore, Wald's entropy
for this black hole is given by the value of the function ${\cal E}$ at this
extremum\cite{0506177}.
Using eqs.\refb{e5}, \refb{e5a} and \refb{e8},
\refb{e9} we see that
for the theory considered here,
\begin{equation} \label{e10a}
f(u, v, e) = f_0(u,v,e) + \pi\, K\, (2\, e \,
v^{-1} - e^3\, v^{-2})\, ,
\end{equation}
where
\begin{equation} \label{e10b}
f_0(u, v, e) = {2\pi}\, \sqrt{-\det G}\, {\cal L}^{(3)}_0 \, .
\end{equation}
Let us now specialize to the case of extremal BTZ black holes.
These correspond to choosing
$\rho_-=\pm
\rho_+$ in \refb{e16}, \refb{e17}
and the near horizon
limit is obtained by taking $\rho$ close to $\rho_+$. Defining
\begin{equation} \label{e18}
r=\rho-\rho_+, \qquad t = {4 \over l^2} \, \tau\, ,
\end{equation}
we can express \refb{e16}
\refb{e17} for $\rho_-=\pm \rho_+$ and small $r$
as
\begin{equation} \label{e3d}
G_{MN} dx^M dx^N = {l^2\over 4}\, \left( - r^2 dt^2
+ {dr^2\over r^2}\right) + \rho_+^2 \, \left(
dy \pm
\left(- {l\over 4} + {l\over 2\rho_+}\, r
\right) \, dt
\right)^2
\end{equation}
\begin{equation} \label{e19}
\phi = \rho_+^2\, , \qquad
A_\mu dx^\mu = \pm\left(- {l\over 4} + {l\over 2\rho_+}\, r
\right) \, dt, \qquad g_{\mu\nu} dx^\mu dx^\nu =
{l^2\over 4 \rho_+^2} \, \left( - r^2 dt^2 + {dr^2\over r^2}\right)
\, .
\end{equation}
Comparison with \refb{e8} now yields
\begin{equation} \label{e20}
u = \rho_+^2, \qquad v={l^2\over 4\rho_+^2}, \qquad
e = \pm {l\over 2 \rho_+}\, .
\end{equation}
Note that instead of three independent parameters $u$, $v$ and $e$,
we now have two independent parameters $l$ and $\rho_+$
labelling the near
horizon
geometry. In particular $v$ and $e$ satisfy the relation
\begin{equation} \label{e20a}
v = e^2\, .
\end{equation}
This is a reflection of the fact that the BTZ black hole is
locally $AdS_3$ and hence
has a higher degree of symmetry than the more general configuration
considered in \refb{e8}. This is a consistent truncation of the parameter
space and hence we can extremize the entropy function ${\cal E}$
subject to this constraint. We shall choose $e$ and
\begin{equation} \label{e21}
l = 2\sqrt{ue^2}\, ,
\end{equation}
as independent
variables.
Our next step will be to investigate the structure of $f_0$ given in
\refb{e10b} for the extremal BTZ black hole
solution described above.
Since the BTZ black hole is locally the maximally symmetric $AdS_3$ space, ${\cal L}^{(3)}_0$, being a scalar
constructed out of the Riemann tensor and its covariant derivatives,
must be a constant. Furthermore since locally BTZ metrics for
different values of $\rho_\pm$ are related by coordinate
transformation, ${\cal L}^{(3)}_0$ must be independent of
$\rho_\pm$ and hence is a function of $l$ only.
Let us define
\begin{equation} \label{en2}
h(l) = {\cal L}^{(3)}_0
\end{equation}
evaluated in the BTZ black hole geometry. (Note that this definition
is independent of whether we are using the extremal or non-extremal
metric.) Since for the extremal black hole metric \refb{e3d}
\begin{equation} \label{en3}
\sqrt{-\det G} = {l^2 \rho_+ \over 4} = {l^3 \over 8\, |e|}\, ,
\end{equation}
we get
\begin{equation} \label{e22}
f_0 = 2\pi \, \sqrt{-\det G} \,
{\cal L}^{(3)}_0 = {1\over |e|} g(l)\, ,
\end{equation}
where
\begin{equation} \label{en6}
g(l) = {\pi \, l^3\, h(l)\over 4}\, .
\end{equation}
Eqs.\refb{e10}, \refb{e10a},
\refb{e20a} and \refb{e22} now give
\begin{equation} \label{e23}
{\cal E} = 2\pi \left( q\, e - {1\over |e|} g(l) - {\pi \, K\over e}
\right)\, .
\end{equation}
We need to extremize this with respect to $l$ and $e$. The
extremization with respect to $l$ requires extremization of
$g(l)$ with respect to $l$. Let us define
\begin{equation} \label{ecdef}
C = -{1\over \pi} g(l)
\end{equation}
at the extremum of $g$. This gives
\begin{equation} \label{e24}
{\cal E} = 2\pi \left( q\, e + {\pi\, C\over |e|} - {\pi\, K\over e}
\right) \, .
\end{equation}
We shall assume that $C\ge |K|$.
Extremizing \refb{e24} with respect to $e$ we now get:
\begin{eqnarray}\displaystyle \label{eevalue}
e &=& \sqrt{\pi (C-K)\over q} \quad \hbox{for} \, q>0\, ,
\nonumber \\
&=& \sqrt{\pi (C+K)\over |q|} \quad \hbox{for} \, q<0\, .
\end{eqnarray}
Furthermore, at the extremum,
\begin{eqnarray}\displaystyle \label{e25}
{\cal E} &=& 2\pi \sqrt{ c_R \, q\over 6} \quad \hbox{for} \, q>0\, ,
\nonumber \\
&=& 2\pi \sqrt{c_L \, |q|\over 6} \quad \hbox{for} \, q<0\, ,
\end{eqnarray}
where we have defined
\begin{equation} \label{e26}
c_L = 24\, \pi\, (C+K)\, , \qquad c_R =
24\, \pi\, (C-K) \, .
\end{equation}
\refb{e25} gives the entropy of the extremal BTZ black hole.
Since the conserved charge $q$ measures
momentum along $y$, which
for a BTZ black hole represents the angular
momentum $J$, eqs.\refb{e25}, \refb{e26} are in agreement
with the results of \cite{0506176} for extremal BTZ black holes
with mass = $|J|$.
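The extremization leading to \refb{eevalue} and \refb{e25} is elementary and can be confirmed symbolically; the sketch below (assuming SymPy, written for the $q>0$ branch with the shorthand $d\equiv C-K>0$) is an illustration only:
\begin{verbatim}
# Extremize E = 2 pi (q e + pi d/e), d = C - K, over e > 0 and
# compare with 2 pi sqrt(c_R q/6), where c_R = 24 pi d.
import sympy as sp

e, q, d = sp.symbols('e q d', positive=True)
E = 2 * sp.pi * (q * e + sp.pi * d / e)
e_star = sp.solve(sp.diff(E, e), e)[0]
print(e_star)                                 # sqrt(pi*d/q)
cR = 24 * sp.pi * d
print(sp.simplify(E.subs(e, e_star) - 2 * sp.pi * sp.sqrt(cR * q / 6)))
# prints 0
\end{verbatim}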
Note that the Chern-Simons term plays no role in the determination
of the parameter $l$ and
\begin{equation} \label{e26a}
c_L+c_R = 48\, \pi \, C = -48\, g(l)\, .
\end{equation}
This is a reflection of the fact that in three dimensions the effect of
the Chern-Simons term on the equations of motion involves the covariant
derivative of the Ricci tensor \cite{0305117}, which vanishes for the BTZ
solution. On the other hand
\begin{equation} \label{e27}
c_L - c_R = 48\, \pi \, K \, ,
\end{equation}
is insensitive to the detailed structure of the higher derivative
terms and is determined completely by the coefficient of the
Chern-Simons term. In the analysis of \cite{0506176} this was
a consequence of the fact that $c_L-c_R$ is related to the
diffeomorphism anomaly of the bulk theory. In the present
context this is a consequence of the fact that $c_L-c_R$ is
determined by the parity odd part of the action evaluated
on the near horizon geometry of the BTZ black hole,
and this contribution
comes solely from the Chern-Simons term.
\sectiono{Non-extremal BTZ Black Holes} \label{s4}
We now turn to the computation of the entropy of a general
non-extremal BTZ black hole solution given in
eqs.\refb{e16}, \refb{e17}.
First we note that the local
geometries of the extremal and non-extremal black holes are identical, since
both describe a locally $AdS_3$ space-time of curvature radius $l$.
Thus $l$ for a non-extremal black hole is determined by the same
equation as in the extremal case, i.e.\ via the extremization of the
function $g(l)$:
\begin{equation} \label{en1}
g'(l)=0\, .
\end{equation}
Since the contribution to the Noether charge from different terms
in the action add,
the entropy computed from Wald's general
formula\cite{9307038,9312023,9403028,9502009} can be regarded
as the sum of two terms -- one arising from the ${\cal L}^{(3)}_0$
term in the action and the other arising out of the ${\cal L}^{(3)}_1$
term in the action. In the $\rho$-$\tau$-$y$ coordinate system
the contribution from the ${\cal L}^{(3)}_0$
term may be expressed as:
\begin{equation} \label{ewald}
{\cal E}_0=8\, \pi \, \int \left. \, dy \, \sqrt{G_{yy}}
\,
{\partial {\cal L}^{(3)}_0\over \partial {\cal R}_{\rho\tau\rho\tau}}\, G_{\rho\rho}
\, G_{\tau\tau}\right|_{\rho=\rho_+}\, .
\end{equation}
Here ${\cal R}_{MNPQ}$ denotes the
Riemann tensor computed using
the three dimensional metric $G_{MN}$ and
in computing $\partial {\cal L}^{(3)}_0/\partial {\cal R}_{MNPQ}$ we need to treat
$G_{MN}$ and ${\cal R}_{MNPQ}$ as independent variables.
In writing \refb{ewald}
we have used the fact that all terms involving covariant
derivatives of the Riemann tensor vanish in the BTZ black hole
solution.
Using the fact
that in
three dimensions ${\cal R}_{MNPQ}$ can be expressed in terms of
${\cal R}_{MN}$ and $G_{MN}$, and that for BTZ black hole
both $\partial {\cal L}^{(3)}_0/\partial {\cal R}_{MNPQ}$ and ${\cal R}_{MNPQ}$
are proportional to $(G_{MP} G_{NQ}-G_{MQ} G_{NP})$,
\refb{ewald}
may be rewritten as\cite{9909061}:
\begin{equation} \label{en0}
{\cal E}_0 = \left. {4\pi\over 3} \, \, \left[
\int \, dy \, \sqrt{G_{yy}}
\,
G_{MN} \,
{\partial {\cal L}^{(3)}_0\over
\partial {\cal R}_{MN} }\right]\right|_{\rho=\rho_+}= \left.
{8\pi^2\over 3}
\, \rho_+ \, \left[G_{MN} \,
{\partial {\cal L}^{(3)}_0\over
\partial {\cal R}_{MN} }\right]\right|_{\rho=\rho_+}\, .
\end{equation}
In order to evaluate the right hand side of \refb{en0} we note that
for the BTZ black hole solution given in eq.\refb{e16},
\begin{equation} \label{en7}
{\cal R}_{MN} = -2\, l^{-2}\, G_{MN}\, ,
\end{equation}
and ${\cal L}^{(3)}_0=h(l)$ according to eq.\refb{en2}.
Thus
\begin{equation} \label{en8}
G_{MN} \,
{\partial {\cal L}^{(3)}_0\over
\partial {\cal R}_{MN} } = -{1\over 2}\, l^2\, {\cal R}_{MN} \,
{\partial {\cal L}^{(3)}_0\over
\partial {\cal R}_{MN} } =
-{1\over 2}\, l^2 l^{-2}{\partial \over \partial (l^{-2})} h(l)
= {l^3\over 4} h'(l) \, .
\end{equation}
Using \refb{en6} this can be written as
\begin{equation} \label{en9}
G_{MN} \,
{\partial {\cal L}^{(3)}_0\over
\partial {\cal R}_{MN} } = -{3\over \pi l} \, g(l) +{1\over \pi} g'(l)
= {3\, C\over
l}\, ,
\end{equation}
where in the second step we have used eqs.\refb{ecdef}
and \refb{en1}.
Hence \refb{en0} gives
\begin{equation} \label{en10}
{\cal E}_0 = 8 \, \pi^2\, C \, l^{-1}\, \rho_+ \, .
\end{equation}
Let us now turn to the contribution ${\cal E}_1$ from the
Chern-Simons term. For this we shall
view the black hole as a two dimensional solution and apply Wald's
formula. This gives
\begin{equation} \label{en11}
{\cal E}_1 = 8 \, \pi \, {\partial {\cal L}^{(2)}_1\over \partial R_{\rho\tau\rho\tau}}
\, g_{\rho\rho} g_{\tau\tau}
= {4\, K\, \pi^2} \, {\varepsilon^{\mu\nu} F_{\mu\nu}
\over \sqrt{-\det g}} \, {1\over 2}\,
g^{\rho\rho} g^{\tau\tau} g_{\rho\rho} g_{\tau\tau}
= - 8\pi^2 K l^{-1} \rho_-\, .
\end{equation}
Thus the total entropy is
\begin{equation} \label{en12}
{\cal E} = {\cal E}_0 + {\cal E}_1 = 4\pi^2 l^{-1}\left( (C-K) (\rho_++\rho_-)
+ (C+K) (\rho_+ - \rho_-)\right) \, .
\end{equation}
In order to express this as a function of
the physical mass $M$ and angular
momentum $J$ we need to relate $M$ and $J$ to
the parameters
$l$ and $\rho_\pm$. We define
$M$ and $J$ as the conserved Noether charges
associated with time and $y$ translation symmetries respectively.
The contribution to each of these charges
splits into a sum of two terms, one coming from the ${\cal L}^{(3)}_0$
part and the other coming from the ${\cal L}^{(3)}_1$ part.
The contribution from the ${\cal L}^{(3)}_0$ part was evaluated in
\cite{9909061} and is given by,
\begin{equation} \label{em1}
M_0\pm J_0 = {2\, \pi\over 3\, l} (\rho_+\pm \rho_-)^2
G_{MN} \, {\partial {\cal L}^{(3)}_0\over
\partial {\cal R}_{MN} } = {2\, \pi \, C\over l^2} \, (\rho_+\pm \rho_-)^2\, ,
\end{equation}
where in the second step we have used \refb{en9}.
On the other hand
the contributions from ${\cal L}^{(3)}_1$
were computed in \cite{0508218,0509148} and are given by:
\begin{equation} \label{em4}
J_1 = -{2\pi K\over l^2} \, (\rho_+^2 + \rho_-^2)\, ,
\qquad
M_1 = -{4\pi K\over l^2} \, \rho_+ \, \rho_-\, .
\end{equation}
Eqs.\refb{em1} and \refb{em4} now give
\begin{equation} \label{em7}
M\pm J = (M_0+M_1)\pm (J_0+J_1) = {2\pi (C\mp K)\over l^2}
(\rho_+\pm \rho_-)^2\, .
\end{equation}
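Here we used that the Chern-Simons contributions \refb{em4} combine into a
perfect square,
\begin{equation}
M_1\pm J_1 = -{2\pi K\over l^2}\left( 2\rho_+\rho_- \pm (\rho_+^2+\rho_-^2)\right)
= \mp {2\pi K\over l^2}\, (\rho_+\pm \rho_-)^2\, ,
\end{equation}
which adds to \refb{em1} to give the coefficient $2\pi(C\mp K)/l^2$.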
Substituting this into \refb{en12} and using
the definitions of $c_L$, $c_R$ given in \refb{e26} we get
\begin{equation} \label{e35}
{\cal E} = 2\pi \sqrt{ c_L q_L\over 6} + 2\pi \sqrt{ c_R q_R\over 6}\, ,
\end{equation}
where
\begin{equation} \label{e34}
q_L={1\over 2} (M-J), \qquad q_R = {1\over 2} (M+J)\, .
\end{equation}
This
is the correct expression for the entropy of a non-extremal black
hole in the presence of higher derivative and Chern-Simons
terms\cite{0506176}.
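As a consistency check, solving \refb{em7} for $\rho_+\pm\rho_-$ and
substituting into \refb{en12} gives
\begin{equation}
{\cal E} = 4\pi^2 \sqrt{(C+K)(M-J)\over 2\pi}
+ 4\pi^2 \sqrt{(C-K)(M+J)\over 2\pi}\, ,
\end{equation}
which indeed coincides with \refb{e35} after using \refb{e26} and \refb{e34}.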
Note that the entropy and the near horizon geometry of
the non-extremal BTZ
black hole are determined by extremizing the function
\begin{equation} \label{e33}
{\cal E} = 2\pi \left[ q_L\, e_L + q_R e_R - \left({1\over e_L}
+{1\over
e_R}\right) g(l) + \pi \, K\left({1\over e_L}-{1\over e_R}\right)
\right]\, ,
\end{equation}
with respect to $l$, $e_L$, $e_R$.
Here
\begin{equation} \label{ep1}
e_L= {l\over \rho_+ - \rho_-},
\qquad e_R = {l\over \rho_+ + \rho_-}
\, .
\end{equation}
We suspect that one can follow the procedure of \cite{0506177} to
manipulate Wald's formula and the equations of motion to derive this
extremization principle directly, but we have not so far succeeded in
doing this.
\label{sec:intro}
Urban traffic regularly exhibits disturbances and inefficiencies caused by simple traffic management schemes faced with large volumes of vehicles.
Especially smaller intersections are typically handled by static priority rules, resulting in vehicles approaching from a minor road having to yield.
Moreover, occlusions through buildings or other objects are highly prevalent in urban areas, limiting the view for both human drivers and vehicle-bound sensory systems.
The increasing use of connected vehicles (CVs) and connected automated vehicles (CAVs) opens up new opportunities to increase the traffic efficiency.
Those vehicles can announce their presence and possibly share perception data with surrounding road users via a communication link.
Moreover, with edge computing resources becoming available in urban areas, it is viable to build and maintain a local environment model of, e.g., an intersection and its surroundings.
Such an edge server can distribute the environment model to connected vehicles in the operational area.
CAVs, for instance, can make use of the information by incorporating it into their planning algorithms.
In the publicly funded project MEC-View, research on connected automated driving was conducted using a testing site at a suburban three-way intersection in the city of Ulm in Germany \cite{buchholz2021handling}.
Due to buildings occluding the view onto the priority road, an automated vehicle merging from the side road has to decelerate strongly, before being able to safely enter the intersection based solely on its own perception system.
With the support of the environment model provided by the edge server, the automated vehicle can transition smoothly onto the main road, given that appropriate space is available.
We build upon this approach and discuss the potential of multi-agent planning schemes that are executed on the server.
Based on the fused environment model, a cooperative plan for handling intersection traffic is derived that can be proposed to the connected vehicles as behavioral instructions.
Explicit deviations from static priority rules become possible.
For instance, vehicles on the main road can be requested to slow down, thus letting a vehicle from the side road merge into the emerging gap, as depicted in Fig.~\ref{fig:intro}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{intro}
\caption{Cooperative maneuver at an urban intersection. The planning module on the edge server requests the blue vehicles on the main road to slow down. Hence, the turquoise vehicle can merge without having to stop.}
\label{fig:intro}
\end{figure}
Prior research on automatic intersection management (AIM) primarily focuses on non-learning algorithms, like reservation-based or optimization-based programs.
At the same time, machine learning-based approaches show remarkable results on prediction tasks in automated driving as well as planning for a single ego vehicle.
The lack of fitting ground-truth data prevents the application of supervised learning for cooperative behavior planning.
To bridge this gap, the present work proposes to train a reinforcement learning (RL) policy for multi-agent planning in a simulated environment, resulting in the following contributions:
\begin{itemize}
\item Leveraging machine learning to perform cooperative multi-agent planning for urban automated driving,
\item To the best of our knowledge, we propose the first AIM system exploiting graph neural networks,
\item Evaluation based on real-world traffic data taken from a publicly available urban driving dataset.
\end{itemize}
The remainder of the paper is structured as follows:
Section~\ref{sec:sota} discusses related work in the field of AIM and machine learning-based planning for automated driving.
The proposed behavioral planning scheme is comprehensively introduced in Section~\ref{sec:approach}.
Afterwards, evaluation results obtained in synthetic simulations and based on real-world traffic data are given in Section~\ref{sec:eval}.
Section~\ref{sec:conclusion} concludes the paper and gives an outlook on future work.
\section{Related Work}
\label{sec:sota}
The analysis on the state of the art first considers existing approaches to AIM.
Because machine learning is seldom used for AIM, we subsequently investigate learning-based behavioral planning methods.
Due to the large body of existing literature, we present a selection of commonly used approaches and refer the reader to surveys for a more extensive overview.
\subsection{Automatic Intersection Management}
\label{ssec:aim}
Past research brought forth a variety of AIM schemes, surveyed for instance by \cite{zhong2020autonomous}.
The authors identify centralization as a crucial feature for distinguishing AIM schemes.
Thereby, a fully centralized scheme exhibits a single coordination unit that is in charge of planning the intersection traversal and acts as the communication partner for all vehicles.
In a fully distributed AIM, a cooperative plan is negotiated by the vehicles on their own.
A centralized, reservation-based AIM system is proposed in \cite{dresner2008multiagent}, which employs a first-in-first-out policy for assigning clearance to cross the intersection.
A driver agent places a request when its vehicle is about to enter the monitored intersection area, covering possible conflict points with other paths.
The intersection manager maintains tile-based reservations and confirms the request if the affected tiles are free.
This approach can be combined with a traffic light to enable the co-usage of the intersection by human drivers and automated vehicles.
Optimization-based intersection management systems are published, for instance, in \cite{malikopoulos2018decentralized} and \cite{kamal2015vehicle-intersection}.
Those works assume full penetration of CAVs that laterally follow predefined lanes on urban intersections.
The distributed energy-optimizing approach \cite{malikopoulos2018decentralized} further disallows turning maneuvers and the utilization of two conflicting paths at the same time.
In \cite{kamal2015vehicle-intersection}, the longitudinal control of vehicles is performed by a centralized intersection coordination unit employing a model predictive control (MPC) scheme.
Both works demonstrate efficiency gains in time and fuel consumption by comparison to a traditional signalized intersection.
A prevalent issue with optimization-based approaches is the unfavorable scaling of computational demand with increasing traffic density.
In \cite{mertens2022cooperative}, a novel AIM scheme is presented that is also capable of handling mixed traffic, i.e., simultaneous usage by automated and human-driven vehicles.
The concept of platooning \cite{morales_medina2018cooperative} can also be used for AIM.
Based on a so-called virtual inter-vehicle distance, a pair of vehicles can adapt their velocities to cross their conflict point with sufficient clearance.
The authors acknowledge that significant adaptions would have to be made for managing an intersection under mixed traffic.
\subsection{Machine Learning-Based Planning}
\label{ssec:mlplan}
Machine learning-based approaches to automated driving have attracted rising interest from researchers in recent years.
A survey of recent deep reinforcement and imitation learning planning methods for a single ego vehicle can be found in \cite{zhu2021survey}.
The authors categorize published works by the type of input data (e.g. sensor measurements or object detections) and output representation (e.g. behavioral planning or direct control outputs).
Because individual sensor measurements are not suited for cooperative planning over multiple vehicles, we limit our analysis to methods that require a prior perception system to be in place.
Readers interested in machine learning-based prediction for automated driving are referred to comprehensive surveys on the topic, like \cite{lefevre2014survey}.
On the output side, multi-agent planning requires an intermediate representation that can be passed to various vehicles in the scene, laying the focus on high-level behavioral planning approaches.
Imitation learning describes the application of supervised learning techniques to automated driving by training on expert drivers' demonstrations, which can be obtained from datasets or accordingly equipped testing vehicles.
Based on object detections from a dedicated perception system and high-definition map information, a typical approach is to render the surroundings of the ego vehicle in a raster image that is subsequently processed by a convolutional neural network (CNN) \cite{chen2019deep,bansal2019chauffeurnet}.
To address the problem of distributional shift between training data and closed-loop test conditions, various improvements have been proposed, like perturbing a random subset of training trajectories to teach the model to recover from atypical states \cite{bansal2019chauffeurnet}.
Being based on supervised learning, those techniques share the large needs for high-quality training data.
This limits their prospective transfer to cooperative multi-agent planning because ground-truth data showing cooperative maneuvers is virtually not available.
Urban traffic datasets (e.g. the inD dataset \cite{bock2020ind}) show road users obeying static priority rules or traffic lights -- both entities that shall become obsolete with cooperative planning.
In contrast to supervised learning, RL approaches evade the requirement for large datasets by instead exploring possible actions in a simulated environment and exploiting a reward signal to learn the desired behavior.
In \cite{capasso2021end--end}, a driving policy for controlling the acceleration and steering angle is trained through RL that is applied to multiple vehicles in a common simulated environment.
As there is no explicit communication between the different vehicles' policies, no cooperation is shown in traffic.
Based on a raster image representation, this approach shares the unfavorable scaling of computational load with the number of participants in the scene, because each vehicle requires an individual image, centered on its pose for sensible inference.
An alternative RL-based approach to coordinated driving on an urban intersection was published in \cite{wu2019dcl-aim}.
By maintaining a tile-based reservation of the intersection, the decentralized policies can choose from the set of actions that do not cause a collision.
Apart from this limitation of the action space, there is no further inter-agent communication that could enable cooperative maneuvers.
When encoding the semantic environment of a vehicle in urban traffic, the number of potentially relevant entities (e.g. other vehicles) is highly dynamic.
This makes fixed-size network architectures and input representations often used in RL unsuitable for the task at hand.
In \cite{huegle2019dynamic}, it is proposed to encode input features per vehicle using a multilayer perceptron followed by a permutation invariant operation for pooling the resulting features.
The aggregated feature vector is then propagated through another fully connected network to finally infer actions for a single ego vehicle.
The authors extend their work to encode whole traffic scenes including lanes and traffic signs and compare it to using a graph convolutional network for the same task in \cite{huegle2020dynamic}.
Similarly, \cite{hart2020graph} proposes to encode the vehicles being present in the scene as graph vertices.
However, none of the described works can handle multi-agent planning.
\section{Proposed Approach}
\label{sec:approach}
In this section, our proposed approach is presented, beginning with a discussion on learning paradigms for multi-agent usage.
Afterwards, the graph-based input representation is introduced, followed by details on the network architecture and reward engineering.
\subsection{Learning Paradigm}
\label{ssec:learningparadigm}
In RL algorithms, an agent typically interacts with an environment in discrete time steps.
The agent observes the current state of the environment and subsequently chooses an action, whose effect is evaluated by a reward signal.
With multiple entities to be controlled, one can pursue different learning paradigms depending on the degree of centralization in multi-agent reinforcement learning \cite{gronauer2021multi-agent}.
Instead of having the various agents interact individually with the environment and learn independent policies, cooperative planning is modeled best by the centralized training centralized execution paradigm, because agents that take part in a cooperative maneuver must be able to communicate explicitly.
This holds not only during training, but also at inference time.
In paradigms relying on decentralized execution, the agents would have to learn an implicit communication scheme through their behavior.
With the fused environment model being available to the server-side planner, considering the planning problem over all vehicles in the scene using a joint RL agent allows for explicit communication and hence better cooperation.
Like many RL problems, the cooperative planning problem can be denoted as a Markov decision process (MDP), defined as the tuple $(S,\,A,\,T,\,R)$.
It consists of a set of states $S$ that fully describe the traffic scene at a given time. $A$ denotes the set of actions the RL agent can choose from while interacting with the environment.
The transition function $T(s,\,a,\,s')$ describes the probability of changing from state $s \in S$ to $s' \in S$ when applying action $a \in A$, whereas the reward signal is determined by the function $R(s,\,a)$.
Since the multi-agent planning problem contains different vehicles in the scene, the dimensionalities of the state space and the action space depend on the number of vehicles currently present and may vary over time.
\subsection{Input Representation}
\label{ssec:inputrepresentation}
A well-suited input representation is crucial for applying artificial neural networks successfully.
We identify three major requirements for an input scene representation to be used in cooperative multi-agent planning:
\begin{itemize}
\item Invariance on the number of vehicles in the scene,
\item Permutation invariance of the input nodes,
\item Permutation equivariance regarding the output nodes.
\end{itemize}
Simple tabular representations already lack the invariance properties.
The limitations of fixed-sized inputs for behavior planning are elaborated more extensively in~\cite{huegle2019dynamic}.
A rendered raster image of the scene, as often used for CNNs, fulfills the invariance requirements, but typically requires a target agent to be centered around \cite{chen2019deep}.
This process must be repeated to produce individual outputs for each agent, making the application to a large number of vehicles computationally infeasible.
Permutation equivariance means that permuting the agents in the input representation permutes the inferred outputs accordingly, so the result for each agent does not depend on its position in the input vector.
Hence, our work proposes to use a lean and flexible graph-based scene representation, shown in Fig.~\ref{fig:scene_graph}, which fulfills all above requirements.
The current state of the environment is thus defined as $S=(V,\,E,\,U)$, with $V$ being the set of vertices corresponding to the vehicles in the scene and $E$ denoting edges depending on the pairwise relation between vehicles.
For each vehicle, one vertex in $V$ stores the corresponding input features.
Each of the directed edges is assigned one of two edge types in $U$, either \emph{same~lane}, or \emph{crossing}:
\begin{equation}
(v_1,\,r,\,v_2) \in E \subseteq V \times U \times V.
\end{equation}
Two vehicles in front of the intersection whose paths cross or merge are connected bidirectionally with crossing edges, like $v_1$,~$v_2$, and~$v_3$ in the figure.
The same lane edge, in contrast, is used to connect two vertices of vehicles on the same path, pointing from the predecessor to its follower (e.g. $v_6$ and $v_5$).
This is motivated by the observation that vehicles should adapt their behavior to the preceding vehicle and not vice-versa.
Note that the graph does not need to be connected.
Some vehicles may form a disjoint sub-graph, if they share no conflicts with the remaining vehicles, as it is the case for $v_5$ and $v_6$ or $v_7$ in Fig.~\ref{fig:scene_graph}.
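As a minimal illustration of this construction (a sketch, not the implementation used in this work), the typed edge lists can be derived from per-vehicle records as follows; the record layout and the predicate \texttt{paths\_conflict}, which is assumed to encode the static crossing/merging relation between paths, are placeholder names, and the sign convention for the position $s$ follows the path parameterization introduced below:
\begin{verbatim}
from itertools import permutations

def build_edges(vehicles, paths_conflict):
    # vehicles: list of dicts with keys "path" (lane id) and "s"
    # (longitudinal position, negative in front of the intersection).
    # Returns (src, dst, rel) with rel 0 = same lane, 1 = crossing.
    src, dst, rel = [], [], []
    for i, j in permutations(range(len(vehicles)), 2):
        vi, vj = vehicles[i], vehicles[j]
        if vi["path"] == vj["path"]:
            if vi["s"] > vj["s"]:  # directed: predecessor -> follower
                src.append(i); dst.append(j); rel.append(0)
        elif (vi["s"] < 0 and vj["s"] < 0
              and paths_conflict(vi["path"], vj["path"])):
            # both vehicles are in front of the intersection and their
            # paths conflict; permutations() yields both directions
            src.append(i); dst.append(j); rel.append(1)
    return src, dst, rel
\end{verbatim}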
\begin{figure}
\centering
\includegraphics[width=\linewidth]{scene_graph}
\caption{The graph-based input representation illustrated on an arbitrary traffic scene at a four-way intersection. The vehicles' turning intentions are denoted by arrows on their hood.}
\label{fig:scene_graph}
\end{figure}
The input feature vector for each vehicle consists of three values and is denoted as $h^{(0)} = [s,\,v,\,d]^T$, where the upper index denotes the layer number.
The longitudinal position of the vehicle along its path is denoted by $s$, with $s=0$ defined as the point where the path leaves the intersection area.
This corresponds to the longitudinal coordinate of a Frenet coordinate pair.
Because different maneuvers (e.g. straight driving and right turns) cause a difference in the path length on the intersection, the effective path length is scaled to a common reference length $s_{\mathrm{ref}}$, as depicted in Fig.~\ref{fig:path_length}.
Thereby, the entry point to the intersection area is located at $s=-s_{\mathrm{ref}}$ consistently, ensuring that the localization on the incoming lanes is independent of the maneuver to be driven.
Moreover, this normalization makes the scene representation robust to slight changes in intersection geometry.
The second input feature $v$ denotes the scalar velocity of the corresponding vehicle normalized over the speed limit of the lane it is currently driving on.
To allow the network to sense immediate proximity of other vehicles, the input features are complemented by a distance measure $d$ based on the Mahalanobis distance \cite{de_maesschalck2000mahalanobis}.
The distance measured from vehicle~$i$ to vehicle~$j$ is calculated as
\begin{equation}
\label{eq:dist}
d_{ij} = \sqrt{(\boldsymbol{p}_j - \boldsymbol{p}_i)^T \boldsymbol{\varSigma}_i^{-1} (\boldsymbol{p}_j - \boldsymbol{p}_i)},
\end{equation}
where $\boldsymbol{p}_i$ denotes the position of vehicle $i$ in Cartesian coordinates. The covariance matrix is given as
\begin{equation}
\boldsymbol{\varSigma}_i =
\boldsymbol{R}_{\psi_i}
\begin{bmatrix}
l^2/4 & 0 \\
0 & w^2/4
\end{bmatrix}
\boldsymbol{R}_{\psi_i}^T,
\end{equation}
with $l=\SI{5}{\meter}$ and $w=\SI{2}{\meter}$ describing the standardized length and width of a vehicle.
$\boldsymbol{R}_{\psi_i}$ denotes the 2D rotation matrix using the heading angle $\psi_i$.
To determine the input feature for a particular vehicle, the distance to each other vehicle is computed according to \eqref{eq:dist}, and the inverse of the minimum distance value is passed to the network.
Using the inverse instead of the plain distance value proved to yield better model convergence.
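For reference, a direct transcription of this feature into code could read as follows (a sketch; the array layout and the function name are our assumptions):
\begin{verbatim}
import numpy as np

def proximity_feature(positions, headings, i, l=5.0, w=2.0):
    # positions: (N, 2) Cartesian positions, headings: (N,) yaw angles.
    # Returns the inverse of the minimum distance from vehicle i to any
    # other vehicle according to the Mahalanobis-type distance above.
    c, s = np.cos(headings[i]), np.sin(headings[i])
    R = np.array([[c, -s], [s, c]])
    Sigma_inv = np.linalg.inv(R @ np.diag([l**2 / 4, w**2 / 4]) @ R.T)
    d_min = np.inf
    for j in range(len(positions)):
        if j != i:
            diff = positions[j] - positions[i]
            d_min = min(d_min, np.sqrt(diff @ Sigma_inv @ diff))
    return 1.0 / d_min  # the inverse proved to converge better
\end{verbatim}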
\begin{figure}
\centering
\includegraphics[width=\linewidth]{path_length}
\caption{Path parameterization by a common reference length. Each path enters the intersection area at $s=-s_\mathrm{ref}$ and leaves it at $s=0$.}
\label{fig:path_length}
\end{figure}
\subsection{Network Architecture}
\label{ssec:networkdetails}
In the present work, the behavioral control of vehicles is performed by applying a commanded longitudinal acceleration within the range $[a_{\mathrm{min}},\,a_{\mathrm{max}}]$, requiring an RL algorithm suited for continuous control.
We propose to use the twin delayed deep deterministic policy gradient (TD3) algorithm \cite{fujimoto2018addressing}, an extension of the deep deterministic policy gradient (DDPG) \cite{lillicrap2016continuous}.
Both methods are actor-critic RL algorithms for actions in continuous space.
TD3 consists of two function approximators, namely the \emph{actor} and the \emph{critic}.
Based on a given state and action input, the critic network is trained to predict the discounted reward as $Q$ value estimates.
The actor gets the current environment state as an input and outputs an action to be performed in the particular time step, optimized by using the critic output as the loss.
The proposed graph neural network (GNN) architecture is depicted in Fig.~\ref{fig:network}.
For each vertex, the low-dimensional input features are first processed by a dense layer \texttt{enc}.
Note that this operation is performed individually for each vertex using shared parameters, disregarding the graph structure defined by the edges.
The encoded vertex features are then propagated through two relational graph convolution layers, \texttt{conv\_1} and \texttt{conv\_2}.
In contrast to simple graph convolution layers, multiple weight matrices corresponding to the different edge types are used for message passing \cite{gangemi2018modeling}.
During a forward pass, the hidden features of the vertices are propagated along the outgoing edges, while being multiplied by the respective weight matrix.
Hence, each node receives a variable amount of such messages that have to be integrated into its own feature vector.
This is done using a permutation invariant aggregation function like the element-wise maximum, mean, or sum.
In the present work, the maximum operation delivered the best results.
The update for the hidden feature vector of node $i$ is thus given as
\begin{equation}
\boldsymbol{h}_i^{(l+1)} = \sigma \left( \sum_{r \in U} \max_{j \in \mathcal{N}_i^r} \boldsymbol{W}_r^{(l)} \boldsymbol{h}_j^{(l)} + \boldsymbol{W}_0^{(l)} \boldsymbol{h}_i^{(l)} \right),
\end{equation}
where the set of neighbor nodes connected to the target node $i$ by incoming edges is denoted by $\mathcal{N}_i^r$.
The weight matrices for each edge type $r \in U$ are called $\boldsymbol{W}_r$, while the previous target node vector is multiplied by $\boldsymbol{W}_0$.
In case a vertex has no incoming edges (like $v_4$ or $v_7$ in Fig.~\ref{fig:scene_graph}), the node update coincides with a single dense layer on the node's own state.
Finally, the resulting feature vector is passed through a non-linear activation function, empirically chosen as rectified linear unit (ReLU).
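For concreteness, the relational update with max aggregation can be sketched in PyTorch as follows; this mirrors the node update equation above but is not the exact implementation used here, and \texttt{index\_reduce} requires a recent PyTorch version:
\begin{verbatim}
import torch
import torch.nn as nn

class RelationalMaxConv(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations=2):
        super().__init__()
        self.rel = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False)
            for _ in range(num_relations))
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, edge_index, edge_type):
        # h: (N, in_dim); edge_index: (2, E) as (src, dst); edge_type: (E,)
        out = self.self_loop(h)            # W_0 h_i term
        for r, lin in enumerate(self.rel):
            mask = edge_type == r
            src, dst = edge_index[0, mask], edge_index[1, mask]
            msg = lin(h[src])              # W_r h_j messages
            agg = h.new_full((h.size(0), msg.size(1)), float("-inf"))
            agg = agg.index_reduce(0, dst, msg, reduce="amax")
            agg[torch.isinf(agg)] = 0.0    # nodes without incoming r-edges
            out = out + agg
        return torch.relu(out)
\end{verbatim}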
\begin{figure}
\centering
\includegraphics[width=\linewidth]{network}
\caption{The graph neural network architecture is depicted, consisting of one dense encoder layer, two graph convolution layers, and one dense output layer. Below the layer identifiers, their dimensionalities are shown. The encoder has three input channels for the actor and four channels for the critic network.}
\label{fig:network}
\end{figure}
While the actor network and the critic network are constructed analogously up to the point described above, they differ in the output layer, depicted on the right side of Fig.~\ref{fig:network}.
The actor network is responsible for deriving an action for each entity in the scene to be executed for a given time horizon, while the critic is in charge of estimating the Q~value for the entire graph.
With each vehicle being represented by a vertex in the graph, there is one regression target per vertex that describes the commanded acceleration for the corresponding vehicle in the actor network.
The latent feature output by the GNN is reduced to a single unit using a final dense layer \texttt{a\_dec}, whose weights are shared across nodes.
To limit the action output to a defined range, a hyperbolic tangent (tanh) activation function is used on the output layer.
The normalized value range is subsequently mapped to an acceleration between \SI{-5}{\meter\per\second\squared} and \SI{+5}{\meter\per\second\squared}.
With the critic network's output being a performance measure in the form of a single Q~value estimate, an aggregation function for the latent feature vectors of all nodes is required.
The Q~values' range is not limited, hence a final dense layer \texttt{q\_dec} with linear activation is used as the output layer.
Because the critic network requires the chosen action in addition to the state representation, the action values are concatenated with all vertex input features.
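The two output heads can be sketched as follows; the class names and the choice of max aggregation for the critic are our illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

A_MAX = 5.0  # acceleration limit in m/s^2

class ActorHead(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        self.dec = nn.Linear(latent_dim, 1)

    def forward(self, h):  # h: (N, latent_dim), one row per vehicle
        # tanh bounds the output, then scale to [-A_MAX, A_MAX]
        return A_MAX * torch.tanh(self.dec(h)).squeeze(-1)

class CriticHead(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        self.dec = nn.Linear(latent_dim, 1)  # linear, unbounded output

    def forward(self, h):
        # permutation invariant aggregation over all nodes, then a
        # single Q-value estimate for the whole graph
        return self.dec(h.max(dim=0).values)
\end{verbatim}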
\subsection{Reward Engineering}
\label{ssec:reward}
Apart from the network architecture described above, the RL algorithm requires a reward scheme to learn a reasonable behavior within the simulation environment.
The reward signal is composed of a weighted sum of reward components
\begin{equation}
R = \sum\limits_{k \in \mathcal{R}} w_k R_k,
\end{equation}
where the set of reward components is given as $\mathcal{R} = \{\mathrm{velocity,\,action,\,idle,\,proximity,\,collision}\}$.
The velocity reward is the main driver for learning a non-trivial solution through rewarding large velocities and is defined as
\begin{equation}
R_\mathrm{velocity} = \begin{cases}
1.25 \frac{v}{v_\mathrm{lim}} & \frac{v}{v_\mathrm{lim}} \le 0.8 \\
1.0 & 0.8 < \frac{v}{v_\mathrm{lim}} \le 1.0 \\
6.0 - 5.0 \frac{v}{v_\mathrm{lim}} & 1.0 < \frac{v}{v_\mathrm{lim}},
\end{cases}
\end{equation}
where $v_\mathrm{lim}$ describes the vehicle's lane speed limit.
Regularizing the model against applying large acceleration magnitudes is done by the action penalty that is defined as the negative absolute commanded acceleration.
When striving to avoid collisions, the simplest solution is to stop the whole traffic, which is not desirable.
Therefore, the idle penalty is set to $R_\mathrm{idle}=-1$ in case all vehicles are standing still.
To teach the model to keep suitable safety distances to nearby vehicles, the proximity component is used to penalize actions that cause two vehicles to get dangerously close.
This penalty is calculated based on the aforementioned modified Mahalanobis distance measure (cf.~\eqref{eq:dist}), which takes the relative direction of the obstacle into account.
In the case that two vehicles collide, the collision penalty $R_\mathrm{collision}=-1$ is applied to let the model implicitly learn collision avoidance, while the episode is aborted on the spot, preventing further positive rewards from being accumulated.
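Putting the pieces together, a per-step reward can be assembled roughly as follows; this is a sketch in which the functional form of the proximity penalty, the threshold \texttt{d\_safe}, and the summation over vehicles are our assumptions, while the velocity term and the weights follow the text (weights as in Table~\ref{tab:rewardweights} below):
\begin{verbatim}
WEIGHTS = {"velocity": 0.03, "action": 0.01, "idle": 0.01,
           "proximity": 0.2, "collision": 1.0}

def velocity_reward(v, v_lim):
    x = v / v_lim
    if x <= 0.8:
        return 1.25 * x
    if x <= 1.0:
        return 1.0
    return 6.0 - 5.0 * x

def step_reward(vehicles, collided, d_safe=1.0):
    # vehicles: dicts with velocity "v", speed limit "v_lim", commanded
    # acceleration "a_cmd", and minimum modified distance "d_min"
    r = 0.0
    for veh in vehicles:
        r += WEIGHTS["velocity"] * velocity_reward(veh["v"], veh["v_lim"])
        r += WEIGHTS["action"] * (-abs(veh["a_cmd"]))
        if veh["d_min"] < d_safe:        # dangerously close
            r += WEIGHTS["proximity"] * (veh["d_min"] - d_safe)
    if all(veh["v"] == 0.0 for veh in vehicles):
        r += WEIGHTS["idle"] * (-1.0)    # idle penalty
    if collided:
        r += WEIGHTS["collision"] * (-1.0)
    return r
\end{verbatim}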
\section{Experiments}
\label{sec:eval}
Training and evaluation of RL algorithms require a suited simulation environment.
For the task of behavioral planning in automated driving, the simulator should at least provide a kinematic vehicle model and reasonable interaction between vehicles.
In the present study, the open-source environment Highway-env \cite{highway-env} is used and slightly adapted to be employed for centralized multi-agent planning.
The simulation of vehicle kinematics is done according to the kinematic bicycle model \cite{kong2015kinematic}, which suffices for behavioral planning.
The graph-based scene representation and graph neural network layers are based on the PyTorch Geometric API \cite{pyg}.
\begin{table}
\caption{Reward weights}
\label{tab:rewardweights}
\centering
\begin{tabular}{lrrrrr}
\toprule
\textbf{Reward} & velocity & action & idle & proximity & collision \\ \midrule
\textbf{Weight} & 0.03 & 0.01 & 0.01 & 0.2 & 1.0 \\ \bottomrule
\end{tabular}
\end{table}
The choice of reward weights is based on a grid search that was conducted on a reduced variant of the simulation environment resembling the key behavior, while being much less computationally demanding.
The reward weights used throughout this study are given in Table~\ref{tab:rewardweights}.
During training, the latest model is evaluated on a separate validation environment for ten episodes every 5000~time steps.
The validation environment is constructed the same way as the training environment but initialized with a different seed.
Each time the validation shows a new best validation reward, the current model parameters are saved to disk.
We benchmark our approach against two baselines in synthetic simulation: static priority rules (PR) and a first-in-first-out scheme (FIFO), resembling the currently prevalent approach in real-world and a seminal AIM scheme.
Traffic obeying static priority rules is simulated using the driver models provided by Highway-env, which rely on the intelligent driver model (IDM) \cite{treiber2000congested} for longitudinal control and additional logic for handling intersections.
We applied minor tweaks to the driver models to obtain reasonable results for more dense traffic:
\begin{itemize}
\item The derivation of the commanded acceleration is modified to correctly handle a target speed of zero.
\item Scheduling at an intersection is based on vehicle priorities that are inferred from their intended maneuver instead of the current lane priority.
\item The yielding logic is extended to respect a specific stop point in front of the intersection.
\end{itemize}
The FIFO scheme, on the other hand, prioritizes the incoming vehicles based on their distance to the intersection.
Thereby, non-conflicting paths can be used at the same time, possibly allowing multiple vehicles on the intersection at a given time.
Note that this policy does not enforce a strict FIFO ordering on the whole intersection, but rather within groups of conflicting paths, which leads to a considerable performance increase.
The way of generating simulated traffic differs between training and evaluation runs.
During training, vehicles are spawned at a certain probability on all incoming lanes that feature enough space, with their destination also being chosen randomly.
In the course of training, the spawn probability is continuously increased until it saturates at \SI{5}{\percent} per time step.
Thereby, the intersection is kept busy and allows the RL algorithm to obtain meaningful data samples to learn from.
The evaluation runs are based on scenario definitions that are generated using a slightly different scheme.
In that case, the time between vehicles appearing on a particular lane is governed by a shifted exponential distribution.
This resembles a Poisson process, where the shift on the distribution of the spawn period ensures a minimum distance between vehicles.
If a traffic jam has formed so that there is no space for a vehicle to be spawned, the generation is suspended to prevent immediate collisions.
For each scenario to be generated, the desired vehicle rate is chosen randomly from a uniform distribution over the interval $[0.2,\,0.4]$~vehicles per second and major road lane.
The vehicle rate on the minor road lanes is set to half of that value.
In both training and evaluation, the initial vehicle velocity is chosen uniformly between \SI{60}{\percent} and \SI{100}{\percent} of the corresponding lane speed limit.
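The scenario generation process described above can be sketched as follows (the minimum gap is a hypothetical placeholder value):
\begin{verbatim}
import numpy as np

def spawn_times(rate, t_end, t_min=2.0, rng=np.random.default_rng(0)):
    # Shifted exponential inter-arrival times: a Poisson-like process
    # with a guaranteed minimum gap t_min between consecutive spawns.
    assert 1.0 / rate > t_min, "rate too high for the chosen minimum gap"
    times, t = [], 0.0
    while True:
        t += t_min + rng.exponential(1.0 / rate - t_min)
        if t >= t_end:
            return times
        times.append(t)

# per scenario: rate ~ Uniform(0.2, 0.4) veh/s on each major road lane,
# and half of that value on the minor road lanes
\end{verbatim}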
\subsection{Synthetic Simulation}
\label{ssec:synthetic}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{syn_flow_rate}
\caption{Flow rate during various evaluation runs by using static priority rules (PR), the FIFO~policy, and the RL~planner.}
\label{fig:syn_flow_rate}
\end{figure}
All experiments within this subsection were performed on a four-way intersection, whose layout is analogue to the one depicted in Fig.~\ref{fig:scene_graph}.
The \emph{flow rate} describes the number of vehicles that cross an intersection (or other road infrastructure) during a given time frame.
Figure~\ref{fig:syn_flow_rate} shows the flow rate distribution over 100 evaluation runs (each of \SI{100}{\second} length) of varying traffic density.
It can be observed that already the rather simple FIFO~scheme achieves a benefit over static priority rules, while the learned RL~planner outperforms both baselines regarding the median values.
To further investigate the performance gain, we analyze the ratio of vehicles that had to stop during the maneuver.
A vehicle trajectory is considered to contain a stop if the velocity falls below \SI{0.3}{\meter\per\second} for at least one time step.
This threshold was chosen for numerical reasons.
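This criterion translates directly into code:
\begin{verbatim}
import numpy as np

V_STOP = 0.3  # m/s

def stop_ratio(trajectories):
    # trajectories: list of per-vehicle velocity arrays; returns the
    # fraction of vehicles whose velocity ever drops below V_STOP
    stops = sum(bool(np.any(np.asarray(v) < V_STOP))
                for v in trajectories)
    return stops / len(trajectories)
\end{verbatim}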
By categorizing the vehicles by their incoming road priority, the effect on traffic approaching from a minor road becomes apparent, as depicted in Fig.~\ref{fig:syn_road_stops}.
Clearly, the static priority rules induce a significant traffic buildup on the minor road that forces nearly all vehicles to stop.
The FIFO~policy manages to let more vehicles from the minor road pass the intersection, but in turn causes a large proportion of stops also on the major road.
In contrast, the RL~planner succeeds to get a large amount of vehicles across the intersection, while keeping the traffic flow mostly intact.
This behavior might be explained by the planner's learned ability to adapt the vehicles' velocities early to fit into an emergent gap.
It should be mentioned that the RL~planner cannot completely eliminate collisions, as denoted in the first row of Table~\ref{tab:collisions}.
However, collisions occur extremely rarely, which makes it very challenging to reduce them further, given that the RL~algorithm has to learn collision avoidance implicitly via the reward signal.
In practice, the remaining failure cases are not an issue, because the cooperative maneuver will only be advertised to the connected vehicles if it fulfills sanity checks like being collision-free.
In case no viable cooperative plan is found, the vehicles simply resort to local planning.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{syn_road_stops}
\caption{The number of vehicles approaching the intersection from the major and minor road as well as the ratio of those that had to stop when governed by static priority rules (PR), the FIFO~policy, and the RL~planner.}
\label{fig:syn_road_stops}
\end{figure}
\begin{table}
\caption{Collision rates}
\label{tab:collisions}
\centering
\begin{tabular}{lrrr}
\toprule
\textbf{Intersection} & \textbf{Priority rules} & \textbf{FIFO scheme} & \textbf{RL planner} \\ \midrule
Synthetic 4-way & \SI{0.0}{\percent} & \SI{0.0}{\percent} & \SI{0.028}{\percent} \\ \midrule
inD & \SI{0.0}{\percent} & \SI{1.918}{\percent} & \SI{0.584}{\percent} \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Simulation Based on Real-World Traffic Data}
\label{ssec:realworld}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ind_intersection}
\caption{Bird's-eye view of the intersection, where the real-world vehicle tracks taken from the inD dataset were collected (image adopted from \cite{bock2020ind}).}
\label{fig:ind_intersection}
\end{figure}
Apart from the simulation based on synthetic data, we also evaluated the cooperative planning scheme on real-world urban traffic data taken from the inD dataset \cite{bock2020ind}.
The dataset contains tracks of vehicles and vulnerable road users that were recorded at four urban intersections in Germany.
We selected a four-way intersection connecting a priority road with a minor road that is managed by static priority rules.
The major road also features isolated lanes for turning left, as depicted in Fig.~\ref{fig:ind_intersection}.
Simulating the traffic according to the cooperative planning approach on this intersection makes the following assumptions inevitable.
Firstly, the intersection geometry is only approximated in simulation.
However, this is not an issue for behavioral planning, which is mostly independent of road geometry.
Considering the road curvature, vehicles may not traverse it with arbitrary speed, which is ensured by defining lane-dependent velocity limits.
As the dynamic properties of the vehicles shown in the dataset are unknown, a default parameter set is used in simulation.
Moreover, the real-world tracks deviate from the lane center lines that are used for guiding the simulated vehicles.
The recorded intersection is also used by vulnerable road users like pedestrians and bicyclists that cannot be modeled in the simulation as of now.
Compared to the synthetic simulation, not all metrics are viable for evaluation when using the dataset as the baseline.
The flow rate, for instance, cannot be improved by any intersection management system, because the number of vehicles in the scene is specified by the dataset.
In total, the used dataset excerpt provides 2446~vehicle tracks over a recording time of 3.08~hours, resulting in an average traffic density of \SI{794}{\vehicle\per\hour} or \SI{0.221}{\vehicle\per\second}.
Compared to the flow rates obtained in the synthetic simulations, those numbers are rather small, which might indicate that there are not many interesting situations during most of the recording.
Evaluating the RL~planner and FIFO~policy on real-world data is performed by spawning vehicles in simulation according to the appearance time in the dataset and subsequently simulating their motion based on the vehicle models.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ind_turn_stops}
\caption{The number of vehicles that perform a certain maneuver and ratio of those that had to stop in real-world data (inD), using the FIFO~policy, and the RL~planner.}
\label{fig:ind_turn_stops}
\end{figure}
Figure~\ref{fig:ind_turn_stops} compares the number of vehicles that had to come to a complete stop categorized into the maneuvers left turn, straight driving, and right turn.
The total number of vehicles being managed by the FIFO~policy and the RL~planner has to be identical to the amount given by the dataset recording.
It is clearly visible from the real-world data that many left-turning vehicles have to come to a stop before being able to safely pass the intersection.
Note that although one minor road access is governed by a stop sign, the real-world recordings show that by far not all road users obey it.
The FIFO scheme naturally distributes the stops to all maneuvers including vehicles driving straight on.
Meanwhile, the RL~planner is able to avoid the vast majority of stops and maintains a smooth flow of traffic.
As it can be seen in the last row of Table~\ref{tab:collisions}, both the FIFO and the RL~planner suffer from an increased collision rate.
This can, at least in part, be attributed to the way the real-world tracks are mapped to simulation.
Especially the minor road is only depicted for a very short distance in front of the intersection (cf. Fig.~\ref{fig:ind_intersection}), which makes it difficult for the planner to influence the incoming vehicles before they enter the intersection area.
In case one vehicle waits at the stop point while a second vehicle enters the scene at high speed, a collision might be inevitable.
This is merely an issue of the evaluation and does not diminish the remarkable improvement in traffic efficiency that is raised by the RL~planner.
\section{Conclusion}
\label{sec:conclusion}
In this work, a novel multi-agent behavioral planning scheme for connected automated vehicles at urban intersections was presented.
We chose a reinforcement learning algorithm to leverage recent advances in machine learning while evading the need for ground truth data that is virtually unavailable for cooperative maneuvers.
The developed graph-based input representation effectively encodes the semantic environment at the operational area.
By employing graph neural networks, our approach confidently handles the varying number of vehicles in the scene.
The proposed approach was evaluated in synthetic simulation and additionally based on real-world traffic data.
Compared to static priority rules and a FIFO~scheme as baselines, the learned planner increases the vehicle throughput significantly.
In addition, the number of induced stops is reduced which indicates better traffic flow.
The proposed behavioral planning framework can serve as a sound foundation for solving more sophisticated planning problems.
In the future, we plan to extend this work to be applicable to intersection layouts that were not seen during training.
Moreover, cooperative planning in mixed traffic, i.e. human drivers and automated vehicles sharing the road, shall be addressed.
\balance
\bibliographystyle{IEEEtran}
This paper concerns the classification of minimal
varieties of algebras.
Early work on this topic focused on determining
the minimal subvarieties of known varieties
(e.g., groups, rings, modules, lattices, etc). For a survey
of these results, please consult \cite{szendrei-survey}.
This 1992 survey also contains the state of knowledge
(at the time) of the problem of classifying all minimal
locally finite varieties.
But shortly after the publication of \cite{szendrei-survey},
two groups of researchers (Szendrei on the one hand
and Kearnes-Kiss-Valeriote on the other)
independently classified the
minimal, locally finite, abelian varieties of algebras.
Multiple proofs of the classification theorem were discovered
and presented in the papers
\cite{kearnes-kiss-valeriote,kearnes-szendrei1,szendrei1,szendrei2}.
Those proofs start with the observation that each
minimal locally finite variety contains
a smallest nontrivial member, which must be
finite, simple, and have no proper nontrivial subalgebras.
Tame congruence theory assigns a number (a {\it type})
to any finite simple
algebra: if abelian, the type must be {\bf 1} (the $G$-set type),
or type {\bf 2} (the vector space type). The type {\bf 1}/type {\bf 2}
case division is the main case division in the classification
of minimal, locally finite, abelian varieties.
Within each of these two cases there are subcases
related to dimension and to the field associated to the vector
space in type {\bf 2}. The full classification is accomplished
by examining type~{\bf 1} and type~{\bf 2}
simple algebras until one can isolate out and fully describe
those which generate minimal varieties.
The problem of extending these results to
varieties that are not locally finite
has been considered, but only under additional
very strong hypotheses.
For example, in \cite[Theorem~5.12]{kearnes-id-simple}
the classification of minimal abelian varieties
is obtained under the assumption that the variety is idempotent
and contains a nontrivial quasiaffine algebra.
In \cite[Corollary~2.10]{kearnes-min-id}
this is generalized to eliminate the hypothesis
that the variety
contains a nontrivial quasiaffine algebra.
The result here is a classification
of arbitrary minimal, abelian, idempotent varieties.
\bigskip
This is the first in a sequence of three papers in which we
attempt to classify the minimal abelian varieties
without any additional assumptions at all.
We predict that a complete
classification proof might evolve along these lines:
\bigskip
\noindent
{\bf Goal 1.} Show that minimal abelian varieties exist in two
unrelated types, corresponding to the type {\bf 1}/type {\bf 2}
case division observed in the locally finite setting.
We propose that these types should be: strongly abelian
minimal varieties as the extension of the type {\bf 1} case,
and affine varieties
as the extension of the type {\bf 2} case.
\bigskip
\noindent
{\bf Goal 2.}
Classify the minimal affine varieties.
\bigskip
\noindent
{\bf Goal 3.}
Classify the minimal strongly abelian varieties.
\bigskip
In this paper we accomplish Goal 1.
Actually, we prove a stronger statement that does not
involve minimality: any abelian variety
that is not affine has a nontrivial strongly abelian subvariety.
In the second paper
in the sequence, \cite{kksz2}, we accomplish Goal 2
in the following sense: we reduce the classification
of minimal affine varieties to the classification of
simple rings. Each minimal affine variety has an
associated simple ring, and each simple ring is
associated to some minimal affine varieties.
This is a many-to-one correspondence between
minimal affine varieties and simple rings. We completely
explain the relationship between the varieties and the rings.
In the third paper in the sequence, \cite{kksz3}, we make partial progress
on Goal 3. Namely, we classify those minimal strongly abelian
varieties that have a finite bound on the essential arities
of their terms. Here, when we say `we classify',
we mean that we reduce the classification of
these varieties
to the classification
of simple monoids with zero.
Also in the third paper of the sequence we show that
there are minimal strongly abelian varieties
that do not have a finite bound on the
essential arities
of their terms, thereby showing that there is more
work to do to complete the classification of minimal
strongly abelian varieties.
\bigskip
\section{Terminology and notation} \label{prelim}
An algebraic language $\mathcal L$ is determined by a function
$\alpha: F\to \omega$ where $F$ is a set of operation symbols
and $\alpha$ assigns arity. An algebra for this language
is a pair $\langle A; F\rangle$ where $A$ is a nonempty set
and for each symbol $f\in F$ there is a fixed interpretation
$f^{\m a}: A^{\alpha(f)}\to A$ of that symbol as an $\alpha(f)$-ary operation
on $A$.
Let $X = \{x_1, \ldots \}$ be a set of variables.
The set $\mathcal T$
of all terms in $X$ in a language $\mathcal L$ is defined
recursively by stipulating that (i) $X\subseteq {\mathcal T}$,
and (ii) if $f\in F$, $\alpha(f)=k$,
and $t_1,\ldots, t_k\in {\mathcal T}$, then
$f(t_1,\ldots,t_k)\in {\mathcal T}$.
The assignment $f\mapsto f^{\m a}$, which assigns
an operation table to a symbol, can be extended to terms
$t\mapsto t^{\m a}$. We call the interpretation $t^{\m a}$
of $t$ the term operation of $\m a$ associated to the term $t$.
An identity in $\mathcal L$ is a pair of terms,
written $s\approx t$. The identity $s\approx t$ is satisfied by $\m a$,
written $\m a\models s\approx t$, if
$s^{\m a}=t^{\m a}$. Given a set $\Sigma$
of identities, the class $\mathcal V$
of all $\mathcal L$-algebras
satisfying $\Sigma$ is called the variety axiomatized
by $\Sigma$.
We shall use Birkhoff's Theorem, which asserts
that the smallest variety containing a class
$\mathcal K$ of $\mathcal L$-algebras is the class
$\Hom\Sub\Prod(\mathcal K)$ of homomorphic
images of subalgebras of products of algebras in $\mathcal K$.
A subvariety of a variety
$\mathcal V$ is a subclass of $\mathcal V$
that is a variety.
A variety is trivial if it
consists of $1$-element algebras only.
A variety is minimal if it is not trivial, but
any proper subvariety is trivial.
The full constant expansion of $\m a$
is the algebra $\m a_A=\langle A; F\cup \{c_a\;|\;a\in A\}\rangle$
obtained from $\m a$
by adding a new $0$-ary (constant) symbol $c_a$
for each element $a\in A$.
By a polynomial operation
of $\m a$ we mean
a term operation of $\m a_A$.
A $1,1$-matrix of $\m a$ is a $2\times 2$ matrix of elements
of $A$ of the form
\begin{equation} \label{matrixDEFN}
\left[\begin{array}{cc}
t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\
t(\wec{b},\wec{u}) & t(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
p&q\\
r&s
\end{array}\right]\in A^{2\times 2}
\end{equation}
where $t(\wec{x},\wec{y})$ is a polynomial
of $\m a$ and $\wec{a}, \wec{b}, \wec{u}, \wec{v}$
are tuples of elements of $A$.
The set of $1, 1$-matrices is invariant under
the operations of
swapping rows, swapping columns, and matrix transpose.
$\m a$ has property ``$X$'', or ``is $X$'', if
the corresponding implications hold for all $1,1$-matrices
in (\ref{matrixDEFN}):
\begin{itemize}
\item ($X$ = abelian) provided
$p=q$ implies $r=s$. (Equivalently, if
$p=r$ implies $q=s$.)
\item ($X$ = rectangular) provided that,
for some compatible partial order $\geq$ on $\m a$,
it is the case that
$u\geq q$ and $u\geq r$ together imply $u\geq s$.
\item ($X$ = strongly rectangular) provided $q=r$ implies $r=s$.
\item ($X$ = strongly abelian) (same as abelian + strongly rectangular).
\item ($X$ = affine) (same as abelian + has a Maltsev operation).
\end{itemize}
A variety ``is $X$'' if all of its algebras are.
All of these concepts will be used in this paper.
What we have just written is not enough to understand
what follows, so please see Chapters 2 and 5
of \cite{kearnes-kiss} for more detail
about these concepts when necessary.
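For orientation we add a standard illustration (ours, not from
\cite{kearnes-kiss}). If $\m a$ is a module over a ring, then every polynomial
has the form $t(\wec{x},\wec{y})=\sum_i r_ix_i+\sum_j s_jy_j+c$, so in any
matrix (\ref{matrixDEFN})
\[
p-q \;=\; \textstyle\sum_j s_j(u_j-v_j) \;=\; r-s\, ,
\]
whence $p=q$ implies $r=s$, so every module is abelian.
By contrast, the two-element semilattice $\langle\{0,1\};\wedge\rangle$
is neither abelian nor strongly rectangular: the polynomial
$t(x,y)=x\wedge y$ with $\wec{a}=\wec{u}=(0)$ and $\wec{b}=\wec{v}=(1)$
gives $p=q=r=0$ but $s=1$.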
\section{Abelian and affine algebras} \label{abaff}
Our main goal in this section is to prove
that if $\mathcal V$ is an abelian variety that
is not an affine variety, then $\mathcal V$ contains a
nontrivial strongly abelian subvariety.
Applying this to the situation where $\mathcal V$
is a minimal variety, we obtain that
any minimal abelian variety is affine or strongly abelian.
The path we follow in this section is to prove
the following sequentially stronger
Facts about an arbitrary abelian variety $\mathcal V$ that is not affine:
\begin{enumerate}
\item[(I)] $\mathcal V$ contains an algebra with a nontrivial
strongly abelian congruence.
\item[(II)] $\mathcal V$ contains a nontrivial
strongly abelian algebra.
\item[(III)] $\mathcal V$ contains a nontrivial
strongly abelian subvariety.
\end{enumerate}
The following theorem establishes Fact (I).
\begin{thm} \label{I}
Let $\mathcal V$ be an abelian variety. If $\mathcal V$ is not affine,
then there is an algebra $\m a\in\mathcal V$ that has a nontrivial
strongly abelian congruence.
\end{thm}
\begin{proof}
We prove the contrapositive of the second
sentence in the theorem statement. Namely, under
the hypothesis that $\mathcal V$
is abelian, we show that if $\mathcal V$ contains
no algebra $\m a$ with a nontrivial
strongly abelian congruence, then $\mathcal V$ is affine.
If $\mathcal V$ has no algebra $\m a$ with a nontrivial
strongly abelian congruence, then Theorem~3.13 of \cite{kearnes-kiss}
proves that $\mathcal V$ satisfies a nontrivial idempotent
Maltsev condition.
By Theorem~3.21 of
\cite{kearnes-kiss}, any variety that satisfies
a nontrivial idempotent
Maltsev condition
has a \emph{join term}, which is a term
whose associated term operation
acts as a semilattice join operation
on the blocks of any rectangular tolerance relation
of any algebra in the variety.
But no subset of more than one element in an abelian algebra
can be closed under a semilattice term operation, because this would
realize a nontrivial semilattice as a subalgebra
of a reduct of an abelian algebra. Subalgebras of reducts
of abelian algebras are abelian, and no semilattice
of more than one element is abelian. This shows that
blocks of rectangular tolerances in $\mathcal V$ are singleton sets,
which is another way of saying that $\mathcal V$ contains
no algebra with a nontrivial rectangular tolerance.
By Theorem~5.25 of \cite{kearnes-kiss}, the fact
that $\mathcal V$ omits nontrivial rectangular
tolerances is equivalent to the fact that
$\mathcal V$ satisfies
an idempotent Maltsev condition that fails
in the variety of semilattices.
Finally, Theorem~4.10 of \cite{kearnes-szendrei2},
proves that if $\mathcal V$ is
any variety satisfying an idempotent Maltsev
condition which fails in the variety of semilattices,
then abelian algebras in $\mathcal V$ are affine.
Altogether, this shows that if $\mathcal V$
is abelian and no algebra in $\mathcal V$ has a nontrivial
strongly abelian congruence, then $\mathcal V$ is affine.
\end{proof}
This concludes the proof of Fact (I).
Our next goal is to prove Fact (II): if
$\mathcal V$ is abelian but not affine,
then $\mathcal V$ contains a nontrivial
strongly abelian algebra.
The following notation will be needed for
Lemma~\ref{nonaffine}, which is a result proved in
\cite{kearnes-kiss-szendrei} (Lemma~2.1 of that paper).
Assume that $\m a$ is abelian
and $\theta\in\Con(\m a)$ is strongly abelian.
Let $\m a(\theta)$ be the subalgebra of $\m a\times \m a$
supported by the graph of $\theta$.
Let $\Delta$ be the congruence on $\m a(\theta)$
generated by $D\times D$ where $D = \{(a,a)\;|\;a\in A\}$
is the diagonal.
$D$ is a $\Delta$-class, because $\m a$ is abelian.
Let $\m s = \m s_{\m a,\theta} := \m a(\theta)/\Delta$.
Let $0 = D/\Delta\in S$.
\begin{lm}\label{nonaffine}
Let $\mathcal V$ be an abelian variety, and suppose that
$\theta$ is a nontrivial strongly abelian congruence on some
$\m a\in\mathcal V$. Let $\m s = \m s_{\m a,\theta}$
and let $0 = D/\Delta\in S$. The following are true:
\begin{enumerate}
\item $\m s$ has more than one element.
\item $\{0\}$ is a 1-element subuniverse of $\m s$.
\item $\m s$ has ``Property P'': for every $n$-ary polynomial
$p(\wec{x})$ of $\m s$ and every tuple $\wec{s}\in S^n$
\[
p(\wec{s})=0\quad\textrm{implies}\quad p(\wec{0})=0,
\]
\bigskip
\noindent
where $\wec{0} = (0,0,\ldots,0)$.
\item Whenever $t(x_1,\ldots,x_n)$
is a $\mathcal V$-term and
\[
\mathcal V\models t(\wec{x})\approx
t(\wec{y})
\]
\bigskip
\noindent
where $\wec{x}$ and $\wec{y}$ are tuples of not necessarily
distinct variables which differ only in the $i$th position,
then the term operation $t^{\m s}(x_1,\ldots,x_n)$ is
independent of its $i$th variable.
\item $\m s$ has a congruence $\sigma$ such that the
algebra $\m s/\sigma$ satisfies (1)--(4) of this lemma,
and $\m s/\sigma$ also
has a compatible partial order $\leq$ such that
$0\leq s$ for every $s\in (S/\sigma)$.
\end{enumerate} \qed
\end{lm}
This lemma puts us in position to establish Fact (II):
\begin{thm} \label{II}
If $\mathcal V$ is an abelian variety that is not affine, then
$\mathcal V$ contains a nontrivial strongly abelian algebra.
\end{thm}
\begin{proof}
By Theorem~\ref{I}, the assumption that $\mathcal V$
is abelian but nonaffine guarantees that there is some
$\m a\in{\mathcal V}$ that has some
nontrivial strongly abelian congruence.
By Lemma~\ref{nonaffine}, these data can be used
to construct a nontrivial algebra
$\m t:=\m s/\sigma\in{\mathcal V}$
that has a compatible partial order $\leq$
and a singleton subuniverse $\{0\}$ such that $0$ is the least element
under the partial order. We proceed from this point.
Observe that if $a, b\in T$ satisfy $a\geq b$,
then from $a\geq b\geq 0$ we derive that
$f(a)\geq f(b)\geq f(0)\;(\geq 0)$
for any unary polynomial $f\in\Pol_1(\m t)$.
In particular, if
$f(a) = 0$ we must also have $f(b)=0$.
Let us define a coarser quasiorder $\sqsupseteq$
on $\m t$ by this rule: for $a, b\in T$, let
$a\sqsupseteq b$ if
\begin{equation} \label{implication}
f(a)=0\Rightarrow f(b)=0
\end{equation}
\bigskip
\noindent
holds for all $f\in\Pol_1(\m t)$.
The relation $\sqsupseteq$ is reflexive, transitive,
and compatible with unary polynomials,
so it is compatible with all polynomials.
Therefore $\sqsupseteq$ is a compatible
quasiorder on $\m t$. From the first two sentences
of this paragraph we see that
$\sqsupseteq$ extends $\geq$ (i.e., $\sqsupseteq$
is a coarsening of $\geq$).
This is enough to imply that $0$
is a least element with respect to
$\sqsupseteq$.
Let $\theta = \sqsupseteq\cap \sqsupseteq^{\cup}$.
By considering what happens in (\ref{implication}) when $0\sqsupseteq b$
and $f(x) = x$, one sees that $0\sqsupseteq b$ implies $b=0$.
Hence $0/\theta = \{0\}$, from which it follows that
$\theta$ is a proper congruence of $\m t$.
We let $\m t'=\m t/\theta$,
$0'=0/\theta$, and $\geq' = {\sqsupseteq}/\theta$.
Now $\m t'$ is nontrivial,
has a compatible partial order $\geq'$ with least
element $0'$, and with respect to this partial order we have
$a\geq' b$ if and only if
\[f(a)=0'\Rightarrow f(b)=0'\]
\bigskip
\noindent
for all unary polynomials $f\in\Pol_1(\m t')$.
Our new algebra $\m t'$
is a quotient of the original algebra,
has all properties attributed to $\m t$
in the first paragraph of this proof, but
now we have strengthened the implication
``$a\geq' b$ in $\m t'$ implies $f(a)=0'\Rightarrow f(b)=0'$
for all unary polynomials $f\in\Pol_1(\m t')$''
to a bi-implication.
We now replace $\m t$ with $\m t'$,
drop all primes, and assume that
\begin{equation} \label{new_implication}
\textrm{$a\geq b$ in $\m t$ if and only if $f(a)=0\Rightarrow f(b)=0$.}
\end{equation}
It should be pointed out that the
reflexivity, transitivity, and compatibility of $\geq$
imply that $\m t$ satisfies Property P
of Lemma~\ref{nonaffine}. For suppose that $\wec{s}=(s_1,\ldots,s_n)$
and $p(\wec{s})=0$ for some polynomial $p$ of $\m t$.
Since $s_i\geq 0$ for all $i$ we derive from (\ref{new_implication})
that
\[
\begin{array}{rl}
p(\wec{s})= p(s_1,s_2,s_3,\ldots,s_n)=0 & \Rightarrow p(0,s_2,s_3,\ldots,s_n)=0 \\
& \Rightarrow p(0,0,s_3,\ldots,s_n)=0 \\
& \vdots\\
& \Rightarrow p(0,0,0,\ldots,0)=0, \\
\end{array}
\]
which is the assertion of Property P.
\begin{clm} \label{rectangulation}
The total binary relation $1\in\Con(\m t)$ rectangulates itself
with respect to $\geq$.
\end{clm}
\cproof
Assume that
\begin{equation} \label{rect_matrix}
\left[\begin{array}{cc}
t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\
t(\wec{b},\wec{u}) & t(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
p&q\\
r&s
\end{array}\right]
\end{equation}
\bigskip
\noindent
is a $1,1$-matrix and that $u\geq q, r$. Our goal is to prove
that $u\geq s$. If $u\not\geq s$, then
according to (\ref{new_implication})
there
is a unary polynomial $f$ such that $f(u)=0$ and $f(s)\neq 0$.
Since $u\geq q, r$ and since $0$ is the least element under $\geq$,
we get that $0=f(u)\geq f(q), f(r) \geq f(0)$,
so in fact $0=f(u) = f(q) = f(r) = f(0)$.
Prefixing the polynomial $t$ in the
left matrix of (\ref{rect_matrix})
with the polynomial
$f$, we obtain a $1, 1$-matrix of the form
\begin{equation} \label{11matrix}
\left[\begin{array}{cc}
ft(\wec{a},\wec{u}) & ft(\wec{a},\wec{v}) \\
ft(\wec{b},\wec{u}) & ft(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
f(p)&f(q)\\
f(r)&f(s)
\end{array}\right] =
\left[\begin{array}{cc}
p'&0\\
0&s'
\end{array}\right]
\end{equation}
\bigskip
\noindent
with $s'\neq 0$. That is,
\begin{equation} \label{crossdiag}
ft(\wec{a},\wec{v}) = 0 = ft(\wec{b},\wec{u}),
\end{equation}
\bigskip
\noindent
while $ft(\wec{b},\wec{v}) \neq 0$.
Employing Property P in the forms
$ft(\underline{\wec{a}},\wec{v}) = 0 \Rightarrow ft(\underline{\wec{0}},\wec{v}) = 0$
and
$ft(\underline{\wec{b}},\wec{u}) = 0 \Rightarrow ft(\underline{\wec{0}},\wec{u}) = 0$,
we get from (\ref{crossdiag}) that
\begin{equation} \label{crossdiag2}
ft(\underline{\wec{0}},\wec{v}) = 0 = ft(\underline{\wec{0}},\wec{u}).
\end{equation}
\bigskip
\noindent
Since $\m t$ is abelian, we derive from (\ref{crossdiag2}) that
\begin{equation} \label{contraequation}
ft(\underline{\wec{b}},\wec{v}) = ft(\underline{\wec{b}},\wec{u}).
\end{equation}
The left side of (\ref{contraequation})
equals $s'$ while the right side equals $0$,
yielding $s'=0$, contrary to the fact, noted
after (\ref{11matrix}), that $s'\neq 0$. The claim is proved.
\hfill\rule{1.3mm}{3mm}
\bigskip
\begin{clm}
The total binary relation $1\in\Con(\m t)$
strongly rectangulates itself.
\end{clm}
\cproof
Assume that
\begin{equation} \label{st_rect_matrix}
\left[\begin{array}{cc}
t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\
t(\wec{b},\wec{u}) & t(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
p&q\\
r&s
\end{array}\right]
\end{equation}
\bigskip
\noindent
is a $1,1$-matrix and that $q = r$. Our goal is to prove
that $r = s$. Note that by taking $u = q = r$ we get
from rectangulation with respect to $\geq$
(Claim~\ref{rectangulation}) that, since
$u\geq q, r$, we must have $u\geq s$, so in particular
we have $r\geq s$. If we do not have $r=s$ as desired,
then we must have $s\not\geq r$. In this case there
is a unary polynomial $f$ such that $f(s) = 0$
and $f(r)\neq 0$.
By prefixing the polynomial $t$
in the left matrix in (\ref{st_rect_matrix})
by $f$, we obtain a matrix of the form
\[
\left[\begin{array}{cc}
ft(\wec{a},\wec{u}) & ft(\wec{a},\wec{v}) \\
ft(\wec{b},\wec{u}) & ft(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
f(p)&f(q)\\
f(q)&f(s)
\end{array}\right]=
\left[\begin{array}{cc}
p'&q'\\
q'&0
\end{array}\right]
\]
\bigskip
\noindent
with $q'=f(q)=f(r)\neq 0$. We also have (by rectangulation with
respect to $\geq$) that
any $u$ that majorizes the cross diagonal in
\[
\left[\begin{array}{cc}
p'&q'\\
q'&0
\end{array}\right],
\]
\bigskip
\noindent
like $u=q'$, also majorizes the main diagonal, i.e. $q'\geq p'$.
Similarly, any $u$ that majorizes the main diagonal,
like $u=p'$, also majorizes the cross diagonal, i.e. $p'\geq q'$.
In particular, $p'=q'$ and we have
\[
\left[\begin{array}{cc}
ft(\wec{a},\wec{u}) & ft(\wec{a},\wec{v}) \\
ft(\wec{b},\wec{u}) & ft(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
q'&q'\\
q'&0
\end{array}\right].
\]
\bigskip
\noindent
This is a failure of the term condition (which defines abelianness),
thereby proving the claim.
\hfill\rule{1.3mm}{3mm}
\bigskip
We complete the proof of Theorem~\ref{II} by noting that
an algebra is strongly
abelian if and only if it is abelian and strongly rectangular.
Since we have shown that $\m t$ is strongly rectangular,
and we selected $\m t$ from an abelian variety, we conclude
that $\m t$ is strongly abelian.
\end{proof}
Next on our agenda is to prove Fact (III),
which asserts that, if $\mathcal V$ is abelian but not affine,
then $\mathcal V$ has a nontrivial subvariety that
is strongly abelian. We shall require the following lemma,
which is an extension of
\cite[Theorem~7.1]{mckenzie-valeriote}.
Recall
the class operators $\Hom$, $\Sub$, and $\Prod$
we introduced briefly in Section~\ref{prelim}.
Namely, for a class $\mathcal K$ of similar algebras
$\Hom(\mathcal K)$ denotes the class of algebras that are
homomorphic images of members of $\mathcal K$,
$\Sub(\mathcal K)$ denotes the class of algebras isomorphic
to subalgebras of members
of $\mathcal K$, and
$\Prod(\mathcal K)$ denotes the class of algebras isomorphic
to products of members
of $\mathcal K$. Each of the classes
$\Hom(\mathcal K)$, $\Sub(\mathcal K)$, and
$\Prod(\mathcal K)$ has been defined so that it is closed
under isomorphism.
\begin{lm} \label{fake-mck-val}
Assume that every finitely generated
subalgebra of $\m a$ is strongly solvable.
If\/ $\Hom\Sub(\m a^2)$ consists of abelian algebras,
then $\Hom\Sub(\m a)$ consists of strongly
abelian algebras.
\end{lm}
\begin{proof}
For the first step of the proof we invoke
\cite[Theorem~7.1]{mckenzie-valeriote}, which proves the following:
\begin{clm} \label{fake-mck-val-clm}
Assume that every finitely generated
subalgebra of $\m a$ is strongly solvable.
If\/ $\Hom\Sub(\m a^2)$ consists of abelian algebras,
then $\m a$ is strongly abelian.
\end{clm}
The conclusion that $\m a$ is strongly abelian in
Claim~\ref{fake-mck-val-clm}
implies that every algebra in
$\Sub(\m a)$ is also strongly abelian, since the
strong abelian property is expressible by
universal sentences. However it is not an
immediate consequence of Claim~\ref{fake-mck-val-clm}
that the class
$\Hom\Sub(\m a)$ consists of strongly abelian algebras.
For this we must show that if $\m b\in \Sub(\m a)$
and $\theta\in\Con(\m b)$, then $\m b/\theta$ is also
strongly abelian.
Choose and fix $\m b\in \Sub(\m a)$
and $\theta\in\Con(\m b)$.
Recall (from Section~\ref{prelim})
that an algebra is strongly abelian if and only if
it is both abelian and strongly rectangular.
We are assuming that all algebras in $\Hom\Sub(\m a^2)$
are abelian, and $\m b/\theta$ is in $\Hom\Sub(\m a)$,
which is a subclass of the abelian class $\Hom\Sub(\m a^2)$,
so to prove the lemma it suffices to prove that $\m b/\theta$
is strongly rectangular.
This will be a proof by contradiction.
Our aim will be to obtain a contradiction
from the assumptions that:
$\m a$ is strongly abelian (which we get from Claim~\ref{fake-mck-val-clm}),
$\Hom\Sub(\m a^2)$ consists of abelian algebras,
$\m b\leq \m a$, $\theta\in\Con(\m b)$,
but $\m b/\theta$ is not strongly rectangular.
Observe that the assumptions that
$\m a$ is strongly abelian and $\m b\leq \m a$
imply that $\m b$ is strongly abelian.
We reiterate the observation of the last paragraph
that the assumption that
$\Hom\Sub(\m a^2)$ consists of abelian algebras
implies that $\m b/\theta$ is abelian.
The assumption that $\m b/\theta$ is not strongly rectangular
means exactly that there is a $1,1$-matrix in $\m b$
of the form
\begin{equation} \label{matrixEQ}
\left[\begin{array}{cc}
t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\
t(\wec{b},\wec{u}) & t(\wec{b},\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
p&q\\
r&s
\end{array}\right]
\end{equation}
with $q\equiv_{\theta} r$ but $r\not\equiv_{\theta} s$.
\begin{clm} \label{theta-related}
No two elements of the second matrix of (\ref{matrixEQ})
which come from the same row or column
are $\theta$-related.
\end{clm}
\cproof
We explain why $p$ and $r$, the two elements
of the first column of the second matrix in (\ref{matrixEQ}),
are not $\theta$-related and omit the proofs of the other (similar)
cases.
Assume that $p\equiv_{\theta} r$.
Since $\m b/\theta$ is abelian,
the ordinary term condition
holds in $\m b/\theta$, i.e., $C(1,1;\theta)$ holds in $\m b$.
From
\[
t(\wec{a},\underline{\wec{u}}) = p\equiv_{\theta} r = t(\wec{b},\underline{\wec{u}})
\]
\bigskip
\noindent
we derive
\[
t(\wec{a},\underline{\wec{v}}) = q\equiv_{\theta} s= t(\wec{b},\underline{\wec{v}})
\]
\bigskip
\noindent
by replacing the underlined $\wec{u}$'s with $\wec{v}$'s.
In short, if the two elements of the first column
of the second matrix in (\ref{matrixEQ})
are $\theta$-related, then the two elements in the parallel
column must also be $\theta$-related.
This, together with the earlier condition $q\equiv_{\theta} r$, which
asserts that the elements along the cross diagonal
are $\theta$-related, yields
that all elements of the matrix are $\theta$-related.
(That is, $p\equiv_{\theta} r\equiv_{\theta} q\equiv_{\theta} s$).
This is
in contradiction to the condition $r\not\equiv_{\theta} s$.
What we have contradicted was the assumption that the elements
$p$ and $r$, which come from the same column of the second
matrix in (\ref{matrixEQ}), are $\theta$-related.
\hfill\rule{1.3mm}{3mm}
\bigskip
Next, let
$D = \{(z,z)\;|\;z\in B\}$ be the diagonal subuniverse of $\m b^2$.
Let $\m c\leq \m b^2$ be the subalgebra of $\m b^2$
generated by $D$ and all pairs
$(u_i,v_i)$, $i=1,\ldots,n$, where $\wec{u}=(u_1,\ldots,u_n)$ and
$\wec{v}=(v_1,\ldots,v_n)$ in the first matrix from (\ref{matrixEQ}).
Observe that the pairs $(p,q)$ and $(r,s)$
both belong to $\m c$, since
\[
(p,q)=
(t(\wec{a},\wec{u}), t(\wec{a},\wec{v}))
=t(\underbrace{(a_1,a_1),(a_2,a_2),\ldots,
(u_1,v_1),(u_2,v_2),\ldots}_{\text{generators of $\m c$}})\in C
\]
\bigskip
\noindent
and
\[
(r,s)=
(t(\wec{b},\wec{u}), t(\wec{b},\wec{v}))
=t(\underbrace{(b_1,b_1),(b_2,b_2),\ldots,
(u_1,v_1),(u_2,v_2),\ldots}_{\text{generators of $\m c$}})\in C.
\]
Let $\gamma$ be the principal congruence of $\m c$
generated by the single pair (of pairs)
$\big((p,q),(r,s)\big)$.
\begin{clm} \label{gamma_on_the_diagonal}
The diagonal $D\subseteq C$ is a union of $\gamma$-classes.
Moreover,
if $\big((c,c), (d,d)\big)\in\gamma$, then
$(c,d)\in\theta$.
\end{clm}
\cproof
Since $\gamma$ is the congruence generated by the pair
$\big((p,q), (r,s)\big)$,
it follows from
Maltsev's Congruence Generation Lemma that
to prove this claim it suffices to establish that
if $f$ is any unary polynomial of $\m c$,
and $f\big((p,q)\big) = (c,c)\in D$,
then $f\big((r,s)\big) = (d,d)\in D$
for some $d$ satisfying $(c,d)\in\theta$.
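(For reference, the formulation of Maltsev's Congruence Generation Lemma
used here is the standard one: $(x,y)$ belongs to the congruence
generated by $(u,v)$ if and only if there are unary polynomials
$f_1,\ldots,f_k$ and a chain $x = z_0, z_1, \ldots, z_k = y$ with
$\{z_{i-1},z_i\} = \{f_i(u), f_i(v)\}$ for each $i$. The one-step
statement just given, together with its symmetric form obtained by
exchanging the roles of $(p,q)$ and $(r,s)$, propagates along such
chains using the transitivity of $\theta$.)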
Assume that $f$ is a unary polynomial of $\m c$
and that $f\big((p,q)\big) = (c,c)$
for some $c$.
Unary polynomials of $\m c$ have the form
\[
f\big((x,y)\big)
= g\big((x,y),(\wec{u},\wec{v})\big)
= \big(g(x,\wec{u}), g(y,\wec{v})\big)
\]
\bigskip
\noindent
where $g(x,\wec{z})$ is a polynomial of $\m b$, since
$\m c$ is generated by $D$ and all pairs
$(u_i,v_i)$, $i=1,\ldots,n$, where $\wec{u}=(u_1,\ldots,u_n)$ and
$\wec{v}=(v_1,\ldots,v_n)$. Thus $f\big((p,q)\big) = (c,c)$
can be rewritten
\begin{equation} \label{pANDq}
g(p,\wec{u}) = c = g(q,\wec{v})
\end{equation}
\bigskip
\noindent
for some polynomial $g$ of $\m b$. More fully, this is
\[
g(t(\wec{a},\wec{u}),\wec{u}) = c = g(t(\wec{a},\wec{v}),\wec{v}).
\]
\bigskip
\noindent
Apply the term condition in the abelian algebra
$\m b$ to change the underlined
$\wec{a}$'s to $\wec{b}$'s below, so from
\[
g(t(\underline{\wec{a}},\wec{u}),\wec{u}) = c = g(t(\underline{\wec{a}},\wec{v}),\wec{v})
\]
\bigskip
\noindent
we get
\[
g(t(\underline{\wec{b}},\wec{u}),\wec{u}) = g(t(\underline{\wec{b}},\wec{v}),\wec{v}).
\]
\bigskip
\noindent
Less fully, this equality may be rewritten
\begin{equation}\label{rANDs}
g(r,\wec{u}) = d = g(s,\wec{v})
\end{equation}
\bigskip
\noindent
for some $d$. In other words, the fact that $\m b$ is abelian
implies that if $f\big((p,q)\big) = (c,c)\in D$
for some $c$, then there is some $d$ such that
$f\big((r,s)\big) = \big(g(r,\wec{u}), g(s,\wec{v})\big)
= (d,d)$.
It remains to argue that we must have $(c,d)\in\theta$. Consider the
following $1,1$-matrix of $\m b$, where two of the entries
in the left matrix can be determined from Equation~(\ref{pANDq}):
\[
\left[\begin{array}{cc}
g(p,\wec{u}) & g(p,\wec{v}) \\
g(q,\wec{u}) & g(q,\wec{v})
\end{array}\right]=
\left[\begin{array}{cc}
c&*\\
*&c
\end{array}\right].
\]
\bigskip
\noindent
The off-diagonal entries can be determined from the
diagonal entries, since $\m b$ is strongly abelian.
Namely, all four entries must equal $c$,
yielding $g(p,\wec{u})=g(p,\wec{v})=g(q,\wec{u})=g(q,\wec{v})=c$.
In particular, this and Equation~(\ref{rANDs}) yield
$(g(q,\wec{u}),g(r,\wec{u}))=(c,d)$. This shows that
the pair $(c,d)$ is a polynomial translate
of the pair $(q,r)$ via the translation
$x\mapsto g(x,\wec{u})$. Since $(q,r)\in\theta\in\Con(\m b)$,
according to the line after (\ref{matrixEQ}),
and $g(x,\wec{u})$ is a unary polynomial of $\m b$
it follows that $(c,d)\in\theta$.
\hfill\rule{1.3mm}{3mm}
\bigskip
\begin{clm}
$\m c/\gamma$ is not abelian.
\end{clm}
\cproof
We argue that $C(1,1;\gamma)$ fails in $\m c$.
For this it suffices to write down a bad $1,1$-matrix
of $\m c$:
\begin{equation} \label{matrixEQ2}
\left[\begin{array}{cc}
t\big((\wec{a},\wec{a}),\underline{(\wec{u},\wec{v})}\big)&
t\big((\wec{b},\wec{b}),\underline{(\wec{u},\wec{v})}\big)\\
\vphantom{A}&\\
t\big((\wec{a},\wec{a}),\underline{(\wec{u},\wec{u})}\big)&
t\big((\wec{b},\wec{b}),\underline{(\wec{u},\wec{u})}\big)\\
\end{array}\right]=
\left[\begin{array}{cc}
(p, q)&(r, s)\\
\vphantom{A}&\\
(p, p)&(r, r)\\
\end{array}\right].
\end{equation}
(One should check that this \underline{is} a
$1,1$-matrix in $\m c$, i.e., that $t$ is being
applied to elements of $\m c$: $(a_i,a_i),
(b_j,b_j), (u_k,u_k),
(u_{\ell},v_{\ell})\in C$.)
Our goal is to show that
the first matrix in (\ref{matrixEQ2})
expresses a failure of $C(1,1;\gamma)$, which in more
standard notation might be written:
\[
t\big((\wec{a},\wec{a}),\underline{(\wec{u},\wec{v})}\big)
\equiv_{\gamma}
t\big((\wec{b},\wec{b}),\underline{(\wec{u},\wec{v})}\big),
\]
\bigskip
\noindent
while changing the underlined
$(\wec{u},\wec{v})$'s to
$(\wec{u},\wec{u})$'s produces
\[
t\big((\wec{a},\wec{a}),\underline{(\wec{u},\wec{u})}\big)
\not\equiv_{\gamma}
t\big((\wec{b},\wec{b}),\underline{(\wec{u},\wec{u})}\big).
\]
To see that this truly is a failure of $C(1,1;\gamma)$,
notice that the elements on the first row of the second
matrix in (\ref{matrixEQ2}) are indeed $\gamma$-related,
since $\gamma$ was defined to be the congruence
of $\m c$ generated by
$\big((p,q), (r,s)\big)$.
But notice also that the elements on the second row of the second
matrix in (\ref{matrixEQ2}) cannot possibly be $\gamma$-related.
For, we proved in Claim~\ref{gamma_on_the_diagonal}
that $\gamma$-related pairs of the form
$\big((c,c),(d,d)\big)$
must satisfy $(c,d)\in\theta$, and
we proved that $(p,r)\notin \theta$ in Claim~\ref{theta-related}.
Hence $\big((p,p),(r,r)\big)\notin\gamma$.
\hfill\rule{1.3mm}{3mm}
\bigskip
To summarize,
in the main portion of the proof we showed
that when $\m b$ is strongly abelian,
$\theta\in\Con(\m b)$, and $\m b/\theta$ is abelian but
not strongly rectangular, then there is an algebra
$\m c/\gamma\in\Hom\Sub(\m b^2)$
that is not abelian.
It follows that when $\m a$ is strongly abelian,
$\Hom\Sub(\m a^2)$ consists of abelian algebras,
$\m b\leq \m a$, $\theta\in\Con(\m b)$,
and $\m b/\theta$ is not strongly rectangular,
then there is an algebra
$\m c/\gamma\in\Hom\Sub(\m b^2)\subseteq \Hom\Sub(\m a^2)$
that is not abelian. This was what we needed to
establish, as one can verify by consulting
the third paragraph following Claim~\ref{fake-mck-val-clm}.
\end{proof}
\begin{thm} \label{subclassK}
Let $\mathcal V$ be an abelian variety. If
${\mathcal K}$
is any subclass of ${\mathcal V}$ that consists of
strongly abelian algebras, then the subvariety of
$\mathcal V$ generated by $\mathcal K$ is
strongly abelian.
(Equivalently, the subclass of all strongly abelian
algebras in $\mathcal V$ is a subvariety of $\mathcal V$.)
\end{thm}
\begin{proof}
Since ${\mathcal K}$ consists of strongly abelian algebras,
and the property of being
strongly abelian is expressible
by a family of universal Horn sentences,
the class $\Sub\Prod({\mathcal K})$ of algebras isomorphic
to subalgebras of products
of members of $\mathcal K$ consists of strongly abelian algebras.
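To spell out the shape of these sentences (our gloss): abelianness and
strong rectangularity amount to the quasi-identities
\[
t(\wec{x},\wec{u})\approx t(\wec{x},\wec{v})
\;\rightarrow\;
t(\wec{y},\wec{u})\approx t(\wec{y},\wec{v})
\quad\text{and}\quad
t(\wec{x},\wec{v})\approx t(\wec{y},\wec{u})
\;\rightarrow\;
t(\wec{y},\wec{u})\approx t(\wec{y},\wec{v}),
\]
one pair for each term $t$ (universally quantified over all tuples),
and universal Horn sentences of this kind persist under subalgebras
and products.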
Any $\m a\in\Sub\Prod(\mathcal K)$ will be strongly
abelian, hence will have the property
that its finitely generated
subalgebras are strongly abelian, and therefore strongly solvable,
as Lemma~\ref{fake-mck-val} requires.
Moreover, since $\m a\in\mathcal V$, and $\mathcal V$ is abelian,
the class $\Hom\Sub(\m a^2)$ will consist of abelian algebras.
By Lemma~\ref{fake-mck-val}, the class
$\Hom\Sub(\m a)$ consists of strongly
abelian algebras. Since $\m a\in\Sub\Prod(\mathcal K)$
was arbitrary, this means that
$\Hom\Sub(\Sub\Prod(\mathcal K)) = \Hom\Sub\Prod(\mathcal K)$
consists of strongly abelian algebras.
Since $\Hom\Sub\Prod(\mathcal K)$ is the variety generated
by $\mathcal K$, the theorem is proved.
\end{proof}
\begin{remark}
For \emph{locally finite} varieties,
the result stated in Theorem~\ref{subclassK}
is due to Matt Valeriote.
It is known from Tame Congruence Theory that
if $\mathcal V$ is a locally finite variety,
then the subclass $\mathcal S$
of locally strongly solvable algebras
in $\mathcal V$ is a subvariety of $\mathcal V$.
Valeriote proved in \cite{valeriote}
that the following are equivalent
for any locally finite \emph{abelian} variety $\mathcal S$:
\begin{enumerate}
\item[(1)] $\mathcal S$ is locally strongly solvable.
\item[(2)] $\mathcal S$ is strongly abelian.
\end{enumerate}
Here is how one deduces
Theorem~\ref{subclassK} in the locally finite case
from the statements just made.
Assume that $\mathcal V$ is a locally finite abelian variety.
Let $\mathcal S\subseteq \mathcal V$ be the subvariety of $\mathcal V$
consisting of locally strongly solvable algebras in $\mathcal V$.
Let $\mathcal K\subseteq \mathcal V$ be the subclass
of strongly abelian algebras of $\mathcal V$.
Since a strongly abelian algebra is locally strongly solvable,
$\mathcal K\subseteq \mathcal S$.
Valeriote's Theorem applied to the subvariety $\mathcal S$
shows that $\mathcal S\subseteq \mathcal K$,
hence $\mathcal K=\mathcal S$,
which shows that $\mathcal K$ is a subvariety of $\mathcal V$.
\end{remark}
\begin{thm} \label{III}
If $\mathcal V$ is an abelian variety that is not affine, then
$\mathcal V$ contains a nontrivial strongly abelian subvariety.
\end{thm}
\begin{proof}
Theorem~\ref{II} shows that if $\mathcal V$
is abelian but not affine, then there is some
nontrivial algebra $\m a\in \mathcal V$ that
is strongly abelian. By Theorem~\ref{subclassK},
the subvariety of $\mathcal V$ generated
by $\m a$ is a nontrivial strongly abelian
subvariety of $\mathcal V$.
\end{proof}
\begin{cor} \label{minvar}
Any minimal abelian variety of algebras is affine
or strongly abelian. $\Box$
\end{cor}
\bibliographystyle{plain}
\section{Introduction}
\label{Introduction}
Relativistic jets are the primary channel of energy loss from accreting
supermassive black holes in many radio galaxies. They also have a major impact
on their surroundings and act as accelerators of the most energetic photons (and
perhaps hadrons) we observe. The present paper forms part of a study of jet
physics in nearby, low-luminosity radio galaxies, specifically those with FR\,I
morphology \citep{FR74}. We have developed a sophisticated model of FR\,I jets
as relativistic, symmetrical, axisymmetric flows. By fitting to deep,
high-resolution radio images in total intensity and linear polarization, we have
determined the three-dimensional variations of velocity, emissivity and
magnetic-field ordering in five sources \citep{LB02a,CL,CLBC,LCBH06}. We have
shown that FR\,I jets decelerate from relativistic ($\beta = v/c \approx 0.8$)
to sub-relativistic speeds on scales of a few kpc and that they are faster
on-axis than at their edges, as expected if they entrain external material.
The physics of boundary-layer entrainment must depend on the composition and
density of the surrounding medium, and in particular on whether the jets
propagate in direct contact with the intergalactic medium (IGM) or are
surrounded by lobes consisting primarily of tenuous and at least partially
relativistic plasma. Of the five sources we have modelled, three have {\em
plumed} or {\em tailed} outer structures wherein most of the extended emission
appears to lie further from the active nucleus than the narrower jets: 3C\,31
\citep{LB02a,3c31ls}, B2\,1553+24 \citep{CL,Young} and NGC\,315
\citep{CLBC,ngc315ls}. We presume that their jets are in direct contact with the
IGM. On the other hand, 3C\,296 \citep{LCBH06} has two {\em lobes} with
well-defined outer boundaries and a diffuse {\em bridge} of emission around the
jets (at least in projection). A further source, B2\,0326+39 \citep{CL},
clearly has lobes, but it was unclear from published observations
\citep{Bridle91} whether these lobes surround the inner jets.
Our best-fit model for the jets in the lobed FR\,I source 3C\,296 \citep{LCBH06}
is unusual in that it shows a very large transverse velocity gradient across the
jet except very close to the nucleus: the ratio of edge to central velocity in
this jet is $\la 0.1$, compared with values of $\approx$0.4 for the jets in the
three tailed sources (the value for 0326+39 is poorly determined).
sources whose lobes entirely surround them appear to decelerate differently with
distance from the nucleus than those in tailed FR\,I sources, or those in sources
whose lobes may not extend all the way back to the nucleus, leaving the
inner jets unshielded. Projection complicates interpretation of individual
sources (the lobes may appear superimposed on the jets even if they are not in
physical contact) and it is not always straightforward to separate jet and lobe
emission, but the differences between 3C\,296 and the rest are large. Modelling
of the jets in a small number of lobed sources (which form the majority of
complete samples of low-luminosity radio galaxies; \citealt{PDF96}) should
therefore be enough to decide whether there are systematic differences between
the two classes.
The primary aim of this paper is to present high-quality radio imaging of four
lobed FR\,I sources whose jets are suitable for modelling by our methods. The
models themselves will be presented elsewhere.
Our approach to jet modelling requires fitting to high-fidelity, deep
images with linear resolution $\la$0.25\,kpc derived from multi-configuration
observations at 4.9\,GHz (C-band) or 8.5\,GHz
(X-band) with the Very Large Array (VLA) at the National Radio Astronomy
Observatory\footnote{The National Radio Astronomy Observatory is a
facility of the National Science Foundation operated under cooperative agreement
by Associated Universities, Inc.}. In order to correct the linear polarization for the effects of Faraday
rotation (small at these frequencies), we also need to observe at several
frequencies in the range 1.3 -- 1.7\,GHz (L-band). High-quality images of
depolarization, rotation measure and spectral index are useful byproducts. In
this paper, we describe:
\begin{enumerate}
\item details of the observations,
\item the source morphologies in total intensity at a range of resolutions,
\item images of the spectral-index distributions and
\item images of the degree of polarization and the apparent magnetic field
direction, corrected for Faraday rotation.
\end{enumerate}
We also include high-resolution MERLIN\footnote{MERLIN is a UK National Facility
operated by the University of Manchester at Jodrell Bank Observatory on behalf
of STFC.} imaging for two of the sources. Faraday rotation and depolarization
are analysed in detail by \citet[and in preparation]{Guidetti11}.
\begin{center}
\begin{table}
\caption{The sources. Col.\,1: name (as used in this paper). Col.\,2: alternative
names. Col.\,3: redshift. Col.\,4: linear scale (kpc\,arcsec$^{-1}$) for our
adopted cosmology. Col.\,5: reference for redshift. Col.\,6: reference for
earlier observations of large-scale radio structure.\label{tab:sources}}
\begin{minipage}{80mm}
\begin{tabular}{llllll}
\hline
Name & Alternative & $z$ & kpc&\multicolumn{2}{c}{Ref}\\
& & &arcsec$^{-1}$&O&R\\
\hline
NGC\,193 & PKS\,0036+03 & 0.0147 & 0.300 &9&4\\
&UGC\,408 &&&&\\
0206+35 & UGC\,1651 & 0.0377 & 0.748 &3&8\\
& 4C\,35.03 &&&&\\
0755+37 & NGC\,2484 & 0.0428 & 0.845 &9&1\\
M\,84 & 3C\,272.1 & 0.0035 & 0.073 &10&5\\
& NGC\,4374 &&&&\\
3C\,296 & NGC\,5532 & 0.0247 & 0.498 &7&6\\
0326+39 & & 0.0243 & 0.490 &7&2\\
\hline
\end{tabular}
References: 1 \citet{Bondi00}, 2 \citet{Bridle91}, 3 \citet{Falco99}, 4
\citet{Gia11}, 5 \citet{LB87}, 6 \citet{LCBH06}, 7 \citet{Miller02}, 8 \citet{Morganti87}, 9
\citet{Ogando}, 10 \citet{Trager}.
\end{minipage}
\end{table}
\end{center}
We also present an analysis of the spectrum of 3C\,296 which improves on that
given by \citet{LCBH06} and imaging of the large-scale structure of
B2\,0326+39 to trace its lobe emission closer to the nucleus than in earlier studies.
These data complete the documentation of the large-scale structures and
spectral-index distributions for all of the lobed FR\,I sources whose jets we can
currently model.
Section \ref{Obs-red} describes the new observations and data reduction, Section
\ref{Images} presents our results for the sources individually, and Section
\ref{discuss} outlines the phenomenology of their jets and larger-scale
emission as a prelude to modelling. Section~\ref{summary} is a brief summary.
We adopt a concordance cosmology with Hubble constant, $H_0$ =
70\,$\rm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_\Lambda = 0.7$ and $\Omega_M =
0.3$.
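The linear scales in Table~\ref{tab:sources} follow from the
angular-diameter distance for this cosmology. As an illustrative
cross-check (ours, assuming the standard {\tt astropy} cosmology
interface, not any code used in our analysis):
\begin{verbatim}
# Sketch: kpc/arcsec conversion for the adopted cosmology.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # flat, so Omega_Lambda = 0.7

for name, z in [("NGC 193", 0.0147), ("0206+35", 0.0377),
                ("0755+37", 0.0428), ("3C 296", 0.0247)]:
    # Proper transverse scale, converted from kpc/arcmin to kpc/arcsec.
    scale = cosmo.kpc_proper_per_arcmin(z).value / 60.0
    print(f"{name}: {scale:.3f} kpc/arcsec")
\end{verbatim}
For example, $z = 0.0377$ yields $\approx$0.75\,kpc\,arcsec$^{-1}$,
consistent with Table~\ref{tab:sources}.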
\begin{center}
\begin{table*}
\caption{Journal of VLA observations. Col.\,1: source name. Col.\,2: VLA
configuration. Col.\,3: Date of observation. Col.\,4: centre frequencies for
the one or two channels observed (MHz). Col.\,5: bandwidth (MHz). Col.\,6: the
on-source integration time scaled to an array with all 27 antennas
operational. Col.\,7: VLA proposal code.}
\begin{center}
\begin{tabular}{lclllcl}
\hline
Source & Config- & Date & $\nu$ & $\Delta\nu$ & t & Proposal \\
& uration & & [MHz] & [MHz] & [min] & code\\
\hline
NGC\,193& A & 2007 Jun 02& 4885.1, 4835.1 & 50 &280& AL693 \\
& A & 2007 Jun 28& 4885.1, 4835.1 & 50 & 90& AL693 \\
& A & 2007 Aug 22& 4885.1, 4835.1 & 50 & 41& AL693 \\
& A & 2007 Aug 23& 4885.1, 4835.1 & 50 & 88& AL693 \\
& B & 2007 Nov 05& 4885.1, 4835.1 & 50 &318& AL693 \\
& B & 2007 Nov 16& 4885.1, 4835.1 & 50 &121& AL693 \\
& C & 2008 May 24& 4885.1, 4835.1 & 50 &223& AL693 \\
& D & 2007 Mar 11& 4885.1, 4835.1 & 50 & 53& AL693 \\
& A & 2007 Jun 28& 1365.0 & 25 & 83& AL693 \\
& A & 2007 Aug 22& 1365.0 & 25 & 39& AL693 \\
& A & 2007 Aug 23& 1365.0 & 25 & 97& AL693 \\
& B & 2007 Nov 16& 1365.0 & 25 &148& AL693 \\
& C & 2008 May 24& 1365.0 & 25 & 61& AL693 \\
0206+35 & A & 2008 Oct 13& 4885.1, 4835.1 & 50 &486& AL797 \\
& A & 2008 Oct 18& 4885.1, 4835.1 & 50 &401& AL797 \\
& B & 2003 Nov 17& 4885.1, 4835.1 & 50 &254& AL604\\
& C & 2004 Mar 20& 4885.1, 4835.1 & 50 &88 & AL604\\
& A & 2004 Oct 24& 1385.1, 1464.9 & 25 &189& AL604\\
& B & 2003 Nov 17& 1385.1, 1464.9 & 25 &110& AL604\\
0755+37 & A &2008 Oct 05& 4885.1, 4835.1 & 50 &477& AL797 \\
& A & 2008 Oct 06& 4885.1, 4835.1 & 50 &383& AL797 \\
& B & 2003 Nov 15& 4885.1, 4835.1 & 50 & 332& AL604\\
& B & 2003 Nov 30& 4885.1, 4835.1 & 50 & 169& AL604\\
& C & 2004 Mar 20& 4885.1, 4835.1 & 50 & 125& AL604\\
& D & 1992 Aug 2 & 4885.1, 4835.1 & 50 &55 & AM364\\
& A & 2004 Oct 25& 1385.1, 1464.9 &12.5&450& AL604\\
& B & 2003 Nov 30& 1385.1, 1464.9 &12.5&160& AL604\\
& C & 2004 Mar 20& 1385.1, 1464.9 &12.5& 21& AL604\\
M\,84 & A & 1980 Nov 09 & 4885.1 & 50 &223& AL020 \\
& A & 1988 Nov 23 & 4885.1, 4835.1 & 50 &405& AW228 \\
& A & 2000 Nov 18 & 4885.1, 4835.1 & 50 &565& AW530 \\
& B & 1981 Jun 25 & 4885.1 & 50 &156& AL020 \\
& C & 1981 Nov 17 & 4885.1 & 50 &286& AL020 \\
& C & 2000 Jun 04 & 4885.1, 4835.1 & 50 &138& AW530 \\
& A & 1980 Nov 09 & 1413.0 & 25 & 86& AL020 \\
& B & 1981 Jun 25 & 1413.0 & 25 & 29& AL020 \\
& B & 2000 Feb 09 & 1385.1, 1464.9 & 50 & 30& AR402 \\
0326+39 & D & 1997 Dec 13 & 4885.1, 4835.1 & 50 & 11& AR386 \\
& D & 1997 Dec 16 & 4885.1, 4835.1 & 50 & 32& AR386 \\
& C & 1998 Dec 4 & 1464.9, 1414.9 & 50 & 11& AR386 \\
& D & 1997 Dec 13 & 1464.9, 1385.1 & 50 & 6& AR386 \\
& D & 1997 Dec 16 & 1464.9, 1385.1 & 50 & 9& AR386 \\
\hline
\end{tabular}
\end{center}
\label{tab:journal}
\end{table*}
\end{center}
\section{Observations and data reduction}
\label{Obs-red}
\subsection{The sources}
\label{Sources}
We selected four bright FR\,I sources for which available data suggested: (a)
that full synthesis observations with the VLA would achieve signal-to-noise
sufficient to image the linearly polarized emission from their counter-jets with
several beamwidths resolution transverse to the radio features, as required by
our modelling methods, and (b) that the jets have formed lobe-like structures rather than diffuse outer
plumes. Three of the sources, NGC\,193, B2\,0206+35 and B2\,0755+37 are
analogous to 3C\,296 in having well-defined outer boundaries. The fourth, M\,84,
is in some respects an intermediate case between the two classes, as we
discuss below. We analysed a combination of new and archival datasets chosen to
give good spatial-frequency coverage at two or three frequencies.
In addition, we improved our low-resolution images of 3C\,296
\citep{LCBH06}. Finally, we analysed shorter, low-resolution, archival VLA
observations for B2\,0326+39.
Alternative names, redshifts, linear scales and references for all of the
sources are given in Table~\ref{tab:sources} (we drop the B2 from source names
from now on). A journal of observations is given in
Table~\ref{tab:journal}\footnote{\citealt{LCBH06} give a full description of the
observations of 3C\,296; this is not repeated here.}.
\begin{center}
\begin{table*}
\caption{Resolutions and noise levels for the images used in this
paper. Col.\,1: source name. Col.\,2: resolution (FWHM, in arcsec). Col.\,3:
frequency, in MHz. Col.\,4: VLA configurations used in the image (not relevant for
the two MERLIN observations). Cols\,5 and 6: deconvolution method for $I$ and
$Q/U$ images, respectively (MR: multi-resolution {\sc clean}; CL:
single-resolution {\sc clean}; ME: maximum entropy). Cols\,7 and 8: off-source
noise levels. $\sigma_I$ is the noise level on the $I$ image; $\sigma_P$ the
average of the noise levels for $Q$ and $U$. All noise levels were determined
before correction for the primary beam response and larger values are
appropriate at significant distances from the field centre. Col.\,9: the
approximate maximum scale of structure imaged reliably \citep{ObsSS}.
Cols\,10 and 11: flux densities measured from the image and derived from
single-dish measurements, respectively. The single-dish flux densities were
taken from the references in Col.\,12. They have been corrected as necessary
to put them on the standard scale of \citet{Baars} and interpolated to our
observing frequencies.
\label{tab:images}}
\begin{minipage}{170mm}
\begin{tabular}{llllllrrrlll}
\hline
Source & FWHM & Freq & Config- & \multicolumn{2}{c}{Method} &
\multicolumn{2}{c}{rms noise level} & Max &\multicolumn{2}{c}{$I_{\rm int}$/Jy}&Reference \\
& [arcsec] & [MHz] & urations& $I~$&$QU$ &
\multicolumn{2}{c}{[$\mu$Jy\,beam$^{-1}$]}& scale & Image & SD \\
& & & && & $\sigma_I$ & $\sigma_P$ & [arcsec] &\\
\hline
NGC\,193& 0.45 & 4860.1 & ABCD &ME&CL& 8.6 & 8.2 &300& & &\\
& 1.35 & 4860.1 & ABCD &MR&CL& 7.1 & 7.4 &300& $0.79\pm 0.02$ &
$0.81\pm 0.04$ &3\\
& 1.60 & 4860.1 & ABCD &MR&CL& 7.5 & 7.6 &300& $0.78\pm 0.02$ &
$0.81\pm 0.04$ &3\\
& 1.60 & 1365.0 & ABC &MR&MR& 36 & 31 &900& $1.96\pm 0.04$ & 1.84 &7\\
& 4.05 & 4860.1 & ABCD &CL&CL& 10 & 10 &300& $0.78\pm 0.02$ &
$0.81\pm 0.04$ &3\\
& 4.05 & 1365.0 & ABC &MR&MR& 54 & 25 &900& $1.95\pm 0.04$ & 1.84 &7\\
0206+35 &0.16& 1658.0 & MERLIN&CL&$-$&41 & $-$ &2.5 & &\\
&0.35 & 4860.1 & ABC &MR&MR& 7.2 & 7.1 &300& & &\\
& 1.20 & 4860.1 & BC &MR&MR& 12 & 12 &300& $0.90\pm 0.02$ &
$0.98\pm 0.12$ & 2\\
& 1.20 & 1464.9 & AB &MR&MR& 25 & 26 &300& $2.12\pm 0.04$ &2.13&5\\
& 1.20 & 1385.1 & AB &MR&MR& 25 & 26 &300& $2.12\pm 0.04$ &2.22&5\\
& 1.20 & 1425.0 & AB &MR&$-$& 19 &$-$ &300& $2.12\pm 0.04$ &2.18&6\\
& 4.50 & 4860.1 & BC &MR&$-$& 18 &$-$ &300& $0.90\pm 0.02$ &
$0.98\pm 0.12$ & 2\\
& 4.50 & 1425.0 & AB &MR&$-$& 38 & $-$ &300& $2.13\pm 0.04$ &2.18&6\\
0755+37 &0.14& 1658.0 & MERLIN&CL&$-$&68 & $-$ &2.5 & &\\
& 0.40 & 4860.1 & ABCD &MR&MR& 8.0 & 7.1 &300& & &\\
& 1.30 & 4860.1 & BCD &MR&MR&7.8 &7.9 &300& $1.26\pm 0.03$
&$1.27\pm 0.02$ &4\\
& 1.30 & 1464.9 & ABC &MR&MR& 28 & 28 &300&$2.60\pm 0.05$&$2.53\pm 0.09$&4\\
& 1.30 & 1385.1 & ABC &MR&MR& 27 & 26 &300&$2.74\pm 0.05$&$2.62\pm 0.09$&4\\
& 1.30 & 1425.0 & ABC &MR&$-$& 20 &$-$ &300&$2.64\pm 0.05$&$2.57\pm 0.09$&4\\
& 4.00 & 4860.1 & BCD &MR&MR&14 & 12 &300& $1.25\pm 0.03$ &
$1.27\pm 0.02$ &4\\
& 4.00 & 1464.9 & ABC &MR&MR& 44 & 42 &300&$2.59\pm 0.05$&$2.53\pm 0.09$&4\\
& 4.00 & 1385.1 & ABC &MR&MR& 46 & 36 &300&$2.73\pm 0.05$&$2.62\pm 0.09$&4\\
& 4.00 & 1425.0 & ABC &MR&MR& 32 &$-$ &300&$2.65\pm 0.05$&$2.57\pm 0.09$&4\\
M\,84 & 0.40 & 4860.1\footnote{Although the 1980 and 1981 observations have a
centre frequency of 4885.1\,MHz, the weighted mean for the combined dataset is
still close to 4860.1\,MHz}& ABC &MR&MR& 11 & 10 &300& &
& \\
& 1.65 & 4860.1$^a$& ABC &MR&MR& 15 & 11 &300& $2.94\pm 0.06$ &
$2.88\pm 0.08$ &5\\
& 1.65 & 1413.0 & AB &MR&MR& 140 & 140 &120& $6.03\pm 0.12$ &
$6.44\pm 0.24$ &5\\
& 4.5 & 4860.1$^a$& C &MR&MR& 23 & 20 &300& $2.98\pm 0.06$ &
$2.88\pm 0.08$ &5\\
& 4.5 & 1464.9 & B &MR&MR& 120 & 70 &120& $6.14\pm 0.12$ &
$6.32\pm 0.24$ &5\\
& 4.5 & 1413.0 & AB &MR&MR& 210 &150 &120& $5.99\pm 0.12$ &
$6.44\pm 0.24$ &5\\
& 4.5 & 1385.1 & B &MR&MR& 130 & 69 &120& $6.46\pm 0.13$ &
$6.51\pm 0.24$ &5\\
3C\,296 & 5.5&8460.1&ABCD&MR&$-$&14 &$-$& 180 & $1.34\pm 0.03$ &$1.20\pm 0.10$&5\\
& 5.5&1479.0&BCD&MR&$-$&24 &$-$& 900 & $4.10\pm 0.08$ &$4.08\pm 0.18$&5\\
0326+39&18.0&4860.1& D&CL&$-$& 47 &$-$& 300 & $0.60\pm 0.01$ &$0.62\pm 0.09$
&1 \\
&18.0&1425.0&CD&CL&$-$&140 &$-$& 900 & $1.47\pm 0.03$ &$1.30$ &6 \\
\hline
\end{tabular}
References: 1 \citet{BWE}, 2 \citet{87GB}, 3 \citet{PMN}, 4 \citet{Kuehr81}, 5
\citet{LP}, 6 \citet{WB}, 7 \citet{PKS}.
\end{minipage}
\end{table*}
\end{center}
\subsection{VLA data reduction}
\label{Reduction}
The VLA data listed in Table~\ref{tab:journal} were calibrated and imaged using
the {\sc aips} software package, following standard procedures with a few
additions. The flux-density scale was set using observations of 3C\,286 or
3C\,48 and (except for 0326+39) the zero-point of ${\bf E}$-vector position
angle was determined using 3C\,286 or 3C\,138, after calibration of the
instrumental leakage terms. The main deviations from standard methods were as
follows.
Firstly, we used the routine {\sc blcal} to compute closure corrections for the
4.9-GHz observations. This was required to correct for large closure errors on
baselines between EVLA and VLA antennas in observations from 2007 onwards
\citep{webref}, but also improved a number of the earlier datasets. Whenever
possible, we included observations of the bright, unresolved calibrator 3C\,84
for this purpose; if it was not accessible during a particular observing run, we used
3C\,286. We found that it was not adequate to use the standard
calibration (which averages over scans) to compute the baseline corrections, as
phase jumps during a calibrator scan caused serious errors in the derived
corrections. We therefore self-calibrated the observations in amplitude and
phase with a solution interval of 10\,s before running {\sc blcal}. We assumed a
point-source model for 3C\,84 and the well-determined {\sc clean} model supplied
with the {\sc aips} distribution for 3C\,286.
Secondly, we imaged in multiple facets to cover the inner part of the primary
beam at L-band and to image confusing sources at large distances from the phase
centre in all bands. Before combining configurations, we subtracted in the
$(u,v)$ plane all sources outside a fixed central field. For 0755+37 at L-band,
this procedure failed to remove sidelobes at the centre of the field from a
bright confusing source close to the half-power point of the primary beam. The
reason is that the VLA primary beam is not azimuthally symmetric, so the
effective complex gain for a distant source is not the same as that at the
pointing centre and varies with time in a different way. We used the {\sc aips}
procedure {\sc peelr} to remove the offending source from the $(u,v)$ data for
each configuration before combining them.
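The principle of the $(u,v)$-plane subtraction, though not the {\sc
aips} implementation (which also handles the $w$-term, time-variable
gains and facet-based source models), can be sketched for point-source
confusing components as follows; the flux densities and direction
cosines are assumed to come from the outlier facet images:
\begin{verbatim}
import numpy as np

def subtract_point_sources(u, v, vis, components):
    """Subtract point-source model visibilities (2D approximation:
    the w-term and primary-beam attenuation are ignored).
    u, v: baseline coordinates in wavelengths; vis: complex
    visibilities; components: iterable of (flux_jy, l, m), where
    l, m are direction cosines relative to the phase centre."""
    model = np.zeros_like(vis)
    for flux, l, m in components:
        # Fourier transform of a point source offset by (l, m):
        model += flux * np.exp(-2j * np.pi * (u * l + v * m))
    return vis - model
\end{verbatim}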
Finally, we corrected for variations in core flux density and amplitude scale
between observations as described in \citet{LCBH06}.
J2000 coordinates are used throughout this paper. If positions from archival
data were originally in the B1950 system, then (u,v,w) coordinates were
recalculated for J2000 before imaging. The astrometry for each of the sources
was set using the A-configuration observations, referenced to a nearby phase
calibrator in the usual manner. Thereafter, the position of the compact core was
held constant during the process of array combination.
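For reference, the B1950 to J2000 conversion of a position can be
reproduced with modern tools; a minimal sketch (ours, using {\tt
astropy}; the coordinate is a made-up example, not one of our
calibrators):
\begin{verbatim}
from astropy.coordinates import SkyCoord, FK5

# Hypothetical B1950 position (illustrative only).
c_b1950 = SkyCoord("12h00m00s", "+30d00m00s",
                   frame="fk4", equinox="B1950")

# Transform to the J2000 (FK5) system used throughout this paper.
c_j2000 = c_b1950.transform_to(FK5(equinox="J2000"))
print(c_j2000.to_string("hmsdms"))
\end{verbatim}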
The observations of M\,84 in 1980 and 1981 used an earlier and less accurate
value of the position of the phase calibrator B1236+077 (alias J1239+075) than
that currently given in the VLA calibrator manual. We have updated the
astrometry to reflect the improved calibrator position. The archival L-band
observations of M\,84 taken in 2000 used a pointing centre displaced by
$\approx$1.1\,arcmin from the centre of the source.
The C-band data were usually taken in two adjacent 50-MHz frequency channels,
which were imaged together. The L-band channels were also imaged together in
$I$, in order to derive spectral-index images. For all sources except
0326+39, they were also imaged independently, primarily for analysis of
linear polarization.
In order to avoid the well-known problems introduced by the conventional {\sc
clean} algorithm for well-resolved, diffuse brightness distributions,
total-intensity images at the higher resolutions were produced using the
multi-scale {\sc clean} algorithm as implemented in the {\sc aips} package
\citep{MSC} or, in one case, a maximum-entropy algorithm \citep[used as
described by \citealt{LP91}]{CE}. The standard single-resolution {\sc clean} was
found to be adequate for the lowest-resolution $I$ images. Stokes $Q$ and $U$
images were {\sc clean}ed using one or more resolutions (we found few
differences between single and multiple-resolution {\sc clean} for these images,
which have little power on large spatial scales). All of the images were
corrected for the effects of the antenna primary beam.
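As an indication of what the last step involves (a sketch only: we show
a Gaussian approximation to the primary beam, whereas the {\sc aips}
correction is based on measured beam models):
\begin{verbatim}
import numpy as np

def primary_beam_correct(image, r_arcmin, freq_ghz):
    """Divide an image by an approximate primary-beam response.
    image: 2D array; r_arcmin: distance of each pixel from the
    pointing centre (arcmin); freq_ghz: frequency (GHz).
    Assumes a Gaussian beam with FWHM ~ 45/freq_GHz arcmin, the
    usual rule of thumb for a 25-m VLA antenna."""
    fwhm = 45.0 / freq_ghz
    attenuation = np.exp(-4.0 * np.log(2.0) * (r_arcmin / fwhm)**2)
    return image / attenuation
\end{verbatim}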
In general, the deep 4.9-GHz images have off-source noise levels very close to
those expected from thermal noise in the receivers alone. There are a few faint
artefacts on the $I$ images, visible as concentric rings around the bright
cores and, for NGC\,193 at 1.35 and 1.6-arcsec resolution, as a quadrupolar
pattern at the $2\sigma$ level. These artefacts are due to errors in
cross-calibration of the different array configurations, which were
particularly troublesome because of the low declination of NGC\,193. The
integrations for all of the L-band images and for the C-band image of 0326+39
are shorter and confusion from sources outside the field of view is worse, so
noise levels are correspondingly higher.
Finally, we produced improved $I$ images from self-calibrated L- and X-band
visibility data for 3C\,296 \citep{LCBH06} using the multi-resolution {\sc
clean} algorithm.
As a check on the amplitude calibration and imaging of the $I$ images used in
spectral-index analysis, we integrated the flux densities using the {\sc
aips} verb {\sc tvstat}. We estimate that our errors are dominated by a residual
scale error of $\approx$2\%. All of the results are in excellent agreement with
single-dish measurements (Table~\ref{tab:images}).
The configurations, resolutions, deconvolution algorithms and noise levels for
the final images are listed in Table~\ref{tab:images}. The noise levels were
measured before correction for the primary beam, and are appropriate for the
centre of the field.
\begin{figure*}
\includegraphics[width=18cm]{ngc193all.eps}
\caption{Images of NGC\,193 at 4.05-arcsec resolution. (a) Total intensity at
4.9\,GHz. (b) Intensity gradient for the image in panel (a). The arrows mark
the high brightness gradients which coincide with the edges of the
flat-spectrum caps (which are also marked). (c) Spectral-index image between
4.9 and 1.4\,GHz. Values outside the white contour are lower limits. The caps
are indicated. (d) Vectors with directions along ${\bf B}_{\rm a}$ and
magnitudes proportional to $p_{4.9}$, plotted on a false-colour
image of total intensity at 4.9\,GHz.
\label{fig:ngc193all}}
\end{figure*}
\begin{figure*}
\includegraphics[width=11.75cm]{ngc193hires.eps}
\caption{High-resolution images of the inner jets of NGC\,193.
(a) Total intensity at 4.9\,GHz with 1.6\,arcsec FWHM resolution. (b) Brightness
gradient derived from the image in (a). Sharp steps in brightness (``arcs'')
are indicated. (c) Spectral index between 4.9 and 1.4\,GHz at 1.6-arcsec
resolution. The most prominent of the arcs, which shows a flattening in
spectral index, is marked. (d) Vectors with directions along ${\bf B}_{\rm a}$
and magnitudes proportional to $p_{4.9}$, plotted on false-colour images of
total intensity at 4.9\,GHz with 1.35\,arcsec FWHM resolution. The
vectors have been corrected for Faraday rotation using a 4.05-arcsec FWHM
resolution RM image.
(e) As in panel (d), but for the innermost jet regions at 0.45-arcsec resolution.
\label{fig:ngc193hires}}
\end{figure*}
\subsection{MERLIN observations and reduction}
\label{MERLIN}
We also present MERLIN imaging in total intensity only for two of the sources.
0206+35 was observed for a total time of about 14 hours. The array included the
following telescopes: Defford, Cambridge, Knockin, Wardle, Darnhall, Mk2, Lovell
and Tabley. The observations were carried out at 1658\,MHz with a bandwidth of 15
MHz, in each of left and right circular polarizations. The nearby compact
source 0201+365 was used as the phase calibrator and the flux-density scale was
determined using 3C\,286. The data were edited, corrected for
elevation-dependent effects and non-closing errors and flux-calibrated using the
standard MERLIN analysis programs. Imaging and self-calibration were again
performed using the {\sc aips} package. The off-source image rms after
self-calibration was close to that expected from receiver noise alone. The
MERLIN observations of 0755+37 were described by \citet{Bondi00}.
The parameters of both MERLIN images are given in Table~\ref{tab:images}.
\section{Images}
\label{Images}
\subsection{General}
Our conventions for Figs~\ref{fig:ngc193all} -- \ref{fig:0326_i_si18} and the
descriptions in the text are as follows.
\begin{enumerate}
\item Images of total intensity, $I$, are shown as grey-scales, over ranges indicated
by the labelled wedges. The units are mJy\,beam$^{-1}$.
\item We also show grey-scales of intensity gradient, $|\nabla I|$, approximated
using a Sobel filter \citep{Sobel}.
\item We use the notation $P = (Q^2+U^2)^{1/2}$ for polarized intensity and $p =
P/I$ for the degree of linear polarization. $p_\nu$ is the degree of
polarization at frequency $\nu$ (in GHz). All values of $P$ have been
corrected for Ricean bias \citep{WK}. Linear polarization is illustrated by
plots in which vectors with lengths proportional to the degree of polarization
at 4.9\,GHz ($p_{4.9}$) and directions along the apparent magnetic field
(${\bf B}_{\rm a}$) are superposed on false-colour images of either $I$ (again
with a labelled wedge indicating the range) or $|\nabla I|$. A value of $p =
1$ is indicated by the labelled bar. The apparent field direction is $\chi_0
+ \pi/2$, where $\chi_0$ is the ${\bf E}$-vector position angle corrected to
zero-wavelength by fitting to the relation $\chi(\lambda^2) = \chi_0 + {\rm
RM}\lambda^2$ for foreground Faraday rotation derived from the images in
Table~\ref{tab:images} (RM is the rotation measure). In some sources, we used
RM images at lower resolution to correct the position angles, as detailed in
the captions. This procedure is valid if the RM varies smoothly over the
low-resolution image, and maximises the area over which we can determine the
direction of the apparent field. Vectors are plotted where: (a) $I \geq 5
\sigma_I$, (b) $P \geq 3 \sigma_P$ (the noise levels are given in
Table~\ref{tab:images}) and (c) the RM is well-determined. The RM images for
0206+35 and M\,84 were determined using three and four frequencies,
respectively and are shown in \citet{Guidetti11}; that for 0755+37 (also from
three frequencies) will be described by Guidetti et al. (in preparation). For
NGC\,193, images were only available at 4.86 and 1.365\,GHz. The integrated
RM of the source is small ($-18 \pm 2$\,rad\,m$^{-2}$; \citealt*{SKB}) as are
the variations of position-angle difference across the source. We are
therefore confident that a two-frequency RM determination is adequate in this
case (the correction required to derive ${\bf B}_{\rm a}$ is in any case very small).
\item Spectral index, $\alpha$, is defined in the sense $S(\nu) \propto
\nu^{-\alpha}$ and we use $\alpha^{\nu_2}_{\nu_1}$ for the index between
frequencies ${\nu_1}$ and ${\nu_2}$ (in GHz). In the false-colour images of
spectral index, the input $I$ images at the lower frequency are always blanked
for $I < 3\sigma_I$.\footnote{The spectral-index image for M\,84 has additional
blanking, as noted in the caption of Fig.~\ref{fig:m84all}(c)} In cases where
the areas over which emission was detected were essentially the same at both
frequencies, we also blanked the higher-frequency image at $3\sigma_I$. If
significant areas were detected only at the lower frequency, we did not blank
the higher-frequency image. Instead, we plot a single contour which indicates the
boundary of the region where the source is detected at $I \geq 3\sigma_I$ at the
higher frequency. Outside this contour, the spectral indices are lower limits.
We have carefully inspected the spectral-index images to check for edge effects
and zero-level problems. We are confident that the values and lower limits in
all of the unblanked regions are reliable except where explicitly stated, and
that the steep spectra seen at the edges of the lobes in several sources are
real. (A schematic numerical sketch of the gradient, polarization and
spectral-index quantities defined in this list is given after it.)
\item The restoring beam (FWHM) is shown at the bottom of each plot.
\item We refer to parts of the sources by the abbreviations N, S, E, W (for
North, South, East, West) etc.
\item We refer to the {\em main} (brighter) and {\em counter} (fainter) jets.
\end {enumerate}
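For definiteness, the pixel-level quantities defined above can be
sketched numerically as follows. This is our illustration (using {\tt
numpy} and {\tt scipy}), not the implementation used in our reduction;
the Ricean debiasing shown is the usual first-order
$\sqrt{P^2-\sigma_P^2}$ approximation, and the RM fit assumes position
angles already unwrapped to resolve the $n\pi$ ambiguity:
\begin{verbatim}
import numpy as np
from scipy.ndimage import sobel

def gradient_image(I):
    """|grad I| approximated with a Sobel filter."""
    return np.hypot(sobel(I, axis=0), sobel(I, axis=1))

def debiased_p(Q, U, sigma_p):
    """Polarized intensity with first-order Ricean debiasing."""
    P = np.hypot(Q, U)
    return np.sqrt(np.maximum(P**2 - sigma_p**2, 0.0))

def rm_fit(chi, lam2):
    """Least-squares fit of chi(lambda^2) = chi_0 + RM * lambda^2.
    chi: position angles (rad); lam2: wavelengths squared (m^2).
    Returns (chi_0, RM) with RM in rad/m^2."""
    RM, chi0 = np.polyfit(lam2, chi, 1)
    return chi0, RM

def spectral_index(S1, S2, nu1, nu2):
    """alpha in the convention S(nu) proportional to nu**(-alpha)."""
    return -np.log(S2 / S1) / np.log(nu2 / nu1)
\end{verbatim}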
\subsection{NGC\,193}
\label{ngc193}
\begin{figure*}
\includegraphics[width=13.25cm]{0206.alla.eps}
\caption{Images of 0206+35. (a) Total intensity at 1.4\,GHz with 4.5-arcsec
FWHM resolution. (b) Total intensity at 4.9\,GHz, 1.2-arcsec FWHM. (c) Intensity gradient
at 1.4\,GHz, 4.5-arcsec FWHM. (d) Intensity gradient at 1.4\,GHz, 1.2-arcsec
FWHM. The arrows mark the high brightness gradients around the inner boundary
of the flat-spectrum cap. (e) Spectral index between 4.9 and 1.4\,GHz,
4.5-arcsec FWHM. (f) Spectral index between 4.9 and 1.4\,GHz, 1.2-arcsec
FWHM. In panels (e) and (f), values outside the contours are lower limits.
\label{fig:0206all}}
\end{figure*}
Fig.~\ref{fig:ngc193all}(a) shows the total-intensity distribution over NGC\,193
at 4.9 GHz and 4.05-arcsec FWHM resolution. The symmetrical jets appear to
broaden rapidly and also bend away from their initial straight path as they
reach the midpoints of symmetric lobes. The lobes both have well-defined
leading edges that approximate arcs of circles in projection on the sky (the W lobe
having a larger radius of curvature), but they lack hot spots that might mark the
termination points of the jets. A broad, faint emission bridge fills the central
region of the source
between the lobes and appears to be wider than the lobes in the N-S
direction in the centre of the source (similar to the ``wings'' observed in some
FR\,II sources; \citealt[and references therein]{wings}). Figs~\ref{fig:ngc193all}(b)
and (c) taken together show that broad ``caps'' of emission can be delineated
in both lobes by enhanced intensity gradients and by lower-than-average
values of $\alpha^{4.9}_{1.4}$. The intensity
gradients are largest at the outer edges of these caps, but there are also gradient
features within the lobes (marked with arrows on Fig.~\ref{fig:ngc193all}b)
which coincide with the edges of the flatter-spectrum region. The regions where
the jets are most prominent have low spectral indices around 0.6. The spectral
index at the trailing edges of the caps steepens smoothly to $\alpha \approx
0.9$ where the emission merges with the broader, symmetric lobes. The most
diffuse lobe emission has spectral indices increasing from $\approx$1 at the edges
of the most elongated parts of the lobes to $\approx$1.4 near the centre of the
source, a spectral-index pattern characteristic of lobed radio sources of both
FR classes (see Section~\ref{lobes}). There are regions of emission with
spectral index approaching 2 at the edges of the faintest emission on the N
and S edges of the source.\footnote{The spectral index within a few arcsec
N and S of the core is affected by artefacts in the 4.9-GHz image.}
Fig.~\ref{fig:ngc193all}(d) shows that the distribution of the apparent magnetic
field direction over the lobes is basically circumferential, while the magnetic
field in the jets is perpendicular to the jets over most of their lengths. The
structure of the apparent magnetic field in both the jets and the lobes appears
regular, and characteristic of that commonly found in jets of FR\,I sources and
in the lobes of both FR classes.
\begin{figure*}
\includegraphics[width=10cm]{0206.BVEC.1.2.EPS}
\caption{Vectors with directions along the apparent magnetic field and lengths
proportional to the degree of polarization at 4.9\,GHz, superimposed on a
false-colour image of total intensity across 0206+35 at the same
frequency. The resolution is 1.2\,arcsec FWHM. The apparent field directions
were derived using a three-frequency rotation-measure fit \citep{Guidetti11}.
\label{fig:0206vec}}
\end{figure*}
Fig.~\ref{fig:ngc193hires}(a) shows the total-intensity distribution over the
jets at 4.9\,GHz with a resolution of 1.6\,arcsec FWHM. Both jets are fairly straight and
similar in overall appearance, exhibiting rapid lateral expansion just beyond
the distance from the unresolved nuclear radio source at which the E jet is
markedly brighter than the W jet. Their edges are well delineated by steep
transverse intensity gradients near the midpoint of the source
(Fig.~\ref{fig:ngc193hires}b). The surface brightnesses of both jets decrease
smoothly with distance from the nucleus except at a distance of
$\approx$25\,arcsec, where there are more sudden drops, most clearly visible
as ``arcs'' crossing the jets on the gradient image (indicated on
Fig.~\ref{fig:ngc193hires}b). The overall spectral index of the jet-dominated
emission appears to steepen with distance from the nucleus, but this is almost certainly the result of
superposition of dimming jets on steeper-spectrum lobe emission. The prominent arc in the
E jet is associated with a slight flattening in the spectrum
(Fig.~\ref{fig:ngc193hires}c).
Fig.~\ref{fig:ngc193hires}(d) shows the intensity and apparent magnetic field
distributions over the inner $\approx$45-arcsec regions of both jets at 1.35-arcsec
resolution. The magnetic field organization over both jets is quite regular,
with the field closest to the jet axis predominantly orientated perpendicular to
the axis. The apparent field at the edges of the inner jets is parallel to the
rapidly-expanding outer isophotes. Farther from the nucleus, the edge field
directions in both jets converge towards the axis to form almost circular
patterns.
Fig.~\ref{fig:ngc193hires}(e) shows the 4.9-GHz total intensity and apparent
magnetic-field distributions over the inner 15 arcsec ($\approx$4.5 kpc) of the
jets at 0.45-arcsec resolution. Both jets exhibit faint inner regions in the
first $\approx$2 arcsec from the nucleus before they brighten and subsequently
flare. There is bright, non-axisymmetric knot structure in the first $\approx$4
arcsec of the E jet, downstream of the flaring point \citep{LPdRF}, where
both jets brighten abruptly.
\subsection{0206+35}
\begin{figure}
\includegraphics[width=7cm]{0206.hires.eps}
\caption{(a) Grey-scale of the 4.9-GHz total-intensity distribution over the
jets in 0206+35 at 0.35-arcsec FWHM resolution. The grey-scale range is 0
-- 2.5\,mJy beam$^{-1}$. (b) Vectors with lengths proportional to the degree
of polarization at 4.9\,GHz and directions along the apparent magnetic field,
superimposed on a false-colour display of the total intensity at 4.9\,GHz. The
resolution is 0.35\,arcsec FWHM. The vector directions are derived from
3-frequency RM fits at 1.2-arcsec resolution. (c) Grey-scale of the 1.6-GHz total
intensity distribution over the inner 5 arcsec of the NW jet and the
unresolved nuclear source in 0206+35 at 0.16-arcsec FWHM resolution, from
MERLIN. The grey-scale range is 0 -- 5\,mJy beam$^{-1}$.
\label{fig:0206hires}}
\end{figure}
Fig.~\ref{fig:0206all} shows the total-intensity, brightness gradient, and
spectral-index distributions over the whole of 0206+35 at two resolutions.
Fig.~\ref{fig:0206all}(a) at 1.4 GHz, 4.5-arcsec FWHM resolution, shows that the
large scale structure consists of two lobes, overlapping and circular in
cross-section, with well-defined outer edges to the NW and SE of the source,
superimposed on fainter diffuse emission to the N and S.
Fig.~\ref{fig:0206all}(b), at 4.9\,GHz and 1.2-arcsec resolution, shows the
source in more detail but with reduced sensitivity to the largest-scale
emission. At this resolution, both lobes show sharp outer boundaries. The
roughly circular edge of the NW lobe protrudes beyond the diffuse emission,
whereas the corresponding feature in the SE is well inside the outer boundary of
the source and is most obvious in the E of the lobe, close to the termination of
the jet. If the orientation of $\theta \approx 40^\circ$ determined for the
inner jets (Laing \& Bridle, in preparation) also applies to the lobes, then
they are presumably ellipsoidal with an axial ratio $\approx$1.6.
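This estimate follows from simple projection geometry: an ellipsoid that is
prolate along the jet axis and inclined at $\theta$ to the line of sight
presents a circular outline only if its true axial ratio is
\[
\mathcal{R} \simeq \frac{1}{\sin\theta} = \frac{1}{\sin 40^\circ} \approx 1.6 .
\]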
Fig.~\ref{fig:0206all}(b) also shows some internal structure in both jets. The
NW jet has the brighter base, and both bends and brightens as it enters its
lobe, after which its path meanders. The SE counter-jet appears to expand more
rapidly initially, then also meanders as it enters its lobe.
Figs~\ref{fig:0206all}(c) and (d) show the 1.4-GHz intensity gradient images of
0206+35 at 4.5-arcsec and 1.2-arcsec resolution, respectively. The edges of
the jets are clearly marked by enhanced intensity gradients at both resolutions,
while significant internal structure is also apparent in the lobes. Both lobes
exhibit strong brightness gradients at their outer edges in these displays,
corresponding to the sharp boundaries noted earlier. There is a particularly
striking correlation between the main features of these intensity-gradient
images and of the two 1.4 to 4.9-GHz spectral-index images shown as
Figs.~\ref{fig:0206all}(e) and (f). The emission inside the brightest intensity
gradients around the jet has a much lower spectral index, typically $<$0.65,
than the $\approx$0.7 to 1.0 spectral index that is prevalent over the rest of the
two spherical lobes. The diffuse emission outside the lobes has spectral
indices ranging from $\approx$1.05 to $>$2, generally increasing with distance from
the lobes towards the outer edge of the source. Also notable are the ``fans''
of lower-spectral-index emission that can be traced from the ends of the jets to
the regions at the edges of both lobes that show the most pronounced brightness
gradients. This is particularly striking in the NW lobe, where a
cap of lower-spectral-index emission is bounded by the high brightness
gradients marked by arrows on Fig.~\ref{fig:0206all}(d) and by the edge of the
lobe. This suggests that the jet outflow has reached the end of the lobe in a
less-collimated, but still identifiable, form. In the SE lobe, the jet bends to
the N before appearing to impact the edge of the lobe at another enhanced
brightness gradient, marked in Fig.~\ref{fig:0206all}(d).
Fig.~\ref{fig:0206vec} shows that the magnetic field configuration in both
lobes is well ordered and basically circumferential, while the magnetic field in
the jets is predominantly perpendicular to their axis near the centre lines of
the jets, with evidence for parallel field at the jet edges. The southern edge
of the source is strongly polarized with field tangential to the boundary.
At 0.35-arcsec FWHM resolution (Fig.~\ref{fig:0206hires}a) the lobe emission is
substantially resolved out so the images are dominated by the jets. The bright
base of the main (NW) jet is clearly centre-brightened, while the corresponding
segment of the counter-jet appears centre-darkened. While at first sight the
main jet appears to expand more slowly than the counter-jet, {\it the geometry of the
centre-darkened segment of the counter-jet is very similar to that of the main
jet over the first $\approx$10 arcsec}. This suggests a two-component view of
these jets wherein a narrow inner structure, brighter to the NW and fainter to
the SE, is seen superposed on a broader expanding structure that is slightly
brighter to the SE than to the NW. We will interpret this elsewhere
(Laing \& Bridle, in preparation) as evidence
for a symmetrical relativistic outflow surrounded by a mildly relativistic
backflow in this source.
The magnetic field is clearly perpendicular to the jet axis over most of the
length of both jets (Fig.~\ref{fig:0206hires}b), but the first few arcsec of the
main jet, where it is brightest, have the magnetic field parallel to the jet
axis. There is also evidence for oblique, or parallel, magnetic field at the
edges of both jets.
Fig.~\ref{fig:0206hires}(c) shows the bright, narrow base of the NW jet
from MERLIN data at 1.6 GHz. This higher-resolution (0.16\,arcsec FWHM) image
unambiguously identifies the flaring point in the jet, 0.7\,arcsec from the
nucleus, where it brightens abruptly. This is an important fiducial distance for
our modelling. The image also shows the start of rapid expansion downstream of
the flaring point.
\subsection{0755+37}
\label{0755}
\begin{figure*}
\includegraphics[width=14cm]{0755all.eps}
\caption{Images of the whole of 0755+37 at resolutions of 4.0\,arcsec FWHM
(panels a, c, e and g) and 1.3\,arcsec FWHM (panels b, d and f). (a) Total
intensity at 1.4\,GHz. (b) Total intensity at 4.9\,GHz. (c) Intensity gradient
at 1.4\,GHz. (d) Intensity gradient at 4.9\,GHz. (e) and (f) spectral index
between 4.9 and 1.4\,GHz. (g) vectors with lengths proportional to $p_{4.9}$
and directions along the apparent magnetic field from a three-frequency
rotation-measure fit (Guidetti et al., in preparation), superimposed on the
intensity gradient at 4.9\,GHz.
\label{fig:0755all}}
\end{figure*}
Fig.~\ref{fig:0755all} shows the total-intensity, $|\nabla I|$ and $\alpha$
distributions over all of 0755+37 at two resolutions and the polarization at low
resolution. Fig.~\ref{fig:0755all}(a) at 1.4 GHz, 4.0-arcsec FWHM resolution, shows
that the large-scale structure consists of two lobes, again roughly
circular in projection, with well-defined but not particularly sharp outer edges
to the W and E, plus fainter diffuse emission to the N and S. The E lobe has
a series of narrow ridges and brightness steps, all roughly arcs of circles in
projection, in the region where the brighter jet appears to terminate. They are
recessed from the E boundary of this lobe and some may be the edges of thin
shells. The structure of the W lobe is unusual, containing some arc-like
features and other structure suggestive of a rapidly decollimating counter-jet
W of the nucleus, as previously described by \citet{Bondi00}, with a
``hole'', or deficit of emission, in the region where the counter-jet might be
expected to terminate. Fig.~\ref{fig:0755all}(b) at 4.9 GHz, 1.3-arcsec FWHM
resolution, is insensitive to the largest scales of emission to the N and S of
the main source, but clearly shows internal structure in the E lobe, including
the concentric semicircular ridges at the end of the jet (labelled on the
figure). All of the substructure in the W lobe appears to be resolving out,
though vestiges of the ridge apparent at lower resolution in
Fig.~\ref{fig:0755all}(a) remain.
\begin{figure*}
\includegraphics[width=13cm]{0755.BVEC.RMLOCORR.1.3.EPS}
\caption{Vectors with lengths proportional to the degree of polarization at
4.9\,GHz and directions along the apparent magnetic field, superimposed on a
false-colour display of the total intensity over 0755+37 at 4.9\,GHz. The
resolution is 1.3\,arcsec FWHM and the colour-scale range is 0 -- 2.5\,mJy
beam$^{-1}$. The vector directions are derived from three-frequency RM fits
at 4.0-arcsec resolution (Guidetti et al., in preparation).
\label{fig:0755vectors13}}
\end{figure*}
Figs.~\ref{fig:0755all}(c) and (d) show the intensity gradients over the whole
source at 1.4 GHz, 4.0\,arcsec FWHM resolution and 4.9 GHz, 1.3\,arcsec FWHM
resolution, respectively. These figures
emphasise the strong differences between the internal structures of the
lobes: multiple recessed ridges with significant brightness gradients in the E
lobe, but much smoother structure in the W lobe away from the
jet.
Figs.~\ref{fig:0755all}(e) and (f) clearly show three distinct spectral-index
regions on each side of the source, as follows.
\begin{enumerate}
\item $\alpha \approx 0.6$ at the bases of both jets, in a broad
cap in the NW part of the W lobe, and all along the region delineated
by the strongest brightness gradients in the E jet.
\item $\alpha \approx 0.8$ over most of the rest of both lobes,
including the ridge extending Northward from the nucleus.
\item Steeper-spectrum diffuse emission, with $\alpha$ increasing from
$\approx$1 in the central part of the source to $\approx$1.5 at the N and S
edges.
\end{enumerate}
The spectral-index image suggests that the counter-jet flow persists as far as
the ridge of emission in the W lobe marked on Figs.~\ref{fig:0755all}(a) and (b), despite the
lack of evidence for this in total intensity.
Fig.~\ref{fig:0755all}(g) shows that the apparent magnetic field in both lobes
is exceptionally well organised, and mainly tangential to the lobe
boundaries. The degree of linear polarization is $p\ga 0.6$ over much of both
lobes, consistent with the high degree of organization evident from the
vectors. Note the excellent alignment between the field vectors and the ridges
of high brightness gradient on both sides of the source.
Fig.~\ref{fig:0755vectors13} confirms the exceptional degree of ordering of the
magnetic field in both lobes at 1.3-arcsec FWHM resolution.
\begin{figure*}
\includegraphics[width=17cm]{0755hires.eps}
\caption{High-resolution images of the inner jets of 0755+37. (a) Total
intensity at 4.9\,GHz with 1.3-arcsec FWHM resolution, plotted with a compressed grey-scale
range to emphasise fine-scale structure in and around the jets. (b)
Vectors with lengths proportional to $p_{4.9}$ and directions along the
apparent magnetic field superimposed on a false-colour image of intensity
gradient at 4.9\,GHz. The resolution is 1.3\,arcsec FWHM. (c) Total intensity
at 4.9\,GHz, 0.4-arcsec FWHM. (d) Main jet base at 4.9\,GHz with 0.4-arcsec
FWHM resolution. ${\bf B}_{\rm a}$ vectors with lengths proportional to $p_{4.9}$ are
superposed on a false-colour plot of total intensity. (e) MERLIN image of the
main jet base at 1.7\,GHz with 0.16-arcsec FWHM resolution \citep{Bondi00}.
Corrections for Faraday rotation in panels (b) and (d) were made using a
three-frequency RM fit at 4.05-arcsec resolution (Guidetti et al., in preparation).
\label{fig:0755hires}}
\end{figure*}
Fig.~\ref{fig:0755hires} shows the jet bases on larger scales and at higher
resolution. Fig.~\ref{fig:0755hires}(a) is optimised to emphasise fine-scale
structure in the jets at 1.3-arcsec resolution. It shows that the E jet has the
brighter base but becomes limb-brightened at the position indicated on
Fig.~\ref{fig:0755hires}(a) at about 18\,arcsec from the
nucleus. At this resolution, the W counter-jet contains a centre-darkened
structure that expands at about the same rate as the brighter E jet, embedded in
a much broader, and more rapidly-expanding cone of emission with at least two
curved arcs (Fig.~\ref{fig:0755hires}a). The centre of the counter-jet is
crossed by a prominent, straight bar of emission (labelled as such on
Fig.~\ref{fig:0755hires}a) at $\approx$15\,arcsec from the core. The steepest
brightness gradients at the edges of both jets are very similar in form within
$\approx$15\,arcsec of the nucleus so that, as in 0206+35, the {\it inner}
geometry of the counter-jet structure appears to mimic that of the brighter jet
on the other side of the source (Fig.~\ref{fig:0755hires}b).
Fig.~\ref{fig:0755hires}(b) also shows detail of the magnetic field organisation
at the bases of both jets at 1.3-arcsec resolution. The bright base of the E
jet has the magnetic field roughly parallel to the jet axis, but there is a
rapid transition, with the field becoming perpendicular to the axis as the jet
expands. On the counter-jet side the field is also transverse. There is little
evidence for any perturbation of the magnetic field structure at the edges of
the jets except at the N edge of the counter-jet where the magnetic field
becomes parallel to the steepest brightness gradient once the jet widens
significantly. The very high degree of polarization in the surrounding diffuse
emission makes it difficult to disentangle the true polarization of the jets
where their emission is weak, e.g.\ at their edges.
Figs~\ref{fig:0755hires}(a) and (b) also show a filament of faint
emission which extends for about 30 arcsec Northward from the vicinity of the
nuclear source, roughly perpendicular to the jets. It has a very high degree of
linear polarization, with an apparent magnetic field parallel to its length.
Fig.~\ref{fig:0755vectors13} suggests that this highly polarized
filament may be part of a larger region of enhanced polarization that delineates
the inner boundary of the W lobe.
\begin{figure*}
\includegraphics[width=16.5cm]{m84all.eps}
\caption{Images of M\,84 at 1.65-arcsec FWHM resolution. (a) 4.9-GHz total intensity.
(b) 4.9-GHz intensity gradient. (c) Spectral index between 1.4 and 4.9\,GHz,
plotted only where its rms error is $<$0.1. The vertical ``streaks'' are
artefacts. Arrows mark the areas where the spectral index is significantly
steeper than the typical value of $\alpha = 0.6$. (d) ${\bf B}_{\rm a}$ vectors with
lengths proportional to $p_{4.9}$, superposed on a false-colour plot of
intensity gradient. Corrections for Faraday rotation were made using a
4-frequency RM image at 4.5-arcsec resolution \citep{Guidetti11}. All panels
show identical areas.
\label{fig:m84all}}
\end{figure*}
\begin{figure*}
\includegraphics[width=17cm]{m84hires.eps}
\caption{4.9-GHz images of M\,84 at 0.4-arcsec FWHM resolution. (a) Total intensity
for the whole source. (b) Total intensity for the jets. (c) Intensity gradient
for the jets. Three prominent ``arcs'' in the N jet are labelled on panels (b)
and (c). (d) Total intensity for the inner jets, showing the abrupt bend in
the counter-jet. (e) ${\bf B}_{\rm a}$ vectors with lengths proportional to
$p_{4.9}$, superimposed on a false-colour image of intensity gradient for the
N jet. (f) As (e), but for the S jet.
\label{fig:m84hires}}
\end{figure*}
Fig.~\ref{fig:0755hires}(c) shows the total-intensity structures at the bases of
the jets at 0.4 arcsec resolution. The main jet is clearly centre-brightened
whereas the counter-jet is not and the brighter edges of the counter-jet lie
mostly outside the region that would be delineated by reflecting the main
jet across the nucleus. As in 0206+35 (Fig.~\ref{fig:0206hires}a) there is a
narrow collimated structure within which the main jet is systematically brighter
than the counter-jet apparently superposed on a broader structure which is
brighter around the counter-jet than around the main jet. As for 0206+35, we
will show elsewhere (Laing \& Bridle, in preparation) that this structure can be modelled
as a symmetrical relativistic outflow surrounded by modestly relativistic backflow.
The polarization image in Fig.~\ref{fig:0755hires}(d) shows the extent of the
region at the bright base of the main jet in which the magnetic field at the edges
is parallel to the expanding outer isophotes, whereas the on-axis field is
oblique (see also Fig.~\ref{fig:0755hires}b). Finally, a 1.7-GHz MERLIN image of
the main jet base (Fig.~\ref{fig:0755hires}e) shows the position of the flaring
point and the details of the initial expansion.
\subsection{M\,84}
\label{m84}
M\,84 is of particular interest for two reasons: it has a much lower radio
luminosity than the other sources we have studied and it shows very clear
evidence for interaction with the surrounding IGM \citep{M84Chandra}.
Fig.~\ref{fig:m84all}(a) shows the total-intensity distribution over M\,84 at
4.9\,GHz with 1.65-arcsec FWHM resolution\footnote{Lower-resolution (FWHM
$\approx$4\,arcsec) radio observations, shown by \citet{LB87}, are not
reproduced here.}. Both jets of this small (overall extent $\approx$12 kpc),
low-luminosity radio source expand rapidly and deflect within about 1 arcmin.
They are surrounded by diffuse emission (at least in projection) everywhere
except perhaps within a few arcsec of the nucleus. The initially brighter N jet
can be traced as far as the edge of its lobe, where it bends through
$\approx$90$^\circ$ in projection and decollimates on impact. The bending is
accompanied by strong brightness gradients (Fig.~\ref{fig:m84all}b). In
contrast, and uniquely amongst the sources in this paper, the S jet (initially
fainter and misaligned with the nucleus) appears to terminate within its lobe
and to feed a bubble-like structure with significant internal brightness
gradients and filaments. The bubble is contained within a smoother, more
elongated structure, at least in projection. The spectral index between 1.4 and
4.9\,GHz (Fig.~\ref{fig:m84all}c) is constant with $\alpha \approx 0.6$ over the
jets, within the Southern bubble and over most of the N lobe. The only regions
of significantly steeper spectral index that we have detected (marked by arrows
on Fig.~\ref{fig:m84all}c) are on both sides of the S jet base and to the
W of the N jet. The current observations are too noisy to determine the
spectral index in the low-surface-brightness emission outside the southern
bubble. Fig.~\ref{fig:m84all}(d) shows the apparent magnetic field structure
over the whole source at 1.65-arcsec FWHM resolution. The magnetic field in the
S lobe is broadly circumferential and appears well-aligned with the peak in
brightness gradient at the edge of the bubble. There is a sudden increase in
the degree of polarization at the edge of the bubble, suggesting a discontinuity
in the field structure at that location. A configuration in which the field is
confined to ellipsoidal shells but is otherwise random (model A of \citealt{L80})
gives qualitatively the correct polarization distribution, but the predicted
variation of $p$ across the lobe is smoother than we observe. The magnetic field
in the N lobe is predominantly perpendicular to the presumed path of the jet
along its mid-line.
Fig.~\ref{fig:m84hires}(a) shows the 4.9-GHz total-intensity distribution over
the whole source at 0.4-arcsec FWHM resolution. This highlights the filamentary
structure in the N lobe, the S bubble and a thin rim of emission around the S
lobe. The intensity and gradient images of the jets at this resolution
(Fig.~\ref{fig:m84hires}b and c) emphasise the edges of both jets and the curved
arcs in the north. The former also shows a curious thin feature (labelled A)
joining the S jet and the edge of its lobe. The misalignment (non-collinearity)
of the axes of the N and S jets beyond a few arcsec (a few hundred parsecs) from
the nucleus and the initially knotty structure of the N jet can be seen on a
larger scale in Fig.~\ref{fig:m84hires}(d), which also shows faint emission
close to the nucleus on both sides.
Fig.~\ref{fig:m84hires}(e) and (f) show that, although the apparent magnetic field
in the jets is locally well-organised, there are significant regions where the
field is oblique to the jet axis. In the outer parts of both jets the field appears to be
predominantly perpendicular to the jet axis, but the jet emission also becomes blended with
that from the lobes.
M\,84 may be an intermediate case between lobed and tailed sources, showing some
characteristics of each class. The N jet terminates in a sharp bend at the outer
edge of its lobe, as often seen in lobed sources (e.g.\ 3C\,296,
Section~\ref{3c296}), but there are hints of a nascent tail structure in the
NE. This is supported by {\em Chandra} imaging of M\,84 \citep{M84Chandra},
which suggests that the NE lobe is breaking out of the surrounding hot
plasma. In contrast, the S jet terminates well within its bubble-like lobe. The
oscillation of the S jet prior to its eventual disruption is very reminiscent of
that of the jets within the S spurs of the tailed sources 3C\,31 and 3C\,449
\citep{3c31ls,KSR}. The spectral gradients (such as they are) are more
characteristic of lobed sources, with no hint of steepening outwards. The
constancy of the spectral index across the radio structure is not surprising:
given the small ($\approx$12-kpc) size of the source, synchrotron losses are unlikely to
have had enough time to steepen the spectrum at GHz frequencies even in the more
extended regions.
\subsection{3C\,296}
\label{3c296}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{3c296spec.eps}
\caption{(a) Grey-scale of the 1.4-GHz total intensity over 3C\,296 at
5.5-arcsec FWHM resolution.
(b) Intensity gradient image at the same frequency and resolution as panel (a).
(c) False-colour plot of the 1.4 to 4.9 GHz spectral index distribution at
5.5-arcsec FWHM resolution; data plotted outside the white contour are lower
limits.
\label{fig:296_i_si55}}
\end{center}
\end{figure}
Fig.~\ref{fig:296_i_si55} shows the 1.4-GHz total intensity and brightness
gradient together with the 1.4 to 4.9-GHz spectral index distributions over
3C\,296 at 5.5-arcsec resolution. The intensity data are essentially those in
Figs.1(a) and 4(a) of \citet{LCBH06} with an improved deconvolution, but the
grey-scale range in Fig.~\ref{fig:296_i_si55}(a) is chosen to show the jets more
clearly where they appear to enter the lobes. The corresponding intensity
gradient is shown in Fig.~\ref{fig:296_i_si55}(b). Lower limits to the spectral
indices have been plotted outside the white intensity contour in
Fig.~\ref{fig:296_i_si55}(c) to provide a better representation of the
large-scale spectral gradients at the edges of the radio source. There is clear
evidence that the flatter-spectrum ($\alpha \approx$ 0.5 to 0.65) jets propagate
to the edges of both lobes, where they deflect and eventually blend with more
extended steeper-spectrum emission whose spectral index $\alpha \approx$ 1. The
NE jet forms a cap of flat-spectrum emission with a semicircular outer boundary,
again marked by sharp brightness gradients. The flow (as traced by its flatter
spectrum) then turns through $\approx 140^\circ$ in projection, crosses the
entire lobe and impacts on the boundary at the position marked on
Fig.~\ref{fig:296_i_si55}(b). The SW jet, on the other hand, does not form a
cap, but appears to make an oblique impact on the wall of the lobe before
turning through almost 180$^\circ$ in projection back towards the nucleus. The
spectral index of the more extended emission increases further towards the
centre of the source and towards its outer edges, where $\alpha \approx$ 2.
\subsection{0326+39}
\label{0326}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{0326i_si18.eps}
\caption{(a) Grey-scale of the 1.4-GHz total intensity over 0326+39 at
18-arcsec FWHM resolution.
(b) False-colour plot of the spectral index ($\alpha^{4.9}_{1.4}$) distribution at
the same resolution.
\label{fig:0326_i_si18}}
\end{center}
\end{figure}
Fig.~\ref{fig:0326_i_si18} shows the distributions of the 1.4-GHz total
intensity and 1.4 to 4.9-GHz spectral index over 0326+39 at 18-arcsec FWHM resolution.
As in the other sources studied here, the lobes of 0326+39 appear to
surround the jets in projection. Even at this relatively low resolution, the
jets are clearly traceable to the outer parts of the lobes in both total
intensity, where they appear to twist and deflect close to the outer edges
of the lobes, and spectral index, where they can be traced as regions of
significantly lower index. The W jet exhibits a particularly strong kink
towards the S about 2 arcmin ($\approx$ 60 kpc) from the nucleus; this kink is
clearly replicated in the spectral index distribution.
The extended emission of both lobes also shows a well-defined spectral index
gradient, increasing towards the nucleus from $\alpha \approx 1$ near the broad
caps that appear to be dominated by the outer jet emission to a significantly
steeper spectrum with $\alpha \approx 2$ near the centre of the source. The
spectral index also appears to increase towards $\alpha \approx 2$ in the outer
part of the faint Southward extension of the E lobe.
\section{Discussion}
\label{discuss}
\subsection{Initial jet propagation}
\label{jets}
There appear to be few, if any, morphological differences between the jet base
regions in lobed and tailed FR\,I sources, whose common properties include the
following.
\begin{enumerate}
\item The initial rapid expansion (flaring) and recollimation of the jets is essentially identical
in both types of FR\,I source.
\item Jet bases usually show significant side-to-side asymmetries, although a few very
symmetrical examples of each type are known.
\item With the exception of these symmetrical cases, there are further common
properties, as follows.
\begin{enumerate}
\item One jet in each source exhibits a bright region
at its base, often with non-axisymmetric knots and a predominantly
longitudinal magnetic field.
\item Jet brightness and polarization asymmetries are correlated: the apparent
magnetic field on-axis in the brighter jets is initially longitudinal, but switches
to transverse at larger distances; that in the fainter jets is always transverse.
\item The jet/counter-jet ratio falls with increasing distance from the
nucleus (a standard beaming relation illustrating this trend is sketched
just after this list).
\end{enumerate}
\end{enumerate}
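For reference, the decline of the jet/counter-jet ratio noted in the last point
can be related to deceleration through the standard beaming relation (here we
assume an intrinsically symmetric, continuous twin-jet flow with velocity
$\beta c$ at angle $\theta$ to the line of sight):
\[
R_{\rm J} = \left( \frac{1 + \beta\cos\theta}{1 - \beta\cos\theta}
\right)^{2+\alpha} ,
\]
so $R_{\rm J} \rightarrow 1$ as $\beta$ decreases along the jets.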
These regions at the bases of FR\,I jets are also those in which our models
\citep{LB02a,CL,CLBC,LCBH06} show that the jets decelerate from relativistic to
sub-relativistic velocities. The development of a {\it transverse} velocity
gradient across the jets implies that they decelerate primarily by
boundary-layer entrainment of the external medium. We suggest that this
entrainment occurs primarily in the dense, kpc-scale coronae of hot plasma that
surround the nuclei of twin-jet radio galaxies (probably with an additional
contribution from stellar mass loss) and that the entrainment effectively turns
off on large scales, as it must to avoid further decollimation. {\em Chandra}
observations have revealed coronae of this type in 0206+35, 0755+37
\citep{WBH01}, 0326+39 (Hardcastle, private communication), M\,84
\citep{M84Chandra} and 3C\,296 \citep{Hard05,Croston}; NGC\,193 may have a
similar component \citep{Gia11}. The coronae for which data are available have
central electron densities of $10^5$ to $7 \times 10^5$\,m$^{-3}$, central
pressures of $3 \times 10^{-11}$ to $4 \times 10^{-10}$\,N\,m$^{-2}$ and core
radii of 0.3 to 2\,kpc. It seems likely that lobe plasma is excluded from the
immediate vicinity of the nucleus by the high-pressure coronae, so the jets in
both types of source initially propagate in essentially identical environments,
unshielded from the IGM. Further evolution of the jets will depend on their
surroundings: if they propagate through low-density lobe material, then
entrainment will effectively cease and they will recollimate to become almost
cylindrical flows, as seen in the sources described here. Entrainment rates are
also likely to be small in jets without surrounding lobes provided that the
external density is low. For example, there is no sign of any lobe surrounding
the inner jets of NGC\,315, yet it has a very small opening angle after
recollimation \citep{CLBC}. This argues for a negligible entrainment rate at
distances $\ga$35\,kpc, and the external density does indeed fall rapidly on
these scales \citep{Croston}. In contrast, we have argued that entrainment
continues at a lower, but still significant rate after recollimation in 3C\,31,
whose inner jets also have no surrounding lobes. In this source, the opening
angle after recollimation is larger, our models indicate continuing deceleration
and there is a hot-gas component with a large core radius in addition to the
inner corona \citep{LB02b}.
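As an illustrative consistency check (assuming a fully ionized ideal gas with
total particle density $\approx 2n_{\rm e}$, so that $p \approx 2 n_{\rm e}
k_{\rm B} T$), the quoted central densities and pressures correspond to coronal
temperatures of
\[
T = \frac{p}{2 n_{\rm e} k_{\rm B}} \approx \frac{3\times10^{-11}}
{2 \times 10^{5} \times 1.38\times10^{-23}} \approx 1\times10^{7}\,{\rm K} ,
\]
i.e.\ $k_{\rm B}T \sim 1$--2\,keV, as expected for the hot gaseous haloes of
elliptical galaxies.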
One feature of jet propagation appears to be unique to lobed sources, however.
Our high-resolution data for 0206+35 and 0755+37 show that the apparent
difference in opening angle between the main and counter-jets seen at lower
resolution in these sources is a manifestation of a {\it two-component} jet structure.
In both sources, the main jet and the counter-jet appear to contain both narrow
(well-collimated) and broader features on both sides of the nucleus, but the
better collimated parts of the main jet are centre-brightened while those in the
counter-jet are centre-darkened. The broader features at the edges of the
counter-jets are also slightly brighter than those of the main jets. What {\it
appears} to be poorer collimation of the counter-jet at low resolution is now
seen as a narrow centre-brightened jet opposite a similarly narrow
centre-darkened counter-jet, surrounded by broader emission which is slightly
brighter around the counter-jet than around the main jet. We will explore
explanations for this ``two-component'' jet structure in terms of relativistic
outflow in the well collimated component surrounded by mildly relativistic
backflow in the broader component in a later paper (Laing \& Bridle, in preparation).
\subsection{Jet termination and lobe structure}
\label{lobes}
With the partial exception of M\,84, to which we return at the end of this section, the sources
described in this paper have lobes similar to those in FR\,II sources
(e.g.\ \citealt{AL87,CPDL,K08}). They exhibit sharp brightness gradients at their outer edges,
spectral indices that steepen towards the nucleus from the outer lobes and
towards the outer extremities of any lobe, off-axis wings of diffuse emission
near their centres, and generally circumferential magnetic fields. The only
obvious difference from the lobes of FR\,II sources is the evident lack of hot spots at the ends of the FR\,I
jets. Similar spectral gradients occur in a larger sample of lobed FR\,I sources
observed at lower resolution \citep{Parma99}. Where the jets dominate the total
intensity, they all have similar spectral indices in the range 0.5 to 0.7. Even
when the jets are not obviously dominant in intensity alone, the observed spectral
index distributions can trace plausible paths for jets towards the edges of the
lobes. This spectral signature implies that steeper-spectrum lobe material has been displaced by
flatter-spectrum jet material along an extended pathway through the lobes. Note that the
spatial resolution of our data relative to the overall source size is higher than for most
published multifrequency imaging of sources in either FR class. Together
with the ability to trace the jet flow via its flatter spectrum and the use of
intensity gradient images to indicate enhanced compression, our data allow us
to identify a number of new types of structures in the lobes.
The jets often terminate in what we have called ``caps'', with the following properties.
\begin{enumerate}
\item They typically occur at the ends of lobes (one example, 0755+37SE,
appears recessed, perhaps as a result of projection) and are clearly
associated with jet termination.
\item They are bounded at their leading edges by smooth outer isophotes
(approximated by segments of circles) with sharp intensity gradients.
\item They also have inner boundaries, again marked by high intensity
gradients.
\item Their emission has a flat spectrum, close to that typical of jets ($\alpha
\approx 0.6$) after accounting for contamination by steeper-spectrum diffuse
emission.
\item Four out of five examples are fairly symmetrical with respect to the local jet
axis. The exception is 3C\,296NE.
\end{enumerate}
There are five clear cases of such ``caps'' out of the ten FR\,I lobes we have studied at
high resolution: NGC\,193E and W, 0206+35NW, 0755+37SE and
3C\,296NE; 0755+37NW may well be similar.
Other jet terminations show some, but not all, of the same features. In
particular, several show enhanced brightness gradients, but without obvious inner
boundaries. In 0206+35SE, the jet bends away from its initial direction and
creates a sharp brightness gradient where it impacts on the side of the lobe;
the associated emission again has a relatively flat spectrum. Both lobes of
0326+39 probably have similar structures, but the available resolution is not
yet high enough to be sure. In 3C\,296SW and M\,84N, the jets remain straight
until they make oblique impacts on the lobe walls at locations marked by high
intensity gradients after which they bend abruptly. The former case also shows
flatter-spectrum emission at the impact point. 0755+37NW has a weak ring of
emission surrounding a flatter-spectrum region, but this is contained entirely
within the outline of the lobe. It is possible that some of these structures
(especially the last) are caps seen from an unfavourable angle.
We see little convincing evidence for enhancements in $|\nabla I|$ {\em crossing} the
jets close to their termination points, such as might be expected from strong shocks.
This implies that the flow is internally sub- or transonic. Our estimates of
the on-axis flow velocities after the initial rapid deceleration range from
$\la$0.1$c$ to $\approx$0.6$c$, compared with an internal sound speed of
$3^{-1/2}c = 0.58c$ for an ultrarelativistic plasma. Unless there is significant
deceleration on scales larger than we model, the implication is that the jets
must be very light and energetically dominated by relativistic particles and
magnetic field.
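The internal sound speed quoted here follows from the ultrarelativistic
equation of state $p = e/3$:
\[
c_{\rm s} = c \sqrt{\frac{\partial p}{\partial e}} = \frac{c}{\sqrt{3}}
\approx 0.58c ,
\]
so even the fastest modelled on-axis flows ($\approx$0.6$c$) are at most
marginally supersonic internally.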
The lobes in this class of source are very different from the subsonic, buoyant
plumes which are thought to form the outer structures of large FR\,I sources
like 3C\,31 \citep{3c31ls}. The picture that emerges for jet termination in
lobed FR\,I sources is that the flow can be traced at least as far as the end of
the lobe via its flatter spectrum. Where it impacts on the lobe surface, a
high-pressure region (the cap) can be created. The smooth shape of the outer
isophotes (compared with the more ragged outline of the lobes) suggests that the
forward expansion is at least mildly supersonic with respect to the external
medium. Jet material flows through the cap and back into the lobe, eventually
mixing with pre-existing lobe plasma. The flow pattern is sometimes consistent
with axisymmetry (or at least appears so in projection) but can bend by large
angles without completely losing its collimation. The clearest example is
3C\,296NE, where the flow bends by $\approx$140$^\circ$ in the plane of the sky
and can be traced as far as the trailing edge of the lobe, where its impact is
marked by a sharp brightness gradient. An alternative to the formation of a cap
appears to be an oblique collision with the boundary of the lobe. In at least
one case, 0206+35SE, the jet deflects from its original straight path before
hitting the edge of the lobe. This raises the possibility that the impact point
moves around the surface of the lobe, extending it in different directions at
different times and lowering the average advance speed of the lobe compared with
the instantaneous speed of the impact point, as in the ``dentist's drill''
model of FR\,II sources \citep{dentist}.
As noted in Section~\ref{m84}, M\,84 appears to be an intermediate case, in
which only one of the jets appears to impact on its lobe boundary, but no tails
have (yet) developed. The S lobe of M\,84 is morphologically very similar to
the spurs in 3C\,31, albeit on a much smaller linear scale. This adds to the
developing picture of the transition between fast, well-collimated jets and slow
plumes in tailed FR\,I sources, which is often a two-stage process. The jets
first enter bubbles, within which they disrupt, often thrashing around (as in
M\,84S) before disintegrating completely. The tails are then formed by escape
of material from the bubbles along the direction of steepest pressure gradient
rather than directly from the jet flow. A similar morphology is often found in
wide-angle tail sources \citep{HS04}.
\section{Summary}
\label{summary}
We have presented deep, high-resolution, multi-configuration VLA imaging of four
FR\,I radio sources: NGC\,193, 0206+35, 0755+37 and M\,84, together with
lower-resolution observations of 0326+39 and a reanalysis of our published
images of 3C\,296. These sources are all examples of ``lobed'' FR\,I radio
galaxies. Our results, displayed as images of total intensity, brightness
gradient, degree of polarization, apparent magnetic-field direction and spectral
index, show common features, as follows.
\begin{enumerate}
\item All of the sources have twin radio jets, with side-to-side brightness
ratios decreasing with distance from the nucleus in a manner qualitatively
consistent with relativistic, decelerating flow.
\item The brightness and polarization distributions of the inner jets are very
like those in tailed radio sources, indicating similar deceleration
physics. We suggest that the jets in both classes of source
propagate unshielded from the surrounding IGM within dense, kpc-scale
coronae, leading to efficient boundary-layer entrainment.
\item Farther from the nucleus, the jets in both classes of source recollimate.
This implies that the entrainment rate is low whether or not they are
surrounded by lobe plasma.
\item 0206+35 and 0755+37 show evidence for a two-component jet structure in
which a centre-brightened main jet and centre-darkened counter-jet are
surrounded by broader features that are somewhat brighter on the counter-jet
side, suggesting that a central relativistic outflow is surrounded by a slower,
but still mildly relativistic backflow.
\item In all but one case (M\,84S), the jets propagate to the ends of their
lobes. Continuing, but less well collimated flow can often be traced in
spectral index or brightness gradient images.
\item Five or six of the ten jets we have studied at high resolution terminate at the
ends of their lobes in features we call ``caps'' with smooth outer isophotes,
sharp inner and outer intensity gradients and relatively flat spectra.
\item An additional three out of ten jet terminations are best described as oblique
collisions of jets with the outer lobe walls: they also show enhanced outer
intensity gradients and flat spectra and may be caps seen at unfavourable
angles.
\item The lobes resemble those in FR\,II sources, with sharp outer brightness
gradients, spectral indices which steepen towards the nucleus and
circumferential apparent magnetic fields.
\item There is little evidence for features in the jet brightness distributions
which can be identified as strong shocks, either at recollimation or where
they terminate\footnote{Shocks may occur on smaller scales, at or just
downstream of the flaring point.}. This implies that the flow is internally
sub- or trans-sonic on large scales.
\end{enumerate}
We will present quantitative modelling of the inner jets in later papers.
\section*{Acknowledgements}
We thank Anita Richards for a useful perl script.
|
1,477,468,750,342 | arxiv | \section{Introduction}
Pose estimation is a prerequisite for many applications, e.g., self-driving cars, autonomous robots and augmented/virtual reality.
Various sensors can be utilized in pose estimation, such as GPS, IMU, LIDAR and camera.
Among them, camera is especially favored by researchers due to its small size, low cost and abundant perceived information.
Pose estimation using only the continuous images captured by a single camera is called monocular visual odometry (VO).
A multitude of VO systems have been presented as of now, such as SVO \cite{forster2014svo} and DSO \cite{engel_direct_2017}.
They are normally designed for the conventional pinhole cameras with a limited field of view (FOV).
The PALVO \cite{chen_palvo_2019} proposed in our previous work is a monocular VO based on panoramic annular lens (PAL).
PAL can transform the cylindrical side view onto a planar annular image and obtain panoramic perception of $360^\circ$ FOV in a single shot \cite{luo2017compact}, as shown in Fig. \ref{fig:pal}.
Benefiting from the panoramic imaging, PALVO can handle some challenging scenarios that are difficult for conventional VO based on pinhole cameras.
For example, conventional VO produces unreliable results when rotating at a high angular velocity, due to the rapid reduction of overlap between adjacent frames,
and is greatly affected by dynamic components in the environment because of the limited FOV.
Compared with the traditional monocular VO, PALVO greatly improves the robustness of pose estimation in real application scenarios.
\begin{figure}[htb]
\centering\includegraphics{fig-pal-revision}
\caption{(a) Schematic optical path in the PAL block. P and P' represent the object and image points, respectively. (b) A sample image captured by PAL camera. (c) The cylindrical object space of PAL.}
\label{fig:pal}
\end{figure}
However, some problems still exist in PALVO when it runs over large scales and for long periods.
The first one is error accumulation \cite{fraundorfer2012visual}.
Since PALVO only maintains a local map consisting of the most recent several keyframes, the pose of each new frame is calculated by tracking the previous frame and the local map.
As a result, the errors introduced by each new frame-to-frame motion accumulate over time and cause the estimated trajectory to deviate from the actual path.
Secondly, it is impossible for PALVO to recover the absolute scale because only bearing information is available for a single camera, i.e., for a monocular VO, the motion and 3D map can only be recovered up to a scale factor.
Moreover, due to the inevitable errors of pose estimation, the scale of the motion estimated later may differ from that determined at the beginning,
which is known as scale drift \cite{strasdat2010scale}.
These problems mean that even when the camera actually revisits a certain place, this is not reflected in the estimated trajectory, i.e., PALVO cannot ``close the loop''.
To solve the above problems, we propose Panoramic Annular Simultaneous Localization And Mapping (PA-SLAM), which extends the previous PALVO by adapting it as the SLAM front-end to estimate camera poses with local localization consistency, and corrects error accumulation as well as scale drift with loop closure detection and global optimization in the back-end.
Compared with existing monocular visual SLAM systems based on pinhole camera with narrow FOV, the proposed PA-SLAM has the following advantages:
Firstly, benefiting from the large FOV brought by PAL, PA-SLAM is less affected by dynamic components in the environment when performing loop closure detection.
For pinhole cameras, dynamic objects will have a significant influence on the image appearance, which will affect the loop closure detection \cite{yu2018ds}.
Secondly, the large FOV of PAL ensures that enough visual features can be extracted in a single shot,
so pose estimation and loop closure detection will not be affected by the lack of features.
Thirdly, due to the cylindrical object space of PAL (Fig. \ref{fig:pal}(c)), loop closure detection in PA-SLAM is insensitive to travel direction, i.e., loop closure can be detected not only when the camera revisits a certain place in the same travel direction, but also in the perpendicular and reverse directions.
In contrast, pinhole cameras are mostly forward-looking and conventional visual SLAM can only detect loop closure in the same travel direction.
The contribution of this paper is threefold:
1. We present a method to extend sparse direct visual odometry (PALVO) to a full visual SLAM system;
2. A hybrid keypoint selection strategy is proposed to ensure repeatability of keypoints and to enable loop closure detection based on the bag-of-words approach, while maintaining high computational efficiency;
3. We verify the presented PA-SLAM on real-world datasets collected by a remote control vehicle equipped with a PAL camera.
Several comparative experiments with existing VO/SLAM based on both panoramic and perspective images are conducted, demonstrating the superiority of the proposed PA-SLAM.
The remainder of the paper is organized as follows.
Section 2 reviews the related work.
Our algorithms are described in detail in Section 3.
In Section 4, extensive experiments are conducted to evaluate the proposed PA-SLAM.
Finally, we draw our conclusion in Section 5.
\section{Related work}
\subsection{Visual SLAM}
Many visual SLAM systems have been proposed during the last decade.
One of the most influential visual SLAM approaches is ORB-SLAM2 \cite{mur-artal_orb-slam2}.
It uses the same ORB (Oriented FAST and rotated BRIEF) \cite{rublee2011orb} features for tracking, mapping, and place recognition tasks.
A bag-of-words (BoW) \cite{galvez2012bags} place recognizer built on DBoW2 with ORB features is embedded for loop closure detection.
As a feature-based method, ORB-SLAM2 needs to extract ORB features on both keyframes and non-keyframes, and relies on feature matching to obtain data association, which is a time-consuming task.
Another famous visual SLAM is LSD-SLAM \cite{engel2014lsd}, which utilizes FAB-MAP \cite{glover2012openfabmap}, an appearance-based loop detection algorithm, to detect large-scale loop closures.
However, FAB-MAP needs to extract its own features, so no information from the VO front-end can be reused in loop detection.
Besides, the relative pose calculation relies on direct image alignment, which means that all the images of past keyframes need to be kept in memory, resulting in large memory costs in long-time running.
Some researchers have also done some work to extend VO to SLAM.
For example, LDSO \cite{gao_ldso} is extended by adding loop closure detection and pose map optimization to DSO.
As a VO based on the direct method, DSO tracks the pixels with high gradient in the image through direct alignment in the front-end, and the back-end makes use of the sliding window method based on keyframes.
LDSO proposed to gear point selection towards repeatable features,
which makes it possible to apply a BoW method similar to that of ORB-SLAM2 for loop closure detection, and to estimate constraints using geometric techniques.
Similarly, VINS-Mono \cite{qin_vins-mono} also calculates additional feature point descriptors in keyframes and utilizes BoW for loop closure detection.
However, LDSO and VINS-Mono only conduct pose graph optimization, but do not perform the global bundle adjustment (BA).
Inspired by LDSO and VINS-Mono, we extract additional features and make use of BoW to detect loop closure.
Compared to them, PA-SLAM has three main advantages:
(1) Not all of the extracted feature points are involved in the tracking front-end; only a subset of them participates in pose estimation and structure reconstruction, which enables reliable loop closure detection while maintaining computational efficiency;
(2) The global BA can be carried out flexibly after pose graph optimization, further improving localization accuracy and global mapping consistency;
(3) The loop closure detection of PA-SLAM is direction-insensitive, while visual SLAM based on pinhole cameras can only handle the loop closure when traveling in the same direction.
\subsection{Panoramic visual localization}
\label{sec:pano_vpr}
In recent years, many researchers have been exploring the application of panoramic images in positioning tasks, including visual place recognition (VPR), VO and SLAM.
For the VPR task, Murillo and Kosecka \cite{murillo2009experiments} proposed place recognition utilizing GIST descriptors, which has achieved satisfactory performance on large-scale datasets.
Cheng et al. \cite{cheng2019panoramic} presented a panoramic image retrieval method based on NetVLAD \cite{arandjelovic2016netvlad} to tackle the challenges of various appearance variations between query and database images.
Oishi et al. \cite{oishi2019seqslam++} proposed to use panoramic images as one of the multi-modal data for robot localization and navigation, during which the panoramic images are matched using hand-crafted features and a sliding window scheme.
For VO and SLAM, some researchers have studied the advantages of large FOV.
For example, SVO, DSO, VINS-Fusion, ORB-SLAM3 have been extended to support fisheye lenses \cite{forster2016svo, matsuki2018omnidirectional, qin2019b, campos2020orb}.
Wang et al. \cite{cubemapslam} presented CubemapSLAM, which is a real-time feature-based SLAM system for fisheye cameras.
Lin et al. \cite{lin2018pvo} proposed PVO based on Ricoh Theta V panoramic camera, which is a multi-camera system composed of two fisheye lenses and produces 360$^\circ$ FOV through stitching images.
Seok et al. presented ROVO \cite{seok2019rovo} and OmniSLAM \cite{won2020omnislam} for a wide-baseline multiview stereo setup with wide-FOV fisheye cameras.
Gutierrez et al. \cite{gutierrez2011adapting} developed a real-time EKF (extended Kalman filter) based visual SLAM system for catadioptric cameras.
Compared to these works with wide-FOV imaging systems (fisheye lenses, catadioptric cameras
and multi-camera panoramic imaging systems), we exploit PAL in the proposed PA-SLAM, which has the significant advantages of relatively small distortion, single-shot panoramic perception and a compact structure \cite{huang2013stray}.
These advantages make PAL camera an ideal sensor for localization and perception tasks \cite{hu2019indoor, yang2019ds, fang2020cfvl}.
\begin{figure*}[htb]
\centering\includegraphics{fig-opticaldesign-revision}
\caption{(a) Structure of the PAL optical system. (b) Spot diagrams of $30^\circ$, $40^\circ$, $50^\circ$, $60^\circ$, $70^\circ$, $80^\circ$, $85^\circ$, $89^\circ$ and $92^\circ$ fields. (c) MTF of different fields below 145 lp/mm. The solid and dashed lines denote tangential and sagittal MTF, respectively. The black curve represents the MTF of the diffraction limit. (d) F-Theta distortion in different fields. }
\label{fig:optical_design}
\end{figure*}
\section{Optical design}
A self-designed PAL with a $360^\circ\times$($30^\circ$-$92^\circ$) FOV \cite{sun2019multimodal} is utilized in this paper, as well as a global shutter camera with a 2048$\times$2448 imaging resolution and 3.45 \textmu m pixel size.
The specifications of the optical system are listed in Table \ref{tab:optical_design}.
As indicated, the PAL system is designed for the working spectrum of 0.486-0.656 \textmu m with an F-number of 1.8.
The total length of the PAL system is 31.3 mm.
\begin{table}[h]
\centering
\caption{\bf Specifications of the PAL system.}
\label{tab:optical_design}
\begin{tabular}{lc}
\toprule
\textbf{Parameter} & \textbf{Specification} \\
\midrule
Working spectrum & 0.486-0.656 \textmu m \\
F/\# & 1.8 \\
FOV & $360^\circ\times$($30^\circ$-$92^\circ$) \\
Total length & 31.3 mm \\
F-Theta distortion & \textless{}1.0\% \\
MTF & \textgreater{}0.55 at 145 lp/mm \\
Camera & 2048$\times$2448 with 3.45 \textmu m pixel size \\
\bottomrule
\end{tabular}
\end{table}
The structure of the PAL optical system is shown in Fig. \ref{fig:optical_design}(a).
Generally, a PAL system consists of two components, the PAL block and a relay lens.
The PAL block is composed of two refractive and two reflective surfaces.
As the PAL block produces annular image mapping, a relay lens with a symmetrical structure can effectively balance aberrations and achieve adequate imaging quality.
Fig. \ref{fig:optical_design}(b) illustrates the spot diagram of the system.
As can be seen, the maximum RMS radius is 2.367 \textmu m at the $92^\circ$ field, which is smaller than the pixel size of the camera and therefore enables sharp imaging.
Fig. \ref{fig:optical_design}(c) is the MTF of the optical system.
With a camera pixel size of 3.45 \textmu m, the spatial cutoff frequency is 145 lp/mm.
As Fig. \ref{fig:optical_design}(c) indicates, the MTF below 145~lp/mm is above 0.55, meeting the resolution requirement of the camera we use.
Additionally, the F-Theta distortion remains below 1\% over the full FOV (Fig. \ref{fig:optical_design}(d)),
delivering a competitive advantage compared to the other wide-FOV imaging systems mentioned in Section \ref{sec:pano_vpr}.
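For reference, the spatial cutoff frequency used above is the Nyquist limit
set by the sensor pixel pitch $d$:
\[
\nu_{\rm c} = \frac{1}{2d} = \frac{1}{2 \times 3.45\,\mu\mathrm{m}}
\approx 145\ \mathrm{lp/mm} .
\]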
\section{Algorithm}
Before going into PA-SLAM in more detail, we briefly review the pipeline of PALVO, which is the previous work of this paper.
PALVO makes use of a sparse direct method, meaning that feature correspondences are not explicitly calculated.
During the initialization process, feature points are tracked from frame to frame using Lucas-Kanade feature tracking (KLT) \cite{bouguet2001pyramidal}, and the essential matrix is calculated to recover the poses and 3D map points of the first two keyframes.
In the tracking thread, a coarse-to-fine strategy is adopted to estimate the camera pose for each new frame:
Firstly, the previous frame is tracked to obtain a coarse pose estimate through photometric error minimization;
Secondly, the local map is tracked by projecting keypoints into the current frame and optimizing their projection positions;
Finally, the camera pose is fine-tuned by minimizing the reprojection error.
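Schematically, the first of these steps amounts to the sparse direct image
alignment (in our notation, $\mathbf{p}_{i}$ denotes the back-projected 3D
position of keypoint $i$ in the previous frame and $\pi(\cdot)$ the PAL
projection function):
\[
\mathbf{T}_{k,k-1} = \mathop{\arg\min}_{\mathbf{T} \in SE(3)} \sum_{i}
\big\| I_{k}\big(\pi(\mathbf{T}\,\mathbf{p}_{i})\big)
- I_{k-1}\big(\pi(\mathbf{p}_{i})\big) \big\|^{2} .
\]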
In the mapping thread, a fixed-size local map is maintained, and the depths of keypoints in the local map are updated through a depth filter.
When the number of keyframes in the local map exceeds a threshold, the furthest keyframe will be discarded.
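As a minimal sketch of the two-view bootstrapping step, the snippet below uses
standard OpenCV routines. The helper \texttt{initialize\_two\_view} is
hypothetical, and we assume for simplicity that the annular image has been
warped to a pinhole-equivalent view with intrinsics \texttt{K}; PALVO itself
operates on the raw PAL images via the calibrated camera model.

\begin{verbatim}
import cv2

def initialize_two_view(img0, img1, K):
    # Detect corners in the first keyframe and track them into the
    # second one with pyramidal Lucas-Kanade optical flow (KLT).
    pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]

    # Essential matrix with RANSAC, then relative pose (up to scale).
    E, inliers = cv2.findEssentialMat(good0, good1, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t  # monocular: only the direction of t is meaningful
\end{verbatim}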
\begin{figure*}[htb]
\centering\includegraphics{fig-algorithm-revision}
\caption{Framework of PA-SLAM. Keyframes moved out from the local map are managed by the global map, and a BoW database is constructed. Loop candidates are proposed by querying the BoW database and verified geometrically. Once a loop closure is detected successfully, the $Sim(3)$ relative pose constraint between the candidate keyframe (KF) and the current keyframe will be calculated.}
\label{fig:algorithm}
\end{figure*}
In this paper, we adapt PALVO as the front-end of PA-SLAM to estimate frame-to-frame camera poses, and correct error accumulation as well as scale drift with loop closure detection and global optimization in the back-end.
The tracking and mapping threads are inherited from PALVO.
The difference is that each keyframe moved out of the local map is not simply discarded, but is added to the global map, for which a BoW database is constructed, as shown in Fig.~\ref{fig:algorithm}.
The task of loop closure detection is carried out by querying the BoW database and the loop candidates are verified geometrically.
Once a loop closure is successfully detected, the $Sim(3)$ transformation (3D similarity transformation) between the candidate keyframe and the current keyframe is calculated and added to the pose graph as a constraint.
Then, all the poses of keyframes in the global map are adjusted by pose graph optimization and followed by global BA.
\subsection{Selection of feature points}
As mentioned above, the front-end of PA-SLAM is a VO based on a sparse direct method, which features pose estimation via sparse image alignment rather than explicit feature matching.
There exist several open challenges in adapting such a direct visual odometry to reuse the existing map.
First of all, PALVO does not enforce repeatability of the tracked pixels (keypoints).
Thus, if we simply reuse the tracked keypoints in the front-end and compute descriptors for them, the result is likely to be poor loop closure detection.
Secondly, when the loop closure is detected and the inter-frame $Sim(3)$ transformation computation is carried out, the actual transformation matrix may be quite different from the identity matrix (the initial guess of the optimization process).
At this time, sparse image alignment will be invalid.
Therefore, we propose a hybrid point selection strategy in PA-SLAM.
When a frame is selected as a keyframe, new keypoint extraction is carried out before it is sent into the depth filter.
The hybrid point selection strategy means that when extracting new keypoints, priority is given to ORB feature points, i.e., more ORB feature points are used as keypoints for tracking in the front-end.
In areas with insufficient features, pixels with a high gradient are used as a supplement.
This strategy has the following advantages:
Firstly, ORB feature points are actually FAST corners with good repeatability, and have been proved to be an effective feature for loop closure detection in visual SLAM;
Secondly, once a loop closure is detected, feature matching can be easily obtained, which is convenient for geometric check and inter-frame $Sim(3)$ transformation computation.
\begin{figure*}[htb]
\centering\includegraphics{fig-pointselection}
\caption{The hybrid point selection strategy.
(a) ORB features will be extracted from new keyframes for loop closure detection, but only a part of them are selected for tracking in the front-end.
Some supplementary keypoints with a high image gradient will also be involved in tracking if necessary.
(b) Keypoint selection.
The image is divided into a grid, and for each cell only the ORB keypoint with the highest Harris response is selected for tracking in the front-end.
For the cells without ORB feature points, the image gradient in the cell is computed and the pixel with the highest gradient is selected as a supplementary keypoint.}
\label{fig:point_selection}
\end{figure*}
\begin{figure*}[htb]
\centering\includegraphics{fig-orbandtrackedfeat}
\caption{Upper row: keypoints tracked in the front-end (drawn in green). Lower row: ORB features extracted for loop closure detection (drawn in blue).}
\label{fig:feat_component}
\end{figure*}
In the implementation, redundant ORB features will be extracted from new keyframes so as to ensure the performance of loop closure detection, and all the features are involved in generating BoW image descriptors, as shown in Fig. \ref{fig:point_selection}(a).
However, considering real-time performance, not all ORB feature points are picked as depth-filter seeds.
The image is divided into a grid, and for each cell only the keypoint with the highest Harris response is selected for depth recovery.
For the cells without ORB feature points, the image gradient in the cell is computed and the pixel with the highest gradient is selected as a supplementary keypoint (same as the original strategy in PALVO) and fed into the depth filter, as shown in Fig. \ref{fig:point_selection}(b).
Fig. \ref{fig:feat_component} depicts the extracted ORB features for loop closure detection (lower row) and the tracked keypoints in the front-end (including reused ORB keypoints and the supplementary keypoints with a high image gradient, upper row) during one run.
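A minimal sketch of this hybrid point selection is given below (Python/OpenCV); the cell size and feature budget are illustrative assumptions rather than the exact values used in PA-SLAM.
\begin{verbatim}
import cv2
import numpy as np

def hybrid_select(gray, cell=32, n_orb=1600):
    orb = cv2.ORB_create(nfeatures=n_orb)
    kps = orb.detect(gray, None)   # FAST corners, Harris-scored
    h, w = gray.shape[:2]
    gy, gx = np.gradient(gray.astype(np.float32))
    grad = np.abs(gx) + np.abs(gy)
    best = {}
    for kp in kps:                 # strongest ORB keypoint per cell
        c = (int(kp.pt[1]) // cell, int(kp.pt[0]) // cell)
        if c not in best or kp.response > best[c].response:
            best[c] = kp
    selected = [kp.pt for kp in best.values()]
    for cy in range(0, h, cell):       # supplement empty cells with
        for cx in range(0, w, cell):   # the highest-gradient pixel
            if (cy // cell, cx // cell) in best:
                continue
            patch = grad[cy:cy + cell, cx:cx + cell]
            iy, ix = np.unravel_index(np.argmax(patch), patch.shape)
            selected.append((cx + ix, cy + iy))
    return selected
\end{verbatim}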
\subsection{Loop closure detection and geometric check}
As mentioned above, redundant ORB features will be extracted from new keyframes and then DBoW3 \cite{dbow3} is utilized to transform ORB feature descriptors to BoW vectors and build a BoW database, and the database is queried to propose loop candidates for the current keyframe.
It is worth noting that the loop closure is only retrieved outside the local map, i.e., only the historical keyframes in the global map can be picked.
There may be false positives in loop closure detection via BoW database retrieval.
Therefore, a geometric check must be performed for each loop candidate.
Here, geometric check is done via verifying epipolar constraints.
For each pair of ideal matching feature points $\mathbf{u}_{r}$ and $\mathbf{u}_{c}$, it should be satisfied that
\begin{equation}
\pi ^ {-1} \left( \mathbf{u}_{r} \right) ^ T
\cdot \mathbf{E} \cdot
\pi ^ {-1} \left( \mathbf{u}_{c} \right)
= 0,
\end{equation}
where $\pi^{-1}\left(\cdot\right)$ is the back-projection function, $\mathbf{u}_{r}$ and $\mathbf{u}_{c}$ are the pixel coordinates of the matching ORB feature points on the reference frame (candidate) and the current keyframe respectively, and $\mathbf{E}$ is the essential matrix.
Specifically, feature matching is first carried out between the candidate and the current keyframe, and good matches are selected according to the matching distance.
Based on the good matches, the essential matrix is computed using the 8-point method \cite{longuet1981computer} with a random sample consensus (RANSAC) scheme \cite{derpanis2010overview}, and the number of inliers is counted.
Only if the inlier number is greater than a threshold, geometric check is considered to be successful.
The same technique is also used in the initialization process of PALVO.
The difference lies in that KLT is used to obtain the correspondence between pixels in initialization,
while ORB feature matching is used here.
This is because the parallax between the loop candidate frame and the current keyframe may be large, so the optical flow cannot be calculated effectively.
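Given an essential matrix estimated by the 8-point method inside RANSAC, the inlier counting at the core of the geometric check reduces to evaluating the residual of the constraint above; a short Python/NumPy sketch (with an illustrative threshold) is:
\begin{verbatim}
import numpy as np

def epipolar_inliers(bearings_r, bearings_c, E, thresh=1e-3):
    # bearings_r, bearings_c: (N, 3) back-projected unit rays
    # pi^{-1}(u) on the candidate and the current keyframe.
    res = np.abs(np.einsum('ni,ij,nj->n', bearings_r, E, bearings_c))
    return res < thresh   # boolean inlier mask; count with mask.sum()
\end{verbatim}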
\subsection{$Sim(3)$ computation}
If a loop closure is successfully detected, the $Sim(3)$ relative pose from the loop candidate frame to the current keyframe will be calculated, where the 3D coordinates of the matching points are required.
As mentioned above, not all extracted ORB feature points are fed into the depth filter to recover depth, so we cannot guarantee that every matching point has corresponding 3D coordinates.
In view of this, we propose an approximate strategy to obtain the depth of the feature points.
Specifically, for each feature point, if there exists a 3D map point in the same grid cell as it, the depth of this map point is regarded as the depth of the feature point.
Otherwise, we search its 3$\times$3 neighboring grid cells for adjacent 3D map points and take their weighted average depth as the depth of the feature point.
After the depths of the matching feature points are obtained, the 3D coordinates can be calculated using the back-projection function.
For the matching points with effective 3D coordinates, the algorithm proposed in \cite{horn_closed-form_1987} is utilized to solve $Sim(3)$.
In order to ensure the robustness of the solution, RANSAC scheme is adopted.
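Leaving the RANSAC wrapper aside, the closed-form solution of \cite{horn_closed-form_1987} can be sketched as follows (Python/NumPy, written here in the equivalent Umeyama formulation of the same least-squares similarity problem):
\begin{verbatim}
import numpy as np

def sim3_closed_form(P, Q):
    # Find s, R, t minimizing ||Q - (s R P + t)|| for (N, 3) points.
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - mu_p, Q - mu_q
    U, D, Vt = np.linalg.svd(Y.T @ X / len(P))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0          # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / X.var(axis=0).sum()
    t = mu_q - s * R @ mu_p
    return s, R, t
\end{verbatim}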
\subsection{Pose graph optimization and global BA}
The $Sim(3)$ relative pose indicates the rotation, translation and scale constraints between the loop candidate frame and the current keyframe.
By adding this constraint during pose graph optimization, error accumulation and scale drift in this period of time can be reduced.
In general, the relative pose estimation between adjacent frames in the local map is reliable,
but due to error accumulation and scale drift, the error of global pose gradually increases over time.
Pose graph optimization is to optimize the pose of each keyframe with the constraints of the relative pose transformation between keyframes.
Since the estimated pose in the front-end is $SE(3)$, it is upgraded to $Sim(3)$ during optimization so as to adjust its scale, and the initial scale is set to 1. The form of error in pose graph optimization is
\begin{equation}
\mathbf{e}_{i j} = {
\ln
\left(
\mathbf{S}_{i j}
\hat{\mathbf{S}}_{j}^{-1}
\hat{\mathbf{S}}_{i}
\right)
}^\vee,
\end{equation}
where $\mathbf{S}_{i}$ represents the $Sim(3)$ pose of keyframe $i$, $\mathbf{S}_{ij}$ denotes the $Sim(3)$ relative pose between keyframes $i$ and $j$, ${}^\vee$ maps a Lie algebra element to its vector form, and $\hat{\ }$ denotes the estimated value of a variable.
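In our implementation this residual is handled by g2o's $Sim(3)$ vertex and edge types; purely as an illustration, the same quantity can be evaluated numerically by representing each $Sim(3)$ element as a $4\times4$ matrix with upper-left block $s\mathbf{R}$ and last column $(\mathbf{t},1)$ (Python/SciPy sketch):
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def sim3_residual(S_ij, S_j_hat, S_i_hat):
    # e_ij = log(S_ij * S_j_hat^{-1} * S_i_hat)^vee as a 7-vector.
    M = S_ij @ np.linalg.inv(S_j_hat) @ S_i_hat
    A = np.real(logm(M))                 # Lie algebra element of sim(3)
    sigma = np.trace(A[:3, :3]) / 3.0    # scale part, log(s)
    Phi = A[:3, :3] - sigma * np.eye(3)  # skew-symmetric rotation part
    phi = np.array([Phi[2, 1], Phi[0, 2], Phi[1, 0]])
    rho = A[:3, 3]                       # translational twist part
    return np.concatenate([rho, phi, [sigma]])
\end{verbatim}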
After pose graph optimization, the global BA is then performed to fine-tune the 3D coordinates of all map points and poses of all keyframes in the global map by minimizing the reprojection error. The error term is
\begin{equation}
\mathbf{e}_{i}^{m} =
\mathbf{u}_{i}^{m} -
\pi \left(
\hat{\mathbf{T}}_{i} \cdot
\hat{\mathbf{P}}_{m}
\right),
\end{equation}
where $\mathbf{u}_{i}^{m}$ represents the observed projection of the 3D map point $m$ in the keyframe $i$,
$\pi\left(\cdot\right)$ is the projection function,
$\mathbf{T}_{i}$ is the pose of the keyframe $i$, and $\mathbf{P}_{m}$ is the 3D coordinate of the map point $m$.
It is also important to note that in order not to interfere with the pose estimation process in the front-end, the estimated poses of active keyframes in the local map are all fixed during pose graph optimization and global BA.
Only the global poses of the older part of the trajectory are modified.
We utilize g2o, a graph optimization library proposed in \cite{grisetti2011g2o}, for the optimization tasks.
\section{Experiments}
\subsection{Experimental setup}
The PAL videos of real scenarios used in the following experiments are captured using a remote control vehicle equipped with the self-designed PAL camera, as shown in Fig. \ref{fig:experiment_setup}(a). The imaging resolution of the camera is 2048$\times$2448. For the sake of real-time performance, the image resolution is cropped and downsampled to 720$\times$720 before being fed into PA-SLAM system.
In order to compare with SLAM systems based on the conventional pinhole camera, we use a virtual pinhole camera and the reprojection method to obtain perspective images.
This is perfectly feasible as the PAL imaging model follows a clear F-Theta law \cite{Zhou:16}.
As shown in Fig. \ref{fig:experiment_setup}(b), the PAL image is first back-projected into 3D space using a calibrated PAL camera model, and then re-projected into a perspective image using a virtual pinhole camera model with a $90^\circ$ horizontal FOV.
In this way, PAL and perspective image sequences share the same FPS and timestamp, ensuring the fairness of the comparison to the maximum extent.
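This reprojection amounts to a fixed remapping that can be computed once per camera model; a minimal Python/OpenCV sketch is given below, where \texttt{pal\_project} stands for the calibrated PAL projection function (an assumption here, in practice provided by the calibrated model) mapping a 3D ray in the camera frame to PAL pixel coordinates.
\begin{verbatim}
import cv2
import numpy as np

def pal_to_pinhole(pal_img, pal_project, size=720, hfov_deg=90.0):
    # For every virtual pinhole pixel, cast its ray and look up the
    # corresponding PAL image coordinates via the PAL projection.
    f = (size / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    cx = cy = size / 2.0
    map_x = np.zeros((size, size), np.float32)
    map_y = np.zeros((size, size), np.float32)
    for v in range(size):
        for u in range(size):
            ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])
            map_x[v, u], map_y[v, u] = pal_project(ray / np.linalg.norm(ray))
    return cv2.remap(pal_img, map_x, map_y, cv2.INTER_LINEAR)
\end{verbatim}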
\begin{figure}[htb]
\centering\includegraphics{fig-experimentsetup-revision}
\caption{(a) The remote control vehicle equipped with a PAL camera for dataset collection. (b) Perspective images used for comparative experiments are synthesized using a virtual pinhole camera and the reprojection method.}
\label{fig:experiment_setup}
\end{figure}
OmniCalib calibration tool \cite{scaramuzza2006toolbox} is used to calibrate the PAL camera. Evo \cite{grupp2017evo} and the method proposed in \cite{engel_photometrically_nodate} are used to evaluate the trajectory.
\subsection{Loop closure detection test}
\subsubsection{Relationship between feature number and loop closure detection}
In this section, the relationship between the performance of loop closure detection based on PAL images and the number of ORB features is studied.
Videos captured by the remote control vehicle are used and the total length of the trajectory is about 500 meters.
We select one image as a keyframe every fixed number of images (set to 30 in this paper), and take all the keyframes as the database to be queried.
Then for each query frame, we use the algorithm described in Section 3.2 for loop closure detection.
For each detected loop closure candidate, if the difference of index between the candidate frame and the current query frame is less than the interval number of keyframes (30 in this paper), it is considered to be a true positive (TP) loop closure;
Otherwise, it will be treated as false positive (FP).
In addition, all the query frames that fail in detecting loop closure are defined as false negative (FN).
The precision-recall curve is used to characterize the performance of the loop closure detection algorithm.
Precision (P) and recall (R) can be calculated as follows:
\begin{equation}
P = \frac{TP}{TP+FP},
\end{equation}
\begin{equation}
R = \frac{TP}{TP+FN}.
\end{equation}
The higher the curve, the higher the recall at the same precision, which means the better the performance of the algorithm.
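Following the TP/FP/FN definitions above, the counting itself is straightforward; a Python sketch, with \texttt{results} holding one \texttt{(query\_idx, candidate\_idx or None)} pair per query keyframe, reads:
\begin{verbatim}
def precision_recall(results, kf_interval=30):
    tp = sum(1 for q, c in results
             if c is not None and abs(q - c) < kf_interval)
    fp = sum(1 for q, c in results
             if c is not None and abs(q - c) >= kf_interval)
    fn = sum(1 for q, c in results if c is None)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
\end{verbatim}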
\begin{figure}[tb]
\centering\includegraphics{fig-looptest-revision}
\caption{(a) The precision-recall curve of loop closure detection with respect to different numbers of ORB features. The higher the curve, the better the performance of the algorithm. (b) Statistics of extracted ORB features (blue), reused ORB features (red) and all the keypoints tracked in the front-end (yellow) in one run.}
\label{fig:loop_test}
\end{figure}
The loop closure detection results with respect to different numbers of ORB features are shown in Fig. \ref{fig:loop_test}(a).
It can be seen that as the number of ORB features increases from 100 to 3200, the recall rate at 100\% precision increases gradually, which indicates that the performance of loop closure detection is, to a certain extent, positively correlated with the number of ORB features.
In order to hold a good trade-off between performance and speed, we set the ORB feature number to 1600 when running PA-SLAM.
Additionally, Fig. \ref{fig:loop_test}(b) shows the total number of extracted ORB features, the number of reused ORB keypoints fed into the depth filter and the number of all the tracked keypoints (including supplementary keypoints) when running PA-SLAM on this dataset.
It can be seen that the ORB keypoints actually involved in the tracking front-end only account for about 15\% of all ORB features, which ensures the running efficiency of PA-SLAM.
\subsubsection{Loop closing in different travel directions}
\begin{figure*}[htb]
\centering\includegraphics{fig-verselooptest-revision}
\caption{(a) Schematic diagram of the path when collecting datasets for verifying the direction insensitivity of loop closure detection based on PAL. In this path, there exist parts traveled in the same direction (the green area) and in the opposite direction (the blue area). (b) The estimated trajectory and results of loop closure detection produced by PA-SLAM. (c) The results of loop closure detection utilizing reconstructed perspective images.}
\label{fig:verse_loop_test}
\end{figure*}
In order to verify the insensitivity to the travel direction, we collect another video whose path is shown in Fig. \ref{fig:verse_loop_test}(a).
This dataset contains parts traveled in the same direction (the green area) and in the opposite direction (the blue area).
The estimated trajectory of PA-SLAM and the successfully detected loop closures (plotted as red line segments) are shown in Fig. \ref{fig:verse_loop_test}(b).
It can be seen that whether the travel direction is the same or opposite, loop closure can always be correctly detected based on the PAL images, proving the direction insensitivity of loop closure in PA-SLAM.
As a contrast, the results of loop closure detection utilizing reconstructed perspective images are shown in Fig. \ref{fig:verse_loop_test}(c), indicating that only loop closures in the same travel direction can be detected successfully.
\subsection{Accuracy test}
\subsubsection{Accuracy test based on ArUco}
\begin{figure*}[htb]
\begin{center}
\includegraphics{fig-accuracytest-revision}
\end{center}
\caption{Accuracy test results. We run PA-SLAM, CubemapSLAM, ORB-SLAM2 and PALVO on the datasets with ground truth, and calculate the (a) absolute trajectory error, (b) accumulated error and (c) scale drift separately. }
\label{fig:accuracy_test}
\end{figure*}
In this part, we evaluate the accuracy of PA-SLAM and compare it with the previous PALVO as well as CubemapSLAM \cite{cubemapslam},
which is a visual SLAM system based on panoramic images.
Comparative experiments are also conducted with ORB-SLAM2, a state-of-the-art implementation of visual SLAM.
We use ArUco to obtain the ground truth of the 6-degree-of-freedom (DOF) camera pose.
ArUco is an open-source library for camera pose estimation using squared markers \cite{garrido2016generation, romero2018speeded}.
The pixel correspondences necessary for pose estimation can be obtained through a single marker.
Thus, the camera pose can be calculated separately for each frame, and there is no error accumulation and scale drift over time.
Image sequences that are used in this test are captured in an office, with paths ranging from 3 meters to 50 meters in length.
It is impossible to capture the ArUco marker in all images in the case of large-scale camera movement.
Thus, only part of the frames are assigned ground truth.
When collecting the datasets, we take the ArUco marker as the start point and the end point of the trajectory,
ensuring that frames in the beginning segment and the end segment have ground truth.
The absolute trajectory error (ATE) is utilized as the criterion for accuracy evaluation.
Additionally, the accumulated error and scale drift are also evaluated separately.
Specifically, we align the tracked trajectory with the beginning segment (B) and the end segment (E) independently, providing two $Sim(3)$ transformations:
\begin{equation}
S_{b}^{\mathrm{gt}} =\underset{S \in \operatorname{Sim}(3)}{\operatorname{argmin}} \sum_{i \in B}\left(S p_{i}-p'_{i}\right)^{2},
\end{equation}
\begin{equation}
S_{e}^{\mathrm{gt}} =\underset{S \in \operatorname{Sim}(3)}{\operatorname{argmin}} \sum_{i \in E}\left(S p_{i}-p'_{i}\right)^{2}.
\end{equation}
The accumulated error ($e_{accu}$) and scale drift ($e_s$) can be defined as
\begin{equation}
e_{accu}=\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left\|S_{b}^{\mathrm{gt}} p_{i}-S_{e}^{\mathrm{gt}} p_{i}\right\|_{2}^{2}},
\end{equation}
\begin{equation}
e_s = \left| \log \left( \mathrm{scale} \left( S_{b}^{\mathrm{gt}} \left( S_{e}^{\mathrm{gt}} \right)^{-1} \right) \right) \right| .
\end{equation}
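Given the two alignments, both quantities are direct to evaluate; a Python/NumPy sketch, with the $Sim(3)$ alignments represented as $4\times4$ matrices, reads:
\begin{verbatim}
import numpy as np

def accumulated_error_and_scale_drift(traj, S_b, S_e):
    # traj: (n, 3) estimated positions p_i; S_b, S_e: 4x4 Sim(3)
    # alignments to the beginning and end segments.
    P = np.hstack([traj, np.ones((len(traj), 1))])
    d = (P @ (S_b - S_e).T)[:, :3]        # S_b p_i - S_e p_i
    e_accu = np.sqrt((d ** 2).sum(axis=1).mean())
    scale = lambda S: np.cbrt(np.linalg.det(S[:3, :3]))  # det(sR)=s^3
    e_s = abs(np.log(scale(S_b @ np.linalg.inv(S_e))))
    return e_accu, e_s
\end{verbatim}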
Fig. \ref{fig:accuracy_test} presents the experiment results, from which one can see that our algorithm achieves the least ATE on sequences (2), (3) and (5),
while on sequences (1) and (4) ORB-SLAM2 performs best w.r.t. ATE.
As for the accumulated error, our PA-SLAM delivers superior performance on sequences (1)-(4), but is slightly inferior to CubemapSLAM on sequence (5).
For scale drift, PA-SLAM achieves the best performance among the four algorithms on sequences (2)-(5).
The exception is sequence (1), on which PALVO performs better.
This is because the movement scale of this sequence is quite small (the path length of sequence (1) is about 3 meters).
Under this circumstance, PALVO maintains good local consistency of the trajectory, with error accumulation and scale drift not being significant.
The experiment results indicate that the proposed PA-SLAM has achieved equivalent or even better accuracy in comparison with ORB-SLAM2 and CubemapSLAM, and has been greatly improved compared to the previous PALVO.
It becomes clear that loop closure and global optimization significantly decrease error accumulation and scale drift in large-scale and long-term running.
\subsubsection{Accuracy test of loop closure error}
In addition, we also run our algorithm on the dataset used in the accuracy test of PALVO to collect and compare the overall numerical performance.
As described in the paper of PALVO, this dataset is collected in an indoor corridor and contains a total of 5 videos (R1 - R5), with paths ranging from 20 meters to 50 meters in length.
The start and end points are exactly in the same position.
Loop closure error in percentage is utilized as a criterion for accuracy evaluation,
defined as the ratio of the residual between the start and end points of the estimated trajectory to the whole length of the estimated trajectory:
\begin{equation}
e_{loop} =
\frac{\|P_{start}-P_{end}\|}{L_{traj}} \times 100\%.
\label{eq:loop_closure_error}
\end{equation}
Table~\ref{tab:accuracy_test} presents the quantitative results.
As can be seen, the proposed PA-SLAM achieves the least loop closure error in R1, R3 and R4.
In R2 PA-SLAM is inferior to CubemapSLAM,
and in R5 it is slightly inferior to ORB-SLAM2 but still better than the other three algorithms.
These experiment results further support our conclusion that PA-SLAM reaches state-of-the-art performance and is greatly improved compared to the previous PALVO.
\begin{table*}[ht]
\centering
\caption{\bf Accuracy test results.}
\label{tab:accuracy_test}
\begin{tabular}{llclccccc}
\hline
& & Frame rate & & \multicolumn{5}{c}{Loop closure error (\%)} \\ \hline
Method & & FPS & & R1 & R2 & R3 & R4 & R5 \\ \cline{1-1} \cline{3-3} \cline{5-9}
PA-SLAM & & 99.1 & & \textbf{0.6749} & 1.1702 & \textbf{0.4417} & \textbf{0.9060} & 0.7553 \\
CubemapSLAM& & 25.3 & & 0.7781 & \textbf{0.4219} & 0.7647 & 0.9553 & 0.8439 \\
ORB-SLAM2 & & 37.4 & & 0.8364 & 1.2000 & 0.5425 & 1.2681 & \textbf{0.6779} \\
PALVO & & 251.6 & & 1.9326 & 1.5893 & 2.9858 & 2.6105 & 0.9527 \\
SVO & & \textbf{423.9} & & 1.4276 & 2.0067 & 2.8455 & 3.5414 & 1.7109 \\ \hline
\end{tabular}
\end{table*}
Moreover, we also evaluate the frame rate of our algorithm.
With loop closing and global optimization, the proposed PA-SLAM is capable of processing frames at 99.1 frames per second (FPS), which is much faster than ORB-SLAM2 and CubemapSLAM.
\subsection{Field test}
\begin{table*}[ht]
\centering
\caption{\bf Field test results.}
\label{tab:field_test}
\setlength{\tabcolsep}{4mm}{
\begin{tabular}{llcccc}
\hline
& & \multicolumn{4}{c}{Loop closure error (\%)} \\ \hline
Method & & S1 (190 m) & S2 (450 m) & S3 (200 m) & S4 (250 m) \\ \cline{1-1} \cline{3-6}
PA-SLAM & & \textbf{0.5361} & \textbf{0.0717} & \textbf{0.1136} & \textbf{0.2789} \\
CubemapSLAM & & 0.9989 & 2.4388 & 4.3081 & 4.0793 \\
ORB-SLAM2 & & 1.2254 & - & 6.3477 & - \\
PALVO & & 2.5719 & 4.2268 & 4.2288 & 3.6310 \\
SVO & & 3.6702 & - & - & 6.5322 \\ \hline
\end{tabular}
}
\end{table*}
\begin{figure*}[!t]
\centering
\includegraphics{fig-fieldtest-revision}
\caption{ Trajectories produced by PA-SLAM, PALVO and CubemapSLAM on datasets (a) S1 - (d) S4. The start points of different estimated trajectories are aligned to the same point, and the end points of each trajectory are indicated by dots in different colors. In (a) the orientation of the remote control vehicle at the start point is approximately perpendicular to the end point, which is opposite to the end point in (c) and the same as the end point in (b)(d). It is worth noting that the trajectories are plotted up to a scale factor, because the absolute scale cannot be derived from a single camera.}
\label{fig:field_test}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\includegraphics{fig-featmatching.pdf}
\end{center}
\caption{Feature matching when the vehicle revisits a certain place (a loop closure occurs) with its orientation (a) perpendicular to, (c) opposite to, and (b)(d) the same as that of the first visit. }
\label{fig:feat_matching}
\end{figure*}
In order to further verify our algorithm and validate its effectiveness and reliability in real applications, field tests are conducted in outdoor areas.
We collect a number of videos on the campus, ranging from 190 to 450 meters in length.
In these videos, there are a large number of pedestrians, vehicles and other dynamic components, which is challenging for conventional visual SLAM systems.
Similarly, the start and end points are kept in the same place and the loop closure error in percentage is calculated.
Table \ref{tab:field_test} displays the experiment results, and the estimated trajectories are shown in Fig. \ref{fig:field_test}.
As can be seen in Table \ref{tab:field_test}, PA-SLAM achieves the smallest loop closure errors on all of the sequences.
Additionally, the perspective image-based ORB-SLAM2 and SVO each fail on two of the sequences.
Fig. \ref{fig:field_test} depicts the trajectories produced by PA-SLAM, PALVO and CubemapSLAM,
from which one can see that the orientation of the remote control vehicle at the start point is approximately perpendicular to that at the end point in S1, and opposite to it in S3.
Despite this, PA-SLAM can still close the loop, further proving the direction insensitivity of loop closure in PA-SLAM.
Fig. \ref{fig:feat_matching} shows the ORB feature matching when the vehicle revisits a certain place (a loop closure occurs) with its orientation perpendicular to, opposite to, and the same as that of the first visit, demonstrating the robustness of PA-SLAM in real-world unconstrained scenarios.
\section{Conclusion}
In this paper, we propose PA-SLAM, which extends the sparse direct method based PALVO with loop closure detection and global optimization.
The hybrid point selection is presented to enable reliable BoW-based loop closure detection while ensuring computational efficiency.
When a loop closure is successfully detected, pose graph optimization is performed and followed by global BA.
Experiments demonstrate that PA-SLAM significantly reduces the error accumulation and scale drift in PALVO, reaching state-of-the-art accuracy and maintaining the original robustness and high efficiency.
Meanwhile, PA-SLAM can deal with loop closure in different travel directions, which greatly improves the performance in practical application scenarios.
\begin{backmatter}
\bmsection{Funding}
This research was supported by the ZJU-Sunny Photonics Innovation Center (No. 2020-03). This research was also funded in part through the AccessibleMaps project by the Federal Ministry of Labor and Social Affairs (BMAS) under the Grant No. 01KM151112.
\bmsection{Acknowledgments}
This research was supported in part by Hangzhou SurImage Technology Company Ltd.
\bmsection{Disclosures}
The authors declare no conflicts of interest.
\bmsection{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\end{backmatter}
\section{Introduction}
It is well known that supergravity theories in $D \geq 4$ space-time dimensions contain gauge potentials described by $p$-forms, of various $p > 1$, associated to $p$-index antisymmetric tensors. In this scenario, the Free Differential Algebras framework, which is an extension of the Maurer-Cartan equations to involve higher-degree differential forms, is particularly well suited for studying supergravity models. The concept of Free Differential Algebra (FDA in the sequel) was introduced in \cite{Sullivan} and subsequently applied to the study of supergravity theories (see, for instance, Ref. \cite{D'AuriaFre}).
A review of the standard procedure for the construction of a minimal FDA (namely a FDA where the differential of any $p$-form does not contain forms of degree greater than $p$) starting from an ordinary Lie algebra can be found in \cite{Hidden}.
In \cite{D'AuriaFre}, the authors considered the $D=11$ supergravity theory of \cite{Cremmer}, introducing and investigating the supersymmetric FDA describing the theory (using the so-called superspace geometric approach) in order to see whether the FDA formulation could be interpreted in terms of an ordinary Lie superalgebra (in its dual Maurer-Cartan formulation). This was proven to be true, and the existence of a hidden superalgebra underlying the $D=11$ supergravity theory was presented for the first time. It includes the $D=11$ Poincar\'{e} superalgebra as a subalgebra, but it also contains two extra, almost-central, bosonic generators, which were later understood as $p$-brane charges, sources of the dual potentials $A^{(3)}$ and $B^{(6)}$ appearing in the (complete) FDA of \cite{D'AuriaFre} (see Refs. \cite{Hull:1994ys}, \cite{Townsend:1995gp}).
Furthermore, a nilpotent fermionic generator must be included to close the superalgebra and in order for the same superalgebra to reproduce the $D=11$ FDA on ordinary superspace, whose basis is given by the supervielbein. Relevant contributions concerning the physical role played by this extra fermionic generator were given first in \cite{vanHolten:1982mx} and then in particular in \cite{Bandos:2004xw}, \cite{Bandos:2004ym}, where the results presented in \cite{D'AuriaFre} were further analyzed and generalized. Finally, its group-theoretical and physical meaning was recently clarified in \cite{Hidden} (and subsequently further discussed in \cite{Malg}): In \cite{Hidden} it was shown that the spinor $1$-form dual to the nilpotent fermionic charge is not a physical field in superspace, rather behaving as a cohomological BRST ghost, since its supersymmetry and gauge transformations exactly cancel the non-physical contributions coming from the extra tensor fields, guaranteeing that the extra bosonic $1$-forms dual to the almost-central charges are genuine abelian gauge fields.\footnote{Actually, as it was later pointed out in \cite{Malg}, the extra spinor $1$-form dual to the nilpotent fermionic generator can be parted into two different spinors, whose integrability conditions close separately.}
As shown in Ref. \cite{Hidden}, where the authors analyzed also the FDA of the minimal $\mathcal{N} = 2$, $D = 7$ supergravity theory, this interpretation is valid for any supergravity theory containing antisymmetric tensor fields, and any supersymmetric FDA can always be traded for a hidden Lie superalgebra
containing fermionic nilpotent generators (see also \cite{SM4} for the study of a particular $D=4$ FDA case).
In the first part of this paper, we will consider the FDA of $\mathcal{N}=1$, $D=4$ supergravity containing a $2$-form potential under the same perspective of \cite{Hidden}. Let us mention, here, that supergravity in $D=4$ space-time dimensions is often formulated as a theory of gravity coupled to scalar-vector multiplets only, that is to say $1$-form gauge fields. On the other hand, when we think of the theory as obtained by Kaluza-Klein compactification from eleven-dimensional supergravity, then it naturally contains also $2$-form fields (tensor multiplets). In four dimensions, if these $2$-form fields are massless, then they can be dualized, through Hodge duality of their field strengths, to scalars (this is the reason why they often do not explicitly appear in the formulation). However, when they are massive\footnote{This happens, for instance, in the case in which the higher-dimensional theory is reduced via a flux compactification.} such dualization does not (at least directly) apply and, in this case, the $2$-form gauge fields must be made manifest \cite{Gunaydin:2005df} (see also Refs. \cite{Andrianopoli:2007ep}, \cite{Andrianopoli:2011zj} and \cite{sito} for more details on the role of $2$-forms in four-dimensional supergravity theories).
The aim of the present paper is to show that the so-called \textit{minimal Maxwell superalgebra} (or \textit{minimal super-Maxwell algebra}) in four dimensions (a non-semisimple superalgebra naturally endowed with a nilpotent fermionic generator), can be interpreted as a hidden superalgebra underlying the FDA of $D=4$ supergravity that includes a $2$-form potential $A^{(2)}$. This will be done by studying the parametrization of $A^{(2)}$ and the hidden gauge structure of the FDA on the same lines of what was done in the $D=11$ (and $D=7$) case in \cite{D'AuriaFre}, \cite{Hidden}. Then, we will extend our discussion to the FDA introduced in \cite{D'AuriaFre}, which describes $D=11$ supergravity, showing that, also in this case, there exists a Maxwell superalgebra underlying the theory.\footnote{Actually, we will consider the $D=11$ FDA just containing a $3$-form potential $A^{(3)}$. We leave the study of the complete FDA containing also a $6$-form potential $B^{(6)}$ (see Refs. \cite{D'AuriaFre}, \cite{Hidden}) to future works.}
The extra spinors dual to the nilpotent fermionic generators whose presence is crucial for writing a supersymmetric extension of the Maxwell algebras, both in the $D=4$ and in the $D=11$ case, will turn out to be fundamental also to reproduce the $D=4$ and $D=11$ FDAs on ordinary superspace.
This work is organized as follows: In Section \ref{Maxalg}, we first recall the main features of the Maxwell superalgebra; then, we move to the analysis of the hidden gauge structure of the supersymmetric FDA of $\mathcal{N}=1$, $D=4$ supergravity (containing a $2$-form potential $A^{(2)}$), showing that the Maxwell superalgebra can be viewed as a hidden superalgebra underlying the theory. Subsequently, in Section \ref{11d}, we extend our study and results to the FDA describing $D=11$ supergravity (which, in its minimal cohomology formulation, contains just a $3$-form potential $A^{(3)}$), introducing a (hidden) Maxwell superalgebra underlying the theory. Finally, Section \ref{Concl} contains the conclusions and possible future developments. In the Appendix we collect our conventions and some useful formulas.
\section{Minimal super-Maxwell algebra and hidden gauge structure of the D=4 supergravity FDA} \label{Maxalg}
After the discovery of the cosmic microwave background and the mysterious dark energy, it appears interesting to consider some field densities uniformly filling space-time. One such modification of empty Minkowski space can be obtained by adding a
constant electromagnetic field background, parametrized by additional degrees of freedom related to tensorial almost-central charges. The presence of a constant electromagnetic field modifies the Poincar\'{e} symmetries into
the so-called Maxwell symmetries. On the other hand, since the advent of supersymmetry, there has been a great interest in superalgebras going beyond the super-Poincar\'{e} one.
In particular, the (minimal) Maxwell superalgebras are (minimal) super-extensions of the Maxwell algebra, which in turn is a non-central extension of the Poincar\'{e} algebra involving an extra, bosonic, abelian generator (along the lines of non-commutative geometry).
Specifically, the $D=4$ Maxwell algebra is obtained by replacing the
commutator $[P_a, P_b] = 0$ ($a=0,1,2,3$) of the Poincar\'{e} algebra with $[P_a,P_b]=Z_{ab}$, where $Z_{ab}=-Z_{ba}$ are abelian generators commuting with translations and behaving like a tensor
with respect to Lorentz transformations (\textit{i.e.} $Z_{ab}$ are tensorial central charges).
Setting $Z_{ab}=0$ one gets back to the Poincar\'{e} algebra.
The Maxwell algebra arises when one considers symmetries of systems evolving in flat Minkowski space filled in by a constant electromagnetic background \cite{bacry}, \cite{schrader}. Indeed, an action for a massive particle which is invariant under the Maxwell symmetries satisfies the equations of motion of a charged particle interacting with a constant electromagnetic field via the Lorentz force. In particular, in order to interpret the Maxwell algebra and the corresponding Maxwell group, a Maxwell group-invariant particle model was studied on an extended space-time with coordinates $(x^\mu, \phi^{\mu \nu})$, where the translations of $\phi^{\mu \nu}$ are generated by $Z_{\mu \nu}$ \cite{gomis0}, \cite{gomis1}, \cite{Bonanos:2008ez}, \cite{gibbons}. The interaction term described by a Maxwell-invariant $1$-form introduces new tensor degrees of freedom, momenta conjugate to $\phi^{\mu \nu}$, and, in the equations of motion, they play the role of a background electromagnetic field which is constant on-shell and leads to a closed, Maxwell-invariant $2$-form.
The Maxwell algebra describes, at same time, the particle and the
constant electromagnetic background in which it moves.
Furthermore, in \cite{deAzcarraga:2010sw}, driven by the fact that it is often thought that the cosmological constant problem
may require an alternative approach to gravity, the authors presented a geometric framework based on the $D=4$ gauged Maxwell algebra, involving six new gauge fields associated with their abelian generators, and described its application as source of an additional contribution to the cosmological term in Einstein gravity, namely as a generalization of the cosmological term.
Subsequently, in \cite{Durka:2011nf} the authors deformed the $AdS$ algebra by adding extra non-abelian $Z_{ab}$ generators, forming, in this way, the negative cosmological constant counterpart of the Maxwell algebra. Then, they gauged this algebra and constructed a dynamical model. In the resulting theory, the gauge fields associated with the Maxwell-like generators $Z_{ab}$ appear only in topological terms that do not influence the dynamical field equations.
The minimal supersymmetric extension of the $D=4$ Maxwell algebra was obtained in \cite{gomis2} as a minimal enlargement of the $\mathcal{N}=1$ Poincar\'{e} superalgebra, by adding two four-dimensional Majorana supercharges ($Q_\alpha$ and $\Sigma_\alpha$, $\alpha=1,2,3,4$) and, though mathematically optional, two scalar generators ($B_5$ and $B$). Thus, in terms of dual Maurer-Cartan $1$-forms, this minimal supersymmetrization of the Maxwell algebra naturally requires to introduce, besides the $1$-form spinor field $\psi^\alpha$ (dual to the supercharge $Q_\alpha$), also an extra Majorana $1$-form spinor field $\xi^\alpha$ (dual to the nilpotent fermionic generator $\Sigma_\alpha$).
The minimal Maxwell superalgebra introduced in \cite{gomis2} (and, subsequently, further discussed and deformed in \cite{gomis3}) seems to be specially appealing, since the coset $\frac{\text{super-Maxwell}}{\text{Lorentz} \times B_5}$ describes the supersymmetries of flat (Wess-Zumino) Minkowski superspace with arbitrary constant values of an abelian supersymmetric field-strength background.
In this set up, the superspace coordinates $(x^\mu, \theta^\alpha, \phi)$ are supplemented in the framework of Maxwell supergeometry by graded additional coordinates related to the generators $(\Sigma_\alpha, Z_{\mu \nu}, B)$.
At a later time, in \cite{Kamimura:2011mq} the authors wrote supersymmetrization schemes of the $D=4$ Maxwell algebra, and further generalizations of Maxwell (super)algebras were then derived and studied in the context of expansion of Lie (super)algebras \cite{deAzcarraga:2012zv}. Subsequently, the Maxwell superalgebra of \cite{gomis3} and its generalizations have been obtained through a particular expansion procedure that goes under the name of $S$-expansion, starting from the $AdS$ superalgebra \cite{Concha2}. This family of superalgebras, containing the Maxwell algebras type as bosonic subalgebras, can be viewed as a generalization of the D'Auria-Fr\'{e} superalgebra introduced in \cite{D'AuriaFre} and of the Green algebra \cite{green}.\footnote{The Green algebra was used in \cite{Siegel:1994xr} to produce a superstring action with a manifestly supersymmetric Wess-Zumino term. The procedure was further generalized in \cite{Bergshoeff:1995hm} to super $p$-branes by introducing larger Green-type superalgebras (see also \cite{Sezgin:1996cj}).}
Lately, in \cite{deAzcarraga:2014jpa} it was shown that the first-order $\mathcal{N}= 1$, $D=4$ pure supergravity Lagrangian $4$-form can be obtained geometrically as a quadratic expression in the curvatures of the Maxwell superalgebra.
Furthermore, in \cite{CR2} the authors presented the construction of the $D = 4$ pure supergravity action (plus boundary terms) starting from a minimal Maxwell superalgebra (which can be derived from $\mathfrak{osp}(4 \vert 1)$ by applying the $S$-expansion procedure), showing, in particular, that the $\mathcal{N} = 1$, $D = 4$ pure supergravity theory can be alternatively obtained as the MacDowell-Mansouri like action built from the curvatures of this minimal Maxwell superalgebra.
Remarkably, also in this context, the Maxwell-like fields do not contribute to the dynamics of the theory, appearing only in the boundary terms. Moreover, recently, in \cite{Concha:2016hbt} the authors introduced an alternative way of closing Maxwell-like algebras.
For all the reasons listed above, the (super-)Maxwell algebras result to be very attractive in the context of (super)gravity theories. Let us now go deep in some technical details concerning the minimal $D=4$ Maxwell superalgebra of Ref. \cite{deAzcarraga:2014jpa}.
As we have already mentioned, besides the Poincar\'{e} generators, the minimal $D=4$ Maxwell algebra contains six additional tensorial charges $Z_{ab}$ that centrally extend the abelian translation algebra and behave tensorially under the Lorentz algebra.
Then, the minimal $D=4$ super-Maxwell algebra is generated by $\lbrace J_{ab}, P_a, Z_{ab}, Q_\alpha , \Sigma_\alpha \rbrace$ ($a=0,1,2,3$, $\alpha=1,2,3,4$), and its (anti)commutation relations read:
\begin{align}
& [J_{ab} , J_{cd}] = \eta_{bc} J_{ad} - \eta_{ac} J_{bd}-\eta_{bd}J_{ac}+\eta_{ad}J_{bc}, \nonumber \\
& [J_{ab},Z_{cd}] = \eta_{bc} Z_{ad} - \eta_{ac} Z_{bd}-\eta_{bd}Z_{ac}+\eta_{ad}Z_{bc}, \nonumber \\
& [J_{ab},P_c] = \eta_{bc}P_a - \eta_{ac}P_b, \nonumber \\
& [J_{ab}, Q_\alpha]= - \frac{1}{2}(\gamma_{ab} Q)_\alpha , \nonumber \\
& [J_{ab}, \Sigma_\alpha ]= - \frac{1}{2}(\gamma_{ab}\Sigma)_\alpha , \nonumber \\
& [P_a,P_b]=Z_{ab}, \quad [P_a,Z_{cd}]=0, \nonumber \\
& [P_a , Q_\alpha] = - \frac{1}{2}(\gamma_a \Sigma)_\alpha , \nonumber \\
& [P_a,\Sigma_\alpha]=0, \nonumber \\
& [Z_{ab},Z_{cd}]=0, \nonumber \\
& [Z_{ab},Q_\alpha]=0, \quad [Z_{ab},\Sigma_\alpha]=0, \nonumber \\
& \lbrace Q_\alpha , Q_\beta \rbrace = (\gamma^a C)_{\alpha \beta} P_a , \nonumber \\
& \lbrace Q_\alpha , \Sigma_\beta \rbrace = - \frac{1}{2}(\gamma^{ab} C)_{\alpha \beta} Z_{ab}, \nonumber \\
& \lbrace \Sigma_\alpha , \Sigma_\beta \rbrace = 0 , \label{smlast}
\end{align}
where $C$ stands for the charge conjugation matrix, $\eta_{ab}$ is the (mostly plus) Minkowski metric, $\gamma_a$ and $\gamma_{ab}$ are Dirac gamma matrices in four dimensions, $Q_\alpha$ is the supersymmetry generator, and where we can see that the $[P_a,Q_\alpha]$ commutator produces an extra, nilpotent, fermionic generator, $\Sigma_\alpha$; in particular, the latter naturally appears in the supersymmetric extension of the Maxwell algebra.
Let us notice that the Lorentz-type algebra generated by $\lbrace J_{ab},Z_{ab} \rbrace$ is a subalgebra of the above superalgebra. This Maxwell superalgebra can also be obtained by imposing $\tilde{Z}_{ab} = 0$ in the generalized minimal super-Maxwell algebra of \cite{Concha2}, \cite{CR2}, which in turn can be derived from the $\mathfrak{osp} (4 \vert 1)$ superalgebra by applying the abelian semigroup expansion procedure ($S$-expansion, for short).
We shall describe the superalgebra given in (\ref{smlast}) through its Maurer-Cartan equations satisfied by the set of $1$-form fields $\sigma^A= \lbrace \omega^{ab}$, $V^a$, $B^{ab}, \psi^\alpha, \xi^\alpha \rbrace$ dual to the set of generators $T_A = \lbrace J_{ab}, P_a, Z_{ab}, Q_\alpha , \Sigma_\alpha \rbrace$ (in the sequel, we will neglect the spinor index $\alpha$, for simplicity), that is to say
\begin{align}
& \omega^{ab} (J_{cd}) = \delta^{ab}_{cd} , \quad V^a (P_b) = \delta^a_b , \quad B^{ab}(Z_{cd}) = \delta^{ab}_{cd}, \nonumber \\
& \psi (Q) = \mathbf{1} , \quad \xi (\Sigma) = \mathbf{1} .
\end{align}
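As usual, these Maurer-Cartan equations encode the structure constants of (\ref{smlast}) through the duality $d \sigma^A (T_B, T_C) = - \sigma^A \left( [T_B, T_C] \right)$, namely (up to normalization conventions)
\begin{equation}
d \sigma^A = - \frac{1}{2} \, C^A_{\; BC} \, \sigma^B \wedge \sigma^C ,
\end{equation}
so that, for instance, both the commutator $[P_a, P_b] = Z_{ab}$ and the anticommutator $\lbrace Q , \Sigma \rbrace \propto Z_{ab}$ are encoded in the equation for $d B^{ab}$ below.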
The aforementioned Maurer-Cartan equations read
\begin{align}
& d \omega^{ab} + \omega^{ac} \wedge \omega_{c}^{\; b}=0, \\
& D V^a - \frac{1}{2}\bar{\psi} \wedge \gamma^a \psi =0, \\
& D \psi =0, \\
& D B^{ab} + \bar{\xi}\wedge \gamma^{ab} \psi + V^a \wedge V^b=0, \label{smeqbab} \\
& D \xi - \frac{1}{2} \gamma_a \psi \wedge V^a =0, \label{smeqxi}
\end{align}
where $D=d + \omega$ denotes the Lorentz covariant derivative in four dimensions and where $\wedge$ is the wedge product between differential forms. All spinors above are Majorana spinors. The $1$-form fields of (the dual Maurer-Cartan formulation of) the super-Maxwell algebra have dimensions $[\omega^{ab}] = L^0$, $[V^a] = L$, $[\psi] = L^{1/2}$, $[B^{ab}]= L^2$, and $[\xi]=L^{3/2}$.
Let us now formulate the $\mathcal{N}=1$, $D=4$ supergravity theory in a geometric superspace approach\footnote{In this
context, the bosonic vielbein $V^a$ together with the gravitino $1$-form $\psi$ span a basis of the cotangent superspace $K=\lbrace V^a, \psi \rbrace$, where also the superspace $p$-forms (whose pull-back on space-time corresponds to $p$-index antisymmetric tensors) are defined.}, in which we can write a supersymmetric FDA involving a $2$-form potential $A^{(2)}$. Explicitly, the supersymmetric FDA defining the ground state (\textit{i.e.} the ``vacuum'') of this model is given by the
vanishing of the following set of supercurvatures:
\begin{align}
& \mathcal{R}^{ab} \equiv d \omega^{ab} + \omega^{ac}\wedge \omega_{c}^{\; b} =0, \label{fdafirst} \\
& R^a \equiv D V^a - \frac{1}{2}\bar{\psi} \wedge \gamma^a \psi =0, \\
& \rho \equiv D \psi =0, \label{dpsifda} \\
& F^{(3)} \equiv dA^{(2)} - \frac{1}{2}\bar{\psi}\wedge \gamma_a \psi \wedge V^a =0 . \label{fdalast}
\end{align}
The $d^2$-closure of the above FDA relies on the Fierz identity (\ref{F1}) of Appendix \ref{app}.
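Explicitly, assuming the standard four-dimensional identity $\gamma_a \psi \wedge \bar{\psi} \wedge \gamma^a \psi = 0$ for the Majorana gravitino $1$-form, the only non-trivial integrability condition is the one following from (\ref{fdalast}),
\begin{equation}
d F^{(3)} = - \frac{1}{2} \, \bar{\psi} \wedge \gamma_a \psi \wedge D V^a = - \frac{1}{4} \, \bar{\psi} \wedge \gamma_a \psi \wedge \bar{\psi} \wedge \gamma^a \psi = 0 ,
\end{equation}
where we have used $D \psi = 0$ and $D V^a = \frac{1}{2} \bar{\psi} \wedge \gamma^a \psi$ in the ground state.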
Let us mention that the interacting theory (that is to say, out of the ground state) is obtained by introducing a non-vanishing value for the supercurvatures (defined in the left-hand side of the FDA). We will not further elaborate on the theory out of the vacuum in the present paper. We will concentrate, instead, on the cohomological structure of the theory, which is fully captured by the ground state FDA.
Now, one could wonder whether the FDA structure (\ref{fdafirst})-(\ref{fdalast}) can be traded for an ordinary Lie superalgebra written in terms of $1$-form gauge fields valued in non-trivial tensor representations of the Lorentz group (on the same lines of the study that was carried out in \cite{D'AuriaFre} and recalled and further analyzed in \cite{Hidden} in the case of $D=11$ supergravity). Observe that this cannot be done without introducing further $1$-form fields in the theory.
On the other hand, interestingly, in the present case this can be done by considering the extra fields (naturally) appearing in the Maxwell superalgebra, namely by introducing in the FDA describing the theory also the Maurer-Cartan equations (\ref{smeqbab}) and (\ref{smeqxi}).
Indeed, if we consider the following decomposition of the $2$-form $A^{(2)}$ in terms of $1$-forms (that, in this case, is also the most general one we can write provided the FDA structure above and satisfying the Bianchi identity in superspace of the $2$-form, $d^2 A^{(2)} = 0$):
\begin{equation}\label{par2}
A^{(2)}(\sigma) = \alpha \bar{\psi} \wedge \xi ,
\end{equation}
where $\alpha$ is a free parameter. The expression (\ref{par2}) satisfies the ground state FDA requirement $d A^{(2)} = \frac{1}{2} \bar{\psi} \wedge \gamma_a \psi \wedge V^a$ (see (\ref{fdalast})) if
\begin{equation}\label{par2fixed}
A^{(2)}(\sigma) = - \bar{\psi} \wedge \xi ,
\end{equation}
that is to say $\alpha=-1$, where, in particular, we have used the Maurer-Cartan equation
\begin{equation}
D \xi = \frac{1}{2} \gamma_a \psi \wedge V^a . \label{dxiiiiiiiii}
\end{equation}
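For the reader's convenience, let us spell out the short computation behind this fixing (a consistency check using only the ground state conditions $D\psi=0$, $D\xi=\frac{1}{2}\gamma_a \psi \wedge V^a$ and the fact that $\bar{\psi}\wedge \xi$ is a Lorentz scalar, so that $d$ and $D$ agree on it):
\begin{equation*}
d A^{(2)} = \alpha \, D \left( \bar{\psi}\wedge \xi \right) = \alpha \left( D\bar{\psi}\wedge \xi - \bar{\psi}\wedge D\xi \right) = - \frac{\alpha}{2} \, \bar{\psi}\wedge \gamma_a \psi \wedge V^a ,
\end{equation*}
which reproduces the right-hand side of (\ref{fdalast}) precisely for $\alpha=-1$.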
Observe that the $1$-form field $B^{ab}$ does not appear in the parametrization of $A^{(2)}$, where the crucial role is played just by the extra spinor $1$-form field $\xi$ appearing in the super-Maxwell algebra. Indeed, one could have obtained the same result by simply considering a (Lorentz-valued) central spinor extension (given by (\ref{dxiiiiiiiii})) of the super-Poincar\'{e} algebra in $D=4$. However, the peculiarity of our result lies in the fact that the spinor $\xi$ that allows one to write the supersymmetrization of the $D=4$ Maxwell algebra (in its dual Maurer-Cartan formulation) is also the same spinor that allows one to write the parametrization (\ref{par2fixed}) in terms of $1$-forms for the $2$-form $A^{(2)}$ appearing in the FDA of the $\mathcal{N}=1$, $D=4$ supergravity theory;\footnote{As we will see in the sequel, this will also hold for the higher-dimensional case of the $D=11$ FDA describing $D=11$ supergravity.} then, in light of this fact, even if the $1$-form field $B^{ab}$ drops out of the parametrization of $A^{(2)}$, its contribution at the algebraic level cannot be discarded, since $B^{ab}$ is a $1$-form field of (the dual Maurer-Cartan formulation of) the Maxwell superalgebra. In particular, it could be related to a possible enhancement of the (hidden gauge) symmetries underlying supergravity models, for example when considering extensions or expansions of the super-Maxwell algebra.
Let us mention, here, that we could also have added to the FDA (\ref{fdafirst})-(\ref{fdalast}) the following equation:
\begin{equation} \label{xi2giga}
D \Xi^{(2)} + \psi \wedge A^{(2)} - \frac{1}{2} \gamma_{ab}\psi \wedge V^a \wedge V^b =0,
\end{equation}
where $\Xi ^{(2)}$ is a spinor $2$-form whose dimension is $[\Xi^{(2)}]=L^{5/2}$ (see also Ref. \cite{Castellani}). However, in this case, in order to write the $2$-form $\Xi^{(2)}$ in terms of $1$-forms, we would need extra $1$-form fields with respect to those appearing in the dual Maurer-Cartan formulation of the Maxwell superalgebra (for instance, $1$-form fields coming from extensions or expansions of the super-Maxwell algebra).\footnote{Or, directly, another hidden Lie superalgebra, different from the super-Maxwell algebra, trivializing (\ref{xi2giga}) in terms of $1$-forms.} In the present paper, we limit ourselves to the supersymmetric FDA containing the $2$-form $A^{(2)}$ and leave the analysis of the FDA involving (\ref{xi2giga}) to future investigations.
From the above result, we can conclude that the super-Maxwell algebra written in (\ref{smlast}) can be interpreted as a hidden superalgebra underlying the $D=4$ supersymmetric FDA describing $\mathcal{N}=1$, $D=4$ supergravity extended to include a $2$-form $A^{(2)}$.
Let us recall that the inclusion of a new $p$-form (a gauge potential enjoying a gauge freedom) in the basis of the so-called $\mathcal{H}$-relative Chevalley-Eilenberg (CE) cohomology of a FDA is physically meaningful only if the whole of the FDA is gauge invariant, and this requires the non-physical degrees of freedom to be projected out from the FDA (see Ref. \cite{Hidden} for details).
Thus, we now move to the analysis of the hidden gauge structure of the supersymmetric ground state FDA (\ref{fdafirst})-(\ref{fdalast}), along the same lines as \cite{Hidden}.
\subsection{Analysis of the hidden gauge structure in D=4}
In the following, we analyze in detail the hidden gauge structure of the FDA (\ref{fdafirst})-(\ref{fdalast}) when the $2$-form $A^{(2)}$ is parametrized in terms of $1$-forms. In particular, we investigate the conditions under which the gauge invariance of the FDA is realized once $A^{(2)}$ is expressed in terms of $1$-forms.
In the geometrical approach adopted in this work, the fields are naturally defined in an enlarged manifold corresponding to the supergroup-manifold, where all the invariances of the FDA are diffeomorphisms generated by Lie derivatives. The physical request that the FDA be described in terms of fields living in ordinary superspace corresponds to requiring the Lie superalgebra to have a fiber bundle structure, whose base space is spanned by the supervielbein, with the rest of the fields spanning a fiber $\mathcal{H}$. This in turn implies that the gauge fields belonging to $\mathcal{H}$ must be excluded from the construction of the so-called cochains (corresponding to gauge invariance). In geometrical terms, this corresponds to requiring the CE cohomology to be restricted to the $\mathcal{H}$-relative CE cohomology (see Ref. \cite{Hidden} for details).
Once the supersymmetric ground state FDA (\ref{fdafirst})-(\ref{fdalast}) is parametrized in terms of $1$-forms, the symmetry structure is based on the hidden supergroup-manifold $G$ having the structure of a principal fiber bundle $(G/\mathcal{H},\mathcal{H})$, where $G/ \mathcal{H}$ corresponds to superspace and where the fiber $\mathcal{H}$ includes, in the present case, the Lorentz transformations and the hidden super-Maxwell generators $Z_{ab}$ and $\Sigma$.
Explicitly, we can write $\mathcal{H} = H_0 + H_b + H_f$, where $\lbrace J_{ab} \rbrace \in H_0$, $ \lbrace Z_{ab} \rbrace \in H_b$, $\lbrace \Sigma \rbrace \in H_f$, and $\lbrace P_a , Q \rbrace \in \mathbb{K}$; $\mathbb{G} = \mathcal{H} + \mathbb{K}$ is the hidden Maxwell superalgebra.\footnote{With an abuse of notation, here and in the following we will use for the cotangent space of the supergroup-manifold $G$, spanned by the $1$-forms, the same symbols defined above for the tangent space of $G$, spanned by the vector fields (generators).} Observe that the subalgebra $H_b + H_f$ defines an abelian ideal of $\mathbb{G}$.
Requiring the physical condition that the CE cohomology be restricted to the $\mathcal{H}$-relative CE cohomology corresponds, now, to requiring the FDA to be described in terms of $1$-form fields living on $G/\mathcal{H}$; this implies that the $1$-forms in $H_b$ and $H_f$ do not appear in $dA^{(2)}$.
Now, taking into account this discussion, we consider in detail the relation between the gauge transformations of the FDA and those of the super-Maxwell bosonic and fermionic $1$-forms $B^{ab}$ and $\xi$, respectively.
The FDA (\ref{fdafirst})-(\ref{fdalast}) is invariant under the following gauge transformation:
\begin{equation}\label{gaugefda}
\delta A^{(2)}=d\Lambda^{(1)} ,
\end{equation}
which is generated by the arbitrary $1$-form $\Lambda^{(1)}$.
The gauge transformations of the bosonic 1-form $B^{ab}$ and of the spinor $1$-form $\xi$ generated by the tangent vectors in $H_b$ and in $H_f$ are
\begin{equation}\label{gauge1foms}
\left\{
\begin{array}{l}
\delta B^{ab}=d\Lambda^{ab} - \bar{\varrho} \gamma^{ab} \psi , \\
\delta \xi = D \varrho ,
\end{array}\right.
\end{equation}
where $\Lambda^{ab}$ is an arbitrary Lorentz-valued scalar function (\textit{i.e.} a $0$-form) and where we have introduced the infinitesimal spinor parameter $\varrho$. Observe that the parameter $\varrho$ appears in both gauge transformations, while $\delta \xi$ does not involve $\Lambda^{ab}$, in agreement with the fact that the covariant
differential $D\xi$ (see equation (\ref{dxiiiiiiiii})) is parametrized only in terms of the supervielbein and not in terms of $B^{ab}$ in $H_b$. This is different from what happens in the case of the hidden superalgebra underlying the FDA of $D=11$ supergravity \cite{D'AuriaFre}, \cite{Hidden}, where the covariant differential of the spinor $1$-form dual to the extra, nilpotent, fermionic generator is parametrized also in terms of the gauge fields in $H_b$.
In the present case, the corresponding $1$-form gauge parameter of $A^{(2)}$ turns out to be given by
\begin{equation} \label{l1}
\Lambda^{(1)} = \bar{\psi} \varrho ,
\end{equation}
where we have used the relation $\alpha=-1$ which must be fulfilled in order for (the differential of) the parametrization of $A^{(2)}$ to be equivalent to (\ref{fdalast}).
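Let us make the implicit step explicit (a short check, using $\delta\psi=0$ along the fiber directions, the ground state condition $D\psi=0$ and $\delta \xi = D\varrho$ from (\ref{gauge1foms})):
\begin{equation*}
\delta A^{(2)} = - \bar{\psi}\wedge \delta \xi = - \bar{\psi}\wedge D\varrho , \qquad d\Lambda^{(1)} = d \left( \bar{\psi} \varrho \right) = D\bar{\psi} \, \varrho - \bar{\psi}\wedge D\varrho = - \bar{\psi}\wedge D\varrho ,
\end{equation*}
so that $\delta A^{(2)} = d\Lambda^{(1)}$, consistently with (\ref{gaugefda}).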
We can now show that all the diffeomorphisms in the hidden supergroup $G$, generated by Lie derivatives, are invariances of the FDA, and that, in particular, the ones in the fiber $\mathcal{H}$ directions are associated with a particular form of the gauge parameter of the FDA
gauge transformation given by (\ref{gaugefda}). Indeed, defining the tangent vectors\footnote{Since the Lorentz transformations, belonging to $H_0 \subset \mathcal{H}$, are not effective on the FDA, the $2$-form $A^{(2)}$ being Lorentz-invariant, our analysis reduces to considering the transformations induced by the tangent vectors $Z_{ab} \in H_b \subset \mathcal{H}$ and $ \Sigma \in H_f \subset \mathcal{H}$.}
\begin{align}
& \overrightarrow{z} \equiv \Lambda^{ab} Z_{ab} \in H_b , \\
& \overrightarrow{q} \equiv \bar{\varrho} \Sigma \in H_f ,
\end{align}
we find that a gauge transformation leaving invariant the FDA (\ref{fdafirst})-(\ref{fdalast}) is recovered, when $A^{(2)}$ is parametrized in terms of $1$-forms, if
\begin{equation}
\Lambda^{(1)} \equiv \Lambda^{(1)}_b + \Lambda^{(1)}_f = \imath_{\overrightarrow{z}} (A^{(2)}) + \imath_{\overrightarrow{q}} (A^{(2)}) ,
\end{equation}
where $\imath$ denotes the contraction operator and where we have denoted by $\Lambda^{(1)}_b$ the $1$-form gauge parameter corresponding to the transformation in $H_b$, while $\Lambda^{(1)}_f$ is the $1$-form gauge parameter corresponding to the transformation in $H_f$. Note that, since $\Lambda^{(1)}_b=\imath_{\overrightarrow{z}} (A^{(2)})=0$, we can write $\Lambda^{(1)} =\Lambda^{(1)}_f= \imath_{\overrightarrow{q}} (A^{(2)})$.
Now, introducing the Lie derivative $\ell_{\overrightarrow{z}} \equiv d \imath_{\overrightarrow{z}} + \imath_{\overrightarrow{z}} d$ (and, analogously, $\ell _{\overrightarrow{q}} \equiv d \imath_{\overrightarrow{q}} + \imath_{\overrightarrow{q}} d$), we find the corresponding gauge transformations of $A^{(2)}$ to be
\begin{align}
& \delta _{\overrightarrow{z}} A^{(2)} = 0 = d \left(\imath_{\overrightarrow{z}} (A^{(2)}) \right) = \ell_{\overrightarrow{z}} A^{(2)} , \\
& \delta _{\overrightarrow{q}} A^{(2)} = - \bar{\psi} \wedge D \varrho = d \left(\imath_{\overrightarrow{q}} (A^{(2)}) \right) = \ell_{\overrightarrow{q}} A^{(2)}.
\end{align}
The last equality in both the above relations follows since $dA^{(2)}$, as given in (\ref{fdalast}), is invariant under transformations
generated by $\overrightarrow{z}$ and $\overrightarrow{q}$ corresponding to the gauge invariance of the supervielbein. In particular, this is in agreement with the fact that the right hand side of $dA^{(2)}$ as given in (\ref{fdalast}) is in the $\mathcal{H}$-relative CE cohomology.
Thus, after integration by parts, we can finally write:
\begin{equation}
\delta A^{(2)} = \delta _{\overrightarrow{z}} A^{(2)} + \delta _{\overrightarrow{q}} A^{(2)} = d\Lambda^{(1)}_b + d \Lambda^{(1)}_f = d \Lambda^{(1)} ,
\end{equation}
which, due to the fact that $ \delta _{\overrightarrow{z}} A^{(2)}=d\Lambda^{(1)}_b=0$, reduces to
\begin{equation}
\delta A^{(2)} = \delta _{\overrightarrow{q}} A^{(2)} = d \Lambda^{(1)}_f = d \Lambda^{(1)} .
\end{equation}
We have thus completed the analysis of the hidden gauge structure of the $D=4$ FDA (\ref{fdafirst})-(\ref{fdalast}). We now extend our study to the $D=11$ FDA describing the Cremmer-Julia-Scherk $D=11$ supergravity theory \cite{Cremmer}.
\section{Maxwell superalgebra and hidden gauge structure of the supergravity FDA in D=11} \label{11d}
In this section, we move to the analysis of the FDA describing the $D=11$ supergravity theory of \cite{Cremmer}. In particular, we will see that, also in this case, there exists a super-Maxwell algebra which can be interpreted as a (hidden) superalgebra underlying the theory.
The $D=11$ supergravity theory, whose action was first constructed in \cite{Cremmer}, has a bosonic field content given by the metric $g_{\mu\nu}$ and a $3$-index antisymmetric tensor $A_{\mu\nu\rho}$ ($\mu,\nu,\rho,\ldots =0,1,\ldots ,D-1$); the theory is also endowed with a single Majorana gravitino $\Psi_\mu$ in the fermionic sector.\footnote{We denote by $\Psi$ the gravitino in $D=11$ space-time dimensions, in order to avoid confusion with the gravitino $\psi$ of the four-dimensional case.} By dimensional reduction, the $D=11$ theory yields $\mathcal{N}=8$, $D=4$ supergravity, which is considered as a possible unifying theory of all interactions.
In the FDAs framework, the bosonic sector of the theory includes, besides the supervielbein $\lbrace V^a,\Psi \rbrace$ (where $a=0,1, \ldots, 10$ and where $\Psi$ is a $32$-component Majorana spinor), a $3$-form potential $A^{(3)}$ (whose pull-back on space-time is $A_{\mu \nu \rho}$), with field-strength $F^{(4)}= dA^{(3)}$ (modulo fermionic bilinears in terms of the gravitino $1$-form), together with its Hodge dual $F^{(7)}$ (whose space-time components are related to the ones of the $4$-form by $F_{\mu_1 \ldots \mu_7}= \frac 1{84} \epsilon_{\mu_1 \ldots \mu_7\nu_1 \ldots \nu_4} F^{\nu_1 \ldots \nu_4}$) associated with a $6$-form potential $B^{(6)}$ in superspace (see Ref. \cite{D'AuriaFre} for details on the FDA formulation of $D=11$ supergravity in the superspace geometric approach).
As we have already mentioned, in \cite{D'AuriaFre} the supersymmetric FDA describing $D=11$ supergravity was introduced and then interpreted in terms of an ordinary Lie superalgebra. The superalgebra found by the authors of \cite{D'AuriaFre} can also be viewed as a spinor central extension of the so-called $M$-algebra \cite{Sezgin:1996cj}, \cite{deAzcarraga:1989mza}, \cite{Townsend:1997wg}, \cite{Hassaine:2003vq}, \cite{Hassaine:2004pp}.
In particular, the authors of \cite{D'AuriaFre} presented a general decomposition of the $3$-form $A^{(3)}$ in terms of $1$-forms, by requiring the Bianchi identity in superspace of the $3$-form, $d^2A^{(3)} = 0$, to be satisfied also when $A^{(3)}$ is written in terms of $1$-forms. The result of \cite{D'AuriaFre} (the authors found a dichotomic solution, consisting of two different supergroups, whose $1$-form potentials can be alternatively used to parametrize the $3$-form) has been further analyzed and generalized in \cite{Hidden}, \cite{Bandos:2004xw}, \cite{Bandos:2004ym}, where some misprints of \cite{D'AuriaFre} have been corrected and where, in particular in \cite{Bandos:2004xw} and \cite{Bandos:2004ym}, it was pointed out that a restriction imposed in \cite{D'AuriaFre} on one coefficient in the parametrization of $A^{(3)}$ can be relaxed, thus giving a one-parameter family of solutions.
In the following, we will see that there also exists another hidden Lie superalgebra underlying the FDA describing $D=11$ supergravity (we will not consider the complete $D=11$ FDA involving a $6$-form potential $B^{(6)}$ in the present work, limiting ourselves to the FDA containing just a $3$-form $A^{(3)}$), namely a minimal Maxwell superalgebra in eleven dimensions. In particular, we will see that the general parametrization of $A^{(3)}$ in terms of the hidden super-Maxwell $1$-forms, together with $V^a$ and $\Psi$, can be recast into the form of that written in Refs. \cite{Bandos:2004xw}, \cite{Bandos:2004ym}.
The supersymmetric FDA defining the ground state of the $D=11$ theory\footnote{We do not consider the $D=11$ theory out of the vacuum in the present paper. Some progress in this topic has been obtained in Ref. \cite{Bandos:2004ym}.} is given by the vanishing of the following supercurvatures:
\begin{align}
& \mathcal{R}^{ab} \equiv d\omega^{ab} - \omega^{ac}\wedge \omega_c^{\; b}=0 ,\label{FDA11omega} \\
& R^a \equiv D V^a - \frac{{\rm i}}{2}\bar{\Psi}\wedge \Gamma^a \Psi =0 ,\label{FDA11v} \\
& \rho \equiv D \Psi = 0 ,\label{FDA11psi}\\
& F^{(4)} \equiv dA^{(3)} - \frac{1}{2}\bar{\Psi}\wedge \Gamma_{ab}\Psi \wedge V^a \wedge V^b =0 , \label{FDA11a3}
\end{align}
where $D$ ($D=d-\omega$, according to the conventions of \cite{D'AuriaFre}, \cite{Hidden}) denotes the Lorentz covariant derivative in eleven dimensions and where $\Gamma_a$ and $\Gamma_{ab}$ are gamma matrices in $D=11$. Again, the vielbein $V^a$ and the gravitino $\Psi$ span a basis of the cotangent superspace $K \equiv \lbrace V^a,\Psi \rbrace$ where also the superspace $3$-form $A^{(3)}$ is defined. The $d^2$-closure of the FDA (\ref{FDA11omega})-(\ref{FDA11a3}) is a consequence of the $3$-gravitino Fierz identity $\Gamma_{ab} \Psi \wedge \bar{\Psi} \wedge \Gamma^a \Psi =0$ in $D=11$.
Let us now consider the following minimal Maxwell superalgebra (written in its dual Maurer-Cartan formulation) in eleven dimensions:
\begin{align}
& d \omega^{ab} - \omega^{ac} \wedge \omega_{c}^{\; b}=0, \label{domprima} \\
& D V^a = \frac{{\rm i}}{2}\bar{\Psi} \wedge \Gamma^a \Psi , \\
& D \Psi =0, \\
& D \tilde{B}^{ab} = \frac{1}{2} \bar{\Psi} \wedge \Gamma^{ab}\Psi , \label{maxtildebab} \\
& D B^{ab} = - \bar{\chi}\wedge \Gamma^{ab} \Psi - V^a \wedge V^b - \frac{1}{5} \tilde{B}^{ac} \wedge \tilde{B}_c^{\; b} , \label{maxbab} \\
& D \chi = \frac{{\rm i}}{2} \Gamma_a \Psi \wedge V^a - \frac{1}{20}\Gamma_{ab} \Psi \wedge \tilde{B}^{ab} , \label{maxxi}
\end{align}
where $\chi$ is a spinor $1$-form (with dimension $[\chi]=L^{3/2}$) dual to a nilpotent fermionic generator (we have used the symbol $\chi$ in order to avoid confusion with the spinor $1$-form $\xi$ of the four-dimensional case discussed in Section \ref{Maxalg}). The $d^2$-closure of this superalgebra is a consequence of the $3$-gravitino Fierz identities in $D=11$ (see Appendix \ref{app}).
Observe that the generator dual to the $1$-form field $\tilde{B}^{ab}$, let us call it $\tilde{Z}_{ab}$, is a non-abelian one. In the absence of the super-Maxwell fields, this bosonic generator would become an almost-central bosonic generator; in eleven dimensions, it was understood as a $2$-brane charge, source of a $3$-form gauge potential (see, for example, Ref. \cite{Hidden} for details). The (dual Maurer-Cartan formulation of the) superalgebra (\ref{domprima})-(\ref{maxxi}) is a $D=11$ extension of the $D=4$ super-Maxwell algebra we have considered in Section \ref{Maxalg}, including an extra bosonic $1$-form field $\tilde{B}^{ab}$ (whose dimension is $[\tilde{B}^{ab}]=L$). Note that the superalgebra (\ref{domprima})-(\ref{maxxi}) has the same form as the minimal super-Maxwell algebra in $D=4$ discussed in \cite{CR2} (which was referred to as $s\mathcal{M}_4$ in that paper).\footnote{However, the $1$-form fields of \cite{CR2} have different dimensions with respect to those appearing in (\ref{domprima})-(\ref{maxxi}).}
Now, the most general ansatz for the $3$-form $A^{(3)}$, written in terms of the $1$-forms $\sigma^A= \lbrace V^a, \tilde{B}^{ab}, B^{ab}, \Psi, \chi \rbrace$, satisfying the Bianchi identity $d^2 A^{(3)}=0$ reads as follows:
\begin{align}
A^{(3)} (\sigma) &= T_0 \tilde{B}^{ab} \wedge V_a \wedge V_b + T_1 \tilde{B}^{ab} \wedge \tilde{B}_{bc} \wedge \tilde{B}^{c}_{\; a} + \nonumber \\
& \quad + {\rm i} S_1 \bar{\Psi} \wedge \Gamma_a \chi \wedge V^a + S_2 \bar{\Psi} \wedge \Gamma_{ab} \chi \wedge \tilde{B}^{ab} + \nonumber \\
& \quad + M_1 \bar{\Psi} \wedge \Gamma_{ab} \Psi \wedge B^{ab}. \label{a3par}
\end{align}
Then, the requirement that expression $A^{(3)}(\sigma)$ in (\ref{a3par}) satisfies the FDA equation (\ref{FDA11a3}) leads to the following system of equations involving the coefficients $T_0$, $T_1$, $S_1$, $S_2$, and $M_1$:
\begin{equation} \label{cond11}
\left\{
\begin{array}{l}
T_0- S_1 -2M_1 -1 =0 , \cr
T_0+ \frac{1}{10}S_1 -S_2 =0 , \cr
\frac{3}{2} T_1 + \frac{1}{5} S_2 + \frac{1}{5} M_1 =0 , \cr
- \frac{1}{2} S_1 - 5 S_2 + 10 M_1=0 .
\end{array}
\right.
\end{equation}
The solution to the system (\ref{cond11}) depends on one free parameter (we choose $M_1$), and it is given by:
\begin{align}
& T_0 = \frac{1}{6} + 2M_1, \quad T_1 = - \frac{1}{90}- \frac{2}{5}M_1 , \nonumber \\
& S_1=- \frac{5}{6}, \quad S_2 = \frac{1}{12} + 2 M_1. \label{solution}
\end{align}
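As a cross-check of (\ref{solution}), the linear system (\ref{cond11}) can also be solved symbolically. The following minimal sketch (an aid for the reader, not part of the derivation; it assumes the SymPy library is available) reproduces the one-parameter family above:
\begin{verbatim}
# Minimal sketch (assumes SymPy): solve the coefficient
# system (cond11) for T0, T1, S1, S2, leaving M1 free.
from sympy import symbols, solve, Rational

T0, T1, S1, S2, M1 = symbols('T0 T1 S1 S2 M1')

eqs = [
    T0 - S1 - 2*M1 - 1,
    T0 + Rational(1, 10)*S1 - S2,
    Rational(3, 2)*T1 + Rational(1, 5)*S2 + Rational(1, 5)*M1,
    -Rational(1, 2)*S1 - 5*S2 + 10*M1,
]

print(solve(eqs, [T0, T1, S1, S2]))
# -> {T0: 2*M1 + 1/6, T1: -2*M1/5 - 1/90,
#     S1: -5/6, S2: 2*M1 + 1/12}
\end{verbatim}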
Some remarks are in order. First of all, one can easily prove that in the absence of the super-Maxwell extra spinor $\chi$ the expression (\ref{a3par}) could not reproduce the FDA equation (\ref{FDA11a3}) on ordinary superspace anymore. On the other hand, using equation (\ref{maxtildebab}), the last term in (\ref{a3par}) can be rewritten as
\begin{equation}
M_1 \bar{\Psi} \wedge \Gamma_{ab} \Psi \wedge B^{ab}=2 M_1 D\tilde{B}_{ab}\wedge B^{ab} .
\end{equation}
Then, we have
\begin{equation}
2 M_1 D\tilde{B}_{ab}\wedge B^{ab} = 2M_1 d(\tilde{B}_{ab} \wedge B^{ab})+2M_1 \tilde{B}_{ab} \wedge D B^{ab}
\end{equation}
and, extracting the total derivative (which is allowed since the FDA is invariant under the $3$-form gauge transformation $\delta A^{(3)}=d\Lambda^{(2)}$) and using equation (\ref{maxbab}), we obtain the following expression for $A^{(3)}$ in terms of $1$-forms:
\begin{align}
A^{(3)} &= \left(T_0 - 2 M_1 \right) \tilde{B}^{ab} \wedge V_a \wedge V_b + \nonumber \\
& \quad + \left( T_1+ \frac{2}{5} M_1 \right) \tilde{B}^{ab} \wedge \tilde{B}_{bc} \wedge \tilde{B}^{c}_{\; a} + \nonumber \\
& \quad + {\rm i} S_1 \bar{\Psi} \wedge \Gamma_a \chi \wedge V^a + \left( S_2 - 2 M_1 \right) \bar{\Psi} \wedge \Gamma_{ab} \chi \wedge \tilde{B}^{ab}. \label{a3parNEEEEEW}
\end{align}
This final expression contains only the terms appearing in the composite $3$-form written in Refs. \cite{Bandos:2004xw}, \cite{Bandos:2004ym}. In particular, it does not contain the bosonic $1$-form field $B^{ab}$. Accordingly, redefining $T_0 - 2 M_1 \equiv \hat{T}_0$, $ T_1+ \frac{2}{5} M_1 \equiv \hat{T}_1$, $S_1 \equiv \hat{S}_1$, and $S_2 - 2 M_1 \equiv \hat{S}_2$ in (\ref{a3parNEEEEEW}) (which is equivalent to setting $M_1=0$ in (\ref{a3par})) and imposing the requirement that the expression for $A^{(3)}$ in (\ref{a3parNEEEEEW}) satisfies the FDA equation (\ref{FDA11a3}), one ends up with
\begin{equation}
\hat{T}_0 = \frac{1}{6} , \quad \hat{T}_1 = - \frac{1}{90}, \quad \hat{S}_1=- \frac{5}{6}, \quad \hat{S}_2 = \frac{1}{12} ,
\end{equation}
corresponding to the solution found in \cite{Bandos:2004xw}, \cite{Bandos:2004ym} with a particular choice for the normalization of the extra spinor $1$-form (see also \cite{Hidden} and, in particular, the expression for $A^{(3)}_{(0)}$ in \cite{Malg} where, however, the extra spinor $1$-form named $\xi$ was normalized in a different way).\footnote{In the case under analysis, we are not considering the presence of the extra bosonic $1$-form field $B^{a_1 \ldots a_5}$ (dual to a bosonic generator $Z_{a_1 \ldots a_5}$), which would appear when considering the complete FDA including also a $6$-form potential $B^{(6)}$.}
Thus, we can conclude that the super-Maxwell algebra (\ref{domprima})-(\ref{maxxi}) can be interpreted as a (hidden) superalgebra underlying the supersymmetric FDA (\ref{FDA11omega})-(\ref{FDA11a3}) describing $D=11$ supergravity. This superalgebra is larger than the one discovered in \cite{D'AuriaFre} (excluding the $1$-form $B^{a_1 \ldots a_5}$), since it contains one more extra bosonic $1$-form field $B^{ab}$.
On the other hand, the contribution coming from $B^{ab}$ in the parametrization of $A^{(3)}$ can be reabsorbed by a gauge transformation of the $3$-form. Again, in analogy with the result we have obtained in Section \ref{Maxalg} in $D=4$ space-time dimensions, the peculiarity of the above result in $D=11$ lies in the fact that the spinor $\chi$ that allows one to write the supersymmetrization of the $D=11$ Maxwell algebra is also the same spinor that allows one to write the parametrization of the $3$-form $A^{(3)}$ in terms of $1$-forms in such a way as to fulfill the FDA requirement (\ref{FDA11a3}).
We now move to the analysis of the hidden gauge structure of the supersymmetric FDA (\ref{FDA11omega})-(\ref{FDA11a3}).
\subsection{Analysis of the hidden gauge structure in D=11}
Recalling the discussion presented in Section \ref{Maxalg}, once the supersymmetric FDA (\ref{FDA11omega})-(\ref{FDA11a3}) is parametrized in terms of $1$-forms, the symmetry structure is based on the hidden supergroup-manifold $G$ having the structure of a principal fiber bundle $(G/\mathcal{H},\mathcal{H})$: $G/ \mathcal{H}$ corresponds to superspace, while the fiber $\mathcal{H}$ in the present $D=11$ case includes, besides the Lorentz transformations, also the hidden super-Maxwell generators in $D=11$ (we call them $\tilde{Z}_{ab}$, $Z_{ab}$, and $\Sigma$, and they are dual to the $1$-form fields $\tilde{B}^{ab}$, $B^{ab}$, and $\chi$, respectively).
We can then write $\mathcal{H} = H_0 + H_b + H_f$, so that $\lbrace J_{ab} \rbrace \in H_0$, $ \lbrace \tilde{Z}_{ab}, Z_{ab} \rbrace \in H_b$, $\lbrace \Sigma \rbrace \in H_f$, and $\lbrace P_a , Q \rbrace \in \mathbb{K}$, where $\mathbb{G} = \mathcal{H} + \mathbb{K}$ is the hidden Maxwell superalgebra.
We now analyze the relation between the FDA gauge transformations and those of its hidden Maxwell supergroup. As we have already mentioned, the FDA (\ref{FDA11omega})-(\ref{FDA11a3}) is invariant under the gauge transformation
\begin{equation}\label{gaugefdaNEW}
\delta A^{(3)}=d\Lambda^{(2)} ,
\end{equation}
which is generated by the arbitrary $2$-form $\Lambda^{(2)}$.
The gauge transformations of the bosonic 1-forms $\tilde{B}^{ab}$, $B^{ab}$ and of the spinor $1$-form $\chi$ generated by the tangent vectors in $H_b$ and in $H_f$ are respectively given by:
\begin{equation}\label{gauge1fomsNEW}
\left\{
\begin{array}{l}
\delta \tilde{B}^{ab} = d \tilde{\Lambda}^{ab} , \\
\delta B^{ab}=d\Lambda^{ab} - \bar{\varrho} \Gamma^{ab} \Psi - \frac{2}{5} \tilde{\Lambda}^{ac} \tilde{B}_c^{\;b}, \\
\delta \chi = D \varrho + \frac{1}{20} \Gamma_{ab} \Psi \tilde{\Lambda}^{ab},
\end{array}\right.
\end{equation}
where $\tilde{\Lambda}^{ab}$ and $\Lambda^{ab}$ are arbitrary Lorentz-valued scalar functions and where we have introduced the infinitesimal spinor parameter $\varrho$.
The corresponding $2$-form gauge parameter of $A^{(3)}$ turns out to be
\begin{align}
\Lambda^{(2)} &= T_0 \tilde{\Lambda}^{ab} V_a \wedge V_b + 3T_1 \tilde{\Lambda}^{ab} \tilde{B}_{bc} \wedge \tilde{B}^c_{\;a} + \nonumber \\
& \quad - {\rm i} S_1 \bar{\Psi} \wedge \Gamma_a \varrho V^a - S_2 \bar{\Psi} \wedge \Gamma_{ab} \varrho \tilde{B}^{ab} + \nonumber \\
& \quad +S_2 \bar{\Psi} \wedge \Gamma_{ab} \chi \tilde{\Lambda}^{ab} +M_1 \bar{\Psi} \wedge \Gamma_{ab} \Psi \Lambda^{ab} . \label{l2NEW}
\end{align}
We can now show that all the diffeomorphisms in the hidden Maxwell supergroup, generated by Lie derivatives, are invariances of the FDA, the ones in the fiber $\mathcal{H}$ directions being associated with a particular form of the gauge parameter of the FDA
gauge transformation (\ref{gaugefdaNEW}). Indeed, defining the following tangent vectors:\footnote{Again, since the Lorentz transformations, belonging to $H_0 \subset \mathcal{H}$, are not effective on the FDA, the $3$-form $A^{(3)}$ being Lorentz-invariant, our analysis reduces to considering the transformations induced by the tangent vectors in $H_b $ and in $ H_f$.}
\begin{align}
& \overrightarrow{z} \equiv \tilde{\Lambda}^{ab} \tilde{Z}_{ab} + \Lambda^{ab} Z_{ab} \in H_b , \\
& \overrightarrow{q} \equiv \bar{\varrho} \Sigma \in H_f ,
\end{align}
we find that a gauge transformation leaving invariant the FDA (\ref{FDA11omega})-(\ref{FDA11a3}) is recovered, $A^{(3)}$ being parametrized in terms of $1$-forms, if
\begin{equation}
\Lambda^{(2)} \equiv \Lambda^{(2)}_b + \Lambda^{(2)}_f = \imath_{\overrightarrow{z}} (A^{(3)}) + \imath_{\overrightarrow{q}} (A^{(3)}) ,
\end{equation}
where $\imath$ denotes the contraction operator and where we have denoted by $\Lambda^{(2)}_b$ the $2$-form gauge parameter corresponding to the transformations in $H_b$, while $\Lambda^{(2)}_f$ is the $2$-form gauge parameter corresponding to the transformation in $H_f$. The result written above is true as a consequence of the relations (\ref{cond11}) obeyed by the coefficients of the parametrization (\ref{a3par}) of $A^{(3)}$ in terms of $1$-forms.
Then, introducing the Lie derivative $\ell _{\overrightarrow{z}} \equiv d \imath_{\overrightarrow{z}} + \imath_{\overrightarrow{z}} d$ (and, analogously, $\ell _{\overrightarrow{q}} \equiv d \imath_{\overrightarrow{q}} + \imath_{\overrightarrow{q}} d$), we can write
\begin{align}
\delta A^{(3)} & = \delta_{\overrightarrow{z}} A^{(3)} + \delta_{\overrightarrow{q}} A^{(3)} = \nonumber \\
& = T_0 \, d \tilde{\Lambda}^{ab} \wedge V_a \wedge V_b + 3 T_1 d \tilde{\Lambda}^{ab} \wedge \tilde{B}_{bc} \wedge \tilde{B}^c_{\; a} + \nonumber \\
& \quad + {\rm i} S_1 \bar{\Psi} \wedge \Gamma_a \left( D \varrho + \frac{1}{20} \Gamma_{bc} \Psi \tilde{\Lambda}^{bc} \right) \wedge V^a + \nonumber \\
& \quad + S_2 \bar{\Psi} \wedge \Gamma_{ab} \left( D \varrho + \frac{1}{20} \Gamma_{cd} \Psi \tilde{\Lambda}^{cd} \right) \wedge \tilde{B}^{ab} + \nonumber \\
& \quad + S_2 \bar{\Psi} \wedge \Gamma_{ab}\chi \wedge d \tilde{\Lambda}^{ab} + \nonumber \\
& \quad + M_1 \bar{\Psi} \wedge \Gamma_{ab} \Psi \wedge \left(d \Lambda^{ab} - \bar{\varrho} \Gamma^{ab}\Psi - \frac{2}{5}\tilde{\Lambda}^{ac} \tilde{B}_c^{\; b} \right) = \nonumber \\
& = d \left(\imath_{\overrightarrow{z}} (A^{(3)}) \right) + d \left(\imath_{\overrightarrow{q}} (A^{(3)}) \right) = \nonumber \\
& = \ell_{\overrightarrow{z}} A^{(3)} + \ell_{\overrightarrow{q}} A^{(3)} ,
\end{align}
where the last equality follows since $dA^{(3)}$, as given in (\ref{FDA11a3}), is invariant under transformations
generated by $\overrightarrow{z}$ and $\overrightarrow{q}$, corresponding to the gauge invariance of the supervielbein (the right hand side of $dA^{(3)}$ is in the $\mathcal{H}$-relative CE cohomology).
We can finally see that, after integration by parts, making use of the $3$-gravitino Fierz identities in $D=11$ (see Appendix \ref{app}) and of the relations (\ref{cond11}), the above result exactly reproduces the gauge transformation (\ref{gaugefdaNEW}) leaving invariant the supersymmetric FDA (\ref{FDA11omega})-(\ref{FDA11a3}). Precisely, we have
\begin{equation}
\delta A^{(3)} = \delta_{\overrightarrow{z}} A^{(3)} + \delta_{\overrightarrow{q}} A^{(3)} = d \Lambda^{(2)} ,
\end{equation}
where $\Lambda^{(2)}$ is defined in equation (\ref{l2NEW}). This result is hardly surprising, since if one had reabsorbed (as shown above) the term containing $B^{ab}$ in the parametrization (\ref{a3par}) of $A^{(3)}$, the analysis of the FDA gauge invariance would have been traced back to the one done in \cite{Hidden}.
We have thus completed the analysis of the hidden gauge structure of the $D=11$ supersymmetric FDA (\ref{FDA11omega})-(\ref{FDA11a3}).
\section{Conclusions} \label{Concl}
In this paper, driven by the fact that any supersymmetric FDA can always be traded for a hidden Lie superalgebra containing extra, nilpotent, fermionic generators \cite{Hidden}, we have first of all shown that the $D=4$ super-Maxwell algebra of Ref. \cite{deAzcarraga:2014jpa} (given in (\ref{smlast})) can be interpreted as a hidden superalgebra underlying the ground state FDA (\ref{fdafirst})-(\ref{fdalast}) of $D=4$ supergravity containing a $2$-form potential $A^{(2)}$.
Subsequently, we have considered the FDA (introduced in \cite{D'AuriaFre}) describing the $D=11$ supergravity theory of \cite{Cremmer}, which contains a $3$-form potential $A^{(3)}$, and we have shown that there exists a $D=11$ super-Maxwell algebra underlying the theory. In this work, we have limited ourselves to consider the $D=11$ FDA containing just the $3$-form $A^{(3)}$, leaving the study of the complete FDA involving also a $6$-form potential $B^{(6)}$ (and, correspondingly, the presence of an extra bosonic $1$-form field $B^{a_1 \ldots a_5}$, see Refs. \cite{D'AuriaFre}, \cite{Hidden}, in the underlying Lie superalgebra) to future investigations.\footnote{In that case, the extra bosonic $1$-form field $B^{ab}$ appearing in the $D=11$ super-Maxwell algebra could play a more prominent role, participating in the parametrization of $B^{(6)}$ in terms of $1$-forms.}
In the analyses we have performed, the presence of the extra spinors $\xi$ and $\chi$, naturally appearing in the supersymmetric extension of the Maxwell algebras in the $D=4$ and $D=11$ cases, respectively, is crucial in order to reproduce the $D=4$ and $D=11$ FDAs on ordinary superspace, whose basis is given by the supervielbein. Indeed, referring, for instance, to the $D=4$ case, the spinor $1$-form field $\xi$ allows one to write the parametrization $A^{(2)}=-\bar{\psi} \wedge \xi$ satisfying (\ref{fdalast}); this would not be possible without adding extra fields to the $D=4$ supergravity theory, and it is particularly intriguing that the very spinor that is fundamental to the construction of the Maxwell superalgebra is also the one that makes possible a parametrization in terms of $1$-forms of the $2$-form $A^{(2)}$ appearing in the FDA of the $\mathcal{N}=1$, $D=4$ supergravity theory. The same consideration holds true also in the $D=11$ case, where the extra spinor $\chi$ naturally appearing in the $D=11$ super-Maxwell algebra allows one to trivialize the FDA containing the $3$-form potential $A^{(3)}$ when the latter is written in terms of $1$-forms. In this case, we have shown that, exploiting the gauge invariance of the $3$-form, the parametrization (\ref{a3par}) of $A^{(3)}$ in terms of $1$-forms can be recast into the form given in \cite{Bandos:2004xw}, \cite{Bandos:2004ym} (see also \cite{Hidden} and $A^{(3)}_{(0)}$ of \cite{Malg}). Our result could shed some light on the symmetries hidden in $D=11$ supergravity and related models (see, for instance, Refs. \cite{Bergshoeff:1995hm} and \cite{Sezgin:1996cj}).
Concerning the $D=4$ FDA, in this work we have just considered the FDA including the $2$-form potential $A^{(2)}$. We leave the analysis of the (complete) FDA involving also a spinor $2$-form $\Xi ^{(2)}$ (see \cite{Castellani}) satisfying (\ref{xi2giga}) to future works. This would require extra $1$-form fields with respect to those appearing in the dual Maurer-Cartan formulation of the Maxwell superalgebra or, directly, a different Lie superalgebra underlying the theory.
The extra super-Maxwell fields could be important additions towards the construction of possible off-shell models underlying supergravity theories (mainly in higher-dimensional cases, such as the eleven-dimensional one).
Furthermore, let us mention that our framework is naturally related to the formulation of Double Field Theory and Exceptional Field Theory (see also Refs. \cite{Hidden}, \cite{Malg}). Indeed, the presence of extra bosonic $1$-forms in the dual formulation of Lie superalgebras appears to be quite analogous to the presence of extra coordinate directions in the formulation of Double Field Theory and Exceptional Field Theory; in particular, referring to Exceptional Field Theory, the section constraints required in that theory to project the field equations on ordinary
superspace should be dynamically implemented through the presence of the cohomological extra spinors.
It would be interesting to extend our discussion and interpretation of the (hidden) Maxwell-superalgebras to higher-dimensional and $\mathcal{N}>1$ theories worked out in a geometric framework (also matter-coupled ones), investigating, in particular, possible supersymmetric extensions of the discussion presented in \cite{Gomis:2017cmt}.
Finally, one could also analyze gauged FDAs in this geometric framework; in this context, we conjecture that the so-called $AdS$-Maxwell superalgebra \cite{Durka:2011gm} could play an important role within our approach. Some work is in progress on this topic.
\section*{Acknowledgements}
The author is grateful to Laura Andrianopoli and Riccardo D'Auria for the support and the stimulating discussions during the early stages of the preparation of this work. The author also wishes to acknowledge illuminating discussions with Igor A. Bandos.
In Quantum Physics, since we are dealing with operators on Hilbert space, it is important to construct the quantum theory in such a way that its measurement process remains invariant under unitary transformations. Although in non-relativistic quantum mechanics all representations are unitarily equivalent, different inequivalent representations are among the main and natural properties of Quantum Field Theory. However, in conventional Quantum Field Theory, physicists do not pay proper attention to them. In other words, in conventional Quantum Field Theory we take into consideration just one class of these representations and ignore the others. In this article we first review them and then show that, although it seems that they play no important role in Quantum Field Theory in flat (Minkowski) space-time, they are an inevitable part of Quantum Field Theory in curved space-times, without which it is impossible to formalize a consistent QFT.
\section{Equivalent Representations}\label{section.UIR}
\subsection{Stone--von Neumann Uniqueness Theorem}\label{subsecgtion.von Neumann}
We begin with the Stone--von Neumann uniqueness theorem \cite{Neumann1931}, which states that if $\{\tilde{U}(a)| a\in\mathbb{R}\}$, $\{\tilde{V}(b)| b\in\mathbb{R}\}$ are weakly continuous one-parameter families of unitary operators acting irreducibly on a separable Hilbert space $H$ such that $\tilde{U}(a)\tilde{V}(b)=\exp{\frac{-iab}{\hbar}}\tilde{V}(b)\tilde{U}(a)$, $\tilde{U}(a)\tilde{U}(b)=\tilde{U}(a+b)$ and $\tilde{V}(a)\tilde{V}(b)=\tilde{V}(a+b)$, then there is a Hilbert space isomorphism $W:H\rightarrow \mathcal{L}^2 (\mathbb{R})$ such that $W\tilde{U}(a)W^{-1}=U(a)$ and $W\tilde{V}(a)W^{-1}=V(a)$.
This theorem implies that for every system with $N$ degrees of freedom, where $N$ is finite, all representations in Hilbert space are unitarily equivalent.
As an example of these equivalent representations one can consider the wave function of a non-relativistic system, $\psi(x)$. In non-relativistic quantum mechanics $\psi(x)$ is called the space representation of the wave function. Besides, for every system we have another representation, called the momentum representation and denoted by $\phi(p)$. It is easy to show that both representations are unitarily equivalent, since they transform into each other by a Fourier transformation:
\begin{equation}\label{eqn.fourier trans.1}
\psi(x)= \frac{1}{(2\pi\hbar)^\frac{1}{2}}\int \phi (p)\exp \frac{ipx}{\hbar}\,dp
\end{equation}
$$
\phi(p)=\frac{1}{(2\pi\hbar)^\frac{1}{2}}\int \psi(x) \exp \frac{-ipx}{\hbar}\,dx
$$
Since these transformations preserve the norm of the wave function they are unitary.
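Explicitly, norm preservation is the content of the Plancherel theorem,
\begin{equation*}
\int |\psi(x)|^2 \, dx = \int |\phi(p)|^2 \, dp ,
\end{equation*}
so the map $\psi \mapsto \phi$ is an invertible, norm-preserving linear map of $\mathcal{L}^2 (\mathbb{R})$ onto itself, i.e. a unitary operator.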
As mentioned above, the von Neumann uniqueness theorem is valid for all systems with finitely many degrees of freedom \cite{Haag}. But the situation changes when $N\rightarrow\infty$. In this case, instead of a single class of unitarily equivalent representations, we have many equivalence classes of representations. Any two representations belonging to the same class are unitarily equivalent, but representations from different classes need not be equivalent.
By definition a field is a system with an infinite number of degrees of freedom, so naturally we have to deal with unitarily inequivalent representations. One may then ask: which class of representations is physical?
In conventional Quantum Field Theory the answer to this question is given by the condition:
\begin{equation}\label{eqn.vacuum1}
H\ket0=0
\end{equation}
where $\ket0$ is the vacuum state of the quantum field (after this selection we ignore the existence of the other representations). This selection becomes physically realizable due to the existence of Poincare symmetry. As we know, conventional Quantum Field Theory is based on two physical theories, namely Quantum Mechanics and the Special Theory of Relativity. Thus, in order to construct a Poincare (Lorentz) covariant Quantum Field Theory, we have to find a unitary representation of the Poincare group in Hilbert space and then conclude that the vacuum state of the field ($\ket0$) must be Poincare invariant. In other words, all inertial observers (observers related to each other by a Poincare transformation) will see the same vacuum state.
Considering all of the above, and in view of the fact that it is possible to define a globally time-like Killing vector $t$, we can state that:
\begin{equation}
\exp (-i\hat{H}t)\ket0=\ket0
\end{equation}
in which $\hat{H}$ is the generator of the one-parameter time translation group, i.e. the Hamiltonian.
Here we have to add that this procedure can be generalized to curved space-times when they admit a global time-like Killing vector $t$. The vacuum state of QFT in these space-times will then be invariant under the group of isometries.
\section{Why do These Inequivalent Representations Exist?}\label{subsection.why UIR exist}
Although some people try to ignore the existence of these classes of representations, their existence can be proved in the context of conventional Quantum Field Theory as well as in the algebraic one.
In the conventional approach, since we are dealing with a field, we have to make a cut for our system. Remember that this situation does not take place in systems with finitely many degrees of freedom, because in that case we are able to close the system and specify it. But for fields this cannot be done, the reason being that we cannot specify the infinity. So by an idealization we make a cut and try to construct a complete set of observables locally. As an example, suppose that our infinity is located in a very far place, say the Andromeda galaxy, and ${\{A_{i}\}}$ is our complete set of observables. It is clear that we can formalize our theory with ${\{A_{i}\}}$, but this can be done just locally: if someone makes a change in a place beyond the Andromeda galaxy, globally our representations will change. But as we have made a cut at Andromeda, this change will have no effect on our local observations. On the one hand this shows that in local observations and interactions we can neglect these different representations; on the other hand, in non-local effects all of them become important. Here we have to emphasize that if we want to construct a complete and self-consistent theory of Quantum Gravity which can relate local and global phenomena, dealing with all these representations is necessary.
The existence of these different representations can easily be shown in the context of Algebraic Quantum Field Theory too, where one associates a $C^*$-algebra with a quantum field. This follows from the GNS construction \cite{citeulike:7477863,Segal}, which states that for every state $\omega$ on the $C^*$-algebra there is a representation $\pi$ of the algebra by linear operators on a dense subspace $D\subseteq H$ such that
\begin{equation}\label{eqn.GNS}
\omega(A)=(\Omega,\pi(A)\Omega)
\end{equation}
where $\Omega$ is a (cyclic) unit vector in $D$.
So we can conclude that in each representation constructed via the GNS construction, the specified state $\omega$ on the $C^*$-algebra is related to a unit vector $\Omega$ in Hilbert space. Thus if one chooses another state, say $\nu$, and constructs another representation, there is no need for these two representations to be unitarily equivalent. Here we have to add that the existence of these representations had been realized by physicists in the early years of Quantum Field Theory due to Schur's lemma. Moretti has stated in \cite{moretti} that every pure algebraic state $\omega$, corresponding to an irreducible representation of the algebra of observables, must inevitably select a value of $Q$ in the GNS representation ($Q$ being an observable with arbitrary value $q\in \mathcal{R}$ on pure states): following Schur's lemma, $\pi_{\omega}(Q)$ commutes with all elements, so it must be a multiple of the identity. Hence two pure algebraic states $\omega$ and $\omega'$ with distinct $q$ and $q'$ ($q \neq q'$) produce inequivalent representations.
\section{Inequivalent Representations}
As stated above, Poincare invariance leads us to select one special class of representations. In this section we intend to show that picking one class of representations is not sufficient for describing some physical phenomena.
\subsection{Haag's Theorem}
Soon after the formalization of Quantum Field Theory, physicists realized that, even in the context of the conventional approach to Quantum Field Theory, more than one class of representations is needed to describe interacting theories.
This phenomenon was observed by Haag and is sometimes called Haag's no-go theorem, which states \cite{HaagArt} that free and interacting fields must necessarily be defined on different, unitarily inequivalent Hilbert spaces. This means that an interacting Fock space cannot exist, because the frequency-splitting process cannot be applied to interacting fields. For example, it has been shown in \cite{Baker} that if we add an interaction term $\lambda \phi^4$, where $\lambda \in \mathbb{R}$, to the Klein-Gordon Lagrangian, whose free equation of motion is
\begin{equation}\label{KG}
(\Box + m^2)\phi=0
\end{equation}
we would have the interacting equation of motion
\begin{equation}
(\Box + m^2)\phi + 4\lambda \phi^3=0 .
\end{equation}
Then the mass condition $k_ak^a=m^2$
(which plays a crucial role in the frequency-splitting process) does not hold, which means that some of the single-particle interacting wavefunctions will be built, in part, from plane waves with spacelike momentum vectors.
\subsection{Unruh Effect}
Besides interacting Quantum Field Theory, which indicates the natural existence of different unitarily inequivalent representations, there are some important physical effects that cannot be explained on the basis of just one class of these representations.
As mentioned above, the existence of Poincare symmetry leads us to conclude that the vacuum state $\ket{0}$ is identical for all inertial observers.
But is it possible for non-inertial observers to see the same vacuum?
The answer is negative. The Unruh effect is a clear example of the fact that for an accelerating observer the Minkowski vacuum state $\ket0$ (the vacuum state of a Quantum Field Theory based on Poincare symmetry) looks like a thermal state with temperature $ T= \frac{\hbar a}{2\pi k_B c}$, in which $a$ stands for the acceleration. This phenomenon, discovered by Unruh \cite{Unruh}, Fulling \cite{Fulling} and Davies \cite{Davies}, plays a basic role in other important effects, notably the Hawking effect. Here we discuss the Unruh effect briefly.
As we know from the Special Theory of Relativity, Rindler coordinates describe a uniformly accelerating frame of reference; they are obtained from the standard Minkowski line element:
\begin{equation}
ds^2=dt^2-dx^2-dy^2-dz^2
\end{equation}
by introducing these new coordinates:
\begin{equation}\label{transformation}
x=\xi \cosh{\eta}
\end{equation}
$$ t=\xi \sinh{\eta}$$
and after some calculation we can write the so-called Rindler line element:
\begin{equation}
ds^2=\xi^2 d\eta^2-d\xi^2-dy^2-dz^2
\end{equation} where $\xi^2=\frac{1}{a^2}$.
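For completeness, the short calculation behind this line element is the following: differentiating (\ref{transformation}) gives
\begin{equation*}
dx = \cosh{\eta} \, d\xi + \xi \sinh{\eta} \, d\eta , \qquad dt = \sinh{\eta} \, d\xi + \xi \cosh{\eta} \, d\eta ,
\end{equation*}
so that, using $\cosh^2{\eta}-\sinh^2{\eta}=1$ and noting that the cross terms cancel,
\begin{equation*}
dt^2 - dx^2 = \xi^2 d\eta^2 - d\xi^2 .
\end{equation*}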
As stated in \cite{Arageorgis}, there is an apparent singularity at $\xi=0$. It is a coordinate singularity, due to the fact that these coordinates cover just a portion of Minkowski spacetime, called the (right) Rindler wedge $R: x> |t|$.
The main feature of the Rindler wedge is that it is a globally hyperbolic spacetime with Cauchy surfaces $\eta=const$, whose orthogonal trajectories are $\xi^2=x^2 - t^2$. By comparing Rindler spacetime with Minkowski spacetime, it is clear that an observer whose worldline is one of these orthogonal trajectories undergoes constant proper acceleration of magnitude $a=\xi^{-1}$. In other words, a particle following the hyperbolic motion $\xi^2=x^2-t^2=const$ is a stationary observer according to Rindler coordinates.
The global hyperbolicity of the Rindler wedge enables us to quantize the Klein-Gordon scalar field $\phi$ on this spacetime and to construct the Hilbert space and Fock space representations of $\phi$ with their corresponding operators. This procedure is called Fulling quantization.
Now we may say that the Unruh effect indicates the following: if we consider the Klein-Gordon field equation (\ref{KG}) in the Rindler wedge and then apply the quantization procedure twice, in two different ways (once using Minkowski coordinates and once using Rindler coordinates), we can conclude that
\begin{equation}
\hat{N}_R\ket{0}_M\neq 0
\end{equation}
where $\hat{N}_R$ is the number operator of the Rindler quantization and $\ket{0}_M$ is the Minkowski vacuum. The important feature is that the two mentioned quantizations are different from each other.
Let us look at the problem algebraically. Following Algebraic Quantum Field Theory, we can say that the restriction of the Minkowski vacuum state $\omega_M$ to the Rindler wedge algebra $\mathcal{A}(R)$ defines a state $\omega_{M}\mid _{\mathcal{A}(R)}$. But contrary to the Rindler vacuum state $\omega_R$, which is a pure state, $\omega_{M}\mid _{\mathcal{A}(R)}$ is a mixed one.
It can be shown that $\omega_{M}\mid _{\mathcal{A}(R)}$ is a KMS state \cite{Wald}:
\begin{equation}
\rho=\prod_i \sum_{n=0}^{\infty} \exp{(\frac{-2\pi n \omega_i}{a})}\ket{n_i}_M\bra{n_i}_M
\end{equation}
where $n\omega_i$ is the energy of the state $\ket{n_i}_M$. Thus it becomes clear that $\omega_{M}\mid _{\mathcal{A}(R)}$ can be seen as the thermal density matrix $\exp (-\frac{H}{T})$, where $H$ is the Hamiltonian. Therefore $T=\frac{a \hbar}{2\pi k_B c}$.
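The identification of the temperature, implicit in the last step, follows by matching Boltzmann factors: in natural units ($\hbar = c = k_B = 1$),
\begin{equation*}
\exp{\left(-\frac{2\pi n \omega_i}{a}\right)} = \exp{\left(-\frac{n \omega_i}{T}\right)} \quad \Rightarrow \quad T = \frac{a}{2\pi} ,
\end{equation*}
and restoring the dimensional constants yields $T=\frac{a \hbar}{2\pi k_B c}$.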
Furthermore, the two quantizations are different in a stronger way: they are disjoint representations. That is why it is mentioned by Belinski \cite{Belinski} that ``... they refer to problems with different Hamiltonians''. There are several papers and useful discussions of whether the Unruh effect is a physical effect or whether it is meaningless to talk about it \cite{Arageorgis}, \cite{Belinski}, \cite{Halvorson}.
What we want to say is that if we take into consideration all the different classes of representations of a quantum field, it is possible to show that the Unruh effect is physical. Of course, as we have mentioned, the starting point is that the accelerated and the inertial observers' coordinates are not related to each other by a Poincare transformation (see (\ref{transformation})).
So first of all we have to extend the notion of a Poincare (Lorentz) covariant Quantum Field Theory to a generally covariant one, where of course all different types of coordinate transformations are allowed. This automatically leads us to the fact that in this case there is no preferred notion of states. Speaking algebraically, general covariance means that for every open subset $O \subseteq M$ and every $\mathcal{X} \in Diff(M)$ (the diffeomorphism group) we have \cite{Salehi}
\begin{equation}
\pi_{\mathcal{X}}(\mathcal{A}(O))=\mathcal{A}(\mathcal{X}(O))
\end{equation}
where $\mathcal{A}(O)$ is the algebra of observables in the region $O$ and $\pi_{\mathcal{X}}$ is the representation of $\mathcal{X}$. So we can say that in the Unruh effect, dealing with all classes simultaneously, acceleration changes the representation of the quantum field from one class to another and makes the vacuum state of the initial class seem thermalized in the new class.
\subsection{Hawking Effect}\label{section.hawking Effect}
In this section we want to discuss the Hawking effect and show that by taking these unitarily inequivalent representations into account it is possible, albeit not in a mathematically complete way, to describe this phenomenon.
Stephen Hawking \cite{Hawking:1974rv} showed that by taking quantum effects into account, it is possible to attribute thermal radiation to black holes. This radiation, called Hawking radiation, has the temperature
\begin{equation}\label{eqn.hawking.temp}
T_{H}=\frac{\hbar c^3}{8\pi G M k_{B}}
\end{equation}
This formula contains four fundamental constants of nature: $ \hbar$, $G$, $k_B$ and $c $. In other words, Hawking radiation showed that there is a connection between Thermodynamics, General Relativity and Quantum Field Theory. Although this was a great achievement, at first glance Hawking's original calculation suffers from the transplanckian problem. Because of this, some physicists, considering the fact that in our world there is no transplanckian energy, concluded that the Hawking effect is not physical. Besides, there were some other problems related to the second law of Thermodynamics. The situation changed with Bekenstein's generalized second law of thermodynamics \cite{Bekenstein}, which states that in a system with a black hole the total amount of entropy satisfies
\begin{equation}\label{eqn.bekenstein}
\Delta S_{outside} + \Delta S_{B.H} \geq 0
\end{equation}
where for a black hole the entropy is equal to
\begin{equation}\label{eqn.entropy. beken}
S_{B.H}=\frac{A k_B c^3}{4\hbar G}
\end{equation}
in which $A$ denotes the area of the black hole horizon.
Since the generalized second law indicates that when an object falls into a black hole, the entropy of the black hole increases and all the information about the object is lost, it is possible to associate thermal radiation with the black hole. One can conclude that in the presence of gravitational collapse the vacuum state of the quantum field, $ \ket {0} $, becomes unstable and is finally changed into a thermal state, $\ket{\beta}$.
The main problem arises when we try to discuss this effect in the context of Quantum Field Theory in curved space-time.
Consider a vacuum state $\ket{0}$. In order to discuss the Hawking effect we have to use semiclassical General Relativity,
\begin{equation}\label{eqn.semiclassic}
G_{\mu\nu}=\kappa\braket{T_{\mu\nu}}
\end{equation}
where $\braket{T_{\mu\nu}}$ is the expectation value of the stress tensor of the quantum field. Now it seems that for a vacuum state we have
\begin{equation}\label{eqn.vacuum2}
\bra{0}T_{\mu\nu}\ket{0}=0
\end{equation}
Comparing with the left-hand side of equation (\ref{eqn.semiclassic}), it is evident that this is compatible with the black hole metric. But the situation changes when we arrive at the final state of the quantum field, the thermal state $\ket{\beta}$.
In this case we have
\begin{equation}\label{eqn.thermal}
\bra{\beta}T_{\mu\nu}\ket{\beta}\neq 0
\end{equation}
But this seems contradictory in two ways. First, we have
\begin{equation}\label{eqn.contradict}
G_{\mu\nu}\neq \kappa \bra{\beta}T_{\mu\nu}\ket{\beta}
\end{equation}
which indicates that the given background for the black hole is not compatible with the new stress tensor.
The second contradiction takes place when we look at the vacuum state $ \ket{0} $ in the context of conventional Quantum Field Theory. As mentioned above, conventional QFT is formalized in just one of these classes of representations, where \begin{equation}\label{eqn.vacuum3}
H\ket{0}=0
\end{equation}
Since all representations in this class are equivalent to each other up to a unitary transformation, it is impossible to find a unitary transformation $U$ able to transform $\ket{0}$ into $\ket{\beta}$ such that
\begin{equation}\label{eqn.vacuum to thermal}
U\ket{0}=\ket{\beta}
\end{equation}
The strange results introduced above divide theoretical physicists into two groups. The first are those who try to resolve these contradictions using different mathematical and physical methods and concepts. The others are those who conclude that these contradictions tell us that the Hawking effect is unphysical, and that even QFT in curved space-time does not exist.
But these two contradictions may be easily resolved if we give up working with one class of representations and regard all of them simultaneously. In this case we can say that during the Hawking effect, the vacuum state $\ket{0}$, belonging to some class of representations, changes into another class of representations and becomes $\ket{\beta}$. Meanwhile the given background denoted by $G_{\mu\nu}$ changes into another one. This example shows the complementarity of General Relativity and QFT. Although it seems very difficult to find the relations between inequivalent representations in QFT (as in Haag's theorem), it becomes clear and simple when we start to take General Relativity into consideration. In other words, it is due to gravity that one class of representations changes into another. It means that gravitational effects can relate different inequivalent classes to each other. This is one of the special features of gravity, because neither the electromagnetic field nor the other gauge fields of the Standard Model can do that.
In this sense Semiclassical Gravity, and even pure Quantum Gravity, is not just an effort to find a quantization procedure for Einstein's field equations, but also a mathematical and conceptual framework for finding all existing relations and transformation rules between the unitarily inequivalent representations.
\section{Conclusion}\label{section.conclusion}
As mentioned above, the existence of these unitarily inequivalent representations is an inevitable part of QFT. But the main problem is that when we try to construct a physical theory, by considering the Poincar\'e symmetry, we select just one of these classes and simply forget about the existence of the others. This causes some problems such as Haag's no-go theorem \cite{HaagArt}. On the other hand, the formulation of the S-matrix is such that one can find the final state $\ket{f}$ at $t=+\infty$ by operating the S-matrix on the initial state $\ket{i}$ at $t=-\infty$ without taking into account the moment of interaction, regarding it as a black box. But it is at the moment of interaction that all of these classes may become equally important. Again we have to mention that the existence of these classes is related to the global structure, and that is why we will not be able to see their effect in our local observations. As stated in \cite{Wald}, if $(F_1,\pi_1)$ and $(F_2,\pi_2)$ are two unitarily inequivalent representations of the Weyl algebra $\mathcal{A}$, $A_1,A_2,\ldots,A_n \in \mathcal{A}$, $\epsilon_1, \ldots ,\epsilon_n >0$, and $\omega_1$ is an algebraic state corresponding to a density matrix on $F_1$, then there exists a state $\omega_2$ corresponding to a density matrix on $F_2$ such that for all $i=1,2,\ldots,n$ we have $|\omega_1(A_i)-\omega_2(A_i)|<\epsilon_i$. This, in turn, shows that although two representations of $\mathcal{A}$ may be unitarily inequivalent, the determination of a finite number of expectation values in $\mathcal{A}$, made with finite accuracy, cannot distinguish between different representations.
Another important feature is that Quantum Gravity will enable us to relate the yet unknown transplanckian world and ours to each other. This correlation shows itself in the Hawking effect, which again can be explained in this manner.
Furthermore, in this paper we wanted to show that there may be a new way of looking at the yet unknown quantum theory of gravity: gravity may have the role of relating one class of representations to another, although establishing this seems a very difficult physical and mathematical task.
\section*{Acknowledgement}
We would like to thank Professor H. Salehi for insightful comments and valuable suggestions.
\section{Introduction}
It is a widespread myth in scientific programming that compiled
languages such as C++ or Java are always, in every circumstance,
faster than interpreted languages such as Perl, JavaScript or Python.
However, while it is quite clear that efficiency matters, as argued in
\cite{anderson2010efficiency}, if we restrict the concept of {\em
speed} to {\em speed of the compiled/interpreted application}, it
might be the case that some languages are faster than others, as
evidenced by benchmarks such as
\cite{prechelt2000empirical,fulghamcomputer}. Taken in general, or
even restricted to some particular set of problems such as floating
point computation, some compiled languages tend to be faster than
interpreted languages.
But, in the same spirit of the {\em no free lunch} theorem
\cite{Wolpert-1997-NFL}, we can affirm that there is a {\em no fast
lunch} theorem for the implementation of evolutionary optimization, in
the sense that, while there are particular languages that might be the
fastest for particular problem sizes and especially fitness functions,
in general the fastest language will depend on those two factors and,
especially for non-trivial problem sizes and limiting ourselves to the
realm of evolutionary algorithm operators, scripting languages such as
JavaScript might be as fast as or even faster than compiled languages
such as Java.
Coming up next, we will give a brief state of the art of the analysis
of implementations of evolutionary algorithms. Next we will present
the test we have used for this paper and its rationale, and then we
will present the results of examining five different languages running
the most widely used evolutionary algorithm operator:
mutation. Finally, we will draw conclusions and present future
lines of work.
\section{State of the art}
In fact, the examination of the running time of an evolutionary
algorithm has received some attention from early on. Implementation
matters \cite{DBLP:conf/iwann/MereloRACML11,nesmachnow2011time}:
paying attention to the particular way an algorithm is implemented
might result in speed improvements that outclass those achieved by
using the {\em a priori} fastest language available. In fact, careful
coding led us to prove \cite{ae09} that Perl,
an interpreted language not optimized for speed, could obtain times
that were of the same order of magnitude as Java. However, that
paper also proved that, for the particular type of problems used in
scientific computing in general, running speed is not as important
as coding speed or even learning speed, since most scientific programs
are, in fact, run a few times while a lot of time is spent on coding
them. That is why expressive languages such as Perl, JavaScript or
Python are, in many cases, superior to these fast-to-run
languages.
However, the benchmarks done in those papers were restricted to
particular problem sizes. Since program speed is the result of many
factors, including memory management and the implementation of loop
control structures, in this paper we will examine how fast several
languages are for different problem sizes. This will be done next.
\section{Experimental setup}
First, a particular problem was chosen for testing different
languages and also data representations: performing bit-flip mutation
on a binary string. In fact, this is not usually the part of the
program in which an evolutionary algorithm spends the most time
\cite{nesmachnow2011time}. In general, that is the fitness function,
and then reproduction-related functions: chromosome ranking, for
instance. However, mutation is the operation that is performed the
most times in every evolutionary algorithm and is quintessential to
the algorithm itself, so it allows the comparison of the different
languages in the proper context.
Essentially, mutation is performed by the following steps (a minimal
sketch is shown after the list): \begin{enumerate}
\item Generating a random integer from 0 to the length of the chromosome.
\item Choosing the bit in that position and flipping it.
\item Building a chromosome with the value of that bit changed.
\end{enumerate}
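A minimal Python sketch of these three steps (Python being one of the
benchmarked languages; the function name is ours) could read as
follows:

\begin{verbatim}
import random

def bitflip(chromosome):
    """Flip one randomly chosen bit of a '0'/'1' string."""
    # Step 1: random position from 0 to the length of the chromosome.
    position = random.randrange(len(chromosome))
    # Step 2: flip the bit found in that position.
    flipped = '1' if chromosome[position] == '0' else '0'
    # Step 3: build a new chromosome with that bit changed.
    return chromosome[:position] + flipped + chromosome[position + 1:]

print(bitflip('0000000000000000'))  # e.g. '0000001000000000'
\end{verbatim}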
Chromosomes can be represented in at least two different ways: as an
array or vector of boolean values, or any other scalar value that can
be assimilated to it, or as a bitstring, generally using ``1'' for true
values and ``0'' for false values. Different data structures will have
an impact on the result, since the operations that are applied to them
are, in many cases, completely different and thus the underlying
implementation is more or less efficient.
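To make the influence of the data structure concrete, here is a sketch
of the same operator acting on a vector of booleans; unlike the
(immutable) Python string used above, the list can be flipped in
place, so no new chromosome object needs to be built:

\begin{verbatim}
import random

def bitflip_vector(chromosome):
    """Flip one randomly chosen boolean of a list, in place."""
    position = random.randrange(len(chromosome))
    chromosome[position] = not chromosome[position]
    return chromosome

print(bitflip_vector([False] * 16))
\end{verbatim}

Which of the two variants is faster depends on the language runtime,
which is precisely the kind of difference the benchmark measures.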
Then, five languages have been chosen for performing the
benchmark. The primary reason for choosing these languages was the
availability of open source implementations to the author, but they
also represent different philosophies in language design.
\begin{table}[htb]
\centering
\begin{tabular}{l|c|l}
\hline
Language & Version & URL \\
\hline
Scala & 2.11.7 & \url{http://git.io/bfscala} \\
Lua & 5.2.3 & \url{http://git.io/bflua} \\
Perl & v5.20.0 & \url{http://git.io/bfperl} \\
JavaScript & node.js 5.0.0 & \url{http://git.io/bfnode} \\
Python & 2.7.3 & \url{http://git.io/bfpython} \\
\hline
\end{tabular}
\caption{Languages used and file written to carry out the
benchmark. No special flags were used for the interpreter or
compiler. \label{tab:files}}
\end{table}
Compiled languages are represented by Scala, a strongly-typed
functional language that compiles to Java Virtual Machine
bytecode. Scala is in many cases faster than Java
\cite{fulghamcomputer} due to its more efficient
implementation of type handling. Two different representations were
used in Scala: {\tt String} and {\tt Vector[Boolean]}. They both have
the same underlying type, {\tt IndexedSeq}, and in fact the overloading
of operators allows us to use the same syntax independently of the
type. The benchmark, {\tt bitflip.scala}, is available under a GPL
license at the URL shown in Table \ref{tab:files}.
Interpreted languages are represented by Lua, Perl, JavaScript and
Python. Lua is a popular embedded language that is designed for easy
implementation; Perl has been used extensively for evolutionary
algorithms \cite{ae09,merelo14:noisy,DBLP:conf/cec/GuervosMCCV13} with
satisfactory results, and node.js, an implementation of JavaScript
which uses the V8 JIT compiler to create bytecode when it reads the
script,
has been used lately by our research group as part of our NodEO
library \cite{DBLP:conf/gecco/GuervosVGES14} and the volunteer computing
framework NodIO \cite{DBLP:journals/corr/GuervosG15}. In fact, this
paper is in part a rebuttal to concerns raised by reviewers about the
lack of speed, and thus the adequacy, of JavaScript for evolutionary
algorithm experimentation. Versions and files are shown in Table
\ref{tab:files}. Only Perl used two data structures, as in Scala: a
string, which is a scalar structure in Perl, and an array of
booleans.
In all cases except Scala, implementation took less than one hour and
was inspired by the initial implementation made in Perl. Adequate data
and control structures were used for running the application, which
applies mutation to a single generated chromosome a hundred thousand
times. The length of the mutated string starts at 16 and is doubled
until $2^{15}$, that is, 32768, is reached. This upper length was
chosen to provide an ample range, but also to be small enough to run
the benchmarks within one hour. Results are shown next.
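The following Python sketch shows the structure shared by all the
benchmarks; it is not the code of Table \ref{tab:files} itself, but an
illustration of the loop and of the CSV output described above:

\begin{verbatim}
import random, time

def bitflip(chromosome):
    position = random.randrange(len(chromosome))
    flipped = '1' if chromosome[position] == '0' else '0'
    return chromosome[:position] + flipped + chromosome[position + 1:]

ITERATIONS = 100000                     # a hundred thousand mutations
length = 16
while length <= 2 ** 15:
    chromosome = '0' * length
    start = time.time()
    for _ in range(ITERATIONS):
        chromosome = bitflip(chromosome)
    # One CSV row per length: language, chromosome length, seconds.
    print("python,%d,%f" % (length, time.time() - start))
    length *= 2
\end{verbatim}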
\section{Results and analysis}
\label{sec:res}
\begin{figure}[h!tb]
\centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/results-1}
\end{knitrout}
\caption{Plot of time needed to perform 100K mutations in strings with
lengths increasing by a factor of two from 16 to $2^{15}$. Please note
that $x$ and $y$ both have a logarithmic scale.}
\label{fig:time}
\end{figure}
All measurements and processing scripts are included in this paper's
repository; in fact, the programs were written to directly
produce a CSV (comma separated value) representation of the measurements,
which was then plotted using R and {\tt ggplot} as shown in Figure
\ref{fig:time}. The first unexpected behavior shown here is the
remarkable speed of the Lua language, which is, in fact, faster
than any other for small sizes, although slower than Python at bigger
sizes. After these two, the next position goes to node.js, which uses a very
efficient implementation of a Just-in-Time compiler for the
JavaScript language. Then comes Scala, whose bitvector representation is
better than the bitstring one, and finally Perl, with the bitstring
representation being slightly better than bit vectors, although the
difference dilutes with size. In fact, Scala is a bit better than node
for the smaller sizes, less than 128 bits, in either representation,
but that advantage disappears for greater sizes.
The behavior of the languages is not linear either. Node.js and Python
both have an
interesting feature: their speed is roughly independent of the string
size up to size 1024. The same happens for several size
segments in Lua, which shows some plateaus that eventually break at the
bigger sizes. This probably means that the creation and accessing of
strings is done in constant, or roughly constant, time, giving these
languages a performance advantage over the others. Even so, node.js
never manages to beat the fastest language in this context, which is
either Lua or Python, depending on the size.
The trend from size 1024 on is for the differences to remain more or
less the same for bigger sizes, so we do not think it would be
interesting to extend the benchmark to $2^{16}$ and upwards. In any
case, these measures allow us to gauge the performance of the most
widely used genetic operator in five different and popular languages,
since all of them (except for Lua) show up in most rankings of the most
popular languages, such as the Tiobe ranking \cite{tiobe15}.
\section{Conclusions}
In this paper we set out to measure the speed of different languages
when running the classical evolutionary algorithm operation: mutating
a chromosome represented as a binary string or vector. The results
can be a factor in the choice of a language for implementing solutions
to problems using evolutionary algorithms, at least if raw running
speed is the main consideration. And, if that is our main concern, the
fastest languages for mutating strings have been found to be Lua and
Python, followed by node.js and Perl. Most interpreted languages are
faster over the wider range of chromosome sizes than Scala, which is a
compiled language that uses the Java Virtual Machine.
Despite its speed, Lua is not exactly a popular language, although it
has definitely found its niche in embedded systems and applications
such as in-game scripting, game development or servers, so
our choice for EA programming languages would be Python or JavaScript,
which are fast enough, popular, allow for fast development and have
great and thriving communities. JavaScript does have an advantage over
Python: besides being an interpreted language with
dynamic typing, it can express complex operations in a terse syntax
and has implementations both in browsers and on the
server. We can conclude from these facts and the measurements made in
this paper that JavaScript is perfectly adequate for any scientific
computing task, including evolutionary algorithms.
That does not mean that Perl or Scala are not adequate for
scientific computing. However, they
might not be if experiments take a long time and time is of the
essence; in that case, implementing the critical parts of the program
in C, Go, Lua or Python might be the right way to go. But in general, it
cannot be said that interpreted languages are not an adequate platform
for implementing evolutionary algorithms, as proved in this paper.
Future lines of work might include a more extensive measurement of
other operators such as crossover, tournament selection and other
selection algorithms. However, they are essentially CPU integer
operations and their behavior might be, in principle, very similar to
the one shown here. This remains to be proved, but it is left
as a future line of work.
\section{Acknowledgements}
This paper is part of the open science effort at the University of
Granada. It has been written using {\tt knitr}, and its source as well as
the data used to create it can be downloaded from
\href{https://github.com/geneura-papers/2015-ea-languages}{the GitHub
repository}. It has been supported in part by
\href{http://geneura.wordpress.com}{GeNeura Team}.
This work has been supported in part by SPIP2014-01437 (Direcci\'on General
de Tr\'afico), PRY142/14 (Fundaci\'on P\'ublica Andaluza Centro de
Estudios Andaluces en la IX Convocatoria de Proyectos de
Investigaci\'on), TIN2014-56494-C4-3-P (Spanish Ministry of Economy
and Competitivity), and PYR-2014-17 GENIL project (CEI-BIOTIC
Granada).
\bibliographystyle{elsarticle-num}
\section{Introduction}
\subsection{Phase retrieval}
Suppose that ${\mathbf x}_0\in {\mathbb F}^d$ with ${\mathbb F}\in \{{\mathbb R},{\mathbb C}\}$ is the target signal. The information that we gather about ${\mathbf x}_0$ is
\[
{\mathbf y}=\abs{A{\mathbf x}_0}+\eta,
\]
where $A=({\mathbf a}_1,\ldots,{\mathbf a}_m)^T\in {\mathbb F}^{m\times d}$ is the known measurement matrix and $\eta\in {\mathbb R}^m$ is a noise vector. Throughout this paper, we often assume that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix with entries $a_{jk}\sim N(0,1)$ with $m\gtrsim d$ and we also assume that
$\eta$ is either fixed or random and independent of $A$.
The aim of phase retrieval is to estimate ${\mathbf x}_0$ from ${\mathbf y}$.
Phase retrieval arises in numerous applications such as X-ray crystallography \cite{harrison1993phase,millane1990phase}, microscopy \cite{miao2008extending}, astronomy \cite{fienup1987phase}, coherent diffractive imaging \cite{shechtman2015phase,gerchberg1972practical} and optics \cite{walther1963question}, etc.
A popular model for recovering ${\mathbf x}_0$ is
\begin{equation}\label{eq:mod1}
\argmin{{\mathbf x}\in {\mathbb F}^d} \| \abs{A{\mathbf x}}-{\mathbf y}\|_2.
\end{equation}
If ${\mathbf x}_0$ is sparse, both the constrained nonlinear Lasso model
\begin{equation} \label{eq:mod3}
\min_{{\mathbf x}\in {\mathbb F}^d} \| \abs{A{\mathbf x}}-{\mathbf y}\|_2 \quad \mathrm{s.t.}\quad \|{\mathbf x}\|_1\le R,
\end{equation}
and its non-constrained version
\begin{equation}\label{eq:model2}
\min_{{\mathbf x}\in {\mathbb F}^d} \| \abs{A{\mathbf x}}-{\mathbf y}\|_2^2+\lambda \|{\mathbf x}\|_1,
\end{equation}
have been considered for recovering ${\mathbf x}_0$. As we will see later, many efficient algorithms have already been developed to solve (\ref{eq:mod1}). The aim of this paper is to study the performance of (\ref{eq:mod1}), as well as of (\ref{eq:mod3}) and (\ref{eq:model2}), from the theoretical viewpoint.
In particular, we focus on the question: {\em how well can one recover ${\mathbf x}_0$ by solving these three models?}
\subsection{Algorithms for phase retrieval }
One of the oldest algorithms for phase retrieval is the error-reduction algorithm, which was introduced
in \cite{gerchberg1972practical,ER3}. The error-reduction algorithm solves the following model
\begin{equation}\label{eq:ERM}
\min_{{\mathbf x}\in {\mathbb F}^d, C\in {\mathbb F}^{m\times m}} \| A{\mathbf x}-C{\mathbf y}\|_2,
\end{equation}
where $C={\rm diag}(c_1,\ldots,c_m)$ with $\abs{c_j}=1, j=1,\ldots,m$.
The error-reduction algorithm
is an alternating projection algorithm that iterates between $C$ and ${\mathbf x}$.
A simple observation is that ${\mathbf x}^\#$ is a solution to (\ref{eq:mod1}) if and only if
$({\mathbf x}^\#, {\rm diag}({\rm sign}(A{\mathbf x}^\#)))$ is a solution to (\ref{eq:ERM}). Hence, the error-reduction algorithm can be used to solve (\ref{eq:mod1}). The convergence properties of the error-reduction algorithm are studied in \cite{AltMin,ER4}.
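The alternating structure of (\ref{eq:ERM}) can be sketched in a few
lines of Python with NumPy; this is only an illustrative real-valued
sketch (the number of iterations and the initial guess are
assumptions), not one of the implementations analyzed in \cite{AltMin,ER4}:

\begin{verbatim}
import numpy as np

def error_reduction(A, y, x_init, iterations=100):
    """Alternately optimize C (for fixed x) and x (for fixed C)
    in min ||Ax - Cy||: C = diag(sign(Ax)) is optimal for fixed x,
    and the optimal x for fixed C solves a least squares problem."""
    x = x_init
    for _ in range(iterations):
        c = np.sign(A @ x)
        c[c == 0] = 1.0
        x, *_ = np.linalg.lstsq(A, c * y, rcond=None)
    return x
\end{verbatim}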
Beyond the error-reduction algorithm, generalized gradient descent methods have also been developed for solving (\ref{eq:mod1}) (see \cite{TAF} and \cite{zhang2016reshaped}).
An alternative model for phase retrieval is
\begin{equation}\label{eq:mod2}
\min_{{\mathbf x}\in {\mathbb F}^d}\,\,\sum_{i=1}^m \xkh{\abs{\nj{{\mathbf a}_i,{\mathbf x}}}^2-y_i^2}^2.
\end{equation}
Although the objective function in (\ref{eq:mod2}) is non-convex, many computational algorithms turn out to be successful given a good initialization, such as Gauss-Newton algorithms \cite{Gaoxu}, Kaczmarz algorithms \cite{tan2017phase} and trust-region methods \cite{turstregion}. Applying a gradient descent method to (\ref{eq:mod2}) yields the Wirtinger Flow (WF) \cite{WF} and Truncated Wirtinger Flow (TWF) \cite{TWF} algorithms. It has been proved that both the WF and TWF algorithms converge linearly to the true solution up to a global phase. For sparse phase retrieval, a standard $\ell_1$-norm term is added to the above objective functions, yielding models such as (\ref{eq:mod3}) and (\ref{eq:model2}). Similarly, gradient descent methods with thresholding can be used to solve those models successfully \cite{cai2016optimal,SparseTAF}.
One convex method for the phase retrieval problem is PhaseLift \cite{phaselift}, which lifts the quadratic system to recover a rank-1 positive semi-definite matrix by solving a semi-definite program. An alternative convex method is PhaseMax \cite{phasemax}, which recasts the problem as a linear program by means of an anchor vector.
\subsection{Our contributions} \label{contribution}
The aim of this paper is to study the estimation performance of nonlinear least squares for phase retrieval. We observe the measurement vector ${\mathbf y}=|A{\mathbf x}_0|+\eta$, where $A=[{\mathbf a}_1,\ldots,{\mathbf a}_m]^\top$ is the measurement matrix with ${\mathbf a}_j \in {\mathbb R}^d$, ${\mathbf x}_0\in {\mathbb R}^d$ is the target signal, and $\eta \in {\mathbb R}^m$ is a noise vector. We would like to estimate ${\mathbf x}_0$ from ${\mathbf y}$.
Firstly, we consider the following nonlinear least squares model:
\begin{equation} \label{square model}
\mathop{\min} \limits_{{\mathbf x} \in {\mathbb R}^d} \quad \left\||A{\mathbf x}|-{\mathbf y}\right\|^2.
\end{equation}
One of our main results is the following theorem, which shows that the reconstruction error of model (\ref{square model}) is bounded by a constant multiple of $\norm{{\mathbf \eta}}/\sqrt{m}$, which becomes quite small when $\|\eta\|_2$ is bounded and $m$ is large.
\begin{theorem} \label{result1}
Suppose that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix whose entries are independent Gaussian random variables.
We assume that $m \gtrsim d$. The following holds with probability at least $1-3\exp(-c m)$. For any fixed vector ${\mathbf x}_0\in {\mathbb R}^d$, suppose that ${\widehat{\mathbf{x}}} \in {\mathbb R}^d$ is any solution to (\ref{square model}). Then \begin{equation}\label{eq:orderm}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \lesssim \frac{\|\eta\|_2}{\sqrt{m}}.
\end{equation}
\end{theorem}
The next theorem implies that the reconstruction error in Theorem \ref{result1} is sharp.
\begin{theorem}\label{th:con_lower}
Let $m \gtrsim d$.
Assume that ${\mathbf x}_0\in {\mathbb R}^d$ is a fixed vector. Assume that
$\eta\in {\mathbb R}^m$ is a fixed vector which satisfies $\sqrt{2/\pi}\cdot\abs{\sum_{i=1}^m \eta_i}/m\ge \delta_0$ and $\norm{\eta}/\sqrt{m} \le \delta_1$ for some $\delta_0>0$ and $\delta_1> 0$.
Suppose that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix whose entries are independent Gaussian random variables. Let ${\widehat{\mathbf{x}}}$ be any solution to (\ref{square model}). Then there exist an $\epsilon_0>0$ and a constant $c_{\delta_0,{\mathbf x}_0}>0$ such that the following holds with probability at least $1-6\exp(-c\epsilon_0^2 m)$:
\begin{equation}\label{eq:constant_lower}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge c_{\delta_0,{\mathbf x}_0}.
\end{equation}
Here, the constant $c_{\delta_0,{\mathbf x}_0}$ only depends on $\delta_0$ and $\norm{{\mathbf x}_0}$.
\end{theorem}
\begin{remark} \label{lowbound_optimal}
We next explain why the error bound in Theorem \ref{result1} is sharp up to a constant.
For the sake of contradiction, we assume that there exists an $\alpha>0$ such that
\begin{equation}\label{eq:ascon}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \lesssim \frac{\norm{\eta}}{m^{1/2+\alpha}}\quad \text{ for } m\gtrsim d,
\end{equation}
holds for any fixed ${\mathbf x}_0\in {\mathbb R}^d$ with high probability. Here, ${\widehat{\mathbf{x}}} \in {\mathbb R}^d$ is any solution to (\ref{square model}), which depends on ${\mathbf x}_0$ and $\eta$.
We assume
\begin{equation*}
\varliminf _{m\to \infty }\abs{\sum_{i=1}^m \eta_i/m}\ge \delta_0 \quad \mathrm{and} \quad \varlimsup _{m\to \infty } \norm{\eta}/\sqrt{m} \le \delta_1
\end{equation*}
where $\delta_0,\delta_1>0$. For example, if we take $\eta=(1,\ldots,1)^T\in {\mathbb R}^m$, then $\delta_0=\delta_1=1$. For a fixed ${\mathbf x}_0\in {\mathbb R}^d$, Theorem \ref{th:con_lower} implies the following holds
with high probability
\begin{equation}\label{eq:remark_lower}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge c_{\delta_0,{\mathbf x}_0}, \quad \text{ for } m\gtrsim d,
\end{equation}
where $c_{\delta_0,{\mathbf x}_0}>0$.
However, (\ref{eq:ascon}) implies that
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\}\lesssim \frac{\delta_1}{m^\alpha} \to 0, \quad m\to \infty,
\end{equation*}
which contradicts (\ref{eq:remark_lower}). Hence, (\ref{eq:ascon}) does not hold.
\end{remark}
We next turn to the phase retrieval for sparse signals. Here, we assume that ${\mathbf x}_0\in {\mathbb R}^d$ is $s$-sparse, which means that there are at most $s$ nonzero entries in ${\mathbf x}_0$.
We first consider the estimation performance of the following constrained nonlinear Lasso model
\begin{equation} \label{square model with sparse with constrain}
\min_{{\mathbf x}\in {\mathbb R}^d} \| \abs{A{\mathbf x}}-{\mathbf y}\|_2 \quad \mathrm{s.t.}\quad \|{\mathbf x}\|_1\le R,
\end{equation}
where $R$ is a parameter which specifies a desired sparsity level of the solution. The following theorem presents the estimation performance of model (\ref{square model with sparse with constrain}):
\begin{theorem} \label{th:sparse constrain}
Suppose that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix whose entries are independent Gaussian random variables. If $m \gtrsim s\log(ed/s)$, then the following holds with probability at least $1-3\exp(-c_0 m)$ where $c_0>0$ is a constant. For any fixed $s$-sparse vector ${\mathbf x}_0\in {\mathbb R}^d$, suppose that ${\widehat{\mathbf{x}}} \in {\mathbb R}^d$ is any solution to (\ref{square model with sparse with constrain}) with parameter $R:=\norms{{\mathbf x}_0}_1$ and ${\mathbf y}=\abs{A{\mathbf x}_0}+\eta$. Then
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\}\,\, \lesssim\,\, \frac{\norm{\eta}}{\sqrt{m}}.
\end{equation*}
\end{theorem}
The unconstrained Lagrangian version of (\ref{square model with sparse with constrain}) is
\begin{equation}\label{square model with sparse}
\mathop{\min} \limits_{{\mathbf x} \in {\mathbb R}^d} \quad \left\||A{\mathbf x}|-{\mathbf y}\right\|^2+\lambda\norms{{\mathbf x}}_1,
\end{equation}
where $\lambda>0$ is a parameter which depends on the desired level of sparsity.
The following theorem presents the estimation performance of model (\ref{square model with sparse}):
\begin{theorem} \label{th:sparse}
Suppose that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix whose entries are independent Gaussian random variables. If $m\gtrsim s\log (ed/s)$, then the following holds with probability at least $1-\exp(-c_0 m)-1/d^2$ where $c_0>0$ is a constant.
For any fixed $s$-sparse vector ${\mathbf x}_0\in {\mathbb R}^d$, suppose that ${\widehat{\mathbf{x}}} \in {\mathbb R}^d$ is any solution to
(\ref{square model with sparse}) with the positive parameter $\lambda \gtrsim \normone{\eta}+\norm{\eta}\sqrt{\log d}$ and ${\mathbf y}=\abs{A{\mathbf x}_0}+\eta$.
Then
\begin{equation}\label{eq:nonlam}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \lesssim \frac{\lambda\sqrt{s}}{m}+\frac{\norm{\eta}}{\sqrt{m}}.
\end{equation}
\end{theorem}
We can use a method similar to that in Remark \ref{lowbound_optimal} to show that the reconstruction error in Theorem \ref{th:sparse constrain} is sharp. In Theorem \ref{th:sparse}, one requires that $\lambda \gtrsim \normone{\eta}+\norm{\eta}\sqrt{\log d}$.
Motivated by a lot of numerical experiments, we conjecture that Theorem \ref{th:sparse} still holds provided $\lambda \gtrsim \norm{\eta}\sqrt{\log d}$. If the conjecture holds, then we can take $\lambda \thickapprox \norm{\eta}\sqrt{\log d}$ and replace (\ref{eq:nonlam}) by
\[
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \lesssim \frac{\norm{\eta}}{\sqrt{m}}.
\]
\subsection{Comparison to related works }
\subsubsection{Least squares}
We first review the estimation of signals from noisy linear measurements.
Suppose that ${\mathbf x}_0\in {\mathbb R}^d$ is the target signal. Set
\begin{equation*}
{\mathbf y}'=A{\mathbf x}_0+{\mathbf \eta},
\end{equation*}
where $A \in {\mathbb R}^{m\times d}$ is the measurement matrix and ${\mathbf \eta} \in {\mathbb R}^m$ is a noise vector.
We suppose that $A$ is a Gaussian random matrix with entries $a_{jk}\sim N(0,1)$ and we also
suppose that $m\gtrsim d$.
A popular method for recovering ${\mathbf x}_0$ from ${\mathbf y}'$ is the least squares:
\begin{equation} \label{linear lasso without sparse}
\min_{{\mathbf x}\in {\mathbb R}^d} \norm{A{\mathbf x}-{\mathbf y}'}^2.
\end{equation}
Then the solution of model (\ref{linear lasso without sparse}) is ${\widehat{\mathbf{x}'}}=(A^\top A)^{-1}A^\top {\mathbf y}'$, which implies that
\begin{equation*}
{\widehat{\mathbf{x}'}}-{\mathbf x}_0=(A^\top A)^{-1}A^\top \eta.
\end{equation*}
Thus, with probability at least $1-4\exp(-c d)$, one has
\begin{equation*}
\norm{{\widehat{\mathbf{x}'}}-{\mathbf x}_0}=\norm{(A^\top A)^{-1}A^\top \eta}\le \norm{(A^\top A)^{-1}}\norm{A^\top \eta}\lesssim\frac{\sqrt{d}}{m}\norm{\eta},
\end{equation*}
where the last inequality follows from the facts that $\norm{A^\top \eta}\le 3\sqrt{d}\norm{\eta}$ and $\sigma_{\min}(A) \gtrsim \sqrt{m}$ hold with probability at least $1-4\exp(-c d)$ for any Gaussian random matrix \cite[Theorem 7.3.3]{Vershynin2018}.
Then the following holds with high probability
\begin{equation}\label{eq:error}
\norm{{\widehat{\mathbf{x}'}}-{\mathbf x}_0} \lesssim \frac{\sqrt{d}\|\eta\|_2}{m},
\end{equation}
where ${\widehat{\mathbf{x}'}}$ is the solution of (\ref{linear lasso without sparse}).
For nonlinear least squares with the phaseless measurements ${\mathbf y}=|A {\mathbf x}_0|+\eta$, we consider
\begin{equation} \label{mol:nonlinear}
\min_{{\mathbf x} \in {\mathbb R}^d} \|\abs{A{\mathbf x}}-{\mathbf y}\|.
\end{equation}
Theorem \ref{result1} implies that
\begin{equation}\label{eq:plerror}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\}\,\, \lesssim\,\, \frac{\|\eta\|_2}{\sqrt{m}}
\end{equation}
where ${\widehat{\mathbf{x}}}$ is any solution to (\ref{mol:nonlinear}).
Remark \ref{lowbound_optimal} implies that this upper bound is sharp.
Note that the error rate in $m$ for nonlinear least squares is $O(1/\sqrt{m})$,
while that for least squares is $O(1/m)$.
Hence, the result in Theorem \ref{result1}
highlights an essential difference between the linear least squares model (\ref{linear lasso without sparse}) and the nonlinear least squares model (\ref{mol:nonlinear}).
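This difference of rates can be observed numerically. The following
Monte Carlo sketch (with hypothetical sizes) compares the two errors
for the noise vector $\eta=(1,\ldots,1)^T$; as a simplifying
assumption, the alternating error-reduction iteration sketched above
is started at the true signal, so it only probes the minimizer near
${\mathbf x}_0$ rather than provably finding the global one:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 20
for m in [200, 800, 3200, 12800]:
    A = rng.standard_normal((m, d))
    x0 = rng.standard_normal(d)
    eta = np.ones(m)                       # ||eta|| = sqrt(m)
    # Linear least squares with y' = A x0 + eta:
    x_lin, *_ = np.linalg.lstsq(A, A @ x0 + eta, rcond=None)
    # Nonlinear least squares with y = |A x0| + eta:
    y = np.abs(A @ x0) + eta
    x = x0.copy()                          # simplifying assumption
    for _ in range(200):
        c = np.sign(A @ x)
        c[c == 0] = 1.0
        x, *_ = np.linalg.lstsq(A, c * y, rcond=None)
    err_nl = min(np.linalg.norm(x - x0), np.linalg.norm(x + x0))
    print(m, np.linalg.norm(x_lin - x0), err_nl)
\end{verbatim}

With this choice of $\eta$, the linear error decays like $\sqrt{d/m}$,
while the nonlinear error stays bounded away from zero, in accordance
with Theorem \ref{th:con_lower}.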
\subsubsection{Lasso}
If we assume that the signal ${\mathbf x}_0$ is $s$-sparse and ${\mathbf y}'=A{\mathbf x}_0+{\mathbf \eta}$, one turns to the Lasso
\begin{equation} \label{lasso:sparse uncons}
\min_{{\mathbf x}\in {\mathbb R}^d} \| A{\mathbf x}-{\mathbf y}'\|_2 \quad \mathrm{s.t.}\quad \|{\mathbf x}\|_1\le R.
\end{equation}
If $m \gtrsim s\log d$, then the solution ${\widehat{\mathbf{x}'}}$ of (\ref{lasso:sparse uncons}) satisfies
\begin{equation}\label{eq:lassoer}
\norm{{\widehat{\mathbf{x}'}}-{\mathbf x}_0} \lesssim \|\eta\|_2 \sqrt{s\log d}/m
\end{equation}
with high probability (see \cite{Vershynin2018}).
For the nonlinear Lasso, Theorem \ref{th:sparse constrain} shows that any solution ${\widehat{\mathbf{x}}}$ to $\min_{\normone{{\mathbf x}} \le \normone{{\mathbf x}_0}} \|\abs{A{\mathbf x}}-{\mathbf y}\|$ with ${\mathbf y}=|A {\mathbf x}_0|+\eta$ satisfies
\begin{equation}\label{eq:nlassoer}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \lesssim \norm{\eta}/\sqrt{m}
\end{equation}
with high probability.
Comparing (\ref{eq:lassoer}) with (\ref{eq:nlassoer}), we find that the reconstruction error of the Lasso is similar
to that of the nonlinear Lasso when $m=O(s\log d)$, while the Lasso has better performance
than the nonlinear Lasso provided $m\gg s\log d$.
\subsubsection{Unconstrained Lasso}
We next turn to the unconstrained Lasso
\begin{equation}\label{model:uncons}
\min_{{\mathbf x} \in {\mathbb R}^d} \|A{\mathbf x}-{\mathbf y}'\|^2+ \lambda \normone{{\mathbf x}}
\end{equation}
where ${\mathbf y}'=A {\mathbf x}_0+\eta$ and ${\mathbf x}_0$ is a $s$-sparse vector. If the parameter $\lambda \gtrsim \norm{\eta}\sqrt{\log d}$, then ${\widehat{\mathbf{x}'}}$ satisfies
\[
\norm{{\widehat{\mathbf{x}'}}-{\mathbf x}_0}\lesssim \frac{ \lambda \sqrt{s}}{m}
\]
with high probability (see \cite{Vershynin2018}) where ${\widehat{\mathbf{x}'}}$ is the solution of (\ref{model:uncons}).
For the sparse phase retrieval model
\begin{equation}\label{eq:sparselasso}
\min_{{\mathbf x} \in {\mathbb R}^d} \|\abs{A{\mathbf x}}-{\mathbf y}\|^2+ \lambda \normone{{\mathbf x}}
\end{equation}
with ${\mathbf y}=|A {\mathbf x}_0|+\eta$, Theorem \ref{th:sparse} shows that
\begin{equation}\label{eq:sparselamda}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \lesssim \frac{\lambda\sqrt{s}}{m}+\frac{\norm{\omega}}{\sqrt{m}}
\end{equation}
where the parameter $\lambda \gtrsim \normone{\eta}+\norm{\eta}\sqrt{\log d}$
and ${\widehat{\mathbf{x}}}$ is any solution to (\ref{eq:sparselasso}).
Our result requires the parameter $\lambda$ in the nonlinear Lasso model to be larger than in the linear case.
\subsubsection{The generalized Lasso with nonlinear observations }
In \cite{plan2016generalized}, Y. Plan and R. Vershynin consider the following non-linear observations
\begin{equation*}
y_j=f_j(\nj{{\mathbf a}_j,{\mathbf x}_0}),\quad j=1,\ldots,m
\end{equation*}
where $f_j: {\mathbb R}\to {\mathbb R}$ are independent copies of an unknown random or deterministic function $f$ and ${\mathbf a}_j\in {\mathbb R}^d, j=1,\ldots,m,$ are Gaussian random vectors. The $K$-Lasso model is employed to recover ${\mathbf x}_0$ from $y_j, j=1,\ldots,m$:
\begin{equation}\label{eq:nonlinear}
\min_{{\mathbf x}\in {\mathbb R}^d} \norm{A{\mathbf x}-{\mathbf y}}^2 \quad \mathrm{s.t.}\quad {\mathbf x}\in K,
\end{equation}
where $K\subset {\mathbb R}^d$ is some known set. Suppose that ${\widehat{\mathbf{x}}}$ is the solution to (\ref{eq:nonlinear}). Y. Plan and R. Vershynin \cite{plan2016generalized} show that $\|{\widehat{\mathbf{x}}}-\mu\cdot {\mathbf x}_0\|$ tends to 0 as $m$ tends to infinity, where $\mu={\mathbb E}(f(g)g)$ with $g$ being a Gaussian random variable.
Unfortunately, applying this result to the phase retrieval problem gives $\mu={\mathbb E}(|g|\cdot g)=0$, and hence $\|{\widehat{\mathbf{x}}}\|$ tends to 0 as $m$ tends to infinity, where
${\widehat{\mathbf{x}}}$ is the solution to the least squares model (\ref{eq:nonlinear}) with $K={\mathbb R}^d$ and $y_j=\abs{\nj{{\mathbf a}_j,{\mathbf x}_0}}$. This means that the generalized Lasso does not work for phase retrieval.
Hence, one has to employ the nonlinear Lasso (or nonlinear least squares) for solving phase retrieval. This is also our motivation for this project.
\subsection{Organization}
The paper is organized as follows. In Section 2, we introduce some notations and lemmas which are used in this paper. We provide the proofs of main results in Section 3.
\section{Preliminaries}
The aim of this section is to introduce some definitions and lemmas which play a key role in our paper.
\subsection{Gaussian width}
For a subset $T\subset {\mathbb R}^d$, the Gaussian width is defined as
\begin{equation*}
w(T):= {\mathbb E} \sup_{{\mathbf x}\in T} \jkh{g,{\mathbf x}} \quad \mathrm{where} \quad g \sim N(0,I_d).
\end{equation*}
The Gaussian width $w(T)$ is one of the basic geometric quantities associated with a subset $T\subset {\mathbb R}^d$ (see \cite{Vershynin2018}). We now give several examples of Gaussian widths. The first example is the Euclidean unit sphere $\mathbb{S}^{d-1}$, for which a simple calculation leads to
\begin{equation*}
w(\mathbb{S}^{d-1})=O(\sqrt{d}).
\end{equation*}
Another example is the unit $\ell_1$ ball $B_1^d$ in ${\mathbb R}^d$. It can be shown that (see e.g. \cite{Vershynin2018})
\begin{equation*}
w(B_1^d)=O(\sqrt{\log d}).
\end{equation*}
In this paper, we often use the following set
\begin{equation*}
K_{d,s}:=\dkh{{\mathbf x} \in {\mathbb R}^d: \norm{{\mathbf x}}\le 1, \quad \normone{{\mathbf x}}\le \sqrt{s}},
\end{equation*}
with the Gaussian width $w(K_{d,s})=O(\sqrt{s\log(ed/s)})$ (see e.g. \cite{Vershynin2018}).
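Each of these widths can be checked empirically, since $w(T)$ is an
expected supremum of a Gaussian process; the following sketch (with
hypothetical dimensions) uses the closed forms
$\sup_{{\mathbf x}\in \mathbb{S}^{d-1}} \jkh{g,{\mathbf x}}=\norm{g}$ and
$\sup_{{\mathbf x}\in B_1^d} \jkh{g,{\mathbf x}}=\norms{g}_\infty$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, trials = 1000, 200
G = rng.standard_normal((trials, d))
# w(S^{d-1}) = E ||g||, of order sqrt(d):
print(np.linalg.norm(G, axis=1).mean(), np.sqrt(d))
# w(B_1^d) = E max_i |g_i|, of order sqrt(log d):
print(np.abs(G).max(axis=1).mean(), np.sqrt(2 * np.log(d)))
\end{verbatim}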
\subsection{Gaussian Concentration Inequality}
\begin{lemma}\cite{Vershynin2018} \label{Gaussian space}
Consider a random vector $X\sim N(0,I_d)$ and a Lipschitz function $f:{\mathbb R}^d \to {\mathbb R}$ with constant $\norms{f}_{\mathrm{Lip}}$, i.e., $\abs{f(X)-f(Y)}\le \norms{f}_{\mathrm{Lip}}\cdot \norm{X-Y}$. Then for every $t\ge 0$, we have
\begin{equation*}
{\mathbb P}\dkh{\abs{f(X)-{\mathbb E} f(X)}\ge t}\le 2\exp\left(-\frac{ct^2}{\norms{f}_{\mathrm{Lip}}^2}\right).
\end{equation*}
\end{lemma}
\subsection{Strong RIP}
To study phaseless compressed sensing, Voroninski and Xu introduced the notion of the strong restricted isometry property (SRIP) (see \cite{Voroninski2016A}).
\begin{definition}\cite{Voroninski2016A}
The matrix $A\in {\mathbb R}^{m\times d}$ satisfies the Strong Restricted Isometry Property of order $s$ and constants $\theta_{-}, \theta_{+}\in (0,2)$ if the following holds
\begin{equation}\label{eq:strongrip}
\theta_{-}\norm{{\mathbf x}}^2 \le \min\limits_{I\subset [m],\abs{I}\ge m/2} \norm{A_I {\mathbf x}}^2\le \max\limits_{I\subset [m],\abs{I}\ge m/2} \norm{A_I {\mathbf x}}^2 \le \theta_{+}\norm{{\mathbf x}}^2
\end{equation}
for all ${\mathbf x}\in K_{d,s}$. Here, $A_I$ denotes the submatrix of $A$ where only {\em rows} with indices in $I$ are kept, $[m]:=\{1,\ldots,m\}$ and $\abs{I}$ denotes the cardinality of $I$.
\end{definition}
The following lemma shows that Gaussian random matrices satisfy SRIP with high probability for some non-zero universal constants $\theta_{-}, \theta_{+}>0$.
\begin{lemma}\cite[Theorem 2.1]{Voroninski2016A} \label{SRIP}
Suppose that $t>1$ and that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix with entries $a_{jk}\sim N(0,1)$. Let $m=O(tk\log(e d/k))$. Then there exist constants $\theta_{-}, \theta_{+}$ with $0<\theta_{-}< \theta_{+}<2$, independent of $t$, such that $A/\sqrt{m}$ satisfies the SRIP of order $t\cdot k$ with constants $\theta_{-}, \theta_{+}$ with probability at least $1-\exp(-cm/2)$, where $c>0$ is an absolute constant.
\end{lemma}
\begin{remark}\label{re:SRIP}
In \cite{Voroninski2016A}, the authors present the proof of Lemma \ref{SRIP} only for the case where ${\mathbf x}$ is $s$-sparse.
Note that the set $K_{d,s}$ has covering number $N(K_{d,s},\varepsilon)\le \exp(Cs \log (ed/s)/\varepsilon^2)$ \cite[Lemma 3.4]{Plan2013}. It is easy to extend the proof in \cite{Voroninski2016A} to the case where ${\mathbf x} \in K_{d,s}$.
\end{remark}
\section{Proof of the main results}
\subsection{Proof of Theorem \ref{result1}}
We begin with a simple lemma.
\begin{lemma} \label{upper bound}
Suppose that $m\ge d$. Let $A\in {\mathbb R}^{m\times d}$ be a Gaussian matrix whose entries are independent Gaussian random variables. Then the following holds with probability at least $1-2\exp(-cm)$
\begin{equation*}
\sup_{{\mathbf h}\in {\mathbb R}^d \atop \eta \in {\mathbb R}^m}\nj{{\mathbf h},A^\top \eta}\le 3\sqrt{m}\norm{{\mathbf h}}\norm{\eta}.
\end{equation*}
\end{lemma}
\begin{proof}
Since $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix, we have $\norm{A} \le 3\sqrt{m}$ with probability at least $1-2\exp(-c m)$ \cite[Theorem 7.3.3]{Vershynin2018}.
We obtain that
\begin{equation*}
\nj{{\mathbf h},A^\top \eta} \le \norm{{\mathbf h}} \norm{A^\top\eta} \le \norm{{\mathbf h}} \norm{A^\top} \norm{\eta}\le 3\sqrt{m}\norm{{\mathbf h}}\norm{\eta}
\end{equation*}
holds with probability at least $1-2\exp(-c m)$. We arrive at the conclusion.
\end{proof}
\begin{proof}[Proof of Theorem \ref{result1}]
Set ${\mathbf h}^{-}:={\widehat{\mathbf{x}}}-{\mathbf x}_0$ and ${\mathbf h}^{+}:={\widehat{\mathbf{x}}}+{\mathbf x}_0$. Since ${\widehat{\mathbf{x}}}$ is the solution of (\ref{square model}), we have
\begin{equation} \label{minsolution}
\left\||A{\widehat{\mathbf{x}}}|-{\mathbf y}\right\|^2\le \left\||A{\mathbf x}_0|-{\mathbf y}\right\|^2.
\end{equation}
For any index set $T\subset \{1,\ldots,m\}$, we let $A_T:=[{\mathbf a}_j:\;j\in T]^\top$ be the submatrix of $A$. Denote
\begin{eqnarray*}
&& T_1:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=1\right\} \\
&& T_2:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=-1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=-1\right\} \\
&& T_3:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=-1\right\} \\
&& T_4:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=-1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=1\right\}.
\end{eqnarray*}
Without loss of generality, we assume that $\# (T_{1}\cup T_2)=\beta m \ge m/2$ (otherwise, we can assume that $\# (T_{3}\cup T_4) \ge m/2$).
Then we have
\begin{equation*}
\left\||A{\widehat{\mathbf{x}}}|-{\mathbf y}\right\|^2\geq \norm{A_{T_1}{\mathbf h}^{-}-\eta_{T_1}}^2+\norm{A_{T_2}{\mathbf h}^{-}+\eta_{T_2}}^2.
\end{equation*}
Inequality (\ref{minsolution}) implies that
\[
\norm{A_{T_1}{\mathbf h}^{-}-\eta_{T_1}}^2+\norm{A_{T_2}{\mathbf h}^{-}+\eta_{T_2}}^2\leq \|\eta\|^2
\]
and hence
\begin{equation}\label{key eq}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2 \le 2\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}}+ \|\eta_{T_{12}^c}\|^2
\end{equation}
where $T_{12}:=T_1 \cup T_2$. Lemma \ref{SRIP} implies that
\begin{equation}\label{lower bound}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2 \ge cm \norm{{\mathbf h}^{-}}^2
\end{equation}
holds with probability at least $1-\exp(-c_0 m)$. On the other hand, Lemma \ref{upper bound} states that with probability at least $1-2\exp(-cm)$ the following holds:
\begin{eqnarray}\label{first term}
\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}} &\le & 6\sqrt{m}\norm{{\mathbf h}^{-}}\norm{\eta}.
\end{eqnarray}
Putting (\ref{lower bound}) and (\ref{first term}) into (\ref{key eq}), we obtain
\begin{equation}\label{eq:bu}
cm \norm{{\mathbf h}^{-}}^2\le 12\sqrt{m}\norm{{\mathbf h}^{-}}\norm{\eta} + \|\eta_{T_{12}^c}\|_2^2
\end{equation}
with probability at least $1-3\exp(-c_1 m)$, which implies that
\begin{equation*}
\norm{{\mathbf h}^{-}} \lesssim \frac{\norm{\eta}}{\sqrt{m}}.
\end{equation*}
For the case where $\# (T_3\cup T_4)\geq m/2$, we can obtain
\begin{equation*}
\norm{{\mathbf h}^{+}} \lesssim \frac{\norm{\eta}}{\sqrt{m}}
\end{equation*}
by a similar argument.
\end{proof}
\subsection{Proof of Theorem \ref{th:con_lower}}
To this end, we present the following lemmas.
\begin{lemma} \label{fixed-point condition}
Suppose that ${\widehat{\mathbf{x}}}$ is any solution of model (\ref{square model}). Then ${\widehat{\mathbf{x}}}$ satisfies the following fixed-point equation:
\begin{equation} \label{fixed point}
{\widehat{\mathbf{x}}}=(A^\top A)^{-1}A^\top ({\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}})),
\end{equation}
where $\odot$ denotes the Hadamard product and ${\mathrm s}(A{\widehat{\mathbf{x}}}):=
\left(\frac{\nj{{\mathbf a}_1,{\widehat{\mathbf{x}}}}}{\abs{\nj{{\mathbf a}_1,{\widehat{\mathbf{x}}}}}},\ldots,\frac{\nj{{\mathbf a}_m,{\widehat{\mathbf{x}}}}}{\abs{\nj{{\mathbf a}_m,{\widehat{\mathbf{x}}}}}}\right)$ for any ${\widehat{\mathbf{x}}}\in{\mathbb R}^d$. Here, $\frac{\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}}}{\abs{\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}}}}=1$ is adopted if $\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}}=0$.
\end{lemma}
\begin{proof}
Let
\[
L({\mathbf x})\,\,:=\,\,\norm{\abs{A {\mathbf x}}-{\mathbf y}}^2.
\]
Consider the smooth function
\begin{equation*}
G({\mathbf x},{\mathbf u})\,\,:=\,\,\norm{A {\mathbf x}-{\mathbf u}\odot{\mathbf y}}^2
\end{equation*}
with ${\mathbf x}\in {\mathbb R}^d$ and ${\mathbf u}\in U:=\{{\mathbf u}=(u_1,\ldots,u_m)\in {\mathbb R}^m: \abs{u_i}=1,\; i=1,\ldots,m\}$. Recall that $L({\mathbf x})$ has a global minimum at ${\widehat{\mathbf{x}}}$. Then $G({\mathbf x},{\mathbf u})$ has a global
minimum at $({\widehat{\mathbf{x}}},s(A{\widehat{\mathbf{x}}}))$. Indeed, if there exists $(\widetilde{{\mathbf x}},\widetilde{{\mathbf u}})$ such that $G(\widetilde{{\mathbf x}},\widetilde{{\mathbf u}})<G({\widehat{\mathbf{x}}},s(A{\widehat{\mathbf{x}}}))$, then
\begin{equation*}
L(\widetilde{{\mathbf x}})=\norm{\abs{A \widetilde{{\mathbf x}}}-{\mathbf y}}^2\le \norm{A \widetilde{{\mathbf x}}-\widetilde{{\mathbf u}}\odot{\mathbf y}}^2=G(\widetilde{{\mathbf x}},\widetilde{{\mathbf u}})<G({\widehat{\mathbf{x}}},s(A{\widehat{\mathbf{x}}}))=L({\widehat{\mathbf{x}}}).
\end{equation*}
This contradicts the assumption that $L({\mathbf x})$ has a global minimum at ${\widehat{\mathbf{x}}}$.
Thus we have
\begin{equation*}
G({\widehat{\mathbf{x}}},s(A{\widehat{\mathbf{x}}})) \le G({\mathbf x},s(A{\widehat{\mathbf{x}}})) \quad \text{for any} \quad {\mathbf x} \in {\mathbb R}^d,
\end{equation*}
i.e., the function $G({\mathbf x},{\mathrm s}(A{\widehat{\mathbf{x}}}))$ has a global minimum at ${\widehat{\mathbf{x}}}$. Here, we consider
$G({\mathbf x},{\mathrm s}(A{\widehat{\mathbf{x}}}))$ as a function about ${\mathbf x}$ since ${\mathrm s}(A{\widehat{\mathbf{x}}})$ is a fixed vector. Note that $G({\mathbf x},{\mathrm s}(A{\widehat{\mathbf{x}}}))$ is differentiable and
\begin{equation*}
\nabla G({\mathbf x},{\mathrm s}(A{\widehat{\mathbf{x}}}))=2A^\top(A {\mathbf x}-{\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}})).
\end{equation*}
Since $G({\mathbf x},{\mathrm s}(A{\widehat{\mathbf{x}}}))$ has a global minimum at ${\widehat{\mathbf{x}}}$, we have
\[
\nabla G({\widehat{\mathbf{x}}},{\mathrm s}(A{\widehat{\mathbf{x}}}))=2A^\top(A {\widehat{\mathbf{x}}}-{\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}}))=0
\]
which implies the conclusion.
\end{proof}
\begin{lemma} \label{Th:lower bound}
Let $m \gtrsim d$. Suppose that $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix whose entries are independent Gaussian random variables. For a fixed vector ${\mathbf x}_0\in {\mathbb R}^d$ and a fixed noise vector $\eta\in {\mathbb R}^m$, let ${\widehat{\mathbf{x}}}$ be the solution of model (\ref{square model}). For any fixed $\epsilon>0$, set
\[
\beta_\epsilon:=\left|\norm{{\mathbf x}_0}\cdot f(\theta)+\sqrt{2/\pi}\cdot \sum_{i=1}^m \eta_i/m \right|-(\norm{{\mathbf x}_0}+\norm{\eta}/\sqrt{m})\epsilon,
\]
where $f(\theta):=2/\pi\cdot (\sin \theta +(\pi/2-\theta)\cos \theta)-|\cos\theta|$ and $\theta$ is the angle between ${\widehat{\mathbf{x}}}$ and ${\mathbf x}_0$. Then the following holds with probability at least $1-6\exp(-c\epsilon^2 m)$:
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge \beta_\epsilon/9.
\end{equation*}
\end{lemma}
\begin{proof}
According to Lemma \ref{fixed-point condition}, we have
\begin{equation} \label{eq:fixed point}
{\widehat{\mathbf{x}}}=(A^\top A)^{-1}A^\top ({\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}})).
\end{equation}
Without loss of generality, we can assume $\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}\le \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}$, which implies that $0\le \theta\le \pi/2$. From (\ref{eq:fixed point}), we have
\begin{equation*}
{\widehat{\mathbf{x}}}-{\mathbf x}_0=(A^\top A)^{-1}A^\top ({\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}})-A{\mathbf x}_0),
\end{equation*}
which implies that
\begin{equation*}
\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}\ge \sigma_{\min} ((A^\top A)^{-1})\norm{A^\top ({\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}})-A{\mathbf x}_0)}\ge \frac{1}{9m} \norm{A^\top ({\mathbf y}\odot{\mathrm s}(A{\widehat{\mathbf{x}}})-A{\mathbf x}_0)}.
\end{equation*}
Here, we use the fact that $\norm{A} \le 3\sqrt{m}$ holds with probability at least $1-2\exp(-c m)$ \cite[Theorem 7.3.3]{Vershynin2018} since $A\in {\mathbb R}^{m\times d}$ is a Gaussian random matrix.
Without loss of generality, we can assume ${\widehat{\mathbf{x}}}\neq 0$. Indeed, (\ref{eq:fixed point}) implies $A^\top {\mathbf y}=0$ provided ${\widehat{\mathbf{x}}}= 0$, which gives that ${\mathbf x}_0=0$ and $\eta=0$. Thus our conclusion holds.
By the unitary invariance of Gaussian random vectors, we can take ${\widehat{\mathbf{x}}}=\norm{{\widehat{\mathbf{x}}}}{\mathbf e}_1$ and ${\mathbf x}_0=\norm{{\mathbf x}_0}(\cos \theta\cdot {\mathbf e}_1+\sin \theta \cdot {\mathbf e}_2)$, where $\theta$ is the angle between ${\widehat{\mathbf{x}}}$ and ${\mathbf x}_0$. Thus,
\begin{equation*}
\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}\ge \frac{1}{9m} \norm{A^\top ({\mathbf y}\odot{\mathrm s}(A\mathbf{e}_1)-A{\mathbf x}_0)}=\frac{1}{9m} \norm{{\mathbf z}},
\end{equation*}
where ${\mathbf z}:=(z_1,\ldots,z_d)^\top:=A^\top ({\mathbf y}\odot{\mathrm s}(A\mathbf{e}_1)-A{\mathbf x}_0)$.
Note that the first entry of ${\mathbf z}$ is
\begin{equation*}
z_1=\sum_{i=1}^m \xkh{\abs{a_{i,1}}(\abs{a_i^\top {\mathbf x}_0}+\eta_i)-a_{i,1}\cdot a_i^\top {\mathbf x}_0}.
\end{equation*}
This implies that
\begin{equation}\label{two part sum}
\begin{aligned}
\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0} \ge \frac{\abs{z_1}}{9m}= &\left|\norm{{\mathbf x}_0}\cdot\frac{1}{9m}\sum_{i=1}^m \big|a_{i,1}(a_{i,1}\cos\theta+a_{i,2}\sin\theta)\big|+\frac{1}{9m}\sum_{i=1}^m\eta_i\abs{a_{i,1}} \right. \\
& \left. - \norm{{\mathbf x}_0}\cdot\frac{1}{9m}\sum_{i=1}^m a_{i,1}(a_{i,1}\cos\theta+a_{i,2}\sin\theta)\right| \\
=\,\,&\left|\frac{\norm{{\mathbf x}_0}}{9m}\sum_{i=1}^m(\abs{\xi_i}-\xi_i)
+\frac{1}{9m}\sum_{i=1}^m\eta_i\abs{a_{i,1}}\right|,
\end{aligned}
\end{equation}
where $\xi_i:=a_{i,1}(a_{i,1}\cos\theta+a_{i,2}\sin\theta)$. It is clear that $\xi_i$ is a subexponential random variable with ${\mathbb E} \xi_i =\cos \theta$. We claim that ${\mathbb E} |\xi_i|= 2/\pi\cdot (\sin \theta +(\pi/2-\theta)\cos \theta)$. Then Bernstein's inequality implies that, for any fixed $\epsilon>0$,
\begin{equation} \label{The first part}
\left|\frac{1}{m}\sum_{i=1}^m (|\xi_i|-\xi_i)- \frac{2}{\pi}\cdot (\sin \theta +(\frac{\pi}{2}-\theta)\cos \theta)+\cos\theta \right| \le \epsilon
\end{equation}
holds with probability at least $1-2\exp(-c\epsilon^2 m)$. We next consider
$ \frac{1}{m}\sum_{i=1}^m\eta_i\abs{a_{i,1}}$.
Note that ${\mathbb E} \abs{a_{i,1}}=\sqrt{2/\pi}$.
Then by Hoeffding's inequality we can obtain that
\begin{equation} \label{The second part}
\left|\frac{1}{m}\sum_{i=1}^m\eta_i\abs{a_{i,1}}-\sqrt{\frac{2}{\pi}}\cdot \frac{1}{m}\sum_{i=1}^m\eta_i\right| \le \frac{\norm{\eta}}{\sqrt{m}}\epsilon
\end{equation}
holds with probability at least $1-2\exp(-c\epsilon^2 m)$ for any $\epsilon>0$. Substituting (\ref{The first part}) and (\ref{The second part}) into (\ref{two part sum}), we obtain that
\begin{equation} \label{result}
\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}\ge \frac{1}{9}\cdot \left(\left|\norm{{\mathbf x}_0}f(\theta)+\sqrt{\frac{2}{\pi}}\cdot \frac{1}{m}\sum_{i=1}^m\eta_i\right|-
\left(\norm{{\mathbf x}_0}+\frac{\norm{\eta}}{\sqrt{m}}\right)\epsilon\right)
\end{equation}
holds with probability at least $1-6\exp(-c\epsilon^2 m)$. Thus we arrive at the conclusion.
It remains to argue that ${\mathbb E} |\xi_i|= 2/\pi\cdot (\sin \theta +(\pi/2-\theta)\cos \theta)$.
By spherical coordinates integral,
\begin{eqnarray*}
{\mathbb E} |\xi_i|= {\mathbb E} \big|a_{i,1}(a_{i,1} \cos \theta +a_{i,2} \sin \theta) \big| &=& \frac{1}{2\pi}\int_0^{2\pi}\int_0^\infty r^3 e^{-r^2/2} |\cos \phi \cos(\theta-\phi)| dr d\phi\\
&=& \frac{1}{2\pi}\int_0^{2\pi} |\cos \theta + \cos(2\phi-\theta)|d\phi\\
&=& \frac{1}{\pi}\int_0^{\pi} |\cos \theta + \cos \phi|d\phi \\
&=& \frac{2}{\pi} \left(\sin \theta +(\pi/2-\theta)\cos \theta\right)
\end{eqnarray*}
where we use the identity $2\cos \phi \cos (\theta-\phi)=\cos \theta +\cos(2\phi-\theta)$ in the second line.
\end{proof}
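The closed form for ${\mathbb E}|\xi_i|$ can be verified numerically;
the following sketch compares the empirical mean with the expression
above for one (arbitrarily chosen) angle:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7                    # an arbitrary angle in [0, pi/2]
a = rng.standard_normal((10 ** 6, 2))
xi = a[:, 0] * (a[:, 0] * np.cos(theta) + a[:, 1] * np.sin(theta))
empirical = np.abs(xi).mean()
closed_form = 2 / np.pi * (np.sin(theta)
                           + (np.pi / 2 - theta) * np.cos(theta))
print(empirical, closed_form)  # the two values should be close
\end{verbatim}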
\begin{proof}[Proof of Theorem \ref{th:con_lower}]
From Lemma \ref{Th:lower bound}, it is easy to prove that (\ref{eq:constant_lower}) holds for ${\mathbf x}_0=0$.
Then it suffices to prove the theorem for ${\mathbf x}_0\neq 0$.
Since $\norm{\eta}/\sqrt{m} \le \delta_1$ with $\delta_1> 0$, there exists an $\epsilon_0>0$ such that
$$(\norm{{\mathbf x}_0}+\norm{\eta}/\sqrt{m})\epsilon_0 \le \delta_0/2.$$
Set
\[
\overline{\eta}:=\sqrt{2/\pi}\cdot \sum_{i=1}^m \eta_i/m,
\]
and
\begin{equation*}
f(\theta):=2/\pi\cdot (\sin \theta +(\pi/2-\theta)\cos \theta)-|\cos\theta|,\quad 0\le \theta \le \pi.
\end{equation*}
Note that $f(\theta)$ is a monotonically increasing function for $\theta \in [0, \pi/2]$.
Choosing $\epsilon=\epsilon_0$ in Lemma \ref{Th:lower bound}, with probability at least $1-6\exp(-c\epsilon_0^2 m)$, we have
\begin{equation} \label{eq:basic lower bound}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge \big(\big|\norm{{\mathbf x}_0}\cdot f(\theta_0) + \overline{\eta}\big|-\delta_0/2\big) /9,
\end{equation}
where $\theta_0$ is the angle between ${\widehat{\mathbf{x}}}$ and ${\mathbf x}_0$. Without loss of generality, we can assume $ 0\le \theta_0 \le \pi/2$ and hence $f(\theta_0) \ge f(0)=0$.
Noting $\abs{\overline{\eta}}\geq \delta_0$, we divide the rest of the proof into three cases.
{\em Case 1:} $\overline{\eta}\ge \delta_0$.
In this case, (\ref{eq:basic lower bound}) implies that
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge \big(\overline{\eta}-\delta_0/2\big) /9 \ge \delta_0/18
\end{equation*}
holds with probability at least $1-6\exp(-c\epsilon_0^2 m)$.
{\em Case 2:} $\overline{\eta}\le -\delta_0$ and $\abs{\overline{\eta}}\le \norm{{\mathbf x}_0}\cdot f(\theta_0)$.
In this case, we have $f(\theta_0)\ge \delta_0/\norm{{\mathbf x}_0}$.
Since the function $f(\theta)$ is monotone on $[0,\pi/2]$, we have $\theta_0\ge \theta_1:=f^{-1}(\delta_0/\norm{{\mathbf x}_0})>0$, which implies that
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge \norm{{\mathbf x}_0}\sin\theta_1.
\end{equation*}
{\em Case 3:} $\overline{\eta}\le -\delta_0$ and $\abs{\overline{\eta}}> \norm{{\mathbf x}_0}\cdot f(\theta_0)$.
We claim that there exists a constant $c_{\delta_0,{\mathbf x}_0}$ such that the following holds
with probability at least $1-6\exp(-c\epsilon_0^2 m)$
\begin{equation} \label{eq:claim_three_part}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge c_{\delta_0,{\mathbf x}_0}
\end{equation}
where $c_{\delta_0,{\mathbf x}_0}$ only depends on $\delta_0$ and $\norm{{\mathbf x}_0}$. Indeed, if $\abs{\overline{\eta}}-\norm{{\mathbf x}_0}f(\theta_0)\ge 3/4\cdot \abs{\overline{\eta}}$, then (\ref{eq:basic lower bound}) implies
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge \big(\abs{\overline{\eta}}-\norm{{\mathbf x}_0}f(\theta_0)-\delta_0/2\big) /9\ge \delta_0/36.
\end{equation*}
If $\abs{\overline{\eta}}-\norm{{\mathbf x}_0}f(\theta_0)< 3/4\cdot \abs{\overline{\eta}}$, then $f(\theta_0)\ge \delta_0/(4\norm{{\mathbf x}_0})$,
which similarly gives
\begin{equation*}
\min \left\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0}, \norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\right\} \ge \norm{{\mathbf x}_0}\cdot \sin\theta_2,
\end{equation*}
where $\theta_2:=f^{-1}(\delta_0/(4\norm{{\mathbf x}_0}))>0$. Choosing $c_{\delta_0,{\mathbf x}_0}:=\min\{\delta_0/36,\norm{{\mathbf x}_0}\sin\theta_2\}$, we arrive at the conclusion.
\end{proof}
\subsection{Proof of Theorem \ref{th:sparse constrain}}
We first extend Lemma \ref{upper bound} to sparse case.
\begin{lemma} \label{upper bound sparse cons}
For any fixed $s>0$, let $m \gtrsim s\log (e d/s)$. Suppose that $A\in {\mathbb R}^{m\times d}$ is a Gaussian matrix whose entries are independent Gaussian random variables. Set
\begin{equation*}
K_{d,s}\,\,:=\,\,\dkh{{\mathbf x} \in {\mathbb R}^d: \|{\mathbf x}\|_2\leq 1, \normone{{\mathbf x}}\le \sqrt{s}}.
\end{equation*}
Then for any fixed $\eta \in {\mathbb R}^m$, the following holds with probability at least $1-2\exp(-cm)$
\begin{equation}
\sup_{{\mathbf h}\in K_{d,s} \atop T\subset \{1,\ldots,m\}}\nj{{\mathbf h},A^\top \eta_{T}} \lesssim \sqrt{m}\cdot \norm{\eta}\cdot \norm{{\mathbf h}},
\end{equation}
where $\eta_T$ denotes the vector whose entries indexed by $T$ coincide with those of $\eta$ and whose remaining entries are zero.
\end{lemma}
\begin{proof}
For any fixed $T\subset \{1,\ldots,m\}$, we have
\begin{equation*}
{\mathbb E} \sup_{{\mathbf h}\in K_{d,s}}\nj{{\mathbf h},A^\top \eta_{T}} =\norm{\eta_{T}}\cdot w(K_{d,s})\le C\sqrt{s\log (ed/s)}\norm{\eta}\le C\sqrt{m}\norm{\eta},
\end{equation*}
where the first inequality follows from the Gaussian width bound $w(K_{d,s}) \le C\sqrt{s\log (ed/s)}$ and the second inequality follows from $m \gtrsim s\log (ed/s)$. We next use Lemma \ref{Gaussian space} to give a tail bound for $\sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A^\top \eta_{T}}$. To this end, we set
\[
f(A):= \sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A^\top \eta_{T}}.
\]
We next show that $f(A)$ is a Lipschitz function on ${\mathbb R}^{m\times d}$ with Lipschitz constant at most $\norm{\eta}$. Indeed, for any matrices $A_1,A_2 \in {\mathbb R}^{m\times d}$, it holds that
\begin{equation*}
\Big|\sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A_1^\top \eta_{T}}-\sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A_2^\top \eta_{T}}\Big| \le \Big|\sup_{{\mathbf h}\in K_{d,s}} \nj{(A_1-A_2){\mathbf h},\eta_{T}}\Big|\le \norm{\eta}\norms{A_1-A_2}_F.
\end{equation*}
Then Lemma \ref{Gaussian space} implies that
\begin{equation}\label{eq:budeng}
{\mathbb P} \dkh{\sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A^\top \eta_{T}}\ge {\mathbb E} \sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A^\top \eta_{T}}+t} \le 2\exp\Big(-\frac{ct^2}{\norm{\eta}^2}\Big).
\end{equation}
Suppose that $C_1>0$ is a constant satisfying $ C_1^2\cdot c>1$.
Choosing $t=C_1\sqrt{m}\norm{\eta}$ in (\ref{eq:budeng}), we obtain that the following holds
with probability at least $1-2\exp(-c \cdot C_1^2\cdot m)$
\begin{equation*}
\sup_{{\mathbf h}\in K_{d,s}} \nj{{\mathbf h},A^\top \eta_{T}} \le C_0\sqrt{m}\norm{\eta}
\end{equation*}
for any fixed $T\subset \{1,\ldots,m\}$, where $C_0:=C+C_1$.
Finally, note that the number of subsets $T\subset \{1,\ldots,m\}$ is $2^m$. Taking a union bound over all these sets gives
\begin{equation*}
\sup_{{\mathbf h} \in K_{d,s} \atop T\subset \{1,\ldots,m\}}\nj{{\mathbf h},A^\top \eta_{T}} \le C_0\sqrt{m}\norm{\eta}
\end{equation*}
with probability at least $1-2\exp(-\tilde{c}m)$, where $\tilde{c}:=c\cdot C_1^2-\log 2$; indeed, $2^m\cdot 2\exp(-c\cdot C_1^2\cdot m)=2\exp(-\tilde{c}m)$, and $\tilde{c}>0$ because $C_1^2\cdot c>1>\log 2$. We arrive at the conclusion.
\end{proof}
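The scaling in Lemma \ref{upper bound sparse cons} can be checked numerically. The following Python sketch (an illustration on our part, not used in the proof) draws random $s$-sparse unit vectors ${\mathbf h}\in K_{d,s}$ and evaluates the supremum over $T$ in closed form: for fixed ${\mathbf h}$, the optimal $T$ keeps exactly the coordinates where $(A{\mathbf h})_i\eta_i>0$. Restricting to sparse test vectors only gives a lower bound on the supremum over $K_{d,s}$, but the printed ratios to $\sqrt{m}\norm{\eta}$ should remain bounded as $m$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, s, n_trials = 2000, 10, 200

for m in [200, 800, 3200]:
    A = rng.standard_normal((m, d))
    eta = rng.standard_normal(m)
    best = 0.0
    for _ in range(n_trials):
        # random s-sparse unit vector: ||h||_2 <= 1 and ||h||_1 <= sqrt(s)
        h = np.zeros(d)
        h[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
        h /= np.linalg.norm(h)
        # for fixed h, the best T keeps coordinates with (Ah)_i * eta_i > 0
        v = (A @ h) * eta
        best = max(best, v[v > 0].sum())
    print(m, best / (np.sqrt(m) * np.linalg.norm(eta)))
\end{verbatim}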
\begin{proof}[Proof of Theorem \ref{th:sparse constrain}]
Set ${\mathbf h}^{-}:={\widehat{\mathbf{x}}}-{\mathbf x}_0,\; {\mathbf h}^{+}:={\widehat{\mathbf{x}}}+{\mathbf x}_0$ and set
\begin{eqnarray*}
&& T_1:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=1\right\} \\
&& T_2:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=-1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=-1\right\} \\
&& T_3:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=-1\right\} \\
&& T_4:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=-1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=1\right\}.
\end{eqnarray*}
Without loss of generality, we can assume that $\# (T_{1}\cup T_2)=\beta m \ge m/2$; here and below we write $T_{12}:=T_1\cup T_2$.
Using an argument similar to that for (\ref{key eq}), we obtain that
\begin{equation} \label{key inequality for sparse}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2 \le 2\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}}+ \|\eta_{T_{12}^c}\|^2.
\end{equation}
Before bounding the two sides of (\ref{key inequality for sparse}), we first show that $\normone{{\mathbf h}^{-}}\le 2\sqrt{s}\norm{{\mathbf h}^{-}}$. Indeed, let $S:={\rm supp}({\mathbf x}_0)$ and note that
\begin{equation*}
\normone{{\widehat{\mathbf{x}}}}=\normone{{\mathbf x}_0+{\mathbf h}^{-}}=\normone{{\mathbf x}_0+{\mathbf h}_{S}^{-}}+\normone{{\mathbf h}_{S^c}^{-}}\ge \normone{{\mathbf x}_0}-\normone{{\mathbf h}_{S}^{-}}+\normone{{\mathbf h}_{S^c}^{-}}.
\end{equation*}
Here ${\mathbf h}_{S}^{-}$ denotes the restriction of the vector ${\mathbf h}^{-}$ onto the set of coordinates $S$. Then the constraint condition $\normone{{\widehat{\mathbf{x}}}}\le R:=\normone{{\mathbf x}_0}$ implies that $\normone{{\mathbf h}_{S^c}^{-}}\le \normone{{\mathbf h}_{S}^{-}}$. Using H\"older's inequality, we obtain that
\begin{equation*}
\normone{{\mathbf h}^{-}}\,=\,\normone{{\mathbf h}_{S}^{-}}+\normone{{\mathbf h}_{S^c}^{-}}\, \le\, 2\normone{{\mathbf h}_{S}^{-}}\,\le\, 2\sqrt{s}\norm{{\mathbf h}^{-}}.
\end{equation*}
We next give a lower bound for the left-hand side of inequality (\ref{key inequality for sparse}). Set
\begin{equation*}
K:=\dkh{{\mathbf h} \in {\mathbb R}^d: \norm{{\mathbf h}}\le 1, \; \normone{{\mathbf h}}\le 2\sqrt{s}}.
\end{equation*}
Note that ${\mathbf h}^{-}/\norm{{\mathbf h}^{-}}\in K$.
Since $A/\sqrt{m}$ satisfies strong RIP (see Lemma \ref{SRIP}), we obtain that
\begin{equation}\label{lower bound sparse cons}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2\,\, \ge\,\, cm \norm{{\mathbf h}^{-}}^2
\end{equation}
holds with probability at least $1-\exp(-c_0 m)$, provided $m\gtrsim s\log (ed/s)$.
On the other hand, Lemma \ref{upper bound sparse cons} implies that
\begin{equation}\label{up:bo}
\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}}\,\, \le\,\, 2C\sqrt{m}\norm{\eta}\norm{{\mathbf h}^{-}}
\end{equation}
holds with probability at least $1-2\exp(-c_0m)$.
Putting (\ref{up:bo}) and (\ref{lower bound sparse cons}) into (\ref{key inequality for sparse}), we obtain that
\begin{equation}\label{eq:hmineq}
cm \norm{{\mathbf h}^{-}}^2\le 4C\sqrt{m}\norm{\eta}\norm{{\mathbf h}^-}+ \|\eta_{T_{12}^c}\|^2
\end{equation}
holds with probability at least $1-3\exp(-c_0 m)$. Inequality (\ref{eq:hmineq}) then implies that
\[
\norm{{\mathbf h}^{-}} \lesssim \frac{\norm{\eta}}{\sqrt{m}}.
\]
Similarly, if $\# (T_3\cup T_4)\geq m/2$, we can obtain that
\[
\norm{{\mathbf h}^{+}} \lesssim \frac{\norm{\eta}}{\sqrt{m}}.
\]
\end{proof}
\subsection{Proof of Theorem \ref{th:sparse}}
We first introduce the following lemma.
\begin{lemma} \label{upper bound sparse uncons}
Let $A\in {\mathbb R}^{m\times d}$ be a Gaussian matrix whose entries are independent standard Gaussian random variables and $\eta \in {\mathbb R}^m$ be a fixed vector.
Then the following holds with probability at least $1-1/d^2$
\begin{equation} \label{constant c}
\sup_{{\mathbf h}\in {\mathbb R}^d \atop T\subset \{1,\ldots,m\}}\nj{{\mathbf h},A^\top \eta_{T}}\lesssim ( \normone{\eta}+\norm{\eta}\sqrt{\log d})\normone{{\mathbf h}},
\end{equation}
where $\eta_T$ denotes the vector that agrees with $\eta$ on the entries indexed by $T$ and is zero elsewhere.
\end{lemma}
\begin{proof}
By applying H\"older's inequality with $\ell_1$ and $\ell_\infty$ norms, we have
\begin{equation*}
\nj{{\mathbf h},A^\top \eta_{T}} \le \norms{A^\top \eta_{T}}_\infty\cdot \normone{{\mathbf h}}.
\end{equation*}
Thus it is sufficient to present an upper bound for $\sup_{T\subset \{1,\ldots,m\}} \norms{A^\top \eta_{T}}_\infty$.
We use ${\tilde{\mathbf{a}}}_j\in {\mathbb R}^{m}, j=1,\ldots,d,$ to denote the {\em column} vectors of $A$.
Then for any fixed index $j$ and $t>0$, we have
\begin{equation*}
{\mathbb P}\xkh{\sup_{T\subset \{1,\ldots,m\}} \abs{{\tilde{\mathbf{a}}}_j^\top \eta_{T}}>t}\le {\mathbb P}\xkh{\sum_{i=1}^m \abs{\eta_{i}}\abs{{\tilde{\mathbf{a}}}_{j,i}} >t}.
\end{equation*}
A simple calculation shows that ${\mathbb E} \abs{\eta_i}\abs{{\tilde{\mathbf{a}}}_{j,i}}=\sqrt{2/\pi}\abs{\eta_i}$. By Hoeffding's inequality, we obtain that
\begin{equation}\label{eq:pror}
{\mathbb P}\xkh{\sum_{i=1}^m \abs{\eta_{i}}\abs{{\tilde{\mathbf{a}}}_{j,i}} > C\Big(\normone{\eta}+\norm{\eta}\sqrt{\log d}\Big)} \le \frac{1}{d^3}
\end{equation}
holds for some constant $C>0$.
Taking a union bound over all indices $j\in \{1,\ldots,d\}$, (\ref{eq:pror}) implies
\begin{equation*}
\sup_{T\subset \{1,\ldots,m\}} \norms{A^\top \eta_{T}}_\infty \lesssim \normone{\eta}+\norm{\eta}\sqrt{\log d}
\end{equation*}
with probability at least $1-1/d^2$. Thus, we arrive at the conclusion.
\end{proof}
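For fixed $j$, the supremum over $T$ of $\abs{{\tilde{\mathbf{a}}}_j^\top \eta_{T}}$ equals the larger of the sums of the positive and of the negative entries of $(\eta_i\tilde{a}_{j,i})_i$, so the quantity controlled by Lemma \ref{upper bound sparse uncons} can be evaluated exactly when ${\mathbf h}$ ranges over the unit $\ell_1$ ball. The following Python sketch (illustrative only) compares it with $\normone{\eta}+\norm{\eta}\sqrt{\log d}$; the printed ratios should stay bounded as $d$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = 500
for d in [100, 1000, 10000]:
    A = rng.standard_normal((m, d))
    eta = rng.standard_normal(m)
    X = A * eta[:, None]               # X[i, j] = eta_i * a~_{j, i}
    pos = np.clip(X, 0, None).sum(axis=0)
    neg = np.clip(-X, 0, None).sum(axis=0)
    sup = np.maximum(pos, neg).max()   # sup over ||h||_1 <= 1 and T
    bound = np.abs(eta).sum() + np.linalg.norm(eta) * np.sqrt(np.log(d))
    print(d, sup / bound)
\end{verbatim}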
\begin{proof}[Proof of Theorem \ref{th:sparse}]
Set ${\mathbf h}^{-}:={\widehat{\mathbf{x}}}-{\mathbf x}_0$ and ${\mathbf h}^{+}:={\widehat{\mathbf{x}}}+{\mathbf x}_0$. Without loss of generality, we assume that $\normone{{\mathbf h}^{-}}\le \normone{{\mathbf h}^{+}}$. Since ${\widehat{\mathbf{x}}}$ is the solution of (\ref{square model with sparse}), we have
\begin{equation} \label{minsolution sparse}
\left\||A{\widehat{\mathbf{x}}}|-{\mathbf y}\right\|^2+\lambda\norms{{\widehat{\mathbf{x}}}}_1 \le \left\||A{\mathbf x}_0|-{\mathbf y}\right\|^2+\lambda\norms{{\mathbf x}_0}_1 =\left\|\eta\right\|_2^2+\lambda\norms{{\mathbf x}_0}_1.
\end{equation}
For any index set $T\subset \{1,\ldots,m\}$, we set $A_T:=[{\mathbf a}_j:\;j\in T]^\top$ which is a submatrix of $A$. Set
\begin{eqnarray*}
&& T_1:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=1\right\} \\
&& T_2:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=-1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=-1\right\} \\
&& T_3:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=-1\right\} \\
&& T_4:=\left\{j: \mathrm{sign}(\nj{{\mathbf a}_j,{\widehat{\mathbf{x}}}})=-1,\; \mathrm{sign}(\nj{{\mathbf a}_j,{\mathbf x}_0})=1\right\}.
\end{eqnarray*}
Then a simple calculation leads to
\begin{equation} \label{Ax-y}
\left\||A{\widehat{\mathbf{x}}}|-{\mathbf y}\right\|^2=\norm{A_{T_1}{\mathbf h}^{-}-\eta_{T_1}}^2+\norm{A_{T_2}{\mathbf h}^{-}+\eta_{T_2}}^2+\norm{A_{T_3}{\mathbf h}^{+}-\eta_{T_3}}^2+\norm{A_{T_4}{\mathbf h}^{+}+\eta_{T_4}}^2.
\end{equation}
Substituting (\ref{Ax-y}) into (\ref{minsolution sparse}), we obtain that
\begin{eqnarray} \label{key eq sparse}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2+\norm{A_{T_{34}}{\mathbf h}^{+}}^2 &\le & 2\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}}+2\nj{{\mathbf h}^{+},A_{T_3}^\top \eta_{T_3}-A_{T_4}^\top \eta_{T_4}} \nonumber \\
& & + \lambda(\norms{{\mathbf x}_0}_1-\norms{{\mathbf h}^{+}-{\mathbf x}_0}_1),
\end{eqnarray}
where $T_{12}:=T_1 \cup T_2$ and $T_{34}:=T_3 \cup T_4$. We claim that $\normone{{\mathbf h}^{-}} \le 4\sqrt{s}\norm{{\mathbf h}^{-}}$ and $\normone{{\mathbf h}^{+}} \le 4\sqrt{s}\norm{{\mathbf h}^{+}}$ hold with high probability. Indeed, let $S:={\rm supp}({\mathbf x}_0)\subset \{1,\ldots,d\}$. Then
\begin{equation}\label{normoneh}
\norms{{\mathbf h}^{+}-{\mathbf x}_0}_1 =\normone{{\mathbf h}_S^{+}-{\mathbf x}_0}+\normone{{\mathbf h}_{S^c}^{+}} \ge \normone{{\mathbf x}_0}-\normone{{\mathbf h}_S^{+}}+\normone{{\mathbf h}_{S^c}^{+}},
\end{equation}
where the inequality follows from the triangle inequality. According to Lemma \ref{upper bound sparse uncons}, we obtain that
\begin{equation} \label{eq:up_lemma}
\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}}\le \frac{\lambda}{8}\normone{{\mathbf h}^{-}} \quad \mathrm{and}\quad \nj{{\mathbf h}^{+},A_{T_3}^\top \eta_{T_3}-A_{T_4}^\top \eta_{T_4}}\le \frac{\lambda}{8} \normone{{\mathbf h}^{+}}
\end{equation}
holds with probability at least $1-1/d^2$, provided $\lambda \gtrsim \normone{\eta}+\norm{\eta}\sqrt{\log d}$. Putting (\ref{normoneh}) and (\ref{eq:up_lemma}) into (\ref{key eq sparse}) and using the fact $\normone{{\mathbf h}^{-}}\le \normone{{\mathbf h}^{+}}$, we obtain that
\begin{equation}\label{eq:AT12H}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2+\norm{A_{T_{34}}{\mathbf h}^{+}}^2 \le \frac{\lambda}{2} \normone{{\mathbf h}^{+}}+ \lambda(\normone{{\mathbf h}_S^{+}}-\normone{{\mathbf h}_{S^c}^{+}})
\end{equation}
holds with probability at least $1-1/d^2$. Since the left-hand side of (\ref{eq:AT12H}) is nonnegative, it follows that
\begin{equation*}
\frac{\lambda}{2} \normone{{\mathbf h}^{+}}+ \lambda(\normone{{\mathbf h}_S^{+}}-\normone{{\mathbf h}_{S^c}^{+}})\ge 0,
\end{equation*}
which gives $\normone{{\mathbf h}_{S^c}^{+}} \le 3\normone{{\mathbf h}_S^{+}}$ and hence
$\normone{{\mathbf h}^{+}} \le 4\normone{{\mathbf h}_S^{+}}$. By the H\"older's inequality, we obtain that
\[
\normone{{\mathbf h}^{+}} \le 4\sqrt{s}\norm{{\mathbf h}^{+}}.
\]
On the other hand, note that
\begin{equation*}
\normone{{\mathbf h}_S^{+}}=\normone{{\widehat{\mathbf{x}}}_S+{\mathbf x}_0}, \quad \normone{{\mathbf h}_S^{-}}=\normone{{\widehat{\mathbf{x}}}_S-{\mathbf x}_0}\quad \mathrm{and} \quad \normone{{\mathbf h}_{S^c}^{+}}=\normone{{\mathbf h}_{S^c}^{-}}.
\end{equation*}
Combining this with $\normone{{\mathbf h}^{-}}\le \normone{{\mathbf h}^{+}}$, we obtain that $\normone{{\mathbf h}^{-}} \le 4\sqrt{s}\norm{{\mathbf h}^{-}}$.
We next present an upper bound for $\norm{{\mathbf h}^{-}}$. Without loss of generality, we assume that $\# T_{12}=\beta m \ge m/2$. Equation (\ref{Ax-y}) implies that
\begin{equation}\label{eq:jin}
\left\||A{\widehat{\mathbf{x}}}|-{\mathbf y}\right\|^2\ge \norm{A_{T_1}{\mathbf h}^{-}-\eta_{T_1}}^2+\norm{A_{T_2}{\mathbf h}^{-}+\eta_{T_2}}^2.
\end{equation}
Substituting (\ref{eq:jin}) into (\ref{minsolution sparse}), we obtain that
\begin{equation}\label{key}
\begin{aligned}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2 &\le 2\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}} + \lambda(\norms{{\mathbf x}_0}_1-\norms{{\mathbf h}^{-}+{\mathbf x}_0}_1)+ \|\eta_{T_{12}^c}\|^2\\
&\le 2\nj{{\mathbf h}^{-},A_{T_1}^\top \eta_{T_1}-A_{T_2}^\top \eta_{T_2}} + \lambda(\normone{{\mathbf h}_S^{-}}-\normone{{\mathbf h}_{S^c}^{-}})+ \|\eta_{T_{12}^c}\|^2.
\end{aligned}
\end{equation}
Here, we use
\begin{equation*}
\norms{{\mathbf h}^{-}+{\mathbf x}_0}_1 =\normone{{\mathbf h}_S^{-}+{\mathbf x}_0}+\normone{{\mathbf h}_{S^c}^{-}} \ge \normone{{\mathbf x}_0}-\normone{{\mathbf h}_S^{-}}+\normone{{\mathbf h}_{S^c}^{-}}.
\end{equation*}
We first consider the left-hand side of (\ref{key}).
Recall that $\normone{{\mathbf h}^{-}} \le 4\sqrt{s}\norm{{\mathbf h}^{-}}$. Then
\begin{equation}\label{lower bound sparse uncons}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2 \ge cm \norm{{\mathbf h}^{-}}^2
\end{equation}
with probability at least $1-\exp(-c_0 m)$, provided $m \gtrsim s\log (ed/s)$
(see Remark \ref{re:SRIP}).
For the right-hand side of (\ref{key}), we use (\ref{eq:up_lemma}) to obtain that
\begin{equation}\label{sparse:upper bound}
\begin{aligned}
\norm{A_{T_{12}}{\mathbf h}^{-}}^2 &\le \frac{\lambda}{4}\normone{{\mathbf h}^{-}}+\lambda(\normone{{\mathbf h}_S^{-}}-\normone{{\mathbf h}_{S^c}^{-}})+ \|\eta_{T_{12}^c}\|^2 \\
&\le \frac{5\lambda}{4}\normone{{\mathbf h}_S^{-}}+ \|\eta_{T_{12}^c}\|^2 \\
&\le \frac{5\lambda\sqrt{s}}{4}\norm{{\mathbf h}^{-}}+ \|\eta_{T_{12}^c}\|^2
\end{aligned}
\end{equation}
holds with probability at least $1-1/d^2$. Combining (\ref{lower bound sparse uncons}) and (\ref{sparse:upper bound}), we have
\begin{equation*}
cm \norm{{\mathbf h}^{-}}^2\le \frac{5\lambda\sqrt{s}}{4}\norm{{\mathbf h}^{-}}+ \|\eta_{T_{12}^c}\|^2
\end{equation*}
with probability at least $1-\exp(-c_0 m)-1/d^2$. By solving the above inequality, we arrive at the conclusion
\begin{eqnarray*}
\norm{{\mathbf h}^{-}} &\lesssim & \frac{\lambda\sqrt{s}}{m}+\frac{\norm{\eta}}{\sqrt{m}}.
\end{eqnarray*}
\end{proof}
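Although model (\ref{square model with sparse}) is nonconvex, the rate $\lambda\sqrt{s}/m+\norm{\eta}/\sqrt{m}$ of Theorem \ref{th:sparse} can be observed empirically. The following Python sketch is purely illustrative: it runs a proximal gradient iteration on $\left\||A{\mathbf x}|-{\mathbf y}\right\|^2+\lambda\normone{{\mathbf x}}$, initialized near the true signal to sidestep the nonconvexity, and compares $\min\{\norm{{\widehat{\mathbf{x}}}-{\mathbf x}_0},\norm{{\widehat{\mathbf{x}}}+{\mathbf x}_0}\}$ with the bound.
\begin{verbatim}
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
d, s, m = 400, 5, 200
x0 = np.zeros(d)
x0[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, d))
eta = 0.05 * rng.standard_normal(m)
y = np.abs(A @ x0) + eta
lam = 2 * (np.abs(eta).sum() + np.linalg.norm(eta) * np.sqrt(np.log(d)))

x = x0 + 0.1 * rng.standard_normal(d)  # warm start: problem is nonconvex
step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
for _ in range(2000):
    r = A @ x
    grad = 2 * A.T @ (r - np.sign(r) * y)  # gradient of || |Ax| - y ||^2
    x = soft_threshold(x - step * grad, step * lam)

err = min(np.linalg.norm(x - x0), np.linalg.norm(x + x0))
print(err, lam * np.sqrt(s) / m + np.linalg.norm(eta) / np.sqrt(m))
\end{verbatim}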
\section{Discussion}
We have analyzed the estimation performance of nonlinear least squares for phase retrieval. We showed that the reconstruction error of the nonlinear least squares model is $O(\|\eta\|_2/\sqrt{m})$, and we proved that this recovery bound is optimal up to a constant. For sparse phase retrieval, we obtained similar results for the nonlinear Lasso.
It is of interest to extend the results in this paper to complex signals. Moreover, assume that
$y_i=f(\abs{\nj{{\mathbf a}_i,{\mathbf x}_0}})+\eta_i,\; i=1,\ldots,m$, where $f:\mathbb{R}\to \mathbb{R}$ is a continuous function. It would also be interesting to study the recovery error of the model $\min_{\mathbf x} \|f(\abs{A{\mathbf x}})-{\mathbf y}\|$ in this setting, which is the subject of our future work.
\label{sec:introduction}
In this paper, we present the results from the High Energy Spectroscopic System
(H.E.S.S.)\ Galactic plane survey (HGPS), the deepest and most comprehensive survey
of the inner Milky Way Galaxy undertaken so far in very high-energy (VHE; $0.1
\la E \la 100$~TeV) $\gamma$-rays. Results include numerous sky images (maps) and a
new source catalog that is the successor of two previous HGPS releases. The
first release \citep{2005Sci...307.1938A} was based on $\sim$140~h of
observations with the imaging atmospheric Cherenkov telescope (IACT) array
H.E.S.S.\ and contained eight previously unknown sources of VHE $\gamma$-rays. In the
second release \citep{ref:gps2006}, we used 230~h of data, covering
$\ell=330\degr$ to $30\degr$ in Galactic longitude and $|b|\leq{}3\degr$ in
latitude. In total, we detected 22 sources of $\gamma$-rays\ in that data set.
Since then, the HGPS data set has grown by more than one order of magnitude
observation time, now comprising roughly $2700\,\mathrm{h}$\ of high-quality
data recorded in the years 2004 -- 2013. The spatial coverage is also
significantly larger, now encompassing the region from $\ell=250\degr$ to
$65\degr$ in longitude. H.E.S.S.\ provided periodic updates on this progress by
publishing new unidentified sources~\citep{ref_gps_unids2008} and through
conference proceedings \citep{2008ICRC....2..579H, Chaves08_GPS, ref:icrc09,
ref:icrc11, Deil12_GPS, Carrigan13a_GPS, Carrigan13b_GPS}.
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_region_exposure_illustration}
\caption[HGPS region, flux, exposure illustration in all-sky context]{
Illustration of the HGPS region superimposed on an all-sky image of
\emph{Planck} CO(1-0) data \citep{Planck15} in Galactic coordinates and
Hammer-Aitoff projection. For comparison, we overlay the HEGRA Galactic plane
survey \citep{ref:hegrasurvey} and VERITAS Cygnus survey \citep{Weinstein:2009}
footprints. Triangles denote the \emph{Fermi}-LAT\ 2FHL $\gamma$-ray\ sources \citep{2FHL}
identified as Galactic, and stars indicate the 15 Galactic VHE $\gamma$-ray\ sources
outside the HGPS region. H.E.S.S.\ has detected three of these, which are labeled
SN~1006 \citep{2010A&A...516A..62A}, the Crab Nebula \citep{2006A&A...457..899A,
2014A&A...562L...4H}, and HESS~J0632$+$057 \citep{2007A&A...469L...1A,
2014ApJ...780..168A}. The gray shaded regions denote the part of the sky that
cannot be observed from the H.E.S.S.\ site at reasonable zenith angles (less than
60$\degr$). The lower panels show the HGPS $\gamma$-ray\ flux above 1~TeV for
regions where the sensitivity is better than 10\%~Crab (correlation radius
$R_{\mathrm{c}} = 0.4\degr$; see Sect.~\ref{sec:maps}) and observation time,
both also in Galactic coordinates. The white contours in the lower panels
delineate the boundaries of the survey region; the HGPS has little or no
exposure beyond Galactic latitudes of $|b|\leq{}3\degr$ at most locations along
the Galactic plane.
}
\label{fig:hgps_region_exposure_illustration}
\end{figure*}
Compared to the first HGPS releases over a decade ago, the deeper exposure over
a much larger sky area of the Galaxy, combined with improved $\gamma$-ray\
reconstruction, analysis, and modeling techniques, now results in a new catalog
containing 78 VHE $\gamma$-ray\ sources.
Figure~\ref{fig:hgps_region_exposure_illustration} illustrates the HGPS region
and compares this region to the structure of the Galaxy, represented by an
all-sky \emph{Planck} CO(1-0) map, and the smaller regions of previous surveys
performed by the IACT arrays HEGRA \citep[High-Energy-Gamma-Ray
Astronomy,][]{ref:hegrasurvey} and VERITAS \citep[Very Energetic Radiation
Imaging Telescope Array System,][]{Weinstein:2009}. Even though the HGPS covers
only a few percent of the entire sky, this region contains the vast majority of
the known Galactic \emph{Fermi}-LAT\ 2FHL $\gamma$-ray\ sources \citep{2FHL}\footnote{In this
paper, we compare the HGPS with the \emph{Fermi}-LAT\ 2FHL catalog, but not with 3FHL
\citep{2017arXiv170200664T} or the HAWC 2HWC catalog
\citep{2017ApJ...843...40A}, which were not published at the time this paper was
written and which already contain comparisons with Galactic H.E.S.S.\ sources.}.
The figure also shows the measured integral VHE $\gamma$-ray\ flux and the HGPS
observation times. As can be seen from the map of observation times
(Fig.~\ref{fig:hgps_region_exposure_illustration}, lower panel), the HGPS data
set is not homogeneous. Nonetheless, the HGPS features on average a point-source
sensitivity better than 1.5\%~Crab\footnote{Throughout this paper, and as is
generally the case in VHE $\gamma$-ray\ astronomy, we use the Crab Nebula flux as a
standard candle reference: 1~Crab unit is defined here as
$\Phi\left(>1\mathrm{TeV}\right) = 2.26 \cdot 10^{-11}$ cm$^{-2}$ s$^{-1}$
\citep{ref:hesscrab}.} in the core survey region within 60\degr\ in longitude of
the Galactic center~(see Fig.~\ref{fig:hgps_sensitivity}, lower panel).
In this paper, we aim to present the entire data set of the HGPS in a way that is
accessible and useful for the whole astronomical community. We have made the
maps of VHE $\gamma$-ray\ significance, flux, upper limits, and sensitivity
available online\footnote{\url{https://www.mpi-hd.mpg.de/hfm/HESS/hgps}} for the first time in \emph{FITS} format
\citep{Pence:2010}. We developed a semi-automatic analysis pipeline to construct
a catalog by detecting and modeling discrete sources of VHE $\gamma$-ray\ emission
present in these survey maps. We applied a standardized methodology to the
characterization of the $\gamma$-ray\ sources to measure their
morphological and spectral properties. The goal was to perform a robust analysis
of sources in the survey region with as little manual intervention as possible.
With such a generic approach, the catalog pipeline is not optimal for the few
very bright and extended sources with complex (non-Gaussian) morphology. For
these sources, dedicated analyses are more appropriate, and in all cases, they
have already been performed and published elsewhere. We therefore exclude these
sources, which are listed in Table~\ref{tab:hgps_external_sources} below, from the
pipeline analysis but include the results from the dedicated analysis in the
HGPS catalog for completeness.
We have structured the present paper as follows: we describe the H.E.S.S.\
telescope array, the data set, and the analysis techniques in
Sect.~\ref{sec:dataset}. We provide the maps of the VHE $\gamma$-ray\ sky in
various representations and details of their production in Sect.~\ref{sec:maps}.
Section~\ref{sec:cc} explains how the HGPS catalog of $\gamma$-ray\ sources was
constructed, then Sect.~\ref{sec:results} presents and discusses the results,
including source associations and identifications with other astronomical
objects. Section~\ref{sec:conclusions_outlook} concludes the main paper with a
summary of the HGPS and its results. In Sect.~\ref{sec:online}, we describe the
supplementary online material (maps and catalog in electronic form), including
caveats concerning measurements derived from the maps and catalog.
\section{Data set}
\label{sec:dataset}
\subsection{The High Energy Stereoscopic System (H.E.S.S.)}
\label{sec:dataset:hess}
H.E.S.S.\ is an array of five IACTs located at an altitude of 1800~m above sea
level in the Khomas highland of Namibia. It detects Cherenkov light emitted by
charged particles in an electromagnetic extensive air shower (EAS) initiated
when a primary photon ($\gamma$-ray) of sufficient energy enters Earth's
atmosphere. This array consists of four smaller telescopes, built and operated in the
first phase of the experiment (H.E.S.S.\ \textit{Phase I}) and a fifth much larger
telescope, which was added to the center of the array in 2012 to launch the
second phase (H.E.S.S.\ \textit{Phase II}) of the experiment.
H.E.S.S.\ accumulated the data presented here exclusively with the H.E.S.S.\ array
during its first phase. These four H.E.S.S.~\textit{Phase I} telescopes have
tessellated mirrors with a total area of 107~m$^2$ and cameras consisting of 960
photomultipliers. The energy threshold of the four-telescope array is roughly
200~GeV at zenith and increases with increasing zenith angle. We can reconstruct
the arrival direction and energy of the primary photon with accuracies of
$\sim$$0.08\degr$\ and $\sim$15\%, respectively. Because of its comparatively
large field of view (FoV), 5\degr\ in diameter, the H.E.S.S.~\textit{Phase I} array
is well suited for survey operations. The relative acceptance for $\gamma$-rays\ is
roughly uniform for the innermost 2\degr\ of the FoV and gradually drops toward
the edges to 40\% of the peak value at 4\degr\ diameter \citep{ref:hesscrab}.
\subsection{Observations, quality selection, and survey region}
\label{sec:dataset:selection}
The HGPS data set covers the period from January 2004 to January 2013. H.E.S.S.\
acquired this data set by pointing the IACT array to a given position in the sky
for a nominal duration of 28~min\ (referred to as an observation run
hereafter). We considered all runs with zenith angles up to 65\degr\ and
observation positions centered in the Galactic coordinate range $\ell =
244.5\degr$ to $77.5\degr$ and $|b|<7.0\degr$. To reduce systematic effects
arising from imperfect instrument or atmospheric conditions, we carefully
selected good-quality runs as close as possible to the nominal description of
the instrument used in the Monte Carlo (MC) simulations
\citep[see][]{ref:hesscrab}. For example, the IACT cameras suffer from
occasional hardware problems affecting individual or groups of camera pixels, so
we did not use observation runs with significant pixel problems. In addition, we
only used those runs with at least three operational telescopes.
Furthermore, despite the very good weather conditions at the H.E.S.S.\ site, both
nightly and seasonal variations of the atmospheric transparency occur and
require monitoring. Layers of dust or haze in the atmosphere effectively act as
a filter of the Cherenkov light created in an EAS, thereby raising the energy
threshold for triggering the IACTs. Since we calculated the instrument response
tables describing the performance of the instrument (e.g., the effective areas)
with MC simulations, deviations from the atmospheric conditions assumed in the
simulations lead to systematic uncertainties in the determination of energy
thresholds, reconstructed energies, and $\gamma$-ray\ fluxes. To account for this,
we applied a further quality cut using only observations where the Cherenkov
transparency coefficient~$T$~\citep{2014APh....54...25H}, which characterizes
the atmospheric conditions, falls within the range $0.8<T<1.2$ (for clear skies,
$T=1$).
After applying the aforementioned data quality selection cuts, 6239~observation
runs remain, $\sim$77\% of which are runs with four telescopes operational. The
total observation time is 2864~h, corresponding to a total livetime of 2673~h
(6.7\% average dead time). The third panel of
Fig.~\ref{fig:hgps_region_exposure_illustration} is a map of the observation
time over the survey region, clearly showing a non-uniform exposure. This is a
result of the HGPS observation strategy, summarized as follows:
\begin{itemize}
\item Dedicated survey observations, taken with a typical spacing between
pointings of $0.7\degr$ in longitude and in different latitude bands located
between $b=-1.8\degr$ and $b=1\degr$. In addition, for the longitude bands
$\ell = 355\degr$ to $5\degr$ and $\ell = 38\degr$ to $48\degr$, we extended
the survey observations in latitude, adding observation pointings from
$b=-3.5\degr$ to $b=3.5\degr$ to explore the possibility of high-latitude
emission.
\item Deeper follow-up observations of source candidates (``hot spots'') seen in
previous survey observations.
\item Exploratory and follow-up observations of astrophysical objects located
inside the survey region that were promising candidates for emitting VHE
$\gamma$-rays.
\item Observations to extend the HGPS spatial coverage and fill-up
observations to achieve a more uniform sensitivity across the Galactic plane.
\end{itemize}
Combining all of these observations, we achieved a more uniform, minimum
2\%~Crab flux sensitivity in the region from $\ell=283\degr$ to $58\degr$ and
$b=-0.3\degr\,\pm\,0.7\degr$ (see the sensitivity map in
Fig.~\ref{fig:hgps_sensitivity}).
\subsection{Event reconstruction and selection}
\label{sec:dataset:events}
We first converted the camera pixel data to amplitudes measured in units of
photoelectrons (p.e.), identifying the non-operational pixels for a given
observation following the procedures described by \citet{ref:hesscalib}. We
then applied standard H.E.S.S.\ techniques for the analysis of the camera images:
image cleaning, Hillas moment analysis, and the stereo reconstruction of the
direction of the primary photon, described by \citet{ref:hesscrab}. To suppress
the hadronic background and select photon candidate events, we used a
multivariate machine learning technique using boosted decision trees based on
EAS and image shape parameters \citep{ref:tmva}. For the generation of the
survey maps (Sect.~\ref{sec:maps}), we applied the hard cuts configuration
whereas for the extraction of source spectra (Sect.~\ref{sec:results}) we used
the standard cuts. The most important distinguishing cut is a minimum of
160~p.e. for hard cuts and 60~p.e. for standard cuts, but there are other
differences. See \citet{ref:tmva} for further information; specifically, we used
the $\zeta$ analysis cuts listed in Table~2(a) for the HGPS.
We cross-checked the results presented in this paper with an alternative
calibration, reconstruction, and gamma-hadron separation method based on a
semi-analytical description of the EAS development \citep{2009APh....32..231D}
with hard cuts of 120~p.e. for maps and standard cuts of 60~p.e. for spectra.
For the energy reconstruction of the primary photons, we compared the image
amplitudes in the cameras to the mean amplitudes found in MC simulations of the
array \citep{ref:simulations}. Those simulations, which were analyzed with the
same chain as the real data for the sake of consistency, include the detailed
optical and electronic response of the instrument. The range of optical
efficiencies encountered in the HGPS data set is large; efficiencies start at
100\% of the nominal value and drop to almost 50\% for some telescopes prior to
the mirror refurbishments conducted in 2009--2011. Therefore, we produced
several sets of MC simulations, each with optical efficiencies of the four
telescopes corresponding to their states at suitably chosen times: at the start
of H.E.S.S.\ operations; at the point when efficiencies had dropped to $\sim$70\%,
before the first mirror refurbishment campaign; and after the mirror
refurbishment of each telescope. We then chose the set of simulations most
closely matching the state of the system at a given time. Finally, we corrected
the remaining difference between simulated and actual optical efficiencies using
a calibration technique based on the intensity of ring-shaped images from
individual muons producing Cherenkov radiation above a telescope
\citep{ref:muonsbolz, ref:muonsleroy}.
\section{HGPS sky maps}
\label{sec:maps}
In this section, we describe the methods used to produce the HGPS sky maps. We
used the sky maps as the basis for subsequent construction of the HGPS source
catalog; this catalog is also a data product that we release to the community
along with this work.
We first computed sky maps for each individual observation run. We then summed
these maps over all observations. We chose to use a Cartesian projection in
Galactic coordinates, covering the region from $\ell=250\degr$ to $70\degr$ and
$b=\pm5\degr$, and we set the pixel size to $0.02\degr$~/~pixel.
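For reference, this map geometry can be reproduced with Astropy as in the following Python sketch; the reference pixel below is illustrative (chosen to place $\ell=70\degr$, $b=-5\degr$ at one corner) and is not necessarily identical to the header of the released \emph{FITS} files.
\begin{verbatim}
from astropy.wcs import WCS

wcs = WCS(naxis=2)
wcs.wcs.ctype = ["GLON-CAR", "GLAT-CAR"]   # Cartesian projection
wcs.wcs.crval = [0.0, 0.0]                 # reference point (l, b) = (0, 0)
wcs.wcs.cdelt = [-0.02, 0.02]              # 0.02 deg/pixel, l increases left
wcs.wcs.crpix = [70.0 / 0.02 + 1, 5.0 / 0.02 + 1]  # illustrative choice

print(wcs.wcs_world2pix([[0.0, 0.0]], 1))  # -> [[3501., 251.]]
\end{verbatim}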
In Sect.~\ref{sec:events_map}, we describe the production of the map containing
the detected events (events map). In Sect.~\ref{sec:background_estimation}, we
describe the map of expected background events (acceptance map,
Sect.~\ref{sec:acceptance_map}), the estimation of a refined background map by
introducing exclusion regions (Sect.~\ref{sec:exclusion_regions}), and the usage
of the adaptive ring background method (Sect.~\ref{sec:adaptiveringmethod}). We
then continue in Sect.~\ref{sec:significancemaps} by describing the computation
of the significance map, and, in Sect.~\ref{sec:highlevelmaps}, the exposure map
(Sect.~\ref{sec:exposure_map}), which is used to derive quantities such as flux
(Sect.~\ref{sec:fluxmaps}), flux error and upper limits
(Sect.~\ref{sec:errfluxmap}), and sensitivities (Sect.~\ref{sec:sensmaps}).
\subsection{Events map}
\label{sec:events_map}
The events map consists of the reconstructed positions of the primary $\gamma$-ray\
photons from all events in the sky. To avoid systematic effects near the edge of
the FoV in each observation run, we only include events for which the direction
of the primary photon is reconstructed within $2\degr$ of the center of the FoV.
This choice results in an effective analysis FoV of $4\degr$ diameter.
At the lowest energies, the energy reconstruction is biased by EASs with upward
fluctuations in the amount of detected Cherenkov light; downward fluctuations do
not trigger the cameras. In order to derive reliable flux maps (see
Sect.~\ref{sec:fluxmaps}), we only kept events with an energy reconstructed
above a defined safe energy threshold. We chose the level of this safe threshold
such that, for each run, the energy bias as determined by MC simulations is
below 10\% across the entire FoV. This conservative approach (together with the
use of hard analysis cuts defined in Sect.~\ref{sec:dataset:events}) leads to
energy threshold values ranging from $\sim$400~GeV, where the array observed
close to zenith, up to 2~TeV at 65$\degr$ from zenith.
Figure~\ref{fig:hgps_energy_threshold_profiles} plots the variation of the safe
energy threshold with Galactic longitude, showing the energy threshold for each
observation together with the minimum value for each longitude. The variations
observed are mainly due to the zenith angle dependency; regions at different
Galactic longitudes are generally observable at different zenith angles.
\subsection{Background estimation}
\label{sec:background_estimation}
Events passing the event reconstruction and selection procedure are considered
$\gamma$-ray\ candidate events. Since these events are still dominantly from EASs
induced by $\gamma$-ray-like cosmic rays and electrons or positrons, we estimated the
amount of remaining background events on a statistical basis using a ring
model~\citep{ref:bgmodeling} as detailed further below. For each test position,
we counted the photon candidates found in a suitable ring-shaped region around
that position in the same FoV. This yields an estimate of the background level
after proper normalization and after excluding regions with actual $\gamma$-ray\
emission from the background estimate.
\subsubsection{Acceptance map}
\label{sec:acceptance_map}
The acceptance map represents the number of expected events from cosmic-ray
backgrounds estimated from runs of sky regions at similar zenith angles but
without VHE $\gamma$-ray\ sources. As for the events map (see
Sect.~\ref{sec:events_map}), we computed the acceptance map for energies above
the safe energy threshold. To account for the differences in optical efficiency
and observation time between these runs and those under analysis, we normalized
the acceptance map such that, outside the exclusion regions (see
Sect.~\ref{sec:exclusion_regions}), the number of expected counts matches the
number of measured counts. The acceptance maps are used to derive the
normalization coefficient between the region of interest and the background
region (see Sect.~\ref{sec:significancemaps}).
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_energy_threshold_profiles}
\caption[HGPS survey energy threshold]{
HGPS minimum safe energy threshold as a function of Galactic longitude for a
latitude of $b=0\degr$. The blue curve shows the minimum threshold for hard cuts
(used for maps), and the green curve indicates standard cuts (used for spectra).
The black dots represent the safe threshold for each observation run obtained
for the hard cuts configuration. The few black dots below the blue line
correspond to runs at Galactic latitude $|b| > 2\degr$.
}
\label{fig:hgps_energy_threshold_profiles}
\end{figure*}
\subsubsection{Exclusion regions}
\label{sec:exclusion_regions}
The background estimation method described above only works if regions with VHE
$\gamma$-ray\ emission are excluded from the background estimation region. We
defined exclusion regions automatically using an iterative algorithm to avoid
potential observer bias and to treat the entire data set in a uniform way. The
procedure starts with the significance maps (see
Sect.~\ref{sec:significancemaps}) produced for the two standard correlation
radii $R_{\mathrm{c}} = 0.1\degr$ and $0.2\degr$. These radii define the
circular region over which a quantity (e.g., $\gamma$-ray\ excess) is integrated.
The procedure identifies regions above $5\sigma$ and expands them by excluding
an additional $0.3\degr$ beyond the $5\sigma$ contour. This procedure is
conservative; it minimizes the amount of surrounding signal that could
potentially contaminate the background estimation. A first estimation of the
exclusion regions is then included in the significance map production and a new
set of exclusion regions is derived. We iterated this procedure until stable
regions are obtained, which typically occurs after three iterations. The
resulting regions are shown in Fig.~\ref{fig:catalog:rois} below.
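One step of this iteration can be sketched in Python as follows; \verb|compute_significance| is a placeholder for the ring-background significance map of Sect.~\ref{sec:significancemaps}, and the pixel size corresponds to the $0.02\degr$ map grid.
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_dilation

def grow_exclusion(significance, threshold=5.0, margin=0.3, pix=0.02):
    """Exclude pixels above `threshold` sigma and grow the regions
    by `margin` degrees (one iteration of the procedure)."""
    n = int(round(margin / pix))
    y, x = np.ogrid[-n:n + 1, -n:n + 1]
    disk = x ** 2 + y ** 2 <= n ** 2          # disk-shaped kernel
    return binary_dilation(significance > threshold, structure=disk)

# Iterate until the mask is stable (typically ~3 iterations):
# exclusion = np.zeros(shape, dtype=bool)
# while True:
#     significance = compute_significance(exclusion)  # placeholder
#     new = grow_exclusion(significance)
#     if np.array_equal(new, exclusion):
#         break
#     exclusion = new
\end{verbatim}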
\subsubsection{Adaptive ring method}
\label{sec:adaptiveringmethod}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_map_background_estimation}}
\caption[Illustration of adaptive ring background estimation for maps]{
Illustration of the adaptive ring method for background estimation for a single
observation (see Sect.~\ref{sec:adaptiveringmethod}). The HGPS significance
image is shown in inverse grayscale and exclusion regions as blue contours. The
analysis FoV for one observation is shown as a black circle with 2\degr\ radius
and a black cross at the observation pointing position. The red rings illustrate
the regions in which the background is estimated for two positions in the FoV
(illustrated as red squares). Only regions in the ring inside the FoV and
outside exclusion regions are used for background estimation. For the position
in the lower right, the ring was adaptively enlarged to ensure an adequate
background estimate (see text).
}
\label{fig:hgps_map_background_estimation}
\end{figure}
In the HGPS, exclusion regions often cover a significant fraction of the FoV;
therefore, we could not use the standard ring background method
\citep{ref:bgmodeling}. For example, using a typical outer ring radius of
$\sim$0.8\degr\ would lead to numerous holes in the sky maps at positions where
the entire ring would be contained inside an exclusion region (i.e., where no
background estimation was possible). A much larger outer radius (e.g.,
$\sim$1.5\degr) would be necessary to prevent these holes but would lead to
unnecessarily large uncertainties in the background estimation in regions
without, or with small, exclusion regions where smaller ring radii are feasible.
To address the limitations of the standard method, we do not use a static ring
geometry but rather adaptively change the inner and outer ring radii, as
illustrated in Fig.~\ref{fig:hgps_map_background_estimation}, depending on the
exclusion regions present in a given FoV. For a given test position within a
FoV, we begin with a minimum inner ring radius of $0.7\degr$ and constant ring
thickness $0.44\degr$ and enlarge the inner radius if a large portion of the
ring area overlaps with exclusion regions. We do this until the acceptance
integrated in the ring (but outside exclusion regions) is more than four times the
acceptance integrated at the test position. A maximum outer radius of $1.7\degr$
avoids large uncertainties in the acceptance toward the edge of the FoV.
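In simplified form, the radius search at a single test position can be written as follows (Python; here, purely for illustration, the acceptance at the test position is integrated within the correlation circle $R_{\mathrm{c}}$):
\begin{verbatim}
import numpy as np

def adaptive_ring(acceptance, exclusion, ix, iy, r_c=0.1,
                  r_in=0.7, width=0.44, r_max=1.7, pix=0.02):
    """Enlarge the inner radius until the acceptance integrated in
    the usable part of the ring exceeds 4x the acceptance integrated
    in the correlation circle, or the outer radius reaches r_max."""
    yy, xx = np.indices(acceptance.shape)
    r = np.hypot(xx - ix, yy - iy) * pix      # distance in degrees
    target = 4.0 * acceptance[r < r_c].sum()
    usable = acceptance * ~exclusion          # zero inside exclusions
    while r_in + width < r_max:
        ring = (r >= r_in) & (r < r_in + width)
        if usable[ring].sum() >= target:
            break
        r_in += pix                           # enlarge in pixel steps
    return r_in, r_in + width
\end{verbatim}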
\subsection{Significance maps}
\label{sec:significancemaps}
We produced significance maps to determine the exclusion regions (see
Sect.~\ref{sec:exclusion_regions}). For each grid position $(\ell,b)$ in a
significance map, we counted the number of photon candidates $N_\mathrm{ON}$ in
the circular ON region, defined a priori by the correlation radius
$R_{\mathrm{c}}$. We determined the background level by counting the number of
photon candidates $N_\mathrm{OFF}$ in the ring centered at $(\ell,b)$. The
background normalization factor is $\alpha \equiv \xi_\mathrm{ON} /
\xi_\mathrm{OFF}$, where $\xi_\mathrm{ON}$ is the integral of the acceptance map
within $R_{\mathrm{c}}$ and $\xi_\mathrm{OFF}$ is the integral of the acceptance
map within the ring. The number of excess events $N_{\gamma}$ within
$R_{\mathrm{c}}$ is then
\begin{equation}
\label{eq:excess}
N_{\gamma}=N_\mathrm{ON}-\alpha{}N_\mathrm{OFF}.
\end{equation}
We computed the significance of this $\gamma$-ray\ excess according to
Eq.~17 of \cite{ref:lima} without correcting further for trials.
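For a single map position, this computation reduces to a few lines of Python (a direct transcription of Eq.~17 of \citealt{ref:lima}):
\begin{verbatim}
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983), signed with the gamma-ray excess."""
    t_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    t_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    excess = n_on - alpha * n_off
    return np.sign(excess) * np.sqrt(2 * (t_on + t_off))

print(li_ma_significance(n_on=125, n_off=1000, alpha=0.1))  # ~2.3
\end{verbatim}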
\subsection{High-level maps}
\label{sec:highlevelmaps}
We can derive additional high-level maps based on the measurement of
$N_{\gamma}$ within a given $R_{\mathrm{c}}$ and the instrument response
functions. In this work, we computed flux, flux error, sensitivity, and upper
limit maps, starting from the formula
\begin{equation}
\label{eq:flux}
F = \frac{N_{\gamma}}{N_{\mathrm{exp}}}
\,\int_{E_1}^{E_2}\phi_\mathrm{ref}(E)\,\mathrm{d}E,
\end{equation}
where $F$ is the integral flux computed between the energies $E_1$ and $E_2$,
$N_{\gamma}$ is the measured excess, and $N_{\mathrm{exp}}$ is the total
predicted number of excess events, also called exposure (see
Sect.~\ref{sec:exposure_map}).
\subsubsection{Exposure maps}
\label{sec:exposure_map}
The exposure $N_{\mathrm{exp}}$ in Eq.~\ref{eq:flux} is given by
\begin{equation}
\label{eq:expcounts}
N_{\mathrm{exp}} \equiv \mathcal{E} = \sum_\mathrm{R \in runs} T_\mathrm{R}\,\int_{E_\mathrm{min}}^\infty
\phi_\mathrm{ref}(E_\mathrm{r})\,A_\mathrm{eff}(E_\mathrm{r},q_\mathrm{R})\,\mathrm{d}E_\mathrm{r}.
\end{equation}
Here, $E_\mathrm{r}$ is the reconstructed energy, $T_\mathrm{R}$ is the
observation livetime, $q_\mathrm{R}$ symbolizes the observation parameters for a
specific run (zenith, off-axis, and azimuth angle; pattern of telescopes
participating in the run; and optical efficiencies); $A_\mathrm{eff}$ is the
effective area obtained from MC simulations, which is assumed constant during a
28~min\ run; and $E_\mathrm{min}$ is the safe threshold energy appropriate
for the observation (as described in Sect.~\ref{sec:events_map}). We computed
the quantity $N_{\mathrm{exp}}$ for each position in the sky to create the
expected $\gamma$-ray count map, also referred to as the exposure map
$\mathcal{E}$ in the following. The function $\phi_{\mathrm{ref}}(E)$ is the
reference differential photon number $\gamma$-ray source flux, assumed to be
following a power law (PL) with a predefined spectral index, i.e.,
\begin{equation}
\label{eq:refflux}
\phi_\mathrm{ref}(E)=\phi_0\,(E/E_0)^{-\Gamma}.
\end{equation}
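In code, Eqs.~\ref{eq:flux} and \ref{eq:refflux} amount to a simple rescaling of the measured excess, using the closed-form integral of the power law. The following Python sketch illustrates this; $\phi_0$ must be the same value used in the exposure computation, so that it cancels.
\begin{verbatim}
def integral_flux(n_gamma, n_exp, phi0=1.0, e0=1.0, e1=1.0, gamma=2.3):
    """F(>e1) = (N_gamma / N_exp) * int_{e1}^inf phi_ref(E) dE
    for phi_ref(E) = phi0 * (E / e0)**(-gamma); energies in TeV.
    N_exp must be computed with the same phi_ref, so the value of
    phi0 is arbitrary."""
    integral = phi0 * e0 / (gamma - 1.0) * (e1 / e0) ** (1.0 - gamma)
    return n_gamma / n_exp * integral
\end{verbatim}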
\subsubsection{Flux maps}
\label{sec:fluxmaps}
In Eq.~\ref{eq:flux}, the flux value $F$ is completely determined by the scaling
factor $N_{\gamma}/N_{\mathrm{exp}}$ once the spectral shape is fixed. We chose
to use $E_1=1\,\mathrm{TeV}$ and $E_2=\infty$. We stress that $E_1$ is not the
threshold energy used in the analysis, but the energy above which the integral
flux is given. In Eq.~\ref{eq:refflux}, one can choose the flux normalization
$\phi_0$ arbitrarily, since it cancels out in the computation of the flux. We
also chose the spectral index $\Gamma = 2.3$ in the released
maps to be compatible with the average index of known Galactic VHE $\gamma$-ray\
sources. To test the impact of this latter assumption, we performed tests that
show that, on average, flux variations are less than $5\%$ if the assumed
spectral index is varied by $\pm0.2$ (our systematic uncertainty of the spectral
index).
The released flux maps contain values of integral flux above $1$~TeV, calculated
according to Eq.~\ref{eq:flux}, in units of cm$^{-2}$~s$^{-1}$. This should be
interpreted as the flux of a potential source, assuming a spectrum
$\phi_\mathrm{ref}(E)$, that is centered on a given pixel position in the map
and fully enclosed within $R_{\mathrm{c}}$.
Figures~\ref{fig:hgps_region_exposure_illustration} and \ref{fig:fluxmap} show
two example flux maps computed with $R_{\mathrm{c}} = 0.4\degr$ and $0.1\degr$,
respectively. The maps contain nonzero values only in regions in which the
sensitivity is better than 2.5\% Crab to prevent very large (positive and
negative) values due to statistical fluctuations in low-exposure regions.
\subsubsection{Flux error and upper limit maps}
\label{sec:errfluxmap}
Statistical uncertainties on the flux were computed by replacing $N_{\gamma}$ in
Eq.~\ref{eq:flux} by $N_{\gamma}^\mathrm{\pm1\sigma}$, which are the upper and
lower boundaries of the measured excess for a 68\% confidence level. Those
errors were computed with a Poisson likelihood method described in
\citet{ref:rolke}, using the same $N_\mathrm{ON}$ and $N_\mathrm{OFF}$
integrated within the circle of radius $R_{\mathrm{c}}$ used when computing the
excess maps. The values reported in the flux-error maps are the average of the
upper and lower error bars.
Similarly, an upper-limit map can be calculated by replacing $N_{\gamma}$ in
Eq.~\ref{eq:flux} by $N_{\gamma}^\mathrm{UL}$, that is, the upper limit on the
excess found for a predefined confidence level of 95\%; we used the same profile
likelihood method as for the error bar.
\subsubsection{Sensitivity maps}
\label{sec:sensmaps}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_sensitivity}
\caption[Sensitivity image and longitude profile] {
HGPS point-source sensitivity map and profile along the Galactic plane at a
Galactic latitude $b = 0\degr$. The sensitivity is given in \%~Crab, for a
correlation radius $R_{\mathrm{c}} = 0.1\degr$, assuming a spectral index
$\Gamma = 2.3$. This sensitivity is computed under
the isolated point source assumption and is thus better than the actual
sensitivity achieved for the HGPS source catalog (see
Sect.~\ref{sec:cc:discussion}).
}
\label{fig:hgps_sensitivity}
\end{figure*}
The sensitivity is defined as the minimal flux needed for a source with the
assumed spectrum and fully contained within the correlation circle
$R_{\mathrm{c}}$ to be detected above the background at $5\sigma$ statistical
significance. Alternatively this can be thought of as a measure of
$\hat{N_{\gamma}}$, the number of photons needed to reach such a significance
level above the background determined by $N_\mathrm{OFF}$ and $\alpha$. To
compute the sensitivity map, $N_{\gamma}$ in Eq.~\ref{eq:flux} is replaced by
$\hat{N_{\gamma}}$, which is determined by numerically solving Eq.~17 of
\citet{ref:lima} for $N_\mathrm{ON}$ (related to $\hat{N_{\gamma}}$ by
Eq.~\ref{eq:excess} above). We note that possible background systematics are not
taken into account in this computation.
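The numerical inversion can be performed with a standard root finder; a minimal Python sketch for one map position, given its background estimate $(N_\mathrm{OFF}, \alpha)$, is shown below. The resulting $\hat{N_{\gamma}}$ is then converted to a flux sensitivity via Eq.~\ref{eq:flux}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def n_gamma_sensitivity(n_off, alpha, target=5.0):
    """Excess giving `target` sigma according to Li & Ma Eq. 17."""
    n_bkg = alpha * n_off

    def f(n_on):
        t_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
        t_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
        return np.sqrt(2 * (t_on + t_off)) - target

    hi = n_bkg + 100 * np.sqrt(n_bkg) + 100   # generous upper bracket
    return brentq(f, n_bkg * (1 + 1e-3), hi) - n_bkg

print(n_gamma_sensitivity(n_off=1000, alpha=0.1))
\end{verbatim}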
The point-source sensitivity level reached by H.E.S.S.\ at all points in the HGPS
data set is depicted in Fig.~\ref{fig:hgps_sensitivity}, where a projection of
the sensitivity map along Galactic longitude at a Galactic latitude of
$b=0\degr$ is also shown. It is typically at the level of 1\% to 2\% Crab. The
deepest observations were obtained around interesting objects for which
additional pointed observations were performed. Examples include the Galactic
center region (around $\ell=0\degr$, where the best sensitivity of $\sim 0.3$\%
Crab is reached), the Vela region ($\ell=266\degr$), the regions around
\object{HESS J1825$-$137} and \object{LS 5039} ($\ell=17\degr$), or around
\object{HESS J1303$-$631} and \object{PSR B1259$-$63} ($\ell=304\degr$).
Similarly, the sensitivity values along Galactic latitude for two values of
longitude are shown in Fig.~\ref{fig:hgps_sources_glat}. For most of the
surveyed region, the sensitivity decreases rapidly above $|b|>2\degr$ due to the
finite FoV of the H.E.S.S.\ array and the observation pattern taken, except for a
few regions, such as at $\ell=0\degr$ where high latitude observations were
performed (see Sect.~\ref{sec:dataset}). The best sensitivity is obtained around
$b=-0.3\degr$, reflecting the H.E.S.S.\ observation strategy; the latitude
distribution of the sources peaks in this region.
We note that the sensitivity shown in Fig.~\ref{fig:hgps_sensitivity} does not
correspond to the completeness of the HGPS source catalog. One major effect is
that the HGPS sensitivity is dependent on source size; it is less sensitive for
larger sources, as shown in Fig.~\ref{fig:hgps_sources_flux_extension} and
discussed at the end of Sect.~\ref{sec:results:distributions}. Other effects
that reduce the effective sensitivity or completeness limit of HGPS are
the detection threshold, which corresponds to $\sim 5.5\sigma$; the large-scale
emission model; and source confusion, as discussed in the following
Sect.~\ref{sec:cc}.
\section{HGPS source catalog}
\label{sec:cc}
\subsection{Introduction and overview}
\label{sec:cc:introduction}
The HGPS source catalog construction procedure is intended to improve upon previous
H.E.S.S.\ survey publications both in sensitivity and homogeneity of the analysis
performed. The previous iteration, the second H.E.S.S.\ survey paper of 2006
\citep{ref:gps2006}, used a 230 h data set with inhomogeneous exposure that was
limited to the innermost region of the Galaxy. This survey detected a total of
14 sources by locating peaks in significance maps on three different spatial
scales: $0.1\degr$, $0.22\degr$, and $0.4\degr$. It then modeled the sources by
fitting two-dimensional symmetric Gaussian morphological models to determine the
position, size, and flux of each source, using a Poissonian maximum-likelihood
method.
Since 2006, H.E.S.S.\ has increased its exposure tenfold and enlarged the survey
region more than twofold, while also improving the homogeneity of the exposure.
As illustrated in the upper panel of Fig.~\ref{fig:hgps_catalog_model}, the data
now show many regions of complex emission, for example, overlapping emission of
varying sizes and multiple sources with clearly non-Gaussian morphologies. Apart
from discrete emission, the Galactic plane also exhibits significant emission on
large spatial scales \citep{2014PhRvD..90l2007A}. For these reasons, we needed
to develop a more complex analysis procedure to construct a more realistic model
of the $\gamma$-ray\ emission in the entire survey region. Based on this model, we
compiled the HGPS source catalog.
We first introduce the maximum-likelihood method used for fitting the emission
properties (Sect.~\ref{sec:cc:ml}). Next, we describe the H.E.S.S.\ point spread
function (PSF; Sect.~\ref{sec:cc:maps:psf}) and the TS\ maps
(Sect.~\ref{sec:maps:ts}), which are two important elements in the analysis and
catalog construction. The procedure is then as follows:
\begin{enumerate}
\item Cut out the Galactic center (GC) region and shell-type supernova remnants
from the data set because of their complex morphologies
(Sect.~\ref{sec:cc:cutout_sources}).
\item Model the large-scale emission in the Galactic plane globally
(Sect.~\ref{sec:cc:large-scale-emission}).
\item Split the HGPS region into manageable regions of interest (ROIs)
(Sect.~\ref{sec:cc:maps:roi}).
\item Model the emission in each ROI as a superposition of components with
Gaussian morphologies (Sect.~\ref{sec:cc:components}).
\item Merge Gaussian components into astrophysical VHE $\gamma$-ray\ sources
(Sect.~\ref{sec:cc:component_classification}).
\item Determine the total flux, position, and size of each $\gamma$-ray\ source
(Sect.~\ref{sec:cc:source_characterization}).
\item Measure the spectrum of each source (Sect.~\ref{sec:cc:spectra}).
\item Associate the HGPS sources with previously published H.E.S.S.\ sources and
multiwavelength (MWL) catalogs of possible counterparts
(Sect.~\ref{sec:results:assoc_id}).
\end{enumerate}
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_catalog_model}
\caption
[Source catalog model construction illustration]{
Illustration of the catalog model construction in the region of
350\degr~to~328\degr\ in Galactic longitude. The upper panel shows the
$\gamma$-ray\ excess counts smoothed by the PSF, the middle panel the PSF-convolved
and smoothed excess model, and the lower panel the significance map of the
residuals for a point-like source hypothesis (given in $\mathrm{sign}
(\mathrm{Flux}) \sqrt{\mathrm{TS}}$). The middle panel shows examples of the
steps taken in the excess map modeling part of the source catalog procedure (see
Sect.~\ref{sec:cc} for details). It starts by cutting out shell-type supernova
remnants (SNRs; RX~J1713.7$-$3946 and the SNR candidate HESS~J1614$-$518 in this
region) and by assuming a fixed large-scale emission component. Then a
multi-Gaussian model was fitted with the significant components shown in the
middle panel as thin transparent circles. Some of these were discarded and are
not part of the emission attributed to HGPS catalog sources. White circles show
examples of single-component as well as multicomponent sources. For a complete
overview of all analysis regions (ROIs) and excluded sources, see
Fig.~\ref{fig:catalog:rois}.
}
\label{fig:hgps_catalog_model}
\end{figure*}
\subsection{Poisson maximum-likelihood morphology fitting}
\label{sec:cc:ml}
To detect and characterize sources and to model the large-scale emission
in the Galactic plane, we used a spatially binned likelihood analysis based on
the following generic model:
\begin{equation}
\label{eq:generic_model}
N_{\mathrm{Pred}} =
N_{\mathrm{Bkg}} + \mathrm{PSF} \ast \left( \mathcal{E} \cdot S \right)
,\end{equation}
where $N_{\mathrm{Pred}}$ represents the predicted number of counts,
$N_{\mathrm{Bkg}}$ the background model created with the adaptive ring method
(described in Sect.~\ref{sec:adaptiveringmethod}), $\mathcal{E}$ the exposure
map (see Eq.~\ref{eq:expcounts} in Sect.~\ref{sec:fluxmaps}), and $S$ a
two-dimensional parametric morphology model that we fit to the data.
Additionally, we took into account the angular resolution of H.E.S.S.\ by
convolving the flux model with a model of the PSF of the instrument.
Assuming Poisson statistics per bin, the maximum-likelihood fit then minimizes
the \textit{Cash} statistic \citep{1979ApJ...228..939C},
\begin{equation}
\mathcal{C} = 2 \sum_{i} \left(M_i - D_i \log M_{i} \right),
\label{eq:cash_statistic}
\end{equation}
where the sum is taken over all bins $i$, and $M_i$ (model) represents the
expected number of counts according to Eq.~\ref{eq:generic_model} and $D_i$
(data) the actual measured counts per bin.
To determine the statistical significance of a best-fit source model compared to
the background-only model, we use a likelihood ratio test with test statistic
TS. This is defined by the likelihood ratio or equivalently as the difference in
$\mathcal{C}$ between both hypotheses,
\begin{equation}
\mathrm{TS} = \mathcal{C}_0 - \mathcal{C}_S,
\label{eq:ts_definition}
\end{equation}
where $\mathcal{C}_0$ corresponds to the value of the \textit{Cash} statistic of
the background-only hypothesis and $\mathcal{C}_S$ the best-fit model that
includes the source emission.
For a large number of counts, according to Wilks' theorem \citep{Wilks38}, TS\
is asymptotically distributed as $\chi^2_N$, where $N$ is the number of free
parameters defining the flux model. In this limit, the statistical significance
corresponds approximately to $\mathrm{sign}(\mathrm{Flux}) \cdot
\sqrt{|\mathrm{TS}|}$, where the sign of the best-fit flux is needed to allow
for negative significance values in regions where the number of counts is
smaller than the background estimate (e.g.,~due to a statistical downward
fluctuation).
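Each pixel of a TS\ map (Sect.~\ref{sec:maps:ts}) involves exactly this comparison. A schematic Python version of the single-parameter amplitude fit (using a simple grid scan in place of the optimizer actually employed) reads:
\begin{verbatim}
import numpy as np

def cash(data, model):
    """Cash statistic; `model` must be strictly positive per bin."""
    return 2.0 * np.sum(model - data * np.log(model))

def ts_amplitude_fit(data, background, template, amplitudes):
    """Fit the amplitude of a PSF-convolved, exposure-weighted
    template on top of the background; return the best amplitude
    and the signed sqrt(TS). `amplitudes` must keep the summed
    model positive."""
    c0 = cash(data, background)
    c = [cash(data, background + a * template) for a in amplitudes]
    i = int(np.argmin(c))
    ts = c0 - c[i]
    return amplitudes[i], np.sign(amplitudes[i]) * np.sqrt(abs(ts))
\end{verbatim}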
We performed the modeling and fitting described above in
Eqs.~\ref{eq:generic_model}, \ref{eq:cash_statistic}, and \ref{eq:ts_definition}
in pixel coordinates using the HGPS maps in Cartesian projection. Spatial
distortion of flux models are negligible as a result of the projection from the
celestial sphere because the HGPS observations only cover a latitude range of
$|b|\leqslant3\degr$. We implemented the analysis in Python using Astropy version~1.3
\citep{2013AandA...558A..33A}, Sherpa version~4.8 \citep{Freeman:2001}, and
Gammapy version~0.6 \citep{Donath2015, 2017arXiv170901751D}.
\subsection{Point spread function}
\label{sec:cc:maps:psf}
For HGPS, the PSF was computed for a given sky position assuming a power-law
point source with a spectral index of 2.3\ (average index of
known VHE $\gamma$-ray\ sources) and assuming rotational symmetry of the PSF. Since
the H.E.S.S.\ PSF varies with $\gamma$-ray\ energy and observing parameters such as
the number of participating telescopes, zenith angle, and offset angle in the
field of view, an effective PSF corresponding to the HGPS survey counts maps was
computed by applying the same cuts (especially safe energy threshold) and
exposure weighting the PSF of contributing runs (i.e., within the FoV of
2\degr). The per-run PSF was computed by interpolating PSFs with similar
observation parameters, using precomputed lookups from MC EAS simulations. All
computations were carried out using two-dimensional histograms with axes
$\theta^2$ and $\log(E_r)$, where $\theta$ is the offset between the MC source
position and the reconstructed event position and $E_r$ is the reconstructed
event energy. In the final step, the integration over energy was performed,
resulting in a one-dimensional histogram with axis $\theta^2$, which was fitted
with a triple-exponential analytical function to obtain a smooth distribution,
\begin{equation}
\frac{\mathrm{d}P}{\mathrm{d}\theta^2}(\theta^2) =
\sum_{i=1}^3 A_i \exp\left(-\frac{\theta^2}{2\sigma_i^2}\right),
\end{equation}
where $P$ is the event probability, and $A_i$ and $\sigma_i$ are the weights and
widths of the corresponding components, respectively. This ad hoc model
corresponds to a triple-Gaussian two-dimensional PSF model when projected onto
a sky map.
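As an illustration of this parametrization, the sketch below (Python with
SciPy; the weights and widths shown are hypothetical, not fitted HGPS values)
evaluates the containment radius by integrating each exponential term
analytically:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def containment_radius(A, sigma, fraction=0.68):
    """Containment radius of the triple-exponential dP/dtheta^2 model.

    Each term integrates analytically up to theta^2 = t:
    2 * A_i * sigma_i^2 * (1 - exp(-t / (2 * sigma_i^2))).
    """
    A = np.asarray(A, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    norm = np.sum(2.0 * A * sigma**2)  # total probability
    def cdf(t):
        return np.sum(2.0 * A * sigma**2
                      * (1.0 - np.exp(-t / (2.0 * sigma**2)))) / norm
    t = brentq(lambda t: cdf(t) - fraction, 0.0, (10.0 * sigma.max())**2)
    return np.sqrt(t)

# Hypothetical weights and widths (deg), not fitted HGPS values:
r68 = containment_radius(A=[1.0, 0.5, 0.1], sigma=[0.03, 0.07, 0.15])
\end{verbatim}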
For the HGPS catalog, the 68\% containment radius of the PSF model adopted is
typically $\theta\sim$$0.08\degr$\ and varies by approximately $\pm 20\%$ at
the locations of the HGPS sources. For observations with large FoV offsets, the
68\% containment increases by almost a factor of two to $\theta\sim$0.15\degr,
which is mostly relevant for high Galactic latitude sources at the edge of the
HGPS survey region. The HGPS PSF has a 95\% containment radius of
$\theta\sim$0.2\degr and varies by approximately $\pm 20\%$ at the locations of
the HGPS sources. The PSF at large FoV offsets (corresponding to high-GLAT
regions in the survey map) is more tail heavy; there the 95\% to 68\%
containment radius ratio increases from $\sim$2.5 up to 4.
Section~\ref{sec:cc:extension_ul} discusses systematic uncertainties related to
the PSF model in connection with upper limits on source sizes.
\subsection{Test statistics maps}
\label{sec:maps:ts}
In addition to the standard Li~\&~Ma\ significance maps described in
Sect.~\ref{sec:significancemaps}, we also used TS\ maps in the analysis. The
TS\ denotes the likelihood ratio of the assumed source hypothesis versus the
null hypothesis (i.e.,\ background only) for every position (pixel) in the map.
We computed these maps assuming various spatial templates: a point-like source
morphology (i.e., PSF only), and PSF-convolved Gaussian morphologies with widths
$0.05\degr$, $0.10\degr$, and $0.20\degr$. During the computation of each map,
at the center of each map pixel, we performed a single-parameter likelihood fit
of the amplitude of the template, according to Eq.~\ref{eq:generic_model}. We
then filled the map with the TS\ value defined in Eq.~\ref{eq:ts_definition}.
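A minimal sketch of such a per-pixel amplitude fit (Python with SciPy; this is
an illustration, not the Gammapy implementation used for the HGPS) could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def ts_at_position(counts, bkg, template):
    """Fit the amplitude of a PSF-convolved, exposure-weighted template
    on top of the background and return TS = C(amp=0) - C(best amp)."""
    def cash_amp(amp):
        npred = np.clip(bkg + amp * template, 1e-25, None)
        return 2.0 * np.sum(npred - counts * np.log(npred))
    best = minimize_scalar(cash_amp)  # Brent; amplitude may go negative
    return cash_amp(0.0) - best.fun
\end{verbatim}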
We used the resulting TS\ maps primarily to compute residual maps and residual
distributions. The main advantage over standard Li~\&~Ma\ significance maps is that
source morphology and PSF information can be taken into account. Additionally,
this paper uses TS\ maps when presenting sky maps because they contain uniform
statistical noise everywhere in the map. In contrast, flux or excess maps that
are smoothed with the same spatial templates still show increased noise in
regions of low exposure. We implemented the TS\ map algorithm available in
Gammapy; see also \citet{Stewart:2009} for a more detailed description of TS\
maps.
\subsection{Sources not reanalyzed}
\label{sec:cc:cutout_sources}
H.E.S.S.\ observations have revealed many sources with complex morphology, e.g.,
\object{RX J0852.0$-$4622} (also known as \object{Vela Junior}), which has a
very pronounced shell-like structure~\citep{HESS:VelaJnr}, or the Galactic
center region, which has multiple point sources embedded in a very elongated
ridge-like emission \citep{HESS:Arc}. Dedicated studies model such regions of
emission using complex parametric models, for example, model templates based on
molecular data, shell-like models, asymmetric Gaussian models, and combinations
thereof. It is challenging to systematically model the emission across the
entire Galactic plane using these more complex models, which tend to yield
unstable or non-converging fit results because of the large number of free and
often poorly constrained parameters. This can be especially problematic in ROIs
with multiple, complex overlapping sources.
Given the difficulties with modeling complex source morphologies, we decided to
restrict the HGPS analyses to a symmetrical Gaussian model assumption and
exclude all firmly identified shell-like sources and the very complex GC region
from reanalysis. A complete list of the ten excluded (or cut-out) sources in the
HGPS region is given in Table~\ref{tab:hgps_external_sources}. The table also
contains four sources that were not significant in the current HGPS analysis but
were found to be significant in other dedicated, published analyses; these cases
are discussed in detail in Sect.~\ref{sec:results:previously:missing}. We refer
to the \hgpsSourceCountCutout sources listed in
Table~\ref{tab:hgps_external_sources} as ``EXTERN'' HGPS sources and have
included them in the HGPS source catalog to give a complete list of sources in
the HGPS region. We also included these sources in the various distributions,
histograms, and other plots exploring the global properties of the HGPS sources
in Sect.~\ref{sec:results:distributions}. The
morphological and spectral parameters for those sources were adapted from the
most recent H.E.S.S.\ publication (listed in
Table~\ref{tab:hgps_external_sources}).\footnote{We note that the values in the
HGPS catalog for EXTERN sources do not fully reflect the results of the original
publication. Specifically, in some cases the information is incomplete (e.g.,
when certain measurements were not given in the paper) or not fully accurate
(e.g., when the published measurements do not fully agree with the definition of
measurements in this paper, or when parameter errors differ due to
inaccuracies in the error propagation when converting to HGPS measures).}
\begin{table*}
\caption
[Sources in the HGPS catalog with parameters taken from previous publications]{
Fourteen EXTERN sources in the HGPS catalog, i.e., VHE sources in the HGPS
region previously detected by H.E.S.S.\ that were not reanalyzed in this paper. For
each source, we list the reason why it was not reanalyzed and give the reference
that was used to fill the parameters in the HGPS source catalog. See
Sect.~\ref{sec:cc:cutout_sources}; for sources not significant in the HGPS
analysis, see also Sect.~\ref{sec:results:previously:missing}.
}
\label{tab:hgps_external_sources}
\centering
\begin{tabular}{llll}
\hline\hline
Source name & Common name & Reason for not reanalyzing & Reference \\
\hline
\object{HESS J0852$-$463} & Vela Junior & Shell morphology & \cite{HESS:VelaJnr} \\
\object{HESS J1442$-$624} & \object{RCW 86} & Shell morphology & \cite{2016arXiv160104461H} \\
\object{HESS J1534$-$571} & G323.7$-$1.0 & Shell morphology & \cite{HESS:Shells} \\
\object{HESS J1614$-$518} & --- & Shell morphology & \cite{HESS:Shells} \\
\object{HESS J1713$-$397} & \object{RX J1713.7$-$3946} & Shell morphology & \cite{HESS:RXJ1713} \\
\object{HESS J1731$-$347} & G353.6$-$0.7 & Shell morphology & \cite{2011AandA...531A..81H} \\
\object{HESS J1912$+$101} & --- & Shell morphology & \cite{HESS:Shells} \\
\object{HESS J1745$-$290} & Galactic~center & Galactic center region & \cite{GCPevatron} \\
\object{HESS J1746$-$285} & Arc~source & Galactic center region & \cite{HESS:Arc} \\
\object{HESS J1747$-$281} & G0.9$+$0.1 & Galactic center region & \cite{Aharonian:2005d} \\
\hline
\object{HESS J1718$-$374} & G349.7$+$0.2 & Not significant in HGPS & \cite{2015AandA...574A.100H} \\
\object{HESS J1741$-$302} & --- & Not significant in HGPS & \cite{HESS:1741} \\
\object{HESS J1801$-$233} & \object{W 28} & Not significant in HGPS & \cite{Aharonian:2008f}\\
\object{HESS J1911$+$090} & \object{W 49B} & Not significant in HGPS & \cite{HESS:W49} \\
\hline
\end{tabular}
\end{table*}
\subsection{Large-scale emission model}
\label{sec:cc:large-scale-emission}
We previously demonstrated that there exists VHE $\gamma$-ray\ emission that is
large scale and diffuse along the Galactic plane \citep{2014PhRvD..90l2007A}.
In that paper, we constructed a mask to exclude the regions of the plane where
significant emission was detected. The latitude profile of excess $\gamma$-rays\
outside this mask clearly showed the presence of significant large-scale
$\gamma$-ray\ emission. We do not extend the analysis of this diffuse emission any
further here. Whether the emission originates from interactions of diffuse
cosmic rays in the interstellar medium or from faint, unresolved $\gamma$-ray\
sources (or a combination thereof) is not investigated. Instead, we take a
pragmatic approach and model the large-scale emission present in the HGPS
empirically as described in the following.
The presence of a large-scale component of $\gamma$-ray\ emission along the
Galactic plane complicates the extraction of the Gaussian $\gamma$-ray\ source
components. This large-scale emission can mimic the presence of spurious
degree-scale sources in some regions of the plane and it also tends to broaden
the Gaussian components that describe otherwise well-defined sources. It is
therefore necessary to model the large-scale $\gamma$-ray\ emission to measure the
flux and morphology of the HGPS sources more accurately.
To do so, we built an empirical surface brightness model of the large-scale
emission (see Fig.~\ref{fig:hgps_diffuse_model}), where the latitude profile is
Gaussian and defined by three parameters: the peak position in latitude, the
width, and amplitude of the Gaussian. We estimated the parameters using a
maximum-likelihood fit in regions where no significant emission is measurable on
small scales, i.e.,\ outside the exclusion regions defined for the ring
background model, taking exposure into account. Regardless of the physical
origin of the large-scale emission, it is likely to be structured along the
plane and not constant.
To estimate the variable parameters of the model, we fit the Gaussian parameters
in rectangular regions of width $20\degr$ in longitude and height $6\degr$ in
latitude. We excluded all pixels inside the standard exclusion regions used to
produce the background maps (see Sect.~\ref{sec:background_estimation}). The
Gaussian parameters were dependent on the size of both the exclusion regions and
rectangular regions. We found that the typical variations were $\sim$25\%. To
obtain a smooth sampling of the variations, we followed a sliding-window
approach, distributing the centers of the rectangular regions every $2.5\degr$
in longitude and interpolating between these points.
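The resulting model can be thought of as a Gaussian latitude profile whose
three parameters vary smoothly with longitude; a minimal sketch (Python; the
node values below are placeholders, not the HGPS best-fit parameters) is:
\begin{verbatim}
import numpy as np

# Parameter values at window centres every 2.5 deg in longitude
# (longitude unwrapped across 360 deg; placeholder numbers only):
lon_nodes = np.arange(250.0, 425.0, 2.5)
peak_nodes = np.full(lon_nodes.size, 4e-9)   # cm^-2 s^-1 sr^-1
pos_nodes = np.full(lon_nodes.size, -0.2)    # deg
width_nodes = np.full(lon_nodes.size, 0.4)   # deg

def large_scale_model(lon, lat):
    """Gaussian latitude profile with longitude-interpolated parameters."""
    peak = np.interp(lon, lon_nodes, peak_nodes)
    b0 = np.interp(lon, lon_nodes, pos_nodes)
    width = np.interp(lon, lon_nodes, width_nodes)
    return peak * np.exp(-(lat - b0)**2 / (2.0 * width**2))
\end{verbatim}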
The maximum-likelihood fit compares how well the data are described by the
cosmic-ray (CR) background alone and by the CR background plus the model. We used
the likelihood ratio test to estimate the significance of adding the large-scale
component in each 20-deg-wide window, finding it to be larger than $3\sigma$ (TS
difference of 9) over most of the HGPS region.
Figure~\ref{fig:hgps_diffuse_model} shows the resulting best-fit Gaussian
parameters together with the associated uncertainty intervals estimated from the
likelihood error function. After this fit, we froze the parameters of the model
for use in the $\gamma$-ray\ source detection and morphology fitting procedure.
While the approach presented here provides an estimate of the large-scale
emission present in the HGPS maps, it does not comprise a measurement of the
total Galactic diffuse emission (see discussion in
Sect.~\ref{sec:results:large}).
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_diffuse_model}
\caption[Large-scale emission model]{
Distribution of the fit large-scale emission model parameters with Galactic
longitude. The first panel gives the peak brightness of the large-scale emission
model in units of $10^{-9}$~cm$^{-2}$~s$^{-1}$~sr$^{-1}$ ($\approx
1.3$\%~Crab~deg$^{-2}$). The second panel shows the peak position of the
Gaussian along the Galactic latitude axis in degrees and the third panel shows
the width ($\sigma$) of the Gaussian in degrees. The solid lines are the result
of fitting each set of parameters every $2.5\degr$ in longitude and
interpolating. The light blue bands show the 1$\sigma$ error region obtained
from the covariance matrix of the likelihood function. The lower panel
illustrates the $20\degr$ wide sliding-window method (red rectangle) that was
used to determine the large-scale emission model in areas (shown in light blue)
where the HGPS sensitivity is better than 2.5\%~Crab but outside exclusion
regions (shown in dark blue); this is explained in further detail in the main
text.
}
\label{fig:hgps_diffuse_model}
\end{figure*}
\subsection{Regions of interest}
\label{sec:cc:maps:roi}
To search for sources, we divided the whole HGPS region into smaller overlapping
ROIs. This was necessary to limit both the number of simultaneously fit
parameters and the number of pixels involved in the fit.
We manually applied the following criteria to define the ROIs:
\begin{enumerate}
\item[(a)] All significant emission (above $5\sigma$) in the HGPS region should
be contained in at least one ROI.
\item[(b)] No significant emission should be present close to the edges of an
ROI.
\item[(c)] The width of each ROI should not exceed $\sim$10\degr\ in longitude
to limit the number of sources involved in the fit.
\item[(d)] ROIs should cover the full HGPS latitude range from $-5\degr$ to
$5\degr$.
\end{enumerate}
In cases in which criterion~(b) could not be fulfilled, we excluded the
corresponding emission from the ROI and assigned it to a different, overlapping
ROI. Figure~\ref{fig:catalog:rois} illustrates the boundaries of the 18 ROIs
defined with these criteria. Some of the ROIs show regions without any exposure;
these regions were masked out and ignored in the subsequent likelihood fit.
\subsection{Multi-Gaussian source emission model}
\label{sec:cc:components}
After excluding shell-type supernova remnants (SNRs) and the GC region from
reanalysis and adding a model for large-scale emission to the background, we
modeled all remaining emission as a superposition of Gaussian components. We
took the following model as a basis:
\begin{equation}
N_{\mathrm{Pred}} = N_{\mathrm{Bkg}} + \mathrm{PSF} \ast
\left( \mathcal{E} \cdot \sum_{i} S_{\mathrm{Gauss},i} \right) +
\mathcal{E} \cdot S_{\mathrm{LS}},
\label{eq:expected_signal}
\end{equation}
where $N_{\mathrm{Pred}}$ corresponds to the predicted number of counts,
$N_{\mathrm{Bkg}}$ to the number of counts from the background model,
$S_{\mathrm{LS}}$ the contribution of the large-scale emission model, $\sum_{i}
S_{\mathrm{Gauss},i}$ the sum of the Gaussian components, and $\mathcal{E}$ the
exposure as defined in Eq.~\ref{eq:expcounts}.
For a given set of model parameters, we integrated the surface brightness
distribution $S$ over each spatial bin, multiplied it by the exposure
$\mathcal{E}$, and convolved it with the PSF to obtain the predicted number of
counts per pixel. For every ROI, we took the PSF at the position of the
brightest emission and assumed it to be constant within the ROI.
For the Gaussian components, we chose the following parametrization:
\begin{equation}
S_{\mathrm{Gauss}}(r| \phi, \sigma) =
\phi \frac{1}{2\pi\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right),
\label{eq:gauss}
\end{equation}
where $S_{\mathrm{Gauss}}$ is the surface brightness, $\phi$ the total
spatially integrated flux, and $\sigma$ the width of the Gaussian component. The
offset $r~=~\sqrt{(\ell - \ell_{0})^2 + (b - b_{0})^2}$ is defined with respect
to the position $(\ell_{0}, b_{0})$ of the component measured in Galactic
coordinates.
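A direct transcription of Eq.~\ref{eq:gauss} into code (Python; purely
illustrative, with function and variable names of our choosing) is:
\begin{verbatim}
import numpy as np

def s_gauss(lon, lat, phi, sigma, lon0, lat0):
    """Surface brightness of one Gaussian component: total flux phi,
    width sigma, centre (lon0, lat0) in Galactic coordinates."""
    r2 = (lon - lon0)**2 + (lat - lat0)**2
    return phi / (2.0 * np.pi * sigma**2) * np.exp(-r2 / (2.0 * sigma**2))
\end{verbatim}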
We conducted the manual fitting process following a step-by-step procedure.
Starting with one Gaussian component per ROI, we added Gaussian components
successively and refit all of the parameters simultaneously until no significant
residuals were left. In each step, we varied the starting parameters of the fit
to avoid convergence toward a local minimum. The significance of the tested
component was estimated from
\begin{equation}
\mathrm{TS} =
\mathcal{C}(\mbox{best solution without component}) -
\mathcal{C}(\mbox{with component})
\label{eq:detection:alternative_threshold}
.\end{equation}
We considered the component to be statistically significant and kept it in the
model when the TS value exceeded a threshold of $\mathrm{TS}=30$. The
probability of having one false detection in the HGPS survey from statistical
background fluctuations is small ($p=0.03$). This number was determined by
simulating 100 HGPS survey counts maps as Poisson-fluctuated background model
maps, followed by a multi-Gaussian peak finding method, resulting in three peaks
with $\mathrm{TS}\ge30$. However, we note that this assessment of expected
false detections relies on the assumption that the hadronic background model as
well as the large-scale and source $\gamma$-ray\ emission models are perfect. In HGPS, as in any
other Galactic plane survey with complex emission features, this is not the
case. Several components with $\mathrm{TS}\ge30$ are not confirmed by the
cross-check analysis (see Sect.~\ref{sec:cc:component_classification}).
The definition of TS\ above differs slightly from the definition given in
Eq.~\ref{eq:ts_definition}. For a single, isolated component, both values are
identical. However, if a second, overlapping component exists, some of the
emission of the first source is modeled by the second source, reducing the
significance of the first. We therefore estimated the significance of a
component from the difference in $\mathcal{C}$ between the total ROI model with
and without the component, and not from the difference compared to the
background-only model.
Applied to real data, we found a total of 98\ significant
Gaussian components using this procedure and TS\ threshold.
Figure~\ref{fig:hgps_residual_significance_distribution_ts} depicts the residual
$\sqrt{\mathrm{TS}}$ distributions over the entire HGPS region. These
distributions demonstrate that there is approximate agreement with a normal
Gaussian distribution; in particular, we find no features above the
$\sqrt{\mathrm{TS}} = \sqrt{30}$ detection threshold. Inherent imperfections in
the background, large-scale emission, and source emission models lead to a
slight broadening of the distributions with respect to a normal distribution, as
expected.
For reference, the 98\ Gaussian components have been
assigned identifiers in the format \verb=HGPSC NNN=, where \verb=NNN= is a
three-digit number (counting starts at 1), sorted by right ascension (which is
right to left in the survey maps). The complete list of components is provided
in the electronic catalog table (see Table~\ref{tab:hgps_component_columns}).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_residual_distribution_ts}}
\caption[Residual significance distribution]{
Residual significance distribution after taking the HGPS emission model into
account (see Fig.~\ref{fig:hgps_catalog_model}, middle panel). The significance
was computed using a Gaussian source morphology of size $\sigma = 0.05\degr$,
$0.10\degr$, and $0.20\degr$. A vertical line at $\sqrt{\mathrm{TS}} = \sqrt{30}$
is shown, corresponding to the detection threshold for the HGPS multi-Gaussian
modeling. The sky region corresponding to this distribution includes pixels
inside exclusion regions, except for the Galactic center and shell-type SNRs,
which were not modeled for the HGPS (see Table~\ref{tab:hgps_external_sources},
lower panel of Fig.~\ref{fig:hgps_catalog_model} and
Fig.~\ref{fig:catalog:rois}).
}
\label{fig:hgps_residual_significance_distribution_ts}
\end{figure}
\subsection{Component selection, merging, and classification}
\label{sec:cc:component_classification}
We repeated the entire modeling procedure described in the previous section with
a second set of maps produced with an independent analysis framework (see
Sect.~\ref{sec:dataset:events}). Five of the 98\ HGPS
components were not significant in the cross-check analysis and were therefore
discarded (see Fig.~\ref{fig:hgps_catalog_model} and
Table~\ref{tab:hgps_component_columns}). Those components we labeled with
\verb=Discarded Small= in the column \verb=Component_Class= of the FITS table.
We observed two other side effects of the modeling procedure. Firstly, very
bright VHE sources, even some with center-filled morphologies such as Vela~X,
decomposed into several Gaussian components, modeling various morphological
details of the source. Figure~\ref{fig:hgps_catalog_model} illustrates this
effect: two multicomponent sources are shown. Therefore, in cases where
overlapping components were not clearly resolved into separate emission peaks,
we merged them into a single source in the HGPS catalog. In total, we found 15
such multicomponent sources: ten consisting of two Gaussian components and five
consisting of three Gaussian components. It would be intriguing to analyze the
complex morphology of these multicomponent sources in greater detail, but this
kind of analysis is beyond the scope of this survey paper. We labeled
components that are part of a multicomponent source as \verb=Source Multi= and
used the label \verb=Source Single= if a single component models the source.
The second side effect was that some of the Gaussian components appeared to have
very large sizes coupled with very low surface brightness. We interpret these
components as artifacts of the modeling procedure, which picks up additional
diffuse $\gamma$-ray\ emission that is not covered by our simple large-scale
emission model (Sect.~\ref{sec:cc:large-scale-emission}). For example, as shown
in Fig.~\ref{fig:hgps_catalog_model}, the emission around $\ell \sim 345\degr$
initially comprised three model components: two components that clearly
converged on the two discrete emission peaks visible in the excess map and one
very large and faint component that appeared to be modeling large-scale emission
along the Galactic plane in between the two and not clearly related to either of
the two peaks. In total, we found ten such large-scale components (see
Table~\ref{tab:hgps_component_columns}), which we discarded and did not include
in the final HGPS source catalog as they are likely low-brightness diffuse
emission. We labeled this class of components as \verb=Discarded Large= in the
component list.
\subsection{Source characterization}
\label{sec:cc:source_characterization}
\subsubsection{Position, size, and flux}
\label{sec:cc:merged_source_parameters}
For HGPS sources that consist of several components, we determined the final
catalog parameters of the sources as follows:
\paragraph*{Flux\\}
The total flux is the sum of the fluxes of the individual components
\begin{equation}
F_{\mathrm{Source}} = \sum_i F_i
\label{eq:source_flux}
.\end{equation}
\paragraph*{Position\\}
We calculated the position by weighting the individual component positions with
the respective fluxes. The final $\ell_{\mathrm{Source}}$ and
$b_{\mathrm{Source}}$ coordinates of the source are written as%
\begin{linenomath}
\begin{align}
\ell_{\mathrm{Source}} = \frac{1}{F_{\mathrm{Source}}}\sum_i \ell_iF_i
&& \mathrm{and} &&
b_{\mathrm{Source}} = \frac{1}{F_{\mathrm{Source}}}\sum_i b_iF_i.
\end{align}
\end{linenomath}
\paragraph*{Size\\}
We obtained the size in $\ell$ and $b$ directions from the second moment of the
sum of the components as follows:
\begin{linenomath}
\begin{align}
\sigma_{\ell, \mathrm{Source}}^2 &=
\frac{1}{F_{\mathrm{Source}}}
\sum_i F_i \cdot (\sigma_i^2 + \ell_i^2) - \ell_{\mathrm{Source}}^2
\\
\sigma_{b, \mathrm{Source}}^2 &=
\frac{1}{F_{\mathrm{Source}}}
\sum_i F_i \cdot (\sigma_i^2 + b_i^2) - b_{\mathrm{Source}}^2,
\end{align}
\end{linenomath}
where additionally we defined the average circular size as
\begin{equation}
\sigma_{\mathrm{Source}} = \sqrt{
\sigma_{\ell, \mathrm{Source}}\,
\sigma_{b, \mathrm{Source}}
}
\label{eq:source_size}
.\end{equation}
We computed the uncertainties of the parameters using Gaussian error
propagation, taking the full covariance matrix estimate from the fit into
account.
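A compact sketch of these component-merging formulas (Python; illustrative
only, omitting the covariance-based error propagation) is:
\begin{verbatim}
import numpy as np

def merge_components(F, lon, lat, sigma):
    """Flux-weighted position and second-moment size of a source built
    from several Gaussian components (per-component input arrays)."""
    F, lon, lat, sigma = map(np.asarray, (F, lon, lat, sigma))
    F_tot = F.sum()
    lon_s = np.sum(F * lon) / F_tot
    lat_s = np.sum(F * lat) / F_tot
    var_l = np.sum(F * (sigma**2 + lon**2)) / F_tot - lon_s**2
    var_b = np.sum(F * (sigma**2 + lat**2)) / F_tot - lat_s**2
    size = (var_l * var_b)**0.25  # sqrt(sigma_l * sigma_b)
    return F_tot, lon_s, lat_s, size
\end{verbatim}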
\subsubsection{Size upper limits}
\label{sec:cc:extension_ul}
In the morphology fit, we did not take into account uncertainties in the PSF
model. However, studies using H.E.S.S.\ data \citep[e.g.,][]{Stycz16} have revealed
a systematic bias on the size of point-like extragalactic sources on the order
of $\sigma_{\mathrm{syst}} = 0.03\degr$, so we have adopted this number as the
systematic uncertainty of the PSF.
Given a measured source extension $\sigma_{\mathrm{Source}}$ and corresponding
uncertainty $\Delta\sigma_{\mathrm{Source}}$, we used the following criterion to
claim a significant extension beyond the PSF:
\begin{equation}
\sigma_{\mathrm{Source}} - 2\Delta \sigma_{\mathrm{Source}}
> \sigma _{\mathrm{syst}},
\end{equation}
i.e.,\ if the measured extension exceeds the systematic minimum
$\sigma_{\mathrm{syst}}$ by more than twice its statistical uncertainty
$\Delta\sigma_{\mathrm{Source}}$. If this criterion is not met,
we consider the source to be compatible with being point-like and define an
upper limit on the source size as follows:
\begin{equation}
\sigma_{\mathrm{UL}} =
\max(\sigma_{\mathrm{syst}}, \sigma_{\mathrm{Source}}
+ 2\Delta \sigma_{\mathrm{Source}}).
\label{eq:extension_ul}
\end{equation}
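The extension criterion and upper limit reduce to a few lines of code (Python
sketch; the function name is ours):
\begin{verbatim}
SIGMA_SYST = 0.03  # deg, adopted PSF systematic uncertainty

def size_or_upper_limit(sigma_src, d_sigma_src):
    """Return ('extended', size) or ('point-like', size upper limit)."""
    if sigma_src - 2.0 * d_sigma_src > SIGMA_SYST:
        return 'extended', sigma_src
    return 'point-like', max(SIGMA_SYST, sigma_src + 2.0 * d_sigma_src)
\end{verbatim}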
\subsubsection{Localization}
\label{sec:cc:localisation}
The HGPS source location error is characterized by error circles with radius
$R_{\alpha}$ at confidence levels $\alpha = 0.68$ and $\alpha = 0.95$, computed
as
\begin{equation}
R_{\alpha} = f_{\alpha} \times
\sqrt{\Delta \ell_{\mathrm{stat}}^2 + \Delta \ell_{\mathrm{syst}}^2 +
\Delta b_{\mathrm{stat}}^2 + \Delta b_{\mathrm{syst}}^2}.
\label{eq:position_error}
\end{equation}
The values $\Delta \ell_{\mathrm{stat}}$ and $\Delta b_{\mathrm{stat}}$ are the
statistical errors on Galactic longitude $\ell$ and latitude $b$, respectively,
from the morphology fit. For the H.E.S.S.\ systematic position error, a value of
$\Delta \ell_{\mathrm{syst}} = \Delta b_{\mathrm{syst}} = 20\arcsec =
0.0056\degr$ per axis was assumed, following the method and value in
\citet{2010MNRAS.402.1877A}.
Assuming a Gaussian probability distribution, the factor $f_{\alpha}$ is chosen
as $f_{\alpha} = \sqrt{-2\log(1-\alpha)}$ for a given confidence level $\alpha$
\citep[see Eq.~1 in][]{Abdo:2009e}.
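For reference, a sketch of Eq.~\ref{eq:position_error} (Python; illustrative):
\begin{verbatim}
import numpy as np

SYST_PER_AXIS = 0.0056  # deg (20 arcsec)

def position_error(dl_stat, db_stat, alpha=0.95):
    """Error-circle radius R_alpha for a given confidence level."""
    f_alpha = np.sqrt(-2.0 * np.log(1.0 - alpha))
    return f_alpha * np.sqrt(dl_stat**2 + SYST_PER_AXIS**2
                             + db_stat**2 + SYST_PER_AXIS**2)
\end{verbatim}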
\subsubsection{Source naming}
\label{sec:cc:identifier}
The 78\ HGPS catalog sources have been assigned source names
in the format \verb=HESS JHHMM=$\pm$\verb=DDd=, where \verb=HHMM= and
$\pm$\verb=DDd= are the source coordinates in right ascension and declination,
respectively. For new sources, the source name is based on the source location
reported in this paper. For sources that had been assigned names in previous
H.E.S.S.\ publications or conference presentations, the existing name was kept for
the HGPS catalog, even if the position in the HGPS analysis would have led to a
different name. Similarly, the source candidates (or hotspots, see
Sect.~\ref{sec:sourcecandidates}) have been assigned names in the format
\verb=HOTS JHHMM=$\pm$\verb=DDd=.
\subsection{Source spectra}
\label{sec:cc:spectra}
After detection and subsequent morphological analysis of the sources, we
measured a spectrum for each of the sources using an aperture photometry method.
In this method we sum the ON counts within an aperture defined as a circular
region centered on the best-fit position of each source. We fit a spectral model
within that aperture using an ON-OFF likelihood method \citep{Piron:2001}, where
the OFF background is estimated using reflected regions defined on a run-by-run
basis \citep{1994APh.....2..137F, ref:bgmodeling}. Based on the morphology
model, we then corrected the measured flux for containment and contamination
from other nearby sources. For the spectral analysis, we applied standard cuts,
resulting in energy thresholds in the range 0.2--0.5~TeV, lower than the
thresholds achieved using hard cuts in the detection and morphology steps.
Figure~\ref{fig:hgps_energy_threshold_profiles} shows the variation of the
threshold with longitude. In the following sections, we describe the spectral
analysis process in more detail.
\subsubsection{Circular apertures and reflected region background estimate}
\label{sec:cc:spectra:reg}
The optimal choice of the size of the spectral extraction region is a balance
between including a large percentage of flux from the source and limiting the
contamination of the measurement by hadronic background events, large-scale
emission, and other nearby sources. Following these requirements, we chose the
aperture radius $R_{\mathrm{spec}}$ as follows:
\begin{itemize}
\item $R_{\mathrm{spec}}=R_{\mathrm{70}}$ for \hgpsRSpecIsNotChanged medium-size
sources, where $R_{\mathrm{70}}$ is the 70\% containment radius measured on the
PSF-convolved excess model image (\texttt{R70} in the catalog),
\item minimum $R_{\mathrm{spec}}=0.15\degr$ for \hgpsRSpecIsExtended small
($\texttt{R70} < 0.15\degr$) sources,
\item maximum $R_{\mathrm{spec}}=0.5\degr$ for \hgpsRSpecIsReduced very large
($\texttt{R70} > 0.5\degr$) sources.
\end{itemize}
A minimal aperture radius of $0.15\degr$ was imposed to make the measurement of
the source spectrum more robust against systematic uncertainties of the PSF and
the source morphology assumption.
The aperture radius was limited to a maximum radius of $R_{\mathrm{spec}} =
0.50\degr$ to limit the fraction of observations that cannot be used for the
spectrum measurement because no background estimate could be obtained.
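These three rules amount to clipping \texttt{R70} to the interval
$[0.15\degr, 0.5\degr]$; as a sketch (Python; the function name is ours):
\begin{verbatim}
def spectral_aperture_radius(r70):
    """Aperture radius R_spec: R70 clipped to [0.15, 0.5] deg."""
    return min(max(r70, 0.15), 0.5)
\end{verbatim}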
As illustrated in Fig.~\ref{fig:hgps_spectrum_background_estimation}, the
background is estimated using the reflected region method
\citep{1994APh.....2..137F, ref:bgmodeling}. For every spectral extraction
region (ON region), corresponding OFF regions with the same shape and offset to
the pointing position are chosen outside exclusion regions.
The method works well for small, isolated $\gamma$-ray\ sources such as active
galactic nuclei (AGNs) or the Crab nebula, where typically $\sim$10 OFF regions
are found in every observation. This results in a well-constrained background,
and all the exposure can be used for the spectral measurement. Because of the
high density of sources in the Galactic plane, large areas of emission are
excluded and only a few reflected regions can be found. This effectively results
in a loss of exposure for the spectrum measurement compared to the map
measurement. For the HGPS analysis this is a large problem because of the very
extensive exclusion regions used: 64\% of the livetime is lost for spectral
analysis compared to the total available livetime that is used in the map-based
analysis. For each source, see the \texttt{Livetime\_Spec} and \texttt{Livetime}
information in the source catalog. In cases where the loss of exposure is very
high, the background cannot be well constrained, which consequently results in
spectral parameters that are not well constrained. The following sources are
affected by this issue:
\begin{itemize}
\item Sources located in or near large exclusion regions (see
Fig.~\ref{fig:catalog:rois}). An area of width $\sim$2\degr\ is often excluded
along the Galactic plane, and this covers a significant portion of the analysis
FoV, which has a diameter of $4\degr$.
\item Sources with large ON regions.
\item Sources observed with too small or too large offsets because they are
located close to other sources that were covered with dedicated observations.
\end{itemize}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_spectrum_background_estimation}}
\caption[Illustration of reflected region background estimation for spectra] {
Illustration of reflected region background estimation for spectra
(Sect.~\ref{sec:cc:spectra:reg}). The HGPS significance image is shown in
inverse grayscale and exclusion regions as blue contours. The analysis FoV for
one observation is shown as a black circle with $2\degr$ radius and a black
cross at the observation pointing position. The non-filled red circle
illustrates the ON region for spectral analysis; the filled red circles indicate
the OFF regions.
}
\label{fig:hgps_spectrum_background_estimation}
\end{figure}
\subsubsection{Flux containment and contamination correction}
\label{sec:cc:flux_correction}
By construction and because of additional effects such as PSF leakage or source
morphologies featuring tails, the spectral extraction region does not contain
the full flux of the source. Additionally, the large-scale emission model and
other nearby Gaussian components bias the flux measurement within the spectral
region. Based on this emission model, we separate the contributions from the
different components and derive a correction factor for the spectral flux
measurement.
The total flux in the spectral measurement region is
\begin{equation}
\label{eq:flux_total}
F_{\mathrm{Total}}^{\mathrm{ON}} =
F_{\mathrm{Source}}^{\mathrm{ON}}
+ F_{\mathrm{LS}}^{\mathrm{ON}}
+ F_{\mathrm{Other}}^{\mathrm{ON}},
\end{equation}
where $F_{\mathrm{Source}}^{\mathrm{ON}}$ is the contribution from the source
itself, $F_{\mathrm{LS}}^{\mathrm{ON}}$ is the contribution from the large-scale
emission model, and $F_{\mathrm{Other}}^{\mathrm{ON}}$ is the contribution from
nearby sources and other, discarded Gaussian emission components.
Assuming $F_{\mathrm{Source}}$ is the flux measurement from the morphology fit,
we define the correction factor as
\begin{equation}
\label{eq:correction_factor_1}
C_{\mathrm{Correction}} = F_{\mathrm{Source}} / F_{\mathrm{Total}}^{\mathrm{ON}}.
\end{equation}
To summarize the contributions from the large-scale emission model and other
sources in close (angular) proximity, we define a quantity called
contamination. This quantity measures the fraction of flux within the
spectral region that does not originate from the source itself and is written as
\begin{equation}
\label{eq:contamination}
C_{\mathrm{Contamination}} = \frac{F_{\mathrm{LS}}^{\mathrm{ON}}
+ F_{\mathrm{Other}}^{\mathrm{ON}}}{F_{\mathrm{Total}}^{\mathrm{ON}}}
.\end{equation}
Additionally, we define the containment of a source as the ratio
between the flux of the source within the spectral measurement region
$F_{\mathrm{Source}}^{\mathrm{ON}}$ (taking the morphology model into account)
and the total flux obtained from the morphology fit $F_{\mathrm{Source}}$ as follows:
\begin{equation}
\label{eq:containment}
C_{\mathrm{Containment}} = F_{\mathrm{Source}}^{\mathrm{ON}} / F_{\mathrm{Source}}
.\end{equation}
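Given the model-predicted flux contributions inside the ON region, the three
quantities follow directly; a minimal sketch (Python; the variable names are
ours):
\begin{verbatim}
def aperture_corrections(F_src_on, F_ls_on, F_other_on, F_src_total):
    """Correction factor, contamination, and containment for one source."""
    F_total_on = F_src_on + F_ls_on + F_other_on
    correction = F_src_total / F_total_on
    contamination = (F_ls_on + F_other_on) / F_total_on
    containment = F_src_on / F_src_total
    return correction, contamination, containment
\end{verbatim}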
The HGPS catalog provides all the quantities mentioned in this section,
and all aperture-photometry based flux measurements in the HGPS catalog (see
Table~\ref{tab:hgps_sources_columns}) are corrected by the factor given in
Eq.~\ref{eq:correction_factor_1} (see Sect.~\ref{sec:cc:spectra:model} and
~\ref{sec:cc:spectra:points}).
We note that this region-based spectral analysis method with a single integral
flux correction factor assumes energy-independent source morphology. The spectra
obtained for sources with energy-dependent morphology do not correspond to the
correct total emission spectra of the sources. Currently, energy-dependent
morphology has been clearly established for two sources (HESS J1303$-$631 and
HESS J1825$-$137), and there are hints of energy-dependent morphology for a few
more. Furthermore, using an integral flux correction factor is not fully correct
because the HGPS PSF is somewhat dependent on energy (smaller PSF at higher
energies). The resulting inaccuracy on HGPS spectral results is small, because
we have chosen a minimal spectral aperture radius of 0.15\degr, which contains
most of the emission for point sources at all energies. Generally, spectra for
sources with large correction factors are likely to be less accurate, because
the morphology models used to compute the correction are only approximations.
\subsubsection{Spectral model fit}
\label{sec:cc:spectra:model}
We performed the spectral fits on the stacked\footnote{Observation stacking was
performed as described here: \url{http://cxc.harvard.edu/ciao/download/doc/combine.pdf}} observations, using the ON-OFF
Poisson likelihood function, referred to as the $W$ statistic (WSTAT) in
XSPEC\footnote{See \url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html}\ or Appendix~A of \citet{Piron:2001}.}. For each
observation, we applied a safe energy threshold (see Sect.~\ref{sec:events_map})
cut at low energies, and the maximum energy was chosen at the highest event
energy in the stacked counts spectrum for the ON region (resulting in a maximum
energy of 30~TeV to 90~TeV). Energy dispersion was not taken into account via a
response matrix but in an approximate way: the effective area is computed such
that it yields fully correct spectral results for power-law spectra with
spectral index 2, and, given the good energy resolution of H.E.S.S., only small
errors are made for other spectral shapes \citep{Hoppe:2008c}.
To describe the spectral shape of the VHE $\gamma$-ray\ emission, we fit a PL
model to the data, i.e.,
\begin{equation}
\phi(E) =
\frac{{\rm d}N}{{\rm d}E} =
\phi_0 \left(\frac{E}{E_0}\right)^{-\Gamma},
\label{eqn:pl}
\end{equation}
where $\phi_0$ is the differential flux at a reference (pivot) energy
$E_0$ and $\Gamma$ is the spectral index. In addition, we also fit an
exponential cutoff power-law (ECPL) model,
\begin{equation}
\phi(E) = \phi_0 \left(\frac{E}{E_0}\right)^{-\Gamma} \exp(-\lambda E)
\label{eqn:ecpl}
,\end{equation}
which additionally contains the inverse cutoff energy $\lambda = 1 /
E_{\mathrm{cutoff}}$ as a third, free parameter. The reference (pivot) energy
$E_0$ is not a free parameter in either model; we compute this parameter on a
source-by-source basis to minimize the correlation between the other fit
parameters.
We computed integral fluxes as
\begin{equation}
F(E_1, E_2) = \int_{E_1}^{E_2}\phi(E)\,\mathrm{d}E,
\label{eq:spec_int_flux}
\end{equation}
usually for the energy band above 1~TeV, with integral flux errors computed
using Gaussian error propagation. We computed energy fluxes for a given energy
band as
\begin{equation}
G(E_1, E_2) = \int_{E_1}^{E_2}E\,\phi(E)\,\mathrm{d}E.
\end{equation}
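The following sketch (Python with SciPy; illustrative, not the fitting code)
implements the two spectral models and the integral flux by numerical
integration; the example parameter values are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pl(E, phi0, E0, gamma):
    """Power-law differential flux."""
    return phi0 * (E / E0)**(-gamma)

def ecpl(E, phi0, E0, gamma, lam):
    """Exponential cutoff power law, lam = 1 / E_cutoff."""
    return phi0 * (E / E0)**(-gamma) * np.exp(-lam * E)

def integral_flux(model, E1, E2, *pars):
    """F(E1, E2) by numerical integration of the differential flux."""
    return quad(model, E1, E2, args=pars)[0]

# Example: integral flux in the 1--100 TeV band (hypothetical parameters)
F = integral_flux(pl, 1.0, 100.0, 1e-12, 1.0, 2.3)
\end{verbatim}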
The source catalog provides the PL fit results (see
Table~\ref{tab:hgps_sources_columns} for a description of columns) for every
source and the ECPL parameters where the ECPL model is more likely
($\mathrm{TS} = W_{\mathrm{ECPL}} - W_{\mathrm{PL}} > 9$). All
aperture-photometry based flux measurements are corrected by the factor given in
Eq.~\ref{eq:correction_factor_1}.
\subsubsection{Flux points}
\label{sec:cc:spectra:points}
Flux points are estimates of the differential flux $\phi$ at a given set of
reference energies $E_{\mathrm{ref}}$. To compute flux points for the HGPS catalog, we
chose a method similar to that used for the \emph{Fermi}-LAT\ catalogs \citep[see, e.g.,
Sect.~5.3 in][]{3FGL}. For every source we selected a total of six bins
$(E_1, E_2)$ in reconstructed energy, logarithmically spaced between the safe
energy threshold and a maximum energy of 50~TeV. The reference energy for the
flux point estimation was set to the logarithmic bin center $E_{\mathrm{ref}} = \sqrt{E_1
E_2}$. The differential flux $\phi$ was computed via a one-parameter likelihood
fit (same method as described in Sect.~\ref{sec:cc:spectra:model}), under the
assumption of the global best-fit PL and using only the data within the bin of
reconstructed energy $(E_1, E_2)$. A 1$\sigma$ asymmetric error on $\phi$ was
computed from the likelihood profile, and for spectral points of low
significance ($\mathrm{TS}<1$), an upper limit on $\phi$ was additionally
computed at 95\% confidence level. All spectral point measurements in the HGPS catalog are
corrected by the factor given in Eq.~\ref{eq:correction_factor_1}.
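The energy binning for the flux points can be sketched as follows (Python;
illustrative):
\begin{verbatim}
import numpy as np

def flux_point_bins(e_safe, e_max=50.0, n_bins=6):
    """Logarithmic energy bins and reference energies E_ref = sqrt(E1*E2)."""
    edges = np.logspace(np.log10(e_safe), np.log10(e_max), n_bins + 1)
    e_ref = np.sqrt(edges[:-1] * edges[1:])
    return list(zip(edges[:-1], edges[1:])), e_ref
\end{verbatim}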
\subsection{Method discussion}
\label{sec:cc:discussion}
The sensitivity profile and map shown in Fig.~\ref{fig:hgps_sensitivity} were
computed assuming a point-like source morphology and using the Li~\&~Ma\
significance estimation. The likelihood fit method including the large-scale
emission model component used for the catalog production fundamentally differs
from that. We qualitatively discuss below the most important differences and
their influence on the effective sensitivity with which the catalog was
produced.
In Sect.~\ref{sec:sensmaps}, the sensitivity was defined as the minimum required
flux for a source to be detected with a certain level of confidence. Assuming
the source is extended, which applies to most of the Galactic sources found by
H.E.S.S., the total flux of the source is distributed over a larger area on the
sky. Given a fixed background level, the signal-to-noise ratio is decreased and
the sensitivity scales with the size of the source as
\begin{equation}
F_{\mathrm{min}}(\sigma_{\mathrm{source}}) \propto
\sqrt{\sigma_{\mathrm{source}}^2 + \sigma_{\mathrm{PSF}}^2},
\label{eq:sensitivity_extended}
\end{equation}
where $\sigma_{\mathrm{source}}$ is the size of the source and
$\sigma_{\mathrm{PSF}}$ the size of the PSF \citep{ref:hintonhofmann}. It is
constant for sources smaller than the PSF and increases linearly with source
size for sources much larger than the PSF.
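A sketch of this scaling, normalized to the point-source sensitivity (Python;
the default PSF size of $0.08\degr$ is the typical 68\% containment radius
quoted in Sect.~\ref{sec:cc:maps:psf}):
\begin{verbatim}
import numpy as np

def relative_sensitivity(sigma_source, sigma_psf=0.08):
    """F_min degradation for an extended source, relative to a point
    source: sqrt(sigma_source^2 + sigma_psf^2) / sigma_psf."""
    return np.sqrt(sigma_source**2 + sigma_psf**2) / sigma_psf
\end{verbatim}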
For low surface brightness sources close to the Galactic plane, high levels of
contamination (defined as in Eq.~\ref{eq:contamination}) from the large-scale
emission model were observed. This effectively reduces the sensitivity close to
the Galactic plane and even caused a few previously detected H.E.S.S.\ sources to
fall below the detection threshold (see also
Sect.~\ref{sec:results:previously:missing}) chosen for the HGPS analysis. For
sources far from the Galactic plane, however, the influence of the large-scale
emission can be neglected.
Systematic and statistical background uncertainties, which are neglected in this
analysis, bias the sensitivity for large, extended sources. Neglecting
background fluctuations in the likelihood fit can lead to an overestimation of
the significance of large sources, which can result in unreliable detections of
large emission components. In addition, the adaptive ring method
(Sect.~\ref{sec:adaptiveringmethod}), which has a minimal inner ring radius of
$0.7\degr$, does not provide a reliable background estimate for those large
emission components.
Systematic uncertainties of various origins affect the spectral parameters of
the sources. In addition to the transparency of the atmosphere, calibration, and
event reconstruction (see Sect.~\ref{sec:dataset}), the analysis method itself
can introduce uncertainties. In particular, the background and large-scale
emission models, and the source extraction and measurement method
(multi-Gaussian morphology and aperture photometry) influence the flux and
spectral index measurement. We estimate the relative systematic uncertainties of
the flux extracted from the maps (Sect.~\ref{sec:maps}) and from the spectrum
(Sect.~\ref{sec:cc:spectra}) to be 30\%; for the spectral index
(Sect.~\ref{sec:cc:spectra}) we estimate an absolute systematic uncertainty of
0.2. This estimate is based on the scatter seen in the cross-check analysis and
other analyses (e.g., a source catalog extracted without a large-scale emission
model component). For individual difficult sources (poor containment, large
contamination, complex, and marginally significant morphology features), larger
systematics may arise (see Sects.~\ref{sec:results:previously} and
\ref{sec:results:xcheck}). We note that the systematic uncertainties quoted here
are the same as in the previous HGPS publication~\citep{ref:gps2006}, and, as
expected for a population of extended sources in the Galactic plane, these
values are slightly larger than the systematic uncertainties previously
estimated for isolated point-like sources such as the Crab nebula
\citep{ref:hesscrab}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_sources_spectrum_map_flux_comparison}}
\caption[Comparison of source flux measurements]{
Comparison of integral source flux measurements above 1~TeV as calculated with
two different methods. The flux estimate from maps is the total source flux
according to the morphology model fit, assuming a spectral index of $\Gamma =
2.3$ (the \texttt{Flux\_Map} column in the catalog). The flux estimate from
spectra is computed from the total source best-fit spectral model extracted
using aperture photometry and aperture correction (the
\texttt{Flux\_Spec\_Int\_1TeV} column in the catalog). The gray band in the
background illustrates a systematic uncertainty of 30\% on the flux values.
}
\label{fig:hgps_sources_spectrum_map_flux_comparison}
\end{figure}
A comparison of the two methods presented in this paper for calculating HGPS
source integral flux ($E > 1$~TeV) was performed as a diagnostic test (see
scatter plot in Fig.~\ref{fig:hgps_sources_spectrum_map_flux_comparison}). The
flux shown on the x-axis is the total source flux estimated from the source
morphology fit on the maps (given by Eq.~\ref{eq:source_flux}), assuming a
power-law spectrum with index $\Gamma = 2.3$. The flux estimate on the y-axis
was obtained from a spectral analysis (see Eq.~\ref{eq:spec_int_flux} in
Sect.~\ref{sec:cc:spectra}), using a PL or ECPL spectral model assumption
(best-fit model) and an aperture photometry method that includes a containment
and contamination correction according to the HGPS multi-Gaussian plus
large-scale emission model. One can see that the two flux estimates agree very
well for most sources within the statistical errors and the 30\% flux systematic
uncertainty that we quote above. There are exceptions, however, which can
largely be attributed to differences in the underlying morphology and
spectral model assumptions of the two flux estimators. We note that when
comparing either of these HGPS source flux estimates against the cross-check,
the level of agreement is similar, but not quite as good (see
Sect.~\ref{sec:results:xcheck} for a discussion of individual cases). When
comparing against previous publications, the scatter is even larger (flux
differences up to a factor of 2 in a few cases), which can in many cases be
understood to be the result of differences in morphology model (of the source
itself, of nearby overlapping sources, or the large-scale emission model) or
the spectral extraction region; most previous publications did not apply
containment or contamination corrections.
\section{Results and discussion}
\label{sec:results}
This section presents the results and a discussion of the HGPS based on the
data set (Sect.~\ref{sec:dataset}), maps (Sect.~\ref{sec:maps}), and catalog
(Sect.~\ref{sec:cc}).
\subsection{Source associations and firm identifications}
\label{sec:results:assoc_id}
Determining the physical nature of a VHE $\gamma$-ray\ source often requires
detailed spectral and morphological characterization of the VHE emission and
availability of complementary MWL information. Finding a
likely counterpart of a point-like VHE source is generally easy thanks to the
limited region of the sky to investigate. For an extended source, such as the vast
majority of the HGPS sources, the procedure is often much more involved because
of multiple spatial associations, unless the VHE morphology is similar to that
observed at other wavelengths (e.g.,~for a large shell-type SNR).
We therefore make a distinction between source associations and firm
identifications of sources. The former is a list of astronomical objects,
extracted from catalogs of plausible counterparts, which are found to be
spatially coincident with the HGPS source. When particularly solid evidence
exists that connects one of these associated objects to the VHE emission, such
as variability or shell-type morphology, we consider the HGPS source to be
firmly identified.
In Sect.~\ref{sec:association_procedure} we first describe the systematic
association procedure, followed by the discussion of the results of this search
for plausible counterparts in Sect.~\ref{sec:association_results}. Finally we
present the list of firmly identified HGPS sources in
Sect.~\ref{sec:identifications}.
\subsubsection{Source association procedure}
\label{sec:association_procedure}
Our objective is to associate each HGPS source with plausible counterparts
found among nearby objects in the most relevant counterpart catalogs, that is,
catalogs of objects already identified as VHE emitters, such as SNRs and pulsar
wind nebulae, as well as high-energy $\gamma$-ray\ source catalogs (see
Table~\ref{tab:hgps_associations_catalogs}). We search for these counterparts in
a region larger than the nominal HGPS source size (its 68\% containment radius);
we associate all the objects whose cataloged positions are at an angular
distance smaller than the source spectral extraction radius,
$R_{\mathrm{spec}}$ (Sect.~\ref{sec:cc:spectra:reg}; see also
Fig.~\ref{fig:hgps_survey_mwl_1} ff.). This spatial criterion is motivated by
the fact that often the origin of the relativistic particles is significantly
offset from the VHE centroid, for example, when VHE pulsar wind nebulae (PWNe)
are offset from energetic pulsars or extended well beyond their X-ray PWNe
counterparts. We expect this procedure to be affected by source confusion
(multiple associations that are difficult to disentangle), especially for larger
VHE sources.
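A minimal sketch of this spatial association criterion (Python; a flat-sky
small-angle approximation is assumed here, which is adequate near the Galactic
plane but is our simplification):
\begin{verbatim}
import numpy as np

def associated_mask(src_lon, src_lat, r_spec, cat_lon, cat_lat):
    """Catalog objects within R_spec of the source position
    (flat-sky small-angle approximation)."""
    dlon = (np.asarray(cat_lon) - src_lon) * np.cos(np.radians(src_lat))
    dlat = np.asarray(cat_lat) - src_lat
    return np.hypot(dlon, dlat) < r_spec
\end{verbatim}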
\begin{table*}
\caption[Automatic source association results]{
Results of the automatic association procedure for each catalog used (see main
text for details and selections applied). The second column lists the numbers of
objects in the HGPS survey region for each catalog. The third column gives the
total number of associations found. The last column gives the number of HGPS
sources having at least one associated object of a given category. The
difference between the last two columns is only large for 3FGL because 3FGL is
the only counterpart catalog for which the source density is so high that many
HGPS sources are associated with multiple 3FGL sources. Out of the
78 HGPS sources, only 11 are left
without any association.
}
\label{tab:hgps_associations_catalogs}
\centering
\begin{tabular}{lrrr}
\hline\hline
Type & Number of objects & Total number of & Number of HGPS sources\\
& in HGPS region & associations & with at least 1 association\\
\hline
2FHL sources & \InHGPSFHLSourceCount & \HGPSFHLAssociationCount & \oneFHLAssocCount \\
3FGL sources & \InHGPSFGLSourceCount & \HGPSFGLAssociationCount & \oneFGLAssocCount \\
\hline
Supernova remnants & \InHGPSSNRSourceCount & \HGPSSNRAssociationCount & \oneSNRAssocCount \\
Pulsar wind nebulae & \InHGPSPWNSourceCount & \HGPSPWNAssociationCount & \onePWNAssocCount \\
Composite remnants & \InHGPSCOMPSourceCount & \HGPSCOMPAssociationCount & \oneCOMPAssocCount \\
\hline
Energetic pulsars & \InHGPSPSRSourceCount & \HGPSPSRAssociationCount & \onePSRAssocCount \\
\hline
Extra associations & -- & \HGPSEXTAssociationCount & -- \\
\hline
\end{tabular}
\end{table*}
This criterion is a compromise between the number of spurious associations and
the number of missed associations. A spurious association would be one with a
counterpart that is physically unrelated to the HGPS source (e.g.,~a chance
spatial coincidence in the same region of the sky). A missed association would
be a real counterpart that is not selected by the procedure (e.g.,~a pulsar
significantly offset from a VHE source could be missed even though it is known
to generate a PWN). As a consequence of this spatial criterion, larger sources
naturally has a larger number of associated objects. The criterion is intended
to be loose (inclusive) to minimize missed associations at the expense of
including potentially spurious associations. Nonetheless, this procedure has
certain limitations, for example,~difficulties in associating VHE emission with
an SNR if the emission was produced in offset molecular clouds illuminated by
cosmic rays that escaped from the SNR.
In the following paragraphs, we briefly describe the catalogs used for the
automatic association procedure applied to search for counterparts. We also
describe a list of additional objects that have been associated with HGPS
sources in previous publications but are not present in the counterpart search
catalogs. We note that some of these catalogs contain a single, specific type of
object (e.g., supernova remnants), whereas other catalogs contain multiple types
of physical objects because they are the result of broad surveys at energies
relevant to the HGPS (e.g., the \emph{Fermi}-LAT\ catalogs).
\paragraph{High-energy $\gamma$-ray\ sources\\}
We searched for associated high-energy (HE) $\gamma$-ray\ sources in the \emph{Fermi}-LAT\
2FHL source catalog \citep{2FHL} and the full 3FGL catalog \citep{3FGL}. The
2FHL catalog covers the 50~GeV to $\sim$2~TeV energy range, and the 3FGL catalog
covers the 0.1--300~GeV range. They contain \InHGPSFHLSourceCount and
\InHGPSFGLSourceCount sources in the HGPS region, respectively. We expect the
\emph{Fermi}-LAT\ catalogs to contain a significant number of HGPS sources. In the case of
2FHL, this is due to its energy range, which partially overlaps that of the
HGPS, and its sensitivity, which reaches $\sim$3--4\% Crab in the HGPS region
\citep{2FHL}. But even without such overlaps, we expect to find many \emph{Fermi}-LAT\
associations, since many objects emit $\gamma$-rays\ following a spectrum that
extends from the HE to the VHE range. Even for noncontinuous spectra we expect
to find numerous associations, for example, when a pulsar emits GeV emission
detected by \emph{Fermi}-LAT\ and its wind nebula emits TeV emission detected by H.E.S.S.
\paragraph{Supernova remnants and pulsar wind nebulae\\}
Supernova remnants and PWNe are among the most common particle accelerators in
the Galaxy and are well-known VHE $\gamma$-ray\ emitters. Nonetheless, it is often
challenging to establish associations between SNRs and VHE sources. For example,
only specific regions of an SNR shell could be emitting or neighboring molecular
clouds could be illuminated by multi-TeV particles that escaped the shock front
of the SNR. Pulsar wind nebulae evolve as their pulsar ages and the available
rotational energy (spin-down power) decreases. Since the X-ray synchrotron
radiation from PWNe arises from higher energy electrons than the IC radiation in
the VHE $\gamma$-ray\ band, and the cooling time of the electrons decreases with
their energy ($t_c = E/(\mathrm{d}E/\mathrm{d}t)$; for radiative losses $t_c \propto 1/E$), we
expect PWNe to shine longer in VHE $\gamma$-rays. Furthermore, a decreasing
magnetic field with age can limit the emission time in radio and X-rays without
affecting the VHE emission. As a result, some old PWNe should be undetectable
outside the VHE $\gamma$-ray\ domain \citep[see, e.g.,][]{1997MNRAS.291..162A,
deJager09, HESS:PWNPOP}. For such old PWNe only the detection of a middle-aged
energetic pulsar in the vicinity of a VHE source can provide evidence toward the
true nature of the VHE emission.
To search for SNR and PWN associations, we take the most complete catalog of
SNRs and PWNe to date into account, SNRcat\footnote{\url{http://www.physics.umanitoba.ca/snr/SNRcat}, accessed
Oct~10,~2015} \citep{SNRcat}. The SNRcat is a census of Galactic supernova
remnants and their high-energy observations. It is based on the radio Catalogue
of Galactic Supernova Remnants \citep{2014BASI...42...47G} but additionally
includes other types of remnants in an effort to be as complete and up-to-date
as possible. In particular, it contains plerionic objects, PWNe with no observed
shell. The possible presence of a PWN is usually assessed based on the presence
of diffuse, nonthermal emission in radio, X-rays, or even $\gamma$-rays. Several of
these cataloged objects have been classified by SNRcat as candidate PWNe solely
because of the presence of VHE emission in the vicinity of an energetic pulsar.
We removed those objects from the catalog used in our association procedure to
avoid cases in which we might misleadingly self-associate.
For the association procedure, we split the SNRcat objects into three subsets
based on their apparent type. The first subset consists of objects that have no
evidence of nebular emission and mostly belong to the shell or filled-center
types in SNRcat; this subset contains \InHGPSSNRSourceCount\ objects within the
HGPS region. The second subset consists of objects that are listed in SNRcat as
PWNe (or PWNe candidates) showing no evidence for shell-like emission; this
subset contains \InHGPSPWNSourceCount objects within the HGPS region. The third
subset consists of objects showing evidence of both shell and nebular emission,
which we refer to as composite objects; this subset contains
\InHGPSCOMPSourceCount\ objects within the HGPS region. For a further discussion
of a potential PWN nature of these objects see the population study presented in
\cite{HESS:PWNPOP}.
\paragraph{Energetic pulsars\\}
We selected energetic pulsars from version 1.54 of the ATNF catalog of radio
pulsars \citep{Manchester:2005}. We excluded millisecond pulsars because they
are not expected to power VHE PWNe and applied a cut on the spin-down energy
flux $\dot{E} / d^2 > 10^{33}$~erg~s$^{-1}$~kpc$^{-2}$ on the remaining pulsars.
In addition, to take into account energetic pulsars of unknown distance, we
included all objects with a spin-down luminosity $\dot{E} >
10^{34}$~erg~s$^{-1}$, resulting in a total of \InHGPSPSRSourceCount\ pulsars
used in the association procedure. We did not take into account pulsars that do
not have a measured $\dot{E}$. It is important to note that pulsars represent
indirect associations: the associated pulsars are not directly emitting the
unpulsed VHE $\gamma$-ray\ emission found in the HGPS, but rather indicate that
they could be powering a PWN that directly emits such emission.
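To make this selection concrete, the following minimal sketch applies the same
cuts to a hypothetical ATNF-like table (the column names and values are
illustrative and do not reproduce the actual psrcat schema):
\begin{verbatim}
import pandas as pd

# Hypothetical pulsar table; edot in erg/s, dist in kpc, period in s.
psrs = pd.DataFrame({
    "name":   ["PSR A", "PSR B", "PSR C", "PSR D"],
    "period": [0.09, 0.003, 0.5, 0.2],
    "edot":   [5.0e35, 1.0e36, 8.0e32, 2.0e34],
    "dist":   [4.0, 1.0, 2.0, None],
})

not_msp  = psrs["period"] > 0.03          # crude millisecond-pulsar cut
has_edot = psrs["edot"].notna()           # drop pulsars without Edot
flux_cut = psrs["dist"].notna() & \
           (psrs["edot"] / psrs["dist"]**2 > 1e33)
lumi_cut = psrs["edot"] > 1e34            # keeps unknown-distance objects
print(psrs[not_msp & has_edot & (flux_cut | lumi_cut)]["name"].tolist())
# -> ['PSR A', 'PSR D']
\end{verbatim}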
\subsubsection{Association results and discussion}
\label{sec:association_results}
\paragraph{HE $\gamma$-ray\ sources\\}
Of the \InHGPSFGLSourceCount\ 3FGL sources present in the HGPS region, we find
\HGPSFGLAssociationCount\ to be associated with an HGPS source. As expected, we
also find a large portion of the \InHGPSFHLSourceCount\ 2FHL sources in the HGPS
region to be associated with HGPS sources: only \FHLnonAssociated\ of these have
no HGPS counterpart. One of these sources is notably coincident with the VHE
source candidate \object{HOTS J1111$-$611} (Sect.~\ref{sec:HOTS_J1111m611}).
Many of the other 2FHL sources lacking an HGPS association tend to be located in
low-sensitivity parts of the HGPS region. Only four 2FHL sources located in
parts of the HGPS region with good sensitivity show no significant VHE emission:
\object{Puppis A} \citep{2015AA...575A..81H}, \object{2FHL J0826.1$-$4500},
\object{$\eta$ Carinae}, and the composite \object{SNR G326.3$-$1.8}
\citep{2013ApJ...768...61T}.
\paragraph{Supernova remnants\\}
We find 24 of the 78 HGPS sources to be associated with shell-like SNRs. Given
the large number of such objects in the HGPS region (\InHGPSSNRSourceCount) and
given their sizes, the number of chance coincidences is non-negligible. This is
to be expected since we have not tried to specifically match SNR and HGPS source
positions and sizes as in \citet{Fermi_SNR_cat}. Nonetheless, as discussed
below, we find six known shells in the HGPS to be firmly identified and two more
to be VHE shell candidates based on their specific morphologies
\citep{HESS:Shells}. We study the population of known SNRs in the HGPS further
in a companion paper \citep{HESS:SNRUL}.
\paragraph{Pulsar wind nebulae and composites\\}
We find 37 of the SNRcat objects (in the HGPS region) containing a PWN or PWN
candidate to be associated with an HGPS source. Conversely, we find more than
40\% of HGPS sources to have at least one associated object in the PWN or
composite classes. This supports the notion that systems containing PWNe are
prolific VHE emitters. As discussed below, we are able to firmly identify about
half of these associations using additional observational evidence such as
similar MWL morphology or energy-dependent $\gamma$-ray\ morphology.
\paragraph{Pulsars\\}
We find 47 of the HGPS sources to be associated with an energetic pulsar.
This suggests that the population of HGPS sources contains numerous PWNe.
However, we selected a relatively low $\dot{E}$ threshold in our association
criteria to minimize missed associations. We quantitatively study such selection
effects in a companion paper \citep{HESS:PWNPOP} that provides a detailed look
at the physical characteristics of firmly identified PWNe and a list of
candidate PWN identifications based on various expected characteristics.
\paragraph{Extra associations\\}
For completeness, in addition to the associations obtained through the
catalog-based, automatic procedure, we add a list of
\HGPSEXTAssociationCount\ extra associated objects that are plausible
counterparts for some HGPS sources
and are not covered by the limited set of catalogs we use. Previous publications
had proposed most of these associations, often thanks to dedicated MWL
observations of VHE source regions (e.g.,~the X-ray source \object{XMMU J183245$-$0921539}
and \object{HESS J1832$-$093}). We propose other associations in this work for some of
the new sources (Sect.~\ref{sec:results:new}). We also include the original
identifiers of VHE sources discovered first by other instruments (e.g.,
\object{VER J1930$+$188}, which corresponds to \object{HESS J1930$+$188}).
Table~\ref{tab:hgps_associations} includes all of these extra associations,
labeled ``EXTRA''.
\paragraph{Sources without physical associations\\}
Eleven HGPS sources do not have any associations with known physical objects,
although some are associated with HE $\gamma$-ray\ sources. We list and discuss
these briefly here (the new VHE sources are discussed in
Sect.~\ref{sec:results:new}):
\begin{enumerate}
\item \object{HESS J1457$-$593} is one of the new sources detected in the HGPS
analysis. Although the automatic association procedure does not find any
counterparts, the VHE $\gamma$-ray\ emission may originate in a molecular cloud
illuminated by CRs that escaped from the nearby but offset \object{SNR
G318.2$+$0.1}. This scenario is briefly described in
Sect.~\ref{sec:HESS_J1457m593}.
\item \object{HESS J1503$-$582} is also a new HGPS source and does not have any
compelling association except for the HE $\gamma$-ray\ sources \object{3FGL
J1503.5$-$5801} and \object{2FHL J1505.1$-$5808}, neither of which is of a
firmly identified nature. We describe this enigmatic source in
Sect.~\ref{sec:HESS_J1503m582}.
\item \object{HESS J1626$-$490} has only one association, with the HE $\gamma$-ray\
source \object{3FGL J1626.2$-$4911}. A dedicated \emph{XMM-Newton}\ observation did not
reveal any compelling X-ray counterpart either \citep{2011A&A...526A..82E}.
\item \object{HESS J1702$-$420} is near the point-like source \object{2FHL
J1703.4$-$4145}. The elongation of the VHE $\gamma$-ray\ emission prevented the
automated procedure from making the association, but a connection between the
objects seems plausible. The small \object{SNR G344.7$-$0.1} (about
$8^{\prime}$ in diameter) is also in the vicinity, in good positional
coincidence with the (point-like) 2FHL source.
\item \object{HESS J1708$-$410} has no compelling association, even though this
source was the target of dedicated X-ray observations to look for associated
emission \citep{2009ApJ...707.1717V}. Given the brightness and relatively steep
spectrum of this VHE source ($\Gamma = 2.57 \pm 0.09$), the absence of a
counterpart at lower $\gamma$-ray\ energies in the \emph{Fermi}-LAT\ catalogs is surprising
and suggests the emission peaks in the hundreds of GeV range.
\item \object{HESS J1729$-$345} is north of the nearby SNR HESS~J1731$-$347
\citep{2011AandA...531A..81H}. An investigation into a potential connection
between the two suggests the VHE emission from the former could be from a
molecular cloud illuminated by hadronic particles that escaped from the SNR
\citep{Cui2016}.
\item HESS~J1741$-$302 is the subject of a dedicated companion paper
\citep{HESS:1741} discussing potential PWNe and SNR-related association
scenarios, among others. These aspects are therefore not discussed here.
\item \object{HESS J1745$-$303} is close to, but offset from, \object{SNR
G359.1$-$0.5}. \emph{Suzaku}\ observations have revealed neutral iron line emission in
the region, suggesting the presence of molecular matter and making this object
another possible case of a CR-illuminated cloud \citep{2009ApJ...691.1854B}. We
find this object also to be associated with the HE $\gamma$-ray\ sources
\object{2FHL J1745.1$-$3035} and \object{3FGL J1745.1$-$3011}.
\item \object{HESS J1828$-$099} is a new HGPS source described in
Sect.~\ref{sec:HESS_J1828m099}.
\item \object{HESS J1832$-$085} is also a new HGPS source, described in
Sect.~\ref{sec:HESS_J1832m085}.
\item \object{HESS J1858$+$020} has an association with the HE $\gamma$-ray\ source
\object{3FGL J1857.9$+$0210} and is close to, but offset from, \object{SNR
G35.6$-$0.4}. A dedicated study \citep{2014A&A...561A..56P} did not find any
compelling X-ray counterpart, although multiple possible scenarios were
investigated, including CR-illuminated molecular clouds.
\end{enumerate}
\subsubsection{Firmly identified HGPS sources}
\label{sec:identifications}
In this section, we go one step further and treat those HGPS sources for which
the physical origin of the VHE $\gamma$-ray\ emission has been firmly identified.
Whereas the association criteria were principally based on positional evidence
(angular offset), we also perform a census of the additional evidence that is
available to reinforce spatial associations and arrive at firm identifications.
The supplementary observables we consider are correlated MWL variability,
matching MWL morphology, and energy-dependent $\gamma$-ray\ morphology
\citep{ref:hintonhofmann}. Table~\ref{tab:hgps_identified_sources} summarizes
the results, along with the respective references for the additional evidence.
Among the 78 sources in the HGPS region, we determine
31 to be firmly identified.
\begin{table*}
\caption
[Firmly identified HGPS sources]{
Table of the 31 firmly identified objects among the HGPS sources.
The object classes are $\gamma$-ray\ binary, shell-type supernova remnant (SNR),
pulsar wind nebula (PWN), and composite SNR (in cases where it is not possible
to distinguish between the shell and interior nebula). The evidence used to
identify the VHE $\gamma$-ray\ emission includes position, morphology, variability,
and energy-dependent morphology (ED Morph.).
}
\label{tab:hgps_identified_sources}
\centering
\begin{tabular}{lllll}
\hline\hline
Source name & Identified object & Class & Evidence & Reference \\
\hline
\object{HESS J1018$-$589} A & \object{1FGL J1018.6$-$5856} & Binary & Variability & \cite{J1018} \\
\object{HESS J1302$-$638} & PSR~B1259$-$63 & Binary & Variability & \cite{2005AandA...442....1A} \\
\object{HESS J1826$-$148} & LS~5039 & Binary & Variability & \cite{Aharonian:2006a} \\
\hline
HESS~J0852$-$463 & Vela~Junior & SNR & Morphology & \cite{Aharonian:2005b} \\
HESS~J1442$-$624 & RCW~86 & SNR & Morphology & \cite{2016arXiv160104461H} \\
HESS~J1534$-$571 & G323.7$-$1.0 & SNR & Morphology & \cite{HESS:Shells} \\
HESS~J1713$-$397 & RX~J1713.7$-$3946 & SNR & Morphology & \cite{2004Natur.432...75A} \\
HESS~J1718$-$374 & G349.7$+$0.2 & SNR & Position & \cite{2015AandA...574A.100H} \\
HESS~J1731$-$347 & G353.6$-$0.7 & SNR & Morphology & \cite{2011AandA...531A..81H} \\
HESS~J1801$-$233 & W~28 & SNR & Position & \cite{Aharonian:2008f} \\
HESS~J1911$+$090 & W~49B & SNR & Position & \cite{HESS:W49} \\
\hline
\object{HESS J0835$-$455} & \object{Vela X} & PWN & Morphology & \cite{Aharonian:2006d} \\
HESS~J1303$-$631 & G304.10$-$0.24 & PWN & ED Morph. & \cite{2012AandA...548A..46H} \\
\object{HESS J1356$-$645} & G309.92$-$2.51 & PWN & Position & \cite{2011AandA...533A.103H} \\
\object{HESS J1418$-$609} & G313.32$+$0.13 & PWN & Position & \cite{Aharonian:2006b} \\
\object{HESS J1420$-$607} & G313.54$+$0.23 & PWN & Position & \cite{Aharonian:2006b} \\
\object{HESS J1514$-$591} & \object{MSH 15$-$52} & PWN & Morphology & \cite{Aharonian:2005c} \\
\object{HESS J1554$-$550} & G327.15$-$1.04 & PWN & Morphology & Section~\ref{sec:HESS_J1554m550} \\
HESS~J1747$-$281 & G0.87$+$0.08 & PWN & Morphology & \cite{Aharonian:2005d} \\
\object{HESS J1818$-$154} & G15.4$+$0.1 & PWN & Morphology & \cite{HESS2014_J1818} \\
HESS~J1825$-$137 & G18.00$-$0.69 & PWN & ED Morph. & \cite{Aharonian:2006g} \\
\object{HESS J1837$-$069} & G25.24$-$0.19 & PWN & Morphology & \cite{2008AIPC.1085..320M} \\
\object{HESS J1849$-$000} & G32.64$+$0.53 & PWN & Position & Section~\ref{sec:HESS_J1849m000} \\
\hline
\object{HESS J1119$-$614} & G292.2$-$0.5 & Composite & Position & Section~\ref{sec:HESS_J1119m614} \\ %
\object{HESS J1640$-$465} & G338.3$-$0.0 & Composite & Position & \cite{2014MNRAS.439.2828A}, \cite{2014ApJ...788..155G} \\
\object{HESS J1714$-$385} & \object{CTB 37A} & Composite & Position & \cite{2008AA...490..685A} \\
\object{HESS J1813$-$178} & G12.8$-$0.0 & Composite & Position & \cite{2007AA...470..249F}, \cite{2009ApJ...700L.158G} \\
\object{HESS J1833$-$105} & G21.5$-$0.9 & Composite & Position & Section~\ref{sec:HESS_J1833m105} \\
\object{HESS J1834$-$087} & \object{W 41} & Composite & Morphology & \cite{2015AA...574A..27H} \\
\object{HESS J1846$-$029} & G29.7$-$0.3 & Composite & Position & Section~\ref{sec:HESS_J1846m029} \\
HESS~J1930$+$188 & G54.1$+$0.3 & Composite & Position & \cite{2010ApJ...719L..69A}, Sect.~\ref{sec:results:previously} \\
\hline
\end{tabular}
\end{table*}
Firm identifications rely on different forms of evidence that vary depending on
the source class. The VHE $\gamma$-ray\ emission from compact binary systems is
always point-like and should exhibit variability that is also seen at lower
energies. In contrast, the VHE emission from shell-type SNRs is extended
(provided the SNR is sufficiently large and close) and nonvariable, but can be
identified based on the specific shell morphology and correlated morphology at
lower energies.
Composite SNRs have both a shell and an interior PWN detected at lower energies
and can be more complex to identify correctly. If the angular size of the shell
emission is larger than the size of the VHE emission, we can identify the VHE
emission as coming from the PWN filling the SNR. This is the case, for example,
for HESS~J1747$-$281 (PWN in \object{SNR G0.9$+$0.1}) and HESS~J1554$-$550 (PWN
in \object{SNR G327.1$-$1.1}). In other cases, we are only able to identify the
HGPS source with the composite SNR as a whole, i.e., we are confident that the
VHE emission originates in the composite object but cannot disentangle whether
it comes predominantly from the PWN or the shell (usually due to PSF
limitations).
More evolved stellar remnant systems are difficult to identify firmly. We can
make a firm PWN identification when there is a PWN of comparable size and
compatible position detected at lower energies. This is the case, for example,
for HESS~J1420$-$607 (\object{PWN G313.54$+$0.23}) and HESS~J1356$-$645
(\object{PWN G309.92$-$2.51}). In the absence of any clear PWN, or when its
size at lower energies is much smaller than the VHE source, we have to rely on
other evidence. The clearest such evidence is the detection of energy-dependent
morphology, expected in PWNe because of the cooling of energetic electrons as
they are transported away from the pulsar. At higher energies, the extent of the
emission shrinks and its barycenter moves closer to the pulsar. This is the case
for two sources thus far, HESS~J1303$-$631 (\object{PWN G304.10$-$0.24}) and
HESS~J1825$-$137 (\object{PWN G18.00$-$0.69}). In the absence of such evidence,
the identification of a VHE source as a PWN remains tentative when the only
evidence is an energetic pulsar in the vicinity. Candidate PWN identifications
are evaluated in detail in a companion paper \citep{HESS:PWNPOP}.
A large percentage (39\%) of the 31 firmly identified sources
are PWNe. The next largest source classes identified are SNR shells (26\%) and
composite SNRs (26\%). Finally, $\gamma$-ray\ binary systems are also identified in
the HGPS. With the conservative criteria we adopted, it is not yet possible to
firmly identify more than half of the total 78 HGPS sources,
although the vast majority have one or more promising spatial associations that
could prove to be real identifications following more in-depth studies beyond
the scope of this work. We do not find any physical associations for 11 of the
VHE sources in the HGPS, although for some of these, potentially related
emission is seen in HE $\gamma$-rays, and for others, offset counterparts are
present but simply not found by the automated association procedure adopted (see
previous section). Figure~\ref{fig:hgps_source_id} summarizes these
identifications.
We note that one source in HGPS, \object{HESS J1943$+$213}, is likely an extragalactic
object. It has no measured extension and a radio counterpart that many recent
studies tend to classify as a BL~Lac object \citep{2014A&A...571A..41P,
2016ApJ...822..117S, 2016ApJ...823L..26A}. However, its VHE flux has not
revealed any variability so far, which is unusual for such an object
\citep{2016arXiv161005799S}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_source_id}}
\caption[Source identification summary pie chart]{
Source identification summary pie chart. See
Table~\ref{tab:hgps_identified_sources} and Sect.~\ref{sec:identifications}.
}
\label{fig:hgps_source_id}
\end{figure}
\subsection{Large-scale emission}
\label{sec:results:large}
In Sect.~\ref{sec:cc:large-scale-emission}, we introduced an empirical spatial
model that accounts for the large-scale VHE $\gamma$-ray\ emission observed along
the Galactic plane, so that the discrete VHE $\gamma$-ray\ sources can be
detected and characterized accurately. This model provides an estimate of the spatial distribution
of the large-scale VHE emission discovered by \citet{2014PhRvD..90l2007A}. We
find that the fit amplitude, latitudinal width, and position of this model,
shown in Fig.~\ref{fig:hgps_diffuse_model}, are consistent with the latitude
profile of that previous work. The width is also comparable to the HGPS source
latitude distribution (Fig.~\ref{fig:hgps_sources_glat}, ff.) but smaller than
that of molecular gas traced by CO emission \citep{Dame01}.
Owing to the observational constraints and analysis used, the large-scale
emission model cannot be considered a measurement of the total Galactic diffuse
emission; rather, it provides an estimate of the diffuse
emission present in the HGPS maps. Its parameter values depend on the map
construction technique, in particular the exclusion region mask used in the
analysis (Sect.~\ref{sec:exclusion_regions}), i.e., changes in the mask can
alter the parameters of the model. For instance, the peak observed at $\ell \sim
340\degr$ in Fig.~\ref{fig:hgps_diffuse_model} is due to the presence of
low-level emission that is just below the threshold to be covered by the
exclusion mask we use for the HGPS. While a significant percentage of the
large-scale emission is expected to be truly interstellar diffuse emission, it
is very likely that emission from discrete but unresolved sources contributes as
well. Finally, some features in the HGPS large-scale emission model are likely
artifacts of errors in the estimation of the background model of gamma-like
cosmic-ray EAS events (see Sect.~\ref{sec:background_estimation}). These events
are the dominant model component in the HGPS counts maps; thus, small relative
errors in that background model can lead to significant changes in the excess
model of the HGPS sources, and even more so in the HGPS large-scale emission model.
\subsection{Source parameter distributions}
\label{sec:results:distributions}
In this section, we study the global properties of the VHE $\gamma$-ray\
sources in the HGPS catalog. We compare certain key source parameters against
each other and briefly discuss the implications in the context of the Galactic
VHE source population, survey sensitivity, and firmly identified MWL source
classes.
\begin{figure*}[!ht]
\includegraphics[width=\textwidth]{figures/hgps_sources_glat}
\caption[Source Galactic latitude distribution]{
Galactic latitude distribution of the HGPS sources (gray histogram). The bin
size of this histogram is $0.3\degr$. The HGPS point source sensitivity is shown
(in units of \%~Crab) at two different longitudes of $0\degr$ and $333\degr$.
For comparison, the pulsar (PSR), supernova remnant (SNR), 3FGL and 2FHL source
distributions in the HGPS longitude range are shown as overlaid curves, smoothed
with Gaussians of width $0.15\degr$. The dashed line shows \emph{Planck}
measurements of CO(1-0) line emission as an estimate for matter density in the
Galaxy and similarly smoothed. All curves are normalized to the area of the
histogram.
}
\label{fig:hgps_sources_glat}
\end{figure*}
The latitude distribution of the 78 HGPS sources is shown in
Fig.~\ref{fig:hgps_sources_glat}. The distribution has a mean of $b =
-0.41\degr$ and a width of $0.87\degr$. For visual comparison, the latitude
distributions of the main classes of associated counterparts
(Sect.~\ref{sec:results:assoc_id}) --- SNRs, energetic pulsars, 3FGL sources,
and 2FHL sources --- are shown in this figure. Also shown for reference is an
estimate of the matter density profile as traced by \emph{Planck} measurements
of CO(1-0) line emission \citep{Planck15}. It should be kept in mind throughout
this section that the HGPS sensitivity is not uniform as a function of longitude
or latitude (Sect.~\ref{sec:sensmaps}).
The latitude distribution of the HGPS sources correlates well with those of both
potential counterparts and tracers of matter density. The distribution is somewhat skewed
toward negative latitudes even though the HGPS sensitivity has a relatively wide
and flat coverage in latitude. In Fig.~\ref{fig:hgps_sources_glat}, the
sensitivity is illustrated by two curves showing regions of relatively good
sensitivity (e.g., at $\ell = 0\degr$) and relatively poor sensitivity (e.g., at
$\ell = 333\degr$). These curves demonstrate that the HGPS sensitivity coverage
in latitude is, in general, much wider than the HGPS source distribution.
Although there are local exceptions at some longitudes, the latitude coverage is
generally flat in the range $-2.0\degr < b < 1.5\degr$, at various locations
even in $-2.5\degr < b < 2.5\degr$. However, the counterpart catalogs are known
to suffer from various selection biases, and the Galactic disk itself is known
not to be perfectly symmetric as observed across the electromagnetic spectrum.
In addition, one might still argue that, given the narrow range of latitudes
observed with respect to surveys at other wavelengths, the HGPS sources may not
be representative of the underlying distribution of VHE $\gamma$-ray\ sources.
However, in light of the counterpart distributions, in particular the 2FHL
sources, it can be reasonably assumed that the limited latitude coverage only
has a weak effect on the observed source population distribution.
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_sources_glon}
\caption[Source Galactic longitude distribution]{
Galactic longitude distribution of the HGPS sources (gray histogram).
The bin size of this histogram is $5\degr$. For comparison, the 3FGL and 2FHL
source distributions (smoothed with a Gaussian of width $5\degr$) and the
\emph{Planck} measurements of CO(1-0) line emission as an estimate for matter
density in the Galaxy (smoothed with a Gaussian of width $2.5\degr$) are shown
for the range in Galactic latitude $b \le 5\degr$ and normalized to the area of
the histogram. Spiral arm tangent locations shown are from
\cite{2014ApJS..215....1V}.
}
\label{fig:hgps_sources_glon}
\end{figure*}
The longitude distribution of the 78 HGPS sources is shown in
Fig.~\ref{fig:hgps_sources_glon}, together with the molecular interstellar
matter column density profile as traced by CO(1-0) line emission (same as in the
previous figure). The latter, measured by \emph{Planck} \citep{Planck15}, has a
uniform exposure (sensitivity) over the sky, unlike the HGPS, so that any
detailed correlations seen in this figure should be interpreted with caution. We
can nevertheless
robustly conclude that there is a very general correlation in longitude between
the number of HGPS sources and the molecular matter column density and that the
HGPS sources are mostly found in the inner $\sim$60\degr\ of the Galaxy.
Additionally, the spiral arm tangents as traced by CO
\citep{2014ApJS..215....1V} are shown in Fig.~\ref{fig:hgps_sources_glon}. An
increased number of sources could be expected in the directions of the near
spiral arm tangents (see Fig.~\ref{fig:hgps_face_on_milky_way}). In the
longitude distribution, a slight excess of sources in the direction of
\textit{Scutum} and between \textit{Norma} and \textit{Crux-Centaurus} can be
observed. However, because of the limited sample size of 1--6 sources per bin,
no significantly increased source density in the direction of the spiral arm
tangents can be claimed.
For comparison, we also added the distributions of the \emph{Fermi}-LAT\
catalogs 3FGL and 2FHL to Fig.~\ref{fig:hgps_sources_glon}. While
\emph{Fermi}-LAT\ has a roughly uniform exposure, its sensitivity in the HGPS
region is reduced in the inner Galaxy, where the diffuse emission is brighter;
in addition, the source extraction is very different from the HGPS approach, so
that a direct comparison is not possible. Finally, we chose not to show the SNR
and pulsar distributions in Galactic longitude at all, because the coverage of
those catalogs is not uniform.
\begin{figure*}[ht!]
\centering
\includegraphics[width=18cm]{figures/hgps_sources_flux_extension}
\caption[Source flux versus size scatter plot]{
\textbf{Panel A:} Integral source flux ($E > 1$~TeV) vs. source size scatter
plot with colors representing the different classes of firmly identified
sources. For HGPS sources modeled as single Gaussians, the size is its width
($\sigma$). For sources modeled as multiple Gaussians (indicated with a circle
around the marker), the size is the RMS of the two-dimensional intensity
distribution (see Eq.~\ref{eq:source_size}). For sources with shell-like
morphology (SNRs), the size is the outer shell radius. To improve the
visibility of the plot, we do not show the SNR Vela~Junior (HESS~J0852$-$463)
at a size of $1\degr$ and a flux of 103\%~Crab. We illustrate the approximate
sensitivity limit of the HGPS, as defined in Eq.~\ref{eq:sensitivity_extended},
with an assumed point-source sensitivity of 1\%~Crab and an uncertainty band
with a factor $\pm$2 to represent the sensitivity variations in the survey
region (see caveats in main text).
\textbf{Panel B:} Distribution of the integral fluxes
($E > 1$~TeV) of the HGPS sources; colors are shown as in panel~A.
\textbf{Panel C:} Distribution of the HGPS source sizes; colors shown as in
panel~A. The first bin contains 30 sources, of which 17 are compatible with
point-like sources according to Eq.~\ref{eq:extension_ul}. As in panel~A, we
omit Vela~Junior, at a size of $1\degr$.
}
\label{fig:hgps_sources_flux_extension}
\end{figure*}
We compare the HGPS source integral fluxes ($E > 1$~TeV) to source sizes in
panel A of Fig.~\ref{fig:hgps_sources_flux_extension} and show the distributions
of fluxes and sizes separately in panels~B and C, respectively. In the
flux--size figure, we plot the approximate flux sensitivity limit of the HGPS as
a function of source size. One can see that the sensitivity worsens as the
source size increases, as expressed by Eq.~\ref{eq:sensitivity_extended}. The
HGPS sources indeed generally follow this trend. From
Fig.~\ref{fig:hgps_sources_flux_extension}, we therefore conclude that the HGPS
can be considered complete down to $\sim$10\%~Crab for sources $< 0.7\degr$. For
smaller sources ($< 0.1\degr$), the HGPS achieves completeness at a few \%~Crab
(see also Fig.~\ref{fig:hgps_sensitivity}).
We show the distribution of HGPS source integral fluxes ($E > 1$~TeV), which are
calculated assuming a spectral index of $\Gamma = 2.3$, in panel B of
Fig.~\ref{fig:hgps_sources_flux_extension}. At higher fluxes, we naturally
expect the number of sources to decrease. At the lowest fluxes, we also expect
the number to be small because we have reached the sensitivity limit of the HGPS.
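For orientation, the integral flux of a PL spectrum has a simple closed form:
for a differential flux $\phi(E) = \phi_0 \, (E/1\,\mathrm{TeV})^{-\Gamma}$, the
flux above 1~TeV is $\phi_0 / (\Gamma - 1)$. The sketch below (with a purely
hypothetical normalization $\phi_0$; this is not the HGPS pipeline code) checks
the closed form against a numerical integral:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

GAMMA = 2.3    # spectral index assumed for the HGPS map-based fluxes
PHI_0 = 1e-12  # hypothetical flux at 1 TeV [cm^-2 s^-1 TeV^-1]

# Integral photon flux above 1 TeV for phi(E) = PHI_0 * E^-GAMMA (E in TeV)
flux_closed = PHI_0 / (GAMMA - 1.0)  # analytic result
flux_numeric, _ = quad(lambda e: PHI_0 * e**-GAMMA, 1.0, np.inf)
assert np.isclose(flux_closed, flux_numeric)
print(f"F(>1 TeV) = {flux_closed:.2e} cm^-2 s^-1")
\end{verbatim}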
As can be seen in panel C of Fig.~\ref{fig:hgps_sources_flux_extension} and
despite the modest H.E.S.S.\ PSF ($0.08\degr$), the majority of sources are not
compatible with being point-like but are found to be significantly extended,
with sizes as large as $1\degr$. Owing to the methods used for background subtraction
(see Sect.~\ref{sec:adaptiveringmethod}), the HGPS is not sensitive to sources
with larger sizes.
The firmly identified HGPS sources (Sect.~\ref{sec:results:assoc_id}) are
highlighted in Fig.~\ref{fig:hgps_sources_flux_extension}. It can be seen that
all identified binary systems appear as point-like sources in the HGPS, as
expected. The PWNe appear to have various angular sizes, in agreement with the
diversity observed in the VHE PWN population \citep{HESS:PWNPOP}. Most
identified SNRs are extended, likely owing to selection bias (smaller SNRs are
difficult to identify, e.g., through shell-like morphology) and the H.E.S.S.\ PSF.
The identified composite SNRs, on the other hand, are typically smaller, owing
to the difficulty in disentangling VHE emission from the SNR shell and interior
PWN, similarly related to the H.E.S.S.\ PSF. In any case, it does not seem possible
to identify the nature of the many unidentified sources solely on the basis of
their sizes or a flux--size comparison.
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_sources_spectral_index}}
\caption[Catalog: Spectral index distribution]{
Distribution of the HGPS source power-law (PL) spectral indices. For
consistency, the PL model spectral index is used for all sources, even those
for which an exponential cutoff power law (ECPL) fits better. Taking statistical and
systematic uncertainties into account, all indices are compatible within
$2\sigma$ with the mean $\Gamma = 2.4 \pm 0.3$ of the distribution.
}
\label{fig:sources_index}
\end{figure}
Figure~\ref{fig:sources_index} shows the distribution of the HGPS source
power-law (PL) spectral indices $\Gamma$. For consistency, the PL model spectral
index is used for all sources, even those for which an exponential cutoff power
law (ECPL) fits better. The index distribution has a mean $\Gamma = 2.4 \pm
0.3$. This is compatible with the index ($\Gamma = 2.3$) adopted in the
production of the HGPS flux maps (Sect.~\ref{sec:fluxmaps}) and the HGPS PSF
computation (Sect.~\ref{sec:cc:maps:psf}). We note that individual source indices
have typical statistical uncertainties of order $\pm 0.2$ and a similar
systematic uncertainty; HGPS data are often not sufficient to precisely
constrain the index because the energy range covered with good statistical
precision is typically only about one decade ($1 \la E \la 10$~TeV). Finally,
the figure also shows how the firmly identified HGPS sources are distributed in
index, showing no strong tendency with respect to source class.
\begin{figure}[!th]
\resizebox{\hsize}{!}{\includegraphics{figures/hgps_sources_log_n_log_s}}
\caption[Source log N -- log S distributions]{
Cumulative $\log N(>S)$ -- $\log S$ distribution for the HGPS sources, showing
the number of sources $N$ above given flux thresholds $S$ (integral flux above
1~TeV in \%~Crab). The line and error band show the result of an unbinned
PL fit above a flux threshold of 10\%~Crab; the dashed line in the
1--10\%~Crab flux range illustrates the extrapolation of the PL to
fluxes below 10\%~Crab (for comparison, not fitted in that range).
}
\label{fig:hgps_sources_log_n_log_s}
\end{figure}
We show the cumulative $\log N(>S)$ -- $\log S$ distribution of HGPS source
integral fluxes ($E > 1$~TeV, obtained from the maps) in
Fig.~\ref{fig:hgps_sources_log_n_log_s}. The 78 HGPS sources
span a range in flux from 0.6\% Crab to 103\% Crab; 32 sources are above 10\%
Crab. We performed an unbinned likelihood fit of a PL model to the $\log N$ --
$\log S$ distribution (also shown in Fig.~\ref{fig:hgps_sources_log_n_log_s}),
using only the range $S > 10$\% Crab where we consider the HGPS survey mostly
complete. The best-fit value of the PL slope is $-1.3~\pm~0.2$ (for the
cumulative distribution), and the amplitude corresponds to $32\pm 5$ sources
above 10\%~Crab. This slope is consistent with Galactic models in which
equal-luminosity sources are homogeneously distributed in a thin disk, which
predict a slope of $-1.0$.\footnote{The flux $S$ of a source scales with the
distance $d$ like $S \propto L / d^2$, where $L$ is the intrinsic luminosity of
the source. For a thin disk, we have $N(>S) \propto d^2 \propto L / S$, which
corresponds to a slope of $-1.0$ in the cumulative $\log N$ -- $\log S$
distribution.}
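The scaling in this footnote is easy to verify numerically; the Monte Carlo
sketch below (illustrative only, not part of the HGPS analysis) draws
equal-luminosity sources uniformly in a thin disk and recovers a cumulative
slope close to $-1.0$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

# Equal-luminosity sources in a thin disk around the observer:
# in-plane distances follow p(d) ~ d.
n = 100_000
d = 20.0 * np.sqrt(rng.uniform(size=n))  # kpc; sqrt(u) gives p(d) ~ d
flux = 1.0 / d**2                        # S ~ L / d^2, with L = 1

# Slope of the cumulative log N(>S) -- log S relation, fitted over the
# brightest decile to avoid the artificial edge at d = 20 kpc.
s_sorted = np.sort(flux)[::-1]
n_cum = np.arange(1, n + 1)
bright = s_sorted > np.percentile(s_sorted, 90)
slope, _ = np.polyfit(np.log10(s_sorted[bright]),
                      np.log10(n_cum[bright]), 1)
print(f"cumulative slope: {slope:.2f}")  # close to -1.0
\end{verbatim}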
The only robust statement that can be inferred from the $\log N$ -- $\log S$
distribution of HGPS sources is that it provides a lower limit on the true $\log
N$ -- $\log S$ distribution; for example, there are at least 70
sources above 1\%~Crab. If one assumes that $\log N$ -- $\log S$ distributions
are always concave (as is the case for most ``reasonable'' spatial distributions
and source luminosity functions encountered in the literature), then the extrapolation
of the PL fit shown in Fig.~\ref{fig:hgps_sources_log_n_log_s} sets an upper
limit of $\sim 600$ sources above 1\%~Crab, with a statistical error of a factor
of 2.
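The $\sim$600 figure is simply the fitted PL evaluated at 1\%~Crab; as a
one-line check using the best-fit values quoted above:
\begin{verbatim}
# N(>S) = 32 * (S / 10% Crab)^(-1.3), evaluated at S = 1% Crab:
print(32 * (1.0 / 10.0) ** -1.3)  # ~640 sources above 1% Crab
\end{verbatim}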
More detailed analyses of the $\log N$ -- $\log S$ distribution or of the
flux-size distribution are possible in principle but in practice do not yield
robust results because of the limited number of sources and the large
uncertainties concerning the effective sensitivity achieved. We emphasize that
the catalog creation procedure is complex (special treatment of known shell-type
sources, large-scale emission model component, 15 discarded and several merged
components; see Sect.~\ref{sec:cc:component_classification}), with the net
effect that the sensitivities shown in Fig.~\ref{fig:hgps_sensitivity} and
panel~A of Fig.~\ref{fig:hgps_sources_flux_extension} are not reliably achieved,
because those sensitivity estimates assume isolated sources, no underlying
large-scale emission or source confusion, and a detection threshold of
$5\sigma$, whereas the component detection threshold of $TS=30$ corresponds to
$\sim 5.5\sigma$.
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_face_on_milky_way}
\caption[Location of Galactic H.E.S.S.\ sources in the Galaxy]{
Illustration of the location of identified H.E.S.S.\ sources in the Galaxy with
respect to HGPS completeness (sensitivity limits). This is a face-on view; the
spiral arms \citep{2014ApJS..215....1V} are schematically drawn as gray bars.
The HGPS horizons for source luminosities of $10^{33}$ and $10^{34}$~erg/s (for
a putative 5$\sigma$ detection of a point-like source, same as
Fig.~\ref{fig:hgps_sensitivity}) are depicted by light blue and light brown
lines (and shaded regions therein), respectively. The source distances are from
SNRcat \citep{SNRcat} and the ATNF pulsar catalog \citep{Manchester:2005}. When
no distance uncertainty was available, we applied a generic uncertainty of a
factor of two on the distance. The three labeled sources are the Galactic
$\gamma$-ray\ sources outside the HGPS region detected by H.E.S.S.
}
\label{fig:hgps_face_on_milky_way}
\end{figure*}
Figure~\ref{fig:hgps_face_on_milky_way} depicts a face-on representation of the
Galaxy to visualize how much of it the HGPS has been able to probe at different
sensitivity levels. Two limits are
shown, illustrating the sensitivity detection limit (horizon) of the HGPS for
potential point-like sources with presumed luminosity of $10^{33}$ and
$10^{34}$~erg/s. Given the achieved sensitivity in the Galactic plane, it is
clear that H.E.S.S.\ has only probed a small fraction of the Galaxy -- just up to a
median distance of $7.3$~kpc for bright ($10^{34}$~erg/s) point-like sources
(and less for extended sources). Furthermore, this illustrative look at survey
completeness strengthens the hypothesis that the large-scale emission described
in Sect.~\ref{sec:cc:large-scale-emission} could be partly explained by a
population of unresolved sources, presumed to be distant.
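The horizons quoted here follow from inverting the inverse-square law,
$d = \sqrt{L / (4\pi F_{\mathrm{min}})}$. The sketch below uses a purely
hypothetical energy-flux sensitivity, since the actual HGPS value varies across
the plane:
\begin{verbatim}
import numpy as np

KPC_CM = 3.086e21  # centimeters per kiloparsec

def horizon_kpc(luminosity_erg_s, fmin_erg_cm2_s):
    # Detection horizon of an isotropic source:
    # d = sqrt(L / (4 pi F_min))
    d_cm = np.sqrt(luminosity_erg_s / (4.0 * np.pi * fmin_erg_cm2_s))
    return d_cm / KPC_CM

# Hypothetical sensitivity of 2e-12 erg cm^-2 s^-1, illustration only:
print(f"{horizon_kpc(1e34, 2e-12):.1f} kpc")  # ~6.5 kpc at 1e34 erg/s
\end{verbatim}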
\subsection{Comparison with previous VHE publications}
\label{sec:results:previously}
In total, we reanalyzed \hgpsSourceCountReAnalysed\ VHE $\gamma$-ray\ sources that
have been the subject of past H.E.S.S.\ publications. In this section we present a
systematic comparison of the present HGPS results with the latest published
results, as summarized in \textit{gamma-cat}\footnote{\url{https://github.com/gammapy/gamma-cat}, accessed
July~24,~2017}, the open TeV source catalog.
We associated HGPS sources with previous analyses simply by the name of the
source, which was unique except for three cases: \object{HESS J1800$-$240},
\object{HESS J1746$-$308}, and HESS~J1930$+$188, which we discuss in detail in
Sect.~\ref{sec:results:previously:changed}. We therefore excluded these three
sources from the systematic comparison.
To further identify the cases for which we obtained significantly different
results from previously published analyses, we compared the position, size,
spectral index, and flux of the remaining uniquely associated sources, taking
statistical and systematic errors of the measurements into account. For each of
these parameters, we estimated the total uncertainty $\sigma_{\mathrm{tot}}$ as
the 1$\sigma$ statistical and systematic uncertainties added in quadrature. We
estimated this quantity for both the HGPS-derived source parameters and
previously published H.E.S.S.\ values.
The systematic uncertainties on position and size are given in
Sect.~\ref{sec:cc:localisation} and Sect.~\ref{sec:cc:extension_ul},
respectively. Additionally, we assumed a systematic uncertainty
$\Delta\Gamma_{\mathrm{syst}} = 0.2$ on the spectral index and 30\% on the flux
of the source, in agreement with previous estimates \citep{ref:hesscrab}. We
then defined the criterion for significant outliers as
\begin{equation}
\label{eq:morphoutliercriterion}
\Delta_{\mathrm{HGPS-H.E.S.S.}} >
2 \sqrt{\sigma_{\mathrm{tot, HGPS}}^2 + \sigma_{\mathrm{tot,H.E.S.S.}}^2}
,\end{equation}
where $\Delta_{\mathrm{HGPS-H.E.S.S.}}$ is the difference between the corresponding
parameter values. When comparing the position we chose the angular separation as
comparison parameter. We note that for many sources, the data sample used here
is significantly different from that used in the publication, hence the
correlation of statistical errors is usually not too large.
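As an illustrative sketch of this test (with purely hypothetical numbers; this
is not the actual catalog pipeline), Eq.~\ref{eq:morphoutliercriterion} reduces
to a few lines:
\begin{verbatim}
import numpy as np

def is_outlier(val_hgps, stat_hgps, syst_hgps,
               val_prev, stat_prev, syst_prev):
    # The HGPS-vs-previous difference must exceed twice the quadrature
    # sum of the two total 1-sigma uncertainties (stat (+) syst each).
    tot_hgps = np.hypot(stat_hgps, syst_hgps)
    tot_prev = np.hypot(stat_prev, syst_prev)
    return abs(val_hgps - val_prev) > 2.0 * np.hypot(tot_hgps, tot_prev)

# Purely hypothetical flux values (arbitrary units):
print(is_outlier(10.0, 0.5, 1.0, 4.0, 0.4, 0.8))
# -> True: difference 6.0 vs threshold ~2.9
\end{verbatim}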
We first discuss the general level of agreement between the current and previous
analyses (excluding the outliers) in Sect.~\ref{sec:results:previously:overview}
and later discuss the outliers of the comparison individually in
Sect.~\ref{sec:results:previously:changed}.
\subsubsection{Agreement with previous publications}
\label{sec:results:previously:overview}
For the vast majority of sources, we find that there is good agreement between
the HGPS-derived position, morphology, and spectrum within the statistical and
systematic uncertainties.
\paragraph*{Position\\}
We found the position of 43 (out of \hgpsSourceCountCrossChecked) sources to be
compatible with the previously published value, according to
Eq.~\ref{eq:morphoutliercriterion}. For point-like sources we found an average
shift of $0.02\pm0.01$~deg, while for extended sources the value was
$0.06\pm0.05$~deg. Both values agree well with the expected scatter considering
the statistical and systematic uncertainties on the measurements. As an
additional check, we also verified that the positions of the identified
$\gamma$-ray\ binaries (known point sources) HESS~J1826$-$148 and HESS~J1302$-$638
are in good agreement (within 40$\arcsec$) with the reference positions of the
corresponding objects LS~5039 and PSR~B1259$-$63 as listed in
SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad}}.
\paragraph*{Size\\}
Comparing the sizes of the extended sources, we found 30 (out of 35) sources to
be compatible with the previously published value. The average size difference
for the extended sources was $\sim$18\%, with the distribution of values having
a width of $\sim$40\%. This indicates that with the current
analysis we measured slightly larger sizes of the sources on average, but the
distribution is dominated by a large scatter. We expect the scatter to result
mainly from differences in the analysis procedure. Previous analyses mainly
fitted single Gaussian morphologies, while in this analysis we allowed for
multiple Gaussian components. Further differences are the addition of the
large-scale emission model and the systematic modeling of emission from
neighboring sources.
Previous publications found seven sources to be compatible with a point-like
source; in the current analysis, all of these are again compatible with being
point-like. Additionally, we identified the following three cases
that are compatible with a point-like source according to
Eq.~\ref{eq:extension_ul}, which were previously found to be extended:
\begin{enumerate}
\item For \object{HESS J1427$-$608} we measured a size of $0.048\pm0.009$\degr,
compared to $0.063\pm0.010$\degr\ in \cite{ref_gps_unids2008}. This source is a
borderline case that just meets our criterion for a point-like source.
\item For HESS~J1714$-$385 we found a size of $0.034\pm0.011$\degr\ compared to
$0.067\pm0.017$\degr\ in \cite{2008AA...490..685A}. With the current analysis,
a smaller size was found because underlying emission was modeled by separate
emission components (see Fig.~\ref{fig:hgps_catalog_model}).
\item We now measure the size of \object{HESS J1808$-$204} to be
$0.058\pm0.014$\degr\ (consistent with point-like, in the definition of
Eq.~\ref{eq:extension_ul}), compared to the previously measured size
$0.095\pm0.015$\degr\ (extended) \citep{HESS:1808}. This discrepancy is due to
the HGPS's inclusion of a large-scale emission component that now models
$\gamma$-ray\ excess previously accounted for in the source component itself.
\end{enumerate}
\paragraph*{Flux\\}
We found the flux of 42 (out of \hgpsSourceCountCrossChecked) sources to be
compatible with the previously published value, according to
Eq.~\ref{eq:morphoutliercriterion}.
The average difference in flux for extended sources was 3\%, with a width of
43\% for the distribution of values. While the average value is compatible
with previous analyses, we still found a large scatter in the distribution
(albeit compatible with the systematic and statistical errors).
A fair comparison between flux values obtained with the current method and
earlier analyses proved to be difficult again because of fundamental differences
between the methods used. In previous publications, aperture photometry was
mostly used, while in this analysis the main flux measurement was based on a
model fit, taking the PSF and morphology of the source and large-scale emission
into account. Flux estimate differences with these two methods are shown in
Fig.~\ref{fig:hgps_sources_spectrum_map_flux_comparison} (both measures from the
HGPS analysis, not with respect to previous publications). Many of the
differences in spectra and fluxes measured in the HGPS analysis and previous
publications are the result of changes in the spectral extraction region
(position and size).
\paragraph*{Spectral index\\}
For all sources we found the spectral power-law indices to be compatible with
the previously published values. The mean difference in spectral index was
$0.04$, with a width of $0.23$ for the distribution. This is well within the
expected scatter, taking the statistical and systematic uncertainties of the
measured spectral indices into account.
\subsubsection{Differences with previous publications}
\label{sec:results:previously:changed}
In the following paragraphs, we list and discuss the outliers as identified by
Eq.~\ref{eq:morphoutliercriterion}.
\paragraph*{HESS~J0835$-$455\\}
This source (\object{Vela X}) exhibits complex morphology, and the HGPS analysis
best models the VHE emission as a superposition of three Gaussian components
with an average size $0.58\degr \pm 0.052\degr$. This value is somewhat larger
than the value published first in \citet{Aharonian:2006d}, where it was modeled
as a single asymmetric Gaussian of size $0.48\degr \pm 0.03\degr \times
0.36\degr \pm 0.03\degr$. However, a more recent H.E.S.S.\ publication
\citep{2012A&A...548A..38A} studied the complex emission more thoroughly. It
fit profiles of the emission along two perpendicular axes, the main one aligned
with the primary orientation of the emission. Along the major axis, the study
measured a Gaussian size $0.52\degr \pm 0.02\degr$, and along the minor axis,
two Gaussians (sizes $0.12\degr \pm 0.02\degr$ and $0.60\degr \pm 0.04\degr$)
were required to best fit the emission. The HGPS model of the emission from
HESS~J0835$-$455 is thus largely compatible with the most recent dedicated study
of the VHE emission, and the apparent discrepancy is simply a result of
comparing two different multi-component models with our general outlier
criterion (Eq.~\ref{eq:morphoutliercriterion}).
\paragraph*{HESS~J1646$-$458\\}
\object{HESS J1646$-$458} is a complex emission region located in the vicinity
of the stellar cluster \object{Westerlund 1}. Its morphology suggests it
consists of multiple sources. \citet{2012A&A...537A.114A} separated the emission
into at least two distinct features (with radii $0.35\degr$ and $0.25\degr$,
respectively) as well as some structured extended emission, distributed over the
signal region of 2.2\degr\ diameter, and even extending beyond. A flux above
1~TeV in the signal region of $(7.6 \pm 1.3 \pm 1.5)\times
10^{-12}$~cm$^{-2}$~s$^{-1}$ was derived, and a spectral index of $2.19 \pm 0.08
\pm 0.20$. An ON-OFF background estimation technique was used to cope with the
large source size.
In the HGPS analysis, this complex emission is modeled by a single Gaussian
component of 0.5\degr\ size shifted by 0.47\degr\ from the center of the region
used in \citet{2012A&A...537A.114A}, with a lower flux above 1~TeV of $(5.48 \pm
0.46) \times 10^{-12}$~cm$^{-2}$~s$^{-1}$ and a steeper index of $2.54 \pm 0.13$.
Given the complex morphology and the large size of the spectral extraction
region used in \citet{2012A&A...537A.114A}, significant differences in source
parameters are to be expected; in the HGPS analysis part of the flux is absorbed
in the large-scale diffuse background.
\paragraph*{HESS~J1708$-$410\\}
The flux above 1 TeV of HESS~J1708$-$410 is found to be smaller in the HGPS
analysis than in \citet{ref_gps_unids2008}. While the size of the source is
similar in both cases, the different approaches used in the HGPS analysis lead
to different integration radii used to derive the source spectrum. The HGPS
analysis uses an integration radius about half as large as that used in the
dedicated analysis, which explains the apparent discrepancy.
\paragraph*{HESS~J1729$-$345\\}
For HESS~J1729$-$345, the HGPS analysis finds a flux above 1 TeV larger than in
\citet{2011AandA...531A..81H}. Because of the HGPS morphology modeling of the
source and its procedure to define the integration radius, the spectrum of this
source is derived in a region with a radius about two times larger than in the
dedicated publication, accounting for the observed difference.
\paragraph*{HESS~J1745$-$303\\}
HESS~J1745$-$303 was studied in \citet{2008A&A...483..509A} with 80~h of data.
Its morphology is complex and three subregions, called A, B, and C, were
discussed. In the HGPS analysis, with more than 160~h on the region, two
distinct sources are detected: HESS~J1745$-$303 and HESS~J1746$-$308. The
former encloses the hotspots A and C and a fraction of region B. A second
source is now detected at $b = -1.11\degr$ latitude. This source contains part
of hotspot B and emission at large latitudes that was not significant before,
likely due to the additional livetime obtained since 2008. It is fainter and
its spectrum is very steep but poorly constrained. There is also a third
extended ($\sigma \sim 0.5\degr$) Gaussian component in the region. It is
currently considered to be a diffuse component. The association of the two
sources and the extended component is unclear and the exact morphology of the
VHE emission in the region will require dedicated studies.
\paragraph*{HESS~J1800$-$240\\}
In \citet{Aharonian:2008f} the emission in the region of W~28 was found to be
split into two components: HESS~J1801$-$233 (addressed below), which is not
significant in the HGPS analysis and is coincident with the W~28 SNR itself, and
a complex region HESS~J1800$-$240 offset by $0.5\degr$ to the south. The latter
was previously found to be resolved into three hotspots dubbed HESS~J1800$-$240
A, B, and C \citep{Aharonian:2008f}. Since sources HESS~J1800$-$240 A and B are
spatially coincident with molecular clouds, \citet{Aharonian:2008f} suggested
that they were produced by CRs that had escaped the SNR and had illuminated
ambient gas clouds, making this system an archetype of CR escape from evolved
SNRs \citep[see, e.g.,][]{Aharonian1996, 2015SSRv..188..187S,
2015arXiv151002102G}.
In the HGPS analysis, however, only one source is redetected, HESS~J1800$-$240,
as one large Gaussian component centered on hotspot B. Splitting the emission
into several components does not result in a high enough TS\ for each of them
to qualify as a significant source in the analysis shown here.
\paragraph*{HESS~J1825$-$137\\}
HESS~J1825$-$137 is a large PWN with a bright core surrounded by extended,
asymmetric emission. The HGPS analysis finds it has a size of $0.46\degr \pm
0.03\degr$, using three Gaussian components to model the entire VHE $\gamma$-ray\
emission. This is significantly larger than the $0.24\degr \pm 0.02\degr$
obtained with a single symmetric Gaussian model or the $0.23\degr \pm 0.02\degr
\times 0.26\degr \pm 0.02\degr$ with a single asymmetric Gaussian in
\citet{Aharonian:2006g}. These models were stated to have rather poor $\chi^2$
goodness-of-fit values. The more complex approach taken for the morphology
modeling in the HGPS improves the description of the $\gamma$-ray\ emission from
this PWN and accounts for the differences with respect to previous, simpler
modeling.
\paragraph*{HESS~J1837$-$069\\}
The HGPS analysis finds HESS~J1837$-$069 to have a size of $0.36\degr \pm
0.03\degr$ based on modeling the VHE $\gamma$-ray\ emission as three Gaussian
components. This is larger than the size previously derived using a single
asymmetric Gaussian \citep{ref:gps2006}, i.e., $0.12\degr$ by $0.05\degr$; and
using a single Gaussian \citep{2008AIPC.1085..320M}, i.e., $0.22\degr$. The more
complex modeling of the HGPS, which also takes into account more of the extended
nebular emission from this identified PWN, explains the apparent discrepancy.
Consequently, we used a larger region (twice the radius compared to
\citet{ref:gps2006}) to derive the spectrum, leading to an integral flux above
1~TeV that is larger by a factor of $\sim$3 than in the dedicated publication.
\paragraph*{HESS~J1857$+$026\\}
The size of the source \object{HESS J1857$+$026} is significantly larger in this
analysis than previously published in \citet{ref_gps_unids2008}. In the latter,
the source is fit with an asymmetric Gaussian ($0.11\degr \pm 0.08\degr \times
0.08\degr \pm 0.03\degr$), whereas the HGPS analysis best models the source with
two Gaussian components for an approximate size of $0.26\degr \pm 0.06\degr$.
The difference in size is explained by the multicomponent approach of the HGPS
that better takes into account the larger scale emission underneath the central
bright core.
\paragraph*{\object{HESS J1908$+$063}\\}
The position and size published in \citet{2009A&A...499..723A} are significantly
different from those obtained in the HGPS analysis. The position is offset by
$0.17\degr$ and the size is found to be $0.48\degr \pm 0.04\degr$, which is
$0.14\degr$ larger. We note that the size we find is consistent with that
measured by the VERITAS Collaboration~\citep{2014ApJ...787..166A}, even though
the positions differ by $0.3\degr$. A plausible cause for these discrepancies is
that this is a large source likely composed of multiple components, where
results are expected to be sensitive to the morphology assumptions and to
details in background modeling techniques, in particular, if those tend to
absorb large-scale features.
\paragraph*{HESS~J1923$+$141\\}
The VHE $\gamma$-ray\ source \object{HESS J1923$+$141} \citep[preliminary H.E.S.S.\
results published in][]{W51:HESS_ICRC} is spatially coincident with the
\object{W 51} region, studied in detail with the MAGIC IACT \citep{MAGIC:W51}.
The HGPS results are generally compatible with those from MAGIC. However, the
latter shows evidence for a $\gamma$-ray\ source composed of two components above
1~TeV, which H.E.S.S.\ cannot yet confirm. One component is coincident with
the interaction region between \object{W 51C} and \object{W 51B}, while the
other is coincident with the potential PWN \object{CXOU J192318.5$+$140305}
\citep{Koo2005}, suggesting that HESS~J1923$+$141 may be a composite of VHE
emission of different astrophysical origins.
\paragraph*{HESS~J1930$+$188\\}
\label{sec:HESS_J1930p188}
The VHE $\gamma$-ray\ source, discovered with VERITAS \citep[with the identifier
VER~J1930$+$188,][]{2010ApJ...719L..69A}, is coincident with the composite
\object{SNR G54.1$+$0.3} and the pulsar \object{PSR J1930$+$1852}. We report on
the H.E.S.S.\ observations of this source for the first time here. The HGPS source
is found to have a position slightly displaced (by $0.04\degr$) from the pulsar
and the VERITAS best-fit position. Despite the agreement with the VERITAS spectral
index, the integral flux above 1 TeV found in our analysis is $\sim$40\% lower
than their published flux. We note, however, that the apparent discrepancy with
VERITAS is not confirmed by our cross-check analysis, which yields a flux for
this source that is larger than the main-analysis value by more than the
nominal 30\% systematic flux uncertainty and is in agreement with the VERITAS
measurement.
\subsubsection{Sources not redetected}
\label{sec:results:previously:missing}
In total, there are \hgpsSourceCountMissingStr\ previously published VHE
$\gamma$-ray\ sources that are not redetected with the current HGPS analysis. All
of these are rather faint sources which, for the HGPS analysis, yield
significances close to the HGPS detection threshold of $TS=30$. We consider
these as real sources of $\gamma$-rays; the nondetection in the HGPS is primarily
a result of differences between the HGPS analysis and the source-specific
analysis methods used in previous publications.
We found that some of the most relevant differences are
\begin{enumerate}
\item event reconstruction and $\gamma$-hadron separation cuts that are less
sensitive compared to more specialized methods that have been used in individual
source analyses;
\item higher energy threshold in the HGPS analysis, in conjunction with a soft
spectrum of the tested source;
\item use of the $2\degr$ FoV offset cut (see Sect.~\ref{sec:events_map}), which
is tighter than the value used in many previous H.E.S.S.\ publications ($2.5\degr$
or even $3\degr$).
\end{enumerate}
In addition, the use of a large-scale emission model and the modeling of nearby
extended components and overlapping sources modify the measured flux and hence
the significance of a source compared to previous analyses, where larger scale
background features were accounted for in different ways (e.g., partly absorbed
in the ring background). Given these differences, it is not surprising that a
few faint sources fail the HGPS detection criteria.
In the following paragraphs, we describe the individual cases in more detail.
For completeness, we added all missing sources to the final HGPS catalog;
the source parameters were taken from the corresponding publication (see also
Table~\ref{tab:hgps_external_sources}).
\paragraph*{HESS~J1718$-$374 and HESS~J1911$+$090\\}
The VHE $\gamma$-ray\ sources HESS~J1718$-$374 and HESS~J1911$+$090
(Figs.~\ref{fig:hgps_survey_mwl_2} and \ref{fig:hgps_survey_mwl_1}) were
previously detected toward the \object{SNR G349.7$+$0.2} and W~49B /
SNR~G43.3$-$0.2, respectively. Both are thought to result from interactions
with molecular clouds and exhibit correspondingly steep (soft) spectra, which
have PL indices $\Gamma = 2.80\pm0.27$ \citep{2015AandA...574A.100H} and
$3.14\pm0.24$ \citep{HESS:W49}, respectively. The energy threshold of the
analyses is therefore key to detecting these sources. As described in
Sect.~\ref{sec:maps}, the maps that serve as a starting point for the source
catalog have been produced using the hard cuts configuration and a conservative
safe energy threshold, which explains why these sources are not detected in the
HGPS analysis.
\paragraph*{HESS~J1741$-$302\\}
The unidentified source HESS~J1741$-$302 is located on the Galactic plane ($b =
0.05\degr$) and $\sim$1.7\degr\ away from the Galactic center. With an integral
flux of $\sim$1\% Crab above 1~TeV it is one of the faintest H.E.S.S.\ sources
detected so far \citep{HESS:1741}. Because of the addition of the large-scale
emission model in the HGPS analysis, HESS~J1741$-$302 does not reach the HGPS
$TS=30$ detection threshold.
\paragraph*{HESS~J1801$-$233\\}
HESS~J1801$-$233 is part of the HESS~J1800$-$240 and HESS~J1801$-$233 source
complex discussed above, characterizing emission features of the SNR~W~28 region
\citep{Aharonian:2008f}. The emission was found to be split into two components:
HESS~J1801$-$233, which is coincident with the northeastern boundary of W~28
where the shockwave is interacting with a molecular cloud, and a complex region
HESS~J1800$-$240 offset by $0.5\degr$ to the south. HESS~J1801$-$233 does not
reach the $TS=30$ threshold and is therefore not found to be significant in the
HGPS analysis. We note that the $\gamma$-ray\ emission from W~28 is bright in the
GeV range and is clearly detected above 50~GeV \citep{2FHL}. It has a steep
spectral index of $2.7\pm 0.3$ at VHE \citep{Aharonian:2008f}. It is therefore
not detected here because of our higher analysis energy threshold (about
400\,GeV at a longitude of $7\degr$, see
Fig.~\ref{fig:hgps_energy_threshold_profiles}) and because of the inclusion of
the large-scale emission model in our analysis, which reduces the significance
of such a faint source. Furthermore, we reiterate that HESS~J1800$-$240 is
detected in the HGPS as one large Gaussian source, see
Sect.~\ref{sec:results:previously:changed}, rather than three individual
hotspots as in \citet{Aharonian:2008f}. This potentially also contributes to a
reduction of the significance of this previously established source
HESS~J1801$-$233.
\subsection{Comparison with the cross-check analysis}
\label{sec:results:xcheck}
For most sources, the spectral fit results reported in this catalog agree with
those obtained from the independent cross-check analysis (see
Sect.~\ref{sec:cc:discussion}). For the following sources, however, larger
differences, exceeding the systematic errors, are observed. Several factors
could explain these differences, such as the lower energy threshold in the
cross-check analysis, the differences in the morphology models, or the fact that
the cross-check spectrum analysis is run for the positions and sizes obtained
with the main analysis.
\begin{itemize}
\item HESS~J1503$-$582 (see Sect.~\ref{sec:HESS_J1503m582}): While the spectral
indices are compatible, the derived integral flux above 1 TeV is about two times
higher in the main analysis than in the cross-check analysis.
\item HESS~J1646$-$458 (Westerlund 1): the cross-check analysis gives a spectrum
that is about two times brighter around 1~TeV, with a curvature or cutoff
leading to similar fluxes as the main analysis at low and high energies. We
would like to stress again that the HGPS analysis for this source is not very
reliable, because the source size is similar to the H.E.S.S.\ field of view and a
more careful individual study and background estimation are needed, as explained
in Sect.~\ref{sec:results:previously:changed}, which points out the differences
with respect to the previously published measurement.
\item \object{HESS J1718$-$385}: For both the main and cross-check analyses, the
preferred spectral model is a power law with an exponential cutoff. The cutoff
energies are compatible, and the spectra are in agreement above $\sim$3~TeV.
However, below this energy, some discrepancy is observed as the main analysis
spectral fit yields a spectral index that is harder than in the cross-check
analysis, resulting in an integral flux above 1 TeV about two times lower in the
main analysis than in the cross-check analysis.
\item HESS~J1729$-$345: While the derived spectral indices are compatible, the
integral flux above 1 TeV is about two times higher in the cross-check analysis
than in the main analysis.
\item HESS~J1746$-$308: The large spectral index derived from the main analysis
could not be confirmed by the cross-check analysis. The differential flux values
at 1 TeV are compatible, but the discrepancy in the obtained spectral indices
leads to an integral flux above 1 TeV about two times higher in the cross-check
analysis than in the main analysis.
\item \object{HESS J1852$-$000} (see Sect.~\ref{sec:HESS_J1852m000}): The derived
spectral indices are compatible, but the integral flux above 1 TeV is about two
times higher in the cross-check analysis than in the main analysis.
\end{itemize}
Spectral model results for these six sources should therefore be treated with
caution.
\subsection{New VHE sources}
\label{sec:results:new}
During the construction of the HGPS catalog, statistically significant VHE
$\gamma$-ray\ emission was detected from 16 sources that were not
previously known or for which only preliminary detections had been published
(e.g., in conference proceedings). All of these new sources are confirmed by the
cross-check analysis --- we do not expect any of them to be a false
detection (see Sect.~\ref{sec:cc:components} and
\ref{sec:cc:component_classification}). The morphological and spectral properties
of these new, confirmed VHE sources are provided in
Tables~\ref{tab:hgps_catalog}~and~\ref{tab:hgps_spectra}, and their spectra are
shown in Fig.~\ref{fig:hgps_spectra_new_sources}. Each new source is also
briefly described in the following sections, in the context of its MWL
environment and possible origin of the VHE $\gamma$-rays.
\begin{figure*}
\includegraphics[width=\textwidth]{figures/hgps_spectra_new_sources}
\caption[Spectra of the new sources]{
Fitted power-law spectral models with uncertainty bands and flux points for new
sources.
}
\label{fig:hgps_spectra_new_sources}
\end{figure*}
\subsubsection{HESS~J1119$-$614}
\label{sec:HESS_J1119m614}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1119m614}}
\caption[New source image: HESS~J1119$-$614]{
Significance ($\approx\sqrt{TS}$) of the VHE $\gamma$-ray\ excess, centered on the
new source HESS~J1119$-$614, with the H.E.S.S.\ PSF for this data set shown
inset. The black circle at the center indicates the 68\% uncertainty in the
best-fit centroid of the VHE emission. The white circle represents the 70\%
containment region of the emission (\texttt{R\_SPEC}, used also for spectral
measurement). The approximate size of the radio shell of \object{SNR
G292.2$-$0.5} is shown as a green circle and the \object{PWN G292.15$-$0.54} as
a green marker. The position of the pulsar \object{PSR J1119$-$6127} is denoted
by a cyan diamond. The FoV is $1.5\degr \times 1.5\degr$.
}
\label{fig:HESS_J1119m614}
\end{figure}
We confirm the discovery of VHE $\gamma$-ray\ emission from HESS~J1119$-$614
(Fig.~\ref{fig:HESS_J1119m614}) and identify it as the composite SNR
G292.2$-$0.5. We base the firm identification on the spatial
coincidence with the SNR and its associated PWN G292.15$-$0.54 and highly
magnetized pulsar PSR~J1119$-$6127. H.E.S.S.\ previously published
\citep{Djannati-Atai09} preliminary source properties that are compatible with
the HGPS results.
A compact (size $6\arcsec \times 15\arcsec$), nonthermal PWN has been detected
in X-rays \citep{Gonzalez03,Safi-Harb08} and is considered a candidate PWN in HE
$\gamma$-rays\ \citep{Acero13}. It is powered by the energetic pulsar PSR
J1119$-$6127, with spin-down luminosity $\dot{E} = 2.3 \times
10^{36}$~erg~s$^{-1}$ and distance $d = 8.4 \pm 0.4$~kpc \citep{Caswell04}. The
pulsar has been detected in radio \citep{Camilo00} and HE $\gamma$-rays\
\citep[][as \object{3FGL J1119.1$-$6127} in the latter]{Parent11,3FGL} and is
characterized by a relatively high surface B-field ($4.1 \times 10^{13}$~G).
Despite it being a rotation-powered pulsar, it has recently joined the other
high-B pulsar \object{PSR J1846$-$0258} in revealing a magnetar-like behavior
\citep{2016ApJ...829L..25G, 2016ATel.9378....1Y, 2016ATel.9282....1A}. It is
further notable for being among the handful of pulsars for which braking indices
have been measured, in this case $n = 2.684 \pm 0.002$ \citep{Weltevrede11}, as
opposed to simply assuming $n = 3$, giving a more precise characteristic age
$\tau_{\mathrm{c}} = \frac{P}{(n-1)\dot{P}} = 1.9$~kyr, where $P$ and $\dot{P}$
are the currently measured period and period derivative, respectively.
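To make the arithmetic explicit (an illustrative aside using only the quantities
quoted above): for the canonical braking index, the expression reduces to the
usual characteristic age,
\[
\tau = \frac{P}{(n-1)\dot{P}} \quad\longrightarrow\quad
\tau_{\mathrm{c}} = \frac{P}{2\dot{P}} \quad (n = 3),
\]
so the measured $n = 2.684$ increases the age estimate by a factor
$2/(n-1) \approx 1.19$ relative to the $n = 3$ value.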
Considering the luminosity of HESS~J1119$-$614, $L_{\gamma}(\mathrm{1-10~TeV}) =
2.4\times10^{34} (d/8.4\mathrm{kpc})^2 $~erg~s$^{-1}$, the apparent efficiency
of converting the pulsar's rotational energy to $\gamma$-rays,
$\epsilon_{\mathrm{1-10~TeV}} \equiv L_{\gamma} / \dot{E} = 1.1\%$, is
compatible with the efficiencies ($\la 10\%$) of other VHE sources that have
been identified as PWNe \citep{Kargaltsev13}. The offset of the VHE emission
from this young pulsar, where the X-ray PWN is located, is not statistically
significant with respect to the uncertainty on the best-fit VHE centroid ($\pm
0.02\degr$).
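For transparency, the quoted luminosity and efficiency follow from the measured
1--10~TeV energy flux via the standard isotropic relation (the flux value below
is inferred back from the quoted luminosity and is illustrative only):
\[
L_{\gamma} = 4\pi d^{2} F_{1-10~\mathrm{TeV}}, \qquad
\epsilon_{\mathrm{1-10~TeV}} = \frac{L_{\gamma}}{\dot{E}}
= \frac{2.4\times10^{34}~\mathrm{erg~s^{-1}}}{2.3\times10^{36}~\mathrm{erg~s^{-1}}}
\approx 1.1\%,
\]
where $d = 8.4$~kpc $\approx 2.6\times10^{22}$~cm implies
$F_{1-10~\mathrm{TeV}} \approx 2.8\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$.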
The age of SNR~G292.2$-$0.5 is in the range 4.2$-$7.1~kyr \citep{Kumar12}. This
can be reconciled with the characteristic age of the pulsar if the braking
index~$n$ had been much smaller than its current value until recently. This
assumption is reasonable in light of recent evidence for erratic radio timing
behavior from the pulsar \citep{Weltevrede11}. The X-ray emission from the SNR
is predominantly thermal and has an additional hard, nonthermal, X-ray
component. This nonthermal emission is likely from the PWN, although an origin
in the SNR reverse shock could not be ruled out \citep{Kumar12}.
The X-ray spectral measurements suggest the SNR is generally expanding in a
low-density medium, appearing to disfavor a hadronic origin for the VHE
$\gamma$-rays\ \citep{Drury94}. However, there is also evidence for localized,
high-density regions near the eastern SNR shell, including dark clouds and CO
features \citep{Kumar12}. We cannot confirm the claim by \cite{Kumar12}, based
on preliminary H.E.S.S.\ results \citep{Djannati-Atai09}, that no VHE emission is
detected from the eastern SNR shell, as it is well within the VHE emission
region in the HGPS analysis.
In conclusion, while the identification with the composite SNR and PWN system is
firm, it is not yet clear whether the VHE emission originates in the SNR shock,
either leptonically, from the shell itself, or hadronically, from interactions
with ambient media; the PWN; or some combination thereof.
\subsubsection{HESS~J1457$-$593}
\label{sec:HESS_J1457m593}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1457m593}}
\SingleSourceCaption{HESS~J1457$-$593}{
Additionally, the SNR G318.2$+$0.1 is shown by plotting its 843-MHz radio
intensity \citep{Whiteoak96} with contours at 4, 8, and 12~mJy/beam. The FoV is
$2.8\degr \times 2.8\degr$.
}
\label{fig:HESS_J1457m593}
\end{figure}
VHE $\gamma$-ray\ emission from the new source HESS~J1457$-$593
(Fig.~\ref{fig:HESS_J1457m593}) is associated with the \object{SNR
G318.2$+$0.1}, on the basis of a spatial coincidence with a shell-type SNR and
the lack of other potential MWL counterparts. Preliminary H.E.S.S.\ morphological
properties were initially published by \cite{Hofverberg10}. The HGPS source
position is compatible with the preliminary position; however, the size of the
source in the catalog is different because of a difference in the assumed
morphological model. Previously, the source was modeled as an asymmetric
Gaussian ($0.31\degr \pm 0.07\degr$ by $0.17\degr \pm 0.05\degr$) whereas the
HGPS source is modeled, like all HGPS sources, as a symmetric Gaussian
($0.33\degr \pm 0.04\degr$). Nonetheless, the spatial overlap between
HESS~J1457$-$593 and the southern part of the SNR shell still holds.
G318.2$+$0.1 is observed as a relatively large ($40^{\prime} \times 35^{\prime}$)
shell in radio \citep[e.g.,][]{Whiteoak96}, which is characterized by two
arc-like, nonthermal filaments in the northwest and southeast (SE) that together
form the shell. The VHE emission is much larger than the SNR shell, and the VHE
centroid is significantly offset ($\sim$0.4\degr) from the SNR center, although
it is partially coincident with the SE rim of the shell. Furthermore, there is
evidence in $^{12}$CO \citep{Dame01} of a giant molecular cloud (GMC) at $(\ell,
b) \approx (318.4\degr, -0.5\degr)$ coincident with both the VHE emission and
the SE rim; this GMC is $1.8\degr \times 1.1\degr$ (average physical size 80 pc)
in size and has mass $\sim$3$\times 10^5$~M$_{\odot}$ and density
$\sim$40~cm$^{-3}$, assuming the near solution of the kinematic distance $3.5
\pm 0.2$~kpc \citep{Hofverberg10}. Little is known about G318.2$+$0.1 itself,
but assuming it is at the same distance as the GMC and further assuming a
Sedov-Taylor model for the SNR evolution, its physical diameter would be
$\sim$40~pc and its age $\sim$8~kyr. These data suggest a plausible SNR and
molecular cloud interaction scenario \citep[e.g.,][]{Gabici07}, where particles
are accelerated in the shell, escape, and interact with a nearby but offset MC,
producing $\gamma$-rays\ via hadronic p-p collisions.
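The quoted physical scales are simple geometric and self-similar estimates; as an
illustrative check under the adopted distance,
\[
D \simeq d\,\theta \approx 3.5~\mathrm{kpc} \times 1.2\times10^{-2}~\mathrm{rad}
\approx 40~\mathrm{pc},
\]
where $\theta \approx 40^{\prime}$ is the angular diameter of the radio shell,
while the $\sim$8~kyr age follows from the Sedov-Taylor scaling
$R_{\mathrm{s}} \propto (E\,t^{2}/\rho)^{1/5}$ for an assumed explosion energy
and ambient density.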
An X-ray study of the SNR with \emph{BeppoSAX} and \emph{ROSAT} did not find
evidence for shell-like, nonthermal emission or for the thermal X-ray emission
that should trace the interaction between the SNR and ISM \citep{Bocchino01}.
However, several hard X-ray sources were found, suggestive of at least localized
nonthermal electron acceleration. Additional MWL observations and spectral
modeling are required to further investigate the scenario responsible for the
production of VHE $\gamma$-rays.
\subsubsection{HESS~J1458$-$608}
\label{sec:HESS_J1458m608}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1458m608}}
\SingleSourceCaption{HESS~J1458$-$608}{
Additionally, the ellipse represents the 95\% uncertainty in the position of the
HE $\gamma$-ray\ point source 3FGL~J1456.7$-$6046, and the cyan diamond indicates the
position of the pulsar. The FoV is $2.6\degr \times 2.6\degr$.
}
\label{fig:HESS_J1458m608}
\end{figure}
VHE $\gamma$-ray\ emission from the new source \object{HESS J1458$-$608}
(Fig.~\ref{fig:HESS_J1458m608}) is associated with the pulsar \object{PSR
J1459$-$6053} and can likely be identified as a heretofore undetected PWN, on
the basis of a spatial coincidence with an energetic pulsar and the absence of
other plausible MWL counterparts. Preliminary VHE morphological and spectral
properties were first announced by \cite{delosReyes12}. The updated
morphological properties from the HGPS catalog differ from those preliminary
ones, which had underestimated the extent of the large, complex emission region
($0.37\degr \pm 0.03\degr$ vs. $0.17\degr \pm 0.07\degr$; both modeled as 2D
symmetric Gaussians), likely due to the irregular shape of the
emission. Previously there was a hint for additional structure, possibly a
second source hidden in the tail of a dominant source, but this remains
statistically insignificant in the HGPS analysis with respect to a single-source
Gaussian morphology. Also of note, the best-fit centroid of the VHE emission is
now located closer to the $\gamma$-ray\ pulsar ($0.11\degr$ versus $0.16\degr$
offset), bolstering the scenario in which the VHE emission is interpreted as a
PWN powered by the pulsar. As expected for such changes in morphological
properties, the HGPS spectral results also differ from the previously derived
preliminary values.
The pulsar PSR~J1459$-$6053 (also \object{3FGL J1459.4$-$6053}) is a relatively
old ($\tau_{\mathrm{c}} = 65$ kyr) but still very energetic HE $\gamma$-ray
pulsar with a spin-down luminosity $9.1 \times 10^{35}$ erg s$^{-1}$ and
unknown distance ($d < 22$ kpc) \citep{Abdo13}. As noted above, it is offset
$0.11\degr$ from the VHE centroid, which is consistent with offsets observed in
other PSR and VHE PWN systems \citep[e.g.,][]{Kargaltsev13}. The putative PWN
has not been detected in X-rays, potentially because of the age of the system
\citep{Ray11}, nor in HE $\gamma$-rays\ \citep{Acero13}.
The new VHE spectrum ($E > 0.46$~TeV) is consistent with the 31--316~GeV \emph{Fermi}-LAT\
upper limits. However, the conclusion, made by \cite{Acero13}, that the peak of
the PWN's inverse Compton emission is located in this energy range has to be
revised as the peak can now only be inferred to be at higher energies.
Apart from the HE $\gamma$-ray\ pulsar, there is a second HE source (\object{3FGL
J1456.7$-$6046}) in the FoV. However, it is unclear if it is related to the PSR and
PWN scenario, since it exhibits a highly curved, log-parabolic spectrum typical
of blazars and a TS that fluctuates strongly with the choice of diffuse model or
analysis method \citep{3FGL}.
\subsubsection{HESS~J1503$-$582}
\label{sec:HESS_J1503m582}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1503m582}}
\SingleSourceCaption{HESS~J1503$-$582}{
Additionally, the ellipse represents the 95\% uncertainty in the position of the
HE $\gamma$-ray\ point source 3FGL~J1503.5$-$5801; the circle represents the 68\%
uncertainty in the position of the HE ($E > 50$~GeV) $\gamma$-ray\ point source
2FHL~J1505.1$-$5808; and the star represents the location of the X-ray point
source. The FoV is $2.3\degr \times 2.3\degr$.
}
\label{fig:HESS_J1503m582}
\end{figure}
HESS~J1503$-$582 (Fig.~\ref{fig:HESS_J1503m582}) is a new source for which the
origin of the VHE $\gamma$-ray\ emission is unidentified. H.E.S.S.\ earlier announced
preliminary morphological and spectral properties for this source
\citep{Renaud08}, which are now superseded by those in this paper. The VHE
emission appears to be one of the softest ($\Gamma = 2.68 \pm
0.08_{\mathrm{stat}}$) in the HGPS, although both its morphological and spectral
properties are affected by systematic uncertainties larger than nominal (see,
e.g., Sect.~\ref{sec:cc:discussion} and Sect.~\ref{sec:results:xcheck}).
A point-like HE ($E > 50$~GeV) $\gamma$-ray\ source, 2FHL~J1505.1$-$5808
\citep{2FHL}, is spatially coincident with the VHE emission region. A
comparison of the VHE and HE ($E > 50$~GeV) spectra suggests that it may be a
PWN \citep{2FHL}, although no PWN or energetic pulsar has been detected so far.
Another, different, point-like HE ($E > 100$~MeV) $\gamma$-ray\ source,
3FGL~J1503.5$-$5801 \citep{3FGL}, is also within the VHE region. Its nature is
unknown, but its log-parabolic spectrum suggests it may not be directly related
to HESS~J1503$-$582.
Faint X-ray emission \citep[\object{AX J1504.6$-$5824},][]{Sugizaki01} is
present toward the edge of the VHE emission. Nominally cataloged as a
cataclysmic variable, its X-ray properties are not well known owing to the low
\emph{ASCA} sensitivity. Analysis of more sensitive data from other X-ray
telescopes is needed to investigate the possibility that it may be a PWN,
despite the lack of an energetic pulsar in the vicinity and bearing in mind
the unknown nature of the nearby 3FGL source.
A relatively comprehensive search of MWL archives \citep{Renaud08} led to the
investigation of an atypical scenario where the VHE emission could be linked
with a forbidden velocity wing (FVW): faint, characteristic 21-cm \ion{H}{I}\ line
emission structures seen as deviations from the canonical Galactic rotation
curve \citep{Kang07}. The hypothesis is that this FVW, \object{FVW 319.8$+$0.3},
may be related to an older SNR in its radiative phase, as was the case for two
other FVWs \citep{Koo06,Kang12}. Although the SNR would no longer have
sufficient shock velocity to accelerate particles responsible for producing VHE
$\gamma$-rays\ \citep{Ptuskin05}, it could nevertheless be indicative of increased
or more recent activity in the region (stellar winds and/or supernova
explosions). A large \ion{H}{I}\ shell, the result of such activity, is nearby
\citep[see][\object{GSH 319$-$01$+$13}]{McClure-Griffiths02}; however, its
centroid is substantially offset by more than 1$\degr$ from HESS~J1503$-$582 and
its extent is considerably larger than the VHE emission region, so it seems
unlikely to be related.
On the other hand, VERITAS also searched for VHE emission from an FVW, one which
does show clear shell-type emission in \ion{H}{I}\ (\object{FVW 190.2$+$1.1}).
Despite observations that reached a sensitivity better than 1\%~Crab, VERITAS
did not detect any significant VHE emission \citep{Holder09}. Furthermore, there
is no definitive identification of VHE emission from young stellar clusters,
with the possible exception of the superbubble \object{30 Dor C} in the LMC
\citep{HESS15_LMC}. Therefore, this FVW scenario remains speculative.
\subsubsection{HESS~J1554$-$550}
\label{sec:HESS_J1554m550}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1554m550}}
\SingleSourceCaption{HESS~J1554$-$550}{
Additionally, the composite SNR is shown by plotting its 843-MHz radio intensity
\citep{Whiteoak96} with contours at 1, 8, and 15~mJy/beam. The FoV is $0.9\degr
\times 0.9\degr$.
}
\label{fig:HESS_J1554m550}
\end{figure}
VHE $\gamma$-ray\ emission from the new source HESS~J1554$-$550
(Fig.~\ref{fig:HESS_J1554m550}) is firmly identified with the \object{PWN
G327.15$-$1.04} within the composite \object{SNR G327.1$-$1.1}, on the basis of
both a spatial coincidence with the PWN and the size of the VHE emission region,
which can be constrained to less than $0.035\degr$. Preliminary H.E.S.S.\
morphological and spectral properties of the VHE source were first published by
\citet{Acero:2012} and are compatible with the HGPS results. However, while
previously the source size was given as $0.03\degr \pm 0.01\degr$, the more
conservative HGPS analysis procedure used here (Sect.~\ref{sec:cc:extension_ul})
finds the source to be compatible with a point-like source and yields an upper
limit on the size that is nonetheless consistent with the earlier measurement.
The VHE size limit rules out significant emission from the outer shell of the
SNR and is compatible with the compact, nonthermal PWN, which is observed in
both radio and X-rays \citep{Temim09,Temim15} but not HE $\gamma$-rays\
\citep{Acero13,2FHL}. Furthermore, the VHE centroid is compatible with the peak
of the radio emission from the PWN and the tail of the X-ray PWN. Although pulsed
emission from the putative pulsar at the heart of the composite SNR has not been
detected in the radio, X-ray, or HE $\gamma$-ray\ bands, the X-ray data provide
evidence for the existence of a powerful pulsar with an estimated $\dot{E} =
3.1 \times 10^{36}$~erg~s$^{-1}$ \citep{Temim15}. The distance to the SNR is not
well determined, but has been estimated to be roughly 9~kpc \citep{Sun99}.
Assuming this distance, the VHE luminosity of HESS~J1554$-$550 is
$L_{\gamma}(1-10~\mathrm{TeV})=1.0\times10^{34}(d/9~\mathrm{kpc})^2$~erg~s$^{-1}$
and the apparent efficiency $\epsilon_{\mathrm{1-10 TeV}} \equiv L_{\gamma} /
\dot{E} = 0.3\%$, which is compatible with the efficiencies ($\la 10\%$) of
other VHE sources that have been identified as PWNe \citep{Kargaltsev13}.
\subsubsection{\object{HESS J1813$-$126}}
\label{sec:HESS_J1813m126}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1813m126}}
\SingleSourceCaption{HESS~J1813$-$126}{
Additionally, the position of the pulsar is denoted by a cyan diamond. The FoV is
$1.7\degr \times 1.7\degr$.
}
\label{fig:HESS_J1813m126}
\end{figure}
The HGPS catalog analysis has revealed an intriguing new source of VHE
$\gamma$-rays\ (Fig.~\ref{fig:HESS_J1813m126}) not previously detected, one of the
few off-plane VHE sources ($b = 2.5\degr$). The only plausible MWL counterpart
associated with this emission is the energetic pulsar \object{PSR J1813$-$1246}
\citep{Abdo09}, marginally coincident with the VHE best-fit centroid. This
suggests the VHE emission originates in a PWN powered by the pulsar, which has a
spin-down luminosity $\dot{E} = 6.3 \times 10^{36}$~erg~s$^{-1}$ and
characteristic age $\tau_{\mathrm{c}} = 43$~kyr. The pulsar is one of the
brightest $\gamma$-ray\ pulsars \citep[\object{3FGL J1813.4$-$1246},][]{3FGL} and
the second most energetic radio-quiet pulsar. This pulsar has also been found to
exhibit strong X-ray pulsations, and its distance has recently been constrained
to $d > 2.5$~kpc \citep{Marelli14}. This implies a lower limit on the VHE
luminosity $L_{\gamma}(1-10~\mathrm{TeV}) > 2.9 \times 10^{33}$~erg~s$^{-1}$ and
a corresponding limit on the apparent efficiency $\epsilon_{\mathrm{1-10 TeV}} >
0.05\%$.
In other energy bands, no off-pulse emission (e.g., emission from the putative
PWN) is detected in HE $\gamma$-rays\ (0.1--100~GeV) based on the analysis of five
years of \emph{Fermi}-LAT\ data \citep{Marelli14}, dismissing earlier hints for a GeV PWN
\citep{Ackermann11,Abdo13}, likely owing to the larger data set and improved models
for diffuse emission used in the new analysis. In X-rays ($0.3-10$~keV), despite
relatively deep \emph{XMM-Newton}\ (130~ks) and \emph{Chandra}\ (50~ks) observations, no PWN is
detected beyond $1-1.5\arcsec$ from the pulsar \citep{Marelli14}. This is very
unusual for a pulsar this energetic; that is, the derived upper limits in X-rays are
only marginally compatible with known relations between PWN and pulsar
luminosities \citep{Kargaltsev08} and between PSR luminosity and distance to the
PWN termination shock \citep{Gaensler06}.
Therefore, HESS~J1813$-$126 appears to be a rare case of a relic PWN
\citep{deJager09} currently detected exclusively in the VHE domain. Observations
in the hard X-ray domain with \emph{NuSTAR} would be useful to investigate the
hint of a signal seen at 30--520~keV with \emph{INTEGRAL} \citep{Marelli14} and
to determine if there is an unpulsed, nebular component visible at those energies.
Regardless, further work modeling the MWL spectral energy distribution is
necessary to fully investigate this intriguing system.
\subsubsection{HESS~J1826$-$130}
\label{sec:HESS_J1826m130}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1826m130}}
\SingleSourceCaption{HESS~J1826$-$130}{
Additionally, the approximate centroid of the PWN is marked by a green cross,
and the pulsar position is marked by a cyan diamond. The FoV is $2.0\degr \times
2.0\degr$.
}
\label{fig:HESS_J1826m130}
\end{figure}
The HGPS catalog analysis reveals a distinct new source of VHE $\gamma$-rays,
\object{HESS J1826$-$130} (Fig.~\ref{fig:HESS_J1826m130}), in what was previously
considered extended emission from the nearby PWN HESS~J1825$-$137
\citep{Aharonian:2006g}. Because of the very close proximity to its bright neighbor,
the spectral measurement is highly contaminated (41\%).
\citet{2017arXiv170107002A} reported preliminary findings for this new source.
HESS~J1826$-$130 is associated with the ``Eel'' PWN\footnote{\citet{Roberts07},
based on visual inspection of the VHE images, first suggested that the VHE
emission is separate from the PWN HESS~J1825$-$137 and associated it with the
Eel.} (\object{PWN G18.5$-$0.4}), an elongated, nonthermal, X-ray source
observed with \emph{Chandra}\ \citep{Roberts07}, and the energetic pulsar \object{PSR
J1826$-$1256} \citep{Abdo09}, on the basis of a spatial coincidence. The
best-fit VHE centroid is compatible with the Eel, while the pulsar is somewhat
offset ($0.09\degr$) from the centroid but well within the VHE emission region
(size $0.15\degr \pm 0.02\degr$). The pulsar is notable for being one of the
brightest radio-quiet $\gamma$-ray\ pulsars \citep[\object{3FGL
J1826.1$-$1256};][]{3FGL}. The distance of the pulsar is unfortunately not
known, which precludes conclusions on the energetics, but its position, $\dot{E} =
3.6 \times 10^{36}$~erg~s$^{-1}$, and $\tau_{\mathrm{c}} = 14$~kyr suggest it is
probably powering the Eel. The PWN is not detected in HE $\gamma$-rays\
\citep{Ackermann11,2FHL}. Finally, we note that dense molecular gas was also
found overlapping HESS~J1826$-$130 at a distance matching that of the dispersion
measure of the pulsar \citep{2016MNRAS.458.2813V}, suggesting a possible
hadronic origin for this VHE source.
The \object{SNR G18.6$-$0.2} \citep{Brogan06} is also coincident with the VHE
emission region, although it is significantly smaller in size ($0.1\degr$
diameter). Very little is known about this SNR, except that a partial shell-type
morphology has been observed so far only in radio and IR and that its distance
is estimated to be 4.0--5.2~kpc \citep{Johanson09}.
A firm identification of the VHE source as a PWN is not possible at this time,
in part because of the unknown distance to the Eel PWN and PSR system and the
poorly studied SNR. We are currently preparing more advanced VHE spectral
analysis methods that can account for contamination in crowded FoVs. These
methods will enable more accurate modeling of the SED.
\subsubsection{HESS~J1828$-$099}
\label{sec:HESS_J1828m099}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1828m099}}
\SingleSourceCaption{HESS~J1828$-$099}{
The FoV is $1.0\degr \times 1.0\degr$.
}
\label{fig:HESS_J1828m099}
\end{figure}
HESS~J1828$-$099 is a new source of VHE $\gamma$-rays\
(Fig.~\ref{fig:HESS_J1828m099}), which is unique because it appears to be
completely dark at lower energies with no apparent associations (see
Table~\ref{tab:hgps_associations}). It is also notable for being one of the
\hgpsPointLikeSourceCount\ point-like sources in the HGPS catalog with a size
(Gaussian standard deviation) of less than $0.07\degr$. The detection of a spatially
coincident HE $\gamma$-ray\ source has been claimed \citep{Neronov10} but not
confirmed with the latest, significantly larger \emph{Fermi}-LAT\ data sets; that is,
there is no corresponding source in either the 3FGL catalog \citep{3FGL} or the
2FHL catalog \citep[$E > 50$~GeV;][]{2FHL}. Deeper follow-up observations,
especially in the radio and X-ray bands, are strongly encouraged to probe the
nonthermal emission.
\subsubsection{HESS~J1832$-$085}
\label{sec:HESS_J1832m085}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1832m085}}
\SingleSourceCaption{HESS~J1832$-$085}{
Additionally, the positions of the two pulsars are denoted by cyan diamonds. The
FoV is $0.7\degr \times 0.7\degr$.
}
\label{fig:HESS_J1832m085}
\end{figure}
HESS~J1832$-$085 (Fig.~\ref{fig:HESS_J1832m085}) is an unidentified source of
VHE $\gamma$-rays. It is notable for its point-like morphology, which is measured
to be less than $0.05\degr$ in extension, and its scarcity of promising MWL
counterparts.
An interesting object that is spatially coincident with HESS~J1832$-$085 is the
pulsar \object{PSR~J1832$-$0827} \citep{Clifton1986}, which has so far been
detected only at radio wavelengths. The pulsar is likely at a distance of
$\approx$4.9~kpc \citep{CordesLazio:2002}, in agreement with other estimates in
the range 4.4--6.1~kpc \citep{Frail91}, and has a spin-down
luminosity\footnote{This pulsar was not selected by the standardized HGPS
association procedure (Sect.~\ref{sec:results:assoc_id}) as a possible
counterpart because its luminosity is just below the $\dot{E} >
10^{34}$~erg~s$^{-1}$ threshold.} $\dot{E} = 9.3 \times 10^{33}$~erg~s$^{-1}$
\citep{Hobbs2004}. It is one of the few pulsars with a measured braking index,
$n = 2.5 \pm 0.9$ \citep{Johnston1999}, providing a characteristic age
$\tau_{\mathrm{c}} \approx 200$~kyr. Another very intriguing object in the FoV
is the energetic millisecond pulsar \object{PSR~J1832$-$0836}, which has a 2.7
ms period \citep{Burgay2013}. It has a spin-down luminosity $\dot{E} = 1.7
\times 10^{34}$~erg~s$^{-1}$, a very large characteristic age (typical of
millisecond pulsars) of $\tau_{\mathrm{c}} \approx 5 \times 10^9$~yr, and a
distance of 1.1~kpc \citep{CordesLazio:2002}.
There are no known PWNe associated with these two pulsars nor close to
HESS~J1832$-$085. If either or both of these pulsars are powering VHE PWNe, a
relatively large conversion efficiency of $\epsilon_{\mathrm{1-10~TeV}} \sim
23\%$ would be required for PSR~J1832$-$0827, and a more reasonable
$\epsilon_{\mathrm{1-10~TeV}} \sim 0.6\%$ for PSR~J1832$-$0836. The older ages
are at odds with the inferred small sizes of the VHE PWNe, constrained to be
less than $\approx 4~(d~/~4.9~\mathrm{kpc})$~pc and $\approx
1~(d~/~1.1~\mathrm{kpc})$~pc, respectively. These circumstances, plus the
borderline low spin-down luminosity of PSR~J1832$-$0827, combine to disfavor a
PSR and PWN scenario as the origin of the VHE emission in light of the known VHE
PWN population \citep{HESS:PWNPOP}. The millisecond pulsar scenario is even more
uncertain. That pulsar is slightly more energetic and much closer, but thus far
millisecond pulsars, with ages of billions of years, are not known to produce
PWNe that emit detectable levels of $\gamma$-rays\ at TeV energies. Therefore, the
origin of the emission from this new, enigmatic, VHE $\gamma$-ray\ source is still
very much unclear.
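The physical size limits quoted above are simple small-angle conversions of the
VHE extension limit; for reference,
\[
s \la d\,\theta, \qquad \theta < 0.05\degr \approx 8.7\times10^{-4}~\mathrm{rad}
\;\Rightarrow\;
s \la 4\left(\frac{d}{4.9~\mathrm{kpc}}\right)~\mathrm{pc}
\quad\mathrm{and}\quad
s \la 1\left(\frac{d}{1.1~\mathrm{kpc}}\right)~\mathrm{pc}
\]
for the two candidate pulsar distances, respectively.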
\subsubsection{HESS~J1833$-$105}
\label{sec:HESS_J1833m105}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1833m105}}
\SingleSourceCaption{HESS~J1833$-$105}{
Additionally, the composite SNR is shown by plotting a green circle
approximating the radio shell, and the pulsar position is denoted by a cyan
diamond. The FoV is $0.8\degr \times 0.8\degr$.
}
\label{fig:HESS_J1833m105}
\end{figure}
VHE $\gamma$-ray\ emission from the new source \object{HESS~J1833$-$105}
(Fig.~\ref{fig:HESS_J1833m105}) can now be firmly identified with the composite
\object{SNR G21.5$-$0.9} \citep{Wilson76}, which contains a Crab-like PWN.
Preliminary H.E.S.S.\ source properties were previously shown
\citep{Djannati-Atai:2008a} and are compatible with the HGPS results, although
at the time it was not yet possible to disentangle the possible contributions to
the VHE emission from the PWN and SNR shell.
The new identification is supported by a positional coincidence between the VHE
emission centroid and the PWN center, but most importantly by the lack of
extension of the VHE emission region; this region is constrained to be less than
0.03\degr, which is our systematic limit for source sizes. This implies that we cannot
claim significant VHE emission from the forward shock of the spherical, faint
SNR shell at a radius of 0.038\degr\ \citep{Bocchino05}.
The PWN has also been detected in X-rays \citep{Safi-Harb01,Bocchino05} and IR
\citep{Zajczyk12} although not in HE $\gamma$-rays\ \citep{Acero13}, and its
distance has been estimated to be $d \approx 4.8$ kpc \citep{Tian08}. It is
powered by the very energetic \object{PSR J1833$-$1034}, currently the fifth
most energetic pulsar known in the Galaxy, with a spin-down luminosity $\dot{E}
\approx 3.4 \times 10^{37}$~erg~s$^{-1}$. The pulsar has been detected in radio
\citep{Gupta05,Camilo06} and HE $\gamma$-rays \citep[as \object{3FGL
J1833.5$-$1033};][]{3FGL}. The age of the system has been argued
to be $870^{+200}_{-150}$~yr~\citep{2008MNRAS.386.1411B}, which is significantly less
than the $\tau_{\mathrm{c}} = 4.9$~kyr of the pulsar.
Considering the luminosity of \object{HESS~J1833$-$105}, $L_{\gamma}
(1-10~\mathrm{TeV}) = 2.6 \times 10^{33}$ ($d$ / 4.8 kpc)$^2$ erg s$^{-1}$, the
apparent efficiency of converting the rotational energy of the pulsar to $\gamma$-rays,
$\epsilon_{\mathrm{1-10 TeV}} \equiv L_{\gamma} / \dot{E} = 0.008\%$, is
compatible with the efficiencies ($\la 10\%$) of other VHE sources that have
been identified as PWNe \citep{Kargaltsev13}.
The HGPS results confirm predictions that the PWN would emit VHE $\gamma$-rays\
at the level of a few percent of the Crab Nebula and exhibit a relatively
hard spectrum \citep{deJager95}.
\subsubsection{HESS~J1843$-$033}
\label{sec:HESS_J1843m033}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1843m033}}
\SingleSourceCaption{HESS~J1843$-$033}{
Additionally, the SNR is shown by plotting a green circle approximating the
radio shell. The 3FGL ellipses represent the 95\% uncertainties in the positions
of the HE $\gamma$-ray\ point sources. The FoV is $1.4\degr \times 1.4\degr$.
}
\label{fig:HESS_J1843m033}
\end{figure}
An extended region of VHE emission, called \object{HESS~J1843$-$033}, was first
published by \citet{2008ICRC....2..579H}. This emission is resolved by the HGPS
catalog analysis into three components that were merged into two distinct
sources: HESS~J1843$-$033 and HESS~J1844$-$030.
HESS~J1843$-$033 consists of two merged offset components (HGPSC~83 and
HGPSC~84) and is therefore highly structured (see
Fig.~\ref{fig:HESS_J1843m033}). The image of the source shows two peaks
separated by $\sim$0.2\degr. The first Gaussian component is clearly associated
with the upper peak. The second Gaussian component is larger and offset with
respect to the lower peak. This is due to more diffuse, low-brightness emission
around $(\ell, b) = (28.6\degr, -0.1\degr)$, suggesting the presence of another
currently unresolved source that shifts the position of the second component.
HESS~J1843$-$033 is therefore most probably a complex region with overlapping
sources that were merged in the HGPS analysis.
Two GeV sources, \object{3FGL~J1843.7$-$0322} and \object{3FGL~J1844.3$-$0344},
are found within the R80 extension of the source. The former is found in the
main region of emission but does not seem to correlate well with either of the two
main peaks. The latter \emph{Fermi}-LAT\ source is located in the low-brightness region
around $(\ell, b) = (28.6\degr, -0.1\degr)$.
No compelling radio counterpart was found in the VLA Galactic Plane Survey
\citep{2006AJ....132.1158S}. Dedicated X-ray observations show the presence of a
faint absorbed extended source with a nonthermal spectrum that is coincident
with the HGPSC~83 component. No compelling counterpart for the second component
has been found. We note however that the nearby radio \object{SNR G28.6$-$0.1}
\citep{Helfand89} is filled with nonthermal X-rays \citep{Ueno03}. If this
emission is due to synchrotron X-rays produced by energetic electrons, IC
emission at VHE energies likely contributes to the low-brightness emission
visible around the SNR position in Fig.~\ref{fig:HESS_J1843m033}.
\subsubsection{HESS~J1844$-$030}
\label{sec:HESS_J1844m030}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1844m030}}
\SingleSourceCaption{HESS~J1844$-$030}{
Additionally, the composite SNR is shown by plotting a green circle
approximating the radio shell. The position of the X-ray source is denoted by a
cross, while the position of PMN~J1844$-$0306 is indicated by a diamond. The FoV
is $1.0\degr\ \times 1.0\degr$.
}
\label{fig:HESS_J1844m030}
\end{figure}
HESS~J1844$-$030 is a faint VHE $\gamma$-ray\ source that is compatible with being
point-like and is located in the vicinity of the complex region of
HESS~J1843$-$033. It is positionally coincident with a number of distinct
objects, most notably the radio source \object{PMN J1844$-$0306} (cyan diamond
in Fig.~\ref{fig:HESS_J1844m030}). The nature of the latter is ambiguous. Its
elongated, jet-like morphology is very reminiscent of a radio galaxy, which is
supported by 6 cm VLA observations revealing polarization along the structure
\citep{Helfand89}. This elongated radio feature is surrounded by a partial ring
visible in the 21 cm VLA continuum image. The object is therefore classified as
a SNR candidate in the MAGPIS catalog \citep{Helfand06}, G29.37$+$0.10. It is
also coincident with the X-ray source \object{AX~J1844.7$-$0305}
\citep{Vasisht00,Sugizaki01}.
The association of the jet radio feature and the SNR candidate is unclear.
Although rare, SNRs with jets are plausible, for example PWN structures such as
MSH~15$-$52 \citep{2002ApJ...569..878G} or the \object{SS433} / \object{W 50}
microquasar SNR system with its radio jets and lobes
\citep{1998AJ....116.1842D,HESS:SS433}. The jet structure could also be a
background radio galaxy aligned by chance with a faint radio shell. We note,
however, that \citet{Johanson09}, using \ion{H}{I}\ absorption, place the source at a
distance between 5 and $\sim$15~kpc. Interestingly, a heavily absorbed X-ray
PWN, dubbed G29.4$+$0.1, is present in SNRcat and overlaps with a part of
PMN~J1844$-$0306. Further MWL observations will be necessary to assess the
nature of the system and the origin of the VHE emission.
\subsubsection{HESS~J1846$-$029}
\label{sec:HESS_J1846m029}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1846m029}}
\SingleSourceCaption{HESS~J1846$-$029}{
Additionally, the composite SNR is shown by plotting a green circle
approximating the radio shell. The position of the pulsar is indicated by a cyan
diamond. The FoV is $0.8\degr\ \times 0.8\degr$.
}
\label{fig:HESS_J1846m029}
\end{figure}
VHE $\gamma$-ray\ emission from the new source HESS~J1846$-$029 (see
Fig.~\ref{fig:HESS_J1846m029}) is spatially coincident with G29.7$-$0.3 (also
known as \object{Kes~75}), one of the youngest composite SNRs in the Galaxy,
which contains the nebula of PSR~J1846$-$0258. Preliminary results were
presented in \cite{Djannati-Atai:2008a} and are compatible with those obtained
in the HGPS analysis.
PSR~J1846$-$0258 is a young, high magnetic-field pulsar. This source has a
rotation period of 324~ms and a spin-down power of $8.3 \times
10^{36}$~erg~s$^{-1}$. It is among the youngest pulsars in the Galaxy with a
characteristic age of only 723~years \citep{Livingstone06}. It underwent a
strong increase in its pulsed flux in June 2006, associated with spectral
\citep{Kumar08} and timing \citep{Gavriil08} changes similar to those seen in
magnetars. A search for variations in the VHE source flux at various timescales
yielded a negative result \citep{2008AIPC.1085..316T}.
A nebula of 20\arcsec\ in radius surrounds the pulsar at radio and X-ray
wavelengths, and \textit{Chandra} high-resolution observations have revealed a
jet and torus \citep{Ng08}. A 3\arcmin\ diameter asymmetric radio shell
surrounds the PSR and PWN system. It consists mainly of two lobes to the south
of the pulsar. These lobes are emitting X-rays from heated swept-up
interstellar matter and ejecta \citep{Morton07}. Infrared measurements suggest
that the shock is in a region of typical density of 60~cm$^{-3}$
\citep{Temim12}. \cite{Su09} found a bubble in the molecular matter in good
coincidence with the SNR. They proposed that this structure is the wind-blown
bubble of the SNR progenitor.
The extension of the VHE emission from HESS~J1846$-$029 is compatible with that
of a point-like source. The upper limit on the size is $0.03\degr$, that is,
comparable with the SNR shell size. The position of this object is compatible
with the position of PSR~J1846$-$0258, within localization uncertainties.
Therefore, we are not able to distinguish between emission from the shell and
emission from the PWN in this composite object.
Assuming a distance of 6~kpc \citep{2008A&A...480L..25L}, the luminosity is
$L_{\gamma} (1-10~\mathrm{TeV}) = 6.9 \times 10^{33}$ ($d$ / 6
kpc)$^2$ erg s$^{-1}$, and the apparent conversion efficiency of the rotational
energy of the pulsar to $\gamma$-rays\ is $\epsilon_{\mathrm{1-10 TeV}} \equiv
L_{\gamma} / \dot{E} = 0.08\%$. The VHE emission is therefore completely
consistent with an origin in the PWN \citep[see also,
e.g.,][]{2011ApJ...741...40T,2014JHEAp...1...31T}. Yet, given the uncertainties
on the extension, it is not possible to exclude a contribution from $\gamma$-rays\
produced by particles accelerated at the SNR shock, in particular from
collisions of hadrons with ambient and swept-up matter at the shock, or even a
contribution from escaping particles interacting with the molecular shell
revealed by \citet{Su09}.
\subsubsection{HESS~J1848$-$018}
\label{sec:HESS_J1848m018}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1848m018}}
\SingleSourceCaption{HESS~J1848$-$018}{
Additionally, the 3FGL ellipse represents the 95\% uncertainty in the position
of the HE $\gamma$-ray\ point source. The central stellar cluster of the
mini-starburst W~43 is denoted by a diamond. The FoV is $2.1\degr\ \times
2.1\degr$.
}
\label{fig:HESS_J1848m018}
\end{figure}
For the new source \object{HESS~J1848$-$018} (Fig.~\ref{fig:HESS_J1848m018})
preliminary H.E.S.S.\ source properties were previously shown \citep{Chaves:2008}.
These properties are compatible with the HGPS results except for the source size
and flux; these were overestimated because the earlier analysis did not include
a model for the diffuse emission (see Sect.~\ref{sec:cc:large-scale-emission}),
which is particularly bright in this region.
The origin of the VHE $\gamma$-ray\ emission of \object{HESS~J1848$-$018} is not
yet firmly identified. No SNR or energetic pulsar is currently detected in the
proximity, although we have associated the VHE source with \object{3FGL
J1848.4$-$0141} \citep{3FGL}. This unidentified HE $\gamma$-ray\ point source is
significantly offset from the VHE $\gamma$-ray\ centroid (by $\sim$0.2\degr) but
well within the VHE emission region. Studies attempting to relate the HE with
the (preliminary) VHE morphology and spectra remained inconclusive
\citep{Tam10,Acero13}. A potential PSR and PWN scenario cannot be confirmed due
to the lack of a detected pulsar (at any wavelength), although the HE spectrum
does exhibit curvature typical of pulsars \citep{3FGL}. Furthermore, there is no
known PWN nearby, although one study has shown marginal statistical evidence for
an extension of the HE source \citep{Lemoine-Goumard11}, which is expected if
the HE emission is from a PWN or the combination of a pulsar and PWN.
An extensive search for other MWL counterparts found the VHE $\gamma$-ray\ emission
to be in the direction of the massive star-forming region \object{W~43}, a very
active mini-starburst located at a distance of $6.2 \pm 0.6$~kpc
\citep{Russeil03}. It is one of the closest and most luminous star-forming
regions in the Galaxy \citep{Motte03}, hosting a giant \ion{H}{II} region
(\object{G30.8$-$0.2}), a giant molecular cloud, and the Wolf-Rayet binary star
system \object{WR~121a} in the central stellar cluster together with O-type
stars. The massive stars in the dense central cluster exhibit strong stellar
winds with extreme mass loss rates, in particular the WN7-subtype
\object{WR~121a} \citep{Blum99}.
This unique MWL environment is of interest because the central cluster of W~43
could be the site of efficient particle acceleration in various plausible
hadronic scenarios involving the high-velocity (up to 2000~km~s$^{-1}$) stellar
winds \citep[e.g.,][]{Reimer06,Romero10}. Furthermore, the very large amount of
molecular gas present in W~43 \citep[$\sim$7$ \times
10^6$~M$_{\odot}$;][]{Nguyen11} provides a natural target for accelerated cosmic
rays (regardless of their potential acceleration site), which would lead to
$\gamma$-ray\ production via hadronic p-p collisions \citep[e.g.,][]{Aharonian91}.
It is not yet possible to confirm the W~43 hadronic scenario for the origin of
the VHE emission, in part because of the very complex morphologies present and
the challenges in correlating features observed in radio and infrared
observations at arcsecond scales with the $\sim$5$\arcmin$ resolution in VHE.
The VHE centroid, in particular, is significantly offset from the central
cluster by $\sim$0.2\degr, although the extended VHE emission is generally
coincident with the W~43 complex. This scenario remains under investigation,
especially in light of the recent detection of the superbubble 30~Dor~C in the
LMC \citep{HESS15_LMC}, which suggests that particle acceleration occurring in
the collective winds of massive stars can indeed produce VHE emission.
\subsubsection{HESS~J1849$-$000}
\label{sec:HESS_J1849m000}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1849m000}}
\SingleSourceCaption{HESS~J1849$-$000}{
Additionally, the position of the pulsar is indicated by a cyan diamond and the PWN
by a green cross. The FoV is $1.5\degr\ \times 1.5\degr$.
}
\label{fig:HESS_J1849m000}
\end{figure}
The faint, slightly extended source \object{HESS~J1849$-$000} (see
Fig.~\ref{fig:HESS_J1849m000}) was first reported by
\citet{2008AIPC.1085..312T}. It was found to be spatially coincident with the
hard X-ray source \object{IGR~J18490$-$0000} \citep{2012AandA...545A..27K}.
\emph{XMM-Newton}\ observations revealed a nonthermal, point-like, X-ray source surrounded by
a nebula, making this object a solid PWN candidate. Follow-up observations of
the hard X-ray source with \emph{RXTE} have confirmed this hypothesis with the
discovery of a 38.5~ms periodicity of the X-ray signal
\citep{2011ApJ...729L..16G}. The associated pulsar, \object{PSR~J1849$-$0001},
was found to have a spin-down luminosity $\dot{E} = 9.8 \times
10^{36}$~erg~s$^{-1}$ and a characteristic age $\tau_{\mathrm{c}} = 42.9$~kyr.
The HGPS analysis confirms the existence of a source coincident with
PSR~J1849$-$0001. The best-fit position of HESS~J1849$-$000 is located less than
$0.03\degr$ from the X-ray pulsar position (cyan diamond on
Fig.~\ref{fig:HESS_J1849m000}), well within statistical uncertainties in both
source localizations. The best-fit size of the VHE emission is $0.09\degr$,
which is about a factor of two larger than that of the extended X-ray component
\citep{2011ApJ...729L..16G,2015MNRAS.449.3827K}.
The source has an energy flux of $\sim$2.1$\times 10^{-12}$~erg~cm$^{-2}$~s$^{-1}$
in the range 1--10~TeV, a factor of $\sim$2 above the X-ray nebula energy flux
in the range 2--10~keV \citep{2015MNRAS.449.3827K}. This confirms the likely
nature of HESS~J1849$-$000 as a PWN in transition between a young,
synchrotron-dominated phase and an evolved, IC-dominated phase.
\subsubsection{HESS~J1852$-$000}
\label{sec:HESS_J1852m000}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{figures/per-source-plots/HESS_J1852m000}}
\SingleSourceCaption{HESS~J1852$-$000}{
Additionally, the positions of the pulsars are indicated by cyan diamonds, and the
SNR is shown by plotting a green circle approximating the radio shell. The FoV
is $2.0\degr\ \times 2.0\degr$.
}
\label{fig:HESS_J1852m000}
\end{figure}
The new source of VHE $\gamma$-ray\ emission \object{HESS~J1852$-$000}
(Fig.~\ref{fig:HESS_J1852m000}) is currently unidentified due to multiple source
counterpart confusion. It is spatially associated with the partial shell-type
\object{SNR G32.8$-$0.1} \citep[also known as \object{Kes 78};][]{Kesteven68,
Velusamy74}, the incomplete shell-type \object{SNR G33.2$-$0.6} \citep{Reich82},
and two energetic pulsars, \object{PSR J1853$-$0004} and \object{PSR
J1853$+$0011} \citep{Hobbs04}. Preliminary H.E.S.S.\ source properties were
previously shown \citep{Kosack11} and are compatible with the HGPS results. As
mentioned in Sect.~\ref{sec:results:xcheck}, the spectral properties of
HESS~J1852$-$000 are affected by systematic uncertainties larger than nominal.
The VHE emission is located along the eastern edge of SNR Kes~78 but extends
well beyond the SNR. The SNR itself is characterized by an elongated and partial
nonthermal shell seen in radio and X-rays \citep{Zhou11,Bamba16}. It is
interacting with adjacent molecular clouds, evidenced by the detection of a
shock-excited OH(1720~MHz) maser on the shell \citep{Koralesky98} and studies of
the CO molecular environment \citep{Zhou11}. The distance of the SNR is
estimated to be $\sim$5~kpc \citep{Koralesky98,Zhou11}, although $\sim$8.8~kpc
has also been suggested \citep[e.g.,][]{Xu09}. A hadronic origin of the VHE
emission has been briefly discussed \citep{Kosack11}, involving escaped cosmic
rays from Kes~78 \citep[e.g.,][]{Aharonian91,Gabici07}. However, the scenario
remains unconfirmed in the absence of a more detailed study of the gas
environment and its potential correlation with the complex VHE morphology.
The presence of two radio pulsars, PSR~J1853$-$0004 and PSR~J1853$+$0011, within
the VHE emission region also suggests that the VHE $\gamma$-rays\ could originate
in a PWN powered by one of the pulsars or could even result from superimposed
emission from two PWNe. Although there are currently no known PWNe at other energies, the
pulsars' spin-down luminosities $\dot{E} = 2.1 \times 10^{35}$~erg~s$^{-1}$ and
$2.1 \times 10^{34}$~erg~s$^{-1}$, respectively, and distances $d = 6.6$~kpc and
7.5~kpc, are reasonable in the context of other pulsars thought to be powering
VHE PWNe \citep{HESS:PWNPOP}. The pulsars have so far only been detected in
radio, although PSR~J1853$-$0004 has been associated with the HE $\gamma$-ray\
source \object{3FGL~J1853.2$+$0006}, which is itself a source whose existence
and properties are currently uncertain \citep[subject to analysis Flags 3 and 4
in][]{3FGL}.
In conclusion, it is not yet clear whether the VHE emission originates from a
hadronic SNR and molecular cloud interaction, previously undetected PWNe
associated with one or both of the spatially coincident pulsars, or some other
yet unknown source.
\subsubsection{Source candidates}
\label{sec:sourcecandidates}
Three VHE $\gamma$-ray\ source candidates (hotspots) were found above the $TS = 30$
detection threshold in one HGPS analysis (primary or cross-check), but these
candidates had $TS < 30$ in the other analysis. These should be considered
unconfirmed, candidate VHE sources, to be confirmed or ruled out by deeper VHE
observations.
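For orientation, the detection threshold can be expressed as an approximate
Gaussian significance using the same approximation adopted for the significance
maps (significance $\approx \sqrt{TS}$):
\[
\sqrt{TS}\,\Big|_{TS = 30} \approx 5.5,
\]
so the candidates discussed below fall roughly in the $4$--$8\sigma$ range
across the two analysis chains.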
\paragraph*{HOTS~J1111$-$611\\}
\label{sec:HOTS_J1111m611}
The VHE emission from the source candidate HOTS~J1111$-$611 is detected with a
test statistic of $TS = 22$ (cross-check $TS = 41$). It is located at ($\ell$,~$b$)~=~
($291.18\degr \pm 0.03\degr$, $-0.54\degr \pm 0.03\degr$), has a measured
integral flux $F(E > 1~\mathrm{TeV}) = 3.8 \times 10^{-13}$ cm$^{-2}$~s$^{-1}$,
and a size $0.09\degr \pm 0.03\degr$. It is located $\sim$0.1\degr\ from the
very energetic pulsar \object{PSR J1112$-$6103} ($\tau_{\mathrm{c}} = 32.7$~kyr)
\citep{Manchester:2001}, which emits in radio and HE $\gamma$-rays\ \citep[\object{3FGL
J1111.9$-$6058},][]{Abdo13}. The pulsar has a high spin-down luminosity
$\dot{E}=4.5\times 10^{36}$~erg~s$^{-1}$ and a distance of 12.2~kpc
\citep{CordesLazio:2002}. Moreover, a significant HE $\gamma$-ray\ source
\citep[\object{2FHL J1112.1$-$6101e},][]{2FHL} above 50~GeV has been reported at
$0.04\degr$ from the pulsar, which makes this HE source likely to be a PWN. The
characteristics of this pulsar, the apparent efficiency $\epsilon_{E >
1~\mathrm{TeV}} \sim 1\%$, and the presence of a HE component in its vicinity
suggest that it could plausibly power a VHE PWN.
\paragraph*{HESS~J1831$-$098\\}
\label{sec:HESS_J1831m098}
The source candidate \object{HESS J1831$-$098} is found to have $TS = 59$ in the
main HGPS analysis but only $TS = 17$ in the cross-check analysis and therefore
remains unconfirmed. HESS~J1831$-$098 is located in a
complex region with nearby diffuse components, which might explain the
discrepancy observed for this source candidate. Preliminary VHE morphological
and spectral properties on HESS~J1831$-$098 were announced by
\citet{2011ICRC....7..244S}. This source candidate is coincident with the
energetic pulsar \object{PSR J1831$-$0952}, which exhibits a spin-down
luminosity of $1.1 \times 10^{36}$ erg~s$^{-1}$ and a characteristic age of
128~kyr. According to \citet{2011ICRC....7..244S}, an
$\epsilon_{1-20~\mathrm{TeV}} \sim 1\%$ conversion efficiency from rotational
energy to $\gamma$-rays\ would be required to power a PWN; this is similar to
values observed in other VHE PWNe.
\paragraph*{HOTS~J1907$+$091\\}
\label{sec:HOTS_J1907p091}
The VHE emission from the source candidate \object{HOTS J1907$+$091} is detected
with a test statistic of only $TS = 18$ (cross-check $TS = 43$). It is located at
($\ell$,~$b$)~=~ ($42.88\degr \pm 0.08\degr$, $0.69\degr \pm 0.08\degr$), has a
measured integral flux $F(E > 1~\mathrm{TeV}) = 4.3 \times
10^{-13}$~cm$^{-2}$~s$^{-1}$, and an extension of $0.17\degr \pm 0.04\degr$. Two
potential counterparts are found to be spatially coincident with this source
candidate: the magnetar \object{SGR 1900$+$14} \citep{Mazets:1979} and the
\object{SNR G42.8$+$0.6} \citep{Fuerst:1987}. The former has an age
$\tau_{\mathrm{c}} = 0.90$~kyr and a spin-down luminosity $\dot{E} = 2.6 \times
10^{34}$~erg~s$^{-1}$. It is assumed to be at a distance of $12.5 \pm 1.7$~kpc
\citep{Davies:2009} based on an association of the magnetar with a massive star
cluster \citep{Wachter:2008}. SGR~1900$+$14 has properties similar to those of
another magnetar, \object{SGR~1806$-$20}, which is associated with the VHE source
HESS~J1808$-$204 \citep{HESS:1808}. It underwent a major burst of soft
$\gamma$-ray\ emission in 1998 \citep{Hurley:1999,Frail99} and, similar to
SGR~1806$-$20, it might also be emitting VHE $\gamma$-rays. Little is known about
SNR G42.8$+$0.6. The centroid of the VHE emission is marginally coincident with
the magnetar, while the bulk of the emission overlaps the northeastern half of
the SNR shell.
\section{Summary and conclusions}
\label{sec:conclusions_outlook}
The H.E.S.S.\ Collaboration has completed its Galactic plane survey, which is an
observation and analysis program that spanned over a decade. This paper presents
the final results of the survey. The observations were performed with the
four-telescope H.E.S.S.\ Phase I array, which features a 5\degr\ FoV that is
well suited to scanning large regions of the sky like the Galactic plane. The
Phase I array has
a typical sensitivity to point-like $\gamma$-ray\ sources of 1\% Crab Nebula
integral flux ($E > 1$~TeV) in less than 25~h.
The H.E.S.S.\ Collaboration added a fifth, larger telescope to the array in 2012
(H.E.S.S.\ Phase II) to extend its sensitivity to lower energies as well as its
ability to rapidly reposition to observe transient phenomena. However, it also
features a smaller FoV than the four Phase I telescopes, making it much less
suited for scanning large regions. In addition, the HGPS had by then achieved a
more uniform exposure and its target sensitivity of 2\% Crab flux in the inner
Galaxy. Primarily for these reasons, as well as the diminishing gains
stemming from source significance scaling approximately as the square root of
livetime, the H.E.S.S.\ Collaboration in 2013 decided not to continue the HGPS
observation program.
First early results from the HGPS were published in 2005
\citep{2005Sci...307.1938A}. The observations at the time amounted to just 120~h
yet led to the detection of ten VHE $\gamma$-ray\ sources in the inner Galaxy,
eight of which were not previously known. Further results followed in 2006
\citep{ref:gps2006}, using 230~h of data and discovering an additional four
$\gamma$-ray\ sources. Since then, we have provided the community with periodic
updates, which had steadily increasing exposure and additional source
discoveries, by releasing new unidentified sources~\citep{ref_gps_unids2008} and
via published conference proceedings \citep{2008ICRC....2..579H, Chaves08_GPS,
ref:icrc09, ref:icrc11, Deil12_GPS, Carrigan13a_GPS, Carrigan13b_GPS}.
The HGPS data set (Sect.~\ref{sec:dataset}) is now over a factor of ten larger
than in 2006, comprising 2673~h of observations accumulated over the period 2004
to 2013. These data come from a variety of observations: observations from the
initially published surveys, targeted observations of known sources, follow-up
observations of newly discovered source candidates, observations to extend the
HGPS spatial coverage, and fill-up observations to achieve a more uniform
sensitivity across the Galactic plane (Fig.~\ref{fig:hgps_sensitivity}). The
energy threshold of the HGPS varies with the longitude observed but is typically
lower than 0.8~TeV for detections and maps (0.5~TeV for spectral analyses) and
as low as 0.4~TeV (0.2~TeV) in many regions, especially the innermost Galaxy
(Fig.~\ref{fig:hgps_energy_threshold_profiles}).
Compared to the previous publication, the HGPS was also expanded to cover a much
wider range of both longitude and latitude
(Fig.~\ref{fig:hgps_region_exposure_illustration}). In the first Galactic
quadrant, the HGPS now extends in longitude from the Galactic center to nearly
$\ell = 65\degr$, the northern limit of visibility from the southern-hemisphere
H.E.S.S.\ site. In the fourth Galactic quadrant, the HGPS coverage is continuous
and even extends beyond Vela to $\ell = 250\degr$ in the third quadrant. In
latitude, the coverage varies but is generally $b = \pm 3\degr$ and as large as
$b = \pm 5\degr$ in some regions to explore areas of particular interest off the
plane. The point-source sensitivity is better than 2\%~Crab along the Galactic
plane ($b = 0\degr$) over most of the longitudes covered by the HGPS
(Figs.~\ref{fig:hgps_sensitivity},~\ref{fig:hgps_sources_glat}). However, the
flux sensitivity varies significantly owing to the mix of observations
comprising the HGPS. It is better than 1\%~Crab in numerous regions but at a
more modest level of 2--10\%~Crab off-plane. The HGPS achieves the best
sensitivity at the Galactic center, reaching 0.3\% of the Crab flux.
To ensure robust results, the HGPS relies on results that agree between two
independent software frameworks (chains; Sect.~\ref{sec:dataset:events}) used to
calibrate raw Cherenkov data as well as reconstruct and analyze the $\gamma$-ray\
images and spectra. The primary software chain uses the Hillas method for event
reconstruction and an event classification method based on boosted decision trees.
The secondary (cross-check) chain uses an alternative event reconstruction and
classification based on EAS models, returning results that are in very good
agreement globally although there are some variations on a source-by-source
basis (discussed in Sect.~\ref{sec:results:previously}). Monte Carlo simulations
provide the instrument response functions that describe the performance of the
instrument. The mean angular resolution of H.E.S.S.\ (68\% containment radius of
the PSF) is $\sim$$0.08\degr$\ and varies by approximately 10\% across the
survey region.
We have generated a number of sky maps (images; Sect.~\ref{sec:maps}), which are
public data products\footnote{\url{https://www.mpi-hd.mpg.de/hfm/HESS/hgps}} and also form the basis for the HGPS
source catalog construction. To accumulate sufficient signal and search for
$\gamma$-ray\ emission of different sizes, we generated three different sets of
maps with events spatially correlated over radii of $0.1\degr$ (point-like),
$0.2\degr$, and $0.4\degr$. To subtract the background from hadronic
CRs passing $\gamma$-ray\ selections in the FoV, we developed an adaptive version
of the classic ring background method that is more flexible and can compensate
for large exclusion regions that minimize signal contaminating background
regions (Sect.~\ref{sec:adaptiveringmethod}). For each point in a sky map, we
calculated the $\gamma$-ray\ excess, statistical significance of the $\gamma$-ray\
excess, and $\gamma$-ray\ flux (or flux upper limit). The main map products
released (Sect.~\ref{sec:online:maps}) are those of significance
(Sect.~\ref{sec:significancemaps}), flux (Fig.~\ref{fig:fluxmap}) and upper
limits. Auxiliary maps include flux errors and sensitivity
(Fig.~\ref{fig:hgps_sensitivity}).
To detect and characterize the VHE $\gamma$-ray\ sources, we developed a
semi-automatic analysis pipeline to construct a source catalog (see
Sect.~\ref{sec:cc}). To disentangle individual $\gamma$-ray\ sources in complex
regions of overlapping emission, we implemented morphological modeling based on
two-dimensional maximum-likelihood estimation. We fit the $\gamma$-ray\ excess by
two-dimensional symmetric Gaussian components, keeping components with $TS>30$.
To arrive at the HGPS catalog, Gaussian components that did not correspond
to a clear emission peak in the main and cross-check analyses were rejected.
Some components that strongly overlapped were merged into a single source for
which position, extension, and flux were characterized by the moments of the
multi-Gaussian emission model. In this process, it was necessary to model the
underlying large-scale $\gamma$-ray\ emission along the Galactic plane to improve
the modeling of the discrete sources (Sect.~\ref{sec:cc:large-scale-emission}).
We chose to use an empirical model derived with a sliding window method,
$20\degr$ wide in longitude and Gaussian in latitude, whose Gaussian center,
amplitude, and width were fit to the excess outside exclusion regions. We
calculated source spectra using the reflected region background method when
possible, fitting PL spectral models and determining the best-fit normalization
and spectral index. Flux information is also available from the aforementioned
maps albeit assuming a spectral index of $\Gamma = 2.3$.
The HGPS source catalog includes 78 sources of VHE
$\gamma$-rays. Of these, \hgpsSourceCountAnalysed were detected with the HGPS
pipeline analysis. For completeness, the catalog includes an additional
\hgpsSourceCountCutout~H.E.S.S.\ sources from regions excluded from the HGPS
pipeline, for example, because of their complexity, such as the Galactic center
region and sources with shell-like morphologies. H.E.S.S.\ has previously published
the discovery of most of the HGPS sources, although in many cases the available
observation time used for the HGPS analysis is considerably larger. Of the
total 78~sources, 16 are new discoveries
published here for the first time. Five of these new sources are firmly
identified objects: HESS~J1554$-$550 and HESS~J1849$-$000 are PWNe
(Sect.~\ref{sec:HESS_J1554m550},~\ref{sec:HESS_J1849m000}), and
HESS~J1119$-$614, HESS~J1833$-$105, and HESS~J1846$-$029 are composite SNRs
(Sect.~\ref{sec:HESS_J1119m614},~\ref{sec:HESS_J1833m105},~\ref{sec:HESS_J1846m029}).
Three more of the new sources are spatially coincident with HE $\gamma$-ray\
pulsars, recently discovered in \emph{Fermi}-LAT\ data, and are thus plausible PWN
candidates.
The HGPS sources have diverse characteristics (Sect.~\ref{sec:results}). Apart
from the shell-like sources, most source morphologies are generally well-modeled
as symmetric two-dimensional Gaussians, but their sizes range from point-like
($\la 0.1\degr$) to $0.6\degr$ (Fig.~\ref{fig:hgps_sources_flux_extension}).
Their fluxes cover a wide range as well, from 0.6\%~Crab to 103\%~Crab, with
the majority in the range 1--20\%~Crab
(Fig.~\ref{fig:hgps_sources_flux_extension}). The cumulative $\log N$ -- $\log
S$ distribution above 10\%~Crab (containing 32 sources) is well described by a
power law of slope $-1.3~\pm~0.2$ (Fig.~\ref{fig:hgps_sources_log_n_log_s}),
matching the expectation of a power law of slope $-1$ from a population of
equal-luminosity sources homogeneously distributed in the Galactic disk. Below
10\%~Crab, the HGPS source catalog is incomplete and can only provide a lower
limit on the true number of fainter VHE sources (70 above 1\%~Crab). Spectral
indices range from hard ($\Gamma \approx 2.0$) to very soft ($\Gamma \approx
3.0$) in an approximately normal distribution centered at $2.4 \pm 0.3$
(Fig.~\ref{fig:sources_index}). The VHE sources cluster narrowly along the
Galactic plane (median $b = -0.20\degr$, with a spread of $0.51\degr$), in good
agreement with the distributions of SNRs, energetic pulsars, molecular gas, and
HE $\gamma$-ray\ sources (Fig.~\ref{fig:hgps_sources_glat}). Their distribution in
longitude (Fig.~\ref{fig:hgps_sources_glon}) shows a general correlation with
molecular gas.
To study the origin of the VHE $\gamma$-rays, we performed a systematic search to
associate the HGPS sources with known or suspected VHE source classes, based
largely on spatial compatibility with objects in the SNR and PWN catalog SNRcat,
the ATNF pulsar catalog, and the \emph{Fermi}-LAT\ 3FGL and 2FHL catalogs
(Sect.~\ref{sec:results:assoc_id}). By comparing the HGPS catalog to plausible
MWL counterpart catalogs, we come to one of the main conclusions of the HGPS
program: the majority (67, or 86\%) of the HGPS sources are
associated with at least one astronomical object that could potentially account
for the production of $\gamma$-rays\ at TeV energies. The unassociated sources
(11, or 14\%) are not necessarily dark, i.e., emitting
exclusively in the VHE domain; it is also possible that their counterparts were
missed by our association procedure. In short, most HGPS sources have firm
associations or at least plausible or potential counterparts in other wavelength
regimes. Whether there remains a population of truly dark VHE sources in the
HGPS can only be determined with deeper MWL studies.
We then used additional, stricter criteria, such as shell-like morphology or
variability, to establish firm identifications for 31 sources
(Fig.~\ref{fig:hgps_source_id}). We found the largest identified VHE source
class to be PWNe (12 sources, or 39\% of identified sources),
followed by shell-type SNRs (8, 26\%); composite SNRs
(8, 26\%), where both the interior PWN and SNR shell may
contribute to the emission; and high-energy binary systems (3,
10\%). At present, only 40\% of the HGPS sources can be firmly identified. This
is typically due to difficulties resolving ambiguity among competing scenarios
involving multiple associated objects, in large part because of the large
intrinsic sizes of VHE $\gamma$-ray\ sources.
The HGPS data set allows for population studies of sources. An early study of 15
globular clusters was published before the HGPS was completed \citep{GCPop}. Two
further such studies, on the primary Galactic VHE source classes of PWNe and
SNRs, are published as companion articles to this paper
\citep[][respectively]{HESS:PWNPOP,HESS:SNRUL}, together with more specific
studies on a number of microquasars \citep{HESS:MQ} and bow shocks of runaway
stars~\citep{HESS:Bow}. With the public release of the HGPS catalog along with
the sky maps, more comprehensive population studies of this kind will become possible.
Further insights into the Galactic VHE source population and diffuse emission in
the coming years can be expected. H.E.S.S., \emph{Fermi}-LAT\ and HAWC are surveying the
Milky Way; the analysis methods for the individual gamma-ray data sets, and
joint analysis methods combining multiple data sets are improving; and new
surveys at longer wavelengths (especially those detecting nonthermal emission in
the radio and X-ray bands) will become available soon. The next major leap
forward will be achieved by the Galactic plane survey of the Cherenkov Telescope
Array (CTA), which will consist of two arrays in the northern and southern
hemispheres \citep{2017arXiv170907997C}. The Galactic plane survey is a key
science project of CTA, and is planned to cover the whole Galactic plane, over a
wider energy band and with better angular resolution and sensitivity compared to
HGPS \citep{2013APh....43..317D, 2017arXiv170907997C}.
In conclusion, the additional exposure obtained since 2006, plus significant
improvements in analysis and reconstruction methods, allowed us to probe much
more of the Galaxy, whether it be more distant sources, fainter nearby sources,
or regions never before observed at TeV energies. The HGPS program clearly
demonstrates that sources of VHE $\gamma$-ray\ emission are common in the Galaxy
and are linked to diverse sites of high-energy particle acceleration.
\section{Online material}
\label{sec:online}
In this section, we provide further information about the public data products
released in electronic format. We also provide some guidance and caveats
regarding the correct use of these products. The HGPS survey maps and source
catalog presented in this paper are available for download at:
\begin{center}
\url{https://www.mpi-hd.mpg.de/hfm/HESS/hgps}
\end{center}
In addition to the figures and tables present in Sect.~\ref{sec:results}, there
are a series of HGPS maps and tables available online:
\begin{itemize}
\item Figure~\ref{fig:fluxmap}: HGPS flux map
\item Figures~\ref{fig:hgps_survey_mwl_1}-\ref{fig:hgps_survey_mwl_4}:
Four-panel HGPS significance maps with all VHE sources and MWL associations labeled
\item Table~\ref{tab:hgps_catalog}: HGPS catalog source morphology summary
\item Table~\ref{tab:hgps_spectra}: HGPS catalog source spectrum summary
\item Table~\ref{tab:hgps_associations}: HGPS catalog source associations
\end{itemize}
\subsection{Sky maps}
\label{sec:online:maps}
\subsubsection*{Description}
Survey maps are released in \textit{FITS} format \citep{Pence:2010}, using a
Cartesian (CAR) projection in Galactic coordinates
\citep{2002AandA...395.1077C}. The maps contain the whole HGPS region
($-114\degr < \ell < 75\degr$ and $-5\degr < b < +5\degr$), with a binning of 0.02\degr\ per pixel, corresponding to a total
size of 9400~$\times$~500~pixels. Maps are available for the following
quantities:
\begin{itemize}
\item Statistical significance (described in Sect.~\ref{sec:significancemaps})
\item Flux (described in Sect.~\ref{sec:fluxmaps})
\item 1$\sigma$ flux error (described in Sect.~\ref{sec:errfluxmap})
\item Flux upper limit (described in Sect.~\ref{sec:errfluxmap})
\item Sensitivity (described in Sect.~\ref{sec:sensmaps})
\end{itemize}
We provide all flux and flux-like quantities as integral photon fluxes
above 1~TeV assuming a PL spectrum for the differential flux with an index
$\Gamma = 2.3$. Each map is provided for two correlation radii,
$R_{\mathrm{c}} = 0.1\degr$ and $0.2\degr$.
A total of ten files are released (five quantities, each for two $R_{\mathrm{c}}$),
with file names \verb=hgps_map_<quantity>_<radius>deg_v<version>.fits.gz=, e.g.,
the significance map with $R_{\mathrm{c}} = 0.2\degr$ can be found in the file
\verb=hgps_map_significance_0.2deg_v1.fits.gz=.
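As a usage illustration, the following minimal Python sketch reads a released
map value at a given Galactic position. It assumes the \texttt{astropy} package
and that the image is stored in the primary HDU of the file; these assumptions
should be checked against the downloaded products.
\begin{verbatim}
from astropy.io import fits
from astropy.wcs import WCS

def map_value_at(filename, glon, glat):
    """Map value at Galactic coordinates (deg).  The maps are already
    correlated, so this is the value integrated within R_c at (l, b)."""
    with fits.open(filename) as hdulist:
        hdu = hdulist[0]                 # assumes image in the primary HDU
        wcs = WCS(hdu.header)
        x, y = wcs.all_world2pix(glon, glat, 0)
        return hdu.data[int(round(float(y))), int(round(float(x)))]

# significance within R_c = 0.2 deg around the Galactic center
print(map_value_at("hgps_map_significance_0.2deg_v1.fits.gz", 0.0, 0.0))
\end{verbatim}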
\subsubsection*{Usage notes and caveats}
\begin{itemize}
\item Since none of the released flux-derived maps are computed for a point-like
source hypothesis, information extracted from these maps should always be used
in the context of full containment of the PSF. Otherwise, this could yield
incorrect information, for example, a flux upper limit that is too low
(optimistic). In particular, since the H.E.S.S.\ PSF has a size comparable to
0.1\degr, the maps computed with this correlation radius do not fully contain
the PSF. In this case, one only gets roughly 80\% of the flux when reading a
pixel value at a given position of a point-like source. Those maps should
therefore be used with care when extracting a flux value (see also below).
\item The released maps are already spatially correlated (oversampled);
therefore, pixel values should be read at the corresponding position of interest
for a circular region of radius $R_{\mathrm{c}}$. In the case of a region size
between two of the provided $R_{\mathrm{c}}$ values, interpolation could be used
as a first approximation. The oversampling also implies that maps should not be
used for morphology studies (e.g., production of radial profiles or fitting).
\item Some caution should be taken for values in the $0.2\degr$ correlation maps
where a gradient in exposure is present, since the background is estimated at
the center of the ROI and not averaged across it (see
Sect.~\ref{sec:background_estimation}).
\item The significance maps contain, at each position, the statistical
significance of the $\gamma$-ray\ excess. This value is not corrected for trials
and the large-scale emission component is not taken into account in its
computation.
\item We recommend assuming a systematic error of 30\% on the flux values (see
Sect.~\ref{sec:cc:discussion}).
\end{itemize}
\subsection{Source catalog}
\label{sec:online:catalog}
\subsubsection*{Description}
The HGPS source catalog (construction described in Sect.~\ref{sec:cc}) and
a number of other tables are available as BINTABLE \emph{FITS} extensions in
the \verb=hgps_catalog_v1.fits.gz= file.
An overview of the available tables (including links to the tables in this paper
describing the columns in detail) is given in Table~\ref{tab:hgps_fits_catalog}.
Here is some further information on the content of the tables:
\begin{itemize}
\item \verb=HGPS_Sources= : The HGPS catalog, one source per row, identified via
the \verb=Source_Name= column, which is in the format
\verb=HESS JHHMM=$\pm$\verb=DDd=.
\item \verb=HGPS_Gauss_Components= : The HGPS Gaussian component list, one
component per row. Reference back to \verb=HGPS_Sources= catalog via the
\verb=Source_Name= column (if the component is part of a source).
\item \verb=HGPS_Associations= : The HGPS association list, one association per
row. Reference back to \verb=HGPS_Sources= catalog via the \verb=Source_Name=
column. A given HGPS catalog source can appear zero, one, or multiple times in
this table.
\end{itemize}
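As an illustration, the tables can be read with the \texttt{astropy} package as
in the following minimal Python sketch; the example source name is only
illustrative and should be checked against the released file.
\begin{verbatim}
from astropy.table import Table

filename = "hgps_catalog_v1.fits.gz"
sources = Table.read(filename, hdu="HGPS_Sources")
assoc = Table.read(filename, hdu="HGPS_Associations")

# all associations of one catalog source, linked by the Source_Name column
name = "HESS J1825-137"   # illustrative name; check against the file
print(assoc[assoc["Source_Name"] == name])
\end{verbatim}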
\subsubsection*{Usage notes and caveats}
For reasons of reproducibility, we decided to release the complete emission
model on which the analysis is based. This includes the full list of Gaussian
components and parameters of the large-scale emission model. When working
with these data, be aware of the following usage notes and caveats:
\begin{itemize}
\item Some of the components are unstable and are not confirmed by the
cross-check analysis. Use the source catalog, not the component list, for
studies based on the HGPS.
\item For the HGPS catalog, we did not perform detailed per-source systematic
error estimates. In general, when using spectra from the HGPS catalog, we
recommend assuming a systematic error of 30\% on the absolute flux and 0.2 on
the spectral index.
\item In Fig.~\ref{fig:hgps_sources_spectrum_map_flux_comparison}, there are a
few sources where the integral flux estimate differs by more than 30\% when
using the two methods discussed in this paper. As discussed in
Sect.~\ref{sec:cc:discussion}, the estimate of a source spectrum is affected by
the assumed source morphology, the diffuse $\gamma$-ray\ and atmospheric hadronic
background models, and uncertainties in the instrument response functions. In particular,
the integral flux estimate may be uncertain by more than 30\% for sources with
relatively low significance that are not spatially isolated from other sources.
In those cases, one can assume the difference between \verb=Flux_Map= and
\verb=Flux_Spec_Int_1TeV= to be a lower limit on the systematic error.
\end{itemize}
\begin{table*}
\caption{
HGPS catalog FITS tables. See Sect.~\ref{sec:online:catalog}.
}
\label{tab:hgps_fits_catalog}
\centering
\begin{tabular}{llrl}
\hline \hline
HDU Extension name & Description & Rows & Column description \\
\hline
HGPS\_Sources & HGPS source catalog & 78 & see Table~\ref{tab:hgps_sources_columns} \\
HGPS\_Gauss\_Components & HGPS component list & 98 & see Table~\ref{tab:hgps_component_columns} \\
HGPS\_Associations & HGPS association list & 223 & see Table~\ref{tab:hgps_associations_columns} \\
HGPS\_Identifications & HGPS identification list & 31 & see Table~\ref{tab:hgps_identifications_columns} \\
HGPS\_Large\_Scale\_Component & HGPS large-scale emission model parameters & 50 & see Table~\ref{tab:hgps_diffuse_columns} \\
SNRcat & Bundled version of SNRcat used for associations & 282 & \\
\hline
\end{tabular}
\end{table*}
\onecolumn
\LTcapwidth=\textwidth
\begin{longtable}{llp{0.5\textwidth}}
\caption[HGPS FITS table columns for \texttt{HGPS\_Sources}]{
HGPS FITS table columns for \texttt{HGPS\_Sources}, the main source catalog.
The column descriptions link back to sections and equations in the main text where needed.
}
\label{tab:hgps_sources_columns}\\
\hline \hline
Column & Unit & Description \\
\hline
\endfirsthead
\caption{continued.}\\
\hline\hline
Column & Unit & Description \\
\hline
\endhead
\hline
\endfoot
\input{tables/hgps_sources_columns_data.tex}
\end{longtable}
\begin{table*}[h!]
\caption[HGPS FITS table columns for \texttt{HGPS\_Gauss\_Components}]{
HGPS FITS table columns for \texttt{HGPS\_Gauss\_Components}.
See Sects.~\ref{sec:cc:components} and \ref{sec:cc:source_characterization}.
}
\label{tab:hgps_component_columns}
\centering
\begin{tabular}{llll}
\hline\hline
Column & Unit & Description\\
\hline
\input{tables/hgps_gauss_components_columns_data}
\end{tabular}
\end{table*}
\begin{table*}[h!]
\caption[HGPS FITS table columns for \texttt{HGPS\_Associations}]{
HGPS FITS table columns for \texttt{HGPS\_Associations}.
See Sect.~\ref{sec:results:assoc_id}.
}
\label{tab:hgps_associations_columns}
\centering
\begin{tabular}{llll}
\hline\hline
Column & Unit & Description\\
\hline
\input{tables/hgps_associations_columns_data}
\end{tabular}
\end{table*}
\begin{table*}[h!]
\caption[HGPS FITS table columns for \texttt{HGPS\_Identifications}]{
HGPS FITS table columns for \texttt{HGPS\_Identifications}.
See Sect.~\ref{sec:identifications}.
}
\label{tab:hgps_identifications_columns}
\centering
\begin{tabular}{llll}
\hline\hline
Column & Unit & Description\\
\hline
\input{tables/hgps_identifications_columns_data}
\end{tabular}
\end{table*}
\begin{table*}[h!]
\caption[HGPS FITS table columns for \texttt{HGPS\_Large\_Scale\_Component}]{
HGPS FITS table columns for \texttt{HGPS\_Large\_Scale\_Component}.
See Sect.~\ref{sec:cc:large-scale-emission}.
}
\label{tab:hgps_diffuse_columns}
\centering
\begin{tabular}{llll}
\hline\hline
Column & Unit & Description\\
\hline
\input{tables/hgps_large_scale_component_columns_data}
\end{tabular}
\end{table*}
\section*{Acknowledgements}
This work made extensive use of gamma-cat\footnote{\url{https://github.com/gammapy/gamma-cat}},
SNRcat\footnote{\url{http://www.physics.umanitoba.ca/snr/SNRcat}} \citep{SNRcat}, ATNF\footnote{\url{http://www.atnf.csiro.au/research/pulsar/psrcat}}
\citep{Manchester:2005}, SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad}}
\citep{2000AnAS..143....9W} and NASA's Astrophysics Data System Bibliographic
Services.
For data analysis, we made extensive use of the Python packages
Gammapy\footnote{\url{https://github.com/gammapy/gammapy}} \citep{Donath2015, 2017arXiv170901751D},
Astropy\footnote{\url{http://www.astropy.org}} \citep{2013AandA...558A..33A} and
Sherpa\footnote{\url{http://cxc.cfa.harvard.edu/sherpa}} \citep{Freeman:2001}, as well as Numpy
\citep{numpy}, Scipy \citep{scipy} and Matplotlib \citep{matplotlib}.
The support of the Namibian authorities and the University of Namibia in
facilitating the construction and operation of H.E.S.S. is gratefully
acknowledged, as is the support by the German Ministry for Education and
Research (BMBF), the Max Planck Society, the German Research Foundation (DFG),
the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle
Interdisciplinary Programme of the CNRS, the U.K. Science and Technology
Facilities Council (STFC), the IPNP of the Charles University, the Czech Science
Foundation, the Polish Ministry of Science and Higher Education, the South
African Department of Science and Technology and National Research Foundation,
the University of Namibia, the Innsbruck University, the Austrian Science Fund
(FWF), and the Austrian Federal Ministry for Science, Research and Economy, and
by the University of Adelaide and the Australian Research Council. We appreciate
the excellent work of the technical support staff in Berlin, Durham, Hamburg,
Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and
operation of the equipment. This work benefited from services provided by the
H.E.S.S. Virtual Organisation, supported by the national resource providers of
the EGI Federation.
\listofobjects
\bibliographystyle{aa}
\section{Introduction}
\label{sec:1}
Relative movements of tectonic plates lead to a slow accumulation of stress over time along the faults near the plate boundaries, which is sometimes exhibited as abundant micro-earthquakes. The accumulated energy is then suddenly released during an earthquake once the stress loading reaches a trigger threshold. This activity may in turn stimulate neighboring faults, developing a sequence of occurrences in space and time that brings the dynamic medium to a new state of equilibrium \cite{bib:26}, \cite{bib:28}. Therefore, while the major energy release may seem isolated to a particular time and fault location, in general an earthquake cannot be analyzed locally in time or space \cite{bib:1}, \cite{bib:2}, and it is reasonable to look for spatial and temporal statistical dependences (i.e., correlations) in previous recordings to find important information about impending earthquakes. Given the unpredictability of seismic events, the most common methodology is statistical, based on point processes \cite{bib:3}, \cite{bib:27}. One difficulty is that major earthquakes are rare and the physical processes are time varying, so there is insufficient data to create accurate statistical models. One alternative is to relate micro-earthquakes to larger earthquakes, taking advantage of the larger density of micro-earthquakes and of the spatial information available across stations to infer statistical relationships across scales. The seismicity of micro-earthquakes over a long-term period of one decade prior to a major earthquake has been used to explain how the seismic or tectonic processes change ahead of large earthquakes \cite{bib:22}, but there has been no attempt at defining significant statistical criteria based on micro-earthquake activity that can be used as a precursor of large earthquakes. This work proposes to apply signal processing methods to detect abnormal seismic activities on a set of faults. In order to capitalize on the spatio-temporal structure of the micro-earthquake data, a pre-earthquake state, also called earthquake precursory activity, is statistically defined. The proposed method quantifies the interaction between the faults\textquotesingle{} activities, captured through the micro-earthquakes recorded by a network of stations distributed across space, and uses distance measures on spike trains to evaluate their dissimilarity structure over time.
A typical seismic network includes several monitoring stations, which may be located tens to hundreds of kilometers apart from each other. These stations record local seismic activities over time. For a micro-earthquake network, the sensors are even able to detect micro-earthquakes (i.e., events with magnitudes $M\leq2$ Richter), which cannot be felt beyond several kilometers from their epicenters. While the magnitudes of these low-intensity events carry information, this work considers only the timings of micro-earthquakes, so the recordings reduce to time series of events known as spike trains, or point processes. Previously, statisticians have modeled the event distribution of strong earthquakes $(M\geq6)$ over time \cite{bib:3}. Recent studies have tried to find the distribution of the number of earthquakes with magnitude larger than two at a single location to find an indicator of the temporal correlation in a single spike train \cite{bib:4}. Another approach \cite{bib:5} characterized the behavior of earthquake aftershocks $M\geq5.5$ using prototype point patterns by clustering the sequences of aftershocks of given main shocks $(7.5\leq M\leq 8)$.
This paper analyzes the dissimilarity of the spike trains obtained from micro-earthquake readings at the stations of a broad-band seismic network, with the aim of extracting precursors for all earthquakes $M\geq 4$ in the region. Micro-earthquakes preserve the spatial information because these small events are only detected by the closest station. Larger events, however, are recorded in almost every local station, making it unreasonable to evaluate dissimilarity between the spike trains of larger earthquakes. Here, dissimilarity is selected because it can be efficiently estimated by a divergence between spike trains, while the converse (similarity) is much harder to quantify because it requires measures of statistical dependence. The Victor-Purpura (VP) distance measure \cite{bib:6}, \cite{bib:7} is selected, which is sensitive to the rate and coincidence of events. Furthermore, another measure from information theoretic learning, the Cauchy-Schwarz (CS) divergence \cite{bib:8}, is used, which quantifies the distance between the probability laws of two point processes in probability space. This latter metric is generally more robust than distance measures, and fewer assumptions are made when using it. The results of this study suggest that extreme dissimilarities are followed by light to significant earthquakes.
This paper is organized as follows. Sect.~\ref{sec:2} defines the pre-earthquake state and outlines how the significance of this definition is tested by the surrogate method. Sect.~\ref{sec:3} describes the distance measures that are employed to analyze the spike trains. Sect.~\ref{sec:4} covers the characteristics of the seismic recordings used in this study. Sect.~\ref{sec:5} presents the results of applying the dissimilarity measures to the spike trains, along with how the dissimilarities change over time. Lastly, Sect.~\ref{sec:6} states the conclusions and offers some suggestions for possible extensions.
\section{Methodology}\label{sec:2}
Here, a statistical approach is pursued to identify abnormal behaviors ahead of large earthquakes. This approach starts with the micro-earthquakes\textquotesingle{} event times recorded in at least two stations of a sensitive broadband seismic network and applies point process distance measures to evaluate the dissimilarity of the spatial seismic activity over time. Abnormal values are then identified using a statistical threshold and are labeled as the pre-earthquake state. Fig.~\ref{fig:1} illustrates the block diagram of this method, indicating how the signal processing approaches are applied to the station measurements of the faults. These steps are discussed in detail in the rest of this paper.
\indent
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.55]{Fig1.pdf}
\caption{The block diagram of the method. The upper block illustrates a typical area with some faults. Stations of a seismic network (the bullets) and corresponding spike trains are also depicted}
\label{fig:1}
\end{figure}
\subsection{Definition of the Pre-Earthquake State}
\label{sec:2.1}
The pre-earthquake state in this paper is defined as a transient or sustained increase in the dissimilarity of spatial seismic activity, but only if the amount of dissimilarity passes a statistical threshold value. The extreme (maximal) dissimilarity in the collected data quantifies a critical change in the connectivity of the regional faults, which can result in a large earthquake that settles the whole system in a new state. The pre-earthquake state is based on a statistical evaluation of pairwise dissimilarity instead of physical principles. It is, however, compatible with the spring-block physical model for earthquakes and other seismic activities \cite{bib:23}. Indeed, the Burridge-Knopoff spring-block model shows that the slip of one fault redefines the forces on local faults, so further slips occur and subsequently cause multiple reactionary events. When the system stress loading reaches a threshold value, a large earthquake is triggered, after which the process of relaxation begins. Based on the idea of self-organized criticality for earthquakes \cite{bib:24}, both the trigger point and the relaxation point can be characterized by the spatio-temporal correlation among the faults. This is exactly what the definition of the pre-earthquake state is based on.
\subsection{Statistical Test Design}
\label{sec:2.2}
To examine the hypothesis that \enquote{major earthquakes are preceded by an increase in dissimilarity of micro-earthquakes,} the null hypothesis is defined as \enquote{increases in dissimilarity are merely the results of local fluctuations and they are not related to an earthquake}. The goal is to find whether the null hypothesis can be rejected at a certain level. One option to implement the null hypothesis is to generate a Poisson point process as a surrogate for the micro-earthquake spike trains. However, this is not easy because the rate of micro-earthquakes is continuously changing, as demonstrated in Fig.~\ref{fig:3}, and this would confound the test. One widely used alternative is to synthetically create a set of spike trains by modifying the original spikes. This modification should destroy the feature of interest, which is the correlation between the spike trains of earthquakes, while keeping the other statistical properties, such as density, intact. The surrogate data is then used to estimate the acceptance interval for normal fluctuations in dissimilarity. A dissimilarity beyond the acceptance interval is considered an anomaly.
The dissimilarity metrics that are used in this paper do not distinguish between coincidence and changes in firing rate. Therefore, it seems quite reasonable to use surrogates in which both coincidences and firing rates are disrupted. However, if the method completely destroys the firing rate profile and reduces the inhomogeneous point process to a homogeneous one, even a small change in the rate profile will lead to a false positive. The method employed to generate the surrogate data is uniform spike time dithering \cite{bib:20}, which randomly displaces spikes within a dithering window. To enforce causality in prediction, the method is slightly modified here such that the dithering window always follows the original position of each spike.
Using spike time dithering to generate surrogate data, there is a hyperparameter, which is the size of the dithering window. The dithering window needs to be long enough to destroy any coincidences in the original signal. The process of dithering also somewhat smooths the local rate profile, an effect that increases with the window length. As a rule of thumb, a window length two to four times that of the window used to compute the distance is recommended \cite{bib:20} for sufficient correlation destruction. Here, a statistical approach is used to find the optimal length of the dithering window, which is the minimum length that guarantees surrogates with the correlation destroyed. This length is equal to the lag at which the original spike trains become uncorrelated, corresponding to the first local minimum of the cross-correlation histogram \cite{bib:30} of the original spike trains. In Sect.~\ref{sec:5}, the binned cross-correlation \cite{bib:29} between the micro-earthquake spike trains is computed for this purpose.
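For concreteness, the following minimal Python sketch implements the causal variant of uniform spike time dithering described above; the toy event times and the random seed are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def causal_dither(spike_times, window):
    """Uniform spike time dithering with a causal window: each spike is
    displaced forward in time, uniformly within [t, t + window)."""
    spikes = np.asarray(spike_times, dtype=float)
    return np.sort(spikes + rng.uniform(0.0, window, size=spikes.shape))

# toy event times in days; Sect. 5 uses a 6-day window and 1,000 surrogates
spike_times = np.array([0.3, 1.1, 1.5, 4.2, 4.25, 9.8])
surrogates = [causal_dither(spike_times, window=6.0) for _ in range(1000)]
\end{verbatim}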
Using the dithering method explained above, two sets of surrogates are obtained from the micro-earthquake data of the two stations. To produce the acceptance band, distances are computed between one-to-one pairs of surrogates from the two stations and are then brought to an identical statistical distribution via quantile normalization.
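A minimal Python sketch of a standard quantile normalization is given below; it assumes the two distance vectors are matched one to one and therefore have equal length, and it shows only one common implementation of the technique.
\begin{verbatim}
import numpy as np

def quantile_normalize(a, b):
    """Quantile normalization of two equal-length samples: each value is
    replaced by the mean of the two samples' order statistics at its rank."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ref = 0.5 * (np.sort(a) + np.sort(b))   # shared quantile profile
    an, bn = np.empty_like(a), np.empty_like(b)
    an[np.argsort(a)] = ref
    bn[np.argsort(b)] = ref
    return an, bn
\end{verbatim}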
Other free parameters of each method are selected and fixed for the whole time period using 5-fold cross-validation, where each fold roughly equals one month. The positive predictive value, that is, the proportion of true positives in the positive calls, is used as the measure of fit.
\section{Spike Trains Distance Measures}\label{sec:3}
The concept of distance is closely, and inversely, related to statistical dependence, which extends the concept of correlation between time series. However, unlike conventional amplitude-based signals, spike train spaces are devoid of an obvious algebra. To tackle this difficulty, time binning may be used to map a spike train to Euclidean space, which allows the use of the Euclidean inner product. This process, however, has disadvantages. While binning with a coarse bin size sacrifices time precision, smaller bin sizes may keep the temporal structure but are sensitive to temporal fluctuations and also suffer from a dimensionality problem.
Binless measures of spike train dissimilarity have been proposed to overcome these difficulties. Most of these measures consider the spike trains to be points in an abstract metric space, as proposed by Victor and Purpura \cite{bib:6}, \cite{bib:7}. The widely used time-dependent approaches include the Victor-Purpura distance \cite{bib:7}, the van Rossum distance \cite{bib:9}, and the similarity measure proposed by Schreiber et al. \cite{bib:10} (see \cite{bib:11}, \cite{bib:12} for a comparison). All of these measures depend upon a smoothing parameter that controls the method\textquotesingle{}s sensitivity to dissimilarities in spike rate or spike coincidences. Hence, they still include a free parameter, which indicates the time precision for the distance analysis, analogous to a bin size but without time quantization.
The Victor-Purpura distance is one of the measures used in this work. In addition, the Cauchy-Schwarz dissimilarity measure is used, which corresponds to the correlation measure used by Schreiber et al.
\subsection{Victor-Purpura\textquotesingle s Distance}
\label{sec:3.1}
The VP distance defines the dissimilarity between two spike trains in terms of the minimum cost of transforming one spike train into the other by just three elementary operations: spike insertion, spike deletion (each with a cost of one), and shifting one spike in time to synchronize with the other. The cost of shifting a spike from $t_m$ to $t_n$ is $\mathrm{q}|t_m-t_n|$, where $\mathrm{q}$ defines the time scale and has inverse time units. The VP distance between spike trains $s_i$ and $s_j$ is defined as
\begin{equation}
d_{\mathrm{VP}}(s_i,s_j)\triangleq \min_{C(s_i \leftrightarrow s_j)} \sum_{l}{K_\mathrm{q}(t_{c_i[l]}^i,t_{c_j[l]}^j)}, \label{eqn:1}
\end{equation}
where $C(s_i\leftrightarrow s_j) $ is the set of all possible sequences of elementary operations that transform $s_i$ to $s_j$, or vice-versa, and $c_{(\cdot)}[\cdot] \in C(s_i\leftrightarrow s_j)$. That is, $c_i[l]$ denotes the index of the spike time of $s_i$ manipulated in the $l$-th step of a sequence. $K_\mathrm{q}(t_{c_i[l]}^i,t_{c_j[l]}^j)$ is the cost associated with the step of mapping the $c_i[l]$-th spike of $s_i$ at $t_{c_i[l]}^i$ to $t_{c_j[l]}^j$, corresponding to $c_j[l]$-th spike of $s_j$, or vice-versa.
Given two spike trains, each with a single spike, the distance is
\begin{equation}
K_\mathrm{q}(t_m^i,t_n^j)=\min(\mathrm{q}|t_m^i-t_n^j|,\,2).\label{eqn:2}
\end{equation}
This means that the VP algorithm shifts a spike at most, $2/{\mathrm{q}}$ far from the other. Otherwise, it is cheaper to delete one of the spikes and insert another for a cost of 2. The distance $K_\mathrm{q}$ may be considered as a scaled and inverted triangular kernel applied to the spike trains \cite{bib:11}. This interpretation encourages the use of alternate dissimilarity measures based on different kernels.
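The minimization in Eq.~(\ref{eqn:1}) can be carried out with a standard dynamic program, analogous to an edit distance. The following minimal Python sketch is one such implementation; the cap of Eq.~(\ref{eqn:2}) is implicit, because a shift more expensive than a deletion plus an insertion is never selected.
\begin{verbatim}
import numpy as np

def vp_distance(t1, t2, q):
    """Victor-Purpura distance by dynamic programming, O(N1*N2) cost.
    Deleting or inserting a spike costs 1; shifting costs q*|dt|."""
    t1, t2 = np.sort(t1), np.sort(t2)
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)    # delete all remaining spikes of t1
    G[0, :] = np.arange(m + 1)    # insert all remaining spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1.0,
                          G[i, j - 1] + 1.0,
                          G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]
\end{verbatim}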
\subsection{Cauchy-Schwarz Dissimilarity}
\label{sec:3.2}
An alternative dissimilarity measure based on the Cauchy-Schwarz (CS) divergence \cite{bib:8} uses the Laplacian kernel. The kernel size $\tau$ tunes the time scale of the measure and plays the reciprocal role of the free parameter $\mathrm{q}$ of the VP distance \cite{bib:11}. Here, by choosing a large $\tau$, the measure is more sensitive to dissimilarity in the firing rates of the spike trains, similar to the VP distance with a small $\mathrm{q}$ value. It can also be defined from the inner product of the intensity functions (firing rates) of the spike trains in $L_2$.
For a spike train $s_i$ with $N_i$ spikes on the time interval $[0,T]$ and the spike times $\{t_m^i, m=1,\cdots,N_i\}$, $s_i$ can be represented as a sum of time-shifted impulses
\begin{equation}
s_i(t)=\sum_{m=1}^{N_i}{\delta(t-t_m^i)}.\label{eqn:3}
\end{equation}
The firing rate $\lambda_{s_i}(t)$ can be estimated using a kernel smoothing representation of the spike train as
\begin{equation}
{\hat{\lambda}}_{s_i}(t)= \sum_{m=1}^{N_i}{h(t-t_m^i)},\label{eqn:4}
\end{equation}
with $h(t)$ as the smoothing kernel. This kernel needs to be non-negative valued with a unit area constraint. The memoryless cross intensity (mCI) kernel \cite{bib:13} is defined on spike trains as
\begin{equation}
I(s_i,s_j)=\int_{-\infty}^{+\infty}{{\hat{\lambda}_{s_i}}(t){\hat{\lambda}_{s_j}}(t)}dt.\label{eqn:5}
\end{equation}
Using exponential decay for kernel smoothing, the mCI kernel can be evaluated efficiently as
\begin{equation}
I(s_i,s_j)=\frac{1}{N_i N_j}\sum_{m=1}^{N_i} \sum_{n=1}^{N_j} {\kappa(t_m^i,t_n^j)},\label{eqn:6}
\end{equation}
where $\kappa(\cdot)$ is the Laplacian kernel \cite{bib:14}. The Cauchy-Schwarz dissimilarity is then defined as
\begin{equation}
d_{\mathrm{CS}}(s_i,s_j)=-\log \frac{I^2(s_i,s_j)}{I(s_i,s_i).I(s_j,s_j)}.\label{eqn:7}
\end{equation}
Mercer's theorem \cite{bib:15} implies that for the symmetric non-negative definite function $\kappa(\cdot)$ that is square integrable, the kernel has an eigen-decomposition as
\begin{equation}
\kappa(t_m^i,t_n^j)=\langle \mathrm{\Phi}_m^i, \mathrm{\Phi}_n^j\rangle_{H_k},\label{eqn:8}
\end{equation}
where $\mathrm{\Phi(\cdot)}$ is the nonlinear mapping from the input space to the reproducing kernel Hilbert space $H_k$ induced by the kernel function. Thus, Eq.~(\ref{eqn:7}) is equivalent to
\begin{equation}
d_{\mathrm{CS}}(s_i,s_j) = -\log \frac{(\sum_{m=1}^{N_i} \sum_{n=1}^{N_j} {\langle \mathrm{\Phi}_m^i, \mathrm{\Phi}_n^j\rangle_{H_k}})^2}{(\sum_{m=1}^{N_i} \sum_{n=1}^{N_i} {\langle \mathrm{\Phi}_m^i, \mathrm{\Phi}_n^i \rangle_{H_k}})(\sum_{m=1}^{N_j} \sum_{n=1}^{N_j} {\langle \mathrm{\Phi}_m^j, \mathrm{\Phi}_n^j \rangle_{H_k}})}.
\label{eqn:9}
\end{equation}
The RKHS \cite{bib:21} interpretation of the CS divergence is interesting because it does not require explicit PDF estimation. Instead, this representation provides enough room to extend the algorithm for earthquake precursor detection using the properties of the functional space, as will be discussed later in this paper.
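For concreteness, a minimal Python sketch of Eqs.~(\ref{eqn:6}) and (\ref{eqn:7}) is given below. Since the $1/(N_i N_j)$ normalization of the mCI kernel cancels in Eq.~(\ref{eqn:7}), including or omitting it does not change the dissimilarity.
\begin{verbatim}
import numpy as np

def mci(ti, tj, tau):
    """Memoryless cross-intensity kernel, Eq. (6), with a Laplacian kernel
    of size tau; the 1/(Ni*Nj) factor cancels in the CS divergence."""
    d = np.subtract.outer(np.asarray(ti, float), np.asarray(tj, float))
    return np.exp(-np.abs(d) / tau).sum() / (len(ti) * len(tj))

def cs_dissimilarity(ti, tj, tau):
    """Cauchy-Schwarz dissimilarity between two spike trains, Eq. (7)."""
    return -np.log(mci(ti, tj, tau) ** 2
                   / (mci(ti, ti, tau) * mci(tj, tj, tau)))
\end{verbatim}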
\section{Experimental Data}\label{sec:4}
The original data is continuously recorded by surface broadband stations of the Iranian Seismological Center (IRSC). The recordings are digitized and sent to a remote network center over satellite communication channels. At the network center, the data is analyzed by a virtual seismic analyst to identify the events in real time. The outcomes are also reviewed by an experienced seismic analyst. In this study, micro-earthquake data identified in two stations of the IRSC network, Tabriz sub-network, from March 15, 2012 to August 11, 2012, are used for the experiments. This is a five-month period prior to the M6.4 earthquake in northwest Iran. To define the target area for which precursors (warnings) may be derived from this data set, the existing studies that explain the regional tectonic settings \cite{bib:17}, \cite{bib:25} are useful. Fig.~\ref{fig:2} illustrates the tectonic map of the Alpine system and the position of the two stations S1 and S2 (the bullets). These stations are very close to the boundaries of the Iranian, Turkish, Van, and Arabian tectonic plates. The target area of this study is located at the intersection of these plates and is shown within the dashed-line rectangle (latitude:~$35.6$ to $43.1$ degrees, longitude:~$35.5$ to $49.2$ degrees). Unfortunately, not every seismic station covering the whole area is accessible. However, this figure explains why a distant earthquake in the rectangle may have a precursor provided by the two stations of this study.\\
\begin{table}[htbp]
\small
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{The Complete Set of Earthquakes E1 to E41\tnote{a}}
\label{tab:1}
\begin{tabular}{llllllll}
\hline\noalign{\smallskip}
& DATE & TIME & MAG. & LOCATION & LAT. & LON. & DEP.\tnote{b}\\% & USGS ID\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{E1} &\textbf{2012-03-18} &\textbf{02:38:16}& \textbf{4.4}& northern Iran& 36.82& 49.20& 14 \\%&usp000jgdx\\
\textbf{E2} &\textbf{2012-03-23}&\textbf{15:43:37}& \textbf{4.2} & eastern Turkey& 38.94& 43.62& 5 \\%&usp000jgp7\\
E3 &2012-03-24& 06:57:47&4.2& eastern Turkey& 38.92& 43.53& 5\\%& usp000jgq8\\
E4& 2012-03-25& 14:50:29& 4& eastern Turkey& 39.94& 42.94& 3.4\\%& usp000jgs6\\
\textbf{E5}& \textbf{2012-03-26}&\textbf{10:35:32}& \textbf{5} & eastern Turkey& 39.17& 42.33& 5\\%& usp000jgts\\
\textbf{E6}&\textbf{2012-03-31}&\textbf{10:38:18}& \textbf{4.2} & eastern Turkey& 39.079& 43.78& 5\\%& usp000jh1m\\
\textbf{E7}&\textbf{2012-04-04}&\textbf{09:41:40}& \textbf{4.4} & eastern Turkey& 38.88& 43.57& 2.6\\%& usp000jh7z\\
\textbf{E8}&\textbf{2012-04-04}&\textbf{11:05:16}& \textbf{4} & Turkey-Syria border&36.93& 37.05& 8.4\\%& usp000jh80\\
E9& 2012-04-04& 14:18:38& 4.4 & eastern Turkey& 39.23& 41.03& 10\\%& usp000jh85\\
E10&2012-04-12& 09:32:42& 4.1& eastern Turkey& 38.69& 43.05& 2.2 \\
E11& 2012-04-13& 00:04:50& 4.2&Turkey-Iran border& 39.03& 44.04& 5\\%& usp000jhtn\\
\textbf{E12}&\textbf{2012-04-13}&\textbf{04:22:08}& \textbf{4.3}& eastern Turkey& 38.67& 43.18& 5\\%& usp000jhu6\\
\textbf{E13}&\textbf{2012-04-18}&\textbf{23:30:58}& \textbf{4.5} & eastern Turkey& 38.84& 43.58& 5\\%& usp000jj72\\
\textbf{E14}&\textbf{2012-04-23}&\textbf{15:50:20}& \textbf{4.1} & Georgia& 42.32& 45.25& 10\\%& usp000jjfh\\
\textbf{E15}&\textbf{2012-04-28}&\textbf{03:17:04}& \textbf{4.7} & eastern Turkey& 38.49& 40.74& 5\\%& usp000jjq0\\
\textbf{E16}&\textbf{2012-05-07}&\textbf{04:40:27}& \textbf{5.6} & Azerbaijan& 41.55& 46.79& 11\\%& usp000jk44\\
E17& 2012-05-07& 05:38:03& 4.6& Azerbaijan& 41.47& 46.75& 11.9\\%& usp000jk46\\
E18& 2012-05-07& 05:40:31& 4.6& Azerbaijan& 41.423& 46.76& 16.6\\%& usp000jk47\\
E19& 2012-05-07& 08:36:24& 4.2& Azerbaijan& 41.51& 46.77& 16.8\\%& usp000jk4f\\
E20& 2012-05-07& 14:15:14& 5.3& Azerbaijan& 41.55& 46.72& 11.9\\%& usp000jk4p\\
E21& 2012-05-07& 14:36:20& 4& Azerbaijan& 41.51& 46.73& 11.7\\%& usp000jk4q\\
E22& 2012-05-07& 16:58:56& 4.4& Azerbaijan& 41.56& 46.79& 14\\%& usp000jk4t\\
\textbf{E23}&\textbf{2012-05-14}&\textbf{06:46:23}& \textbf{4.3} & Azerbaijan& 38.70& 48.76& 21.4\\%& usp000jkdt\\
E24& 2012-05-14& 09:58:20& 4.1& Azerbaijan& 41.25& 47.23& 7.4\\%& usp000jkdx\\
E25& 2012-05-14& 15:51:02& 4& Azerbaijan& 41.19& 47.23& 10.5\\%& usp000jke5\\
\textbf{E26}&\textbf{2012-05-15}&\textbf{04:54:38}& \textbf{4.2} & Azerbaijan& 41.54& 46.73& 17\\%& usp000jkeu\\
E27& 2012-05-18& 14:46:35& 4.9& Azerbaijan& 41.58& 46.76& 14.8\\%& usp000jkjv\\
\textbf{E28}&\textbf{2012-05-18}&\textbf{14:47:22}& \textbf{5.1} & Azerbaijan& 41.44& 46.79& 18.1\\%& usp000jkjw\\
\textbf{E29}&\textbf{2012-05-25}&\textbf{11:22:38}& \textbf{4.4} & eastern Turkey& 38.12& 38.60& 5\\%& usp000jkx0\\
\textbf{E30}&\textbf{2012-06-05}&\textbf{16:29:48}& \textbf{4.2}& Azerbaijan& 41.49& 46.79& 38.3\\%& usp000jmcq\\
\textbf{E31}&\textbf{2012-06-14}&\textbf{05:52:53}& \textbf{5.3} & Turkey-Syria-Iraq&37.29&42.33&5.4\\% & usp000jmr8\\
E32& 2012-06-14& 19:17:43& 4.2& eastern Turkey& 38.06& 42.55& 2.5 \\% & usp000jms0\\
\textbf{E33}&\textbf{2012-06-24}&\textbf{20:07:20}& \textbf{4.9} & eastern Turkey& 38.71& 43.65& 5\\%& usp000jn6q\\
\textbf{E34}&\textbf{2012-06-25}&\textbf{20:05:59}& \textbf{4.3} & Azerbaijan& 41.26& 47.11& 10\\%& usp000jn7p\\
E35& 2012-06-28& 08:39:16& 4.2& eastern Turkey& 38.72& 43.35& 5.3 \\% & usp000jnbc\\
\textbf{E36}&\textbf{2012-07-20}&\textbf{13:51:12}& \textbf{4.3} & Georgia& 42.53& 44.14& 10\\% & usp000jp7q\\
\textbf{E37}&\textbf{2012-07-22}&\textbf{09:26:02}& \textbf{5} & central Turkey& 37.55& 36.38& 7.6\\% & usp000jpab\\
\textbf{E38}&\textbf{2012-07-24}&\textbf{22:53:39}& \textbf{4.5} & eastern Turkey& 38.69& 43.43& 5\\% & usp000jpe0\\
\textbf{E39}&\textbf{2012-07-31}&\textbf{23:12:11}& \textbf{4.1} & eastern Turkey& 38.68& 43.05& 5\\% & usp000jpqe\\
\textbf{E40}&\textbf{2012-08-05}&\textbf{20:37:23}& \textbf{5} & Turkey-Syria-Iraq&37.42&42.97&17.5\\% & usp000jpxn\\
\textbf{E41}&\textbf{2012-08-11}&\textbf{12:23:18}& \textbf{6.4} & northwestern Iran&38.33&46.83& 11\\% & usp000jq5p\\
\noalign{\smallskip}\hline
\end{tabular}
\begin{tablenotes}
\item[a] Reported by USGS from March 15, 2012 to August 11, 2012, in a rectangular area (Latitude:~$35.6$ to $43.1$ degrees, Longitude:~$35.5$ to $49.2$ degrees). The main shocks, which include 25 earthquakes, are highlighted in bold
\item[b] Depths are in kilometer
\end{tablenotes}
\end{threeparttable}
\end{table}
\indent
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.7]{Fig2.pdf}
\caption{Tectonic setting of the Alpine system \cite{bib:17}, \cite{bib:25}. The location of the stations (the bullets) and the area of interest at the intersection of Iranian, Turkish, Van, and Arabian tectonic plates (within the dashed-line rectangle) are illustrated}
\label{fig:2}
\end{figure*}
The rectangular search of the U.S. Geological Survey (USGS) catalog over the five-month time period yields 41 earthquakes $M\geq4$, whose characteristics are presented in Table~\ref{tab:1}. The main shocks, that is, those events that are not immediately followed by a larger earthquake, are highlighted. Among these events, there are 25 main shocks, six foreshocks, and 10 aftershocks. The aftershocks are ignored, but the prediction of foreshocks is still important.\\
\indent
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.65]{Fig3.pdf}
\caption{Instantaneous rates of micro-earthquakes recorded in each station over the five-month period, created using the Gaussian kernel smoothing method with an optimal bandwidth of $\sim$3~$Day$. Times of occurrence of earthquakes E12 and E26 are also illustrated by vertical lines, to highlight event density differences before these two major earthquakes}
\label{fig:3}
\end{figure*}
Before computing the dissimilarity of the spike trains, it is worthwhile to consider the instantaneous rate of micro-earthquakes recorded in each individual station. Here, the rate profiles are created using the Gaussian kernel smoothing method with an optimal bandwidth for the kernels \cite{bib:18}. Fig.~\ref{fig:3} shows the smoothed signals. The bandwidth values are optimally set to 2.97843~$Day$ and 2.97739~$Day$ for the two stations S1 and S2, respectively. It is very important to bear in mind that the output of the smoothing filter depends on future events. Hence, it is not a causal estimator. The rate profiles may be used to understand the characteristics of the input data, but they cannot provide predictability. As shown in Fig.~\ref{fig:3}, both the recorded event densities and their difference at each time instance are highly variable and demonstrate different behavior prior to major events. There are minimum and maximum differences in micro-earthquake event densities just before the major events E12 (M4.3, April 13, 2012) and E26 (M4.2, May 15, 2012), respectively. This suggests that a method based on rate profile thresholding may not be enough to provide consistent information about upcoming earthquakes.
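A minimal Python sketch of the (non-causal) Gaussian kernel rate estimate is given below; the toy event times merely stand in for the actual station recordings.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
spikes_s1 = np.sort(rng.uniform(0.0, 150.0, 400))   # toy stand-in for S1

def rate_profile(spike_times, grid, bandwidth):
    """Instantaneous rate (events/day) by Gaussian kernel smoothing.
    Non-causal: the estimate at each grid point also uses future events."""
    d = np.subtract.outer(np.asarray(grid), np.asarray(spike_times))
    k = np.exp(-0.5 * (d / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

grid = np.linspace(0.0, 150.0, 1501)                # five months, in days
rate_s1 = rate_profile(spikes_s1, grid, bandwidth=2.97843)
\end{verbatim}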
\section{Results and Discussions}\label{sec:5}
The results of applying the VP distance and the CS divergence to the micro-earthquakes recorded in stations S1 and S2 of the Tabriz sub-network are presented. Each measure is applied to spike time vectors obtained over a sliding window. These vectors span the entire time period of interest; hence, the output is a time-resolved profile of the dissimilarity of seismic activities between the two stations. Together with the distances, the one-tailed (positive) acceptance intervals from the surrogate data test at 90\% confidence are depicted, to clearly show moments when the dissimilarities between spike trains exceed the limit of normal fluctuations dictated by the statistical test. The acceptance band is produced using two sets of surrogates from the original spike trains of stations S1 and S2, each set including 1,000 surrogates. The spike time dithering method explained in Sect.~\ref{sec:2} is used to generate the surrogates. Following the statistical test design explained earlier, the cross-correlation (CC) of the 2-day binned spike trains of micro-earthquakes is computed to find the optimal dithering window. Fig.~\ref{fig:4} depicts the original cross-correlation (the thick solid line). As expected, the cross-correlation decreases as the lag increases and has its first local minimum at 6 days. Therefore, the optimal length of the dithering window is selected as 6 days. Fig.~\ref{fig:4} also depicts the mean CC over the surrogates (the thick dashed line) and the mean CC plus twice the standard deviation (the fine dashed line). As expected, the surrogate CC follows the shape of the original CC.
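A minimal Python sketch of the binned cross-correlation, and of selecting the dithering window as its first local minimum, is given below; the toy event times are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
spikes_s1 = np.sort(rng.uniform(0.0, 150.0, 400))   # toy stand-ins for the
spikes_s2 = np.sort(rng.uniform(0.0, 150.0, 350))   # two stations

def binned_cc(s1, s2, t_max, bin_width=2.0, max_lag_bins=15):
    """Cross-correlation of mean-removed binned spike trains versus lag,
    normalized by the overlap length at each (non-negative) lag."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    c1 = np.histogram(s1, edges)[0].astype(float)
    c2 = np.histogram(s2, edges)[0].astype(float)
    c1 -= c1.mean()
    c2 -= c2.mean()
    lags = np.arange(max_lag_bins + 1)
    cc = np.array([np.dot(c1[:len(c1) - k], c2[k:]) / (len(c1) - k)
                   for k in lags])
    return lags * bin_width, cc

lag_days, cc = binned_cc(spikes_s1, spikes_s2, t_max=150.0)
# first local minimum of the CC histogram -> dithering window length
i = next((k for k in range(1, len(cc) - 1)
          if cc[k] < cc[k - 1] and cc[k] <= cc[k + 1]), None)
\end{verbatim}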
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.35]{Fig4.pdf}
\caption{Original and surrogate cross-correlations (CCs)}
\label{fig:4}
\end{figure*}
\subsection{Results for Victor-Purpura\textquotesingle s distance}\label{sec:5.1}
The VP distance for the original spike trains is illustrated in Fig.~\ref{fig:5}. Here, $\mathrm{q}$ is set to 100~$Day^{-1}$, meaning that the VP algorithm only shifts a spike that is at most $2/100$~$Day$ (28.8 minutes) away from the other. The length of the sliding window is set to 2 days, and it slides one hour at each step. To define the one-tailed 90\% acceptance interval, the VP distances at each step are sorted $\mathrm{VP}_{1}(l)\leq \cdots \leq \mathrm{VP}_{M}(l)$, where $l$ denotes the sliding window position and $M$ is the number of surrogates. The lower and upper limits are then $a(l)=\mathrm{VP}_{1}(l)$ and $b(l)=\mathrm{VP}_{0.9M}(l)$, respectively.\\
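The acceptance interval computation reduces to an order statistic per window position. A minimal Python sketch follows; the toy distance arrays merely stand in for the VP distance profiles of the original and surrogate spike trains.
\begin{verbatim}
import numpy as np

def upper_band(surrogate_d, alpha=0.90):
    """Upper limit b(l) of the one-tailed acceptance interval per window
    position l, from an (M surrogates) x (L positions) distance array."""
    M = surrogate_d.shape[0]
    return np.sort(surrogate_d, axis=0)[int(np.ceil(alpha * M)) - 1]

rng = np.random.default_rng(3)
d_surr = rng.gamma(2.0, 1.0, size=(1000, 3600))  # toy surrogate VP distances
d_orig = rng.gamma(2.0, 1.0, size=3600)          # toy original VP distances
anomaly = d_orig > upper_band(d_surr)            # candidate pre-earthquake state
\end{verbatim}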
\indent
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.67]{Fig5.pdf}
\caption{Dissimilarity of the two stations over the five-month period, computed using the VP distance. Times of occurrence of major earthquakes $E_{1}$ to $E_{41}$ are illustrated using vertical dashed lines. Only the main shocks are labeled. Anomalies in red are extreme dissimilarities that pass the acceptance interval}
\label{fig:5}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.65]{Fig6.pdf}
\caption{Monthly dissimilarity of the two stations over the five-month period, computed using the VP distance. Times of occurrence of major earthquakes $E_{1}$ to $E_{41}$ are illustrated using vertical dashed lines. Anomalies are also labeled}
\label{fig:6}
\end{figure*}
\begin{table}[htbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Confusion Matrix for Victor-Purpura Distance}
\label{tab:2}
\small
\begin{tabular}{lll}
\hline\noalign{\smallskip}
& True & False \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Positive& 13 (E2,E12,E14,E15,E28,E31,E33,&2 (A8,A11) \\
& E34,E36,E37,E39,E40,E41)& \\
Negative& n/a&12 (E1,E5,E6,E7,E8,E13,E16, \\
& &E23,E26,E29,E30,E38) \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{tablenotes}
\item[a] Aftershocks are ignored in this table
\end{tablenotes}
\end{threeparttable}
\end{table}
The monthly plot of Fig.~\ref{fig:6} provides better visualization, and the anomalies are also labeled. Comparing the anomalies with the times of occurrence of major earthquakes reveals that there are extreme dissimilarities prior to 13 earthquakes, namely E2, E12, E14, E15, E28, E31, E33, E34, E36, E37, E39, E40, and E41. These anomalies may be considered as true warnings for the corresponding earthquakes, including E41, which is the deadly Varzaghan-Ahar earthquake M6.4 in northwest Iran. However, there are also two false positive warnings, and 12 earthquakes occur without any anomaly. The performance of the algorithm in providing efficient warnings prior to the major earthquakes is summarized in Table~\ref{tab:2}. When producing the confusion matrix in this table, the aftershocks are again ignored. Furthermore, a main shock that has a foreshock is reported as a true positive, provided that the algorithm successfully predicts the first earthquake, that is, the foreshock. This is the case for E12 and E28. On the other hand, while there is an anomaly before the main shock E5, it is reported as a false negative because the anomaly appears only after the corresponding foreshock E4.\\
\indent
\begin{table}[htbp]
\small
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Earthquake Precursory Performance Using the VP Distance}
\label{tab:3}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
& & Precursory & \\ [-1ex]
\raisebox{1.5ex}{Earthquake} & \raisebox{1.5ex}{Anomaly} & {Time}& \raisebox{1.5ex}{Duration} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{E1}&Not Detected &No Precursory &- \\
\textbf{E2}, E3 &A1 &27 hours& 3 hours\\
E4,\textbf{E5} & Not Detected&No Precursory&-\\
\textbf{E6}& Not Detected& No Precursory& -\\
\textbf{E7}, E9& Not Detected& No Precursory& -\\
\textbf{E8}& Not Detected & No Precursory&-\\
E10, E11, \textbf{E12}& A3&86 hours&3 hours\\
\textbf{E13}&Not Detected&No Precursory& -\\
\textbf{E14}&A5& 35 hours&16 hours\\
\textbf{E15}& A6& 9 hours & 3 hours\\
\textbf{E16} to E22& Not Detected& No Precursory& -\\
\textbf{E23}& Not Detected& No Precursory& -\\
E24, E25, \textbf{E26}& Not Detected&No Precursory& -\\
E27, \textbf{E28}& A7& 65 hours& 4 hours\\
\textbf{E29}& Not Detected& No Precursory& -\\
\textbf{E30}&Not Detected& No Precursory & - \\
\textbf{E31}, E32& A9& 95 hours& 84 hours\\
\textbf{E33}, E35& A10&71 hours& Up to the event time\\
\textbf{E34}& A10& 94 hours& Up to the event time\\
\textbf{E36}& A12& 99 hours& 31 hours\\
\textbf{E37}& A13 &37 hours &20 hours\\
\textbf{E38}& Not Detected & No Precursory& -\\
\textbf{E39} &A14& 122 hours& 97 hours\\
\textbf{E40}& A15& 34 hours& 3 hours\\
\textbf{E41}& A16& 3 hours& Up to the event time\\
\noalign{\smallskip}\hline
\end{tabular}
\end{threeparttable}
\end{table}
Table~\ref{tab:3} presents the precursory behavior for each earthquake. Each row of the table includes one main shock in bold, preceded (succeeded) by the corresponding foreshocks (aftershocks), if any. The true positive anomalies are also reported for each group of earthquakes. Precursory time, with a mean value of 59.77 hours $(\pm 38.01)$, is the earliest warning time before the event. Duration, with a mean value of 33.15 hours $(\pm 38.43)$, indicates how long, on average, the warning has been in effect, that is, how long the dissimilarity has been above the acceptance interval just prior to the event.
\subsection{Results for Cauchy-Schwarz divergence}\label{sec:5.2}
The same experiment is repeated, using the Cauchy-Schwarz divergence instead of the VP distance. The results are illustrated in Fig.~\ref{fig:7}. The kernel width is set to $\tau=2.5$ \emph{Hour} using the cross-validation method explained earlier. The other parameters are the same as those of the VP distance.
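A corresponding sketch of the CS computation is given below (one common estimator: with Gaussian kernels of width $\tau$ placed at the spike times, the inner products between the intensity estimates reduce to pairwise Gaussian sums, and all normalization constants cancel in the ratio; the names are ours).
\begin{verbatim}
import numpy as np

def cs_divergence(t1, t2, tau=2.5 / 24.0):
    # D_CS = -log( <f1,f2>^2 / (<f1,f1> <f2,f2>) ), where f_i is the
    # kernel intensity estimate of train i; tau = 2.5 hours in days.
    def gram_sum(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d ** 2 / (4.0 * tau ** 2)).sum()
    num = gram_sum(t1, t2) ** 2
    den = gram_sum(t1, t1) * gram_sum(t2, t2)
    return -np.log(num / den)
\end{verbatim}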
\indent
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.67]{Fig7.pdf}
\caption{Dissimilarity of the two stations over the five-month period, computed using the CS divergence }
\label{fig:7}
\end{figure*}
Compared with the VP distance, the CS divergence tends to be more sensitive relative to the surrogates. The confusion matrix is presented in Table~\ref{tab:4}. The CS divergence performs slightly better than the VP distance.
The monthly plot of Fig.~\ref{fig:8} indicates that there are precursors prior to 19 out of 25 main shocks, with three false alarms (A4, A15, and A16), giving an 86\% positive predictive value, which is comparable to the surrogate confidence. However, six main shocks have no precursor at all.
\indent
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.65]{Fig8.pdf}
\caption{Monthly dissimilarity of the two stations over the five-month period, computed using the CS divergence. Times of occurrence of major earthquakes $E_{1}$ to $E_{41}$ are illustrated using vertical dashed lines. Anomalies are also labeled}
\label{fig:8}
\end{figure*}
\begin{table}[htbp]
\small
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Confusion Matrix for Cauchy-Schwarz Divergence}
\label{tab:4}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
& True & False \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Positive&19 (E1,E2,E5,E6,E12,E13,E16,E23,E26,E28&3 (A4,A15,A16)\\
&E29,E30,E31,E33,E34,E36,E37,E39,E41)&\\
Negative&n/a &6 (E7,E8,E14,E15,E38,E40) \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{tablenotes}
\item[a] Aftershocks are ignored in this table
\end{tablenotes}
\end{threeparttable}
\end{table}
Table~\ref{tab:5} presents the precursory behavior when using the CS divergence. Here, the mean precursory time is 44.53 hours $(\pm 38.90)$, and the mean precursory duration is 15.71 hours $(\pm 13.63)$.\\
\begin{table}[htbp]
\small
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Earthquake Precursory Performance Using the CS Divergence}
\label{tab:5}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
& & Precursory & \\ [-1ex]
\raisebox{1.5ex}{Earthquake} & \raisebox{1.5ex}{Anomaly} & {Time}& \raisebox{1.5ex}{Duration} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{E1}&A1 &4 hours &Up to the event time \\
\textbf{E2}, E3 &A2 &48 hours& 8 hours\\
E4,\textbf{E5} & A3&13 hours&1 hour\\
\textbf{E6}& A5& 13 hours& Up to the event time\\
\textbf{E7}, E9& Not Detected& No Precursory& -\\
\textbf{E8}& Not Detected & No Precursory&-\\
E10, E11, \textbf{E12}& A6&11 hours&1 hour\\
\textbf{E13}&A7&2 hours& Up to the event time\\
\textbf{E14}& Not Detected& No Precursory&-\\
\textbf{E15}& Not Detected& No Precursory&-\\
\textbf{E16} to E22& A8& 26 hours& 4 hours\\
\textbf{E23}& A9& 49 hours& 32 hours\\
E24, E25, \textbf{E26}& A9& 53 hours& 32 hours\\
E27, \textbf{E28}& A10& 48 hours& Up to the event time\\
\textbf{E29}& A11& 84 hours& 41 hours\\
\textbf{E30}& A12& 1 hour & 1 hour \\
\textbf{E31}, E32& A13& 121 hours& 6 hours\\
\textbf{E33}, E35& A14& 71 hours& 14 hours\\
\textbf{E34}& A14& 94 hours& 14 hours\\
\textbf{E36}& A17& 11 hours& Up to the event time\\
\textbf{E37}& A17 &54 hours &32 hours\\
\textbf{E38}& Not Detected & No Precursory& -\\
\textbf{E39} &A18& 21 hours& 13 hours\\
\textbf{E40}& Not Detected& No Precursory& -\\
\textbf{E41}& A19& 122 hours& 21 hours\\
\noalign{\smallskip}\hline
\end{tabular}
\end{threeparttable}
\end{table}
Based on these results, it seems that the null hypothesis can be rejected using the CS measure, and we conclude that this method has been able to identify the pre-earthquake state based on temporal increases in dissimilarity. Although the goal of this paper is not to compare dissimilarity measures, further work is needed to explain the superior performance of the CS dissimilarity measure over the VP distance in the surrogate test presented here. From a theoretical perspective, divergence is a stricter and stronger statistic \cite{bib:19}, in the sense that it compares the entire probability laws. It can therefore go beyond comparing simple statistics such as the mean firing rate or the spike count. It has also been shown that the CS dissimilarity is less sensitive to missing spikes \cite{bib:13}. Some earthquakes may not be registered by one or more monitoring stations in a seismic network, so it is important for the measure to be resistant to missing spikes and to avoid deviating because of them.
\section{Conclusion}\label{sec:6}
This paper provided a statistical representation of the pre-earthquake state by using spike train distances applied to micro-earthquakes, and tested its performance as an earthquake precursor. The spike train dissimilarity measures of the Victor-Purpura distance and the Cauchy-Schwarz divergence were applied to spike trains of micro-earthquakes to examine the idea that increases in dissimilarity in at least two stations may be considered as a warning for the future occurrence of major earthquakes. While evidence of precursory behavior was observed using the VP distance, the CS divergence had higher performance in validating the hypothesis.
The relationship between the magnitudes of the earthquakes and the precursory behavior was not addressed in this paper. The magnitude of the micro-earthquakes was ignored, and earthquakes with $M\geq2$ were not incorporated in the input data set. A possible improvement is to utilize the theory of reproducing kernel Hilbert spaces (RKHS), where tensor products of multiple kernels can be defined to insert magnitude information, and to exploit the theory of \enquote{marked point processes} to define distances, which would take advantage of the full information available in the earthquake catalog. Another possible extension is to insert the exact location information of the input events and relax the restriction of merely labeling the input events with the stations. This may be helpful in predicting the location of target earthquakes, a topic that is not addressed in this paper.
\begin{acknowledgements}
The authors of this paper are grateful to the Iranian Seismological Center for providing the data used in this study. The authors would also like to thank Dr. John F. Dewey for his helpful insight on the tectonic settings of the region of this study, and Isaac Sledge for proofreading the manuscript.
\end{acknowledgements}
\bibliographystyle{IEEEtran}
In differential geometry, the investigation of compact surfaces characterized by curvature properties and variational equations, such as the study of constantly curved minimal 2-spheres in symmetric spaces, is an enduring and important topic. In space forms (real and complex), the structure of these 2-spheres is simple and well known. For example, any minimal 2-sphere of constant curvature in the complex projective space $\mathbb{C}P^n$ belongs to the Veronese sequence, up to a rigid motion (see \cite{Bando-Ohnita,Bolton1988}). The proof was essentially based on the rigidity theorem of holomorphic curves in $\mathbb{C}P^n$ \cite{Calabi}. However, this rigidity does not hold for generic symmetric spaces, among which the Grassmannian is a prototypical example. This phenomenon was first observed by the first named author and Zheng in \cite{Chi-Zheng}, where noncongruent holomorphic 2-spheres of degree $2$ and constant curvature in $G(2,4,\mathbb{C})$ were classified into two families, using the method of moving frames and Cartan's theory of higher order invariants~\cite{Cartan, Jensen}. Since then, there have emerged many works on constantly curved minimal 2-spheres in the Grassmannian (see \cite{JiaoLiS^2inQn,Li-Yu, Peng-Jiao, PengWangXu, Peng-Xu} and the references therein), most of which were devoted to studying constantly curved minimal 2-spheres in the hyperquadric $\mathcal{Q}_{n-1}$ of $\mathbb{C}P^n$ defined by $z_0^2+\cdots+z_n^2=0$
with respect to the homogeneous coordinates of ${\mathbb C}P^n$, where $\mathcal{Q}_{n-1}$, when identified with the oriented real Grassmannian $\widetilde{G}(2,n+1,\mathbb{R})$, can be seen as the next simplest symmetric space beyond the space forms. On the other hand, quadrics (smooth or singular) play a fundamental role in regard to the complex Grassmannian, since by the Pl\"ucker embedding, any complex Grassmannian $G(2, n+1,\mathbb{C})$ can be realized as the intersection of quadrics in the associated complex projective space (true, in fact, for any variety).
Even in the case of $\mathcal{Q}_{n-1}$, only some special examples and partial classifications (e.g., under the condition of homogeneity or low dimension) have been obtained up to now. Indeed, under the homogeneity assumption, Peng, Wang and Xu gave a complete classification of minimal 2-spheres in $\mathcal{Q}_{n-1}$ in \cite{PengWangXu}, where they proposed the following.
{\bf Problem 1.} {\em How to construct a nonhomogeneous constantly curved minimal $2$-sphere in $\mathcal{Q}_{n-1}$ for $n\geq 4$}?
{\bf Problem 2.} {\em Does there exist a linearly full totally real minimal $2$-sphere in $\mathcal{Q}_{n-1}$ which is also minimal in $\mathbb{C}P^n$}?
Jiao and Li \cite{JiaoLiS^2inQn}, using the constructive method of harmonic sequence given by Bahy-El-Dien and Wood in \cite{Bahy-Wood}, classified all constantly curved minimal 2-spheres with higher isotropic order in $\widetilde{G}(2,n+1,\mathbb{R})\cong\mathcal{Q}_{n-1}$ under the totally unramified assumption. Based on this, a complete classification of all totally unramified minimal 2-spheres of constant curvature in $\widetilde{G}(2,7,\mathbb{R})\cong\mathcal{Q}_{5}$ has recently been obtained by Jiao and Li in \cite{Jiao-Li}. For classifications in $\mathcal{Q}_{2}, \mathcal{Q}_{3}$ and $\mathcal{Q}_{4}$, we refer to \cite{JiaoWangQn,zbMATH06272104,JiaoLiS^2inQn} and the references therein.
One observes that a constantly curved minimal 2-sphere of $\mathcal{Q}_{n-1}$ in all these classifications either is also minimal in $\mathbb{C}P^n$, or can be constructed from a totally real constantly curved 2-sphere both minimal in $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^{n}$; moreover, almost all of them are homogeneous. With this observation, the present paper is devoted to studying constantly curved 2-spheres minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^{n}$. By the theory of singular-value decomposition (denoted by SVD in this paper) of complex matrices, a method of constructing such kind of 2-spheres is introduced, from which an abundance of nonhomogeneous examples can be constructed to answer {\bf Problem 1}. The existence part of {\bf Problem 2} has been affirmed in the classification results of Jiao and Li \cite{JiaoLiS^2inQn}. We obtain the uniqueness part as follows; see also Corollary~\ref{cor-uniqueness}.
\begin{theorem}
Suppose a linearly full totally real minimal $2$-sphere of constant curvature $8/(d^2+2d)$ in $\mathcal{Q}_{n-1}$ is also minimal in $\mathbb{C}P^{n}$. Then $d$ is even and $n=2d+1$. Moreover, it is unique up to a real orthogonal transformation.
\end{theorem}
This SVD method, novel in a sense, is effective and unifying in describing the moduli space of noncongruent 2-spheres of the same constant curvature which are minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^{n}$;
see Theorem~\ref{simple classification}. As an example, the aforementioned result of Chi and Zheng follows from our classification of constantly curved holomorphic $2$-spheres of degree no more than 3, to be done in Section~\ref{sec-classify}. More generally, the classification of all constantly curved holomorphic $2$-spheres in $G(2,4,{\mathbb C})$ was obtained by Li and Jin in~\cite{zbMATH05590797} by a direct calculation via elaborate coordinate changes. We will give a systematic SVD proof of it in Proposition~\ref{Pr}.
As another example, recently, using a sophisticated method from the perspective of holomorphic isometric embeddings of $\mathbb{C}P^1$ in $\mathcal{Q}_{n-1}$, Macia, Nagatomo and Takahashi \cite{M-N-T} studied the moduli space of noncongruent constantly curved holomorphic 2-spheres of degree $(n-1)/2$ in $\mathcal{Q}_{n-1}$ when $n$ is odd. The real dimension of this moduli space was determined by them to be $(n^2-4n-1)/4$.
We point out that the dimension count can also be attained via the SVD method and the fact that the ideal of a rational normal curve of degree $d$ is generated by $d^2-d$ independent quadrics. Note that a rational normal curve of degree $d$ lies in $\mathcal{Q}_{n-1}$, if and only if, the quadric given by the intersection of $\mathcal{Q}_{n-1}$ and the projective $d$-plane spanned by the curve belongs to the ideal of the curve. Conversely, to guarantee that quadrics in this ideal belong to $\mathcal{Q}_{n-1}$, the SVD method reveals that there is no other constraint if $d=(n-1)/2$ with $n$ odd, whence follows the dimension count (see Theorem~\ref{coro-dim}).
On the other hand, when $d>(n-1)/2$, there are other constraints (see Proposition~\ref{classification of cdn}). This makes the study of constantly curved holomorphic 2-spheres in $\mathcal{Q}_{n-1}$ with degree higher than $(n-1)/2$ more subtle, where the problem of existence has been little understood up to now.
The SVD method, however, enables us to construct plenty of examples and gives a lower bound on the dimension of the moduli space in the higher degree case.
\begin{theorem}
For any $(n-1)/2< d\leq n-2$, linearly full constantly curved holomorphic $2$-spheres of degree $d$ exist in $\mathcal{Q}_{n-1}$. Moreover, if $3\leq(n-1)/2<d\leq n-5$, then the moduli space of such noncongruent holomorphic $2$-spheres has dimension at least $(n-d)^2-11(n-d)+33$.
\end{theorem}
For more precise description, see Theorem~\ref{thm-Hd}.
Our paper is organized as follows. Section~\ref{sec-pre} is devoted to reviewing known results on the SVD of complex matrices, the representation theory of $SU(2)$ with emphasis on the Clebsch-Gordan formula, and some basic formulas for minimal surfaces in the hyperquadric $\mathcal{Q}_{n-1}$. In Section~\ref{sec-orbit}, the orbit space of the Grassmannian $G(d+1,n+1,\mathbb{C})$ with respect to the action of the real orthogonal group $O(n+1;\mathbb{R})$ is determined by the SVD method. For noncongruent minimal 2-spheres of the same constant curvature, we investigate the structure of their moduli space in Section~\ref{sec-mod}, where we also show how to construct minimal 2-spheres by the SVD method. Section~\ref{sec-const} is then devoted to constructing constantly curved holomorphic 2-spheres of higher degree in $\mathcal{Q}_{n-1}$. For degree no more than 3, a complete classification is obtained in Section~\ref{sec-classify}. After studying more geometric properties of the minimal 2-spheres constructed by the SVD method, {\bf Problem 1} and {\bf Problem 2} are discussed in Section~\ref{sec-prob}.
\section{Preliminaries}\label{sec-pre}
\subsection{Singular-value decomposition and unitary congruence.}
Let $M$ be an $n\times m$ complex matrix with $\corank(M)=r_0$. Set $q:=\min\{m,n\}$. It is well known that the eigenvalues of the Hermitian matrix $MM^{\ast}$, where $M^{\ast}$ is the conjugate transpose of $M$, are nonnegative real numbers. We denote them in nondecreasing order
\begin{equation*}
\lambda_{1}=\cdots=\lambda_{r_0}=0<\lambda_{r_0+1}\leq \lambda_{r_0+2}\leq\cdots\leq\lambda_{q}.
\end{equation*}
Set $\sigma_{i}=\sqrt{\lambda_{i}}$. The numbers
\begin{equation}\label{vec sigma}
\sigma_{1},\sigma_{2},\ldots,\sigma_{q},\quad \text{or}\quad \vec{\sigma}=(\sigma_{1},\sigma_{2},\ldots,\sigma_{q}),
\end{equation}
are called, respectively, the \emph{singular values}, or the \emph{singular-value vector}, of $M$.
Using these notations, the singular-value decomposition (SVD) of $M$ can be stated as follows.
\begin{theorem}{\rm\cite[Thm.~2.1,~p.~150]{HornMatrAnalysis}}\label{singular value decomp lemma}
Let $M$ be an $n\times m$ complex matrix. Set $q=\min\{m,n\}$. Assume $\vec{\sigma}$ is the singular-value vector given in \eqref{vec sigma}. Let $\Sigma_{q}:=\diag(\sigma_{1},\ldots,\sigma_{q})$. Then there are unitary matrices $V\in U(n)$ and $W\in U(m)$, such that
\begin{equation}\label{svd equation}
M=V\,\Sigma \,W^{\ast}
\end{equation}
where $$
\Sigma=\begin{cases}
\begin{pmatrix}\Sigma_{q} & 0_{n\times(m-n)} \end{pmatrix},~~~&n<m,\\
~\Sigma_q,~~~&n=m,\\
^{t}\!\!\begin{pmatrix}\Sigma_{q} & 0_{m\times(n-m)} \end{pmatrix},~~~&n>m.
\end{cases}
$$
\end{theorem}
For us, the SVD of real and complex symmetric matrices are useful in the following.
\begin{coro}{\rm\cite[Cor.~2.6.7,~p.~154]{HornMatrAnalysis}\label{real svd theorem}}
Let $M$ be an $n\times m$ \emph{real} matrix. Under the same assumptions and notations as in Theorem {\rm\ref{singular value decomp lemma}}, there are real orthogonal matrices $V\in O(n;\mathbb{R})$ and $W\in O(m;\mathbb{R})$ satisfying \eqref{svd equation}.
\end{coro}
Two complex matrices $A$ and $B$ are said to be \emph{unitarily congruent} to each other \cite[p.~41]{HornMatrAnalysis} if there is a unitary matrix $U\in U(n)$ such that $$A=~^{t}UB\,U,$$ where $^{t}U$ is the transpose of $U$. It is clear that the singular values of $A$ and $B$ are identical. Thus, the singular values are invariant under unitary congruence. Moreover, for complex symmetric matrices, the singular values are complete invariants.
\begin{theorem}\rm{\cite[p.~153,~Cor 2.6.6, p.~263, Cor 4.4.4]{HornMatrAnalysis}\label{singular values as unique invariant}}{\em
Let $A$ be an $n\times n$ complex symmetric matrix with $\corank(A)=r_0$. We denote the distinct positive singular values of $A$ by $\sigma_{1},\sigma_{2},\ldots,\sigma_{m}$, in increasing order, with multiplicities $r_1, r_2, \cdots, r_m,$ respectively.
{\bf (1)} There is a unitary matrix $U\in U(n)$ such that
\begin{equation}\label{dilation}
A=^{t}\!\!U\,
\diag(0_{r_0\times r_0},\,\sigma_{1}Id_{r_1},\,\cdots,\,\sigma_{m}Id_{r_m})\,
U,
\end{equation}
and, moreover, if $\widetilde U$ is another such kind of matrix, then
$$\widetilde U=\diag(A_{r_0},\,A_{r_1},\,\cdots,\,A_{r_m})\,U,$$
where $A_{r_0}\in U(r_0)$ is unitary and $A_{r_j}\in O(r_j;\mathbb{R})$ is real orthogonal for any $1\leq j\leq m$.
{\bf (2)} Furthermore, two complex symmetric matrices are unitarily congruent to each other if and only if their singular values are the same.
}
\end{theorem}
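Since the decomposition \eqref{dilation} (the Autonne--Takagi factorization) is used repeatedly below, we record a short numerical sketch of it (our own illustration; it assumes, for simplicity, that $A$ is nonsingular with pairwise distinct singular values, in which case the phases can be read off directly from the SVD).
\begin{verbatim}
import numpy as np

def takagi(A, tol=1e-9):
    # Takagi: A = U^T diag(s) U for complex symmetric A, as in the
    # theorem above.  From the SVD A = V diag(s) W*, symmetry of A
    # forces conj(W) = V D with D diagonal unitary (distinct singular
    # values assumed); absorbing sqrt(D) into V gives the factor U.
    V, s, Wh = np.linalg.svd(A)          # note: s is decreasing here
    phases = np.diag(V.conj().T @ Wh.T)  # conj(W) = V diag(phases)
    Up = V @ np.diag(np.exp(0.5j * np.angle(phases)))
    U = Up.T
    assert np.linalg.norm(U.T @ np.diag(s) @ U - A) <= tol * np.linalg.norm(A)
    return s, U
\end{verbatim}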
\subsection{Irreducible representations of $SU(2)$.}\label{2.2}
Under the standard metric of constant curvature, the holomorphic isometries of $\mathbb{C}P^{1}$ are induced by $SU(2)$.
The irreducible representations of $SU(2)$ are well known \cite[Chap.~6]{GrpAndSymmetry}, which we state briefly as follows.
Let $\mathcal{V}^{\frac{d}{2}}$ be the linear space of homogeneous polynomials of degree $d$ in two variables $(u,v)$, where $d$ is a nonnegative integer; we adopt the $d/2$ notation in accord with spin-$1/2$ in physics. The complex dimension of $\mathcal{V}^{\frac{d}{2}}$ is $d+1$, and we choose the following basis
\begin{equation}\label{basis of repre of su2}
e_{l}=\tbinom{d}{l}^{\frac{1}{2}}u^{d-l}v^{l},~~~l=0,\ldots,d.
\end{equation}
We equip $\mathcal{V}^{\frac{d}{2}}$ with an inner product such that $\{e_{l},~l=0,\ldots,d\}$ is an orthonormal basis. Consider the representation of $SU(2)$ on $\mathcal{V}^{\frac{d}{2}}$ given by
\begin{equation*}
\varrho^{\frac{d}{2}}:SU(2)\times \mathcal{V}^{\frac{d}{2}} \rightarrow \mathcal{V}^{\frac{d}{2}}, \quad\quad\quad\quad
(g=\begin{pmatrix}
a & b \\
-\bar{b} & \bar{a} \\
\end{pmatrix},~f
)\mapsto f\circ g^{\ast},
\end{equation*}
where $|a|^{2}+|b|^{2}=1$. Explicitly, $(\varrho^{\frac{d}{2}}(g)f)(u,v)=f(\bar{a}u-bv,\bar{b}u+av)$. Under the basis $\{e_{l},~l=0,\ldots,d\}$, by a slight abuse of notation, we set
\begin{equation}\label{irre repre in matrix form}
(e_{0},\ldots,e_{d})\cdot \rho^{\frac{d}{2}}(g):=\varrho^{\frac{d}{2}}(g)(e_{0},\ldots,e_{d}),
\end{equation}
to indicate that $\rho^{\frac{d}{2}}(g)$ on the left hand side is a matrix in $U(d+1)$ (dot denotes matrix multiplication). Recall the rational normal (or, Veronese) curve $\mathbb{C}P^{1}$ of degree $d$,
\begin{equation}\label{rational normal curve}
Z_{d}: \mathbb{C}P^{1}\mapsto \mathbb{C}P^{d},\quad\quad\quad\quad
[u,v]\mapsto ~^{t}[u^{d},\ldots,\tbinom{d}{i}^{\frac{1}{2}}u^{d-i}v^{i},\ldots,v^{d}];
\end{equation}
Using the basis $e_{l}$ in \eqref{basis of repre of su2}, we can rewrite $Z_{d}=~^{t}[e_{0},\ldots,e_{d}].$ Consider a unitary transformation $U\in U(d+1)$ fixing the rational normal curve $Z_{d}$ with
\begin{equation}\label{image equivalence}
\text{Image}~U\cdot Z_{d}=\text{Image}~Z_{d}.
\end{equation}
(Henceforth, we use $Im$ to denote ``Image''). Since $U$ induces an isometric biholomorphic map of $\mathbb{C}P^{1}$, there are $A\in SU(2)$ and $\lambda\in U(1)$, such that
$U=\lambda\cdot ~^{t}\rho^{\frac{d}{2}}(A)$. The coefficient $\lambda$ here is necessary, because we consider projective transformations. The converse is also true. The geometric meaning of $U$ satisfying \eqref{image equivalence} is to reparametrize $Z_{d}$ by $A$.
The rational normal curve $Z_{d}$ is a special case of minimal 2-spheres called \emph{Veronese maps} in $\mathbb{C}P^{d}$ \cite{Bolton1988}. Explicitly,
\begin{align}\label{Veronese sequence}
\begin{split}
&Z_{d,p}:\mathbb{C}P^{1}\rightarrow \mathbb{C}P^{d},\quad\quad\quad
[u,v]\mapsto [g_{p,0}(\frac{v}{u}),\cdots,g_{p,d}(\frac{v}{u})],\\
&g_{p,l}(z)=\frac{p!}{(1+|z|^{2})^{p}}\sqrt{\tbinom{d}{l}}~z^{l-p}\sum_{k}(-1)^{k}\tbinom{l}{p-k}\tbinom{d-l}{k}|z|^{2k},\quad 0\leq p,l\leq d.
\end{split}
\end{align}
Note that $Z_{d,0}$ is the standard rational normal curve $Z_{d}$; we continue to denote it by $Z_{d}$ henceforth whenever convenient. We list some basic facts about the Veronese maps \cite[Section 2,~5]{Bolton1988} for easy reference. On the affine chart $u\neq 0$, set $z:=\frac{v}{u}$. Then $Z_{d,0}$ is given by
\begin{equation}\label{rational normal curve in affine chart}
Z_{d,0}=~^{t}[1,\sqrt{\tbinom{d}{1}}~z,\ldots,\sqrt{\tbinom{d}{k}}~z^{k},\ldots,z^{d}].
\end{equation}
$Z_{d,p+1}$ satisfies the following recursive formulas
\begin{equation}\label{recursive formulas}
Z_{d,p+1}=\frac{\partial}{\partial z}Z_{d,p}-\frac{\partial\log|Z_{d,p}|^{2}}{\partial z}Z_{d,p},~~~0\leq p\leq d-1.
\end{equation}
It follows that
\begin{equation}\label{local sections equivalent}
Z_{d,p}\equiv \frac{\partial^{p}}{\partial z^{p}}Z_{d,0} \mod (Z_{d,0},\ldots, Z_{d,p-1}),~~~
\end{equation}
for $0\leq p\leq d$. The norm squared of $Z_{d,p}$ is $|Z_{d,p}|^{2}=\frac{d!~p!}{(d-p)!}(1+|z|^{2})^{d-2p}$. Moreover,
\begin{equation}\label{facts of veronese sequence}
\frac{\partial}{\partial \bar{z}}Z_{d,p} =-\frac{|Z_{d,p}|^{2}}{|Z_{d,p-1}|^{2}}Z_{d,p-1}=-\frac{p(d-p+1)}{(1+|z|^{2})^{2}}Z_{d,p-1}.
\end{equation}
Equip $\mathbb{C}P^{d}$ with the Fubini-Study metric
of holomorphic sectional curvature $4$. Then the pullback metric of the Veronese map $Z_{d,p}$ is
\begin{equation}
ds_{d,p}^{2}=\frac{d+2p(d-p)}{(1+|z|^{2})^{2}}dzd\bar{z},
\end{equation}
and its Gaussian curvature $K_{d,p}$ is the constant $4/(d+2p(d-p))$. The K\"ahler angle of $Z_{d,p}$ is also constant, and we denote it by $\theta_{d,p}$, where $\theta_{d,p}\in [0,\pi]$, which satisfies \begin{equation}\label{eq-kahler}
\cos\theta_{d,p}=(d-2p)/(2p(d-p)+d),~0\leq p\leq d.
\end{equation}
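For instance, taking $d=2$ and $p=1$ in \eqref{recursive formulas} gives
\begin{equation*}
Z_{2,1}=\frac{\partial}{\partial z}Z_{2,0}-\frac{2\bar{z}}{1+|z|^{2}}Z_{2,0}
=\frac{1}{1+|z|^{2}}~^{t}\big(-2\bar{z},~\sqrt{2}(1-|z|^{2}),~2z\big),
\end{equation*}
so that $|Z_{2,1}|^{2}=2$, $K_{2,1}=4/(2+2)=1$, and, by \eqref{eq-kahler}, $\cos\theta_{2,1}=0$; that is, $Z_{2,1}$ is totally real.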
Conversely, the constant curvature assumption characterizes $Z_{d,p}$ in the following rigidity theorem.
\begin{theorem}\label{rigidity theorem}{\rm\cite{Bolton1988}}
Let $f: \mathbb{C}P^{1}\rightarrow \mathbb{C}P^{d}$ be a linearly full minimal 2-sphere of constant curvature. Then there exist two unitary matrices $U\in U(d+1)$ and $B\in SU(2)$
such that
\begin{equation*}
f([u,v])=U\cdot ~^{t}\rho^{\frac{d}{2}}(B)\cdot Z_{d,p}([u,v]),~~~[u,v]\in \mathbb{C}P^{1}.
\end{equation*}
\end{theorem}
\subsection{Clebsch-Gordan formula.}
Consider the tensor product representation
\begin{align*}
\begin{split}
\varrho^{\frac{d}{2}}\otimes \varrho^{\frac{d}{2}}:SU(2)\times \mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}}&\rightarrow \mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}}, \\
(g, e_{k}\otimes e_{l})&\mapsto (\varrho^{\frac{d}{2}}(g)e_{k}) \otimes (\varrho^{\frac{d}{2}}(g)e_{l}), ~~~0\leq k,l\leq d.
\end{split}
\end{align*}
Identify $e_{k}\otimes e_{l}$ with the square matrix $E_{kl}\in M_{(d+1)}(\mathbb{C})$, where the only nonvanishing entry of $E_{kl}$ is $1$ at the $(k,l)$ position,~$0\leq k,l\leq d$. This identification gives rise to an isomorphism of $M_{(d+1)}(\mathbb{C})$ with $\mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}}$,
\begin{equation}\label{identification of matrix with tensor product}
\varphi: M_{(d+1)}(\mathbb{C})\rightarrow \mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}},\quad\quad\quad\quad
A\mapsto ~(^{t}Z_{d}\cdot A) \otimes Z_{d},
\end{equation}
under which
we may write $\varrho^{\frac{d}{2}}\otimes \varrho^{\frac{d}{2}}$ as
\begin{align}\label{action on M_d}
\begin{split}
\varrho^{\frac{d}{2}}\otimes \varrho^{\frac{d}{2}}: SU(2)\times M_{(d+1)}(\mathbb{C})&\rightarrow M_{(d+1)}(\mathbb{C}), \\
(g, A)&\mapsto ^{t}\!\!\rho^{\frac{d}{2}}(g)\cdot A\cdot \rho^{\frac{d}{2}}(g).
\end{split}
\end{align}
It is well known that by the Clebsch-Gordan formula, $\mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}}$ is decomposed into irreducible $SU(2)$-invariant subspaces (see \cite[2.4,~p.~90]{GrpAndSymmetry}),
\begin{equation*}
\mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}}\cong \underbrace{\mathcal{V}^{d}\oplus \mathcal{V}^{d-1}\oplus\cdots\oplus \mathcal{V}^{0}}_{d+1},
\end{equation*}
and the projection to the first summand $\mathcal{V}^{d}$ is given by the product of polynomials
\begin{equation}
\Pi:\mathcal{V}^{\frac{d}{2}}\otimes \mathcal{V}^{\frac{d}{2}}\mapsto \mathcal{V}^{d},\quad\quad\quad
f_{1}\otimes f_{2}\mapsto f_{1}\cdot f_{2}.
\end{equation}
The set of symmetric matrices $Sym_{d+1}(\mathbb{C})$ turns out to be an $SU(2)$-invariant subspace of $M_{d+1}(\mathbb{C})$ as follows.
\begin{lemma}\label{symmetric matrix space}
Let $d$ be a nonnegative integer. Assume that $d=2[\frac{d}{2}]+r$, where $0\leq r\leq 1$ and $[\frac{d}{2}]$ is the greatest integer less than or equal to $\frac{d}{2}$. Then
\begin{equation*}
Sym_{d+1}(\mathbb{C})\cong \mathcal{V}^{d}\oplus \mathcal{V}^{d-2}\oplus \cdots \oplus \mathcal{V}^{r}.
\end{equation*}
\end{lemma}
\begin{proof} (sketch)
Consider the induced action of $\mathfrak{su}(2)$ on $Sym_{d+1}(\mathbb{C})$ and extend it to $\mathfrak{sl}(2;\mathbb{C})$.
Since
the symmetric matrix $\frac{1}{2}(E_{kl}+E_{lk})$ corresponds to
\begin{equation*}
\frac{1}{2}(e_{k}\otimes e_{l}+e_{l}\otimes e_{k}),~0\leq k\leq l\leq d,
\end{equation*}
under $\varphi$,
a multiplicity count of the eigenvalues of $J_{3}:=\diag(-1,1)/2$ in $\mathfrak{sl}(2;\mathbb{C})$
finishes off the proof. As a check of dimensions, note that $\sum_{k=0}^{[\frac{d}{2}]}\big(2(d-2k)+1\big)=\frac{(d+1)(d+2)}{2}=\dim Sym_{d+1}(\mathbb{C})$.
\end{proof}
Denote the projection of $Sym_{d+1}(\mathbb{C})$ into $\mathcal{V}^{d-2k}$ by $\Pi_{k}$,
\begin{equation}\label{general projection}
\Pi_{k}:Sym_{d+1}(\mathbb{C})\rightarrow \mathcal{V}^{d-2k},
\end{equation}
where $0\leq k\leq [\frac{d}{2}]$. Recall the definition of the identification $\varphi$, \eqref{identification of matrix with tensor product}, where the projection $\Pi_{0}$ to the first summand $\mathcal{V}^{d}$ is given by the product of polynomials, or, in matrix terms,
\begin{align}\label{projection to the first summand}
\Pi_{0}:Sym_{d+1}(\mathbb{C})\mapsto \mathcal{V}^{d},\quad\quad\quad
S\mapsto ~^{t}Z_{d}\cdot S\cdot Z_{d}.
\end{align}
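For instance, in the lowest case $d=1$, for $S=(s_{ij})\in Sym_{2}(\mathbb{C})$ one has
\begin{equation*}
\Pi_{0}(S)=~^{t}Z_{1}\cdot S\cdot Z_{1}=s_{00}u^{2}+2s_{01}uv+s_{11}v^{2}\in \mathcal{V}^{1},
\end{equation*}
which realizes the isomorphism $Sym_{2}(\mathbb{C})\cong \mathcal{V}^{1}$ of Lemma \ref{symmetric matrix space}; in particular, $\Pi_{0}$ is injective in this case.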
Fix $0\leq p\leq [\frac{d}{2}]$. Consider the following subspace of $Sym_{d+1}(\mathbb{C})$,
\begin{equation}\label{important subsapce of symm matrices}
\mathscr{S}_{d,p}:=\{S\in Sym_{d+1}(\mathbb{C})| ~^{t}Z_{d,p}\cdot S\cdot Z_{d,p}=0\},
\end{equation}
which can be seen as the set of all quadrics containing the standard Veronese map $Z_{d,p}$.
It turns out that it is $SU(2)$-invariant as follows.
\begin{prop}\label{quadric contains rnc and decomposition}
Let $d$ be a nonnegative integer and let $0\leq p\leq [\frac{d}{2}]$. Then
{\bf (1)} $\mathscr{S}_{d,p}=\ker \Pi_{0}\cap \dots\cap\ker \Pi_{p}$.
{\bf (2)} Hence, $\mathscr{S}_{d,p}\cong \mathcal{V}^{d-2p-2}\oplus \mathcal{V}^{d-2p-4}\oplus \cdots \oplus \mathcal{V}^{d-2[\frac{d}{2}]}$ {\rm(}when $p=[\frac{d}{2}]$, the right hand side is understood to be $\{0\}${\rm)}.
\end{prop}
\begin{proof} For $p=0$, see \eqref{projection to the first summand}. The general case will be proven in Proposition \ref{summary theorem}.
\end{proof}
When $p=0$, for convenience, we denote $\mathscr{S}_{d,0}$ by $\mathscr{S}_{d}$ in this paper.
\subsection{Minimal surfaces in the hyperquadric.}\label{2.4}
Following the setup in Jiao and Wang \cite{JiaoWangQn}, let $f:M^{2}\rightarrow \mathcal{Q}_{n-1}\subseteq \mathbb{C}P^{n}$ be an isometric immersion. At every point $p\in M^{2}$, there is a local complex coordinate $z$ such that near that point
\begin{equation}\label{metric condition of surface in quadric}
ds^{2}_{M^{2}}=\lambda^{2}dzd\bar{z}.
\end{equation}
Assume $\widetilde{f}$ is a local lift of $f$ in $\mathbb{C}^{n+1}$ around $p$ with unit norm, $|\widetilde{f}|=1$. Consider the following two orthogonal projections of $\frac{1}{\lambda}\frac{\partial \widetilde{f}}{\partial z}$ and $\frac{1}{\lambda}\frac{\partial \widetilde{f}}{\partial \bar{z}}$ onto the subspace perpendicular to $\widetilde{f}$, respectively,
\begin{equation*}
X:=\frac{1}{\lambda}\frac{\partial \widetilde{f}}{\partial z}-\langle \frac{1}{\lambda}\frac{\partial \widetilde{f}}{\partial z},\widetilde{f} \rangle\widetilde{f},\quad\quad Y:=\frac{1}{\lambda}\frac{\partial \widetilde{f}}{\partial \bar{z}}-\langle \frac{1}{\lambda}\frac{\partial \widetilde{f}}{\partial \bar{z}},\widetilde{f} \rangle\widetilde{f},
\end{equation*}
where $\langle~~,~~\rangle$ denotes the standard unitary inner product in $\mathbb{C}^{n+1}$.
From the metric condition \eqref{metric condition of surface in quadric}, we know
\begin{align*}
|X|^{2}+|Y|^{2} &=1,\quad\quad\quad
^{t}\overline{X}~\cdot Y=0,
\end{align*}
where we identify $X$ and $Y$ as the column vectors in $\mathbb{C}^{n+1}$. The K\"ahler angle $\theta\in [0,\pi]$ of $f$ can be expressed as
\begin{equation}\label{kahler angle}
\cos \theta=|X|^2-|Y|^2.
\end{equation}
In \cite{JiaoWangQn}, the following global invariants are defined,
\begin{equation*}
\tau_{X}=|^{t}X\, X|,~~\tau_{Y}=|^{t}Y\, Y|,~~\tau_{XY}=|^{t}X\, Y|.
\end{equation*}
If $f$ is minimal in $\mathcal{Q}_{n-1}$, then from (2.22) in \cite[p.~821]{JiaoWangQn}, we obtain
\begin{equation}\label{formula of norm of second fundamental form}
||B||^{2}=2+6\cos^{2}\theta-2K-4\tau_{X}^{2}-4\tau_{Y}^{2}+8\tau_{XY}^{2},
\end{equation}
where $B$ is the second fundamental form of $f$.
As mentioned in the introduction, this paper is devoted to studying constantly curved 2-spheres minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^n$. The following characterization of Jiao, Wang and Zhong will be used in Section~\ref{sec-prob}.
\begin{theorem}\rm{\cite[Theorem 3.1]{zbMATH06272104}}\label{minimal in cpn and qn-1}
{\em The surface $f:M^{2}\rightarrow \mathcal{Q}_{n-1}\subseteq \mathbb{C}P^{n}$ is minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^n$, if and only if, $\tau_{XY}=0$.}
\end{theorem}
\begin{definition}
For two positive integers $d,n$ satisfying $d\leq n$, consider the set of minimal $2$-spheres of constant curvature, each of which spans a projective $d$-plane in $\mathbb{C}P^{n}$ and is minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^{n}$. For convenience, we denote it by $\mathbf{Mini}_{d,n}$.
\end{definition}
From the rigidity Theorem \ref{rigidity theorem} and Theorem \ref{minimal in cpn and qn-1}, we know such minimal 2-spheres are derived from the standard Veronese maps $Z_{d,p}$ with quadric constraint, for some $~0\leq p\leq d$, up to unitary transformations of $\mathbb{C}P^n$. This implies $\mathbf{Mini}_{d,n}$ has the structure of the following disjoint union.
\begin{prop}\label{set of mini dn}
\begin{equation*}
\mathbf{Mini}_{d,n}=\bigsqcup_{0\leq p\leq d}\mathbf{H}_{d,n,p},
\end{equation*}
where $\mathbf{H}_{d,n,p}$ is defined by
\begin{equation}\label{rnc in hyperquadric}
\mathbf{H}_{d,n,p}:=\{EZ_{d,p}|E\in M(n+1,d+1), ~E^{*}E=Id_{d+1},~\text{and }~EZ_{d,p}~ \text{lies in }~\mathcal{Q}_{n-1}\},
\end{equation}
Moreover, by taking conjugation in $\mathbb{C}^{n+1}$, we have $\mathbf{H}_{d,n,p}\cong \mathbf{H}_{d,n,d-p},~0\leq p\leq [\frac{d}{2}]$.
\end{prop}
\begin{remark}
All minimal $2$-spheres belonging to $\mathbf{H}_{d,n,p}$ have the same constant curvature $K=4/(d+2p(d-p))$. Conversely, if a $2$-sphere minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^n$ takes $K=4/(d+2p(d-p))$ as its Gaussian curvature, then it must belong to $\mathbf{H}_{d,n,p}$.
\end{remark}
In view of the second statement in the preceding proposition, it suffices to determine $\mathbf{H}_{d,n,p}$ for $0\leq p\leq [\frac{d}{2}]$, to be done in the following sections.
It is geometrically clear that to study noncongruent minimal 2-spheres, we should mod out the real orthogonal group $O(n+1;\mathbb{R})$ and the reparametrizations of $\mathbb{C}P^1$.
To begin with, observe that a 2-sphere in ${\mathbf H}_{d,n,p}$ lies in the intersection, which is a quadric not necessarily smooth, of the hyperquadric $\mathcal{Q}_{n-1}$ and the projective $d$-plane it spans.
Following this observation, we will first study the action of $O(n+1; \mathbb{R})$ on all projective $d$-planes, i.e., on $G(d+1,n+1,\mathbb{C})$.
\section{The orbit space $G(d+1,n+1,\mathbb{C})/O(n+1;\mathbb{R})$}\label{sec-orbit}
Throughout this section, $d$ is a positive integer.
For a given plane $V\in G(d+1,n+1,\mathbb{C})$, we can choose an orthonormal basis $e_{0},\ldots,e_{d}$ of $V$ and represent it by an $(n+1)\times (d+1)$ matrix
\begin{equation}\label{representative of plane}
E:=\begin{pmatrix}
e_{0} & \ldots & e_{d} \\
\end{pmatrix}.
\end{equation}
Two orthonormal bases of $V$ differ by a unitary matrix in $U(d+1)$ multiplied on the right of \eqref{representative of plane}, which implies that the complex symmetric matrices $^t\!EE$ differ by unitary congruence. This means that we can endow $V$ with $d+1$ numbers
$$\sigma_0\leq\sigma_1\leq\cdots\leq\sigma_d,$$
which are the singular values of\, $^t\!EE$ and define $O(n+1;\mathbb{R})$-invariant functions on $G(d+1,n+1,\mathbb{C})$. For convenience, in this paper, we also call $\{\sigma_0,\cdots,\sigma_{d}\}$
the \emph{singular values of $V$}, or the \emph{singular values of the $O(n+1;\mathbb{R})$-orbit through $V$ in $G(d+1,n+1,\mathbb{C})$}, interchangeably. These invariants turn out to be decisive in reconstructing the $(d+1)$-plane $V$. Our main observation is inspired by the work of Berndt \cite[Section 6,~p.~27]{BerndtTwoGrassman}, as follows.
Let $M$ be a complete Riemannian manifold, and
let $p\in M$ be a fixed point. Let $G$ be a compact Lie group with an isometric action on $M$ given by
$ G\times M\rightarrow M,\;
(g,q)\mapsto g\cdot q.$
Set $N=G\cdot p$, the $G$-orbit through $p$ in $M$. It is a homogeneous space $G/H$, where $H$ is the isotropy group of $G$ at $p$.
We denote the normal space to $N$ at $p$ in $M$ by $T_p^\perp N$.
Note that $H$ also induces an isometric isotropy action on $T_{p}^{\perp}N$,
\begin{equation*}
H\times T_{p}^{\perp}N \rightarrow T_{p}^{\perp}N,\quad\quad\quad\quad
(g,v)\mapsto g_{\ast}|_{p}~v.
\end{equation*}
\begin{theorem}\label{general compute the orbit sapce}
For every point $q\in M$, there are $A\in G$ and $v\in T_{p}^{\perp}N$, such that
\begin{equation*}
q=A\cdot \exp_{p}v,
\end{equation*}
where $\exp_{p}$ is the exponential map of the Riemannian manifold $M$ at $p$.
Moreover, under the isotropy action, if $u\in T_{p}^{\perp}N$ lies in the same $H$-orbit as $v$, i.e., if there is $g\in H$ such that $v=g_{\ast}|_{p}\cdot u$, then $q=Ag\cdot \exp_{p}u$.
\end{theorem}
\begin{proof}
Since $G$ is compact, the $G$-orbit $N$ through $p$ is also compact. Since $M$ is complete, for every point $q\in M$, there is a point $r\in N$, such that
$dist(q,r)=dist(q,N).$
Let $l(t)$ be a minimal geodesic connecting $r$ and $q$. By the first variation formula, $l(t)$ is a normal geodesic starting from $r$. So there is a normal vector $w\in T_{r}^{\perp}N$, such that $q=\exp_{r}w$. Since $G$ acts on the $G$-orbit $N$ transitively, there is an $A\in G$ such that
$A\cdot p=r.$
Denote $A^{-1}_{\ast}|_{r}~w\in T_{p}^{\perp}N$ by $v$. Then we have $q=A\cdot \exp_{p}v$ since the action of $G$ is isometric. The second claim is now clear with slight modification.
\end{proof}
Using Theorem~\ref{general compute the orbit sapce}, we obtain the following decomposition result of unitary matrices.
\begin{prop}\label{norm form of unitary group}
Let $U$ be a unitary matrix in $U(n+1)$ with $n\geq d$.
{\bf (1)} If $2d+1\leq n$, then there exist $A\in O(n+1;\mathbb{R}), U_{1}\in U(d+1),~U_{2}\in U(n-d)$, such that
\begin{equation}\label{general uintary low d}
U=A\begin{pmatrix}
\Lambda_{1} & \Lambda_{2} & 0 \\
\Lambda_{2} & \Lambda_{1} & 0 \\
0 & 0 & Id_{n-2d-1} \\
\end{pmatrix}\begin{pmatrix}
U_{1} & 0 \\
0 & U_{2} \\
\end{pmatrix},
\end{equation}
where $\Lambda_{1}=\diag(\cos a_{0},\ldots,\cos a_{d}),~\Lambda_{2}=\diag(\sqrt{-1}\sin a_{0},\ldots,\sqrt{-1}\sin a_{d})$, for some $a_{j}\in [0,2\pi],~0\leq j\leq d$.
{\bf (2)} If $2d+1>n$, set $l=2d+1-n$. Then there exist $A\in O(n+1;\mathbb{R}),~U_{1}\in U(d+1),~U_{2}\in U(n-d)$, such that
\begin{equation}\label{general uintary high d}
U=A\begin{pmatrix}
\Lambda_{1} & 0 & \Lambda_{2} \\
\Lambda_{2} & 0 & \Lambda_{1} \\
0 & Id_{l} & 0 \\
\end{pmatrix}\begin{pmatrix}
U_{1} & 0 \\
0 & U_{2} \\
\end{pmatrix},
\end{equation}
where $\Lambda_{1}=\diag(\cos a_{0},\ldots,\cos a_{d-l}),~\Lambda_{2}=\diag(\sqrt{-1}\sin a_{0},\ldots,\sqrt{-1}\sin a_{d-l})$, for some $a_{j}\in [0,2\pi],~0\leq j\leq d-l$.
\end{prop}
\begin{proof}
We consider only case (1) with $2d+1\leq n$; the proof for the other case is similar. Equip $U(n+1)$ with the standard bi-invariant metric given by $(u,v)=Re~tr(u^{\ast}~v)$ on the Lie algebra $\mathfrak{u}(n+1)$.
Consider the group $G:=O(n+1;\mathbb{R})\times U(d+1)\times U(n-d)$ and its action on $U(n+1)$ given by
\begin{equation*}
G\times U(n+1)\rightarrow U(n+1),\quad\quad\quad
((A,K,L)
,U)\mapsto AU\begin{pmatrix}
K^{-1} & \\
& L^{-1} \\
\end{pmatrix},
\end{equation*}
which preserves the metric of $U(n+1)$.
We denote the $G$-orbit through the identity $Id\in U(n+1)$ by $N$. The isotropy group $H$ at $Id$ is isomorphic to $O(d+1;\mathbb{R})\times O(n-d;\mathbb{R})$,
\begin{equation*}
H=\{(\begin{pmatrix}
A_{1} & 0 \\
0 & A_{2} \\
\end{pmatrix}
,A_{1},A_{2})|~A_{1}\in O(d+1;\mathbb{R}),~A_{2}\in O(n-d;\mathbb{R})\}.
\end{equation*}
It is easy to see that $T_{Id}N=\mathfrak{o}(n+1;\mathbb{R})+\mathfrak{u}(d+1)+\mathfrak{u}(n-d)$ (this is not a direct sum). Hence the normal space $T_{Id}^{\perp}N$ is given by
\begin{equation*}
T_{Id}^{\perp}N=\{\begin{pmatrix}
0_{(d+1)\times(d+1)} & \sqrt{-1} ~^{t}B\\
\sqrt{-1}~B & 0_{(n-d)\times(n-d)} \\
\end{pmatrix}|~B\in M_{(n-d)\times(d+1)}(\mathbb{R})\}.
\end{equation*}
The induced isotropy action of $H$ on $T_{Id}^{\perp}N$ is $H\times T_{Id}^{\perp}N\rightarrow T_{Id}^{\perp}N$ given by
$$
\begin{small}\Big((\begin{pmatrix}
A_{1} & 0 \\
0 & A_{2} \\
\end{pmatrix}
,A_{1},A_{2}),\begin{pmatrix}
0_{(d+1)\times(d+1)} & \sqrt{-1} ~^{t}B\\
\sqrt{-1}~B & 0_{(n-d)\times(n-d)} \\
\end{pmatrix}\Big)\mapsto \begin{pmatrix}
0_{(d+1)\times(d+1)} & \sqrt{-1} ~A_{1}~^{t}BA_{2}^{-1}\\
\sqrt{-1}~A_{2}BA_{1}^{-1} & 0_{(n-d)\times(n-d)} \\
\end{pmatrix}.
\end{small}
$$
By Corollary \ref{real svd theorem} and the assumption $n-d\geq d+1$, without loss of generality, we may assume that $B$ is in the form
\begin{equation*}
B=\begin{pmatrix}
\Lambda \\
0_{(n-2d-1)\times(d+1)} \\
\end{pmatrix},
\end{equation*}
where $\Lambda=\diag(a_{0},\ldots,a_{d})$, for some $a_{j}\in\mathbb{R},~0\leq j\leq d$. Then by a straightforward computation on matrix exponential while invoking Theorem \ref{general compute the orbit sapce}, we arrive at \eqref{general uintary low d}.
\end{proof}
With the above decomposition theorem of unitary matrices, we can now show how to reconstruct the $(d+1)$-plane $V\subset \mathbb{C}^{n+1}$ from singular values.
\begin{coro}\label{classification of plane}
Let $V$ be a fixed $(d+1)$-plane in $\mathbb{C}^{n+1}$. Denote its distinct singular values by $\sigma_0,\sigma_1,\cdots,\sigma_m$, in increasing order, with multiplicities $r_0, r_1, \cdots, r_m$, respectively. Then all $\sigma_j \in [0,1]$, and there exist numbers $a_j\in [0, \frac{\pi}{4}]$ such that $\sigma_{j}=\cos2a_j$ for all $j$. Moreover,
{\bf (1)} if $2d+1\leq n$, then any orthonormal basis $E$ of $V$ can be expressed as $E=AV_{\vec{\sigma}}U$, where $A\in O(n+1;\mathbb{R})$, $U\in U(d+1)$, and
\begin{small}
\begin{equation}\label{jm alpha0
V_{\vec{\sigma}}\doteq\begin{pmatrix}
\mathrm{J}_0(\sigma_0)~&&\\
&\ddots&\\
&&\mathrm{J}_m(\sigma_m)\\
0_{(n-2d-1)\times r_0}&\cdots&0_{(n-2d-1)\times r_m}\\
\end{pmatrix},~~~
\mathrm{J}_j(\sigma_j)\doteq\begin{pmatrix}
\cos a_j \,Id_{r_j} \\
\sqrt{-1}\sin a_j\,Id_{r_j} \\
\end{pmatrix},~0\leq j\leq m;
\end{equation}
\end{small}
{\bf (2)} if $2d+1>n$ and we set $l=2d+1-n$, then $\sigma_m=1$, $r_m\geq l$, and any orthonormal basis $E$ of $V$ can be expressed by $E=AV_{\vec{\sigma}}U$, where $A\in O(n+1;\mathbb{R})$, $U\in U(d+1)$, and
\begin{small}
\begin{equation}\label{jm alpha
V_{\vec{\sigma}}\doteq\begin{pmatrix}
\mathrm{J}_0(\sigma_0)~&&&\\
&\ddots&&\\
&&\mathrm{J}_{m-1}(\sigma_{m-1})&\\
&&&Id_{r_m}\\
0_{(r_m-l)\times r_0}&\cdots&\cdots&0_{(r_m-l)\times r_m}
\end{pmatrix},
\end{equation}
\end{small}
where $\mathrm{J}_j(\sigma_j)$ is defined similarly as in \eqref{jm alpha0}.
\end{coro}
\begin{proof}
~We only deal with the case of $2d+1\leq n$; the proof for the case of $2d+1>n$ is similar.
Assume $E=(e_{0},\ldots,e_{d})$ is an orthonormal basis of $V$. Then there is a $U\in U(n+1)$, such that
\begin{equation*}
E=U\begin{pmatrix}
Id_{d+1} \\
0 \\
\end{pmatrix}.
\end{equation*}
By \eqref{general uintary low d}, we obtain \eqref{jm alpha0} for some real parameter $(a_{0},\ldots,a_{d})\in\mathbb{R}^{d+1}$, where $a_{j}\in[0,2\pi]$. We leave it to the reader as a routine exercise to show that
by multiplying $O(n+1;\mathbb{R})$ on the left and $U(d+1)$ on the right of \eqref{jm alpha0}, we may assume $a_{j} \in[0,\frac{\pi}{4}]$.
\end{proof}
\begin{remark}\label{rk-nonfull}
From the above proposition, we see that if $2d+1<n$, then there exists a universal $\mathbb{C}^n\subset\mathbb{C}^{n+1}$ such that any $(d+1)$-plane in $\mathbb{C}^{n+1}$ can be transformed into $\mathbb{C}^n$ by an orthogonal transformation. So, to consider the orbit space $G(d+1,n+1,\mathbb{C})/O(n+1;\mathbb{R})$, we may assume $2d+1\geq n$ in the following.
\end{remark}
Now the structure of $G(d+1,n+1,\mathbb{C})/O(n+1;\mathbb{R})$ follows from Corollary~\ref{classification of plane} directly.
\begin{theorem}\label{moduli of plane}
If $d\leq n\leq 2d+1$, then the orbit space
$$G(d+1,n+1,\mathbb{C})/O(n+1;\mathbb{R})\cong \Theta_{d,n},$$
where
\begin{small}
\begin{equation}\label{eq-Theta}
\Theta_{d,n}:=\{\vec{\sigma}=(\sigma_{0},\sigma_{1},\cdots,\sigma_{d})\in \mathbb{R}^{d+1}|~0\leq \sigma_{0}\leq \sigma_{1}\leq\cdots\leq \sigma_{d}\leq 1,~\sigma_{n-d}=\cdots=\sigma_{d}=1\},
\end{equation}
\end{small}
which is a convex polytope in $\mathbb{R}^{d+1}$.
\end{theorem}
\iffalse
\begin{proof}
The singular values of a $d+1$ plane gives a natural map form $G(d+1,n+1;\mathbb{C})/O(n+1;\mathbb{R})$ to $\Theta_{d,n}$. We need only prove that it has an inverse. Set $r=2d+1-n$, take an element $\vec{\sigma}\in \Theta_{d,n}$ and represent it as below
$$\vec{\sigma}=(\cos 2a_0,\cdots,\cos 2a_{d-r}, \underbrace{1,\cdots,1}_{r}).$$
Then the required inverse mapping is given by
\begin{equation}
\vec{\sigma}\mapsto\begin{bmatrix}
W_{\vec{\sigma}}\doteq\begin{pmatrix}
\Lambda_{1} & 0_{(d+1-r)\times r}\\
\Lambda_{2} & 0_{(d+1-r)\times r}\\
0_{r\times (d+1-r)} & Id_{r}
\end{pmatrix} \\
\end{bmatrix},
\end{equation}
where $\Lambda_{1}=\diag(\cos a_{0},\ldots,\cos a_{d-r}),~\Lambda_{2}=\diag(\sqrt{-1}\sin a_{0},\ldots,\sqrt{-1}\sin a_{d-r})$.
\end{proof}
\fi
We point out that the isotropic projective $2$-planes in $\mathcal{Q}_{4}$ given by the linear spans of $(e_{i}\pm\sqrt{-1}e_{i+3})/\sqrt{2},~1\leq i\leq 3$, according to the sign, relative to the standard basis of ${\mathbb C}^6$, show that two planes corresponding to the same parameters in $\Theta_{d,n}$ need not be congruent under the action of $SO(n+1;\mathbb{R})$. This is the key reason why we chose the group $O(n+1;\mathbb{R})$.
\begin{remark}
For a given $\vec{\sigma}\in \Theta_{d,n}$, the orbit determined by it can also be described in a clear way through computing the isotropic group of $V_{\vec{\sigma}}$ {\rm(}see \eqref{jm alpha}{\rm)}. Since it will not be used in the following, we omit it here, only to point out that the computation is similar to the proof of Lemma~{\rm\ref{lemma-equiv}} in the next section.
\end{remark}
\section{The moduli space of constantly curved 2-spheres minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^n$}\label{sec-mod}
Let $d,n$ be two positive integers with $d\leq n$, and let $p$ be a nonnegative integer with $0\leq p\leq [\frac{d}{2}]$.
Suppose $\gamma\in \mathbf{H}_{d,n,p}$.
From~\eqref{rnc in hyperquadric}, we see $\gamma$ can be parameterized as $\gamma=EZ_{d,p}$, where $E$ is an orthonormal basis of the projective $d$-plane spanned by $\gamma$. It follows from Remark~\ref{rk-nonfull} that if $2d+1<n$, then $\gamma$ is not linearly full.
To consider the linearly full minimal 2-spheres, we make the following convention in the subsequent part of this paper,
$$n\leq 2d+1,~~~l:=2d+1-n.$$
The structure of $\mathbf{H}_{d,n,p}$ is now clear following the expression of $E$ given in Corollary~\ref{classification of plane}.
\begin{prop}\label{classification of cdn}
\begin{equation}
\mathbf{H}_{d,n,p}=\{EZ_{d,p}|~E\in M(n+1, d+1),~E^{*}E=Id_{d+1},~~^{t}\!EE\in \mathbf{S}_{d,n,p}\},
\end{equation}
where $\mathbf{S}_{d,n,p}$ is a closed subset of $\mathscr{S}_{d,p}$ {\rm(}see~\eqref{important subsapce of symm matrices} for notation{\rm)}, defined by
\begin{equation*}
\{S\in \mathscr{S}_{d,p}| ~\text{$S$ takes 1 as its maximal singular value with multiplicity no less than } 2d+1-n\}.
\end{equation*}
\end{prop}
Here, we make another convention that will be used in the subsequent part of this paper,
\begin{equation*}
\mathbf{H}_{d,n}:=\mathbf{H}_{d,n,0},\qquad \mathbf{S}_{d,n}:=\mathbf{S}_{d,n,0}.
\end{equation*}
\begin{remark}\label{construction}
Conversely, for a given symmetric matrix $S\in \mathbf{S}_{d,n,p}$ with $\corank(S)=r_0$, suppose all the distinct positive singular values of $S$ are given by
$$\cos 2a_1< \cos 2a_2< \cdots< \cos 2a_{m},$$
with multiplicities $r_1, r_2, \cdots, r_m$, respectively, where $a_j\in[0,\frac{\pi}{4}]$.
Consider the SVD of $S$. It follows from Theorem~\ref{singular values as unique invariant} that there exists a unitary matrix $U\in U(d+1)$ such that
\begin{equation}\label{eq-SVD-S}
S=^{t}\!\!U\diag(0_{r_0\times r_0},\,\cos 2a_1I_{r_1},\,\cos 2a_2I_{r_2},\, \cdots , \,\cos 2a_{m}I_{r_m})\,U.
\end{equation}
It is easy to verify that
\begin{equation}\label{ex-construction}
V_{\vec{\sigma}}\,U\,Z_{d,p} \in \mathbf{H}_{d,n,p},
\end{equation}
where we have used the notation $V_{\vec{\sigma}}$ introduced in \eqref{jm alpha0} and \eqref{jm alpha}.
\end{remark}
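To illustrate the recipe in Remark \ref{construction}, we record a small numerical sketch (our own example, not a canonical choice): for $d=2$, the space $\mathscr{S}_{2}$ is spanned by $E_{02}+E_{20}-E_{11}$, and scaling it by $t=1/2$ produces, via \eqref{ex-construction} with $p=0$ and $n=5$, a constantly curved holomorphic conic lying in $\mathcal{Q}_{4}$; since all singular values coincide here, a Takagi factor $U$ can be written down by hand.
\begin{verbatim}
import numpy as np

# d = 2: S_2 is spanned by E_02 + E_20 - E_11; take t = 1/2, so all
# singular values equal 1/2 < 1 and n = 2d + 1 = 5.
t = 0.5
S = t * np.array([[0, 0, 1], [0, -1, 0], [1, 0, 0]], dtype=complex)

# Takagi factor by hand: P projects onto the (-1)-eigenspace of S/t,
# and the symmetric unitary U = I - (1 - i) P satisfies U^T U = S/t.
P = np.array([[0.5, 0, -0.5], [0, 1.0, 0], [-0.5, 0, 0.5]],
             dtype=complex)
U = np.eye(3) - (1.0 - 1.0j) * P
assert np.allclose(U.T @ (t * np.eye(3)) @ U, S)

# cos(2a) = 1/2, i.e. a = pi/6; build V_sigma (cos a Id stacked over
# i sin a Id, as in Corollary 3.3) and the frame E = V_sigma U.
a = np.pi / 6.0
V_sigma = np.vstack([np.cos(a) * np.eye(3),
                     1j * np.sin(a) * np.eye(3)])
E = V_sigma @ U
assert np.allclose(E.conj().T @ E, np.eye(3))   # orthonormal frame

Z2 = lambda z: np.array([1.0, np.sqrt(2.0) * z, z * z], dtype=complex)
for z in np.random.default_rng(1).standard_normal(4):
    gamma = E @ Z2(z)
    assert abs(gamma @ gamma) < 1e-12           # the curve lies in Q_4
\end{verbatim}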
\begin{lemma}\label{lemma-equiv}
If $\widetilde U\in U(d+1)$ is another unitary matrix in the SVD \eqref{eq-SVD-S} of $S$,
then there exists a matrix $A\in O(n+1;\mathbb{R})$ such that
$$AV_{\vec{\sigma}}U=V_{\vec{\sigma}}\widetilde U.$$
\end{lemma}
\begin{proof}
It follows from Theorem~\ref{singular values as unique invariant} that
$$\widetilde U=\diag(A_{r_0},A_{r_2},\cdots,A_{r_m})U,$$
where $A_{r_0}\in U(r_0)$ is unitary and $A_{r_j}\in O(r_j;\mathbb{R})$ is real orthogonal for any $1\leq j\leq m$.
Note that $V_{\vec{\sigma}}$ can also be written as
$$V_{\vec{\sigma}}=\diag(\mathrm{J}_0,\mathrm{J}_1,\cdots, \mathrm{J}_{m-1},\mathrm{J}_m),$$
where $\mathrm{J}_0=^t\!\!\begin{pmatrix}
\frac{1}{\sqrt{2}} \,Id_{r_0},
\frac{\sqrt{-1}}{\sqrt{2}}\,Id_{r_0}
\end{pmatrix}$, $\mathrm{J}_j=^t\!\!\begin{pmatrix}
\cos a_j \,Id_{r_j},
\sqrt{-1}\sin a_j\,Id_{r_j}
\end{pmatrix}$
for $1\leq j\leq m-1$, and
$\mathrm{J}_m=^t\!\!\begin{pmatrix}
\cos a_m \,Id_{r_m},
\sqrt{-1}\sin a_m\,Id_{r_m}
\end{pmatrix}$ or $\mathrm{J}_m=^t\!\!\begin{pmatrix}
Id_{r_m} ,
0_{ r_m\times(r_m-l)} \\
\end{pmatrix}$.
Now the conclusion follows from a straightforward verification that
$$\mathrm{J}_0A_{r_0}=\begin{pmatrix}
\frac{1}{\sqrt{2}} \,Id_{r_0}\\
\frac{\sqrt{-1}}{\sqrt{2}}\,Id_{r_0}
\end{pmatrix}A_{r_0}=\begin{pmatrix}
Re(A_{r_0})&Im(A_{r_0})\\
-Im(A_{r_0})&Re(A_{r_0})
\end{pmatrix}\begin{pmatrix}
\frac{1}{\sqrt{2}} \,Id_{r_0}\\
\frac{\sqrt{-1}}{\sqrt{2}}\,Id_{r_0}
\end{pmatrix},$$
$\mathrm{J}_jA_{r_j}=\diag(A_{r_j}, A_{r_j})\mathrm{J}_j$ for $1\leq j\leq m-1$, and $\mathrm{J}_mA_{r_m}=\diag(A_{r_m}, B)\mathrm{J}_m$ with $B=A_{r_m}$ or $B=Id_{r_m-l}$.
\end{proof}
To build a clearer relation between $\mathbf{H}_{d,n,p}$ and $\mathbf{S}_{d,n,p}$, some equivalences need to be introduced.
Two minimal 2-spheres in $\mathbf{H}_{d,n,p}$ are said to be equivalent if they are congruent in $\mathcal{Q}_{n-1}$, i.e., if one can be brought to the other by some $A\in O(n+1;\mathbb{R})$ and some $SU(2)$-reparametrization of ${\mathbb C}P^1$. For convenience, denote by $\mathbf{H}_{d,n,p}/O(n+1;\mathbb{R})$ the set of all equivalence classes. We point out that
\begin{equation*}
\mathbf{Mini}_{d,n}/O(n+1;\mathbb{R})=\bigsqcup_{0\leq p\leq d}\mathbf{H}_{d,n,p}/O(n+1;\mathbb{R}),
\end{equation*}
describes the moduli space of all noncongruent 2-spheres of constant curvature, minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^n$.
The equivalence on $\mathbf{S}_{d,n,p}$ is defined by the following group action, which is induced by \eqref{action on M_d},
\begin{align}\label{action on mathbf s_d}
\varrho_{d}: U(1)\times SU(2)\times \mathbf{S}_{d,n,p}\rightarrow \mathbf{S}_{d,n,p},\quad\quad\quad
(\lambda,g, S)\mapsto \lambda\, ^{t}\!\!\rho^{\frac{d}{2}}(g)\cdot S\cdot \rho^{\frac{d}{2}}(g).
\end{align}
The orbit space of $\varrho_{d}$ is denoted by $\mathbf{S}_{d,n,p}/U(1)\times SU(2)$.
From the above definition of equivalence, it is easy to see that the map
$$\mathbf{H}_{d,n,p}/O(n+1;\mathbb{R})\longrightarrow \mathbf{S}_{d,n,p}/U(1)\times SU(2),~~~~~~[\gamma]=[EZ_{d,p}]\mapsto [^{t}\!EE],$$
is well-defined, with its inverse given by
$$\mathbf{S}_{d,n,p}/U(1)\times SU(2)\longrightarrow\mathbf{H}_{d,n,p}/O(n+1;\mathbb{R}),~~~~~~[S]\mapsto [V_{\vec{\sigma}}UZ_{d,p}],$$
where $\vec{\sigma}$ is the singular-value vector of $S$, $U\in U(d+1)$ is a unitary matrix coming from the SVD \eqref{eq-SVD-S} of $S$, and the well-definedness of this inverse mapping follows from Lemma~\ref{lemma-equiv}.
\begin{theorem}\label{simple classification} The moduli space of noncongruent $2$-spheres minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^n$, with the same curvature $K=4/(d+2p(d-p))$, is given by
$$\mathbf{H}_{d,n,p}/O(n+1;\mathbb{R})\cong\mathbf{S}_{d,n,p}/U(1)\times SU(2).$$
\end{theorem}
A special case is $p=0$, for which $\mathbf{H}_{d,n}/O(n+1;\mathbb{R})$ is the moduli space of all noncongruent holomorphic 2-spheres of constant curvature and degree $d$ in $\mathcal{Q}_{n-1}$. As a consequence of Theorem~\ref{simple classification}, we reprove the main results of \cite{M-N-T}.
\begin{theorem}\label{coro-dim}
~
{\bf (1)} If $2d+1<n$, then holomorphic $2$-spheres of constant curvature and degree $d$ in $\mathcal{Q}_{n-1}$ are not linearly full.
{\bf (2)}\label{con-2} $\bigcup_{n<2d+1}\mathbf{H}_{d,n}/O(n+1;\mathbb{R})$ constitutes the boundary of\, $\mathbf{H}_{d,2d+1}/O(n+1;\mathbb{R})$, and
\begin{equation}\label{eq-dimcount}
\dim(\mathbf{H}_{d, 2d+1}/O(2d+2;\mathbb{R}))=d^2-d-4.
\end{equation}
\end{theorem}
\begin{proof}
The first conclusion follows from Remark~\ref{rk-nonfull}, as pointed out in the beginning of this section. In view of Theorem~\ref{simple classification}, to prove the second conclusion, it suffices to analyze $\mathbf{S}_{d,n}$. By definition, the boundary of $\mathbf{S}_{d,2d+1}$ is comprised of all $\mathbf{S}_{d,n}$ with $n<2d+1$. The dimension count follows from
\begin{equation*}
\dim\mathbf{S}_{d,2d+1}=\dim\mathscr{S}_{d}=\dim Sym_{d+1}(\mathbb{C})-2(2d+1)=d^2-d,
\end{equation*}
together with the fact that passing to the quotient by the $4$-dimensional group $U(1)\times SU(2)$, which has discrete isotropy at a generic point, subtracts $4$.
\end{proof}
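For the reader's convenience, here is the arithmetic behind \eqref{eq-dimcount} (our count, with all dimensions real): the space of complex symmetric $(d+1)\times(d+1)$ matrices has
$$\dim Sym_{d+1}(\mathbb{C})=(d+1)(d+2),$$
and $\mathscr{S}_{d}$ is cut out of it by $2d+1$ complex, i.e., $2(2d+1)$ real, linear conditions, so that
$$(d+1)(d+2)-2(2d+1)=d^{2}-d;$$
modding out by the $4$-dimensional group $U(1)\times SU(2)$ then yields $d^{2}-d-4$.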
\begin{remark}\label{rk-geo-exp}
We give a geometric explanation of the structure of the boundary of the moduli space $\mathbf{H}_{d,2d+1}/O(2d+2;\mathbb{R})$. Since we can naturally embed a lower-dimensional complex projective space into higher-dimensional ones, it is readily seen that
$$\mathbf{H}_{d,d}\subset\mathbf{H}_{d,d+1}\subset\cdots\subset\mathbf{H}_{d,2d}\subset\mathbf{H}_{d,2d+1}.$$
Conversely,
take a holomorphic $2$-sphere $\gamma\in\mathbf{H}_{d,2d+1}$ on the boundary. Then the maximal singular value of the $(d+1)$-plane $V$ spanned by $\gamma$ in $\mathbb{C}^{n+1}$ is $1$; denote its multiplicity by $r_m$. It follows from \eqref{jm alpha0} in Corollary~{\rm\ref{classification of plane}} that there exists a real orthogonal transformation $A$ such that $A\gamma\in \mathbf{H}_{d,2d+1-r_m}.$
\end{remark}
Theorem~\ref{coro-dim} shows that there is an abundance of noncongruent holomorphic $2$-spheres of constant curvature and degree $d$ in $\mathcal{Q}_{2d}$. We give a more detailed account of this.
\begin{prop}
Suppose $d\geq 4$, and $V$ is a generic $d$-plane in $\mathbb{C}P^{2d+1}$ containing a holomorphic $2$-sphere $\gamma\in \mathbf{H}_{d,2d+1}$. Then in $\mathbf{H}_{d,2d+1}$, there is at least a $(d^{2}-2d-5)$-dimensional family of holomorphic $2$-spheres lying in $V$, which are all noncongruent to each other.
\end{prop}
\begin{proof}
From Section~\ref{sec-orbit}, we know that a $d$-plane in $\mathbb{C}P^{2d+1}$ is determined by its singular values up to real orthogonal transformations. Combining this with Theorem~\ref{simple classification}, we need only prove the corresponding lower bound for complex symmetric matrices with the same singular values in $\mathbf{S}_{d,2d+1}$, modulo the action of $U(1)\times SU(2)$.
Note that the singular values give a semialgebraic map from $\mathbf{S}_{d,2d+1}$ to $\Theta_{d,2d+1}$ (see \eqref{eq-Theta} for definition), which are both semialgebraic sets. The local triviality theorem for semialgebraic maps \cite{Hardt} implies that for a generic point in the image of this map, the dimension of its preimage is at least
$$\dim\mathbf{S}_{d,2d+1}-\dim\Theta_{d,2d+1}=d^2-2d-1.$$
Modding out by the $4$-dimensional action of $U(1)\times SU(2)$ then gives the claimed $(d^{2}-2d-5)$-dimensional family.
\end{proof}
As mentioned in the introduction, the ideal of a rational normal curve (holomorphic 2-sphere) of degree $d$ is generated by $d^2-d$ independent quadrics. Note that a rational normal curve $\gamma$ belongs to $\mathbf{H}_{d,n}$ if and only if the quadric given by the intersection of $\mathcal{Q}_{n-1}$ with the projective $d$-plane spanned by $\gamma$ belongs to the ideal of this curve. Conversely, from Proposition~\ref{classification of cdn}, we see that to guarantee that a quadric in this ideal lies in $\mathcal{Q}_{n-1}$, there are no other constraints if $n=2d+1$, whereas there are more constraints when $n<2d+1$, as the symmetric matrix determined by this quadric takes $1$ as its maximal singular value.
The SVD method introduced in Remark~\ref{construction} proves effective in handling this problem, as will be shown in the next section.
\section{Constantly curved holomorphic 2-spheres of higher degree in $\mathcal{Q}_{n-1}$}\label{sec-const}
In this section, we consider the existence of linearly full holomorphic 2-spheres of constant curvature and higher degree ($d>\frac{n-1}{2}$) in $\mathcal{Q}_{n-1}$. Similar to the discussion in Remark~\ref{rk-geo-exp}, it is easy to verify that such holomorphic 2-spheres belong to $\mathbf{H}_{d,n}\setminus\mathbf{H}_{d,n-1}$.
From the construction method given in the last section (see Remark~\ref{construction}), we see that to construct examples in $\mathbf{H}_{d,n}\setminus\mathbf{H}_{d,n-1}$, a matrix $S\in \mathbf{S}_{d,n}\setminus\mathbf{S}_{d,n-1}$ needs to be found. To be precise, we need to construct a complex symmetric matrix $S:=(s_{i,\,j})$ which takes $1$ as its maximal singular value with multiplicity $2d+1-n>0$ and satisfies the following equations
\begin{equation}\label{requirement}
\sum_{i+j=k}s_{i,\,j}\;\sqrt{\tbinom{d}{i}\;\tbinom{d}{j}}=0,~~~0\leq k\leq 2d.
\end{equation}
Observe that the higher the multiplicity of the maximal singular value $1$, the more severe the restriction imposed on the set $\mathbf{H}_{d,n}$ when $d\leq n<2d+1$. In fact, the results of Li and Jin \cite{zbMATH05590797} show that $\mathbf{H}_{4,5}$ is nonempty while $\mathbf{H}_{5,5}$ is empty; see also Proposition~\ref{Pr} for a singular-value proof of this fact. Nevertheless,
we can prove the following.
\begin{theorem}\label{thm-Hd}
If $d\geq 3$, then
$$\varnothing\neq \mathbf{H}_{d,d+2}\subsetneqq \mathbf{H}_{d,d+3}\subsetneqq\cdots\subsetneqq \mathbf{H}_{d,2d+1}.$$
Moreover, ignoring the action of the orthogonal group and reparametrizations of\, $\mathbb{C}P^1$,
we have the following dimension estimates: $\dim(\mathbf{H}_{d, d+4})\geq3$ and
$$\dim(\mathbf{H}_{d, n+1}\setminus {\mathbf H}_{d,n})\geq (n-d)^2-11(n-d)+33,~~~d+5\leq n\leq 2d+1.$$
\end{theorem}
\begin{remark}\label{d,d+2rmk}
That the chain starts with ${\mathbf H}_{d,d+2}$ is essential, since ${\mathbf H}_{d,d}$ and ${\mathbf H}_{d,d+1}$ may be empty when $d$ is odd, while ${\mathbf H}_{d,d+1}$ may be equal to ${\mathbf H}_{d,d}$ when $d$ is even; see Remark~\rm{\ref{lijin}}.
\end{remark}
To establish the theorem, the following lemmas are needed.
\begin{lemma}\label{elme-lemma1}
Suppose $0< \epsilon_0<\epsilon_1<\cdots<\epsilon_k, k\geq 1,$ are some constants. For any given numbers
$$0\leq x_0\leq x_1\leq \cdots \leq x_{k-1}\leq 1,$$
there exists a number $x_k$ such that $0\leq x_k <x_{k-1}$, and, moreover, it solves
\begin{equation}\label{equxk1}
\aligned
&x_k\epsilon_k =
\sum_{j=0}^{(k-2)/2}(x_{2j+1}\epsilon_{2j+1}-x_{2j}\epsilon_{2j}),\quad
\text{if}\;\; k\;\text{is even, or},\\
&x_k\epsilon_k =
\sum_{j=1}^{(k-1)/2}(x_{2j}\epsilon_{2j}-x_{2j-1}\epsilon_{2j-1})+x_0\epsilon_0,\quad
\text{if}\;\;k\; \text{is odd}.
\endaligned
\end{equation}
\end{lemma}
\begin{proof}
With our assumption, it is easy to see that the number $x_k$ solving~\eqref{equxk1}
is nonnegative. On the other hand, we can rewrite \eqref{equxk1} as
$$x_k\epsilon_k =-x_0\epsilon_0-
\sum_{j=1}^{(k-2)/2}(x_{2j}\epsilon_{2j}-x_{2j-1}\epsilon_{2j-1})+x_{k-1}\epsilon_{k-1}\leq x_{k-1}\epsilon_{k-1}$$
(written here for even $k$; the odd case is analogous), which, since $\epsilon_{k-1}<\epsilon_k$, implies $x_k<x_{k-1}$.
\end{proof}
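As a quick illustration (our example): take $k=2$, $(\epsilon_0,\epsilon_1,\epsilon_2)=(1,2,3)$ and $x_0=x_1=1$; then \eqref{equxk1} reads $3x_2=x_1\epsilon_1-x_0\epsilon_0=1$, so $x_2=\frac{1}{3}<x_1$.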
\begin{remark}\label{lemma1-remark}
In the lemma, it is readily seen that if $x_0$ is chosen to be positive, then $x_k$ is also positive.
\end{remark}
\begin{lemma}\label{elem-lemma2}
Suppose $0<\epsilon_0<\epsilon_1<\cdots<\epsilon_k, k\geq 1,$ are some constants and $\epsilon_k< 2\epsilon_{k-1}$. For any given numbers
$$0\leq x_0\leq x_1\leq \cdots \leq x_{k-2}\leq 1,$$
and $0\leq x_k \leq 1,$ there exists a number $x_{k-1}$ such that $-1< x_{k-1} <1$, and, moreover, it solves
\begin{equation}\label{equxk3}
\aligned
&2x_{k-1}\epsilon_{k-1} =
2\sum_{j=0}^{(k-3)/2}(x_{2j+1}\epsilon_{2j+1}-x_{2j}\epsilon_{2j})-x_k\epsilon_k,\quad
\text{if}\;\; k\; \text{is odd, or,}\\
&2x_{k-1}\epsilon_{k-1} =
2\sum_{j=1}^{(k-2)/2}(x_{2j}\epsilon_{2j}-x_{2j-1}\epsilon_{2j-1})+2x_0\epsilon_0-x_k\epsilon_k,\quad
\text{if}\;\; k\; \text{is even}.
\endaligned
\end{equation}
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{elme-lemma1} after moving the $x_k\epsilon_k$ term from the right-hand side to the left-hand side in \eqref{equxk3}.
\end{proof}
\begin{remark}\label{lemma2-remark}
In the above lemma, it is straightforward to verify that if $x_0, x_1, \cdots, x_{k-2}, x_k$ are chosen to satisfy
$$
\text{either}\;\;0<x_0\leq x_1\leq \cdots \leq x_{k-2}\leq\frac{x_k}{2}\leq \frac{1}{2},\quad \text{or} \;\;
x_k=0<x_0\leq x_1\leq\cdots\leq x_{k-2}\leq 1,
$$
then $x_{k-1}$ determined by them is nonzero.
\end{remark}
\begin{lemma}\label{matrix-lemma}
For any given integers $l\geq 0,r\geq 0$ and numbers
$$0\leq \cos 2a_1\leq \cdots\leq \cos 2a_{r}<1,$$
if $m:=2r+l+1\leq d$ and $d\geq 3$, then we can solve the following equation
\begin{equation}\label{eqsm}
\sum_{j=0}^{m}s_{j,m-j}\sqrt{\tbinom{d}{j}}\sqrt{\tbinom{d}{m-j}}=0,
\end{equation}
such that $s_{j,m-j}=s_{m-j,j}$ and
$|s_{j,m-j}|=|s_{m-j,j}|=\cos2a_{j+1},~~~0\leq j\leq r-1,$
while for the other indices $r\leq j\leq m-r$, $|s_{j,m-j}|$ takes values in
$\{\underbrace{1, 1,\cdots,1}_l, \underbrace{\lambda, \lambda}_2\}.$
Here, $\lambda\in (-1,1)$ is determined by $\{\cos 2a_1, \cdots, \cos 2a_{r}\}$.
\end{lemma}
\begin{proof}
Set $k:=[\frac{m}{2}]$ and
$\epsilon_j:=\sqrt{\tbinom{d}{j}\tbinom{d}{m-j}},~~0\leq j\leq k.$
Using the combinatorial identity
$$\tbinom{d}{j}\tbinom{d}{m-j}\tbinom{2d}{d}=\tbinom{m}{j}\tbinom{2d-m}{d-j}\tbinom{2d}{m},$$
we obtain that
$0<\epsilon_0<\epsilon_1<\cdots<\epsilon_k.$
Moreover, it is easy to verify that if $d\geq 3$ and $m$ is an even number, then
$\epsilon_k<2\epsilon_{k-1}.$
Now the conclusion follows from Lemma~\ref{elme-lemma1} and Lemma~\ref{elem-lemma2}.
\end{proof}
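As a sanity check of the combinatorial identity used above (our arithmetic): for $d=3$, $m=2$, $j=1$, both sides equal $\tbinom{3}{1}\tbinom{3}{1}\tbinom{6}{3}=180=\tbinom{2}{1}\tbinom{4}{2}\tbinom{6}{2}$.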
We are ready to prove Theorem~\ref{thm-Hd}.
\vspace{1mm}
\begin{proof}
Set $l:=2d+1-n$. We need only prove that for any given $0\leq l\leq d-2$, there exists a symmetric complex matrix $S:=(s_{i,j})$ solving \eqref{requirement}
and taking $1$ as its maximal singular value with multiplicity $l$.
In Lemma~\ref{matrix-lemma}, take $r=[\frac{d-l-1}{2}]=[\frac{n-d}{2}]-1$. Then $m:=2r+l+1$ is equal to $d-1$ when $d-l$ is even, or to $d$ when $d-l$ is odd. For any given two groups of numbers
$$0\leq \cos 2a_1\leq \cdots\leq \cos 2a_{r}<1,\quad\quad\quad 0\leq \cos 2b_1\leq \cdots\leq \cos 2b_{r}<1,$$
we solve \eqref{eqsm} to get $\{s_{0, m},~s_{1, m-1}, \cdots,s_{m-1,1},s_{m,0}\}$ and
$\{t_{0, m},~t_{1, m-1}, \cdots,t_{m-1,1},t_{m,0}\},$ with which two solutions
of the required symmetric matrix can be given by
\begin{equation}\label{eq-S}
S=\diag\Big\{\begin{pmatrix}
A&0&^tC\\
0&D&0\\
C&0&B
\end{pmatrix},\,0_{(d-m)\times(d-m)}\Big\},
\end{equation}
where $A, B\in M(r,r)$ are two complex symmetric matrices, and $C:=(c_{i,j})\in M(r,r)$ is a complex matrix with prescribed anti-diagonal as
$$c_{j, r-1-j}=\mu \,s_{j, m-j}+\sqrt{\mu^2-1}\,t_{j,m-j},~~~0\leq j\leq r-1,$$
and $D$ is the matrix
\begin{equation*}
\aligned
&\mu\,\antidiag\Big\{s_{r, m-r},~s_{r+1, m-r-1}, \cdots,s_{m-r,r}\Big\}\\
&+\sqrt{\mu^2-1}\,\antidiag\Big\{t_{r, m-r},~t_{r+1, m-r-1}, \cdots,t_{m-r,r}\Big\};
\endaligned
\end{equation*}
here $\mu\in[0,1]$ is a parameter (so that $\sqrt{\mu^2-1}$ is purely imaginary). It is easy to see that there are many choices of $\{A, B, C\}$ satisfying \eqref{requirement}.
The singular values of $S$ defined in \eqref{eq-S} are
$$\{\underbrace{1, 1,\cdots,1}_l,\, \underbrace{\lambda, \lambda}_2,\,\underbrace{0}_{d-m},\,\sigma_1, \sigma_2,\cdots, \sigma_{2r}\},$$
where $\lambda\in (-1,1)$ is determined by $\{\cos 2a_1, \cdots, \cos 2a_{r},\cos 2b_1,\cdots,\cos 2b_{r}\}$, and
$0\leq \sigma_1\leq \sigma_2\leq\cdots\leq \sigma_{2r}$ are singular values of
$\begin{pmatrix}
A&^tC\\
C&B
\end{pmatrix}.$
Note that the largest singular value $\sigma_{2r}$, i.e., the spectral norm, is no more than the Frobenius norm, which can be made less than $1$ (an open condition) by a suitable choice of $\{A, B, C\}$. Hence the matrix $S$ we have constructed can take $1$ as its maximal singular value with multiplicity $l$.
Take such a matrix $S$ and let
$\lambda:=\cos 2\theta_{0},~~\sigma_{j}:=\cos 2\theta_j,~~1\leq j\leq 2r.$ Then we obtain a linearly full constantly curved holomorphic 2-sphere of degree $d$ in $\mathcal{Q}_{n-1}$ given by
$$
\gamma_d=
\begin{pmatrix}
\Lambda_{1}&& \\
\Lambda_{2}&& \\
&&Id_l
\end{pmatrix}UZ_d,
~~~~\text{or}~~~~
\begin{pmatrix}
\Lambda_{1}&&& \\
\Lambda_{2}&&& \\
&&\frac{1}{\sqrt{2}}&\\
&&\frac{1}{\sqrt{-2}}&\\
&&&Id_l
\end{pmatrix}UZ_d,
$$
where $\Lambda_{1}=\diag\{\cos \theta_{0},\ldots,\cos \theta_{2r}\},~\Lambda_{2}=\diag\{\sqrt{-1}\sin \theta_{0},\ldots,\sqrt{-1}\sin \theta_{2r}\}$, and $U$ is a unitary matrix coming from the {\rm SVD} of $S$ in \eqref{eq-S}. It is clear by construction that $\gamma_d\in \mathbf{H}_{d,n}\setminus \mathbf{H}_{d,n-1}$.
We thus obtain by a straightforward counting that
$2r(2r-5)+9$
gives a lower bound to the dimension of the moduli space of all required complex symmetric matrices $S$, which implies the dimension estimate of $\mathbf{H}_{d,n}$.
\end{proof}
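To connect the count $2r(2r-5)+9$ with the bound stated in the theorem (our arithmetic): when $n-d$ is odd we have $2r=n-d-3$, whence
$$2r(2r-5)+9=(n-d-3)(n-d-8)+9=(n-d)^2-11(n-d)+33,$$
while when $n-d$ is even, $2r=n-d-2$ gives $(n-d)^2-9(n-d)+23$, the larger of the two values for $n\geq d+5$; the displayed estimate is thus the smaller one.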
\begin{remark}\label{lijin}
Theorem~{\rm\ref{thm-Hd}} warrants that ${\mathbf H}_{d,n}$ is not empty so long as $n\geq d+2$. We will establish in the following proposition that ${\mathbf H}_{5,5}$ is empty and ${\mathbf H}_{4,4}/O(4;\mathbb{R})={\mathbf H}_{4,5}/O(5;\mathbb{R})$ is made up of a single point, so that Theorem~{\rm\ref{thm-Hd}} is optimal in general.
{\rm(}In fact, ${\mathbf H}_{3,3}/O(3;\mathbb{R})$ and ${\mathbf H}_{3,4}/O(4;\mathbb{R})$ are also empty, which is part of the classification for $d\leq 3$ in Section~\ref{sec-classify}.{\rm)}
\end{remark}
\begin{prop}\label{Pr}{\rm\cite{zbMATH05590797}}
${\mathbf H}_{5,5}=\varnothing$. ${\mathbf H}_{4,4}/O(4;\mathbb{R})={\mathbf H}_{4,5}/O(5;\mathbb{R})$ is made up of a single point.
\end{prop}
\begin{proof}
We give a singular-value proof. When $n=d+1$, $1$ is a singular value of the symmetric $(d+1)\times(d+1)$ matrix $S$ with multiplicity $d$. Let $\lambda\geq 0$ be the remaining singular value. Then there is a unitary matrix $U$ such that $S=U\,\diag(\lambda,1,1,\cdots,1)\,^{t}U$. So,
\begin{equation}\label{u}\overline{S}S=\overline{U}\;\diag(\lambda^2,1,1,\cdots, 1) \;^tU.\end{equation}
Expanding the right hand side of~\eqref{u} and utilizing that $U:=(u_{ij})$ is unitary, we derive
\begin{equation}\label{SS}
\overline{S}S=(t_{kl}),\quad\quad t_{kl}:=\delta_{kl}+(\lambda^2-1)\,\overline{u_{k0}}u_{l0},\quad 0\leq k,l\leq d.
\end{equation}
Now let $d=n=5$. It follows that $\lambda =1$ and $S$ is itself unitary. We can assume by Lemma~\ref{kill parameter lemma} below that the top three and the bottom two anti-diagonals of the $6\times 6$ matrix $S$ are zero. Multiplying against different columns sets up a straightforward elimination process, bearing in mind that $S$ is of rank $6$, to let us end up with the $2\times 2$ anti-diagonal block form
$$
S=\antidiag(A, B, A).
$$
But then $A$ and $B$ being unitary contradicts~\eqref{requirement}. This proves that ${\mathbf H}_{5,5}$ is empty.
We now assume $d=4$, continuing to denote the symmetric matrix now of size $5\times 5$ by $S$. If $\lambda=1$, then $S$ is unitary, a similar analysis as in the preceding case results in
\begin{equation}\label{singleton}
S:=\antidiag(1,-1,(-1)^{2},\ldots,(-1)^{d}).
\end{equation}
By the simple combinatorial identity $\sum_{i=0}^{4}(-1)^{i}\tbinom{4}{i}=0$, we know $S\notin \mathbf{S}_{4,5}\setminus\mathbf{S}_{4,4}$.
So in fact this constantly curved 2-sphere lies in a 3-quadric sitting in ${\mathbb C}P^4$, i.e., it belongs to $\mathbf{H}_{4,4}$.
Otherwise, $\lambda<1$ now.
Again, we can choose $S$ so that the top three and the bottom two anti-diagonals are zero, from which we see that the $(0,4)$-entry of $\overline{S}S$ is zero; in particular $\overline{u_{00}}u_{40}=0$. If, say, $u_{00}=0$, then~\eqref{SS} implies $t_{0j}=0$ for $j\neq 0$, which in turn implies that the first column of $S$ is unitarily orthogonal to other columns, etc.
It follows that a similar process of elimination as in the case $d=5$ returns us, if we put $S:=(s_{ij}), 0\leq i,j\leq 4 $, the values $s_{03}=-s_{12}=1$ and zero for all other entries. However, we also know that the Veronese curve $Z_d$ must satisfy~\eqref{requirement}, from which we deduce $s_{03}+\sqrt{6}s_{12}=0$. This contradiction shows the impossibility when $\lambda<1$. Thus, ${\mathbf H}_{4,4}/O(4;\mathbb{R})={\mathbf H}_{4,5}/O(5;\mathbb{R})$ is a singleton set, represented by~\eqref{singleton}.
\end{proof}
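For the reader's convenience, the coefficient computation used above (our arithmetic): the $k=3$ equation in \eqref{requirement} reads
$$2s_{03}\sqrt{\tbinom{4}{0}\tbinom{4}{3}}+2s_{12}\sqrt{\tbinom{4}{1}\tbinom{4}{2}}=4s_{03}+4\sqrt{6}\,s_{12}=0,$$
i.e., $s_{03}+\sqrt{6}s_{12}=0$, which is incompatible with $s_{03}=-s_{12}=1$.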
\begin{lemma}\label{kill parameter lemma}
For each $[S]\in \mathscr{S}_{d}/U(1)\times SU(2)$, we can find a representative $S_{0}:=(s_{k,l})\in [S]$, such that
$$s_{0,0}=s_{1,0}=s_{0,1}=s_{d,d}=s_{d-1,d}=s_{d,d-1}=0.$$
Moreover, if $d\geq 3$, $S_0$ can be chosen also satisfying
$$s_{1,1}=s_{0,2}=s_{2,0}=0.$$
\end{lemma}
\begin{proof} The first set of equalities holds by a direct calculation from \eqref{requirement}. To verify the second,
consider the projection $\Pi_{1}$ of $\mathscr{S}_{d}$ into the first summand $\mathcal{V}^{d-2}$; see Proposition \ref{quadric contains rnc and decomposition}. Write
\begin{equation*}
\Pi_{1}(S_{0})=\sum_{j=0}^{2d-4}\alpha_{j}\;u^{2d-4-j}v^{j}.
\end{equation*}
Consider the induced representation of the Lie algebra $\mathfrak{sl}(2,{\mathbb C})$ on $\mathscr{S}_{d}$ from that on $Sym_{d+1}(\mathbb{C})$, and set $J_{3}:=\diag(-1,1)/2\in \mathfrak{sl}(2,{\mathbb C})$. The eigenspace of $J_{3}$ in $\mathscr{S}_{d}$ corresponding to the eigenvalue $d-2$ is of dimension $1$ because it is
\begin{equation*}
Span_{\mathbb{C}}\{\frac{1}{2}(e_{0}\otimes e_{2}+e_{2}\otimes e_{0}),~e_{1}\otimes e_{1}\}\cap~\mathscr{S}_{d}.
\end{equation*}
So we have $s_{1,1}=s_{0,2}=s_{2,0}=0$ if and only if $\alpha_{0}=0$. For $g=\begin{pmatrix}
a & b \\
-\bar{b} & \bar{a} \\
\end{pmatrix}\in SU(2)
$, by simple calculation
\begin{equation*}
\aligned
& \Pi_{1}(\varrho^{\frac{d}{2}}\otimes \varrho^{\frac{d}{2}}(g)S_{0})=\varrho^{d-2}(g)\circ\Pi_{1}(S_{0})\\
&=\sum_{l=0}^{2d-4}\alpha_{l}\;(\bar{a}u-bv)^{2d-4-l}(\bar{b}u+av)^{l}
=:\sum_{l=0}^{2d-4}\widetilde{\alpha}_{l}\;u^{2d-4-l}v^{l},
\endaligned
\end{equation*}
where $\widetilde{\alpha}_{0}=\sum_{l=0}^{2d-4}\alpha_{l}\bar{a}^{2d-4-l}\bar{b}^{l}$. It is easy to verify that $a,b\in \mathbb{C},~|a|^{2}+|b|^{2}=1$ can be chosen such that $\widetilde{\alpha}_{0}=0$.
\end{proof}
To conclude this section, we present a result about the existence of a complex symmetric matrix $S\in\mathbf{S}_{d,n}$ taking $0$ as its minimal singular value with a given multiplicity. We point out that this is useful in the construction of constantly curved holomorphic 2-spheres in a singular hyperquadric, which will not be discussed in this paper.
\begin{lemma}\label{matrix-lemma1}
For any given $m\leq d$, there exist solutions to the equation
$$\sum_{j=0}^{m}s_{j,\,m-j}\;\sqrt{\tbinom{d}{j}\,\tbinom{d}{m-j}}=0,$$
such that $s_{j,\, m-j}=s_{m-j,\,j}\neq0$ for all $0\leq j\leq m$.
\end{lemma}
\begin{proof}
Similar to the proof of Lemma~\ref{matrix-lemma}, this follows from Lemma~\ref{elme-lemma1}, Remark~\ref{lemma1-remark}, Lemma~\ref{elem-lemma2} and Remark~\ref{lemma2-remark}.
\end{proof}
\begin{theorem}\label{thm-presin0}
For any given $q\geq 0$, there exists a constantly curved holomorphic $2$-sphere of degree $d$ such that the $(d+1)$-plane it spans takes $0$ as its minimal singular value with multiplicity $q$.
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem~\ref{thm-Hd}, except that in the proof we use Lemma~\ref{matrix-lemma1} instead of Lemma~\ref{matrix-lemma}. In fact, now the required symmetric matrix can be defined by
$$S=\diag\Big\{\antidiag\{s_{0,\, m},~s_{1,\, m-1},\;\, \cdots,\;\,s_{m-1,\,1},\;\,s_{m,\,0}\},\,\,\underbrace{0,\cdots,0}_{q}\Big\},$$
where $m=d-q$ and $\{s_{0,\, m},~s_{1,\, m-1}, \;\,\cdots,s_{m-1,\,1},\;\,s_{m,\,0}\}$ is given in Lemma~\ref{matrix-lemma1}.
\end{proof}
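As a simple instance of Lemma~\ref{matrix-lemma1} (our example): for $d=3$ and $q=1$, so that $m=2$, the equation reads $2\sqrt{3}\,s_{0,2}+3s_{1,1}=0$, solved, e.g., by $s_{0,2}=s_{2,0}=3$ and $s_{1,1}=-2\sqrt{3}$, with all entries nonzero.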
Consequently, combining Theorem~\ref{thm-Hd} with Theorem~\ref{thm-presin0}, we may prescribe the multiplicities of both singular values $0$ and $1$.
\section{Classification of constantly curved holomorphic 2-spheres of degree $\leq 3$ in $\mathcal{Q}_{n-1}$}\label{sec-classify}
Constantly curved holomorphic 2-spheres of degree $d\leq 3$ in $\mathcal{Q}_{n-1}$ are discussed in this section. Since we need only consider the linearly full case, we deal only with $\mathcal{Q}_{n-1}$ with $d\leq n\leq 2d+1$. Classification of the moduli space
$$\mathbf{H}_{d,n}/O(n+1;\mathbb{R}),~~~1\leq d\leq 3,~~~d\leq n\leq 2d+1,$$
is obtained.
\subsection{Case of $d=1$.}
It is clear that there are no isotropic lines in $\mathbb{C}P^{n}$ when $n\leq 2$, so that $\mathbf{H}_{1,1}$ and $\mathbf{H}_{1,2}$ are empty. In $\mathbb{C}P^3$, the isotropic line can be determined, up to a real orthogonal transformation and reparametrization of $\mathbb{C}P^{1}$, by
\begin{equation*}
\gamma:\mathbb{C}P^{1}\rightarrow \mathcal{Q}_{2},\quad\quad\quad\quad
[u,v]\mapsto ~^{t}[~\frac{1}{\sqrt{2}}(u,v,\sqrt{-1}u,\sqrt{-1}v)].
\end{equation*}
Hence ${\mathbf H}_{1,3}/O(4;\mathbb{R})$ is a singleton set.
\subsection{Case of $d=2$.}
By a direct computation, it is easy to verify that
\begin{small}
$$\mathbf{S}_{2,5}/U(1)\times SU(2)\!\!=\!\!\Big\{\![\antidiag(\sigma,-\sigma,\sigma)]\Big|\sigma\in [0,1]\!\Big\},~~\mathbf{S}_{2,4}/U(1)\times SU(2)\!\!=\!\!\Big\{\![\antidiag(1,-1,1)]\!\Big\}.$$
\end{small}
Combining this with Theorem~\ref{simple classification}, we have the following.
\begin{theorem}
The moduli space
$$\mathbf{H}_{2,2}/O(3;\mathbb{R})=\mathbf{H}_{2,3}/O(4;\mathbb{R})=\mathbf{H}_{2,4}/O(5;\mathbb{R})$$
is a singleton set, and
$$\mathbf{H}_{2,5}/O(6;\mathbb{R})\cong [0,1].$$
\end{theorem}
The constantly curved holomorphic 2-sphere of degree $2$ in $\mathcal{Q}_{1}\subset\mathcal{Q}_2\subset\mathcal{Q}_3$ can be parameterized as
$$\gamma: [u,v]\mapsto ~^{t}[\sqrt{-1}(u^{2}+v^{2}),\,u^{2}-v^{2},\,2uv,\,0,\,0].$$
The $1$-family of constantly curved holomorphic 2-spheres of degree $2$ in $\mathcal{Q}_{4}$ can be parametrized as
\begin{small}
$$\gamma_\sigma: [u,v]\mapsto ~^{t}[\sqrt{-1}\sigma(u^{2}+v^{2}),\,\sigma(u^{2}-v^{2}),\,2\sigma uv,\,\sqrt{1-\sigma^2} (u^{2}+v^{2}),\,\sqrt{\sigma^2-1}(u^{2}-v^{2}),\,2\sqrt{\sigma^2-1}uv].$$
\end{small}
To conclude the discussion of this case, we present a figure (Fig. 6.1) to show the distribution of constantly curved holomorphic 2-spheres of degree $2$ in $\mathcal{Q}_4$.
\begin{figure}[H]\label{Figure for deg 2.0}
\centering
\begin{tikzpicture}[scale=0.60]
\draw[->] (0,0) -- (xyz cs:x=5) node[anchor=north west] {$\sigma_{1}$ };;
\draw[->] (0,0) -- (xyz cs:y=5) node[anchor=south west] {$\sigma_{2}$ };;
\draw[->] (0,0) -- (xyz cs:z=5) node[anchor=north west] {$\sigma_{0}$ };
\draw[thin] (0,0)--(xyz cs:z=0,x=0,y=3)--(xyz cs:z=0,x=3,y=3);
\draw[thin] (xyz cs:z=0,x=3,y=3)--(0,0);
\draw[thick] (xyz cs:z=0,x=1.2,y=2)--(xyz cs:z=0,x=0,y=3);
\fill[fill=yellow!80!white] (xyz cs:z=0,x=0,y=3)--(xyz cs:z=0,x=1.2,y=2)--(0,0)--(xyz cs:z=0,x=0,y=3);
\fill[fill=yellow!80!white] (xyz cs:z=0,x=3,y=3)--(xyz cs:z=0,x=1.2,y=2)--(0,0)--(xyz cs:z=0,x=3,y=3);
\fill[fill=gray!40!white] (xyz cs:z=0,x=0,y=3)--(xyz cs:z=0,x=3,y=3)--(xyz cs:z=0,x=1.2,y=2)--(xyz cs:z=0,x=0,y=3);
\draw[thick,green] (xyz cs:z=0,x=3,y=3)--(xyz cs:z=0,x=1.2,y=2);
\draw[ultra thick, red] (xyz cs:z=0,x=1.2,y=2)--(0,0);
\draw (0,0) node[anchor=north] {(0,0,0)};
\draw (xyz cs:z=0,x=0,y=3) node[anchor=east] {(0,0,1)};
\draw (xyz cs:z=0,x=3,y=3) node[anchor=west] {(0,1,1)};
\draw (xyz cs:z=0,x=1.2,y=2) node[anchor=north west] {(1,1,1)};
\draw[thick, gray] (xyz cs:z=0,x=1.2,y=2)--(xyz cs:z=0,x=0,y=3);
\draw[thick, gray] (xyz cs:z=0,x=0,y=3)--(xyz cs:z=0,x=3,y=3);
\fill[fill=red] (0,0) circle (0.08cm);
\fill[fill=blue] (xyz cs:z=0,x=1.2,y=2) circle (0.12cm);
\fill[fill=green] (xyz cs:z=0,x=3,y=3) circle (0.08cm);
\fill[fill=gray!40!white] (xyz cs:z=0,x=0,y=3) circle (0.08cm);
\end{tikzpicture}
\caption{$\Theta_{2,2}=\mathbf{H}_{2,2}/O(3;\mathbb{R}), \,\Theta_{2,3}, \,\Theta_{2,4}, \,\Theta_{2,5}, \mathbf{H}_{2,5}/O(6;\mathbb{R})$}
\end{figure}
The tetrahedron in this figure
represents $\Theta_{2,5}$, i.e., the orbit space of $G(3,6,\mathbb{C})/O(6;\mathbb{R})$ (see Theorem~\ref{moduli of plane}).
$\mathbf{H}_{2,5}/O(6;\mathbb{R})$ is represented by the red line segment; it consists, up to equivalence, of all planes whose intersection with $\mathcal{Q}_4$ contains constantly curved holomorphic 2-spheres of degree $2$.
The blue endpoint of this segment represents $\Theta_{2,2}\cong G(3,3,\mathbb{C})/O(3;\mathbb{R})$; the constantly curved holomorphic 2-spheres lying in this plane are not linearly full in $\mathcal{Q}_{4}$. $\Theta_{2,3}\cong G(3,4,\mathbb{C})/O(4;\mathbb{R})$ is represented by the green line segment; it lies on the boundary of the grey triangle representing $\Theta_{2,4}\cong G(3,5,\mathbb{C})/O(5;\mathbb{R})$.
\subsection{Case of $d=3$.}\label{deg=3}
From Lemma \ref{kill parameter lemma}, it is easy to verify that
\begin{equation}\label{eq-Sform}
\mathscr{S}_{3}/U(1)\times SU(2)=\Big\{[\begin{pmatrix}
0 & 0 & 0 & \frac{3}{2}y \\
0 & 0& -\frac{1}{2}y & -\frac{\sqrt{3}}{2}z \\
0& -\frac{1}{2}y & z & 0 \\
\frac{3}{2}y & -\frac{\sqrt{3}}{2}z & 0 & 0 \\
\end{pmatrix}]\,\Big|\,y,\,z\in[0,+\infty)\Big\},
\end{equation}
where we have used the action of $U(1)$ and $SU(2)$ to modify the values of $y$ and $z$ such that they are nonnegative real numbers.
\begin{theorem}
$\mathbf{H}_{3,3}=\mathbf{H}_{3,4}=\varnothing$, $\mathbf{H}_{3,5}/O(6;\mathbb{R})$ is a singleton set, and
{\bf (1)} $\mathbf{H}_{3,6}/O(7;\mathbb{R})$ is bijective to the closure of a $1/4$-circle, and,
{\bf (2)} $\mathbf{H}_{3,7}/O(8;\mathbb{R})$ is bijective to the closure of a $1/4$-disk.
\end{theorem}
\begin{proof}
\iffalse
Firstly, we compute $\mathscr{S}_{3}$, the set of all quadrics contains the standard rational normal curve $Z_{3}=~^{t}[u^{3},\sqrt{3}u^{2}v,\sqrt{3}uv^{2},v^{3}]$. Let $S$ be a $4\times 4$ complex symmetric matrix, such that $^{t}Z_{3}SZ_{3}=0$, then $S$ must be in the following form
\begin{equation}\label{the set of matrix for degree 3}
S=\begin{pmatrix}
0 & 0 & -x & \frac{3}{2}y \\
0 & \frac{2}{\sqrt{3}}x & -\frac{1}{2}y & -\frac{\sqrt{3}}{2}z \\
-x & -\frac{1}{2}y & z & 0 \\
\frac{3}{2}y & -\frac{\sqrt{3}}{2}z & 0 & 0 \\
\end{pmatrix},
\end{equation}
for some parameters $x,y,z\in \mathbb{C}$. The action of $U(1)\times SU(2)$ on $\mathscr{S}_{3}$ is non-trivial now, and we claim that for each class in $\mathscr{S}_{3}/U(1)\times SU(2)$, we can find a representative $S$, such that $x=0$, and $y,z$ are nonnegative real numbers.
Firstly, by Lemma \ref{kill parameter lemma}, without lose of generality, we may assume $x=0$. By scaling multiplication of $U(1)$, we may assume $y\geq 0$. Choose $g=\diag(e^{\sqrt{-1}\theta},e^{-\sqrt{-1}\theta})\in SU(2)$, then by \eqref{irre repre in matrix form}, it is easy to get $\rho^{3}(g)=\diag(e^{-3\sqrt{-1}\theta},e^{-\sqrt{-1}\theta},e^{\sqrt{-1}\theta},e^{3\sqrt{-1}\theta})$, and
\begin{equation}
\rho^{3}(g)\cdot \begin{pmatrix}
0 & 0 & 0 & \frac{3}{2}y \\
0 & 0& -\frac{1}{2}y & -\frac{\sqrt{3}}{2}z \\
0& -\frac{1}{2}y & z & 0 \\
\frac{3}{2}y & -\frac{\sqrt{3}}{2}z & 0 & 0 \\
\end{pmatrix}\cdot ~^{t}\rho^{3}(g)=\begin{pmatrix}
0 & 0 & 0 & \frac{3}{2}y \\
0 & 0& -\frac{1}{2}y & -\frac{\sqrt{3}}{2}ze^{2\sqrt{-1}\theta} \\
0& -\frac{1}{2}y & ze^{2\sqrt{-1}\theta} & 0 \\
\frac{3}{2}y & -\frac{\sqrt{3}}{2}ze^{2\sqrt{-1}\theta} & 0 & 0 \\
\end{pmatrix};
\end{equation}
hence we may assume $z\geq 0$.
\fi
Take a real symmetric matrix $S\in\mathscr{S}_{3}$ having the form given in \eqref{eq-Sform} and let $\lambda_1, \lambda_2,\lambda_3, \lambda_4$ be the eigenvalues of $S$. Their absolute values are just the singular values of $S$.
We find the characteristic polynomial of $S$ to be
\begin{equation}\label{charac polynomial}
\lambda^{4}+(-z)\lambda^{3}+(-\frac{3}{4}z^{2}-\frac{5}{2}y^{2})\lambda^{2}+(\frac{3}{4}z^{3}+\frac{9}{4}y^{2}z)\lambda+\frac{9}{16}y^{4}=0.
\end{equation}
The eigenvalues of $S$ are
\begin{equation}\label{eigenvalues}
\aligned
&\lambda_{1}=A-\frac{1}{2}[B+\frac{1}{2}C]^{\frac{1}{2}}-D,\quad\quad\quad&
\lambda_{2}&=A+\frac{1}{2}[B+\frac{1}{2}C]^{\frac{1}{2}}-D,\\
& \lambda_{3}=A-\frac{1}{2}[B-\frac{1}{2}C]^{\frac{1}{2}}+D,\quad\quad\quad\quad&
\lambda_{4}&=A+\frac{1}{2}[B-\frac{1}{2}C]^{\frac{1}{2}}+D,\\
\endaligned
\end{equation}
where $A=\frac{1}{4}z,~B=4y^{2}+2z^{2},~C=4z(y^{2}+\frac{z^{2}}{4})^{\frac{1}{2}},~D=\frac{1}{2}(y^{2}+\frac{z^{2}}{4})^{\frac{1}{2}}$. It is routine to check that $\lambda_{4}\geq -\lambda_{1}\geq \lambda_{2}\geq -\lambda_{3}\geq0$.
\iffalse
, and
\begin{align}\label{conditions for identity
\begin{split}
(1)&~B-\frac{1}{2}C\geq 0,~\text{and the identity holds if and only if $y=z=0$},\\
(2)&~-\lambda_{3}\geq 0,~\text{and the identity holds if and only if $y=0$},\\
(3)&~\lambda_{2}\geq -\lambda_{3},~\text{and the identity holds if and only if $z=0$},\\
(4)&~-\lambda_{1}\geq \lambda_{2},~\text{and the identity holds if and only if $y=0$},\\
(5)&~\lambda_{4}\geq -\lambda_{1},~\text{and the identity holds if and only if $z=0$}.
\end{split}
\end{align}
\fi
So the singular values of $S$ (in nondecreasing order) are given by $$\sigma_{1}=-\lambda_{3},~~~\sigma_{2}=\lambda_{2},~~~\sigma_{3}=-\lambda_{1},~~~\sigma_{4}=\lambda_{4}.$$
It follows that
\begin{equation}\label{injective}
z=\sigma_{4}-\sigma_{3}+\sigma_{2}-\sigma_{1},\quad
9y^{4}/16=\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4},
\end{equation}
which implies the map
\begin{align}\label{singular values for deg 3}
\phi:[0,\infty)\times[0,\infty)\rightarrow \mathbb{R}^{4},\quad\quad
(z,y)\mapsto (\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})
\end{align}
is injective. This means $S(y,z)$ can be determined uniquely by its singular values, so that we have $\mathscr{S}_{3}/U(1)\times SU(2)\cong [0,\infty)\times [0,\infty)$.
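The relations \eqref{injective} follow directly from \eqref{charac polynomial} (our reading): $\sigma_{4}-\sigma_{3}+\sigma_{2}-\sigma_{1}=\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}=\mathrm{tr}\,S=z$ is minus the coefficient of $\lambda^{3}$, while $\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}=\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}=\frac{9}{16}y^{4}$ is the constant term.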
Next, we consider the subsets $\mathbf{S}_{3,n}/U(1)\times SU(2)$, for which the following relations between singular values and $y,z$ are needed. They can be obtained from \eqref{eigenvalues} as follows.
\begin{align}\label{singular value distribution for deg 3}
\begin{split}
(1)&~\text{If $yz\neq 0$, then $\sigma_{4}>\sigma_{3}>\sigma_{2}>\sigma_{1}>0$.}\\
(2)&~\text{If $y\neq 0,~z=0$, then $\sigma_{4}=\sigma_{3}>\sigma_{2}=\sigma_{1}>0$.}\\
(3)&~\text{If $y=0,~z\neq 0$, then $\sigma_{4}>\sigma_{3}=\sigma_{2}>\sigma_{1}=0$.}\\
(4)&~\text{If $y=z=0$, then $\sigma_{1}=\cdots=\sigma_{4}=0$.}\\
\end{split}
\end{align}
A consequence is that $\sigma_4=\sigma_3=\sigma_2\neq0$ cannot happen, which implies $\mathbf{S}_{3,3}=\mathbf{S}_{3,4}=\varnothing$, and hence $\mathbf{H}_{3,3}=\mathbf{H}_{3,4}=\varnothing$.
For a matrix in $\mathbf{S}_{3,5}$, there must hold $\sigma_4=\sigma_3=1$. Then it follows from \eqref{singular value distribution for deg 3} that $z=0$ and $9y^4=16\sigma_1^2$. Substituting these and $\lambda=\sigma_1$ into \eqref{charac polynomial}, we solve to see that $\sigma_1=\sigma_2=\frac{1}{3}$ and $y=2/3$. This implies $\mathbf{H}_{3,5}/O(6;\mathbb{R})\cong\mathbf{S}_{3,5}/U(1)\times SU(2)$ is a singleton set.
To determine
\begin{small}
\begin{equation*}
\mathbf{S}_{3,6}/U(1)\times SU(2)\cong \{(y,z)\in\mathbb{R}^{2}_{\geq 0}|\sigma_{4}(y,z)=1\}\subset \mathbf{S}_{3,7}/U(1)\times SU(2)\cong\{(y,z)\in\mathbb{R}^{2}_{\geq 0}|\sigma_{4}(y,z)\leq 1\},
\end{equation*}
\end{small}
we need only note from \eqref{eigenvalues} that $\sigma_4(y,z)=1$ determines a component $\eta$ of the following real algebraic curve of degree $4$
$$9y^4+12z^3+36y^2z-40y^2-12z^2-16z+16=0.$$
The part of the curve $\eta$ lying in the first quadrant is plotted below (Fig. 6.2) in red; it represents the moduli space $\mathbf{H}_{3,6}/O(7;\mathbb{R})$.
In the figure, the blue area bounded by the curve $\eta$ and the coordinate axes gives the moduli space $\mathbf{H}_{3,7}/O(8;\mathbb{R})$.
\end{proof}
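As a check of the two endpoints marked in Fig.~6.2 (our computation): at $y=0$ the quartic reduces to $12z^3-12z^2-16z+16=0$, which has $z=1$ as a root, while at $z=0$ it reduces to $9y^4-40y^2+16=0$, which is solved by $y=2/3$.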
\begin{figure}[h]\label{Figure for deg 3}
\centering
\begin{tikzpicture}[scale=0.60]
\begin{polaraxis}
\fill[fill=blue!70!white,opacity=0.5] (canvas polar cs:angle=0, radius=3.12cm)--(canvas polar cs:angle=90, radius=2.10cm)--(canvas polar cs:angle=0, radius=0cm)--cycle;
\addplot[fill=blue!70!white,opacity=0.5,domain=0:90,samples=360,smooth] (x,{sqrt(1+tan(x)^2)/(1/4+1/2*sqrt(4*tan(x)^2+2-2*sqrt(tan(x)^2+1/4))+1/2*sqrt(tan(x)^2+1/4))});
\addplot[red,domain=0:90,samples=360,smooth,ultra thick] (x,{sqrt(1+tan(x)^2)/(1/4+1/2*sqrt(4*tan(x)^2+2-2*sqrt(tan(x)^2+1/4))+1/2*sqrt(tan(x)^2+1/4))});
\fill[fill=red] (canvas polar cs:angle=0, radius=3.1cm) circle (0.08cm) node[anchor=north east] {(1,0)};
\fill[fill=green] (canvas polar cs:angle=90, radius=2.09cm) circle (0.08cm) node[anchor=south west] {(0,\,$2/3$)};
\end{polaraxis}
\end{tikzpicture}
\caption{$z-y$ plane}
\end{figure}
To conclude, we give an explicit example of constantly curved holomorphic 2-sphere of degree $3$ in $\mathcal{Q}_{6}$ to illustrate the constructive procedure given in Remark~\ref{construction}.
\begin{example}
For convenience, we denote the matrix in \eqref{eq-Sform} by $S(\frac{1}{2},\frac{1}{3})$ with parameters $z=\frac{1}{2},~y=\frac{1}{3}$. Now
\begin{equation*}
S(\frac{1}{2},\frac{1}{3})=\begin{pmatrix}
0 & 0 & 0 & \frac{1}{2} \\
0& 0 & -\frac{1}{6} & -\frac{\sqrt{3}}{4} \\
0 & -\frac{1}{6} & \frac{1}{2} & 0 \\
\frac{1}{2} & -\frac{\sqrt{3}}{4} & 0 & 0 \\
\end{pmatrix}.
\end{equation*}
Using \eqref{eigenvalues} and \eqref{singular values for deg 3}, we get the singular values of $S(\frac{1}{2},\frac{1}{3})$ to be
\begin{equation*}
\sigma_{1}=\frac{\sqrt{19}}{12}-\frac{1}{3},\quad\sigma_{2}=\frac{1}{2},\quad
\sigma_{3}=\frac{2}{3},\quad\sigma_{4}=\frac{\sqrt{19}}{12}+\frac{1}{3}.\\
\end{equation*}
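These values are consistent with \eqref{injective} (our check): $\sigma_{4}-\sigma_{3}+\sigma_{2}-\sigma_{1}=\frac{1}{3}-\frac{2}{3}+\frac{1}{2}+\frac{1}{3}=\frac{1}{2}=z$, and $\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}=\big(\frac{19}{144}-\frac{1}{9}\big)\cdot\frac{1}{2}\cdot\frac{2}{3}=\frac{1}{144}=\frac{9}{16}y^{4}$.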
Choose $a_j\in[0,\frac{\pi}{4}]$ such that $\cos2a_{j}=\sigma_{j},~1\leq j\leq 4$, and define $V_{\vec{\sigma}}$
as in \eqref{jm alpha0}. For the real symmetric matrix $S(\frac{1}{2},\frac{1}{3})$, a set of eigenvectors $v_1,v_2,v_3,v_4$ corresponding to the eigenvalues $-\sigma_{1},\,\sigma_2,\,-\sigma_3,\,\sigma_{4}$, respectively, is
\begin{equation*}
\aligned
&^{t}v_{1}=\begin{pmatrix}
1 &
\frac{4\sqrt{57}}{27}+\frac{\sqrt{3}}{54} &
-\frac{\sqrt{57}}{27}+\frac{10\sqrt{3}}{27} &
-\frac{\sqrt{19}}{6}+\frac{2}{3}
\end{pmatrix},~~^{t}v_{2}=\begin{pmatrix}
1 &
0 &
-\frac{3\sqrt{3}}{2} &
1
\end{pmatrix},\\
&^{t}v_{3}=\begin{pmatrix}
1 &
-\frac{14}{27}\sqrt{3} &
-\frac{2\sqrt{3}}{27} &
-\frac{4}{3}
\end{pmatrix},~~^{t}v_{4}=\begin{pmatrix}
1 &
-\frac{4\sqrt{57}}{27}+\frac{\sqrt{3}}{54} &
\frac{\sqrt{57}}{27}+\frac{10\sqrt{3}}{27} &
\frac{\sqrt{19}}{6}+\frac{2}{3}
\end{pmatrix}.
\endaligned
\end{equation*}
Set $U\in U(4)$ to be a unitary matrix whose rows from top to bottom are
$$\sqrt{-1}\;^{t}v_{1}/|v_{1}|,\quad
^{t}v_{2}/|v_{2}|,\quad
\sqrt{-1}\;^{t}v_{3}/|v_{3}|,\quad
^{t}v_{4}/|v_{4}|.$$
It satisfies $^{t}U\diag\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}U=S(\frac{1}{2},\frac{1}{3}),$ so that $V_{\vec{\sigma}}U Z_{3}$ is a holomorphic $2$-sphere of degree $3$ and constant curvature in $\mathcal{Q}_{6}$.
\end{example}
\section{More geometry of minimal 2-spheres constructed by the SVD method}\label{sec-prob}
The main objective of this section is to discuss the two questions of Peng, Xu and Wang, raised in~\cite[Section 3,~p.~459]{PengWangXu} and mentioned in the introduction, namely, to construct nonhomogeneous minimal as well as totally real minimal 2-spheres of constant curvature in $\mathcal{Q}_{n-1}$ that are also minimal in $\mathbb{C}P^{n}$.
As seen in the preceding sections, all constantly curved 2-spheres minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^{n}$ can be constructed by the SVD method, for which the norm of the second fundamental form $||B||$ is computed in this section. We then show that $||B||$ is not constant in general, so that the generic 2-sphere constructed this way is not homogeneous in $\mathcal{Q}_{n-1}$.
Recall the decomposition of $Sym_{d+1}(\mathbb{C})$ into $SU(2)$-invariant subspaces (see the discussion around \eqref{general projection} and \eqref{important subsapce of symm matrices}), and the definition of the set $\mathbf{H}_{d,n,p}$ in \eqref{rnc in hyperquadric}. The following is essentially Proposition \ref{quadric contains rnc and decomposition} with the promised proof.
\begin{prop}\label{summary theorem}
Let $d,n,p$ be three integers satisfying $1\leq d\leq n,~~0\leq p\leq [\frac{d}{2}]$. Consider a $2$-sphere given by $EZ_{d,p}$, where $E\in M(n+1,d+1)$ satisfies $E^{*}E=Id$. Let $S:=~^{t}\!EE$ and let the entries of $S$ be $s_{ij}$, where $0\leq i,j\leq d$, and $s_{ij}=s_{ji}$. Then the following are equivalent.
{\bf (1)} The $2$-sphere $EZ_{d,p}$ lies in the hyperquadric $\mathcal{Q}_{n-1}$.
{\bf (2)} $^{t}Z_{d,0}\, S\, Z_{d,l}=0,~0\leq l\leq 2p+1$.
{\bf (3)} $^{t}Z_{d,k}\, S\, Z_{d,l}=0,~0\leq k+l\leq 2p+1$.
{\bf (4)} The $2$-sphere $EZ_{d,k}$ lies in the hyperquadric $\mathcal{Q}_{n-1}$ for all $0\leq k\leq p$.
{\bf (5)} $\sum_{i,j=0}^{d}s_{ij}~j^{2k}\sqrt{\tbinom{d}{i}\tbinom{d}{j}}~z^{i+j-2k}=0,~0\leq k\leq p$.
{\bf (6)} $S\in \ker \Pi_{0}\cap\cdots\cap\ker \Pi_{p}$.
\end{prop}
\begin{proof} (sketch)
Note that $(3)\Rightarrow (4)\Rightarrow (1)$ is clear. So, it suffices to show $(1)\Rightarrow (2)\Rightarrow(3),~(2)\Rightarrow (5)\Rightarrow (1)$, and $(5)\Leftrightarrow (6)$.
{\bf (1) $\Rightarrow$ (2)} Taking $\frac{\partial}{\partial z}$ on both sides of $^{t}Z_{d,p}\, S\,Z_{d,p}=0$, by \eqref{recursive formulas}, we get
\begin{equation}\label{induction step}
^{t}Z_{d,p}\, S\, Z_{d,p+1}=0.
\end{equation}
Taking $\frac{\partial}{\partial \bar{z}}$ on both sides of \eqref{induction step}, by \eqref{facts of veronese sequence} we see $^{t}Z_{d,p-1}\, S\, Z_{d,p+1}=0$.
Then taking $\frac{\partial}{\partial z}$ again yields $^{t}Z_{d,p-1}\, S\cdot Z_{d,p+2}=0$.
Continuing the process $p$ times, we arrive at
\begin{equation}\label{final get}
^{t}Z_{d,0}\, S\, Z_{d,2p+1}=0.
\end{equation}
Then taking $\frac{\partial}{\partial \bar{z}}$ on \eqref{final get} repeatedly, we get (2) by \eqref{facts of veronese sequence}.
{\bf (2) $\Rightarrow$ (3)} Assuming $(2)$, then taking $\frac{\partial}{\partial z}$ on both sides, we get
$^{t}Z_{d,1}\, S\, Z_{d,l}=0,$
where $0\leq l\leq 2p$. Then by induction, we obtain (3).
{\bf (2) $\Rightarrow $ (5)} From $^{t}Z_{d,0}\, S\, Z_{d,l}=0,~0\leq l\leq 2p+1$, we deduce
\begin{equation*}
\sum_{i,j=0}^{d}s_{ij}~j(j-1)\cdots(j-l+1)\sqrt{\tbinom{d}{i}\tbinom{d}{j}}~z^{i+j-l}=0,~~~0\leq l\leq 2p+1.
\end{equation*}
Then expand
$j(j-1)\cdots(j-l+1)$
and note that the coefficients before $j^{k},~0\leq k\leq l$, are all non-zero.
{\bf (5) $\Rightarrow$ (1)} We use induction to prove it. The claim holds for $p=0$. Assume it holds for $0\leq k\leq p-1$. Then, by the induction assumption, $(5)$ implies that
$^{t}Z_{d,0}\, S \, Z_{d,l}=0,$
for $0\leq l\leq 2p$. By an argument similar to that in $(2)\Rightarrow(3)$, we get $^{t}Z_{d,p}\, S \, Z_{d,p}=0$, which is equivalent to $(1)$.
{\bf (5) $\Leftrightarrow$ (6)} Consider the induced action of $\mathfrak{su}(2)$ on $Sym_{d+1}(\mathbb{C})$ and extend it to the action of $\mathfrak{sl}(2;\mathbb{C})$. For $t:=i+j$ fixed, where $0\leq t\leq 2d$, i.e., considering the entries on each anti-diagonal, we claim that the system of linear equations in (5) is linearly independent. We then count the multiplicity of the eigenvalues of $J_{3}:=\diag(-1,1)/2\in \mathfrak{sl}(2;\mathbb{C})$.
To prove the claim,
if $t=2m$, set $a_{ij}=s_{ij}\sqrt{\tbinom{d}{i}\tbinom{d}{j}}$. Consider the following system of linear equations
$$
2\sum_{0\leq i\leq m-1}a_{i\,2m-i}+a_{m\, m}=0,\quad
2\sum_{0\leq i\leq m-1}a_{i\, 2m-i}~[i^{2k}+(2m-i)^{2k}]+a_{m\, m}~m^{2k}=0,
$$
for $1\leq k\leq m$. We need only show that the associated $(m+1)\times(m+1)$ coefficient matrix of the unknowns $a_{ij}$ is nonsingular, which follows from the observation that it is a product of a lower triangular and a Vandermonde matrix.
If $t=2m+1$, an argument similar to that in the preceding case takes care of the following system of linear equations,
$$
\sum_{0\leq i\leq m}a_{i\, 2m+1-i}=0,\quad \quad
\sum_{0\leq i\leq m}a_{i\, 2m+1-i}\,[i^{2k}+(2m+1-i)^{2k}]=0,~1\leq k\leq m.
$$
\end{proof}
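As an illustration of the independence claim in the simplest even case (our example): for $t=2$, i.e., $m=1$, the system reads $2a_{0,2}+a_{1,1}=0$ and $2a_{0,2}\,(0^{2}+2^{2})+a_{1,1}\cdot 1^{2}=0$, whose coefficient matrix $\begin{pmatrix}2&1\\8&1\end{pmatrix}$ is clearly nonsingular.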
\iffalse
Now we have a descending chain (see \eqref{quad contain rnc and singular value less than 1} for the definition of $\mathbf{S}_{d,p}$)
\begin{equation}
\mathbf{S}_{d}=\mathbf{S}_{d,0}\supsetneq \mathbf{S}_{d,1}\supsetneq\cdots\supsetneq\mathbf{S}_{d,[\frac{d}{2}]-1}\supsetneq\mathbf{S}_{d,[\frac{d}{2}]}=\{0\},
\end{equation}
and each one is a proper subset of the previous one. Recall the map $\varpi_{d}$ in~\eqref{omega} for computing the singular values of symmetric matrices. Following the same proof of Theorem \ref{classification of cdn}, we get
\begin{theorem}\label{classification of Hdnp}
Let $d,n,p$ be three integers satisfying $0\leq d\leq n,~0\leq p\leq [\frac{d}{2}]$. Then we have
\begin{equation}
\mathbf{H}_{d,n,p}=\{AV_{\vec{a}}UZ_{d}|~A\in O(n+1;\mathbb{R}),~\vec{a}\in \varsigma^{-1}_{d,n}(\varpi_{d}(\mathbf{S}_{d,p})),~U\in U(d+1),~^{t}UQ_{\vec{a}}U\in\mathbf{S}_{d,p}\}.
\end{equation}
\end{theorem}
\subsection{Equivalence problem.}
As in the case for $p=0$ considered in Section~\ref{4.2}, we say two curves $C_{1}$ and $C_{2}$ in $\mathbf{H}_{d,n,p}$ are {\em equivalent}, if there is an $A\in O(n+1;\mathbb{R})$ such that
\begin{equation}
Im~A\cdot C_{1}= Im~C_{2},
\end{equation}
in which case we denote $C_{1}\sim C_{2}$.
Denote the restriction of $\varpi_{d}$ on $\mathbf{S}_{d,p}$ by $\varpi_{d,p}$. Following the same proof of Theorem \ref{simple classification}, we deduce
\begin{theorem}\label{simple classi for minimal}
Let $d,n,p$ be three integers satisfying $0\leq d\leq n,~0\leq p\leq [\frac{d}{2}]$.
{\bf (1)} If $d\leq n<2d+1$, then $\bigsqcup_{\vec{\sigma}\in Im~\varsigma_{d,n}}\varpi_{d,p}^{-1}(\vec{\sigma})/U(1)\times SU(2)\cong\mathbf{H}_{d,n,p}/\sim$.
{\bf (2)} If $2d+1\leq n$, then $\mathbf{S}_{d,p}/U(1)\times SU(2)\cong\mathbf{H}_{d,n,p}/\sim$.
\end{theorem}
\fi
For a minimal 2-sphere $EZ_{d,p}\in{\mathbf H}_{d,n,p}$,
we have
$ds^{2}=\lambda^{2}dzd\bar{z}$ (see Subsections~\ref{2.2} and~\ref{2.4}),
where $\lambda=\sqrt{d+2p(d-p)}/(1+|z|^{2})$, and
\begin{equation*}
X=\frac{1}{\lambda}\frac{EZ_{d,p+1}}{|Z_{d,p}|},~~Y=-\frac{|Z_{d,p}|}{\lambda |Z_{d,p-1}|^{2}}EZ_{d,p-1}.
\end{equation*}
Recall from Subsection~\ref{2.2} that the Gaussian curvature $K_{d,p}$ is $4/(d+2p(d-p))$ and the K\"ahler angle $\theta_{d,p} \in [0,\pi]$ satisfies $\cos\theta_{d,p}=(d-2p)/(2p(d-p)+d)$. From item (5) in Proposition \ref{summary theorem}, we know $Y$ lies in the hyperquadric $\mathcal{Q}_{n-1}$, so $\tau_{Y}=|^{t}Y\,Y|=0$. Since this 2-sphere is minimal in both $\mathcal{Q}_{n-1}$ and $\mathbb{C}P^{n}$, Theorem~\ref{minimal in cpn and qn-1} implies $\tau_{XY}=|^{t}\!X\,Y|=0$. It follows from \eqref{formula of norm of second fundamental form} that the norm of the second fundamental form satisfies
\begin{equation}
||B||^{2}=-2K_{d,p}+2+6\cos^{2}\theta_{d,p}-4\tau^{2}_{X}.
\end{equation}
Since for $Z_{d,p}$, the Gaussian curvature $K_{d,p}$ and the K\"ahler angle $\theta_{d,p}$ are constant, $||B||^{2}$ is constant if and only if $\tau_{X}$ is constant. By a direct computation,
\begin{equation*}
\tau_{X}=|^{t}\!X\,X|=\frac{(d-p)!}{(d+2p(d-p))\cdot d!p!}\frac{|^{t}Z_{d,p+1}SZ_{d,p+1}|}{(1+|z|^{2})^{d-2p-2}},
\end{equation*}
where $S=~^{t}\!EE$. Following the argument of Proposition \ref{summary theorem} and by \eqref{local sections equivalent}, we derive
\begin{align}\label{final computations}
\begin{split}
|^{t}Z_{d,p+1}SZ_{d,p+1}|&=|^{t}Z_{d,0}S\frac{\partial^{2p+2}}{\partial z^{2p+2}}Z_{d,0}|,\\
&=|\sum_{i,j=0}^{d}s_{ij}~\sqrt{\tbinom{d}{i}\tbinom{d}{j}}z^{i+j-(2p+2)}j(j-1)\cdots(j-2p-1)|.
\end{split}
\end{align}
\begin{theorem}\label{non homogenous thm}
Suppose $EZ_{d,p}\in\mathbf{H}_{d,n,p}$ is a linearly full minimal $2$-sphere. If we set $S:=~^{t}\!EE\in \mathbf{S}_{d,n,p}$, then we have the following.
{\bf (1)} If $1\leq d\leq 2$, then $||B||^{2}$ is constant.
{\bf (2)} If $p=[\frac{d}{2}]$, then $S=0$, $||B||^{2}$ is constant, $2d+1= n$, and
$\mathbf{H}_{d,2d+1,[\frac{d}{2}]}/O(2d+2;\mathbb{R})$
is a singleton set, given by $[V_{\vec{0}}Z_{d,[\frac{d}{2}]}]$, where $\vec{0}\in\mathbb{R}^{d+1}$ is the zero vector.
{\bf (3)} If $d$ is even and $p=[\frac{d}{2}]-1$, then $||B||^{2}$ is constant.
{\bf (4)} If $d\geq3$, $0\leq p<[\frac{d}{2}]$, and $p\neq [\frac{d}{2}]-1$ when $d$ is even, then $||B||^{2}$ is constant, if and only if, $EZ_{d,p+1}\in \mathbf{H}_{d, n, p+1}$.
\end{theorem}
\begin{proof}
{\bf (1)} By a direct computation using \eqref{final computations}.
{\bf (2)} If $p=[\frac{d}{2}]$, then by taking conjugation, it follows from Proposition~\ref{summary theorem} that $EZ_{d,q}$ lies in $\mathbf{H}_{d,n,q}$ for any $0\leq q\leq d$. We conclude $S=0$, which implies all its singular values vanish. Combining this with the linearly full assumption, we have $2d+1= n$ by Theorem~\ref{coro-dim}. Substituting $S=0$ into \eqref{final computations}, we have $|\tau_X|=0$ and hence $||B||^2$ is a constant. That the moduli space $\mathbf{H}_{d,2d+1,[\frac{d}{2}]}/O(2d+2;\mathbb{R})$ is a singleton set follows from $\mathbf{S}_{d,2d+1,[\frac{d}{2}]}=\{[0]\}$ by using Theorem \ref{simple classification}.
{\bf (3)} If $d$ is even, and $p=\frac{d}{2}-1$, then the only possible nonvanishing entries of $S$ are on the main anti-diagonal, i.e. $i+j=d$. So, $(1+|z|^{2})^{d-2p-2}=1$ and \eqref{final computations} is constant, which implies $|\tau_{X}|$ is constant and then $||B||^2$ is constant.
{\bf (4)} In this case, we need only show that $\tau_{X}$ is constant if and only if it is zero. For convenience, denote by $g(z)$ the polynomial $^{t}Z_{d,0}\,S\,\frac{\partial^{2p+2}}{\partial z^{2p+2}}Z_{d,0}$ in \eqref{final computations}. Note that $\tau_{X}$ is zero if and only if $g(z)=0$.
If $\tau_{X}$ is constant, then we have
\begin{equation*}
|g(z)|^{2}/(1+|z|^{2})^{2d-4p-4}=c,
\end{equation*}
for some constant $c$. So $c(1+|z|^{2})^{2d-4p-4}=g(z)\cdot \overline{g(z)}$. If $d\geq 3,~0\leq p<[\frac{d}{2}]$, and $p\neq \frac{d}{2}-1$ when $d$ is even, then $2d-4p-4>0$. If $c\neq 0$, then
$|g(z)|^{2}$ is divisible by $1+z\bar{z}$. It is well known that $\mathbb{C}[z,\bar{z}]$ is a unique factorization domain.
Since $1+z\bar{z}$ is irreducible in $\mathbb{C}[z,\bar{z}]$, by unique factorization, either $g(z)$ or $\overline{g(z)}$ is divisible by $1+|z|^{2}$, say, $g(z)$. This leads to a contradiction by counting the degree of $\bar{z}$ in $1+z\bar{z}$ and in $g(z)$. So $c=0$, which implies $|\tau_X|^2=0$.
\end{proof}
Now we can answer the question raised by Peng, Wang and Xu, stated as {\bf Problem 1} in the introduction, as follows.
\begin{coro}
Let $d,n,p$ be three integers satisfying $3\leq d\leq n,~0\leq p<[\frac{d}{2}]$, and $p\neq [\frac{d}{2}]-1$ when $d$ is even. Then the generic minimal $2$-sphere $EZ_{d,p}$ in $\mathbf{H}_{d,n,p}$ is not homogeneous.
\end{coro}
By item (4) of the preceding theorem, the corollary follows from the fact that $\mathbf{S}_{d,n,p+1}=\mathbf{S}_{d,n,p}\cap\ker\Pi_{p+1}$ is a proper subset of $\mathbf{S}_{d,n, p}$ with codimension given by
\begin{equation}\label{difference}
\dim\mathbf{S}_{d,n,p}-\dim \mathbf{S}_{d,n,p+1}=4d-8p-6.
\end{equation}
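The count \eqref{difference} reflects, on our reading of the decomposition \eqref{important subsapce of symm matrices}, that $\ker\Pi_{p+1}$ cuts out the summand $\mathcal{V}^{d-2p-2}$, of complex dimension $2(d-2p-2)+1=2d-4p-3$, i.e., of real dimension $4d-8p-6$.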
Note that the constantly curved holomorphic 2-spheres of degree $3$ constructed in Section \ref{deg=3} are not homogeneous, if the parameters $(y,z)$ corresponding to them are not equal to $(0,0)$.
More generally, observe that when $p=0$, item (4) of Theorem~\ref{non homogenous thm} says that the holomorphic 2-sphere of degree $d$ of constant curvature assumes constant $||B||^2$ if and only if the tangent developable surface ${\mathcal D}$ of the Veronese curve lies in the quadric defined by the symmetric matrix $S$. In Eisenbud~\cite{GreenConjecture}, regarding Green's conjecture, it is pointed out that if we let $z_0,\cdots,z_d$ be the homogeneous coordinates of ${\mathbb C}P^d$, $w_i=z_i/\binom{d}{i},0\leq i\leq d,$ and consider the $2\times d$ matrix
$$
\begin{pmatrix}w_0&w_1&\cdots&w_{d-1}\\w_1&w_2&\cdots&w_d\end{pmatrix},
$$
for which we let $\Delta_{a,b}, 0\leq a,b\leq d-3,$ be the quadric polynomials obtained by taking the determinant of the minor associated with the columns $a$ and $b$, then the quadratic equations of ${\mathcal D}$ are given by
$$
\Gamma_{a,b}:=\Delta_{a+2,b}-2\Delta_{a+1,b+1}+\Delta_{a,b+2}=0,\quad 0\leq a,b\leq d-3.
$$
Moreover, $\Gamma_{a,b}$ generate the ideal of ${\mathcal D}$ when $d\geq 6$.
The linear span of these $\Gamma_{a,b}$, of real dimension $2\binom{d-2}{2}=d^2-5d+6$,
constitutes the space of symmetric matrices whose quadrics contain ${\mathcal D}$, in agreement with $\dim\mathbf{S}_{d,n,1}$ and with the codimension given in~\eqref{difference} when $n=2d+1$ (note that $\dim \mathbf{S}_{d,2d+1,0}=d^2-d$).
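Indeed (our arithmetic): taking $p=0$ and $n=2d+1$ in \eqref{difference} gives $\dim\mathbf{S}_{d,2d+1,1}=(d^2-d)-(4d-6)=d^2-5d+6=(d-2)(d-3)=2\tbinom{d-2}{2}$.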
Meanwhile, we present the following corollary of Theorem~\ref{non homogenous thm}, which relates to {\bf Problem 2} mentioned in the introduction. Note that the minimal surface is totally real if and only if its K\"ahler angle is $\pi/2$. For the Veronese maps, the only totally real case is $Z_{d,\frac{d}{2}}$, where $d$ is even; see the formula \eqref{eq-kahler} for $\cos\theta_{d,p}$.
\begin{coro}\label{cor-uniqueness}
A totally real linearly full minimal $2$-sphere of constant curvature $8/(d^2+2d)$ in $\mathcal{Q}_{n-1}$ that is also minimal in $\mathbb{C}P^{n}$ exists only when $d$ is even and $n=2d+1$. Moreover, it is unique up to isometric transformations of $\mathcal{Q}_{n-1}$.
\end{coro}
To conclude the paper, we propose two interesting questions for further study:
\textbf{Question 1}: In view of Remark \ref{d,d+2rmk} and Proposition \ref{Pr}, are $\mathbf{H}_{d,d}$ and $\mathbf{H}_{d,d+1}$ always empty when $d$ is odd? On the other hand, when $d$ is even, we know $\mathbf{H}_{d,d}\neq \varnothing$ in general. Is $\mathbf{H}_{d,d}=\mathbf{H}_{d,d+1}$ always true in the case of even $d$?
\\
\textbf{Question 2}:
Our work requires that the $2$-spheres be minimal in both the hyperquadric and ${\mathbb C}P^{n}$, while in \cite[p. 1022, (2)]{JiaoLiS^2inQn}, Jiao and Li found a constantly curved $2$-sphere which is only minimal in the hyperquadric. How to construct such $2$-spheres, minimal only in the hyperquadric, in a systematic way?
\\
\textbf{Acknowledgement}:
The second author is supported by NSFC No. 11601513, the Fundamental Research Funds for the Central Universities, and the CSC scholarship while visiting WUSTL. The third author is supported by NSFC No. 11871450 and acknowledges the support from the UCAS joint PhD Training Program. He would also like to thank Professor Xiaoxiang Jiao for guidance and suggestions. They both would like to thank the first author and the Department of Mathematics at WUSTL for the warm hospitality during their visit.
Recent remarkable progress in understanding the duality between
planar ${\cal N}$=4 super Yang-Mills theory and superstring theory in $AdS_5 \times S^5$\ based on integrability
opens up the possibility of computing
various observables exactly in the 't Hooft coupling $\lambda$ or in the string tension ${\sqrt{\lambda}\ \over 2\pi}$.
Most of the progress was achieved for the scaling dimensions $\Delta_i(\lambda)$ of primary operators ${\cal O}_i$ which determine the 2-point functions $\langle {\cal O}(x^{(1)}) {\cal O}(x^{(2)}) \rangle $
(for reviews see \cite{beis}). The next step is to understand 3-point functions
$\langle {\cal O}_i (x^{(1)}) {\cal O}_j(x^{(2)}) {\cal O}_k (x^{(3)}) \rangle $ which, in addition to $\Delta_i$, are
determined by non-trivial functions $C_{ijk}(\lambda)$.
Higher-point correlation functions, though in principle dictated by the OPE, are much more complicated. For example, conformal invariance implies
that a 4-point
correlator $\langle {\cal O}_1 (x^{(1)}) ... {\cal O}_4(x^{(4)}) \rangle $ should, in general, contain
a non-trivial function of the two conformal cross-ratios
${\rm u}_1 = { |x^{(12)}|^2 |x^{(34)}|^2 \over |x^{(13)}|^2 |x^{(24)}|^2 }, \ \
{\rm u}_2 = { |x^{(12)}|^2 |x^{(34)}|^2 \over |x^{(14)}|^2 |x^{(23)}|^2 }$\ \
($x^{(ij)}_m \equiv x^{(i)}_m - x^{(j)}_m$, $m=0,1,2,3$) and $\lambda$.
Correlators of primary operators are natural observables in CFT. In addition, in a gauge
theory, one may consider also expectation values of
Wilson loops. An important class of these, related to gluon scattering amplitudes
(see \cite{AM1,koo,br} and \cite{refs} for reviews), consists of expectation values of Wilson loops in
the fundamental representation
$\langle W_n \rangle$ corresponding to polygons
built out of null lines with $n$ cusps (located at
$\{x^{(i)}_m \}, \ i=1,..., n$ with $|x^{(i,i+1)}|^2=0, \ x^{(n+1)}\equiv x^{(1)} $).
They were previously studied at weak \cite{korkor} and at strong \cite{Martin,AM1} coupling.
Conformal invariance (broken by the presence of the cusps in a controllable fashion)
implies \cite{Drum} that for $n=4,5$ these expectation values are
fixed functions of $x^{(i)}$ (depending on a few $\lambda$-dependent coefficients, in particular,
on the cusp anomalous dimension \cite{polya,kor})
while for $n > 5$ they should depend on $3n-15$ cross-ratios of the cusp coordinates.
The first non-trivial example is $\langle W_6 \rangle$ which
is expressed in terms of a function of $\lambda$ and three cross-ratios.
For recent
progress in computing this function at weak and at strong coupling see
\cite{xec,Bubble,xecc}.
As suggested in \cite{ak}, there is a close relation
between certain correlators
of local (BPS) operators and expectation values of cusped Wilson loops:
a correlator
$K_n=\langle \hat {\cal O} (x^{(1)}) ... \hat {\cal O}(x^{(n)}) \rangle $ of primary operators
(e.g., the highest weight part of 20' scalar)
located at positions of the null cusps is proportional to
the expectation value of
the null polygon Wilson loop in the adjoint representation (or to
$\langle W_n \rangle^2$ in
the planar approximation we will consider here). More precisely,
$\lim_{|x^{(i,i+1)}| \to 0} K_n/K_{n0} =\langle W_n \rangle^2 $,
where $K_{n0} \sim \prod^{n}_{i=1} |x^{(i,i+1)}|^{-2} + ...$
is the most singular term in the tree-level ($\lambda=0$) part of $K_n$.
In this paper we study a new observable that involves both a local operator
and a cusped
Wilson loop, i.e. $\langle W_n {\cal O}(a) \rangle$ ($a$ will denote the position of
the local operator).\footnote{For
BPS (circular) Wilson loops and their generalizations
such correlators were studied previously in \cite{cor,za02,zp,gom,alt}, see also
\cite{miwa}. The null polygon loop is, in a sense, a natural generalization
of a circular loop
as it is ``locally-BPS''. Furthermore, this class of polygons is closed under conformal transformations.}
One motivation is that such correlators
may lead to new simple examples where one may be able to
interpolate from weak to strong coupling. In particular, in the first non-trivial case $n=4$
such a correlator happens to be a function of just
{\it one} non-trivial conformal ratio formed from the coordinates of the cusps $x^{(i)}$ and the operator $a_m$
(for $n >4$ it will be a function of $3n-11$ conformal ratios).
For comparison,
in the case of a circular Wilson loop
(which, in fact,
may be viewed as an $n \to \infty$ limit of a regular null polygon)
the dependence of the correlator $\langle W_\infty {\cal O}(a) \rangle$ on the
location of the operator $a$
is completely fixed \cite{cor,gom,alt} by conformal invariance.
Determining such a function (both at weak and at strong coupling)
should be easier than the function of the two conformal ratios in
the 4-point correlator case or the function of the three conformal ratios in the 6-cusp Wilson
loop case. We shall demonstrate this below by explicitly computing the
leading contributions
to $\langle W_4 {\cal O}(a) \rangle$ both at strong and at weak coupling (for ${\cal O}$
being the dilaton or a chiral primary operator).
Another motivation to study such ``mixed'' correlators is that they
may shed more light on the
relation \cite{ak} between the correlators of null-separated operators and
cusped Wilson loops
mentioned above. That relation was verified at weak coupling, but checking
it explicitly at strong coupling
remains an important open problem. For example, one may start with
an $(n+1)$-point correlator and consider a limit in which only $n$ of
the locations of the operators become null-separated and attempt
to relate this limit to
$\langle W_n {\cal O}(a) \rangle$ with $a= x^{(n+1)}$.
More explicitly, since the derivative of a correlator
over the gauge coupling brings down
a power of the super YM action which is the same as the
integrated dilaton operator,
the relation $\langle \hat {\cal O} (x^{(1)}) ... \hat {\cal O}(x^{(n)}) \rangle \sim \langle W_n \rangle^2$
implies that
\begin{equation}
\langle \hat {\cal O} (x^{(1)}) ... \hat {\cal O}(x^{(n)}) \int d^4 a\ {\cal O}_{dil}(a)
\rangle \sim 2 \langle W_n \rangle \langle W_n \int d^4 a\ {\cal O}_{dil}(a) \rangle\ . \label{ptr}\end{equation}
Assuming that
the integral over $a$ can be omitted and, furthermore, that the dilaton operator
can be replaced by a generic local operator, one may conjecture that
$
\langle \hat {\cal O} (x^{(1)}) ... \hat {\cal O}(x^{(n)}) \ {\cal O}(a)
\rangle \ \sim \ \langle W_n \rangle \langle W_n \ {\cal O}(a) \rangle $,
i.e. that
\begin{equation} \lim_{|x^{(i)} - x^{(i+1)}| \to 0}
{ \langle \hat {\cal O} (x^{(1)}) ... \hat {\cal O}(x^{(n)}) \ {\cal O}(a)
\rangle \over \langle \hat {\cal O} (x^{(1)}) ... \hat {\cal O}(x^{(n)})
\rangle} \ \ \sim \ \ { \langle W_n \ {\cal O}(a) \rangle \over \langle W_n \rangle } \ . \label{121}
\end{equation}
Finally, it would be very
interesting to
understand what is the counterpart of $\langle W_n {\cal O}(a) \rangle$
on the ``T-dual'' \cite{AM1}
scattering-amplitude side, e.g., if there is any relation to
form factors given by $ \langle A(x^{(1)} ) ...A(x^{(n)} ) {\cal O}(a) \rangle$ where $A$'s stand for local fields like vector
potential, cf. \cite{amf}.\footnote{The two are obviously related when
the operator is at zero momentum (i.e. integrated over $a$), but in general
one expects a complicated non-local relation involving a sum over contributions of
different types of operators.}
\
Let us briefly review the contents of this paper.
In Section 2, we shall use general symmetry considerations
to determine the structure of the correlator \rf{1.1} of a null $n$-polygon Wilson loop and a
conformal primary operator. We shall explicitly discuss the case of $n=4$ where
the result will be expressed in terms of a
function $F$ of only one non-trivial conformal ratio \rf{ges} depending on the locations of the operator and the cusps.
Taking the $|a|\to \infty$ limit then determines the corresponding OPE coefficient \cite{cor}.
In Section 3 we explicitly compute the $n=4$ correlator
at strong coupling using semiclassical string theory methods \cite{cor,za02}, i.e.
evaluating the vertex operator corresponding to ${\cal O}$ on the string surface \cite{Martin,AM1}
ending on the null quadrangle.
We shall explicitly determine the strong-coupling form of the function $F$
for the two cases: when ${\cal O}$ is the dilaton
or is a chiral primary operator.
We shall also discuss the generalization to
the case of non-zero R-charge or angular momentum in $S^5$.
In Section 4 we note that since the string world surface ending on a null quadrangle is related \cite{KRTT}
(by a euclidean continuation and conformal transformation) to the surface describing
folded spinning string \cite{gkp} in the infinite spin limit \cite{ft},
the correlator computed in Section 3 may be
related to the strong-coupling limit of 3-point correlator of two infinite spin twist-2 operators and a
dilaton operator.
The latter correlator
may be computed \cite{bt1,rt} using similar semiclassical methods \cite{zar,cos}.
We point out that while the integrands in the two expressions are indeed the same,
the ranges of integration are different. The two integrals, however,
are indeed proportional for a special choice of the locations of the twist-2 operators.
In Section 5 we discuss the computation of the correlator
$\langle W_n {\cal O}(a) \rangle$ at strong coupling for a higher number of cusps $n >4$.
Unfortunately, the explicit form of the space-time solution is not known in this case,
but using the approach of \cite{AM2} we are able to compute the correlator
numerically in the limit when the dilaton is far away from the null polygon,
i.e. to find the OPE coefficient corresponding to the dilaton
in the expansion of the Wilson loop in its size.
In Section 6 we turn to the evaluation of this correlator at weak coupling, i.e.
in perturbative gauge theory. We explicitly see that the
leading term in $\langle W_4 {\cal O}(a) \rangle$
has a form consistent with the one expected on symmetry grounds
with the function $F(\zeta)=\lambda h_0 + O(\lambda^2)$, where $h_0$ is a constant.
We consider a generalization to $n >4$ and compute the
OPE coefficient for the dilaton in the case of a regular null polygon for arbitrary $n$.
We also compute the leading order $\lambda$ term in $F$ in the case
of the regular null polygon with $n=6$.
In Section 7 we summarize our results and mention some open questions.
In Appendix A we discuss the general structure of the correlator
$\langle W_4 {\cal O}(a) \rangle$.
In Appendix B we consider some analytic results which can be obtained
for an even number of cusps $n >4$ in the limit when the dimension
of the local operator is very large.
\section{Structure of correlation function of cusped Wilson loop and a
local operator }
Below we will consider the correlation function
\begin{equation}
{\cal C}(W_n, {a})=\frac{\langle W_n {\cal O} ({a})\rangle}{\langle W_n\rangle}\,,
\label{1.1}
\end{equation}
where $W_n$ is a polygonal Wilson loop made out of $n$ null lines (see Figure 1) and ${\cal O}$ is a local scalar operator inserted at a generic point
${a}=\{a_m\}=(a_0, a_1, a_2, a_3)$.
While the expectation value $\langle W_n\rangle$
of such Wilson loops is known to have UV divergences due to the presence of the
cusps \cite{polya,kor,korkor} (enhanced in the null case)
we will see that
the ratio~\eqref{1.1} is finite, i.e. does not require a regularization.
\begin{figure}[ht]
\centering
\includegraphics[width=45mm]{polygon.eps}
\caption{\small Cusped polygonal Wilson loop, shown here with six edges. Consecutive cusps, for instance $x^{(1)}$ and $x^{(2)}$, are null-separated.}
\label{fig1}
\end{figure}
\subsection{General considerations}
As follows from conformal symmetry, the non-trivial part of
$\langle W_n\rangle$ depends only on the conformally invariant ratios constructed using
the coordinates of the cusps~\cite{Drum}. The number of such conformal ratios for $n >5$ is
$4n-n -15=3n-15$. Here $4n$ stands for the total
number of coordinates, $n$ is the number of
null conditions on the polygon lines
and $15$ is the dimension of the conformal group.\footnote{In~\cite{Drum} this counting
was found using anomalous Ward identities in the framework of
perturbative gauge theory. In \cite{Bubble} the
same result was found at strong coupling by counting
the number of moduli of the Hitchin's equation with certain boundary conditions
determining the corresponding minimal surface in $AdS_5$.}
Furthermore, we expect \eqref{1.1} to be finite, since divergences from the numerator will be canceled by divergences from the denominator.
The number $3n-15$ of independent conformal ratios is exactly the same as the one that would appear
in a correlator of $n\geq 4$ primary operators
located at the corners of a null polygon.
In general, the structure of the $n$-point correlator $\langle {\cal O}(x^{(1)}) ... {\cal O}(x^{(n)})\rangle$
is fixed by conformal symmetry up to a function of conformal ratios.
The number of these conformal ratios
is always given by $c_n=4n -\gamma_n$, where $4n$ is the total number of coordinates and
$\gamma_n$
is the number of generators of the conformal group broken by the presence of the local operators. For $n=2$, $3$, $4$
we have
$\gamma_2=8$, $\gamma_3=12$ and $\gamma_4=14$
so that $c_2=0, \ c_3=0, \ c_4=2$.\footnote{This agrees with the familiar count
of the conformal ratios
${\rm u}^{(s)}= \prod^{n}_{i < j} |x^{(i)}_m -x^{(j)}_m|^{2 \nu^{(s)}_{ij}}$.
Here $\nu^{(s)}_{ij}$ is a basis in the space of
symmetric $n \times n $ matrices $\nu_{ij}$. Scaling invariance
and inversion symmetry
imply that one should have $\nu_{ii}=0, \
\sum_{j=1}^{n} \nu_{ij} =0$ for all $i=1, ..., n$.
This leaves $ {1 \over 2} n (n+1) - n - n = {1 \over 2} n (n-3)$ parameters
\cite{syma}
but for $n >6$ not all of the corresponding conformal ratios are functionally independent
(there are additional Gram determinant constraints).}
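For instance, for $n=4$ this counting gives ${1 \over 2}\cdot 4 \cdot 1 =2$ ratios, which may be chosen as the standard cross-ratios
\begin{eqnarray}
{\rm u}_1 = \frac{ |x^{(1)}-x^{(2)}|^2 \, |x^{(3)}-x^{(4)}|^2 }{ |x^{(1)}-x^{(3)}|^2 \, |x^{(2)}-x^{(4)}|^2 } \ , \ \ \ \ \ \ \
{\rm u}_2 = \frac{ |x^{(1)}-x^{(4)}|^2 \, |x^{(2)}-x^{(3)}|^2 }{ |x^{(1)}-x^{(3)}|^2 \, |x^{(2)}-x^{(4)}|^2 } \ , \nonumber
\end{eqnarray}
both of which vanish when the points are placed at the corners of a null polygon.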
A random configuration of $n>4$ points breaks the conformal
group completely, i.e. $\gamma_n=15$ and thus for $n >4$ we have
$c_n= 4n -15$.\footnote{This
can be seen, for example, as follows. For $n\geq 2$,
we can fix translations
and special conformal transformations by putting one point at the origin and one at infinity.
This configuration of two points
preserves dilatations and rotations which gives $7$ parameters implying that
the number of the broken generators for $n=2$ is $8$. If we add one more point at some
arbitrary finite position we break dilatations and certain rotations.
What survives is the subgroup of the Lorentz group which preserves one vector. This
subgroup is 3-dimensional so that $\gamma_3=12$.
If we add the fourth point the surviving subgroup has to preserve two vectors and, hence, is
one-dimensional, $\gamma_4=14$.
If we add one more point all the conformal group becomes broken.}
If the operators are located at the corners of a null polygon we have to impose $n$
additional constraints which gives $d_n=3n -\gamma_n$ for the number of conformal ratios,
i.e. $d_4=-2$, $d_5 =0$ and thus $d_n= 3n-15$ for $n > 4$.
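For instance, $d_4 = 12-14 = -2$ reflects the fact that a null quadrangle has no conformal moduli (indeed, the irregular quadrangle discussed in section 3.1.2 below is obtained from the regular one by a conformal transformation), while $d_6 = 18-15 = 3$, in agreement with the three conformal ratios of the 6-cusp Wilson loop mentioned above.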
Adding an operator ${\cal O}$ in~\eqref{1.1} at a generic point
brings in 4 parameters so that $ {\cal C}(W_n, {a}) $ with $n \geq 4$
should be a non-trivial function
of $3n-11$
conformally invariant combinations $\zeta_k$
constructed out of
the coordinates $x_m^{(i)}$ of the $n$ cusps and the point $a_m$.\footnote{We shall use
the notation $\zeta_k$ to distinguish these conformal ratios from standard cross-ratios
${\rm u}_k$ which appear in correlators of local operators at generic points.}
This is, of course, the
same as the number of conformal ratios parametrising a correlator of $n+1$
operators with only $n$ points being null-separated,
\begin{equation} \label{redd}
c_{n+1} - n = 4(n+1) -15 - n =3n-11 \ . \end{equation}
Like for correlators of primary operators or Wilson loops, the general structure
of ${\cal C} (W_n, a)$ in \rf{1.1} should be determined by conformal invariance. We shall
assume that in contrast to $\langle W_n\rangle$, which contains UV divergences, the correlator \rf{1.1} should be UV finite (up to a possible renormalization of the
operator ${\cal O}$). As we shall argue below, in this case the
conformal invariance together with the expected OPE
property fixes ${\cal C} (W_n, a)$ up to a single function $F$
depending on $3n-11$ conformal ratios $\zeta_k$.
In general, ${\cal C} (W_n, a)$ should be a function of $n$ distances
$|a- x^{(k)}|$ and ${1 \over 2} n (n-3)$ non-zero ``diagonals'' of the null polygon
$|x^{(i)}- x^{(j)}|$, $i \not= j\pm 1$.\footnote{We shall use the notation:
$|x-x'|^2= (x_m-x_m')^2 = - (x_0-x_0')^2 + (x_1-x_1')^2 +
(x_2-x_2')^2 +(x_3-x_3')^2 $.}
It should also transform like the operator
${\cal O}}\def \no {\nonumber(a)$ with dimension $\Delta$
under (i) dilatations and (ii) inversions, i.e.
(i) ${\cal C}\to h^{-\Delta}{\cal C}$
under $ x^{(i)} \to h x^{(i)}, \ a\to h a$, and
(ii) ${\cal C}\to |a|^{2\Delta}{\cal C}$ under $a_m \to \frac{a_m}{|a|^2}, \
x^{(i)}_m \to \frac{x^{(i)}_m}{ | x^{(i)} |^2}$.\footnote{Since special
conformal transformations are generated by translations and
inversions, it is enough to consider only the transformation under the inversions.
Note that under the inversions $ |x-x'|^2 \to \frac{| x - x' |^2}{|x|^2 |x'|^2}$.}
The large $|a|$ behavior of ${\cal C}$ can be fixed
by consistency with the expected OPE expansion: for small Wilson
loop one may represent it
in terms of a sum of local operators \cite{shif,cor}
\begin{equation}
{W_n \over \langle W_n \rangle} = 1 + \sum_{k} c_k \ {\rm r}^{\Delta_k}\ {\cal O}_k(0) + ... \ , \label{22}
\end{equation}
where
${\rm r}$ is the characteristic size of a loop,
${\cal O}_k$ are conformal primary operators with dimensions $\Delta_k$, and
dots stand for contributions of their conformal descendants.\footnote{For example, in pure YM
theory \cite{shif}:
$\langle W(C) \rangle_{C\to 0} =1 + c_0 {\rm r}^4 \langle {\cal O}_{dil.0} \rangle +...$,
where ${\cal O}_{dil.0} \sim {1 \over N} \text{tr} F^2_{mn}$,
${\rm r}^4 $ stands for the square of the area of a disc bounded by $C$
and $c_0 =a_1 g^2 + a_2 g^4 + ...$. }
Taking the position $a$ of the operator ${\cal O}$ to be far away from the
null polygon one should then get
\begin{equation}
{\langle W_n {\cal O} ({a})\rangle \over \langle W_n \rangle}\Big|_{|a|\to \infty}
\ \sim \ \langle {\cal O}^{\dagger}(0) {\cal O} ({a})\rangle\ \sim
\frac{1}{|{a}|^{2\Delta}}\,,
\label{1.3}
\end{equation}
where ${\cal O}^{\dagger}$ conjugate to ${\cal O} $ is among the
operators present in \rf{22}. Since all distances $|a-x^{(k)}|$
between the operator and the cusps
should appear on an equal footing this suggests the following ansatz
\begin{equation}
{\cal C}(W_n, a)=
\frac{{\cal F}(a,x^{(i)})}{\prod_{k=1}^n
|a-x^{(k)}|^{{2\over n}\Delta}}\,,
\label{e1}
\end{equation}
where ${\cal F}$ is finite in the $|a|\to \infty$ limit,
i.e. it may depend on $|a- x^{(k)}|$ only through their ratios.
The dependence of ${\cal F}$ on $|x^{(i)}- x^{(j)}|$ is constrained
by the transformations under dilatations and inversions mentioned above which implies that
under these two transformations we should have
\begin{equation}\label{tra}
(i)\
{\cal F} \to h^{\Delta} {\cal F} \ , \ \ \ \ \ \ \ \ \ \ \ (ii) \ {\cal F} \to
(|x^{(1)}|\ldots |x^{(n)}|)^{-\frac{2 }{n}\Delta} {\cal F} \ . \end{equation}
These conditions are solved, e.g.,
by taking ${\cal F} \sim \prod_{i< j-1}^n |x^{(i)}-x^{(j)}|^{\mu}$ with
$\mu= \frac{2 }{n (n-3)}\Delta$. In addition, ${\cal F}$ may contain
a factor $F$ depending only on conformal ratios $\zeta_k$ which is manifestly invariant under the
dilatations and inversions. As we argue in Appendix A,
this is, in fact, the general structure of $\cal C$, i.e.
we are led to the following expression for \rf{1.1}
\begin{equation}
{\cal C}(W_n, a)= \frac{\prod^n_{i < j-1} | x^{(i)}- x^{(j)}|^{ \frac{2 }
{n (n-3)}\Delta } }{\prod_{k=1}^n |a- x^{(k)}|^{{2\over n}\Delta}}\ F(\zeta_1,
...,\zeta_{3n-11})\,.
\label{1.2}
\end{equation}
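As a quick check of the weights in \rf{1.2}: under the dilatations the prefactor scales as
\begin{eqnarray}
h^{\, {1 \over 2} n (n-3)\, \cdot\, {2 \over n(n-3)}\Delta }\ h^{ -\, n \,\cdot\, {2 \over n}\Delta } = h^{-\Delta} \ , \nonumber
\end{eqnarray}
as appropriate for an operator of dimension $\Delta$, while under the inversions each $x^{(k)}$ enters $n-3$ of the diagonals, so the numerator produces the factor $\prod_{k=1}^n |x^{(k)}|^{-{2 \over n}\Delta}$ which cancels against the same factor generated by the denominator, leaving the required weight $|a|^{2\Delta}$.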
In general, $\Delta$ and $F$ in \rf{1.2} may depend also on the coupling
$\lambda$, i.e. they may look different at weak and at strong coupling, but the
general structure \rf{1.2} should be universal.
The same structure \rf{1.2} follows also from
the general form of the correlator of local operators if the relation \rf{121}
is assumed to be true.
As is well know, conformal invariance implies that a correlator of
$q$ primary operators ${\cal O}_i(x^{(i)})$ of dimensions $\Delta_i$ at generic positions
should have the form
\begin{eqnarray} \label{133}
&&\langle {\cal O}}\def \no {\nonumber_1 (x^{(1)}) ... {\cal O}}\def \no {\nonumber_q (x^{(q)}) \rangle
= T_q \ {\rm F}_q ({\rm u}_1, ..., {\rm u}_{c_q}) \ , \ \ \ \ \ \ \ \
T_q \equiv \prod^q_{i < j} | x^{(i)}- x^{(j)}|^{ - \gamma_{ij} }\ , \\
&&
\gamma_{ij}= {2 \over q-2} \Big( \Delta_i + \Delta_j - {1 \over q-1}
\sum^q_{k=1} \Delta_k\Big) \ , \ \ \ \
\ \ \ c_4=2 \ , \ \ \ c_{q > 4} = 4q -15 \label{quq}
\ , \end{eqnarray}
where ${\rm F}_q $ is a function of conformally-invariant cross-ratios.
Considering $q=n+1$ with $n$ operators being the same, ${\cal O}_k = \hat {\cal O},
\ \Delta_k = \hat \Delta $ and ${\cal O}_{n+1} = {\cal O}$, \ $ \Delta_{n+1} = \Delta$ we find
\begin{equation} \label{kp}
{T_{n+1} } = \prod^n_{i < j} | x^{(i)}- x^{(j)}|^{ - { 2 \over n-1} ( \hat \Delta - { 1 \over n} \Delta)
}
\prod^n_{k=1} | x^{(n+1)}- x^{(k)}|^{ - { 2 \over n}\Delta }\ , \ \ \ \ \
{T_{n} } = \prod^n_{i < j} | x^{(i)}- x^{(j)}|^{ - { 2 \over n-1} \hat \Delta }\ ,
\end{equation}
so that in the ratio of the two correlators in \rf{121} we have
\begin{equation} \label{rat}
{T_{n+1}\over T_n } =
{ \prod^n_{i < j} | x^{(i)}- x^{(j)}|^{ { 2 \over n(n-1)} \Delta } \over
\prod^n_{k=1} | a- x^{(k)}|^{ { 2 \over n} \Delta } }\ , \ \ \ \ \ \ \ \ \ \ \
a\equiv x^{(n+1)} \ .
\end{equation}
To get a non-trivial expression in the null-separation limit
$| x^{(i)}- x^{(i+1)}| \to 0$ we will need to assume that $n$ of such vanishing
factors in numerator of \rf{rat} get cancelled against
similar factors in some cross-ratios contained in ${\rm F}_{n+1}/{\rm F}_{n}$.
That will change the powers of the
remaining ${1 \over 2} n (n-1) -n= {1 \over 2} n (n-3) $
non-zero factors $| x^{(i)}- x^{(j)}|$ in \rf{rat} and also reduce the total number
of non-trivial conformal ratios (now denoted by $\zeta_r$) by $n$ as in \rf{redd}.
The result will then have the same form as in \rf{1.2}.
Indeed, the combination one needs to multiply \rf{rat} by to cancel
the vanishing $| x^{(i)}- x^{(i+1)}|$ factors in the numerator and to match
the prefactor in \rf{1.2} with $\mu_{ij}= {2 \Delta \over n(n-3)} $ is ($x^{(n+1)}\equiv x^{(1)}$)
\begin{equation}
{ \prod^n_{i < j-1} | x^{(i)}- x^{(j)}|^{ { 4 \over n(n-1)(n-3)} \Delta } \over
\prod^n_{k=1} | x^{(k)}- x^{(k+1)}|^{ { 2 \over n(n-1)} \Delta } }\ .
\label{mul}
\end{equation}
One can check that this expression is invariant under both dilatations and inversions and can thus be expressed in terms of cross-ratios.
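Explicitly, after multiplying \rf{rat} by \rf{mul} each non-zero diagonal appears with the total power
\begin{eqnarray}
{2 \over n(n-1)}\Delta + {4 \over n(n-1)(n-3)}\Delta = {2 \over n(n-1)}\, {n-1 \over n-3}\, \Delta = {2 \over n(n-3)}\Delta \ , \nonumber
\end{eqnarray}
which is precisely the exponent of the $|x^{(i)}-x^{(j)}|$ factors in the prefactor of \rf{1.2}.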
Let us note that one could, in principle, treat $x^{(i)}$
and $a$ on a different footing, aiming at determining the dependence on $a_m$ for
fixed positions of the cusps $x^{(i)}$ viewed as given parameters.
In this case the $|x^{(i)}-x^{(j)}|$ dependent factor in \rf{1.2} could be
formally absorbed into the function $F$.
One may wonder how the rational powers in \rf{1.2} may appear
in a weak-coupling perturbation theory. The point is that one can
recover integer powers for an appropriate $F$.
We shall comment on this issue in
Appendix A on the example of $n=5$ and $\Delta=4$.
\subsection{$n=4$ case}
Let us now look in detail at the first non-trivial example: $n=4$.\footnote{The case of $n=3$ is trivial as there is no solution for coordinates of a null triangle in real 4d Minkowski space.}
Here the number of variables $\zeta_k$
is $3\times 4 -11=1$, i.e. $F$ should be
a function of a {\it single} variable $\zeta_1\equiv \zeta$.
For $n=5$ the number of conformal ratios is already 4.
This makes the correlation
function~\eqref{1.1} for $n=4$ a particularly interesting and simple case to study.
As follows from the above discussion, this variable $\zeta$ can be viewed
as the unique conformal ratio which one can build out of the coordinates
$x^{(i)}_m$ \ ($i=1, \dots, 4$) of 4
cusps and the location $a_m$ of the operator ${\cal O}$.
Assuming that the null quadrangle is ordered as
$x^{(1)},x^{(2)},x^{(3)},x^{(4)}$ (i.e.
$|x^{(1)}- x^{(2)}|^2=|x^{(2)}- x^{(3)}|^2= |x^{(3)}- x^{(4)}|^2
=|x^{(4)}- x^{(1)}|^2= 0$) it is easy to see
that the unique non-trivial conformally-invariant combination of these 5 points is
\begin{equation}
\zeta= \frac{ |a - x^{(2)} |^2\ | a - x^{(4)} |^2 \ | x^{(1)} - x^{(3)} |^2 }
{ |a- x^{(1)} |^2\
| a- x^{(3)} |^2 \ | x^{(2)} - x^{(4)} |^2 } \ .
\label{ges}
\end{equation}
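One can verify the conformal invariance of \rf{ges} directly: under the inversions each factor transforms as $|x-y|^2 \to {|x-y|^2 \over |x|^2 |y|^2}$, so the numerator and the denominator of $\zeta$ acquire one and the same factor $|a|^{-4}\prod_{i=1}^4 |x^{(i)}|^{-2}$ which cancels in the ratio (invariance under translations, rotations and dilatations is obvious).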
In this case there is also a unique choice for the
$x^{(i)}$-dependent factor in \rf{1.2}:
$ ( | x^{(1)} - x^{(3)} | | x^{(2)} - x^{(4)} |)^{\Delta/2}$
that ensures the right dimensionality of the result.
We conclude that the correlation function~\eqref{1.1}
for $n=4$ should have the following general form
\begin{equation}
{\cal C}(W_4, a)=\frac{( |x^{(1)} -x^{(3)}| |x^{(2)} -x^{(4)}|)^{\Delta/2}} {\prod_{i=1}^4 |a -x^{(i)}|^{\Delta/2}}\ F(\zeta)\,,
\label{1.15}
\end{equation}
where $\Delta$ is the dimension of the operator ${\cal O}}\def \no {\nonumber$ and $\zeta$ is given
by~\eqref{ges}.
As discussed above, the same conclusion applies also to a correlator of 4 equivalent null-separated
operators and an extra operator ${\cal O}}\def \no {\nonumber$. Indeed, for $n=4$ it is easy to see that
\rf{rat} is to be multiplied, according to \rf{mul}, by
\begin{equation}
{ (| x^{(1)}- x^{(3)}| | x^{(2)}- x^{(4)}| ) ^{ \Delta/3 } \over
\prod^4_{k=1} | x^{(k)}- x^{(k+1)}|^{ \Delta/6 }
}
\ ,
\label{mula}
\end{equation}
which is a product of two cross-ratios in power $\Delta/6$.
It is
interesting to note that depending on just {\it one} conformal ratio, the $n=4$
correlator \rf{1.15}
is an ``intermediate'' case
between a 3-point function
$\langle {\cal O}(x^{(1)}) {\cal O}(x^{(2)}) {\cal O}(x^{(3)}) \rangle$
which is completely fixed by conformal invariance (up to a function of the coupling)
and a generic 4-point function $\langle {\cal O}(x^{(1)})...{\cal O}(x^{(4)}) \rangle$
which depends on two conformal ratios.
In the limit when $|a| \to \infty$ we get
\begin{eqnarray}
&& {\cal C} (W_4, a)_{|a| \to \infty}
=\frac{{\rm C}}{|a|^{2 \Delta}}\ , \label{1.16} \\
&& {\rm C} \equiv ( |x^{(1)} -x^{(3)}| |x^{(2)} -x^{(4)}|)^{\Delta/2} \ F (\zeta_\infty) \ , \ \ \ \ \ \ \ \
\zeta_\infty= \frac{ | x^{(1)} - x^{(3)} |^2} { | x^{(2)} - x^{(4)} |^2 }
\,,
\label{1.166}
\end{eqnarray}
where $ {\rm C} $ thus determines the corresponding OPE coefficient in \rf{22}.
Another special limit is when the position of the operator approaches
the location of one of the cusps, e.g., $a \to x^{(1)}$.
Setting $a_m = x_m^{(1)} + \epsilon \alpha_m$, $\epsilon \to 0$, and using that
the vectors $x^{(1)}-x^{(2)}$ and $x^{(1)}-x^{(4)}$ are null
we find from~\eqref{ges} that $\zeta$ is, generically, finite in this limit and is given by
\begin{equation}
\zeta_{a\to x^{(1)}}
= \frac{4\alpha \cdot (x^{(1)}-x^{(2)}){\ \ }\alpha \cdot (x^{(1)}-x^{(4)}) }
{\alpha^2\ |x^{(2)}-x^{(4)}|^2}\,, \ \ \ \ \ \ \ \
a_m = x_m^{(1)} + \epsilon \alpha_m \ .
\label{1.17}
\end{equation}
Similarly, the limit of the pre-factor in~\eqref{1.15} is
\begin{equation}
\prod_{i=1}^4 |a-x^{(i)}|^{\Delta/ 2} {}_{_{a\to x^{(1)}}}
\ \to \ \
\epsilon^\Delta\ \big[ 4\, \alpha^2 \ \alpha\cdot (x^{(1)}-x^{(2)} ) \ \alpha\cdot (x^{(1)}-x^{(4)} ) \big]^{\Delta/4}\ |x^{(1)}-x^{(3)}|^{\Delta/2} \,.
\label{1.18}
\end{equation}
Thus
\begin{equation} {\cal C}( W_4, {a}){}_{_{a\to x^{(1)}}} \ \sim \
{ 1 \over |a- x^{(1)}|^\Delta} \ . \label{1.88} \end{equation}
Note that this is the same behavior that would be expected
if the Wilson loop were replaced by a product of 4
same-type operators (e.g., scalar operators as in \cite{ak})
at the positions of the cusps:
$ \langle W_4 {\cal O} ({a})\rangle \to \langle
\hat {\cal O}(x^{(1)})... \hat {\cal O}(x^{(4)})\ {\cal O} ({a})\rangle$.
Then the limit $ a\to x^{(1)}$ would be determined by the OPE,
$ \hat {\cal O} (x^{(1)})\ {\cal O} ({a})\ \sim { 1 \over
|a - x^{(1)} |^{\Delta}} \hat {\cal O} (x^{(1)}) $.
One may also consider a limit when $a$ does not approach
a cusp but becomes null-separated from it, i.e. $|a- x^{(1)}| \to 0$.
In this case the correlator will be divergent, i.e.
having $|a- x^{(i)}| \not=0$ is important for finiteness.
This is
analogous to the observation in \cite{ak} that keeping $|x^{(i)}
- x^{(i+1)}|$ finite in the correlator of local operators effectively regularizes the
null cusp divergences of the corresponding Wilson loop.
\
Below we will explicitly verify the general form \rf{1.2},\rf{1.15} of the
correlator \rf{1.1} at leading orders in the
strong-coupling (section 3) and the weak-coupling (section 6)
expansions and compute the corresponding function $F$.
\section{Correlation function of 4-cusp Wilson loop \\
with a local operator
at strong coupling}
In this section
we will compute \rf{1.1} for $n=4$ corresponding
to the 4-cusp Wilson loop at strong coupling.
The result will have the expected form ~\eqref{1.15} and we will
explicitly determine the function $F(\zeta)$.
We shall always consider the planar limit of maximally supersymmetric
Yang-Mills theory and
assume that the operator
${\cal O}$ is such that for large `t Hooft coupling $\lambda$ its dimension
$\Delta$ is much smaller than $\sqrt{\lambda}$.\footnote{If the
operator carried charges of order $\sqrt{\lambda}$ at
large coupling, it would modify the minimal surface that
determines the leading order of the semiclassical expansion.}
In particular,
$\cal O$ will be chosen as the dilaton operator or the chiral primary operator.
We shall follow the same semiclassical string theory
approach that was used in the case of the circular Wilson loop in \cite{cor,za02} (see also \cite{zar,cos,rt,bt2,alt}).
In string-theory description the local operator ${\cal O}({a})$ is represented by a marginal
vertex operator~\cite{pt}
\begin{equation}
{\rm V}({a})=\int d^2 \xi\ V[X(\xi); {a}]\,,
\label{2.1}
\end{equation}
where
$X$ stands for the 2d fields that enter the $AdS_5 \times S^5$ superstring action.
In general, \eqref{1.1} is then given by
\begin{equation}
{\cal C}(W_n, {a})= \frac{1}{\langle W_n \rangle} \int [dX] \ {\rm V}(a) \ e^{-I[X]}
\,.
\label{2.2}
\end{equation}
Here $I$ is the string action proportional to the tension $T= {\sqrt \lambda \over 2 \pi}$
and the path integral is performed over the euclidean
world-sheets with topology of a disc (we consider only the planar approximation)
and the boundary conditions set out by the Wilson loop at $z=0$.
Considering the limit when $ \sqrt{\lambda} \gg 1$ and assuming
that the operator represented by ${\rm V}$ is ``light'' \cite{rt} (i.e. the corresponding
scaling dimension and charges are much smaller than $ \sqrt{\lambda} $)
one concludes that
this path integral is dominated by the same semiclassical string surface
as in the absence of ${\rm V}$, i.e. as in the case of $\langle W_n \rangle$.
The resulting leading-order value of \rf{2.2} is then given by \rf{2.1}
evaluated on this classical solution, i.e.
\begin{equation}
{\cal C}(W_n, a)_{_{\sqrt{\lambda} \gg 1}} = \Big( \int d^2 \xi\ V[X(\xi); a] \Big)_{semicl.}\,.
\label{2.3}
\end{equation}
\subsection{Correlation function with dilaton operator}
One simple case is when the local operator ${\cal O}}\def \no {\nonumber$ is the
dilaton operator ${\cal O}_{dil}\sim \text{tr} ( F^2_{mn} Z^j)+...$
(where we included also the R-charge $j$ dependence).
The corresponding vertex operator has the form~\cite{rt}
\begin{eqnarray}
&&{\rm V}_{dil} ({a}) =c_{dil} \int d^2 \xi \ \Big[\frac{z}{z^2+ (x_m-a_m)^2}\Big]^\Delta\ {\rm X}^j \
U_{dil} \ ,
\label{2.4}\\
&& {\rm X}^j = \big(\cos \theta \ e^{i \varphi}\big)^j \ , \ \ \ \ \ \ \ \ \ \Delta = 4 + j \ , \label{ku}
\end{eqnarray}
where $j\ll \sqrt{\lambda}$ is an angular momentum along $S^1$ in $S^5$.
The operator $U_{dil}$ equals the $AdS_5 \times S^5$ Lagrangian %
\begin{equation}
U_{dil} ={\cal L}={\cal L}_{AdS_5} + {\cal L}_{S^5} +{\rm fermions} \ , \ \ \ \ \ \
{\cal L}_{AdS_5}= \frac{1}{z^2}[ (\partial_{\alpha}z)^2 + (\partial_{\alpha}x_m)^2] \ .
\label{2.5}
\end{equation}
Furthermore,
$c_{dil}$ is the normalization coefficient given by~\cite{cor,zar,rt}\footnote{$N$
is the rank of gauge group here representing a factor of string coupling.
Note that the normalization of the operator ${\rm V}$ is important in order to compute
the correlation function \rf{2.3} and this normalization is currently known only
for the BPS operators \cite{cor,zar,rt}.}
\begin{equation}
c_{dil}= \frac{\sqrt{\lambda} }{8 \pi N}\sqrt{(j+1)(j+2)(j+3)} \,.
\label{2.6}
\end{equation}
Below we shall mostly consider the case of $j=0$ when
\begin{equation}
j=0: \ \ \ \ \ \ \ \ \ \ \ \ \ \Delta=4 \ , \ \ \ \ \ \ c_{dil}= \frac{\sqrt{6} \ \sqrt{\lambda} }{8 \pi N} \ ,
\end{equation}
and return to the case of $j \not=0$ at the end of this subsection.
\subsubsection{Regular 4-cusp case}
Let us start with the case when the Wilson loop is
the regular (i.e. equal-sided) quadrangle with 4 cusps
(Figure 2a).
%
\begin{figure}[ht]
\centering
\includegraphics[width=75mm]{pol2.eps}
\caption{\small $(x_1,x_2)$ plane
projection of (a) regular and (b) irregular quadrangle Wilson loop.}
\label{fig2}
\end{figure}
The classical euclidean world-sheet surface in $AdS_5$ ending on this Wilson loop was found in~\cite{AM1}
and is given by\footnote{Here $(u,v)$
cover the full plane, but since infinity is not identified
the world sheet has topology of a disc.}
\begin{eqnarray}
&& z=\frac{{\rm r}}{\cosh u\ \cosh v}\,, \ \ \ \qquad
x_0= {\rm r}\ \tanh u\ \tanh v\,, \nonumber\\
&&
x_1= {\rm r}\ \tanh u \,, \qquad x_2= {\rm r}\ \tanh v \,,\ \quad x_3=0\,; \ \ \quad u, v \in (-\infty, \infty)\,.
\label{1.4}
\end{eqnarray}
Here $z$ is the radial direction of the Poincare patch of
$AdS_5$ and $x_m=(x_0, x_1, x_2, x_3)$ are
the coordinates on the boundary.
The parameter ${\rm r}$ corresponds to the overall scale of the loop.
To simplify later formulas
we will set ${\rm r} =1$ (it is easy to restore ${\rm r}$ by simply replacing
$z \to {\rm r}^{-1} z $, $x_m \to {\rm r}^{-1} x_m $).
The cusps correspond to
$(u, v) \to (\pm \infty, \pm \infty)$
and thus are located at
\begin{eqnarray}
&& x^{(1)}=(1,\ 1,\ 1,\ 0)\,, \qquad\ \ \ \ \ \ x^{(2)}=(-1,\ 1, -1,\ 0)\,,\no \\
&&
x^{(3)}=(1,-1,-1,\ 0)\,, \qquad\ \ \ x^{(4)}=(-1,-1,\ 1,\ 0)\,.
\label{2.16}
\end{eqnarray}
Substituting~\eqref{2.16} into~\eqref{ges} gives the following explicit form of
the
conformal ratio $\zeta$ that is expected to appear in the correlator
\begin{eqnarray}
&& \zeta = \frac{({1 \over 2} {\rm q}- a_0 - a_1 + a_2) ({1 \over 2} {\rm q}- a_0 + a_1 - a_2)}
{ ({1 \over 2} {\rm q}+ a_0 - a_1 - a_2) ({1 \over 2} {\rm q}+ a_0 + a_1 + a_2)} \ ,
\label{1.13} \\
&& {\rm q}\equiv 1-a_0^2+a_1^2+a_2^2 +a_3^2\,.
\label{1.14}
\end{eqnarray}
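Indeed, using \rf{2.16} and the definition \rf{1.14} one finds, e.g.,
\begin{eqnarray}
|a- x^{(2)}|^2 = {\rm q} - 2 a_0 - 2 a_1 + 2 a_2 \ , \ \ \ \ \ \ \ \
|a- x^{(1)}|^2 = {\rm q} + 2 a_0 - 2 a_1 - 2 a_2 \ , \nonumber
\end{eqnarray}
etc., while $|x^{(1)}-x^{(3)}|^2 = |x^{(2)}-x^{(4)}|^2 = 8$, so that \rf{ges} indeed reduces to \rf{1.13}.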
Substituting the classical
solution~\eqref{1.4} into \eqref{2.4} we obtain\footnote{Below in this
section the expression
for a correlator will always stand for its leading $\sqrt{\lambda} \gg 1$ value.}
\begin{equation}
{\cal C}_{dil}( W_4^{(reg)}, {a})=2 c_{dil} \int_{-\infty}^{\infty} d u d v \
\Big[ \frac{ (\cosh u\ \cosh v)^{-1} }
{{\rm q} -2 a_1 \tanh u -2 a_2 \tanh v + 2 a_0 \tanh u \tanh v} \Big]^4 \ ,
\label{2.7}
\end{equation}
where ${\rm q}$ is given by~\eqref{1.14} and we used the fact that
on the solution~\eqref{1.4} one has $U_{dil}=2$ in \rf{2.5}
(note also that here
$\int d^2 \xi = \int dudv$).
The integral is straightforward
to do by introducing the variables $U=\tanh u,\ V=\tanh v$ and we get
\begin{eqnarray}
&&{\cal C}_{dil}( W_4^{(reg)}, {a})=c_{dil}
\frac{16 a_1 a_2 - 8 {\rm q} a_0 - ({\rm q}^2 +4 a_0^2 -4 a_1^2 -4 a_2^2)\ \log \zeta}
{12({\rm q} a_0 -2 a_1 a_2)^3}\,,
\label{2.8}
\end{eqnarray}
where we have used \eqref{1.13}.
The result is thus finite, in
contrast to the area of the
4-cusp surface that requires a regularization \cite{AM1}. Let us note that
if we consider the dilaton operator at
zero momentum, i.e. integrate over the point $a$,
we will recover the divergent area expression as
then the dilaton vertex operator with $\Delta=4$ in \rf{2.4}
will become proportional to the string action
(the $[...]^\Delta$ factor in \rf{2.4} effectively provides a regularization for $\Delta
\not=0$). This is, of course, related to the fact that an insertion of the zero-momentum
dilaton is equivalent to taking a derivative over the string tension which brings down a
factor of the string action.
Let us now show that the result \eqref{2.8} is indeed consistent with eq.~\eqref{1.15}
for $\Delta=4$.
We observe that
\begin{eqnarray}
&& {\rm q}^2+4 a_0^2 -4 a_1^2 -4a_2^2= \frac{1 +\zeta}{2}P_1 P_2\,,\ \ \ \
P_{1,2}\equiv {\rm q}+2 a_0 \mp2 a_1 \mp2 a_2\,,
\no \\
&& {\rm q} a_0-2 a_1 a_2 =\frac{1-\zeta}{8}P_1 P_2\,.
\label{2.13}
\end{eqnarray}
If we substitute eqs.~\eqref{2.13} into~\eqref{2.8}
we get (restoring the dependence on the
scale parameter ${\rm r}$ in \rf{1.4})
\begin{equation}
{\cal C}_{dil}( W_4^{(reg)}, {a})=\frac{64 {\rm r}^4 c_{dil} }{3} \ \frac{1}{P_1^2 P_2^2 } \ \frac{1}{(\zeta-1)^3}
[-2(\zeta-1) +(\zeta +1) \log \zeta]
\,.
\label{2.14}
\end{equation}
Finally, one can check that
\begin{equation}
P_1^2 P_2^2= \zeta^{-1} \prod_{i=1}^4 |a- x^{(i)}|^2\,, \ \ \ \ \ \ \ \ \ \
|x^{(1)}- x^{(3)} |^2 |x^{(2)}- x^{(4)} |^2 = 64 {\rm r}^4 \ ,
\label{2.15}
\end{equation}
where $x^{(i)}$ are the locations \rf{2.16} of the cusps in~\eqref{1.4}.
We conclude that the correlator
\rf{2.7} is given by eq.~\eqref{1.15} with
\begin{equation}
F(\zeta) =\frac{ c_{dil}}{3}\ \frac{\zeta}{(\zeta-1)^3}[-2(\zeta-1) +(\zeta +1) \log \zeta]\,.
\label{2.17}
\end{equation}
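Note that $F$ in \rf{2.17} is regular at $\zeta=1$: expanding in $\zeta -1$ gives
\begin{eqnarray}
-2(\zeta-1) +(\zeta +1) \log \zeta = {1 \over 6} (\zeta-1)^3 + O\big((\zeta-1)^4\big) \ , \ \ \ \ \ {\rm i.e.} \ \ \ \ F(1) = \frac{c_{dil}}{18}\ . \nonumber
\end{eqnarray}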
In the limit $|a|\to \infty$ (see \rf{1.16}) we get $\zeta_\infty=1$ and thus
\begin{equation}
{\cal C}_{dil}( W_4^{(reg)}, {a})_{_{|a|\to \infty} }
\ =\ \frac{32 c_{dil}\ {\rm r}^4 }{9 \ |a|^8}\,,
\label{2.18}
\end{equation}
which determines
the OPE coefficient
of ${\cal O}_{dil}$ in the expansion \rf{22} of the Wilson loop
$W_{4}^{(reg)}$.\footnote{
The same expression can be obtained
by taking $|{a}|$ large directly in~\eqref{2.4} and doing the
resulting simple integral $\sim |a|^{-8} \int d^2 \xi \ z^4$.}
In the limit when $a$ approaches a cusp ($a_m=x^{(1)}_m + \epsilon \alpha_m, \ \ \epsilon \to 0$)
(see \rf{1.17},\rf{1.18})
we get
\begin{eqnarray}
&&{\cal C}_{dil}( W_4^{(reg)}, {a})_{a\to x^{(1)}}
\ \to \
-\frac{2}{3 \ep^4}\ \frac{1}{[-3 \alpha_0^2+ \alpha_3^2 +(\alpha_1-\alpha_2)^2 +2 \alpha_0 (\alpha_1+\alpha_2)]^2 }
\nonumber\\
&&
\times \Big[1 - \frac{\alpha_3^2+ (\alpha_1+\alpha_2-\alpha_0)^2}{-3 \alpha_0^2+ \alpha_3^2 +(\alpha_1-\alpha_2)^2 +2
\alpha_0 (\alpha_1+\alpha_2) }
\log \frac{\alpha_1^2 +\alpha_2^2 +\alpha_3^2-\alpha_0^2}{2(\alpha_0-\alpha_1)(\alpha_0-\alpha_2)}\Big]\,.
\label{2.28.2}
\end{eqnarray}
The behavior $\epsilon^{-\Delta}=\epsilon^{-4}$ is in
agreement with the general expression \rf{1.88}.\footnote{Note that
eq.~\eqref{2.28.2} gives the general behavior near the cusp. However, if we approach
it along a specific path we can have additional singularities which
can all be found from~\eqref{2.28.2}.}
Let us note also that one may consider a different limit
when $a$ does not approach the cusp
$x^{(1)}$ but becomes null separated from it, i.e.
$| a- x^{(1)}| \to 0$. In this case the correlator is logarithmically divergent:
${\cal C}_{dil}( W_4^{(reg)}, {a})
\sim \log | a- x^{(1)}|$. If $a$ becomes at the same time null-separated from the
two adjacent cusps (say, $x^{(1)},x^{(2)}$) then $\zeta$ stays finite and the correlator has a power divergence
from the prefactor:
${\cal C}_{dil}( W_4^{(reg)}, {a})
\sim | a- x^{(1)}|^{-2} | a- x^{(2)}|^{-2}$.
\subsubsection{Irregular 4-cusp case}
The above calculation can be generalized
to the case of an irregular quadrangle, i.e. the one with unequal diagonals
$s \neq t$ (Figure 2b). The corresponding solution can be found by applying a conformal
transformation
to \rf{1.4} ~\cite{AM1}
\begin{eqnarray}
&&
z= \frac{f(u,v)}{\cosh u\ \cosh v}\,, \qquad
x_0= {\sqrt{1+b^2}\ f(u,v)\ \tanh u\ \tanh v}\,, \nonumber \\
&&
x_1 =f(u,v)\ { \tanh u}\ , \qquad
x_2 =f(u,v)\ { \tanh v} \,, \qquad x_3=0\,, \nonumber\\
&&
f(u,v)\equiv \frac{{\rm r}}{1+ b \tanh u\ \tanh v}\,, \ \ \ \ \ \ \ \ \ |b|\leq 1 \ .
\label{2.19}
\end{eqnarray}
$b=0$ corresponds to the regular quadrangle case \rf{1.4}.
The cusps
are found by taking $(u, v) \to (\pm \infty, \pm \infty)$ and are located at
(cf. \eqref{2.16}; here we set ${\rm r}=1$)
\begin{eqnarray}
&&
x^{(1)}_m=(\frac{\sqrt{1+b^2}}{1+b}, {\ } \frac{1}{1+b}, {\ }\frac{1}{1+b}, {\ }0)\,, \quad
x^{(2)}_m=(-\frac{\sqrt{1+b^2}}{1-b}, {\ } \frac{1}{1-b}, {\ }\frac{-1}{1-b}, {\ }0)\,,
\nonumber\\
&&
x^{(3)}_m=(\frac{\sqrt{1+b^2}}{1+b}, {\ } \frac{-1}{1+b}, {\ }\frac{-1}{1+b}, {\ }0)\,, \quad
x^{(4)}_m=(-\frac{\sqrt{1+b^2}}{1-b}, {\ } \frac{-1}{1-b}, {\ }\frac{1}{1-b}, {\ }0)\,.
\label{2.20}
\end{eqnarray}
The Wilson loop is the quadrangle $x^{(1)},x^{(2)},x^{(3)},x^{(4)}$.\footnote{It is easy to check that
the vectors connecting the 4 cusps
$x^{(12)}=x^{(1)}-x^{(2)}$, $x^{(23)}=x^{(2)}-x^{(3)}$, $x^{(34)}=
x^{(3)}-x^{(4)}$, $x^{(41)}=x^{(4)}-x^{(1)}$ are null.
The two non-trivial parameters $s$ and $t$ are given by~\cite{AM1}
$
-(2 \pi)^2 s = 2 x^{(23)} \cdot x^{(34)}= | x^{(2)} - x^{(4)} |^2= \frac{8}{(1-b)^2}\,, \ \ \ \
- (2 \pi)^2 t = 2 x^{(12)}\cdot x^{(23)}= | x^{(1)} - x^{(3)} |^2= \frac{8}{(1+b)^2}\,.
$
}
Since ${U_{dil}}$ in \rf{2.5} is invariant under $SO(2,4)$, its value on this solution should be
the same as for $b=0$, i.e. $U_{dil}=2$. Substituting~\eqref{2.19}
into ~\eqref{2.4} gives
\begin{equation}
{\cal C}_{dil}( W_4^{(irreg)}, {a})=2 c_{dil} \int_{-\infty}^{\infty} d u d v
\ \Big[ \frac{(\cosh u\ \cosh v)^{-1} }
{{\rm q} -2 a_1 \tanh u -2 a_2 \tanh v + 2 \tilde{a}_0 \tanh u \tanh v} \Big]^4\,,
\label{2.22}
\end{equation}
where $\tilde{a}_0$ is defined by
\begin{equation}
\tilde{a}_0 = a_0 \sqrt{1+b^2} + {1 \over 2} b ({\rm q}-2) \ ,
\label{2.23}
\end{equation}
while ${\rm q}$ is again given by~\eqref{1.14} (without the replacement $a_0\to \tilde{a}_0$).
As \rf{2.7} and ~\eqref{2.22} are related by replacing $a_0\to \tilde{a}_0$
we get from \eqref{2.8},\eqref{1.13}
\begin{eqnarray}
&&{\cal C}_{dil}( W_4^{(irreg)}, {a})=c_{dil}
\frac{16 a_1 a_2 - 8 {\rm q} \tilde{a}_0 - ({\rm q}^2 +4 \tilde{a}_0^2 -4 a_1^2 -4 a_2^2)\ \log \zeta}
{12({\rm q} \tilde{a}_0 -2 a_1 a_2)^3}\,,\label{2.24} \\
&&
\zeta= \frac{ ({1 \over 2} {\rm q}- \tilde{a}_0 - a_1 + a_2) ({1 \over 2} {\rm q}- \tilde{a}_0 + a_1 - a_2)}
{({1 \over 2} {\rm q}+ \tilde{a}_0 - a_1 - a_2)({1 \over 2} {\rm q}+ \tilde{a}_0 + a_1 + a_2)} \,.
\label{2.26}
\end{eqnarray}
It is straightforward to check, using the locations of the cusps
in~\eqref{2.20}, that the argument of the logarithm in \rf{2.26}
is again the conformally-invariant ratio in ~\eqref{ges}.
Note that while $b$ may be interpreted as the parameter of a conformal
transformation relating the regular and the irregular polygons,
$a$ is kept fixed under
this transformation, so that $\zeta$ in \rf{2.26} now depends on $b$, in contrast to the one in
\rf{1.13},\rf{2.8}.
After the
same steps as in the case of the regular quadrangle we find that eq.~\eqref{2.24}
can indeed be written as~\eqref{1.15}, where $\Delta=4$, $x^{(i)}$'s are given
by~\eqref{2.20} and
\begin{eqnarray}
&& |x^{(1)}- x^{(3)} |^2 |x^{(2)}- x^{(4)} |^2 =
{64 {\rm r}^4 \over (1-b^2)^2} \ , \label{bbb}\\
&&F(\zeta) =
\frac{ c_{dil}}{3}\ \frac{\zeta}{(\zeta-1)^3}[-2(\zeta-1) +(\zeta +1) \log \zeta]\,.
\label{2.27}
\end{eqnarray}
The function $F(\zeta)$ is thus the same as in \rf{2.17}, as expected.
We find also that
\begin{equation}
{\cal C}_{dil}( W_4^{(irreg)}, {a})_{_{|a|\to \infty}}
\ = \ { {\rm C}_4 \over |a|^8 } \ , \ \ \ \ \ \ \ \ \
{\rm C}_4=\frac{4c_{dil}\ {\rm r}^4 }{3b^3 \ }
\Big[-2b + (1+b^2)\log \frac{1+b}{1-b} \Big]\,,
\label{2.28}
\end{equation}
which reduces to ~\eqref{2.18} in the limit $b \to 0$.
Let us note also
that starting with the general expression for the correlator \rf{1.15},\rf{2.17}
we may consider the case (obtained by a conformal transformation from \rf{1.14} \cite{ryang})
when two of the four cusps (e.g., $x^{(3)},x^{(4)}$) are sent to infinity.
Then \rf{1.15} with $\Delta=4$ takes the following form:
\begin{equation} \label{ry}
{\cal C}_{dil}( W_4)_{x^{(3)},x^{(4)}\to \infty} = {1 \over
| a- x^{(1)}|^{2} | a- x^{(2)}|^{2}} \ F(\tilde \zeta)\ ,
\ \ \ \ \ \ \ \ \ \tilde \zeta = {| a- x^{(2)}|^{2}\over | a- x^{(1)}|^2} \ . \end{equation}
Indeed, in this limit ${|a- x^{(4)}|^2 / | x^{(2)}- x^{(4)}|^2} \to 1$ and
${|x^{(1)}- x^{(3)}|^2 / | a- x^{(3)}|^2} \to 1$ in \rf{ges}, while in the prefactor of \rf{1.15}
the factors involving $x^{(3)}, x^{(4)}$ cancel in the same way.
\subsubsection{Case of dilaton with non-zero $S^5$ momentum }
The above discussion
can be generalized to the case when the dilaton operator \rf{2.4} carries an angular
momentum $j$ along $S^1 \subset S^5$. Again,
we have to
evaluate~\eqref{2.4} on the solution~\eqref{1.4} or~\eqref{2.19}. Since these solutions do not
depend on the sphere coordinates, the factor ${\rm X}^j$ in \rf{2.4} equals unity, so that
the correlation function is given by the same expressions as in~\eqref{2.7} or \eqref{2.22}
with the power 4 replaced with $\Delta=4+j$.
In the case of the more general
solution~\eqref{2.19} we get instead of \rf{2.22}
\begin{equation}
{\cal C}_{dil}( W_4^{(irreg)}, {a})=2 c_{dil} \int_{-\infty}^{\infty} d u d v
\ \Big[ \frac{ (\cosh u \ \cosh v)^{-1} }
{{\rm q} -2 a_1 \tanh u -2 a_2 \tanh v + 2 \tilde{a}_0 \tanh u \tanh v} \Big]^{\Delta}
\label{2.30.1}
\end{equation}
where $\tilde{a}_0$ is defined in \rf{2.23}.
Since the answer should be of the form~\eqref{1.15},
we are interested in computing the function $F(\zeta)$ of one variable only, so
we may set $a_1=a_2=a_3=0$.
Having computed
$\bar F(a_0) = F(\zeta(a_0))$ we may find $a_0=a_0 (\zeta)$ from~\eqref{2.26}
and thus restore $F(\zeta)$.
Performing the integral~\eqref{2.30.1} we obtain
\begin{equation}
{\cal C}_{dil}( W_4^{(irreg)}, a_0)=\frac{2 \pi c_{dil}}{(1-a_0^2)^{\Delta}} \
\Big(\frac{\Gamma[\frac{\Delta}{2}]}{\Gamma[\frac{\Delta+1}{2}]}\Big)^2\
{\ }_2F_1(\frac{1}{2}, \frac{\Delta}{2}, \frac{\Delta+1}{2}, \varrho^2)\,,
\label{2.31}
\end{equation}
where ${}_2F_1$ is the hypergeometric function and $\varrho$ is a function of $a_0$
given by
\begin{equation}
\varrho\equiv \frac{2 \tilde{a}_0}{1-a_0^2}= \frac{ 2 a_0 \sqrt{1+b^2} - b (1+a_0^2)}{1-a_0^2} \ .
\label{2.34}
\end{equation}
To extract $F$ in \rf{1.15}
we have to multiply~\eqref{2.31} by the factor
$$
|x^{(1)}- x^{(3)} |^{-\Delta/2}|x^{(2)}- x^{(4)} |^{-\Delta/2}
\prod_{i=1}^4 |x_{m}^{(i)}-a_m|^{\Delta/2}\,.
$$
This gives
\begin{equation}
F(\zeta(\varrho))= {2^{-{3 \over 2} \Delta +1} \pi \ c_{dil}}\ \Big(
\frac{\Gamma[\frac{\Delta}{2}]}{\Gamma[\frac{\Delta+1}{2}]}\Big)^2 \
(1-\varrho^2)^{\Delta/2}{\ }_2F_1(\frac{1}{2}, \frac{\Delta}{2}, \frac{\Delta+1}{2}, \varrho^2)\,.
\label{2.33}
\end{equation}
Finally, we can express $\varrho$ in terms of $\zeta$ using~\eqref{2.26}:
\begin{equation}
\varrho = \frac{ 1- \sqrt{\zeta}}{1+ \sqrt \zeta }\,.
\label{2.35}
\end{equation}
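As a check, at $b=0$ (where $\varrho = \frac{2 a_0}{1-a_0^2}$) eq.~\eqref{2.26} with $a_1=a_2=a_3=0$ gives $\sqrt{\zeta} = \frac{1- 2a_0 - a_0^2}{1+ 2a_0 - a_0^2}$, so that
\begin{eqnarray}
\frac{1-\sqrt{\zeta}}{1+\sqrt{\zeta}} = \frac{4 a_0}{2 (1-a_0^2)} = \varrho \ , \nonumber
\end{eqnarray}
in agreement with~\eqref{2.35}.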
One can check that setting $j=0$, i.e. $\Delta =4$,
gives back our earlier expression~\eqref{2.27}.
\subsubsection{Generalization to cusped Wilson loop with an $S^5$ momentum }
One can formally repeat the above discussion in the case
when the euclidean 4-cusp world surface \rf{1.4} is generalized to the presence of a
non-zero angular momentum in $S^5$ (we again set the scale ${\rm r}=1$)\footnote{This solution is related by an analytic continuation
and a conformal transformation \cite{KRTT} to the large spin limit of the folded $(S,J)$ spinning
string \cite{ft}, see also section 4.}
\begin{eqnarray}
&& z=\frac{1}{\cosh k u\ \cosh v}\,, \qquad \ \ \ \
x_0= \tanh k u\ \tanh v\,, \nonumber\\
&&
x_1= \tanh k u \,, \quad x_2= \tanh v \,, \quad x_3=0\ ,\ \ \ \ \ \ u, v \in (-\infty, \infty)\ ,\no \\
&&\varphi=- i \ell u\,, \ \ \ \ \ \ \ \ \ \ k^2 = 1 + \ell^2 \ .
\label{2.36}
\end{eqnarray}
This background
\rf{2.36} solves the string equations in conformal gauge.
Here $\varphi$ is an angle of a great circle in $S^5$ and $\ell$
may be interpreted as a density of the corresponding
angular momentum. We assume that
$u$ plays the role of a euclidean time; since $v$ is non-compact,
the total angular momentum is formally infinite.
The presence of $k$ does not influence the positions of the 4
cusps \rf{2.16}.
The correlation function with the dilaton operator is again given by \rf{2.3},\rf{2.4}.
Since now $\varphi$ in \rf{ku}
is non-zero\footnote{In \rf{ku} $\theta=0$ as we consider rotation in a great circle of $S^5$.}
there will be a non-trivial
dependence on the product of the dilaton momentum
$j$ and the angular momentum density $\ell$ (cf. \rf{2.7},\rf{2.22},\rf{2.30.1})
%
\begin{eqnarray}
&&{\cal C}_{dil}( W_{4, \ell}^{(reg)}, {a}) \label{2.40} \\ &&=
2 c_{dil} \int_{-\infty}^{\infty} d u d v \ \Big[ \frac{ (\cosh k u\ \cosh v)^{-1} }
{{\rm q} -2 a_1 \tanh k u -2 a_2 \tanh v + 2 a_0 \tanh k u\ \tanh v} \Big]^{4+j}\ e^{ j \ell u} \ .
\no
\end{eqnarray}
We used that in \rf{2.5} ${\cal L}_{AdS_5} =k^2 +1\ , \ \ \
{\cal L}_{S^5}= (\partial_{\alpha}\varphi)^2=-\ell^2$ so that again $U_{dil}= {\cal L}=2$.
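As a quick check of the last statement, from \rf{2.36} one finds for the $u$-part of ${\cal L}_{AdS_5}$
\begin{eqnarray}
\frac{1}{z^2}\big[ (\partial_u z)^2 - (\partial_u x_0)^2 + (\partial_u x_1)^2 \big]
= k^2\ \frac{\sinh^2 ku - \sinh^2 v + \cosh^2 v}{\cosh^2 ku} = k^2 \ , \nonumber
\end{eqnarray}
and similarly the $v$-part equals $1$.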
The resulting integral can be studied for generic $j$ (it is useful to
change variable $u \to u'=k u $).
For $j=0$ we get back to
the same expression as~\eqref{2.7} with an extra factor of $ k^{-1} = (1 + \ell^2)^{-1/2}$, i.e.
\begin{equation} j=0: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
{\cal C}_{dil}( W_{4,\ell }^{(reg)}, {a})=\frac{1}{\sqrt{1+\ell^2}}\
{\cal C}_{dil}( W_{4}^{(reg)}, {a})\,.
\label{2.41}
\end{equation}
It is also straightforward to discuss a generalization to
irregular 4-cusp surface with $b\not=0$ (cf. \rf{2.27}).
\subsection{Correlation function with chiral primary operator}
Let us now consider a similar computation
with the chiral primary operator ${\cal O}_j = \text{tr} Z^j$
instead of the dilaton operator.
The bosonic part of the corresponding vertex operator
\cite{cor,zar,rt} can be written in a form similar to \rf{2.4}
\begin{eqnarray}
&& {\rm V}_j ({a})=c_j \int d^2 \xi\ \Big[\frac{z}{z^2+(x_m-a_m)^2}\Big]^\Delta \ {\rm X}^j \
U\,,
\label{2.43}\\
&& \Delta = j \ , \ \ \ \ \ \ \ \ c_j=\frac{\sqrt{\lambda}}{8 \pi N} \sqrt{j} (j+1) \ ,
\label{2.44}
\end{eqnarray}
where ${\rm X}^j$ is the same as in \rf{2.4} while the 2-derivative $U$
part is more complicated~\cite{bt2}
\begin{eqnarray}
&&
U=U_1+U_2+U_3\,, \ \ \ \ \ \ \ \ \
U_1=\frac{1}{z^2}\big[(\partial_{\alpha}x_m)^2 -(\partial_{\alpha}z)^2\big]-{\cal L}_{S^5}\,,
\label{2.451}\\
&&
U_2=\frac{8}{(z^2 +|x-a|^2)^2}
\Big[|x-a|^2 (\partial_{\alpha}z)^2 - [(x_m -a_m) \partial_{\alpha} x_m]^2\Big]\,,
\nonumber\\
&&
U_3 =\frac{8 (|x-a|^2 -z^2)}{z (z^2 +|x -a|^2)^2} \ (x_n-a_n)\partial_{\alpha}x_n
\ \partial_{\alpha}z\,.
\label{2.45}
\end{eqnarray}
For simplicity, we will consider the case of the regular 4-cusp Wilson loop;
the corresponding solution~\eqref{1.4} does not depend on the $S^5$ coordinates so ${\rm X}^j=1$.
To find the function $F(\zeta)$ in \rf{1.15} it is sufficient, as in section 3.1.3, to
choose the special case of $a=(a_0, 0,0,0)$.
Remarkably, in this case $U$ can be put into the following
simple form (cf. \rf{2.34})
\begin{eqnarray}
&&U_{{a=(a_0, 0,0,0)}} =\frac{2}{\cosh^2 u \ \cosh^2 v} \frac{1+\varrho^2 -
(\sinh u\ \sinh v + \varrho\ \cosh u\ \cosh v)^2}{(1+\varrho\ \tanh u\ \tanh v)^2}\,,
\label{2.46}
\\
&&\varrho\equiv \frac{2 a_0}{1-a_0^2}\,.
\label{2.47}
\end{eqnarray}
Substituting it into~\rf{2.3},\eqref{2.43} gives
\begin{eqnarray}
&&{\cal C}_j(W_4^{(reg)}, a_0)=\frac{2 c_j}{(1-a_0^2)^j} \ \int_{-\infty}^{\infty}d u dv\
\Big[\frac{(\cosh u\ \cosh v )^{-1}}{1+\varrho\ \tanh u\ \tanh v}\Big]^{j+2}
\nonumber\\
&&\qquad\qquad \qquad\qquad\qquad \qquad \times \Big[1+\varrho^2 -\big(\sinh u \ \sinh v + \varrho\ \cosh u\
\cosh v\big)^2\Big] \,.
\label{2.48}
\end{eqnarray}
For an arbitrary $j$ this integral is rather complicated but can be easily done
for specific values of $j$. For instance, for $j=2$ we obtain:
\begin{equation}
{\cal C}_2(W_4^{(reg)}, a_0)=\frac{4 c_2}{3(1-a_0^2)^2 \varrho}\ \log\frac{\varrho+1}{\varrho-1}\,.
\label{2.49}
\end{equation}
To compute $F(\zeta)$ we have to multiply~\eqref{2.49} by the factor (see \rf{1.15}) \
\begin{equation} \big( |x^{(1)} - x^{(3)}| \ |x^{(2)} - x^{(4)}|\big)^{-1} \
\prod_{i=1}^4 |a- x^{(i)} | = { 1 \over 8}\, \big[2- (a_0-1)^2\big] \big[2- (a_0+1)^2\big] \,,
\label{2.50}
\end{equation}
where the positions of the cusps $x^{(i)}$ are given by~\eqref{2.16} and
we used that
$\Delta=j=2$.
Expressing $\varrho$ in terms of $\zeta$ in \rf{1.13},\rf{1.14}
gives the same relation as in~\eqref{2.35}.
As a result, we find (cf. \rf{2.17})\footnote{Let us note that the same (up to a constant factor)
expression
is found by formally setting $\Delta=2$ in \rf{2.31},\rf{2.33}.}
\begin{equation}
j=2: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
F(\zeta)=\frac{ c_2}{3}\ \frac{\sqrt{\zeta} }{\zeta-1}\ \log \zeta \,.
\label{2.51}
\end{equation}
We can then restore the dependence of the correlator on all 4 coordinates of $a_m$ getting
the following analog of \rf{2.8}
\begin{equation}
{\cal C}_2(W_4^{(reg)}, {a})=-\frac{c_2\ {\log \zeta}}{3 ({\rm q} a_0 -2 a_1 a_2)}\,,
\label{2.52}
\end{equation}
where ${\rm q}$ is given by~\eqref{1.14} and $\zeta$ is defined in~\eqref{1.13}.
Taking the position of the operator $a$ to infinity
we get the corresponding OPE coefficient
(we restore the factor of the scale ${\rm r}$ of the loop)
\begin{equation}
{\cal C}_2(W_4^{(reg)}, {a})_{|a|\to \infty} = \frac{8 c_2\ {\rm r}^2}{3 \ |a|^4}\,.
\label{2.53}
\end{equation}
Here the power of $|a|$ reflects the value of the dimension $\Delta=2$ (cf. \rf{2.18}).
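This is also consistent with the general formula \rf{1.16}: from \rf{2.51} one finds
\begin{eqnarray}
F(\zeta \to 1) = \frac{c_2}{3} \ , \ \ \ \ \ \ \ \
\big( |x^{(1)} -x^{(3)}| \, |x^{(2)} -x^{(4)}| \big)^{\Delta/2}\Big|_{\Delta=2} = 8\, {\rm r}^2 \ , \nonumber
\end{eqnarray}
and their product reproduces the coefficient in \rf{2.53}.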
We may also study the limit when $a_m$ is approaching the position of
one of the cusps. Choosing $a_m$ as in~\eqref{1.17} and expanding
for small $\epsilon$ gives (cf. \rf{2.28.2})
\begin{equation}
{\cal C}_2(W_4^{(reg)}, {a}) \to
\frac{1}{3 \epsilon^2}\
\frac{\log \frac{\alpha_1^2 +\alpha_2^2 +\alpha_3^2-\alpha_0^2}{2(\alpha_0-\alpha_1)(\alpha_0-\alpha_2)}}
{ -3 \alpha_0^2+ \alpha_3^2 +(\alpha_1-\alpha_2)^2 +2 \alpha_0 (\alpha_1+\alpha_2) }\,.
\label{2.54}
\end{equation}
\section{Comment on relation to 3-point correlator with
two infinite spin twist-2 operators }
As is well known, the anomalous dimension of the large spin $S$ twist-2 operator
(the coefficient of the $\ln S$ term in it) is closely related to the UV anomaly
in the expectation value of a Wilson loop with a null cusp \cite{kor,Martin,AM1,me,AMtwo,refs}.
At strong coupling, this relation can be understood \cite{KRTT} by relating the
corresponding string world surfaces by a world-sheet euclidean continuation and an $SO(2,4)$
conformal transformation. This may be effectively interpreted as a relation between
the 2-point function of twist-2 operators $\langle {\cal O}^\dagger_S (x) {\cal O}_S (0)
\rangle $ with $S\to \infty$
and the singular part of the expectation of the cusped Wilson loop
$\langle W_4 \rangle $.
Below we shall discuss if such a relation may apply
also if one includes in the respective
correlators an extra ``light'' operator ${\cal O}_\Delta$ \ ($ \Delta \ll \sqrt{\lambda} $),
\begin{equation}
{\cal C} (W_4,a) =
\frac{\langle W_4\ {\cal O}_\Delta ({a})\rangle}{\langle W_4 \rangle}\, \ \ {\rm vs.} \ \ \
K ({\rm x}^{(1)}, {\rm x}^{(2)}, a)= \frac{\langle {\cal O}_S^{\dagger} ({\rm x}^{(1)} )\ {\cal O}_S({\rm x}^{(2)} )\ {\cal O}_\Delta({a})\rangle}
{\langle {\cal O}_S^{\dagger}({\rm x}^{(1)} )\ {\cal O}_S({\rm x}^{(2)} ) \rangle}\,.
\label{3.1}
\end{equation}
This relation may be expected only in the {\it infinite} spin limit and
one is to assume a certain correspondence between the
locations of the 4 cusps $x^{(i)}$ in $W_4$ and the positions ${\rm x}^{(1)},{\rm x}^{(2)}$
of the twist-two operators.
The reason why this relation may be expected
at strong coupling is the following.
Computed at strong coupling with a ``light'' dilaton operator
both correlators in \rf{3.1}
are given by the vertex operator corresponding to ${\cal O}_\Delta$
evaluated on the semiclassical world surfaces associated, respectively,
with $\langle W_4 \rangle$
and with
$\langle {\cal O}_S^{\dagger}({\rm x}^{(1)} )\ {\cal O}_S({\rm x}^{(2)} ) \rangle$.
These two surfaces are closely related \cite{KRTT,bt1}.
As we shall see below, the integrands for the corresponding semiclassical
correlators will be the same but the integration regions, however,
will differ. For a special choice of
the location of the ``light'' operator the results for the two integrals
will be the same up to a factor of 2.
Indeed, the solution~\eqref{1.4} is essentially the same as
the semiclassical trajectory
supported by the two twist-2 operators in the large spin limit
$S \gg \sqrt{\lambda}$
\begin{eqnarray}
&& z=\frac{1}{\cosh (\kappa \tau_e)\ \cosh (\mu \sigma)}\,, \quad
x_0= \tanh (\kappa \tau_e)\ \tanh (\mu \sigma)\,, \nonumber\\
&&
x_1= \tanh (\kappa \tau_e) \,, \quad x_2= \tanh (\mu \sigma) \,, \quad x_3=0\,,
\label{3.3}\\
&&\kappa= \mu \gg 1 \ , \ \ \ \ \ \ \
\kappa =\frac{\Delta_S-S}{\sqrt{\lambda}}\,, \qquad \mu =\frac{1}{\pi}\log S\,.
\label{3.3.0}
\end{eqnarray}
The relation
$\kappa=\mu$ follows from the Virasoro
condition, which is also implied by the marginality of the twist-2 operator.
This background is equivalent to the one
found by a euclidean rotation ($\tau \to i \tau_e$) of the large spin limit
of the folded spinning string in $AdS_5$.\footnote{
To find the
solution with the singularities
prescribed by the two twist-2 operators one has to
act on~\eqref{3.3} with a two-dimensional conformal
map which sends the cylinder to the plane
with two marked points, see ~\cite{bt1,b} for details.}
Here $\tau_e \in (-\infty, \infty)$. In the original closed string solution
$\sigma \in [0,2\pi]$;
in fact, in \rf{3.3} we have $\sigma \in [0, {\pi\over 2}]$
and 3 other segments are assumed to be added similarly.
In the infinite spin limit $\mu \to \infty $ so
we may formally set $v= \mu \sigma \in [0, \infty)$.
Introducing also $u= \kappa \tau_e\in (-\infty, \infty)$ we conclude
that \rf{3.3} becomes equivalent to \eqref{1.4}, up
to the ``halved'' range of $v$.
More precisely, the operators considered in ~\cite{bt1} were defined on a euclidean
4-space and the corresponding surface had imaginary $x_2$; the real
surface equivalent to \rf{3.3} is obtained by the Minkowski continuation in
the target space $x_2 \to i x_0, \ x_0 \to x_2$.
Below we shall interchange the notation
$x_1 \leftrightarrow x_2$ compared to ~\cite{bt1}.
Since in ~\cite{bt1} the
operators were assumed to be located at $R^4$ points ${\rm x}^{(1,2)} =(\pm 1, 0, 0, 0)$
taking this continuation into the account
we conclude that
the solution~\eqref{3.3} is the semiclassical
trajectory saturating the two-point correlator of twist-2 operators
located at the following special points in the
Minkowski 4-space\footnote{Note that
if $\mu \sigma$ in~\eqref{3.3} were not extending to infinity, the boundary behavior of~\eqref{3.3}
would be different
from a rectangular Wilson loop. The boundary would be reached only for $u \to \pm \infty$
and on the boundary we would have a piece of the null line $x_1=x_0$ located
at $x_2=1$ and a piece of the null line $x_1=-x_0$ located at $x_2=-1$. These null lines
would no longer be connected on the boundary.}
\begin{equation}
{\rm x}^{(1)} =(0, 1, 0, 0) \ ,\ \ \ \ \ \ \ \ \ \ {\rm x}^{(2)} =(0, -1, 0,0) \ .
\label{3.3.1}
\end{equation}
Assuming for definiteness that ${\cal O}_\Delta$ is the dilaton operator
we conclude that the leading strong-coupling term
in the correlator of two infinite spin twist-2 operators and the dilaton
$K_{dil} =\frac{\langle {\cal O}_S^{\dagger} ({\rm x}^{(1)}) \ {\cal O}_S ({\rm x}^{(2)})\ {\cal O}_{dil}({a})\rangle}
{\langle {\cal O}_S^{\dagger}({\rm x}^{(1)})\ {\cal O}_S({\rm x}^{(2)}) \rangle}$
is then given by the same integral as in~\eqref{2.7} with the only difference being
that instead of the range of integration
$v \in (-\infty, \infty)$ that we had in the Wilson loop case
now
in the closed string case we have $v \in [0, \infty)$ with the whole
integral multiplied by 4.
Equivalently, the spatial
integral for the folded string solution is done over $\sigma \in [0, {\pi \over 2})$
with the result multiplied by 4 \cite{rt}.
The topology
of the world sheet parametrized by $(u,v)$ should be a disc (or a plane with infinity removed)
in the Wilson loop case and the cylinder (or a half-plane with infinity removed)
in the folded closed string case.
Note that if we set $a_0=a_2=0$ in the integrand in~\eqref{2.7} it becomes an
even function of $v$ and thus the integral over $v\in [0, \infty)$
is just half the integral over $v \in (-\infty,\infty)$.
Thus for the special values of $a$ the two correlators in \rf{3.1} are directly related.
For general $a$ we get for $K_{dil}$ an expression that is
different from \rf{2.8}
\begin{eqnarray}
&& K_{dil}= \frac{c_{dil} }{{3 ({\rm q}^2- 4 a_1^2)({\rm q} a_0- 2 a_1 a_2)^3}} \Big[
-4\big[{\rm q}^2-4a_1(a_0+a_1) +2{\rm q} a_2\big] ({\rm q} a_0 -2 a_1 a_2) \no \\
&& + ({\rm q}^2- 4 a_1^2) ({\rm q}^2+4a_0^2 -4 a_1^2 -4 a_2^2)
\log \frac{ ({1 \over 2} {\rm q}+ a_1) ({1 \over 2} {\rm q}+ a_0 - a_1 - a_2)}{({1 \over 2} {\rm q}- a_1)({1 \over 2} {\rm q}- a_0 + a_1 - a_2)}
\Big]\,.
\label{3.3.2}
\end{eqnarray}
This expression containing the logarithm of a coordinate ratio
appears to be in conflict with the expected structure of the 3-point function
of conformal primary operators ($x^{(ij)}=x^{(i)} - x^{(j)}$)
\begin{equation}
\langle {\cal O}_{\Delta_1} ({{\rm x}}^{(1)})
{\cal O}_{\Delta_2} ({{\rm x}}^{(2)})
{\cal O}_{\Delta_3} ({{\rm x}}^{(3)})
\rangle=
\frac{C_{123}}{|{{\rm x}}^{(12)}|^{\Delta_1+\Delta_2-\Delta_3}
|{{\rm x}}^{(23)}|^{\Delta_2+\Delta_3-\Delta_1}
|{{\rm x}}^{(31)}|^{\Delta_3+\Delta_1-\Delta_2}
}\,.
\label{3.4}
\end{equation}
In the present case with $\Delta_S \gg \Delta_{dil} =4$ and $a={{\rm x}}^{(3)}$
i\rf{3.4} can be explicitly written as
\begin{equation}
\frac{\langle {\cal O}_S^{\dagger} ({\rm x}^{(1)})\ {\cal O}_S
({\rm x}^{(2)})\ {\cal O}_{\Delta}({a})\rangle}
{\langle {\cal O}_S^{\dagger}({\rm x}^{(1)})\ {\cal O}_S({\rm x}^{(2)}) \rangle}
\ \approx \
\frac{C}{
|{{\rm x}}^{(1)} - a|^{\Delta} |{{\rm x}}^{(2)}- a|^{\Delta}
}\,,
\label{3.44}
\end{equation}
where $C$ is a constant which is finite in the $S \to \infty$ limit \cite{rt}.
An explanation of this apparent puzzle is as follows.
The standard argument leading to \rf{3.4} assumes
that
the 3 points ${\rm x}^{(i)}$ can be spatially separated (as is always the case in $R^4$ but not in $R^{1,3}$ we consider here).
If we
choose the dilaton operator insertion point $a$ to be away from the
$(x_0,x_2)$-plane ($(x_1,x_2)$ plane of rotation of the original spinning string)
we can spatially
separate all the operators. For example, we may set $a_0=a_2= 0$.
In this case the integrand in~\eqref{2.7} becomes an
even function of $v$ and the integral over $v\in [0, \infty)$
is just half the integral over $v \in (-\infty,\infty)$, i.e.
~\eqref{2.8} should be proportional to \rf{3.44}.
Indeed, taking the limit $a_0=a_2= 0$
in ~\eqref{2.8}, \rf{3.44} we find that the logarithmic term goes away and
we obtain
\begin{equation}
{\cal C}_{dil}( W_4^{(reg)}, a)_{_{a_0=a_2=0}}= {1 \over 2} (K_{dil}){_{{a_0=a_2=0}}}
=
\frac{32 c_{dil}}{9}\ \frac{1}{[(a_1-1)^2+a_3^2]^2\ [(a_1+1)^2+a_3^2]^2} \,.
\label{3.5}
\end{equation}
This is consistent with \rf{3.44} if we recall the expression \rf{3.3.1}
for the locations of the twist-2 operators.
Similar analysis can be repeated for the correlation function with the chiral primary operator
with $\Delta=j=2$.
Taking the limit
$a_0=a_2=0$ in ~\eqref{2.52} leads to a similar expression
\begin{equation}
{\cal C}_{2}( W_4^{(reg)}, a)_{_{a_0=a_2=0}}= \frac{8 c_2}{3}\frac{1}{[(a_1-1)^2
+a_3^2]\ [(a_1+1)^2 +a_3^2]}\,,
\label{3.6}
\end{equation}
again consistent with \rf{3.4},\rf{3.44}.
Finally, let us consider the generalized cusp
solution with $S^5$ momentum~\eqref{2.36}.
The generalization of the solution \rf{3.3} to the case when large spin operators carry
also a large angular momentum $J$ in $S^5$ is given by
\begin{eqnarray}
&&z=\frac{1}{\cosh (\kappa \tau_e)\ \cosh (\mu \sigma)}\,, \qquad
x_0= \tanh (\kappa \tau_e)\ \tanh (\mu \sigma)\,, \no\\
&&x_1= \tanh (\kappa \tau_e) \,, \qquad x_2= \tanh (\mu \sigma) \,, \qquad x_3=0\,,
\label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){3.6.1}\\
&&
\varphi=-i \nu \tau_e \,, \qquad \nu =\frac{J}{\sqrt{\lambda}}\ ,\
\ \ \ \ \ \
\kappa^2 =\mu^2 +\nu^2\,.
\label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){3.6.2}
\end{eqnarray}
Taking the scaling limit \cite{ft}
when $\kappa,\mu,\nu \to \infty$ with fixed
\begin{equation}
k\equiv {\kappa\over \mu}\ , \ \ \ \ \ \ \ \ \ell \equiv {\nu \over \mu}= {\pi J \over {\sqrt{\l}}\ \ln S} \
, \ \ \ \ \ \ \ \ k^2 =1 + \ell^2 \ ,
\label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){3.8}
\end{equation} and setting
$v = \mu \sigma\in [0, \infty), \ u= \mu \tau_e\in (-\infty, \infty)$
we obtain the background \rf{2.36}, again
up to a different range of $v$.
Taking the limit $a_0=a_2=0$ in ~\eqref{2.41} with $\ell\not=0$
gives
\begin{equation} {\cal C}_{dil}( W_{4,\ell }^{(reg)}, {a})_{_{a_0=a_2=0}}=
\frac{32 c_{dil}}{9 \sqrt{1+\ell^2}}\ \frac{1}{[(a_1-1)^2+a_3^2]^2\ [(a_1+1)^2+a_3^2]^2} \,.
\label{3.7}
\end{equation}
Comparing with the result ~\cite{rt} of the semiclassical computation of the
corresponding correlator
$\frac{\langle {\cal O}_{S, J}^{\dagger} {\cal O}_{S, J} {\cal O}_{dil}({a})\rangle}
{\langle {\cal O}_{S, J}^{\dagger} {\cal O}_{S, J} \rangle}$
we find again the agreement: the 3-point function coefficient
in eq. (4.18) of \cite{rt} also scales as ${1 \over \sqrt{1+\ell^2}}$ in the
limit \rf{3.8}.
The above discussion establishes a certain
relation between the two correlators in \rf{3.1}
at strong coupling. An open question is whether
a similar relation may hold also at weak coupling.
Another natural question is whether this relation may generalize
to correlators involving $W_n$ with $n>4$ and the corresponding number of
large spin twist-2 operators.
\section{On correlator of Wilson loops with $n > 4$ cusps and dilaton at strong coupling }
For number of cusps $n >4$ there are no explicit solutions like~\eqref{1.4}
or~\eqref{2.19} known so far. This makes it difficult to generalize the analysis
of the previous sections to $n>4$. Below we will calculate numerically
the large distance ($|a|\to \infty$) limit of the correlator \rf{1.1},\rf{2.3} with an even number of cusps $n$ at strong coupling,
i.e.
the corresponding OPE coefficient.
Minimal surfaces ending on regular polygons with an even number of
cusps were studied in \cite{AM2}, to which we refer the reader for details.
Such surfaces can be embedded in $AdS_3$
and are described, using the Pohlmeyer reduction of the classical $AdS_3$ string equations,
in terms of the two reduced-model fields:
a holomorphic function $p(\xi)$ and a field $\alpha(\xi,\bar \xi)$ satisfying the generalized sinh-Gordon equation
\begin{equation}
\label{sinh-gordon}
\partial \bar \partial \alpha +p \bar p\ e^{-2\alpha}-e^{2\alpha}=0\,.
\end{equation}
Here $\partial = {\partial \over \partial \xi}$ and $\xi$ is a complex coordinate on the world sheet $\xi= u+i v$.
In order to reconstruct the corresponding solution of the original string equations
one is to solve the following two matrix auxiliary linear problems
($\beta= (\xi,{\bar \xi})$)
\begin{eqnarray}\label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){lin}
(\partial_\beta +B^{L}_\beta)\psi_{L} =0\,, \ \ \ \ \ \ \ \ \ (\partial_\beta +B^{R}_{\beta})\psi_{R} =0\,, \
\end{eqnarray}
where the components of the flat connections are
\begin{eqnarray}
B^L_{\xi}= \left( \begin{array}{cc}
{1 \over 2} \partial \alpha & -e^\alpha \\
-e^{-\alpha}p(\xi) & -{1 \over 2} \partial \alpha \end{array} \right),~~~\ \ \ B^L_{\bar \xi}=\left( \begin{array}{cc}
-{1 \over 2} \bar \partial \alpha & -e^{-\alpha}\bar p(\bar \xi) \\
-e^\alpha & {1 \over 2} \bar \partial \alpha \end{array} \right),\\
B^R_{\xi}= \left( \begin{array}{cc}
-{1 \over 2} \partial \alpha & e^{-\alpha}p(\xi) \\
-e^{\alpha} & {1 \over 2} \partial \alpha \end{array} \right),~~~B^R_{\bar \xi}=\left( \begin{array}{cc}
{1 \over 2} \bar \partial \alpha & -e^\alpha \\
e^{-\alpha} \bar p(\bar \xi) & -{1 \over 2} \bar \partial \alpha \end{array} \right)\,.
\end{eqnarray}
The flatness of these connections is equivalent to eq. (\ref{sinh-gordon})
and the holomorphicity of $p(\xi)$. The solutions describing regular polygons
correspond to the holomorphic function being a homogeneous polynomial of degree $k-2$, with $n=2k$ and $\alpha$
being a function of the radial coordinate $|\xi|=\sqrt{\xi \bar \xi} $ only, i.e.
\begin{equation}
p(\xi)=\xi^{k-2}\,,~\ \ \ \ \ \ \ ~~\bar p(\bar \xi)=\bar \xi^{k-2}\,,~~ \ \ \ \ \ ~\alpha=\alpha(|\xi|)\,.
\end{equation}
In this case the sinh-Gordon equation \rf{sinh-gordon} reduces to the Painleve III equation.
The boundary conditions are that $\alpha$ is regular everywhere and $\hat \alpha = \alpha-{1 \over 4} \log p \bar p$
vanishes at infinity.
The solution to this equation can be written in terms of Painleve transcendentals.
While for computing the area of the world-sheet we only need to know the reduced fields,
in order to compute the correlation function with the
dilaton according to \rf{2.3} we need an explicit expression for the space-time coordinates.
The general strategy of finding the solution for the string space-time coordinates is as follows.
One should first find
the solutions $\psi_{L},\psi_R$.
Then the solution for the
string $AdS_3$ embedding coordinates $Y_M$ is given by
\begin{equation}
{\rm Y}_{a,\dot a} =( \psi_L^T)_a (\psi_R)_{\dot a} = \left( \begin{array}{cc}
Y_{-1}+i Y_0&Y_1-i Y_2 \\
Y_1+i Y_2& Y_{-1}-i Y_0 \end{array}\right)\,.
\end{equation}
The form of the solution in Poincare coordinates is determined by
\begin{eqnarray}
z= \frac{1}{Y_{-1}},~~~\ \ x_0=\frac{Y_0}{Y_{-1}},~~~ \ \ x_1=\frac{Y_1}{Y_{-1}},~~~ \ \ x_2=\frac{Y_2}{Y_{-1}} \,.
\label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){77}
\end{eqnarray}
Unfortunately, we do not know how solve the above linear problems for $\psi_{L,R}$, i.e.
how to compute the coordinates in \rf{77} and thus the integral in \rf{2.3} for $n > 4$. \footnote{In \cite{amf,Bubble} and subsequent developments (see \cite{Alday:2010kn} for a review) it was understood how to use integrability in order to compute the area of the world-sheet even without knowing its shape. It would be interesting to learn how to apply similar tricks to solve the problem at hand. On the other hand, following the methods of \cite{Gaiotto:2011tf} one could try to construct the shape of the world-sheet and then
solve the current problem.}
For that reason in this paper we shall focus on the
simpler problem of computing the large distance limit $|a| \to \infty$ of the correlation function
or the OPE coefficient.
In this limit the integral in ~\eqref{2.4} takes a simpler form
(here $\XX^j=1$; cf. \rf{1.16})
\begin{eqnarray}
&& {\cal C}} \def \CC {\C_{dil} (W_n, a)_{|a|\to \infty}= \frac{{\rm C}_n }{|{a}|^{2 \Delta}} \ ,
\ \ \ \ \ \ \ \ \ \ {\rm C}_n= c_{dil}\ I_{n,\Delta} \ , \label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){444} \\
&& I_{n,\Delta}= 8 \int d u d v \ z^\Delta \ e^{2\alpha} =
8
\int du dv \ \frac{e^{2\alpha}}{Y_{-1}^\Delta}\,,
\label{B0}
\end{eqnarray}
where $\Delta=4+j$\ (here we keep the angular momentum $j$ of the dilaton
general).
We used the fact that in the Pohlmeyer reduction the string Lagrangian or $U_{dil}$ in \rf{2.4}
is given by
$8 e^{2\alpha}$.
The integral \rf{B0} (and, in fact, the full integral in \eqref{2.4})
can be computed exactly in the two cases: (i) $n=4$ we already discussed above (see
\rf{2.31}) and (ii)
$n\to \infty$ when the cusped polygon becomes a circular Wilson loop.
In the $n=4$ case we get using ~\eqref{2.4},\eqref{1.4} (cf. \rf{2.1},\rf{2.31})
\begin{equation}
I_{4,\Delta} = {2 \pi } \ \Big(\frac{\Gamma[{\Delta\over 2}]}{\Gamma[{\Delta+1 \over 2}]} \Big)^2\,.
\label{n=4}
\end{equation}
The world-surface ending on a circular loop is given (in conformal gauge) by
~\cite{cor}
\begin{eqnarray}
&&z=\tanh u\,, \qquad x_1 =\frac{\cos v}{\cosh u}\,, \qquad
x_2 =\frac{\sin v}{\cosh u}\,, \qquad x_0=x_3=0\,; \nonumber\\
&&u\in [0, \infty)\,, \quad v\in [0, 2 \pi]\,.
\label{cir}
\end{eqnarray}
This leads to
\begin{equation}
I_{\infty,\Delta}= \frac{4 \pi }{\Delta-1} \ \,.
\label{cir2}
\end{equation}
For the case of general even $n$ and $j=0$, i.e.
$\Delta=4$ we can compute the integral \rf{B0} numerically.
The numerics is very accurate since the
factor $z^4$ makes the integrand decay very fast.
Let us simply present the results for $I_n \equiv I_{n, \Delta=4}$ in a few cases
\begin{eqnarray}
I_4={ 32\over 9}& \approx & 3.55556\no \\
I_6& \approx &3.9090
\no \\
I_8& \approx & 4.04496\no \\
I_{10}& \approx &4.10648\no \\
I_{\infty}=\frac{4 \pi}{3} & \approx & 4.18879\,,
\label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){six}\end{eqnarray}
where we have also included the two special cases
discussed above when $I_n$ is known exactly.
Unfortunately, we could not find analytic expressions in agreement with these
numbers.\footnote{$I_6$ is well approximated by $4 \sqrt{\frac{3}{\pi}}$.}
\section{Correlation function of cusped Wilson loop with\\ dilaton operator at weak coupling}
Let us now consider the computation of the correlator \rf{1.1}
\begin{equation}
{\cal C}(W_n, {a})=\frac{\langle W_n {\cal O} ({a})\rangle}{\langle W_n\rangle}\,,
\label{6.1}
\end{equation}
in the weakly coupled planar $SU(N)$
${\cal N}=4$ supersymmetric gauge theory. Here the expectation values are computed
using gauge theory path integral and\footnote{The additional coupling to the scalars in the locally-supersymmetric Wilson loop
\cite{Juan} drops out because the null polygon contour
consists of null lines.}
\begin{equation}
W_n =\frac{1}{N}{\rm tr}\ {\cal P}\ e^{i g \oint_{\gamma}A_{m}dx^{m}}\,,
\label{6.2}
\end{equation}
Here we rescaled the fields with the coupling constant $g$ (with $\lambda = g^2 N$)
so that the ${\cal N}=4$ Lagrangian is
\begin{equation}
{\cal L}_{{\cal N}=4}=-\frac{1}{4}{\rm tr}(F_{mn}^2 + \dots)
\label{6.3}
\end{equation}
with $g$ appearing only in the vertices. We use the conventions
\begin{equation}
A_{m}=A_{m}^r T^r\,, \qquad {\rm tr}(T^r T^s)=\delta^{r s}\,, \qquad r, s =1,\dots, N^2-1\,.
\label{6.3.1}
\end{equation}
The path $\gamma$ in~\eqref{6.2} is the union of $n$ null segments of the form
\begin{equation}
\gamma^{(i)}_{m} (t)=x^{(i)}_{m} + t (x^{(i+1)}_{m}-x^{(i)}_{m})\,,\ \ \qquad t \in [0, 1]\,,
\label{6.3.2}
\end{equation}
where $x^{(i)}_{m}$ ($i=1,...,n$) denote the locations of the cusps.
The dilaton operator (which is a supersymmetry descendant of $\text{tr} Z^2$)
is essentially
the ${\cal N}=4$ gauge theory Lagrangian up to a total derivative (see, e.g., ~\cite{KTvR})\footnote{
Up to the scalar and the fermion equation of motion terms ${\cal O}_{dil}$ is thus given by the
YM Lagrangian plus the Yukawa and the quartic scalar interaction terms.}
\begin{equation}
{\cal O}_{dil}=\ \hat{c}_{dil}\ {\rm tr}(F_{mn}^2 +\Phi^I \partial^2 \Phi^I +\bar \psi \gamma \cdot
\partial \psi +\dots)\,,
\label{6.4}
\end{equation}
where $\Phi^I$ are the scalars and $\psi$ are the fermions and we did not write
explicitly the terms
of order $g$ and $g^2$.
The normalization coefficient $\hat{c}_{dil}$ is given by~\cite{Liu}
\begin{equation}
\hat{c}_{dil}=\frac{\pi^2}{4 \sqrt{3} N}\,.
\label{6.5}
\end{equation}
The leading order contribution to \rf{6.1}
(to which we will refer as the ``tree level'' one) is proportional to $g^2$
as one can easily see from~\eqref{6.1}, \eqref{6.2}. To compute~\eqref{6.1} to this order we
have to expand $W_n$ to order $g^2$. Hence, we can set $g=0$ in the Lagrangian~\eqref{6.3}
and in the dilaton operator~\eqref{6.4}. Therefore, for the purpose of computing the leading order term
in \rf{6.1} we can take
\begin{equation}
{\cal O}_{dil} \ \to \ \hat{c}_{dil}\ {\rm tr} F_{mn}^2=
2 \hat{c}_{dil}\ (\partial_{m} A_{n}^r \partial^{m} A^{n r} -
\partial_{m} A_{n}^r \partial^{n} A^{m r})\,.
\label{6.6}
\end{equation}
The gluon propagator in the above conventions is
\begin{equation}
\langle A_{m}^r({x}) A_n^s(0) \rangle =
-\frac{1}{4\pi^2 {|x|}^2}\ \eta_{mn} \delta^{r s}\,.
\label{6.7}
\end{equation}
We will see that just like at strong coupling, the weak coupling correlator ~\eqref{6.1} is
finite, i.e. we do not need
to introduce a UV regularization in~\eqref{6.7}. Also note that to compute~\eqref{6.1}
to order $g^2$ we can replace $\langle W_n \rangle$ in the denominator with unity. Therefore,
we obtain
\begin{eqnarray}
&&{\cal C}^{(g^2)}_{dil}(W_n, {a})=
\langle W_n {\cal O}_{dil}({a})\rangle_{tree}\nonumber \\
&&
=-\frac{2 \hat{c}_{dil}\ g^2}{N}
\langle {\cal P} \oint A_k^s({x}) dx^k \oint A_{l}^s({x}')dx'^l
(\partial_p A_q^r \partial^p A^{q r}- \partial_p A_q^r \partial^q A^{p r})({a}) \rangle \ .
\label{6.8}
\end{eqnarray}
The path ordering symbol ${\cal P}$ means that ${x}'$ in
the second integral is placed between the origin (an arbitrary point along the loop, for instance one of the cusps) and ${x}$.
Now using that
\begin{equation}
\langle A_{k}^r({x}) \partial_p A_q^s({a})\rangle
= -\frac{1}{4\pi^2}{\partial \over \partial {a^p}} \frac{\eta_{k q} \delta^{r s}}{|{a}-{x}|^2} =
-\frac{1}{2\pi^2} \frac{(a-x)_p \eta_{kq} \delta^{r s}}{|{a}-{x}|^4} \ ,
\label{6.9}
\end{equation}
and
performing the Wick contractions we obtain ($\lambda=g^2 N$)
\begin{eqnarray}
&&{\cal C}^{(g^2)}_{dil}(W_n, {a})=
-\frac{\hat{c}_{dil}\ \lambda}{\pi^4} \ {\cal P} \Big(
\oint \oint \Big[
\frac{({a} -{x})\cdot({x}-{x}')}{|{a}-{x}|^4|{a}-{x}'|^4}
d {x} \cdot d {x}' \no
\\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \frac{({a} -{x}) \cdot d{x}'}{|{a}-{x}|^4}
\frac{({a}-{x}') \cdot d{x}} {|{a}-{x}'|^4} \Big]\Big) \,.
\label{6.10}
\end{eqnarray}
So far our discussion have been general and applicable for any number of cusps
$n$. Let us now specify to $n=4$.
Computing the ${\cal P}$-ordered integrals
in~\eqref{6.10} we obtain for generic locations of 4 null cusps
\begin{equation}
{\cal C}_{dil}(W_4^{(reg)}, {a}) =-\frac{\hat{c}_{dil}\ \lambda}{2\pi^4 }\
\frac{ |x^{(1)} - x^{(3)}|^2\ |x^{(2)} - x^{(4)} |^2 }{\prod_{i=1}^4 |a-x^{(i)}|^2}\,.
\label{6.11}
\end{equation}
This agrees with the expected structure \rf{1.15} of the correlator
(for the dilaton $\Delta=4$) with
the leading weak-coupling term in the
function $F(\zeta)$ thus being simply a constant
\begin{equation}
F(\zeta)=-\frac{ \hat{c}_{dil}\ \lambda}{2\pi^4}\,.
\label{6.12}
\end{equation}
Note that the structure of \rf{6.11} is exactly the same as the one appearing
in the 1-loop correction to the 4-cusp Wilson loop $ \langle W_4 \rangle$ (given by a scalar box diagram).
Indeed, integrating \rf{6.11} over $a$ we get the integrated dilaton operator or gauge theory action
insertion into the Wilson loop, which is proportional to derivative of $ \langle W_4 \rangle$ over gauge coupling
\cite{Drum,kor}.
This observation may allow one to extract higher order corrections to \rf{6.11}
by comparing to integrands of higher-order corrections to $ \langle W_4 \rangle$.
When computing the analogs of the integrals in~\eqref{6.10} for $n >4$
we have two different types of contributions.
The first one is when the two line integrals are taken along the same segment.
Let us call this contribution $T_{ii}$ where the $i$-th segment is parametrized
by~\eqref{6.3.2}.
After some computation we obtain (up to the obvious factor $-\frac{\hat{c}_{dil}\ \lambda}{\pi^4}$)
\begin{equation}
T_{ii}({a})=
-\frac{1}{2} \frac{[({a}-{x}^{(i)})\cdot({x}^{(i+1)}-{x}^{(i)})]^2}
{[({a}-{x}^{(i)})\cdot({a}-{x}^{(i)}))^2(({a}-{x}^{(i)})
\cdot(2 {x}^{(i+1)}-{a}-{x}^{(i)})]^2}\,.
\label{6.13}
\end{equation}
The other type of contribution appears when the two contractions are made in different segments.
In this case we obtain
\begin{eqnarray}
&&
T_{ij}= \nonumber\\
&&
\frac{({a}-{x}^{(i)})\cdot({a}-{x}^{(j)}){\ }
({x}^{(i+1)}-{x}^{(i)})\cdot({x}^{(j+1)}-{x}^{(j)})}
{|{a}-{x}^{(i)}|^2{\ }
|{a}-{x}^{(j)}|^2 {\ }
({a}-{x}^{(i)}) \cdot ({a}+{x}^{(i)}-2 {x}^{(i+1)}) {\ }
({a}-{x}^{(j)}) \cdot( {a}+{x}^{(j)}-2{x}^{(j+1)})} \nonumber\\
&&
-\frac{({a}-{x}^{(i)})\cdot({x}^{(j+1)}-{x}^{(j)}){\ }
({a}-{x}^{(j)})\cdot({x}^{(i+1)}-{x}^{(i)})
}
{|{a}-{x}^{(i)}|^2 {\ }
|{a}-{x}^{(j)}|^2 {\ }
({a}-{x}^{(i)}) \cdot ({a}+{x}^{(i)}-2 {x}^{(i+1)}) {\ }
({a}-{x}^{(j)}) \cdot( {a}+{x}^{(j)}-2{x}^{(j+1)})} \,. \nonumber\\
\label{6.14}
\end{eqnarray}
These expressions are completely general. Hence, the full answer (which is rather lengthy) will be the sum of
such contributions.
Let us specify~\eqref{6.13}, \eqref{6.14} to the case of regular polygons with
even $n$ sides with the cusps located at\footnote{Note that for $\to \infty$
this null polygon becomes a unit circle in the (12) plane.}
\begin{equation}
x^{(i)} =
\Big((-1)^i \frac{\sqrt{1-\cos{ 2\pi\over n}}}{1+\cos {2 \pi\over n}}, {\ }
\frac{\cos({ \pi\over n}(2i+1))}{\cos { \pi\over n}},{\ }
\frac{\sin({ \pi\over n}(2i+1))}{\cos{ \pi\over n}},{\ } 0\Big)\,. \label{6.15}
\end{equation}
The problem is purely combinatorial, but there does not seem to be a simple universal
formula for generic $n$. It is relatively easy, however, to
compute the OPE coefficient by placing the operator very far from the loop:
taking $|{a}|$ large we obtain (cf. \rf{1.3},\rf{1.16},\rf{444})
\begin{equation}
{\cal C}_{dil}(W_n^{(reg)}, {a}) _{|a|\to \infty} =
\frac{{\rm C}_n}{|{a}|^8}\,, \ \ \ \ \ \ \ \ \ {\rm C}_n=
-\frac{ 2 \hat{c}_{dil} \lambda}{\pi^4}\ n^2 \tan^2{ \pi\over n} \ .
\label{6.16}
\end{equation}
For generic location of the dilaton operator one can check that the result is consistent with the general expectation (\ref{1.2}) (we have checked this explicitly for $n=5,6$.) For instance,
for $n=6$ and the case of a regular polygon the result depends on three conformal ratios
(since the polygon is regular only three cross-ratios are independent) and we obtain
\begin{equation}
F(\zeta_1,\zeta_2,\zeta_3)=-\frac{ \hat{c}_{dil}\ \lambda}{2\pi^4}
\frac{ \zeta_1 \zeta_2 \zeta_3 ( \zeta_3-1) + \zeta_3^2 - \zeta_2^3}
{\Big[\zeta_1 \zeta_2^2 \zeta_3^2 (\zeta_2-\zeta_3)^2 \Big( \zeta_1 \zeta_3 ( \zeta_2-1)- \zeta_2^2 +\zeta_3\Big) \Big]^{1/3}}\ , \label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){619}
\end{equation}
where the conformal ratios are defined by
\begin{eqnarray}
&&\zeta_1= \frac{|x^{(1)}-x^{(3)}|^2|a-x^{(5)}|^2}{|x^{(1)}-x^{(5)}|^2|a-x^{(3)}|^2} \ , \ \ \ \ \ \ \ \ \
\zeta_2= \frac{|x^{(2)}-x^{(4)}|^2|a-x^{(6)}|^2}{|x^{(2)}-x^{(6)}|^2|a-x^{(4)}|^2}\ , \no \\
&&\zeta_3= \frac{|x^{(1)}-x^{(4)}|^2|a-x^{(3)}|^2|a-x^{(6)}|^2}{|x^{(3)}-x^{(6)}|^2|a-x^{(1)}|^2|a-x^{(4)}|^2}
\ . \label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){zee}
\end{eqnarray}
\section{Concluding remarks}
In this paper we considered the correlator \rf{1.1} of a null $ n$-polygon Wilson loop
with a local operator, such as the dilaton (${\cal O}}\def \no {\nonumber_{dil} \sim \text{tr} F^2_{mn} +...$)or a chiral primary operator.
Based on symmetry considerations we determined its general form \rf{1.2}, expressing it in terms of
a function $F$ of $3n-11$ conformal ratios involving the position of the operator
and the positions of the cusps.
In the first non-trivial case of $n=4$
this function $F$ depends on just one conformal ratio $\zeta$ (defined
in \rf{ges}), making the corresponding correlator
\rf{1.1},\rf{1.15} one of the simplest non-trivial observables
that one would
like eventually to compute exactly for all values of the `t Hooft coupling $\lambda$.
The value of $F$ determines, in particular, the corresponding
OPE coefficient \rf{1.166} in the expansion \rf{22} of the Wilson loop in terms of
local operators.
We have found the leading terms in $F$ both at strong coupling
(using
semiclassical string theory) and at weak coupling (using perturbative planar
gauge theory).
At leading order at strong coupling we find that
$F \sim {{\sqrt{\l}}\ } $ and has non-trivial dependence on $\zeta$
\rf{2.17} while at leading order in weak coupling $F \sim {\lambda } $ and is constant
\rf{6.12}.
In the case of more general dilaton operator with non-zero R-charge $j$ (with $\Delta=4+j$)
the strong-coupling expression for $F$ is given by a hypergeometric function \rf{2.33}.
Similar results were found in the case of the chiral primary operator \rf{2.48},\rf{2.51}.
It would be important to compute subleading terms in the two respective expansions:
\begin{eqnarray} \label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){71}
&&F_{\lambda \gg 1} = { 1 \over N} \big[ {\sqrt{\l}}\ f_0 (\zeta)
+ f_1(\zeta) + {1 \over {\sqrt{\l}}\ } f_2( \zeta) + ... \big] \ , \\
&&F_{\lambda \ll 1} = { 1 \over N} \big[ \lambda h_0 + \lambda^2 h_1(\zeta) + \lambda^3 h_2( \zeta) + ... \big] \ . \label}\def\foot{\footnote}\newcommand{\rf}[1]{(\ref{#1}){72}
\end{eqnarray}
Another open problem is the extension to the case of the $n >4$ cusped Wilson loop.
Let us note that in the case of the dilaton operator integrating \rf{1.1}
over the point $a$ we get the insertion of the action and so the resulting correlator
should be proportional to a derivative over $\lambda$ of the logarithm of the null-polygon Wilson loop.
Thus, in particular, the knowledge of $\langle W_n \rangle $ at higher orders in $\lambda$ provides a constraint on
integral of \rf{1.1} at lower order order in $\lambda$; in general, this is not, however,
enough to determine the functions $h_n(\zeta)$ in \rf{72}.
Part of the original motivation for the present work was to
shed more light on the relation \cite{ak} between a
correlator of null-separated local operators and the square of
corresponding cusped Wilson loop. We conjectured
a more general relation \rf{121}
connecting correlators with one extra operator at an arbitrary position
to the correlator \rf{1.1} we considered in this paper. It
would be interesting to try to verify the relation \rf{121} for $n=4$
at weak coupling.
There are several possible extensions of our present work.
One may consider the case when the local operator ${\cal O}}\def \no {\nonumber$
is not ``light'' at strong coupling but is allowed to
carry a large charge
(e.g., R-charge or angular momentum in $S^5$ so that $\Delta \sim {\sqrt{\l}}\ $). As in the circular loop case in \cite{za02},
then the semiclassical surface will need to be modified to account for the
presence of the sources provided by the vertex operator ${\rm V}} \def \XX {{\rm X}}\def \ep {\epsilon$ in the string path integral
(see also \cite{alt}).\footnote{For example, one may consider the operator inserted at
$|a|=\infty$. The resulting semiclassical world surface should then have a topology of a disc with a puncture, i.e. of a cylinder. For example,
in the case of a dilaton with large charge $j$ we would have
then a source provided by the vertex operator $ \sim
a^{-2\Delta} \int du dv \ z^\Delta e^{i j\varphi} U_{dil}$.
A naive candidate for such modified surface is the generalization
\rf{2.36} of the regular 4-cusp surface to the presence of $S^5$ angular
momentum density. However, this surface has zero dilatation charge
$z^{-2} ( z \partial_u z + x_m \partial_u x^m)$, while to support the source provided
by $z^\Delta$ one needs a surface with a dilatation charge proportional to $\Delta$
(cf. \cite{za02}).
It remains an open question how to find such a surface.}
One may consider also a correlator of a Wilson loop with
several ``light''
($\Delta \ll {\sqrt{\l}}\ $)
operators. At leading order in strong-coupling expansion such a correlator
should
factorize like in the case of the correlators
two ``heavy'' ($\Delta \sim {\sqrt{\l}}\ $) operators and several ``light'' ones \cite{rt,bt2}, i.e.
$ \langle W_n {\cal O}}\def \no {\nonumber(a_1) {\cal O}}\def \no {\nonumber(a_2) \rangle \sim \langle W_n {\cal O}}\def \no {\nonumber(a_1)\rangle \langle W_n {\cal O}}\def \no {\nonumber(a_2)\rangle $.
This follows from the fact that for ${\sqrt{\l}}\ \gg 1$ these correlators
are found, like in \rf{2.3}, by evaluating the corresponding vertex operators
on the world surface ending on the null polygon that defines $W_n$.\footnote{Contribution
of a more singular term proportional to a power of $|a_1 - a_2|^{-2}$
is suppressed at large $\lambda$.
Note also that in general
$ \langle W_n {\cal O}}\def \no {\nonumber(a_1) {\cal O}}\def \no {\nonumber(a_2) \rangle$ should depend, say for $n=4$,
on $3n-11 + 4 = 5$
conformal ratios, this strong-coupling factorization implies
that the leading term depends only on $ 2 \times (3n-11) = 2$ conformal ratios.}
The study of such more general correlators may be of interest in
trying to understand better the relation \cite{ak} between the
correlator of null-separated local operators and the square of
corresponding cusped Wilson loop.
\section*{Acknowledgments}
We are grateful to G. Korchemsky, R. Roiban
and E. Sokatchev for stimulating discussions
and useful suggestions.
The work of E.I.B. is supported by an STFC fellowship.
Part of this work was completed while AAT was visiting TPGU at Tomsk and he
acknowledges the support of the RMHE grant 2011-1.5-508-004-016.
|
1,477,468,750,351 | arxiv | \section{Introduction}
\input{sections/01_introduction}
\section{Limitations of MLM Approaches for Vision and Language}
\label{sec:current_approach}
\input{sections/02_limitations_current_approach}
\section{Alternative Masking Strategies}
\input{sections/03_alternative_masking_strategies}
\section{Experiments}
\input{sections/04_experiments}
\section{Analysis and Discussion}
\input{sections/05_analysis_and_discussion}
\section{Related Work}
\input{sections/06_related_work}
\section{Conclusions}
\input{sections/07_conclusions}
\section*{Acknowledgements}
\input{sections/08_acks}
\subsection{Background}
Multiple studies have been proposed to modify the MLM objective in text-only domains \cite{joshi-etal-2020-spanbert, sun2019ernie, clark2020electra,Levine:2021}. However, less research has been dedicated to the implications of MLM in vision and language tasks.
\citet{shin2021perspectives} recently reviewed how the transformer architecture \cite{vaswani2017attention} has been incorporated into vision-language cross-modal tasks. They show that most VLP models perform MLM in the same way as introduced in BERT \cite{devlin2018bert} for text-only data, randomly masking tokens with 15\% probability. Further, virtually all models are pre-trained on a handful of pre-training cross-modal datasets, including Conceptual Captions (CC; \citealp{sharma2018conceptual}); SBU captions \cite{ordonez2011im2text} and the LXMERT pre-train dataset, which is a combination of COCO \cite{lin2014microsoft}, Visual Genome \cite{krishna2017visual}, VQA \cite{goyal2017making}, VG-QA \cite{zhu2016visual7w}, and GQA \cite{hudson2019gqa}.
Importantly, all these datasets consist of $<$sentence, image$>$ pairs, where the sentence is usually a caption describing the image or, in VQA, an image-related question.
\subsection{Limitations}
\paragraph{In many cases, no token is masked.}
Image captions tend to be shorter than the documents in BERT pre-train data, such as Wikipedia articles. BERT input sequence length is 512 tokens, while in VLP datasets the sequence length is $\approx$20 tokens.
For this reason, when masking 15\% of the tokens in the VLP models, there are cases where \textit{no token} is masked. For example, in LXMERT we find that in 36\% of the sentences, no token is masked.
\paragraph{Many masked words are stop-words and punctuation.}
\label{sec:image_nec}
We observe that over 45-50\% of tokens masked by either LXMERT, CC, and SBU are stop-words or punctuation marks.\footnote{We used nltk and gensim stop words lists.} We now describe an experiment that shows that this distribution causes the image to be under-utilized during MLM pre-training.
We follow the approach of amnesic probing \cite{elazar2021amnesic}. The intuition is that if the image is being used for cross-modal MLM, then the removal of the image should negatively influence the ability of the model to solve the task. If the removal of the image has little or no influence on the ability to solve cross-modal MLM, then the image is not a contributing factor in this task.
We consider the published pre-trained LXMERT model.\footnote{\url{https://github.com/airsplay/lxmert}} We evaluate it at inference time with the MLM task twice: with and without the image,\footnote{Without the image, we block access to the image and use the model as a single-stream model, without the co-attention layers from the image to the text. The model receives only the text and needs to complete the masked tokens.} using different masking strategies. We use the LXMERT pre-train validation data ($\approx$214K sentences). To estimate the image necessity for a masked token during MLM, we introduce the \lossgap{} metric, which is the difference in validation loss of the model prediction with and without the image.
For example, in Figure~\ref{fig:fig_motorcycle}, the loss \emph{without the image} for predicting ``motorcycle'' is 3.96, and the loss with the image is 0.25, the \lossgap{} is 3.71.
In addition, we report the \emph{Accuracy@5} metric, which is whether the label is among the top 5 most confident predictions of the model. We compare three masking strategies, keeping a 15\% probability to mask a token: (1) \originalstrategy{} masking strategy, where a token is masked uniformly at 15\% probability; (2) masking only stop-words and punctuation; and (3) masking only content words, which is the complementary group of stop words and punctuation.
Results are presented in Table~\ref{tab:perplexity}. We observe that the model validation accuracy on stop-words and punctuation is almost perfect (96\%) even without the image.
On the other hand, in the case of content words, accuracy is much lower without the image, and adding it increases accuracy by roughly 20\%.
\begin{figure*}[!hbt]
\centering
\begin{minipage}{0.40\textwidth}
\includegraphics[width=\textwidth]{images/fig_motorcycle.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\captionsetup{type=table}
\scalebox{0.72}{
\begin{tabular}{@{}ll@{}}
\toprule
Sentence & A person performs a stunt jump on a [MASK]. \\
Masked token & motorcycle \\
Top 5 predictions & motorcycle, bike, ramp, bicycle, cycle \\
Top 5 predictions w/o image & building, wall, beach, field, street \\
Loss & 0.25 \\
Loss w/o image & 3.96 \\
\lossgap{} & 3.71 \\ \bottomrule
\end{tabular}
}
\end{minipage}
\caption{An example from the extracted \lossgap{} data. The masked word is \textit{motorcycle}. Model predictions (``Top 5 predictions'') are better correlated with the image when it is given, and the loss is 0.25. Without the image, the predictions (``Top 5 predictions w/o image'') are tokens that do not appear in the image, and the loss is much higher (3.96). The \lossgap{} is the gap: 3.71.}
\label{fig:fig_motorcycle}
\end{figure*}
\begin{table*}[!hbt]
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}ccccccc@{}}
\toprule
\textbf{Masking strategy} & \multicolumn{2}{c}{\textbf{With Image}} & \multicolumn{2}{c}{\textbf{Without Image}} & \multicolumn{2}{c}{\textbf{Image Necessity}} \\ \midrule
\textbf{Metric} & \textbf{image loss (exp)} & \textbf{Accuracy @ 5} & \textbf{image loss (exp)} & \textbf{Accuracy @ 5} & \textbf{\lossgap{} (exp)} & \textbf{Accuracy @ 5} \\ \midrule
\textbf{\originalstrategy{}} & 3.2 & 89\% & 8.9 & 78\% & 5.7 & 10\% \\
\textbf{Stop-words \& punctuation, 15\%} & 1.5 & 98\% & 2.9 & 96\% & 1.4 & 2\% \\
\textbf{Content words, 15\%} & 9.4 & 76\% & 38.7 & 56\% & 29.3 & 20\%
\\ \bottomrule
\end{tabular}
}
\caption{Performance of the LXMERT model on the MLM task, when different words are masked, with and without the image. Accuracy on stop-words and punctuation is almost perfect even when no image is present. However, for content words, the image does contribute to increased accuracy.}
\label{tab:perplexity}
\end{table*}
\subsection{Semantic Classes}
\label{sec:semantic_classes}
\paragraph{\textit{Objects}, \textit{Attributes}, and \textit{Relationships}}
We use the definitions of \textit{objects}, \textit{attributes}, and \textit{relationships} as described in Visual Genome \cite{krishna2017visual}. \textit{Objects} represent physical entities in the image (e.g., a tiger, or a carrot). \textit{Attributes} are properties of objects, such as colors or physical state (e.g., upright). Finally, \textit{relationships} connect between two objects. These can be actions (e.g., a tiger is \textit{eating} a carrot), spatial relations (e.g., the tiger is \emph{behind} the carrot), etc.
In order to mask the tokens that belong to those semantic classes, we first need to identify them in a given sentence. Some datasets (e.g., GQA) include scene-graph annotations of these classes for each image. We use the annotations as ground-truth and develop heuristics to identify them automatically. For example, an \textit{Object} can be reliably annotated by identifying nouns which are also in the Visual Genome objects list. This simple heuristic achieves an accuracy of $\approx$90\% and recall of $\approx$97\% for ientifying objects on the LXMERT pre-train dataset.\resolved{\roy{accuracy/recall in identifying Objects I assume? say this explicitly}} We elaborate on these heuristics in Appendix~\ref{sec:using_obj_att_rel}.
\paragraph{Concreteness}
We hypothesize the image contributes more when predicting concrete concepts (e.g., tiger) compared to abstract concepts (e.g., hunger).
To that end, we use a dataset of lexical concreteness presented in \cite{brysbaert2014concreteness}. This dataset provides concreteness scores (on a scale of 1-5) for over 91\% of the lemmas in LXMERT pre-training dataset.
\subsection{Proposed Strategies}
\label{sec:proposed_strategies}
We consider the following masking strategies:
\begin{itemize}
\itemsep0em
\item \emph{\originalstrategy{}}:
the original masking strategy as defined in the LXMERT paper, 15\% random token masking.
\item \emph{Objects}: Randomly mask one object word.\footnote{In $>97.2\%$ of the sentences there is at least one object. In other cases, we mask a word at random.}
\item \emph{\cwstrategy{}}: Mask exactly one word in each sentence. Instead of almost 50--50 partition between masking stop-words and content words, increase the probability to mask content word to 80\%.
\item \emph{Top concrete}: Mask one of the top concrete words in the sentence, weighted by their order.\footnote{Of the three words with the highest concreteness value in the sentence, mask the most concrete word with 55\% probability, the second most concrete with 30\% probability, and the third most with 15\% probability.}
\item \emph{Stop-words \& punctuation}: as baseline, mask only stop-words \& punctuation, keeping a 15\% probability of masking.
\item \emph{Random 1 word}: An ablation of masking a single random word.
\end{itemize}
\textbf{Tokenization}:
The words in the sentences are tokenized using BERT tokenizer. For strategies requiring word-level masking (Objects, Content words, Top concrete, \originalstrategy{}, Random 1 word), we mask all of the corresponding word-pieces (e.g., ``A tiger is eat \#ing'' is masked as ``A tiger is [MASK] [MASK]'').
\subsection{Downstream Tasks}
\label{sec:downstream_tasks}
\paragraph{Experimental setup} We pre-train the LXMERT architecture with the proposed masking strategies, experimenting with increasing amounts of pre-training data (10\%, 20\%, 50\%, 100\%), training for 7 epochs.\footnote{While the published LXMERT model was pre-trained for 20 epochs, we pre-train for 7 epochs because we conduct multiple pre-train experiments, and prefer to spend our budget on more experiments than a few very expensive ones.}
All other hyper-parameters are the same as the original implementation. We only modify the MLM objective, fine-tuning on three downstream tasks (VQA, GQA, NLVR2).
For VQA and GQA, we report the mean of two experiments with different random seeds. The NLVR2 dataset is smaller ($\approx$10\% of GQA), so we report three experiments with different random seeds. Following common practice~\citep{tan-bansal-2019-lxmert}, we test GQA on the \textit{test-dev} split; NLVR2 on the public test set \textit{test-P}; and VQA on the \textit{minival} split. See corresponding papers for more details.
\begin{figure}[tb!]
\centering
\begin{minipage}{0.40\textwidth}
\includegraphics[width=\textwidth]{images/parade_large.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\captionsetup{type=table}
\scalebox{0.65}{
\begin{tabular}{@{}ll@{}}
\toprule
Published LXMERT & \myul[red]{bathroom}, \myul[red]{beach}, \myul[red]{city}, \myul[red]{kitchen}, \myul[green]{woman} \\
Objects & \myul[green]{motorcycle}, \myul[red]{bathroom}, \myul[green]{parade}, \myul[green]{man}, \myul[green]{crowd} \\
\midrule
Ground truth objects & glasses, gang, motorcycle, shirt, man, parade, ... \\
\bottomrule
\end{tabular}
}
\end{minipage}
\caption{Example of top 5 predictions for the prompt based object detection task, for the prompt ``A photo of a [MASK]''. Green underline indicate that the model predicted an object that appear in the ground truth objects (obtained from the scene graph). The model trained with \emph{Objects} masking strategy is more responsive to the image content compared to the baseline model.}
\label{fig:fig_parade_comparison}
\end{figure}
\paragraph{Results} Figure~\ref{fig:res_vqa_gqa_nlvr} presents our downstream tasks results.\footnote{Results tables presented in Appendix~\ref{sec:full_results}.}
For brevity, we focus on the \textit{Objects} masking strategy, though the trend is similar for the other alternative strategies. We observe that our alternative masking strategies consistently outperform the \textit{\originalstrategy{}} strategy, especially in low resource settings.
Pre-training with the \textit{Objects} strategy yields gains of 0.72--0.86\% on VQA and GQA, and 4\% on NLVR2 with 10\% of the pre-train data; 0.64--0.95\% gains on VQA and GQA, and 1.35\% on NLVR2 with 20\%; 0.5--1.02\% gains on VQA and GQA, and 1.6\% in NLVR2 with 50\%. With 100\%, the improvement is minor in GQA, VQA, but still noticeable (1.08\%) on NLVR2 (The \cwstrategy{} strategy achieves 0.49 gain on GQA with 100\%). \footnote{Preliminary experiments show that increasing the number of epochs leads to smaller gains, which emphasizes the benefits of our method in low resource settings.}
\paragraph{Ablation studies} The gains observed when using our proposed strategies can result from both changes we made to address the limitations of standard MLM presented in Section~\ref{sec:current_approach}: masking a single word in each sentence (rather than not masking any word in some cases) and deciding which word to mask (rather than randomly masking tokens). To isolate the contributing factors, we design additional experiments. We pre-train with 10\% and 20\% of the data with the \textit{random 1 word} strategy, and present the mean accuracy on the VQA and GQA in Figure~\ref{fig:ablations}. We see that this strategy outperforms the \textit{\originalstrategy{}} strategy, but under-performs \textit{Objects}. In addition, in Appendix~\ref{sec:ablation} we show experiments of varying masking probabilities rather than the baseline's 15\%, with and without multiple masked tokens per sentence, and allowing sentences without any masked token. Out of all tested settings, masking a single word achieves the best downstream results.
We conclude that the benefit of our proposed strategies comes from both choosing a single word to mask, and masking tokens that are more important.
For completeness, we experiment with the \textit{stop-words \& punctuation} strategy with 10\% and 20\% of the data on VQA and GQA.
As expected, this strategy under-performs the \textit{\originalstrategy{}}; by 1.4\% when pre-training with 10\% of the data, and 3.37\% with 20\% the data.
\begin{figure}[hbt!]
\centering
\includegraphics[width=\columnwidth]{images/res_vqa_gqa_nlvr.png}
\caption{VQA, GQA and NLVR2 downstream tasks results for models with different masking strategies and increasing amounts of pre-train data. The left Y axis describes the accuracy, the right Y axis describes the percentage of the full setup performance (trained with 20 epochs and 100\% of the pre-train data).
Our alternative masking strategies consistently improve over the \originalstrategy{} masking strategy, especially in low resource settings.
\resolved{\roy{missing STDs}}
}
\label{fig:res_vqa_gqa_nlvr}
\end{figure}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.8\columnwidth]{images/ablations.png}
\caption{Ablation results for randomly masking a single word. The plot shows the average results for GQA and VQA. A model that masks a single word outperforms one with the original strategy of randomly masking 15\% of the tokens, but under-performs a model that masks a single \textit{object} word. We conclude that the gain of our proposed strategies comes from both masking a single word, and selecting tokens that are more important.
}
\label{fig:ablations}
\end{figure}
\subsection{Prompt Based Object Detection}
\label{sec:prompt_base}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{images/precision_at_k.png}
\caption{Precision/recall curve for prompt-base object detection task. Our models substantially improve over the published LXMERT, despite training over only a third of its epochs and half of its training data.}
\label{fig:image_classification}
\end{figure*}
To further examine the value of our proposed masking strategies, we
examine in what way the pre-trained models trained with different strategies differ. To do so, we use prompts, and study whether a model trained for only completing \emph{Objects} (for example) will be more responsive to the image contents compared to the baseline model.
For example, given the image in Figure~\ref{fig:fig1}, we can query the model using the prompt ``A photo of a [MASK]'', and count how many of the objects (``tiger'', ``carrot'') are in its top $k$ predictions. We compare our alternative pre-trained models, pre-trained on 50\% of the data, with the original pre-trained LXMERT model. We evaluate them on 2193 images from the LXMERT \textit{minival} split, which the model did not observe during pre-training. Given a (prompt, image) pair, we intersect each model's top $k$ predictions with the ground-truth objects list obtained from the image ground truth scene-graph, available for these images.
We use several prompts: ``A photo of a [MASK]'' (inspired by CLIP \cite{radford2021learning}), ``A [MASK] in the photo'', and ``A [MASK]''. We present a precision for different values of $k$ in Figure~\ref{fig:image_classification}.
Our models achieve improved precision score over published LXMERT, despite training over only a third of its epochs and half of its training data. The precision metric is simply the number of correct predictions (intersection of predictions with ground-truth objects), divided by the number of predictions. For example, when considering five top predictions ($k$=5), the published LXMERT achieves 10\% precision, compared to 18\% precision for the model trained with \emph{Content words} masking strategy. When $k$=10, the improvement is 11\% $\rightarrow$ 16\%,\resolved{\roy{do you mean 11 -> 16? you have a backslash that I think shouldn't be there}} etc. Additional results and ROC curve are available in Section~\ref{sec:full_results} in the Appendix. Our results indicate that our proposed models are more responsive to the image compared to the model trained with the \originalstrategy{} strategy. An example comparing the \originalstrategy{} model and model trained with \emph{Objects} masking strategy is presented in Figure~\ref{fig:fig_parade_comparison}. Four of the top five predictions of the model trained with \emph{Objects} masking strategy appear in the list of ground-truth objects, while the model trained with \originalstrategy{} strategy predicts only one of the ground-truth objects.
\subsection{Hierarchy of Masked Semantic Classes}
\label{sec:hierarchy}
We have shown that our strategies improve results over the \originalstrategy{}. In this section, we aim to understand if the tokens we mask make the model actively rely on the image. For this purpose, we extract the image necessity for a masked token using the \lossgap{} metric (see Section~\ref{sec:image_nec}) for every token. We use the original LXMERT pre-trained model and validation data. For each sentence, we iterate over each token, mask and predict it with and without the image. An example from the extracted \lossgap{} data is presented in Figure~\ref{fig:fig_motorcycle}.\footnote{We publish this extracted data for future work.}
Following, Figure~\ref{fig:spectral_bar} presents a hierarchy of the different semantic classes described in Section~\ref{sec:semantic_classes}, ranked by their \lossgap{}.\footnote{The groups are not mutually exclusive.}
We draw several observations based on that plot. First, we note that objects that appear in both text and the scene graph (dubbed grounded objects, e.g., ``tiger'') are more important than non-grounded objects. Our intuition is that grounded concepts have higher \lossgap{} compared to non-grounded concepts, as the model benefits from masking the latter. For example, consider the sentence ``Is there a \textit{tiger} in the image?'', for an image without any tiger (i.e., \textit{tiger} is not grounded). In this case, the model would not have the ability to differentiate the true word (\textit{tiger}) from any other object in the vocabulary that is also not in the image.
In addition, we observe that the objects semantic class is the most important one. We see a connection between the hierarchy and downstream performance obtained by our different strategies. \textit{Stop-words \& punctuation} are ranked the lowest, and indeed pre-training with the \textit{Stop-words \& punctuation} strategy achieves the lowest results. The strategies of \textit{Objects} and \textit{Top concrete} are ranked high, and indeed they achieve improved results compared to the \originalstrategy{}.
\begin{figure*}[!hbt]
\centering
\newcommand{\figlen}[0]{\columnwidth}
\includegraphics[width=\textwidth]{images/spectral_bar.png}\\
\caption{Hierarchy of semantic classes and its importance by the \lossgap{} metric (Loss without image - Loss with image).}
\label{fig:spectral_bar}
\end{figure*}
\subsection{MLM Performance across Word Classes}
Many works~\cite{lu2019vilbert, tan-bansal-2019-lxmert, chen2020uniter} assume that a VLP model should include an MLM component that is capable of predicting \textit{every} masked token, including objects, properties, but also stop words and punctuation. Does a model that uses our \textit{Objects} strategy, and masks only objects, learn to complete words from other classes? If not, can such a pre-training strategy be effective?
To examine this questions, we extend the experiment described in Section~\ref{sec:current_approach} to additional masking strategies, comparing between the different models pre-trained on 50\% of the data. Results are presented in Table~\ref{tab:val4models}. We see that the model trained with the \emph{\originalstrategy{}} masking strategy is able to complete masked words from different classes (performance are above 70\% for all cases).
\resolved{\roy{the term ``is able to mask'' is ill-defined. Talk about performance levels, as you do below for objects.}\yonatan{fixed, I ment "able to complete masked words" not "able to mask". I also added "performance are above 70\% for all cases".}} However, the model trained with \textit{Objects} masking strategy indeed learned to complete only objects. Nonetheless, its downstream performance is in fact higher than the \textit{\originalstrategy{}} model. We conclude that a model does not necessarily need to be able to complete all semantic classes, and some classes are more beneficial than others. For example, the \textit{Objects} model's performance is quite low on both completing stop-words (4\%), which is considered an easy task, and on attributes (22\%).
A possible explanation for these findings might be that the model is evaluated mostly on retrieving objects, and had we tested it on other classes, its performance would have substantially decreased. To test this hypothesis, we inspect the same model's performance on questions with answers from different semantic types. To do so, we experiment with the GQA dataset, which includes partitioning of the answers into different semantic types, including \textit{Objects}, \textit{Relations} (subject or object of a described relation, e.g., ``what is the girl wearing?"),\resolved{\roy{I would say this is an object question (the answer is an object)}\yonatan{That's not object detection. The question requires object detection skills, but it is more extensive than simply object detection}} and \textit{Attributes} (the properties or position of an object).
The results for the semantic type partition are presented in Table~\ref{tab:gqa_semantic_subset}. Comparing between the models trained with \textit{Objects} and \textit{\originalstrategy{}} masking strategies, the \textit{Objects} masking strategy achieves improved performance in \textit{Relationships} and \textit{Attributes}, although it never masked these kinds of tokens, and its MLM performance on these classes is considerably lower. It seems that masking only objects might assist the models to learn additional semantic classes.
\begin{table*}[!hbt]
\centering
\begin{tabular}{@{}ccccc@{}}
\toprule
\textbf{Model} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}\originalstrategy{}\end{tabular}}} & \multirow{2}{*}{\textbf{Objects}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}\cwstrategy{}\end{tabular}}} & \multirow{2}{*}{\textbf{Top concrete}} \\
\textbf{Masking Strategy} & & & & \\ \midrule
\originalstrategy{} & 87\% & 27\% & 70\% & 36\% \\
Stop-words \& punctuation, 15\% & 98\% & 4\% & 80\% & 13\% \\
Content words, 15\% & 74\% & 57\% & 62\% & 62\% \\
Objects & 76\% & 85\% & 82\% & 83\% \\
Attributes & 70\% & 22\% & 59\% & 50\% \\
Relationships & 89\% & 15\% & 75\% & 25\%
\\ \bottomrule
\end{tabular}
\caption{MLM Validation Accuracy@5 for different pre-training strategies, tested on different masking strategies. Interestingly, the model trained with \textit{Objects} strategy achieves low performance on all semantic classes except objects, but still achieves improved results compared to the model trained with \originalstrategy{} strategy.}
\label{tab:val4models}
\end{table*}
\begin{table}[]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{@{}llll@{}}
\toprule
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Question \\ semantic type\end{tabular}}} & \multirow{2}{*}{\textbf{\# Questions}} & \multicolumn{2}{l}{\textbf{Masking Strategy}} \\
& & \textbf{\originalstrategy{}} & \textbf{Objects} \\ \midrule
Objects & 778 & 86.89 & 87.79 \\
Attributes & 5,186 & 63.17 & 63.96 \\
Relations & 5,308 & 49.72 & 50.47
\\ \bottomrule
\end{tabular}}
\caption{GQA semantic types partition performance. The model trained with \textit{Objects} masking strategy achieves improved performance compared to the baseline model on \textit{Relationships} and \textit{Attributes}, although it never masked these kind of tokens.}
\label{tab:gqa_semantic_subset}
\end{table}
\subsection{Vision Language Pre-training (VLP)}
Recently, many VLP models have been proposed \cite{lu2019vilbert, tan-bansal-2019-lxmert, chen2020uniter}. The pre-training objectives in many cases are: (1) Masked language modeling (MLM), where a model predicts masked tokens given the sentence and the image. (2) Masked region modeling (MRM), where the model predicts masked visual object features, and (3) Sentence-image matching, where the model predicts whether the sentence belongs to the image. Some models also add the visual question answering objective during the pre-training phase~\citep{tan-bansal-2019-lxmert, li2021semvlp}. Previous works have found that the MLM objective is an important pre-training task affecting the quality of the learned representations \cite{chen2020uniter, huang2020pixel, hendricks2021decoupling}. However, the MRM objective was not always found to be important \cite{su2019vl, hendricks2021decoupling}, and the same for sentence-image prediction \cite{hendricks2021decoupling, li2019visualbert}. For this reason, we focus on the MLM objective.
\subsection{Alternative MLM objectives in vision and language}
Concurrently with our work, \citet{zellers2021merlot} presented an approach for pre-training over YouTube videos. They suggested a strategy of corrupting highly visual words in the masked language modeling task, observing that vanilla BERT-style often masks ungrounded words like ``umm'' or ``yeah''.
We share the same motivation to mask highly visual words.
\subsection{Challenges in VQA generalization}
\paragraph{Visual understanding} Language and vision tasks inherently demand deep understanding of both the text and the image. However, many works show that models can succeed on VQA \emph{datasets} using strong language priors, and by relying on superficial cues, and there are still challenges to overcome for tasks with more compositional structure~\cite{jabri2016revisiting, zhang2016yin,goyal2017making,agarwal2020towards,bitton2021automatic,dancette2021beyond}.\resolved{\roy{Our naacl work is cited as arxiv}}
Balanced datasets such as VQA 2.0~\cite{goyal2017making} and GQA \cite{hudson2019gqa} have been presented to address these challenges. Novel models with richer visual representations \cite{zhang2021vinvl} were also presented, and some works tried to encourage the model to look at the ``correct'' image regions \cite{liu2021answer, yang2020object}.
\paragraph{Bias} \citet{yang2021causal} and \citet{hendricks2018women} have shown that attention-based vision-language models suffer from bias that misleads the attention module to focus on spurious correlations in training data, and leads to poor generalization. Some examples are presented in Appendix~\ref{sec:examples}, Figure~\ref{fig:the_pain}. To mitigate the language priors bias, it may be beneficial to increase the focus on the image during pre-training.
\subsection{Detection of Objects, Attributes and Relationships}
\label{sec:using_obj_att_rel}
\paragraph{Using the annotated scene-graph as ground truth}
A simple way to detect \textit{objects}, \textit{attributes}, and \textit{relationships} in a caption is to read them off the scene-graph annotation of the corresponding image, available from Visual Genome or GQA. In the LXMERT pre-training data, 83\% of the sentences have scene-graph annotations for their corresponding image. For example, given the caption ``The rabbit is eating the orange carrot'' and its image, the scene-graph ground truth includes \textit{Objects}: rabbit, carrot; \textit{Attributes}: orange; and \textit{Relationships}: eating.
When obtained from the scene-graph, we call these annotations ``grounded'' (grounded objects, grounded attributes, and grounded relationships).
\paragraph{Predicting \textit{objects}, \textit{attributes}, and \textit{relationships} in each caption:} As a more general and scalable method for when a scene graph is not available, we use matching heuristics. We rely on part-of-speech (POS) tagging, and we aggregate lists of objects, attributes, and relationships from the Visual Genome dataset annotations.\footnote{\url{http://visualgenome.org/api/v0/api_home.html}} These are our heuristics:\footnote{Our full code, including the code to detect the semantic-type tokens, will be published}
\begin{itemize}
\item \textit{Objects} are words with POS = ``NOUN'' and in Visual Genome objects list.
\item \textit{Attributes} are words with POS = ``ADJ'' and in Visual Genome attributes list.
\item \textit{Relationships} are words with POS = ``ADP'' or ``VERB'', and in Visual Genome relationships list.
\end{itemize}
These simple rules constitute our predictions for detecting \textit{Objects}, \textit{Attributes}, and \textit{Relationships} in a sentence.
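As an illustration, the following Python sketch implements these heuristics. We assume spaCy for the POS tagging; the word-list file names are placeholders for the lists aggregated from Visual Genome, not part of our released code.
\begin{verbatim}
import spacy

nlp = spacy.load("en_core_web_sm")  # POS tagger (assumed)

def load_words(path):
    # one lowercase word per line; file names are placeholders
    with open(path) as f:
        return {line.strip().lower() for line in f}

vg_objects = load_words("vg_objects.txt")
vg_attributes = load_words("vg_attributes.txt")
vg_relationships = load_words("vg_relationships.txt")

def detect(caption):
    """Predict objects, attributes and relationships in a caption."""
    objects, attributes, relationships = [], [], []
    for token in nlp(caption):
        word = token.text.lower()
        if token.pos_ == "NOUN" and word in vg_objects:
            objects.append(word)
        elif token.pos_ == "ADJ" and word in vg_attributes:
            attributes.append(word)
        elif token.pos_ in ("ADP", "VERB") and word in vg_relationships:
            relationships.append(word)
    return objects, attributes, relationships

# detect("The rabbit is eating the orange carrot")
# -> (['rabbit', 'carrot'], ['orange'], ['eating'])
\end{verbatim}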
\paragraph{Validation of the \textit{objects}, \textit{attributes}, and \textit{relationships} detection:} We can now evaluate the predicted \textit{objects}, \textit{attributes}, and \textit{relationships} against the ground truth obtained from the scene graph. The grounding method (matching between the caption and the scene graph) we use is simple: an exact string match between the word in the scene graph and the word in the caption. Using a more sophisticated grounding algorithm would not change our predictions and could only improve our results (for example, if the caption contains ``women'', predicted as an \textit{Object}, while the scene graph contains ``woman'', this currently counts as a false positive because it is not an exact match). Results are presented in Table~\ref{tab:detection_obj_att_rel}.
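Per caption, this exact-match scoring reduces to set operations, as in the sketch below; note that reading the ``Accuracy'' column of Table~\ref{tab:detection_obj_att_rel} as precision is our interpretation here and should be treated as an assumption.
\begin{verbatim}
def score(predicted, ground_truth):
    """Exact-match scoring of one semantic type for one
    caption/image pair; both arguments are sets of words."""
    tp = len(predicted & ground_truth)  # exact string matches
    fp = len(predicted - ground_truth)  # e.g. "women" vs. "woman"
    fn = len(ground_truth - predicted)
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return precision, recall
\end{verbatim}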
\begin{table}[]
\begin{tabular}{@{}cccc@{}}
\toprule
& \textbf{\# items} & \textbf{Accuracy} & \textbf{Recall} \\ \midrule
\textbf{Objects} & 7,484,940 & 89.89 & 97.39 \\
\textbf{Attributes} & 3,240,096 & 92.91 & 79.91 \\
\textbf{Relationships} & 3,195,345 & 86.42 & 96.88 \\ \bottomrule
\end{tabular}
\caption{Detection performance of \textit{Objects}, \textit{Attributes}, and \textit{Relationships}.}
\label{tab:detection_obj_att_rel}
\end{table}
\subsection{Concrete and Abstract definitions}
\label{sec:concrete_and_abstract}
The concreteness annotation dataset \cite{brysbaert2014concreteness} was rated by 20--30 annotators per word, on a scale from 1 (abstract) to 5 (concrete). Concrete is defined as follows: ``A concrete word comes with a higher rating and refers to something that exists in reality; you can have immediate experience of it through your senses (smelling, tasting, touching, hearing, seeing) and the actions you do. The easiest way to explain a word is by pointing to it or by demonstrating it.''
Abstract is defined as follows: ``An abstract word comes with a lower rating and refers to something you cannot experience directly through your senses or actions. Its meaning depends on language. The easiest way to explain it is by using other words''.
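For illustration, the sketch below selects masking candidates by concreteness, in the spirit of our ``Top concrete'' strategy; the rating dictionary, the threshold, and the tie-breaking are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
def top_concrete(tokens, ratings, k=1, threshold=3.0):
    """Indices of up to k tokens with the highest concreteness.
    `ratings` maps lowercase words to their mean 1-5 rating."""
    scored = [(ratings.get(t.lower(), 0.0), i)
              for i, t in enumerate(tokens)]
    scored = [(r, i) for r, i in scored if r >= threshold]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]

# top_concrete(["the", "rabbit", "is", "eating"],
#              {"rabbit": 4.52, "eating": 4.03})  ->  [1]
\end{verbatim}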
\section{Additional Experiments}
\label{sec:ablation}
\subsection{How good is current pre-training?}
We want to assess the contribution of the current LXMERT pre-training. We conduct fine-tuning experiments with LXMERT without any pre-training.
Results are presented in Table~\ref{tab:how_good_is_curr_pt}. We see that pre-training adds $\approx$6.5 points on GQA, $\approx$4.8 on VQA, and $\approx$23.8 on NLVR2.
\begin{table}[hbt!]
\resizebox{0.45\textwidth}{!}{%
\begin{tabular}{@{}|c|c|c|c|@{}}
\toprule
\textbf{Dataset} & \textbf{GQA} & \textbf{VQA} & \textbf{NLVR2} \\ \midrule
No pre-train & 53.24 & 65.10 & 51.07 \\ \midrule
\begin{tabular}[c]{@{}c@{}}Pre-training all data\\ Reported LXMERT GitHub results\end{tabular} & 59.80 & 69.90 & 74.95 \\ \bottomrule
\end{tabular}
}
\caption{Downstream-task performance with and without pre-training.}
\label{tab:how_good_is_curr_pt}
\end{table}
\subsection{How to change the 15\% masking amount?}
\label{sec:how_to_change_15}
In Section~\ref{sec:current_approach} we discussed that masking 15\% of the tokens in short captions (average length $\approx$6.86) implies that in about a third of the cases no token is masked, in another third exactly one token is masked, and in the remaining third multiple tokens are masked.
We isolate these factors by conducting 3 experiments, sketched after the following list:
\begin{itemize}
\item Not allowing 0 masked tokens (if no token was masked, sample 1 token to mask).
\item Not allowing multiple masked tokens (if multiple tokens were masked, sample 1 of them to mask).
\item Always masking exactly 1 word.
\end{itemize}
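A minimal Python sketch of the three variants is given below; it operates on token positions only, and the handling of special tokens is omitted.
\begin{verbatim}
import random

def mask_indices(tokens, p=0.15, variant="vanilla"):
    """Token positions to mask under the given variant."""
    masked = [i for i in range(len(tokens)) if random.random() < p]
    if variant == "no_zero" and not masked:
        masked = [random.randrange(len(tokens))]  # force one mask
    elif variant == "no_multiple" and len(masked) > 1:
        masked = [random.choice(masked)]          # keep only one
    elif variant == "mask_one":
        masked = [random.randrange(len(tokens))]  # exactly one
    return masked
\end{verbatim}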
Results are presented in Table~\ref{tab:how_to_change_15}.
\begin{table}[]
\centering
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{@{}|c|c|c|c|@{}}
\toprule
& \textbf{GQA} & \textbf{VQA} & \textbf{NLVR2} \\ \midrule
\textbf{\originalstrategy{}} & 54.4 & 65.06 & 58.55 \\ \midrule
\textbf{Don't allow 0 masked} & 54.98 & 65.4 & 59.45 \\ \midrule
\textbf{Don't allow multiple masked} & 54.46 & 65 & 58.82 \\ \midrule
\textbf{Mask 1 word} & 55.07 & 65.26 & 61.25 \\ \bottomrule
\end{tabular}
}
\caption{Changing the 15\% masking amount. Masking 1 word achieves the highest downstream-task results.}
\label{tab:how_to_change_15}
\end{table}
We can see that not allowing multiple masked tokens helps a bit, not allowing 0 masked tokens helps more, and masking exactly 1 word is the best overall strategy.
\subsection{Full results tables}
\label{sec:full_results}
\begin{table*}[!htbp]
\centering
\begin{tabular}{lllll}
\toprule
& \multicolumn{4}{l}{\textbf{\% of pre-train data}} \\
\textbf{Masking Strategy} & \textbf{10} & \textbf{20} & \textbf{50} & \textbf{100} \\ \midrule
\textbf{\originalstrategy{}} & 65.05 $_{\pm0.02}$ & 65.86 $_{\pm0.06}$ & 67.14 $_{\pm0.2}$ & 68.79 $_{\pm0.02}$ \\
\textbf{\begin{tabular}[c]{@{}l@{}}\cwstrategy{} \end{tabular}} & 65.53 $_{\pm0.04}$ & 66.37 $_{\pm0.04}$ & 67.86 $_{\pm0.08}$ & 68.94 $_{\pm0.05}$ \\
\textbf{Objects} & 65.77 $_{\pm0.05}$ & 66.5 $_{\pm0.04}$ & 67.64 $_{\pm0.08}$ & 68.94 $_{\pm0.06}$ \\
\textbf{Top concrete} & 65.54 $_{\pm0.21}$ & 66.32 $_{\pm0.02}$ & 67.47 $_{\pm0.1}$ & 68.8 $_{\pm0.03}$
\\ \bottomrule
\end{tabular}
\caption{Full VQA 2.0 results, mean$_{\pm std}$}
\label{tab:full_vqa_results}
\end{table*}
\begin{table*}[!htbp]
\centering
\begin{tabular}{lllll}
\toprule
& \multicolumn{4}{l}{\textbf{\% of pre-train data}} \\
\textbf{Masking Strategy} & \textbf{10} & \textbf{20} & \textbf{50} & \textbf{100} \\ \midrule
\textbf{\originalstrategy{}} & 54.39 $_{\pm0.01}$ & 55.14 $_{\pm0.02}$ & 57.47 $_{\pm0.13}$ & 58.87 $_{\pm0.04}$ \\
\textbf{\begin{tabular}[c]{@{}l@{}}\cwstrategy{}\end{tabular}} & 55.46 $_{\pm0.04}$ & 56.27 $_{\pm0.33}$ & 58.07 $_{\pm0.09}$ & 59.36 $_{\pm0.08}$ \\
\textbf{Objects} & 55.25 $_{\pm0.21}$ & 56.08 $_{\pm0.10}$ & 58.49 $_{\pm0.01}$ & 59.02 $_{\pm0.03}$ \\
\textbf{Top Concrete} & 55.31 $_{\pm0.12}$ & 56.56 $_{\pm0.35}$ & 58.38 $_{\pm0.25}$ & 58.9 $_{\pm0.04}$
\\ \bottomrule
\end{tabular}
\caption{Full GQA results, mean$_{\pm std}$}
\label{tab:full_gqa_results}
\end{table*}
\begin{table*}[!htbp]
\centering
\begin{tabular}{lllll}
\toprule
& \multicolumn{4}{l}{\textbf{\% of pre-train data}} \\
\textbf{Masking Strategy} & \textbf{10} & \textbf{20} & \textbf{50} & \textbf{100} \\ \midrule
\textbf{\originalstrategy{}} & 59.67 $_{\pm1.04}$ & 65.1 $_{\pm1.13}$ & 68.75 $_{\pm0.53}$ & 70.73 $_{\pm0.65}$ \\
\textbf{\begin{tabular}[c]{@{}l@{}}\cwstrategy{}\end{tabular}} & 61.65 $_{\pm0.95}$ & 67.25 $_{\pm0.48}$ & 70.85 $_{\pm0.06}$ & 71.63 $_{\pm0.44}$ \\
\textbf{Objects} & 63.7 $_{\pm0.14}$ & 66.45 $_{\pm1.2}$ & 70.36 $_{\pm0.91}$ & 71.81 $_{\pm0.51}$ \\
\textbf{Top Concrete} & 62.49 $_{\pm0.72}$ & 66.4 $_{\pm0.56}$ & 70.29 $_{\pm0.22}$ & 71.8 $_{\pm0.1}$
\\ \bottomrule
\end{tabular}
\caption{Full NLVR2 results, mean$_{\pm std}$}
\label{tab:full_nlvr2_results}
\end{table*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{images/prompt_base_pr_white.png}
\caption{Precision-recall curve for the prompt-based object detection task. Our models achieve improved results over the published LXMERT, although trained with half of the pre-training data and a third of the epochs.}
\label{fig:image_classification_pr}
\end{figure*}
\subsection{Examples}
\label{sec:examples}
\begin{figure*}[]
\centering
\newcommand{\figlen}[0]{\columnwidth}
\includegraphics[width=0.75\textwidth]{images/the_pain.png}\\
\caption{
LXMERT mistakes observed on examples from GQA and VQA.
VLP models tend to predict answers that are merely correlated with the text, or common answers.
In many cases, the predicted item does not even appear in the image.
}
\label{fig:the_pain}
\end{figure*}
\begin{figure*}[tb!]
\centering
\begin{minipage}{0.40\textwidth}
\includegraphics[width=\textwidth]{images/ex_bathroom_clean.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\captionsetup{type=table}
\scalebox{0.65}{
\begin{tabular}{@{}ll@{}}
\toprule
Published LXMERT & \myul[green]{bathroom}, \myul[red]{kitchen}, \myul[red]{bedroom}, \myul[red]{beach}, \myul[red]{city} \\
Objects & \myul[green]{bathroom}, \myul[red]{restroom}, \myul[green]{sink}, \myul[green]{toilet}, \myul[green]{mirror} \\
\midrule
Ground truth objects & tile, toilet, wash cloth, tub, sink, mirror, ... \\
\bottomrule
\end{tabular}
}
\end{minipage}
\begin{minipage}{0.40\textwidth}
\includegraphics[width=\textwidth]{images/ex_baseball.png}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\captionsetup{type=table}
\scalebox{0.65}{
\begin{tabular}{@{}ll@{}}
\toprule
Published LXMERT & \myul[red]{beach}, \myul[green]{field}, \myul[red]{bathroom}, \myul[green]{woman}, \myul[green]{man} \\
Objects & \myul[red]{beach}, \myul[green]{field}, \myul[green]{baseball}, \myul[green]{woman}, \myul[red]{game} \\
\midrule
Ground truth objects & bat, shirt, catcher, glove, lot, distance, ... \\
\bottomrule
\end{tabular}
}
\end{minipage}
\caption{Additional examples of top-5 predictions for the prompt-based object detection task, for the prompt ``A photo of a [MASK]''. A green underline indicates that the model predicted an object that appears in the ground-truth objects (obtained from the scene graph).}
\label{fig:fig_bathroom_comparison}
\end{figure*} |
\section{Introduction}
Tensor network (TN) methods have long proven their power as an indispensable tool in simulating quantum- and classical many-body systems \cite{Schollwoeck2011DMRG_MPS_review,Orus2014TN_review}.
As first realized by S.\ White with the density matrix renormalization group (DMRG) algorithm \cite{White1992DMRG,White1993DMRG}, a variational ansatz on the manifold of matrix product states (MPS) \cite{Rommer1997MPS}, TNs provide an efficient parameterization of low-entangled wave functions in quantum many body state-space \cite{Eisert2013TNReview_Entanglement}.
While the MPS naturally captures the relevant low-energy spectrum in particular of one-dimensional (1D) gapped Hamiltonians obeying area laws of entanglement \cite{Audenaert2002EntanglementChain,Plenio2005AreaLaw, Eisert2010AreaLaw,Wolf2006AreaLawFermions}, TNs have been generalized to more complex scenarios:
In over two decades of evolution, they have been successfully applied to higher dimensions \cite{Verstraete2006PEPS,Tagliacozzo2009TTN2D,Gerster2017bTTNHofstadter}, critical phenomena \cite{Vidal2007MERA,Vidal2008MERA,Silvi2010bTTN,Orus2014MultipartiteTN}, as well as finite temperature and the study of closed and open system dynamics \cite{Verstraete2004MPDO,Werner2016LPTN}, and lattice gauge theories \cite{Silvi2014LatticeGaugeTN,Tagliacozzo2014LatticeGaugeTN,Pichler2016U1LatticeGauge}, just to name a few examples.
TNs have also been equipped with structure to encode and exploit symmetries in the model under investigation \cite{Singh2010SymTN,Singh2011SymU1,Singh2012SymSU2,Weichselbaum2012SymNonabelian}.
Truncated singular value decompositions (SVDs) are widely used in TN algorithms to compress states into their respective TN state manifold.
Examples include the time-evolving block decimation (TEBD) \cite{Vidal2003TEBD,Vidal2004TEBD}, the tensor renormalization group \cite{Levin2007TRG}, the corner transfer matrix renormalization group \cite{Nishino1996CTMRG} and the projected entangled pair states \cite{Verstraete2004PEPS, Murg2007PEPS2}, but also traditional DMRG which is often formulated in terms of truncated eigenvalue decomposition.
In TN numerical practice, SVDs have the additional advantage to provide relevant isometries by orthonormality of the singular vectors, and reveal valuable information of the encoded network state, e.g.\ in the form of entanglement measures based on singular values.
The traditional way to compute a truncated SVD is to first perform the full SVD of a matrix, and then discard the smallest singular values.
This is reliable and accurate, but also a very costly operation that often dominates computational complexity of TN algorithms.
Intuitively, it is also not the most economic protocol:
A lot of effort is spent in computing all singular values and -vectors, many of which are then discarded.
By avoiding the full SVD, a truncated SVD can be obtained more efficiently, especially when the number of retained singular values is small.
Well known methods of this class are simultaneous subspace iteration or Krylov subspace methods like Lanczos- or implicitly restarted Arnoldi algorithms \cite{Golub2012MatrixComputations,Demmel1997NumericalLA}.
Their relevance in large-scale data classification and compression in `big data' applications \cite{Erichson2017RandomCPTensorDecomp,Hastie2009StatisticalLearning}, signal processing \cite{Cichocki2015SignalProcessingTensors}, face recognition \cite{Vijayakumari2013FaceRecognitionSurvey,Pai2015FaceRecognitionIllumination}, DNA analysis \cite{Wall2003MicroarrayAnalysisSVD} and other fields is a driving force behind the ongoing development of faster algorithms.
A use case in the approximate contraction of unstructured TNs has also been reported \cite{Jermyn2017automatic}.
Randomized algorithms outperform prior approaches in both speed and reliability \cite{Halko2011LowRankProbabilistic}.
Specifically, the randomized SVD (\RSVDPREFIX{}SVD{}) based on a probabilistic low-rank matrix-factorization algorithm \cite{Halko2011LowRankProbabilistic} is capable of delivering accurate results with failure probabilities that can be made arbitrarily small, independent of peculiar choices like starting vectors that are common in deterministic methods.
\RSVDPREFIX{}SVD{} thus promises to significantly accelerate TN methods that spend a considerable amount of resources in truncated SVDs.
Recently, significant speedup due to \RSVDPREFIX{}SVD{} has been reported in the TEBD simulation of open system dynamics \cite{Tamascelli2015TEBDRSVD}.
In particular, the authors of \cite{Tamascelli2015TEBDRSVD} showed that the robust \RSVDPREFIX{}SVD{} outperforms deterministic SVD algorithms in delivering a limited number of largest singular values (and corresponding vectors) while maintaining high accuracy in the simulated dynamics.
It is however an open question whether \RSVDPREFIX{}SVD{} can be applied with similar success in scenarios beyond the open system dynamics, since \RSVDPREFIX{}SVD{} performance and accuracy are closely tied to the encountered spectra of singular values.
This question applies especially to critical systems where the singular values are expected to decay slowly.
In this paper we demonstrate superior performance of \RSVDPREFIX{}SVD{} in the very original application field of TN methods, namely in identifying ground state properties of low-dimensional quantum lattice Hamiltonians.
We confirm significant speedup in different physical scenarios, including situations when the system is critical.
Embedded in full-fledged TN simulations, we compare the \RSVDPREFIX{}SVD{} against the truncated full SVD from state-of-the-art LAPACK implementations \textit{D/ZGESDD} \cite{Anderson1999LAPACK_userguide}, referred to as \DSVDPREFIX{}SVD{} in the following.
As benchmarks, we use variants of the quantum Ising model in imaginary TEBD time evolution and a DMRG-style ground state search with the hierarchical binary tree TN (TTN{}) \cite{Shi2006TTN,Hackbusch2009TTNScheme,Silvi2010bTTN,Murg2010TTN,Nakatani2013TTN,Gerster2014TTN,Gerster2016BoseHubbard}.
It will become apparent that a simple replacement of \DSVDPREFIX{}SVD{} with \RSVDPREFIX{}SVD{}-code can lead to speedups between one and two orders of magnitude, while preserving the same precision, even when state-of-the-art TN techniques are employed \cite{Singh2010SymTN,Singh2011SymU1}.
The paper is organized as follows:
First, we motivate the use of truncated SVD as a tool of information compression in typical TN scenarios in Sec.~\ref{sec:Compression}.
We continue with a short review of the \RSVDPREFIX{}SVD{} method and of how it can help achieve faster compression in Sec.~\ref{sub:RSVDAlgorithm}.
We then introduce our benchmark models in Sec.~\ref{sec:BenchmarkSetup} and present a detailed performance analysis by switching from \DSVDPREFIX{}SVD{} to \RSVDPREFIX{}SVD{} in Sec.~\ref{sec:Results}.
Sec.~\ref{sec:Discussion} concludes the paper with a discussion of the results and with practical tips for the implementation and identification of situations that may benefit from \RSVDPREFIX{}SVD{}.
\section{Low-rank factorization}\label{sec:Compression}
The maximal bond dimension $\chi$ of a TN is a fundamental parameter:
It can be linked to the amount of quantum entanglement that can be hosted in the network state \cite{Eisert2013TNReview_Entanglement}.
At the same time, $\chi$ determines the computational complexity of algorithms performed on the network.
Typical operations include the computation of expectation values, propagation in real or imaginary time and renormalization-steps updating the network description in iterative algorithms.
All these operations can result in the growth of index dimensions beyond the maximally allowed bond dimension.
A compression step is then achieved by means of a truncated SVD.
\subsection{Truncated SVD}
Let $A$ be a real- or complex-valued $m$-by-$n$ matrix with $m\ge n$.
In our case, $A$ usually represents the contraction of two tensors, and it can also be given in the form of a matrix product $X'Y'$.
The compression step then provides a rank-$\chi$ factorization $XY$ which is a good approximation $A\approx XY$, but also limits $X$ to an $m$-by-$\chi$ matrix and $Y$ to a $\chi$-by-$n$ matrix.
A standard solution is to compute the rank-$\chi$ truncated SVD as follows:
\begin{namedalgorithm}{\DSVDPREFIX{}SVD}
\caption{}
\label{alg:truncatedSVD}
\KwIn{$m$-by-$n$ matrix $A$, integer $\chi$}
\KwOut{rank-$\chi$ truncated SVD of $A$}
\BlankLine
Compute the full SVD of $A=U \Sigma V^\dagger$\;
Extract the $\chi$ largest singular values from $\Sigma$ and corresponding columns of $U$ and $V$\;
\end{namedalgorithm}
In particular, $U$ is an $m$-by-$n$ matrix, $\Sigma$ and $V$ are $n$-by-$n$ matrices, and we assume $\Sigma$ is the diagonal matrix containing the singular values $\sigma_j=\Sigma_{jj}$ in descending order $\sigma_1\ge\sigma_2\ge\dots\ge\sigma_n\ge0$.
We then discard the $n-\chi$ smallest singular values (assuming $\chi \le n$) and obtain for instance $X_{ij}=U_{ij}$ and $Y_{jk}=\sigma_j V^\dagger_{jk}$ for $j=1,\dots,\chi$.
The truncation error $\delta_\text{trunc}:=\|A-XY\|$ is then known to be minimal \cite{Eckart1936LowRank,Mirsky1960Symmetric} when measured in spectral norm ($\delta_\text{trunc}=\sigma_{\chi+1}$) or Frobenius norm ($\delta_\text{trunc}^2=\sum_{k=\chi+1}^{n} \sigma_k^2$).
The availability of highly optimized SVD routines makes the implementation of \DSVDPREFIX{}SVD{} straightforward.
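As a reference point, a minimal NumPy sketch of \DSVDPREFIX{}SVD{} reads as follows; our benchmarks instead call the \textit{LAPACK} routine \textit{D/ZGESDD} directly.
\begin{verbatim}
import numpy as np

def tsvd(A, chi):
    """Rank-chi truncated SVD obtained from a full SVD."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U[:, :chi], s[:chi], Vh[:chi, :]  # chi largest values
\end{verbatim}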
However, while it provides high accuracy, actually computing all $n$ singular values and -vectors in the full SVD of $A$ still requires $\mathcal{O}(mn^2)$ floating-point operations.
When the compression ratio $\chi/n$ becomes small, a more efficient protocol for computing the truncated SVD of $A$ is the \RSVDPREFIX{}SVD{} algorithm.
\subsection{Randomized algorithm}\label{sub:RSVDAlgorithm}
The basic idea of \RSVDPREFIX{}SVD{} is simple:
First, the input matrix $A$ is approximated with a rank-$\ell$ matrix $A_\ell \approx A$, which is obtained with the help of randomness. From there, a rank-$\chi$ truncated SVD of $A_\ell$ is obtained at significantly lowered computational cost compared to a full SVD of $A$.
Two characteristic choices lead to an accurate $A_\ell$:
\begin{enumerate*}
\item Oversampling the approximation with $\ell>\chi$ \cite{Martinsson2011RandLowRankNoPI}, and
\item employing a randomized power-iteration of length $q$ \cite{Rokhlin2010LowRankPIScheme}.
\end{enumerate*}
We state the complete algorithm first, as put forward in \cite{Halko2011LowRankProbabilistic}, and then discuss the impact of both parameters $\ell$ and $q$ on computational cost and quality of the outcome.
\begin{namedalgorithm}{\RSVDPREFIX{}SVD{}}
\caption{}
\label{alg:randomizedSVD}
\KwIn{$m$-by-$n$ matrix $A$, integers $\chi$, $\ell$, $q$}
\KwOut{approximate rank-$\chi$ truncated SVD of $A$}
\BlankLine
Generate an $n$-by-$\ell$ Gaussian matrix $\Omega$\;\label{alg:RSVDgaussian}
Compute $Y:=\left(AA^\dagger\right)^q A\Omega$\;\label{alg:RSVDsample}
Store in $Q$ the orthonormalized columns of $Y$\;\label{alg:RSVDortho}
Compute the rank-$\chi$ truncated SVD of $B:=Q^\dagger A$\;\label{alg:RSVDtruncate}
\end{namedalgorithm}
In detail, the algorithm begins by drawing a random test matrix $\Omega$ from a standard Gaussian distribution in step~\ref{alg:RSVDgaussian}.
Note that other choices may work as well, and that the quality of random numbers is not of crucial importance.
Step~\ref{alg:RSVDsample} then produces an $m$-by-$\ell$ sample $Y$ of the range of $A$, by multiplying the columns of the test-matrix with $\left(AA^\dagger\right)^q A$.
This process emphasizes the most relevant singular vectors, associated with large singular values $\sigma$, by a factor of $\sigma^{2q+1}$.
In order to maintain numerical stability when these factors range over several orders of magnitude, step~\ref{alg:RSVDsample} is carried out as a power iteration with subsequent QR factorizations to keep the sample orthonormal (see \cite{Halko2011LowRankProbabilistic}, Algorithm~4.4).
Step~\ref{alg:RSVDortho} then provides a basis of the sampled, relevant contributions to the range of $A$ in the orthonormal columns of the $m$-by-$\ell$ matrix $Q$.
A rank-$\ell$ approximation of $A$ is now available by projection into that subspace: $A_\ell:=QQ^\dagger A$.
Such an explicit construction is however not required.
Instead, step~\ref{alg:RSVDtruncate} invokes a rank-$\chi$ \DSVDPREFIX{}SVD{} factorization $\tilde{U}\tilde{\Sigma}\tilde{V}^\dagger$ of the typically much smaller $\ell$-by-$n$ matrix $B:=Q^\dagger A$.
Due to $A_\ell\approx A$, the $\chi$ largest singular values of $A$ are approximated in $\tilde{\Sigma}$.
If required, approximate associated left- and right singular vectors of $A$ are given by $Q\tilde{U}$ and $\tilde{V}$, respectively.
Both are exact isometries.
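For concreteness, a compact NumPy sketch of \RSVDPREFIX{}SVD{} is given below. It follows the four steps above, with the power iteration stabilized by intermediate QR factorizations; our benchmarks rely on the optimized \textit{RRSVD-Package} implementation instead.
\begin{verbatim}
import numpy as np

def rsvd(A, chi, ell, q, rng=np.random.default_rng()):
    """Approximate rank-chi truncated SVD of A."""
    m, n = A.shape
    Omega = rng.standard_normal((n, ell))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)         # sample the range of A
    for _ in range(q):                     # stabilized power iteration
        Q, _ = np.linalg.qr(A.conj().T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.conj().T @ A                     # small ell-by-n problem
    Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :chi], s[:chi], Vh[:chi, :]
\end{verbatim}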
The average \RSVDPREFIX{}SVD{} compression error $\varepsilon_\text{\RSVDPREFIX{}SVD}:=\mathbb{E}\left(\|A-Q\tilde{U}\tilde{\Sigma}\tilde{V}^\dagger\|\right)$ depends on the spectrum of singular values, and can be made arbitrarily close to the minimal truncation error $\delta_\text{trunc}$ in either Frobenius- or spectral norm:
Following the analysis in \cite{Gu2015LowRankPIError}, a minimal oversampling of $\ell\ge\chi+2$ already guarantees
\begin{equation}
\varepsilon_\text{\RSVDPREFIX{}SVD} \le \sqrt{\delta_\text{trunc}^2 + \mathcal{C}^2(n,\ell) \chi \sigma_{\ell-1}^2 \left(\sigma_{\ell-1} / \sigma_\chi\right)^{4q}}
\label{eq:error_rsvd}
\end{equation}
with $\mathcal{C}^2(n,\ell)$ in $\mathcal{O}(n\ell)$.
Note that $\delta_\text{trunc}$ depends on the selected norm, unlike the additional terms introduced by the randomized approach.
While highest accuracy is expected for quickly decreasing singular values, a striking feature of \RSVDPREFIX{}SVD{} is that already small powers $q>0$ drive those contributions, which add to $\delta_\text{trunc}$ in Eq.~\eqref{eq:error_rsvd}, to zero exponentially fast, even in cases of slowly decaying singular values.
Furthermore, sufficient oversampling in $\ell$ makes the probability of a substantial deviation from the average error bound arbitrarily small \cite{Gu2015LowRankPIError}.
Throughout our benchmarks, we make the conservative choice $\ell=2\chi$, which is suitable to keep the \RSVDPREFIX{}SVD{}-error within a small factor of $\delta_\text{trunc}$ even for $q=0$ \cite{Halko2011LowRankProbabilistic}.
In this configuration, \RSVDPREFIX{}SVD{} promises an asymptotic speedup over \DSVDPREFIX{}SVD{} in the order of the compression ratio
\begin{equation}
T_{R}/T_{T} \propto \chi/n \,,
\end{equation}
where $T_{R}$ and $T_{T}$ are the respective \RSVDPREFIX{}SVD{}- and \DSVDPREFIX{}SVD{} run times on similar input $A$.
The proportionality is due to the lower \RSVDPREFIX{}SVD{} computational complexity, which is dominated by the matrix-matrix products of $\mathcal{O}{\left(mn\ell(q+1)\right)}$ in sampling $Y$.
The improved scaling of the \RSVDPREFIX{}SVD{} algorithm is complemented by its conceptual simplicity, which directly translates to a fast, stable and easily parallelizable implementation in terms of highly optimized linear algebra routines as provided by level-3 \textit{BLAS} and \textit{LAPACK} \cite{Anderson1999LAPACK_userguide}.
Various \RSVDPREFIX{}SVD{}-implementations are available, for instance in \textit{MATLAB}~\cite{Szlam2014RSVDinMATLAB}, in R~\cite{Erichson2016LowRankProbabilisticInR}, and via C-libraries such as \textit{RSVDPACK}~\cite{Voronin2015RSVDPACK} or the \textit{RRSVD-Package}~\cite{Tamascelli2015TEBDRSVD} which our benchmarks are based on.
\section{Benchmarks}\label{sec:BenchmarkSetup}
\begin{figure}
\includegraphics[width=1.\columnwidth]{figure1.pdf}
\caption{ \label{fig:svd_update}
Compression steps in our benchmarked tensor networks of bond dimension $\chi$ and local dimension $d$.
The truncated SVD (highlighted in red) retains at most $\chi$ largest singular values (red bar) and produces isometric tensors (partially shaded).
Left: The TEBD algorithm updates the MPS (top) by absorbing nearest-neighbor evolution exponentials in the form of a full four-link tensor (B) or a sum over $K$ Kronecker products (P), resulting in different compression problems (bottom).
Right: In binary tree TNs (top), we minimize the energy of an effective Hamiltonian by jointly optimizing two adjacent tensors which are finally rank-$\chi$ factorized (bottom, left to right).
}
\end{figure}
We benchmark \RSVDPREFIX{}SVD{} against \DSVDPREFIX{}SVD{} performance, when employed in state-of-the-art TN algorithms.
Our focus lies on closed system ground states, and we compare both run time and precision of the relevant physical quantities in the outcomes.
We first outline the TN algorithms that drive our benchmark simulations and the role played by compression.
Afterwards, we report model Hamiltonians and parameters. We close this section with a brief account on the numerical implementation.
\subsection{TN algorithms}\label{sub:TNalgorithms}
We employed TEBD imaginary time evolution on MPSs, and DMRG-style variational ground state search in the TTN{} \cite{Gerster2014TTN} with double-tensor optimization.
Both algorithms are well established techniques in ground state search of quantum lattice Hamiltonians.
They iteratively approximate those ground states in TNs of a selected maximal bond dimension $\chi$, defined over $d$-dimensional `physical' tensor indices that correspond to lattice sites (see Fig.~\ref{fig:svd_update}).
Specifically, both algorithms perform local update steps on adjacent tensors, which require a truncated SVD to recompress bond indices.
Note that it is the absence of loops (network cycles) in MPS and TTN{} geometries that makes truncated SVD an optimal protocol here, as it maintains maximum quantum fidelity between the states before and after the compression of a single bond \cite{White1992DMRG,White1993DMRG,Silvi2018LoopfreeTNMethods}.
The two methods, however, rely on different local update steps:
In the TEBD algorithm, designed for time evolution with nearest-neighbor interactions, the update step consists of an application of a (real or imaginary) time-evolution exponential on two adjacent lattice sites. In standard TEBD, the exponential takes the form of a single four-index tensor or `block' (B) $u_\text{NN}$.
It can also be given by a sum of Kronecker products (P) of single-site operators $\sum_{k=1}^K u_L^k \otimes u_R^k$, which can be more time- and memory efficient for $K<d$.
Both strategies pose different compression problems (left side of Fig.~\ref{fig:svd_update}):
The block-update contracts directly into a square matrix of dimension $\chi d$, while in the product-update we obtain a $(\chi K)$-by-$(\chi d)$ matrix instead.
In both cases, the resulting matrix $A$ must be compressed into a rank-$\chi$ factorization with compression ratios $1/d$ and $1/\min(K,d)$, respectively.
Consequently, we expect \RSVDPREFIX{}SVD{} to significantly speed up TEBD simulations on lattices with larger local dimensions $d$:
In terms of computational complexity, TEBD with typical bond dimension $\chi \ge d$ is dominated by the \DSVDPREFIX{}SVD{} compression step of $\mathcal{O} \left(\chi^3 d^3\right)$ in block- (B) and $\mathcal{O} \left(\chi^3 K^2 d\right)$ in product- (P) updates for $K\le d$.
The asymptotic \RSVDPREFIX{}SVD{} speedup can reduce this scaling to $\mathcal{O} \left(\chi^3 d^2\right)$ (for B), as demonstrated by \cite{Tamascelli2015TEBDRSVD}, and to $\mathcal{O} \left(\chi^3 K d\right)$ (for P), respectively -- which are typical costs exhibited by other operations within TEBD as well.
In the TTN{} setting, instead, the update step directly replaces two adjacent tensors with a matrix $A$ associated to the lowest eigenvector of an effective Hamiltonian.
The matrix $A$ is at most a $\chi^2$-by-$\chi^2$ square matrix.
On some lower levels of the tree geometry, smaller dimensions can be encountered, with $d^2$ at the physical indices on the bottom (right side of Fig.~\ref{fig:svd_update}).
The majority of run time however is spent on the large update matrices, and these require a compression by a ratio $1/\chi$.
A massive speedup of the compression step, in the order of the bond dimension, can thus be expected from employing \RSVDPREFIX{}SVD{} instead of \DSVDPREFIX{}SVD{}.
A feature of all simulations is that we explicitly target the symmetry-invariant ground states under certain global Abelian symmetries of the Hamiltonian.
These grant us an inner block-structure in all tensors, which enhances efficiency and precision of the simulation \cite{Singh2010SymTN,Singh2011SymU1}.
In the compression problem, we therefore encounter strictly block-diagonal matrices $A$, encoded in $N$ non-trivial blocks.
The dimensions of these blocks correspond to degeneracies of symmetry sectors, and add up to the respective full dimensions of $A$.
In all benchmarked situations, $N$ equals the (small) number of global symmetry sectors, and the optimal TN ground state approximations display more or less evenly sized block dimensions.
Since matrix factorizations can be done block-wise, all the actual matrix dimensions passed to the truncated SVD algorithm are thus roughly those of $A$ divided by $N$.
But as the truncation rank $\chi_s \approx \chi / N$ per block is similarly reduced, no change in the compression ratio and hence in the asymptotic speedup occurs.
Note that the truncation rank per block is usually not known a priori, as it depends on the number of large singular values $\sigma_j \ge \sigma_{\chi}$ therein.
This information is only directly available with \DSVDPREFIX{}SVD{}, where all singular values of all blocks are computed.
\RSVDPREFIX{}SVD{} on the other hand delivers just the requested number of singular values for each block, and some estimate of the appropriate truncation $\chi'_s \approx \chi_s$ must be made beforehand.
After \RSVDPREFIX{}SVD{}s are then performed in all sectors, we post-select the $\chi$ largest singular values and obtain the new optimal block dimensions $\chi_s$.
In our TEBD simulations we choose a block-wise truncation rank $\chi'_s = \chi/N + c$ with a small constant $c$ that allows for some variation in sector sizes (typically less than $5\%$).
For TTN{}, we instead make the simplest maximal choice $\chi'_s = \chi$, which reduces the achievable speedup by a (small) factor $N$ but does not require any estimates.
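The sector-wise compression with post-selection can be sketched in a few lines; for simplicity, each block is factorized below with a plain truncated SVD, whereas the benchmarks use \RSVDPREFIX{}SVD{} per block.
\begin{verbatim}
import numpy as np

def truncate_blocks(blocks, chi, chi_est):
    """Compress a block-diagonal matrix given as {sector: block},
    keeping at most chi_est values per block, then post-selecting
    the chi largest singular values across all sectors."""
    results, all_svals = {}, []
    for sector, A in blocks.items():
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        k = min(chi_est, len(s))
        results[sector] = (U[:, :k], s[:k], Vh[:k, :])
        all_svals.extend((sv, sector) for sv in s[:k])
    all_svals.sort(key=lambda x: x[0], reverse=True)
    keep = {sector: 0 for sector in blocks}
    for _, sector in all_svals[:chi]:
        keep[sector] += 1  # new optimal block dimension chi_s
    return {sec: (U[:, :keep[sec]], s[:keep[sec]], Vh[:keep[sec], :])
            for sec, (U, s, Vh) in results.items()}
\end{verbatim}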
\subsection{Models}
We simulated the quantum Ising model with ferromagnetic interaction in a tunable transverse field $h$, on two different lattices:
First, a 1D spin-$S$ chain of length $L$ with Hamiltonian
\begin{equation}
\label{eq:ising_chain}
{H}_\text{chain} = -\frac{1}{S^2} \sum_{j} X_j X_{j+1} + \frac{h}{S} \sum_{j} Z_{j} \,,
\end{equation}
where $X$ and $Z$ are local spin operators (we set $\hbar=1$) and subscripts denote application sites.
In general, in a computational spin-$Z$ eigenbasis $\left\{ \ket{m} \right\}$ of local dimension $d=2S+1$ with integer or half-integer magnetic quantum numbers $m \in \left\{-S,-S+1,\dots,S\right\}$, we have
\begin{subequations}
\begin{eqnarray}
\braket{m'|Z|m} &=& m \times \delta_{m',m} \,,\\
\braket{m'|X|m} &=& \sqrt{S\left(S+1\right) - m m'} \times \nonumber \\
&& \times \left(\delta_{m',m+1}+\delta_{m'+1,m}\right) / 2 \,.
\end{eqnarray}
\end{subequations}
For $S=1/2$, $X$ and $Z$ reduce to standard Pauli matrices and the model is exactly solvable with quantum critical point at $|h|=h_c=1$.
For $S\to\infty$, the transition point shifts with $h_c\to2$ \cite{Penson1984TransvIsingHighSpinCritical}.
The TEBD is performed for $S>1/2$ in open boundary conditions with values of $h$ in various distances to the critical points, which we estimated from finite-size scaling techniques \cite{Fisher1972FSScaling}.
In our TTN{} benchmark we focus exclusively on $S=1/2, h=h_c$ in periodic boundary conditions.
The second benchmark is the simulation of a spin-$1/2$ two-dimensional (2D) square-lattice Ising model in cylindrical boundary conditions of length $L$ and circumference (or width) $W$.
With respective site-subscripts $i$ and $j$, the Hamiltonian reads
\begin{equation}
\label{eq:ising_cylinder}
{H}_\text{cyl} = -\sum_{i,j}X_{i,j} X_{i,j+1} -\sum_{i,j}X_{i,j} X_{i+1,j} +h\sum_{i,j}Z_{i,j} \,.
\end{equation}
By summation over $i$, we map this Hamiltonian onto an open chain of length $L$ with local dimension $d=2^{W}$.
For reasonably small values $W$, the ground state can be approximated in a MPS and its critical behavior can be studied with DMRG \cite{Jongh1998Ising2DDMRG}.
We performed imaginary TEBD at various values of $h$, including points in proximity of the critical field at around $h_c \approx 3.044$, as reported with high precision in Monte Carlo and TN studies on the square lattice \cite{Bloete2002IsingMC,Rizzi2010MERACritical}.
As is well known, in the thermodynamic limit, the one- and two-dimensional Ising models of Eqs.~\eqref{eq:ising_chain} and \eqref{eq:ising_cylinder} exhibit spontaneous ferromagnetic order for $|h|<h_c$, which breaks down in the paramagnetic phase for $|h|>h_c$.
Both phases are gapped, however at $|h| = h_c$, the systems become critical and gapless.
In the case of 1D lattices, we know that the ground states of a short-ranged, gapped system obey area laws for the entanglement entropy, while this is not true for a critical, gapless system \cite{Audenaert2002EntanglementChain,Plenio2005AreaLaw,Eisert2010AreaLaw,Wolf2006AreaLawFermions}.
Since squares of the singular values in loop-free TN compression steps correspond to reduced density eigenvalues of lattice bipartitions, singular values are directly linked to bipartite entanglement measures such as the von~Neumann entropy, and thus the error analysis Eq.~\eqref{eq:error_rsvd} of \RSVDPREFIX{}SVD{} is linked to the physical properties of the ground state.
For this reason we perform our benchmarks at various values of $h$, including values in close proximity to $h_c$.
We expect the latter to pose the most demanding situation for \RSVDPREFIX{}SVD{} due to a potentially slow decay of tail singular values \cite{Calabrese2008CriticalSvals1D}, which make greater amounts of computational resources necessary (via parameters $q,\ell$) to avoid larger errors in Eq.~\eqref{eq:error_rsvd}.
As a comment, we remark that the benchmarked MPS and TTN{} simulations are best suited for non-critical systems due to finite bond dimensions $\chi$ that limit correlations and entanglement.
However, the selected finite lattice sizes admit simulations at and around $h=h_c$, as is typical in extrapolating critical properties via finite-size scaling techniques \cite{Fisher1972FSScaling, Cardy2012FSScaling}.
Furthermore, TTN{} have capabilities beyond MPS in encoding quantum critical ground states \cite{Silvi2010bTTN}.
Both Ising models in Eqs.~\eqref{eq:ising_chain} and \eqref{eq:ising_cylinder} exhibit a global parity symmetry because their Hamiltonians commute with $\bigotimes_{j=1}^L {P}_j$, being defined locally by $\braket{m'|P|m} = \left(-1\right)^{m+S} \times \delta_{m',m}$.
Local basis states transform as $P\ket{m^\pm}=\pm\ket{m^\pm}$ and fall either in the even `$+$' or odd sector `$-$' of dimensions $d_+$, $d_- \approx d/2$ respectively.
Rotations in the cylindrical boundary conditions provide an additional Abelian $Z_{W}$ cyclic symmetry for \eqref{eq:ising_cylinder}.
As a consequence, even and odd sectors further decompose into $W$ different angular momentum sectors.
As mentioned in Sec.~\ref{sub:TNalgorithms}, we encode these symmetries explicitly, which allows us to restrict the TN state representation to the ground state global invariant sector $s=0$, that is the even parity and rotationally invariant subspace.
\subsection{Implementation}
Here we report the detailed implementation of a fair run-time- and precision comparison between \DSVDPREFIX{}SVD{} and \RSVDPREFIX{}SVD{}, and discuss technical details of the benchmarks.
We performed complete runs of our TEBD and TTN{} benchmark algorithms by iterating double-tensor updates until the energy expectation value of the TN state stagnates within some threshold $\delta E$.
Each run was repeated for different field $h$, maximal bond dimension $\chi$, lattice length $L$ and a selected spin $S$ or width $W$, either with \DSVDPREFIX{}SVD{} or \RSVDPREFIX{}SVD{} in the compression steps.
For the precision comparison, we extracted expectations of energy and magnetization order, correlation- and entanglement properties and singular values from the produced final states.
The magnetization order $M$ was measured from nonlocal correlations,
\begin{equation}
M=\sqrt{ \sum_{k\neq k'} \braket{X_{k} X_{k'}} / \mathcal{N} } \,,
\label{eq:magnetization}
\end{equation}
where $k$ goes over all lattice sites and $\mathcal{N}$ counts the number of expectations summed over.
The estimate for the correlation-length $\bar{\xi}$ was computed from expectations values of $X_{(k)}\equiv X_k$ in the chain and $X_{(k)}\equiv X_{i,j}$ in the cylinder as follows:
\begin{equation}
\bar{\xi} = \sqrt{ \sum_{r>1} (r-1)^2 C_r / \sum_{r>1} {C_r} } \,.
\label{eq:corrlen}
\end{equation}
Here, $C_r$ denotes the bulk average over MPS sites $j$ of $\braket{X_{(i,)j}\, X_{(i,)j+r}}$.
The additional site index $i$ appears only in the two-dimensional model and is averaged over as well, so as to extract only the `horizontal' correlation length subject to compression through the MPS bond links.
Note that $\bar{\xi}$ tends to underestimate the actual correlation-length and saturates below $L/\sqrt{6}$ if it becomes large compared to the system size.
Furthermore, profiles of the von~Neumann entropy $S_N(j)=-\sum_k \lambda^2_k \log(\lambda^2_k)$ have been obtained from the compressed singular values at MPS bonds $j=1,\dots,L-1$.
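Both estimators are straightforward to evaluate; a NumPy sketch with hypothetical array arguments reads:
\begin{verbatim}
import numpy as np

def corr_length(C):
    """Estimate of xi-bar from the bulk-averaged correlators
    C[r-1] = <X_j X_{j+r}> for r = 1, 2, ..."""
    r = np.arange(1, len(C) + 1)
    m = r > 1
    return np.sqrt(np.sum((r[m] - 1)**2 * C[m]) / np.sum(C[m]))

def entropy(lam):
    """Von Neumann entropy from the singular values at one bond."""
    p = lam**2
    p = p[p > 0]
    return -np.sum(p * np.log(p))
\end{verbatim}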
We also profiled the individual run times spent in the truncated SVDs of compression steps, $T_{T}$ and $T_{R}$, and the time spent in all remaining parts of the algorithm, $\overline{T}_{T}$ and $\overline{T}_{R}$ for \DSVDPREFIX{}SVD{} and \RSVDPREFIX{}SVD{} runs, respectively.
All these run times have been divided by the number of iterations performed in the simulation.
However, we have found no substantial differences in the number of update steps performed with \DSVDPREFIX{}SVD{} and \RSVDPREFIX{}SVD{}, as reported in Sec.~\ref{sec:Results}.
We therefore obtain the average speedup in compression due to \RSVDPREFIX{}SVD{} from
\begin{equation}
\tau := f \cdot T_{T} / T_{R} \,,
\label{eq:speedup}
\end{equation}
where $f:=\overline{T}_{R} / \overline{T}_{T}$ is the ratio of run times spent outside compression.
Since our benchmarks have been performed on shared cluster nodes, we introduced the factor $f$ to equalize the effect of the computational environment on the bare compression times.
Thus, simulation runs that were slowed down by other computations on a cluster node can be fairly compared to faster executed simulation runs.
The complete simulation protocol for TEBD was as follows:
Starting from a product state with randomized tensors of bond dimension one, the algorithm is run in imaginary time with some sufficiently large initial time step $dt$ in the local imaginary time-evolution exponential.
After a few first iterations out of typically many hundred, the bond dimension saturates the allowed maximum, and we can safely assume $\chi$ to be the typical compression rank.
The simulation stops when convergence of the energy is detected as follows:
Throughout the simulation, the change of the expectation value of the energy is monitored in regular intervals.
Whenever this change drops below the targeted precision threshold $\delta E$, the simulation time step $dt$ is subsequently reduced by a constant factor.
Convergence is declared when the total energy decrement between two time-step reductions falls below $\delta E$, too.
With smaller choices of $\delta E$, better approximations of the final MPS to the actual ground state of the system can be expected within the bond dimension $\chi$, but at the cost of increased number of iterations and run time.
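Schematically, this convergence protocol can be written as follows, with \texttt{sweep(dt)} a placeholder performing one monitoring interval of imaginary-time updates and returning the current energy expectation:
\begin{verbatim}
import numpy as np

def run_imaginary_tebd(sweep, dt0, shrink=0.7, delta_E=1e-13):
    """Sketch of the time-step reduction schedule."""
    dt, E = dt0, sweep(dt0)
    E_at_last_reduction = np.inf
    while True:
        E_new = sweep(dt)
        if abs(E_new - E) < delta_E:     # stagnation at this dt
            if abs(E_new - E_at_last_reduction) < delta_E:
                return E_new             # converged overall
            E_at_last_reduction = E_new
            dt *= shrink                 # reduce by a constant factor
        E = E_new
\end{verbatim}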
The TTN{} ground state search employs randomized initial states remaining at maximal bond dimension throughout the entire simulation.
The same initial states were used in comparative \DSVDPREFIX{}SVD{} and \RSVDPREFIX{}SVD{} runs.
The algorithm then performs sequences of double-tensor updates on adjacent tensors, until the difference in energy expectation between subsequent sweeps falls below machine precision.
All simulations were carried out in double precision arithmetic with complex numbers, except for the imaginary TEBD on the spin-$1/2$ chain which we benchmarked in a TN representation with real elements, a common choice to enhance efficiency under time-reversal invariance.
Linear algebra computations (\textit{BLAS}, \textit{LAPACK}) were performed with the `Intel Math Kernel Library' (MKL) in versions 11.x.
Our fully truncated \DSVDPREFIX{}SVD{} implementation is based on the \textit{LAPACK} \textit{D/ZGESDD} divide-and-conquer algorithm.
For \RSVDPREFIX{}SVD{}, we employed the fixed-rank implementation from the \textit{RRSVD-Package} \cite{Tamascelli2015TEBDRSVD} with parameters $q=4, \ell=2\chi$ (see Sec.~\ref{sub:RSVDAlgorithm}) for any targeted truncation rank $\chi$.
This implementation employs \textit{LAPACK} \textit{D/ZGESVD} for the final factorization in step \ref{alg:RSVDtruncate} of the \RSVDPREFIX{}SVD{} algorithm.
All simulations were executed with a single-threaded compression step on 16-way Intel Xeon E5--2670 (2.6\,GHz) compute nodes.
\section{Results}\label{sec:Results}
\begin{figure}
\includegraphics[width=1\columnwidth]{figure2.pdf}
\caption{
\label{fig:speedup_local}
Speedup in compression step due to \RSVDPREFIX{}SVD{} in TEBD simulations of increasing local dimensions.
Top: 2D Ising model ($L=30$) as a function of the widths $W$.
Bottom: 1D Ising model ($L=100$) as a function of local spin $S$.
Dark- and light blue points represent data at bond dimensions $\chi=100$ and $\chi=150$, respectively, both from `block'-update and fixed convergence criteria (1D: $\delta E=10^{-13}$, 2D: $\delta E=10^{-8}$ except $W=8$ was stopped before convergence).
Error bars indicate a $10\%$ error estimate in speedups.
Orange crosses show speedup in higher precision target $\delta E=10^{-14}$, $\chi=100$.
Dashed lines are linear fits for $d>10$ with slopes $\approx 0.10,0.13$ (2D) and $0.18,0.21$ (1D) for $\chi=100,150$ respectively.
}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{figure3.pdf}
\caption{ \label{fig:speedup_bond}
Dependency of \RSVDPREFIX{}SVD{}-speedup on the bond dimension in TEBD simulations.
Top: 2D Ising model ($L=30$, $\delta E=10^{-8}$) at width $W=6$.
Blue and gray: `block' (B) and `product' (P) updates, respectively.
Bottom: 1D Ising model ($L=100$, $\delta E=10^{-13}$) at spin $S=5$.
}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{figure4.pdf}
\caption{ \label{fig:precision}
Relative error in 1D TEBD final state energy $\Delta E$ (top) and magnetization $\Delta M$ (bottom), compared to extrapolated ground state values $E_\text{best}$ and $M_\text{best}$ for $S=5, L=100$ and transverse fields in the ferro- ($h=1.0$) and paramagnetic ($h=2.0$) phases as well as close to the critical point.
Black and orange bars correspond to \DSVDPREFIX{}SVD{}- and \RSVDPREFIX{}SVD{} results, respectively, at convergence thresholds $\delta E=10^{-11}$ and $10^{-13}$ (light shaded).
Each group of three bars shows the error with increasing bond dimensions $\chi=50,75,100$ (left to right).
}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{figure5.pdf}
\caption{ \label{fig:critical_props}
Correlation properties and singular values in TEBD simulated ground states of 1D (left, $S=5,L=100$) and 2D Ising models (right, $W=6,L=30$) at various transverse fields $h$.
Top panels: Magnetization $M$ (black) and estimates for the correlation length $\bar{\xi}/L$ (gray dashed).
Errors are smaller than point sizes.
Top insets: Von~Neumann Entropies $S_N(j)$ from singular values on MPS bonds.
1D, left: $h=1.5$ (green), $1.762$ (purple with light-purple fit $S^C_N(j)$, see text), $1.774$ (red), 2.0 (orange).
2D, right: $h=2.0$ (green), $3.04$ (red), $3.5$ (orange).
Bottom insets: Singular values (black) at a central bond in the invariant sector for near-critical fields $h=1.7735$ in 1D (left) and $h=3.04$ in 2D (right) and a polynomial fit $\sigma_k\approx (C_1 k + C_2)^{-\gamma}$ (orange, $C_2=0$ in 2D).
Bottom panels: Decay exponents $\gamma$ of singular values.
Up/downwards pointing triangles indicate even/odd sectors, respectively.
Shaded area encloses fit errors.
}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{figure6.pdf}
\caption{ \label{fig:svals_decay}
Singular values $\lambda_k$ monitored over algorithm runtime in 1D Ising models at, or close to, the critical field.
Both panels show values in the invariant sector at a central lattice bipartition with $\chi=100$.
The dashed lines indicate the truncation at $\chi_{0}=50$.
Left: Imaginary TEBD ($S=5$, $L=100$, $h=1.7735$, $\delta E=10^{-13}$, block-update `B').
The time step $dt$ was subsequently reduced to $dt_i=0.4\times 0.7^i$ for $i=0,\dots,11$ (red to black).
Right: TTN{} ground state search ($S=1/2$, $L=128$, $h=1.0$), after $i=0,\dots,4$ network updates (red to black).
}
\end{figure}
We first report the speedups obtained from upgrading compression steps from \DSVDPREFIX{}SVD{} to \RSVDPREFIX{}SVD{}.
We then present evidence that no loss of precision occurs due to \RSVDPREFIX{}SVD{}.
Finally we present selected ground state properties and spectra of singular values that we encountered in our benchmarks.
All the following speedups have been obtained from independent simulations according to Eq.~\eqref{eq:speedup} with an estimated uncertainty of at most $\Delta \tau \approx 10\,\%$.
Equal numbers of \RSVDPREFIX{}SVD{} and \DSVDPREFIX{}SVD{} compression steps were performed in all TTN{} simulations.
Some imaginary TEBD runs converged in fewer iterations with either \RSVDPREFIX{}SVD{} or \DSVDPREFIX{}SVD{}, but those fluctuations were negligible compared to $\Delta \tau$.
Speedups up to $\tau\approx24$ have been reached in TEBD simulations of increasing local dimensions, as shown in Fig.~\ref{fig:speedup_local} for the one- and two-dimensional Ising models of Eqs.~\eqref{eq:ising_chain} and \eqref{eq:ising_cylinder}.
We observe that \RSVDPREFIX{}SVD{} outperforms \DSVDPREFIX{}SVD{} for $d > 10$, with speedups directly proportional to $d$ as predicted by the asymptotic cost analysis in Sec.~\ref{sub:TNalgorithms}.
These speedups remain stable under different algorithm parameters, such as changes in convergence criteria (orange crosses in bottom panel of Fig.~\ref{fig:speedup_local}).
We also found no significant dependency on the transverse field $h$:
Thus, all speedups are geometric means over five (2D) and ten (1D) different values of $h$ in various distances from (including close proximity to) the critical point, and each speedup falls within the error bars.
In all cases however, the speedup tends to increase with the bond dimension, as shown in Fig.~\ref{fig:speedup_bond} for selected one- and two-dimensional TEBD simulation.
The data also suggest some saturation of the speedup at high bond dimensions.
Again, all speedups shown are geometric means over at least ten simulations at transverse fields $h$ in various distances from $h_c$, which had no significant impact on the speedup, as can be seen from the error bars that always enclose minimal and maximal speedup.
In support of our TEBD results, the TTN{} benchmarks demonstrate massive \RSVDPREFIX{}SVD{} speedups $\tau$ when bond dimensions are scaled up:
For instance at $\chi=60$ we found $\tau\approx6$, while $\chi=100$ already provided us with $\tau\approx11$ on a spin-$1/2$ lattice of length $L=64$.
Next, we assess the accuracy of the final states delivered by our \DSVDPREFIX{}SVD{} and \RSVDPREFIX{}SVD{} benchmarks.
To this end, we compare the simulation errors in energy expectation value $E$ and non-local magnetization order parameter $M$ of Eq.~\eqref{eq:magnetization} for various simulation parameters such as $h$, $\chi$ and precision target $\delta E$.
The errors are computed from differences $\Delta E = (E - E_\text{best})/E_\text{best}$ and $\Delta M = \left|M - M_\text{best}\right|/M_\text{best}$ to high precision data $E_\text{best}$ and $M_\text{best}$, respectively.
In TEBD simulations, $E_\text{best}$ and $M_\text{best}$ have been extrapolated from bond dimensions and precisions up to $\chi=150$, $\delta E=10^{-14}$ using \DSVDPREFIX{}SVD{}, with uncertainty smaller than all observed differences $\Delta E$ and $\Delta M$ (typically one or more orders of magnitude).
We found that both \DSVDPREFIX{}SVD{} and \RSVDPREFIX{}SVD{} produce comparable simulation errors in all benchmarks, as exemplified in Fig.~\ref{fig:precision} for TEBD simulations of the one-dimensional Ising model for $L=100$ and $S=5$.
We found similar results for up to $L=400$ in various precision targets and bond dimensions $\chi \le 100$ in both para- and ferromagnetic phases as well as close to the critical point.
In two-dimensional TEBD simulations at $W=6$, $L=30$ and in the TTN{} benchmarks, \DSVDPREFIX{}SVD{}- and \RSVDPREFIX{}SVD{} results even matched within computational precision.
The range of physical properties covered by our benchmarks is demonstrated in Fig.~\ref{fig:critical_props}, where the upper panel shows the magnetization $M$ and the estimate for the correlation length $\bar{\xi}$ (see Eq.~\eqref{eq:corrlen}) in the final TEBD simulation states.
These results, taken from \DSVDPREFIX{}SVD{} runs of 1D and 2D Ising models for some of the benchmarked transverse fields $h$, display values of magnetic order and correlation lengths spanning the entire spectrum of possible outcomes.
Furthermore, the von~Neumann entropies $S_N(j)$ on the MPS bonds confirm area-laws in both ordered and unordered phases as well as typical corrections near the 1D critical point, which are well described by a fit to $S^C_N(j) = a + c/6 \log\{ L/\pi \sin( \pi j/L ) \}$ with some constants $a,c$ \cite{Calabrese2009EntanglementEntropyReview}.
The corresponding singular values are detailed in the bottom panel of Fig.~\ref{fig:critical_props}.
Within the bond dimensions $\chi_s$ of individual symmetry sectors, they are well fitted by power-law decays $\sigma_k\approx (C_1 k + C_2)^{-\gamma}$ with fit constants $C_1$, $C_2$ and exponents $-\gamma$ ranging from $-2$ to $-11$.
This decay of singular values, which relates physical properties to \RSVDPREFIX{}SVD{} performance (as discussed in Sec.~\ref{sec:BenchmarkSetup}), is further analyzed in Fig.~\ref{fig:svals_decay}, where we present complete spectra of singular values from the local compression problems $A$, including the truncated tail of small singular values, for a central bond and critical transverse field.
In both TEBD (left panel) and TTN{} (right panel) simulations, the spectrum of singular values $\lambda_k$ can be separated into two parts:
For $k\le\chi_s$, the spectrum appears to undergo only minor changes throughout the algorithm run time, and is well described by the actual decay in the final (ground) state (see Fig.~\ref{fig:critical_props} for TEBD) over the majority of the run time.
For $k>\chi_s$, on the other hand, we observe a tail spectrum that does not necessarily follow the characteristics expected from the actual ground state (i.e. $\chi\to\infty$).
Namely, it changes significantly over the algorithm run time, and exhibits the fastest decay in the final iteration(s) of the algorithm:
In case of TEBD, the tail can be seen to be bounded by a rapid polynomial decay, well separated from the retained singular values, as it finally becomes proportional to a very small evolution time step $dt$.
In TTN{}, compression starts from a rather flat tail spectrum, that quickly approaches an exponential decay.
This demonstrates that the compression problem within the TN approximation becomes increasingly well conditioned for \RSVDPREFIX{}SVD{}, even close to the phase transition, as the algorithm converges closer to the ground state.
This allows \RSVDPREFIX{}SVD{} to deliver higher precision (cf. Eq.~\eqref{eq:error_rsvd}, due to oversampling) with higher reliability right in the final stages of the algorithms when most needed.
\section{Discussion and Outlook}\label{sec:Discussion}
We provided evidence for substantially accelerated compression of tensor networks in all benchmarked algorithms by simply replacing the full truncated \DSVDPREFIX{}SVD{} with the \RSVDPREFIX{}SVD{} algorithm.
In particular, \RSVDPREFIX{}SVD{} outperformed \DSVDPREFIX{}SVD{} with the expected asymptotic speedup, that is inversely proportional to the compression ratio, when not more than $10\%$ of singular values were retained.
Remarkably enough, we attained those speedups without loss of precision in the simulated ground states:
With \RSVDPREFIX{}SVD{} we reproduced local expectation values such as the energy, as well as long-range correlation- and entanglement properties, with differences to \DSVDPREFIX{}SVD{} simulations far smaller than the inherent ansatz errors due to a finite bond dimension or number of iterations performed.
By benchmarking with encoded Abelian symmetries, we confirmed the \RSVDPREFIX{}SVD{} speedup in reduced bond- and local dimensions per sector.
Even though small matrix sizes can reduce speedups, \RSVDPREFIX{}SVD{} becomes increasingly useful with the typically large bond dimensions that are required for ground state approximation.
All results, moreover, hold up independently from the various physical scenarios, i.e.\ off- and at quantum critical points over a wide range of correlation lengths and respective spectra of the singular values.
Remarkably, the iterative nature shared by many TN algorithms has been observed to work in favor of \RSVDPREFIX{}SVD{} in that the truncated tail singular values decayed fast in the relevant final iterations, even close to phase transitions.
We expect the presented results to be robust and reproducible in a wide range of tensor network applications.
For instance, our choice of \RSVDPREFIX{}SVD{}-parameters ($q,\ell$) has been extremely conservative, as confirmed by the small differences to \DSVDPREFIX{}SVD{} in the outcomes, and can be fine-tuned for much higher efficiency:
Namely, by reducing $q$, \RSVDPREFIX{}SVD{} might outperform \DSVDPREFIX{}SVD{} for compression ratios as moderate as $20\%$ or less.
With \RSVDPREFIX{}SVD{}, precision and efficiency of the compression can further be balanced dynamically, which promises significant reduction of run time in the earlier algorithm stages, as is already standard practice for instance in the eigensolver optimization steps in DMRG.
In this regard, it may prove specifically useful that \RSVDPREFIX{}SVD{} can also deliver a fixed error (instead of fixed rank) approximation:
Parameters such as $\chi$, $\ell$ and possibly $q$ are then dynamically adjusted to deliver a compression within a given error bound \cite{Halko2011LowRankProbabilistic,Gu2015LowRankPIError}.
Such dynamics might also provide an alternative route to fixing the compressed sector sizes $\chi_s$ in the presence of symmetric TNs, even though good estimates (for instance based on previous iterations) plus added oversampling work well, as demonstrated.
Moreover, ongoing development of the \RSVDPREFIX{}SVD{} method itself may lead to further optimizations, such as modified power iteration schemes for faster convergence \cite{Musco2015RandomizedKrylowSVD} or single view algorithms \cite{Tropp2016SingleView}.
With the benchmarked ground state simulations, it is clear that \RSVDPREFIX{}SVD{} is indeed not limited to open system real time dynamics with TEBD \cite{Tamascelli2015TEBDRSVD}, and we foresee a broad impact on DMRG and imaginary- or real time evolution codes that operate on ground states, including short-time quenches \cite{Eisert2006QuenchEntanglement} out of equilibrium via TEBD or the time-dependent variational principle \cite{Haegeman2011TDVP}.
This in turn could open new possibilities, for instance, in the numerical study of the Kibble-Zurek mechanism \cite{Kibble1976Cosmic,Zurek1985Cosmological}.
More generally, \RSVDPREFIX{}SVD{} has great potential in all scenarios that make extensive use of truncated SVD with small compression ratios.
Those arise naturally in TN algorithms that operate on potentially large tensors and in various double-tensor update strategies that are regularly employed in DMRG and time evolution codes when Abelian or non-Abelian symmetries are encoded, and to avoid meta-stabilities that hinder convergence \cite{White2005SingleCenterDMRG,Hubig2015DMRGSubspaceExpansion}.
Such scenarios include, for example, applications to higher dimensions, lattice models with large local dimensions or applications of TNs in quantum chemistry.
\begin{acknowledgments}
We thank Matthias Gerster for discussions and sharing his TTN{} code.
The authors gratefully acknowledge support from the Carl-Zeiss-Stiftung via Nachwuchsf\"orderprogramm, the state of Baden-W\"urttemberg through bwHPC, the Italian HPC facility CINECA through the TEDDI project and the German Research Foundation (DFG) through the TWITTER grants.
S.M. gratefully acknowledges the support of the DFG via a Heisenberg fellowship.
This work was supported by the ERC Synergy grant BioQ.
\end{acknowledgments}
\vspace{1em}
\section{Introduction}
Acoustic echo arises whenever a device picks up the sound that it plays out itself \cite{echo}. This phenomenon is commonplace in communications, entertainment, man-machine interaction, and elsewhere. It may be desirable in some scenarios, such as entertainment, but in most cases, especially for voice interaction and communications, it is interfering and should be cancelled from the desired speech audio \cite{aec}.
Since a reference signal representing the source of the echo is available, adaptive filters are commonly employed for acoustic echo cancellation (AEC). Many adaptive algorithms exist, such as least mean square (LMS) \cite{LMS}, normalized LMS (NLMS) \cite{NLMS}, and block LMS (BLMS) \cite{BLMS}, each with its own merits and applications. For considerable performance, filter lengths of several hundred and sometimes thousands of taps are required. Owing to the significant reduction in computational load achieved by implementing the block LMS algorithm efficiently with the fast Fourier transform (FFT), the frequency domain block adaptive filter (FDBAF) based on the LMS algorithm is considered most suitable \cite{PBFDAF}. Moreover, to accommodate the long block delay and large quantization error of a single large FFT, a more flexible frequency domain adaptive filter structure, called the multidelay block frequency domain (MDF) adaptive filter, was proposed \cite{MDF}. Further, methods that adjust the learning rate according to conditions such as double-talk and echo path change have been proposed for robust echo cancellation \cite{rate}. In brief, plenty of adaptive-filter algorithms for AEC are available, delivering considerable performance.
Unfortunately, some residual echo remains after adaptive filtering. Though it is much smaller than the speech audio in amplitude in most cases, it can still be perceived by the human ear and makes communication annoying. This residual echo includes a linear residue, introduced by the mismatch between the estimated and the actual echo path, and a non-linear residue, mostly caused by non-linear components of the audio devices. The linear residue can be reduced with elaborate structures and methods, such as \cite{rate} \cite{vss1, vss2, vss3, vss4}, leaving the non-linear residue intractable for suppression. Though some non-linear processing (NLP) methods have been proposed, their processing is complicated and can be inefficient for suppression \cite{NLP3, NLP4}. Moreover, these NLP methods damage the speech audio \cite{NLP5}. In addition, other methods such as non-linear filtering \cite{kalman} and model-based estimation \cite{modeling} have also been used for non-linear echo cancellation.
Comparing the spectrum of the residual echo with that of the speech audio, this residue can be regarded as a type of noise. In addition, the far-end reference signal provides information that can be exploited for residue suppression. Inspired by this, a combination scheme concatenating an adaptive filter and a neural network is proposed in this paper. The echo-interfered speech audio is first processed by an MDF filter with adaptive learning rate to cancel the primary echo. Thereafter, a neural network with a perspicuous structure is elaborately designed and trained for residual echo suppression. The method is compared with other prevailing methods in terms of echo return loss enhancement (ERLE), logarithmic spectral distance (LSD), response time (RT), and model size.
\section{Algorithm Structure}
\subsection{Combination Scheme}
The integration scheme combining the adaptive filter and the neural network is depicted in Fig. \ref{fig:combine}. The adaptive filter cancels the linear echo introduced by multi-path propagation, i.e., the room impulse response (RIR) \cite{RIR}. It has been proven to give considerable performance with low complexity. The weighting coefficients of the finite impulse response (FIR) filter are adjusted on the fly to estimate the RIR, yielding an estimated replica of the echo signal. However, due to non-linear components in the devices, such as loudspeakers with poor linearity, a non-linear echo is introduced. It cannot be cancelled by adaptive filtering with an FIR structure, resulting in residual echo. As depicted in Fig. \ref{fig:residue}, the residual echo after adaptive filtering is small compared with the speech audio in terms of amplitude. It can be considered a special type of noise. Meanwhile, this noise retains some relation to the far-end reference signal. Therefore, based on these observations, a neural network is designed and specially trained for suppressing such residual echo, as illustrated in Fig. \ref{fig:combine}.
\begin{figure}
\centering
\includegraphics[scale=0.63]{combine}
\caption{Structure of combination scheme}
\label{fig:combine}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1]{residue}
\caption{Residual echo after adaptive filtering}
\label{fig:residue}
\end{figure}
\subsection{Adaptive Filter}
Owing to the numerous merits of the multidelay block frequency domain adaptive filter, such as lower memory storage, small FFT size, and the flexibility to choose different configurations depending on the hardware used \cite{MDF}, it is employed in the combination scheme for linear echo cancellation. Moreover, as in the open-source Speex implementation \cite{speex1, speex2}, the learning rate of the adaptive filter is varied according to conditions such as double-talk and echo path change. In this way, the linear echo can be cancelled effectively and adaptively.
The complex NLMS filter of length $N$ is defined as
\begin{equation}
e\left( n \right) = d\left( n \right) - \hat y\left( n \right) = d\left( n \right) - \sum\limits_{k = 0}^{N - 1} {{w_k}\left( n \right)x\left( {n - k} \right)}
\label{eq1}
\end{equation}
with adaptation step as
\begin{equation}
{\hat w_k}\left( {n + 1} \right) = {\hat w_k}\left( n \right) + \mu \cdot \frac{{e\left( n \right)}}{{\sum\nolimits_{i = 0}^{N - 1} {{{\left| {x\left( {n - i} \right)} \right|}^2}} }} \cdot {x^ * }\left( {n - k} \right)
\label{eq2}
\end{equation}
where $x\left( n \right)$ is the far-end signal, $d\left( n \right)$ is the received microphone signal, $\hat y\left( n \right)$ is the echo estimated by the adaptive filter, $e\left( n \right)$ is the corresponding estimation error, ${w_k}\left( n \right)$ are the filter weights at time $n$ and ${\hat w_k}\left( n \right)$ their estimates, and $\mu$ is the learning rate.
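For illustration, a minimal time-domain sketch of the NLMS update in (\ref{eq1}) and (\ref{eq2}) is given below; note that the deployed scheme operates in the frequency domain (MDF), so this sketch is didactic only.

```python
import numpy as np

def nlms_step(w, x_buf, d, mu=0.5, eps=1e-8):
    """One NLMS update.

    w     : length-N complex filter weights (echo path estimate)
    x_buf : last N far-end samples, x(n), x(n-1), ..., x(n-N+1)
    d     : current microphone sample d(n)
    """
    y_hat = np.dot(w, x_buf)                    # estimated echo
    e = d - y_hat                               # echo-cancelled output
    norm = np.vdot(x_buf, x_buf).real + eps     # sum of |x(n-i)|^2
    w = w + mu * e / norm * x_buf.conj()
    return w, e
```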
To obtain a fast response in the case of double-talk and prevent the filter from diverging when double-talk starts, the learning rate is updated by \cite{rate}
\begin{equation}
{\hat \mu _{opt}}\left( {k,l} \right) = \min \left( {\hat \eta \left( l \right)\frac{{{{\left| {\hat Y\left( {k,l} \right)} \right|}^2}}}{{{{\left| {E\left( {k,l} \right)} \right|}^2}}},{\mu _{\max }}} \right)
\label{eq3}
\end{equation}
where $\hat Y\left( {k,l} \right)$ and $E\left( {k,l} \right)$ are the frequency-domain counterparts of $\hat y\left( n \right)$ and $e\left( n \right)$, with $k$ the frequency index and $l$ the frame index, and $\hat \eta \left( l \right)$ is the estimated leakage coefficient that represents the misadjustment of the filter. It is equal to the linear regression coefficient between the estimated echo power ${P_Y}\left( {k,l} \right)$ and the output power ${P_E}\left( {k,l} \right)$:
\begin{equation}
\hat \eta \left( l \right) = \frac{{\sum\nolimits_k {{R_{EY}}\left( {k,l} \right)} }}{{\sum\nolimits_k {{R_{YY}}\left( {k,l} \right)} }}
\label{eq4}
\end{equation}
where the correlations ${R_{EY}}\left( {k,l} \right)$ and ${R_{YY}}\left( {k,l} \right)$ are averaged recursively as:
\begin{equation}
\begin{array}{l}
{R_{EY}}\left( {k,l} \right) = \left( {1 - \beta \left( l \right)} \right){R_{EY}}\left( {k,l} \right) + \beta \left( l \right){P_Y}\left( k \right){P_E}\left( k \right)\\
{R_{YY}}\left( {k,l} \right) = \left( {1 - \beta \left( l \right)} \right){R_{YY}}\left( {k,l} \right) + \beta \left( l \right){\left( {{P_Y}\left( k \right)} \right)^2}\\
\beta \left( l \right) = {\beta _0}\min \left( {\frac{{\hat \sigma _{\hat Y}^2\left( l \right)}}{{\hat \sigma _e^2\left( l \right)}},1} \right)
\end{array}
\label{eq5}
\end{equation}
where ${\beta _0}$ is the base learning rate for the leakage estimate and $\hat \sigma _{\hat Y}^2\left( l \right)$ and $\hat \sigma _e^2\left( l \right)$ are the total power of the estimated echo and the output signal. The variable averaging parameter $\beta \left( l \right)$ prevents the estimate from being adapted when no echo is present.
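A sketch of this learning-rate adaptation per (\ref{eq3})--(\ref{eq5}) follows; the constants and the in-place handling of the recursive averages are illustrative assumptions.

```python
import numpy as np

def update_learning_rate(Y, E, R_EY, R_YY, beta0=0.05, mu_max=0.5):
    """Per-frame leakage estimate and adaptive learning rate.

    Y, E       : frequency-domain estimated echo and filter output (one frame)
    R_EY, R_YY : recursively averaged correlations carried between frames
    """
    P_Y, P_E = np.abs(Y) ** 2, np.abs(E) ** 2
    # Variable averaging factor: freezes adaptation when no echo is present.
    beta = beta0 * min(P_Y.sum() / (P_E.sum() + 1e-12), 1.0)
    R_EY = (1 - beta) * R_EY + beta * P_Y * P_E
    R_YY = (1 - beta) * R_YY + beta * P_Y ** 2
    eta = R_EY.sum() / (R_YY.sum() + 1e-12)     # leakage coefficient
    mu_opt = np.minimum(eta * P_Y / (P_E + 1e-12), mu_max)
    return mu_opt, R_EY, R_YY
```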
However, due to the non-linear components in the device, non-linear residual echo will arise at the output of the adaptive filter. Moreover, some linear residual echo is introduced if the estimated RIR and the actual one are mismatched. These effects result in the considerable residual echo depicted in Fig. \ref{fig:residue}. This residual echo becomes more severe as the nonlinearity of the device and the estimation error of the RIR increase.
\subsection{Neural Network}
\subsubsection{Network Structure}
Inspired by \cite{RNN}, the structure of the residual echo suppression network, based on a recurrent neural network (RNN), is elaborately designed and depicted in Fig. \ref{fig:structure}. Each RNN module is realized by a gated recurrent unit (GRU) for data memory and network calculation. This structure mainly follows the functional architecture of conventional echo cancellation, comprising three functional modules: double-talk detection, echo estimation, and echo cancellation. Double-talk detection monitors the far-end and near-end signals in real time, and echo suppression is carried out only when a far-end signal is detected. In that case, the residual echo is estimated from the signal after adaptive filtering. Echo cancellation then estimates subband gains that rapidly adjust the level of each frequency band so as to attenuate the echo while letting the speech signal pass through. Subband gains are used because they make the model very simple, requiring only a few band calculations; in addition, no so-called musical noise artifacts arise.
\begin{enumerate}
\item \textbf{Feature extraction.} In order to reduce the number of neurons and thus the model size, neither raw samples nor the full spectrum are used directly. Instead, frequency bands on the Bark scale, matching human perception, are employed. In this case, a total of 22 frequency subbands are used, yielding Bark-frequency cepstral coefficients (BFCC). In addition, the first- and second-order differences of the first six BFCC features, the discrete cosine transform (DCT) of the first six pitch correlation coefficients, and dynamic features, i.e., the pitch period and a spectral non-stationarity metric, are extracted \cite{RNN}. This results in 42 features in total, acting as the input of the residual echo suppression neural network.
\item \textbf{Double-talk detection (DTD).} Only the speech signal together with residual echo remains after adaptive filtering. Since the amplitude of the residual echo after adaptive filtering is small, the voice activity of the speech can be easily detected. Meanwhile, the voice activity of the far-end reference signal can also be easily detected owing to its purity. Hence, one voice activity detection (VAD) per channel can be implemented independently, reducing the difficulty of DTD.
\item \textbf{Residual echo estimation.} As a realization of the RNN, a GRU module is used for estimating the residual echo from input features of the reference signal, the output signal of the adaptive filter, and the DTD results. Owing to the memory of the RNN model, the residual echo can be estimated better than with other models.
\item \textbf{Residual echo suppression.} A GRU module followed by a dense layer performs echo suppression by calculating the subband gains. A gain approaches zero if the VAD of the near-end signal, i.e., the output of the adaptive filter, is zero, and approaches one if the VAD of the far-end reference signal is zero. Otherwise, a fractional value representing the ratio between the speech and the speech superimposed with residual echo is estimated. A code sketch of this topology is given after Fig. \ref{fig:structure} below.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[scale=0.5]{structure}
\caption{Structure of the residual echo suppression network}
\label{fig:structure}
\end{figure}
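For concreteness, a minimal PyTorch sketch of this topology is given below. The layer types follow Fig. \ref{fig:structure}, while the hidden sizes and exact wiring are illustrative assumptions rather than the trained configuration.

```python
import torch
import torch.nn as nn

class ResidualEchoSuppressor(nn.Module):
    """Sketch: one VAD branch per channel, an echo-estimation GRU,
    and a gain GRU emitting 22 band gains. Sizes are illustrative."""

    def __init__(self, n_feats=42, n_bands=22, h_vad=24, h_est=48, h_gain=96):
        super().__init__()
        self.vad_near = nn.GRU(n_feats, h_vad, batch_first=True)
        self.vad_far = nn.GRU(n_feats, h_vad, batch_first=True)
        self.echo_est = nn.GRU(2 * n_feats + 2 * h_vad, h_est, batch_first=True)
        self.gain_rnn = nn.GRU(h_est, h_gain, batch_first=True)
        self.vad_out_near = nn.Linear(h_vad, 1)
        self.vad_out_far = nn.Linear(h_vad, 1)
        self.gain_out = nn.Linear(h_gain, n_bands)

    def forward(self, feat_near, feat_far):
        # feat_*: (batch, frames, 42) feature sequences per channel.
        h_n, _ = self.vad_near(feat_near)
        h_f, _ = self.vad_far(feat_far)
        h_e, _ = self.echo_est(torch.cat([feat_near, feat_far, h_n, h_f], dim=-1))
        h_g, _ = self.gain_rnn(h_e)
        vad_near = torch.sigmoid(self.vad_out_near(h_n))
        vad_far = torch.sigmoid(self.vad_out_far(h_f))
        gains = torch.sigmoid(self.gain_out(h_g))  # band gains in [0, 1]
        return vad_near, vad_far, gains
```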
Since only band gains are calculated by the network, they cannot be applied directly to each frequency bin. Therefore, linear interpolation between bands is required to obtain per-frequency gains, as illustrated in Fig. \ref{fig:interpolation} (a code sketch follows the figure) and formulated as
\begin{equation}
{g_k}\left( m \right) = \left( {1 - \frac{m}{M}} \right) \cdot {g_k} + \frac{m}{M} \cdot {g_{k + 1}}
\label{eq6}
\end{equation}
where ${g_k}\left( m \right)$ is the gain of the $m$-th frequency of the $k$-th band, ${g_k}$ and ${g_{k+1}}$ are the band gains of the $k$-th and ($k+1$)-th bands, and $M$ is the band length of the $k$-th band.
\begin{figure}
\centering
\includegraphics[scale=1.5]{interpolation}
\caption{Schematic of linear interpolation}
\label{fig:interpolation}
\end{figure}
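A sketch of this interpolation, assuming a `band_starts` array holding the first bin index of each Bark band (an illustrative layout):

```python
import numpy as np

def band_to_bin_gains(band_gains, band_starts, n_bins):
    """Linearly interpolate per-band gains onto frequency bins."""
    g = np.empty(n_bins)
    g[: band_starts[0]] = band_gains[0]
    for k in range(len(band_gains) - 1):
        start, stop = band_starts[k], band_starts[k + 1]
        M = stop - start
        m = np.arange(M)
        g[start:stop] = (1 - m / M) * band_gains[k] + (m / M) * band_gains[k + 1]
    g[band_starts[-1]:] = band_gains[-1]   # flat tail beyond the last band
    return g
```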
\subsubsection{Training of Double-Talk Detection}
The training data can be either manually annotated or simulated. Manually labeled data is obtained by listening for audio at the far-end and the near-end and marking where it occurs, the audio being recorded by a device that captures the audio played by the source superimposed with the audio played by the device itself. However, this method is time consuming. Therefore, simulated data is used for training. This is illustrated in Fig. \ref{fig:dtd} and can be summarized as follows:
\begin{enumerate}
\item \textbf{Far-end data preparation.} The far-end data is the reference signal for echo cancellation, i.e., the audio transmitted in the reference channel before being played out by the loudspeaker of the device itself. This reference signal is framed and windowed, and then its energy is calculated. The energy value is compared with two thresholds: the frame is labelled ``1'' if the energy exceeds the higher threshold, ``0'' if it is below the lower threshold, and ``0.5'' otherwise. These labels, representing the probability that audio is present, are calculated frame by frame, together with the feature vectors.
\item \textbf{Near-end data preparation.} Here, the near-end data is the signal after adaptive filtering, in which most of the echo, especially the linear echo, has been cancelled. The echo signal is obtained by convolving the reference signal with the RIRs. This echo signal is mixed with a clean speech audio file to simulate the microphone signal, which is then processed by the adaptive filter. Thereafter, clean speech mixed with residual echo is obtained, representing the near-end training data. The labels indicating whether clean speech is present can be easily obtained by calculating the energy and comparing it with the thresholds. Notably, since the amplitude of the residual echo is relatively small compared with that of clean speech, the labels can also be obtained directly from the signal energy after adaptive filtering. Similarly, the feature vector corresponding to each frame is calculated.
\item \textbf{Training process.} Since the labels for the two channels can be obtained directly by comparing the frame energy with the thresholds, the voice detections can be implemented individually with VADs. With the feature vectors and their labels, the VAD modules for each channel can be trained without much difficulty.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[scale=0.48]{dtd}
\caption{Flowchart of double-talk detection training}
\label{fig:dtd}
\end{figure}
\subsubsection{Training of Residual Echo Suppression}
The aim of the residual echo suppression network is to calculate the band gains, whose training process is depicted in Fig. \ref{fig:gain}. The far-end and near-end data are prepared as before, except for the labels, which are now band gains. These are obtained by calculating the band energy of the clean speech, denoted ${E_{s,k}}$, and that of the residual signal after adaptive filtering, denoted ${E_{m,k}}$, and dividing them band by band, i.e., ${g_k} = \sqrt {\frac{{{E_{s,k}}}}{{{E_{m,k}}}}}$ (a code sketch follows). Meanwhile, the feature vectors of the two channels are the same as before.
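A sketch of this label computation, reusing the illustrative `band_starts` layout from above:

```python
import numpy as np

def gain_labels(S_clean, S_resid, band_starts, g_max=1.0):
    """Per-band training targets g_k = sqrt(E_{s,k} / E_{m,k}), from
    magnitude spectra of clean speech and of the residual signal
    after adaptive filtering (one frame)."""
    edges = list(band_starts) + [len(S_clean)]
    g = np.empty(len(band_starts))
    for k in range(len(g)):
        sl = slice(edges[k], edges[k + 1])
        E_s = np.sum(S_clean[sl] ** 2)
        E_m = np.sum(S_resid[sl] ** 2) + 1e-12
        g[k] = min(np.sqrt(E_s / E_m), g_max)  # clip to at most unity gain
    return g
```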
\begin{figure}
\centering
\includegraphics[scale=0.5]{gain}
\caption{Flowchart of frequency band gain training}
\label{fig:gain}
\end{figure}
\section{Performance Evaluation}
$\bf{Model\ Training}$. The model structure is shown in Fig. \ref{fig:structure}. A total of 10 hours of speech and 5 hours of echo data are constructed, resulting in 20 hours of training data through various combinations of gains and filters. In the training process, three objective functions are learned, i.e., the VAD of the speech signal, the VAD of the reference signal, and the band gains for suppression. As shown in Fig. \ref{fig:loss_plot}, both the training loss and the validation loss decrease gradually toward zero, indicating that a well-performing model has been trained.
\begin{figure}[H]
\centering
\includegraphics[scale=0.80]{loss_plot}
\caption{Loss during training process}
\label{fig:loss_plot}
\end{figure}
$\bf{Experiments\ Validation}$. a) \emph{Band Gain}. A piece of speech audio consisting of a string of wakeup words, interfered by text-to-speech (TTS) audio played by the device itself, is used for measurements. The calculated VADs and band gains are depicted in Fig. \ref{fig:gain_plot}. It can be seen that the band gains approach zero when the reference signal is detected at the moments the wakeup words appear. Since the energy of the residual echo concentrates in the low bands, the suppression gains are lower for the low bands than for the higher bands. b) \emph{Wave Observation}. For performance evaluation, methods from prevailing open-source codes are used for comparison. As can be seen from Fig. \ref{fig:wavcomp}, the residual echo after the proposed RNN algorithm is suppressed considerably more than with Speex and WebRTC. This is most apparent in the speech gaps where only residual echo exists. It can also be observed that the spectrum at high bands is cut off after WebRTC AEC, which may be caused by the non-linear processing (NLP) in that algorithm.
\begin{figure}[H]
\centering
\includegraphics[scale=0.70]{gain_plot}
\caption{Band gain calculated by neural network}
\label{fig:gain_plot}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{wavcomp}
\caption{Wave comparisons}
\label{fig:wavcomp}
\end{figure}
c) \emph{Performance Comparisons}. The ERLE, representing echo suppression performance, and the LSD, representing the spectral loss of the voice caused by AEC, are evaluated and listed in Table~\ref{tab:performance}. Since the AEC module is mostly implemented on devices, the response time (RT) obtained on the same platform, representing the processing speed, and the model size, representing the algorithm complexity, should also be considered. It can be seen that the proposed scheme obtains a higher ERLE with comparable spectral loss and processing time. Though the model size of the proposed scheme is larger, since the reference signal is clean speech, the VAD structure for this channel can be tailored down. Meanwhile, the intermediate VAD results used for echo estimation in the model are likely to be prunable. Both could reduce the model size.
\begin{table}[H]
\centering
\caption{Comparisons by experiments}
\label{tab:performance}
\begin{tabular}{|c|c|c|c|c|}
\hline
Algorithms & ERLE & LSD & RT & Size \\
\hline
Speex & 25 dB & 1.01 dB & 0.42 ms/frame & 106 kb\\
\hline
WebRTC & 40 dB & 1.66 dB & 0.45 ms/frame & 140 kb\\
\hline
Proposed & 68 dB & 1.18 dB & 1.63 ms/frame & 450 kb\\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
A combination scheme concatenating an adaptive filter and a neural network is proposed for acoustic echo cancellation. The echo, especially the linear echo, is cancelled to a large extent by adaptive filtering, leaving only a small residual echo. The spectrum of this residue differs considerably from that of the speech audio, and it can be considered a special type of noise. Therefore, the residue is suppressed to a considerable degree by a neural network. Experiments reveal that the proposed scheme achieves better echo suppression with acceptable spectral damage and response time.
\bibliographystyle{IEEEtran}
\section{Introduction}
As they evolve, wireless systems seek to provide ever faster bit rates and lower latencies, and a key enabler for these advances is the increase in bandwidth. From 1G to 5G, the spectrum devoted to wireless communication has surged from a handful of MHz to multiple GHz, roughly three orders of magnitude, and this growth is bound to continue as new mmWave bands open up and inroads are made into the terahertz realm \cite{4455844,akyildiz2014terahertz,elayan2019terahertz,8732419,9216613}.
Besides bandwidth, another key resource is power. Leaving aside the power spent in duties unrelated to communication, the power consumed by a device
can be partitioned as ${P_{\mathrm{t}}}/\eta + P_{\sf ADC} + P_{\sf other}$ where ${P_{\mathrm{t}}}$ is the power radiated by the transmitter, $\eta$ is the efficiency of the corresponding power amplifiers, $P_{\sf ADC}$ is the power required by the receiver's analog-to-digital (ADC) converters, and $P_{\sf other}$ subsumes everything else (including oscillators, filters, the transmitter's digital-to-analog (DAC) converters, and the receiver's low-noise amplifier).
With $B$ denoting the bandwidth and $b$ the resolution in bits, each ADC satisfies
\begin{equation}
P_{\sf ADC} = \mathsf{FoM} \, B \, \kappa^b
\label{LMessi}
\end{equation}
where $\mathsf{FoM}$ is a figure of merit and $\kappa$ ranges between two and four \cite{lee2008analog,murmann2016adc}.
Power consumption has traditionally been dominated by ${P_{\mathrm{t}}} / \eta$ and thus high resolutions ($b=8$--$12$ bits) could be employed. In 1G and 2G, a higher $\eta$ was facilitated by the adoption of (respectively analog and digital) signaling formats tolerant of nonlinear amplification, but after 2G this took a backseat to spectral efficiency. Linearity has since reigned, despite the lower $\eta$, as ${P_{\mathrm{t}}} / \eta$ was well within the power budget of devices for the desirable ${P_{\mathrm{t}}}$.
The advent of 5G, with the move up to mmWave frequencies and the enormous bandwidths therein, is a turning point in the sense of $P_{\sf ADC}$ ceasing to be secondary, and this can only accelerate moving forward \cite{skrimponis2020power}.
Consider this progression: with $b=10$ at a typical 4G bandwidth of $B=20$ MHz, $P_{\sf ADC}$ is only a few milliwatts; for $B=2$ GHz, it is already on the order of a watt; and for $B=20$ GHz, it would reach roughly 10 watts.
Indeed, as $B$ continues to grow, $P_{\sf ADC}$ is bound to swallow up the entire power budget of portable devices unless $\mathsf{FoM}$ or $b$ change.
But $\mathsf{FoM}$ is approaching a fundamental limit \cite{murmann2016adc}. Moreover, while holding steady up to about $B=100$ MHz, $\mathsf{FoM}$ drops sustainedly after that mark, which is coincidentally the largest 4G bandwidth. Inevitably then, $b$ has to decrease and, ultimately, it should reach $b=1$, to
drastically curb the power consumption and to further enable dispensing with automatic gain control at the receiver while simplifying the data pipeline between the ADCs and the baseband processing \cite{o2005ultra}.
While 1-bit ADCs curb the spectral efficiency at 1 bit per dimension, the vast bandwidths thereby rendered possible make it exceedingly beneficial. Going from $b=10$ down to $b=1$ cuts the spectral efficiency by a factor of 2--3, but in exchange $B$ can grow by as much as 1000 under the same $P_{\sf ADC}$; the net benefits in bit rate and latency are stupendous. Spectral efficiency is then best recovered by expanding the number of antennas, which $P_{\sf ADC}$ is only linear in.
This naturally leads to multiple-input multiple-output (MIMO) arrangements with 1-bit ADCs.
Although 1-bit ADCs at the receiver do not necessarily entail 1-bit DACs at the transmitter, and in some cases the spectral efficiency could improve somewhat with richer DACs, it is inviting to take the opportunity and adopt 1-bit transmit signals. This not only minimizes the DAC power consumption---somewhat lower than its ADC's counterpart, yet also considerable \cite{nasri2017700,olieman2015interleaved,shu2018calibration}---but it enables the power amplifiers to operate in nonlinear regimes where $\eta$ is higher.
Altogether, 1-bit MIMO architectures might feature prominently in future wireless systems, and not only for mmWave or terahertz operation: these architectures are also a sensible way forward for lower-frequency extreme massive MIMO, with antenna counts in the hundreds or even thousands \cite{de2020non}.
All this interest is evidenced by the extensive literature on transmission strategies and the ensuing performance with 1-bit converters at the transmitter or receiver only (see \cite{nossek2006capacity,mezghani2007modified,mezghani2008analysis,singh2009limits,singh2009multi,7458830,7307134,wang2014multiuser,mo2015capacity,7600443,li2017channel,jacobsson2017throughput,rassouli2018gaussian,8437510,mezghani2020low,jacobsson2016nonlinear,mezghani2007ultra,mezghani2012capacity,8754755,8487043,8811616,8331077,8103022,7472304,8462805,8010806,zeitler2012low,1683157,6545291,mo2017hybrid,liang2016mixed} and references therein),
and by the smaller but growing body of work that considers 1-bit converters at both ends \cite{gao2017power,gao2018beamforming,mezghani2009transmit,usman2016mmse,kakkavas2016weighted,7569655,guerreiro2016use,li2017downlink,gao2018capacity,7967843,7946265,nam2019capacity,Eusipco21,bazrafkan2020asymptotic}.
Chief among the difficulties in this most stringent case stand (\emph{i}) computing the information-theoretic performance limits for moderate and large antenna counts, and (\emph{ii}) precoding to generate signals that can approach those limits.
On these fronts, and concentrating on single-user MIMO,
this paper has a two-fold objective:
\begin{itemize}
\item To provide analytical characterizations of the performance of beamforming and equiprobable signaling, two transmission strategies that are information-theoretically motivated and complementary.
\item To show that a judicious combination of these strategies suffices to operate within a modest gap from the 1-bit capacity in various classes of channels
of high relevance, foregoing general precoding solutions.
\end{itemize}
\section{Signal and Channel Models}
\label{calor1}
\subsection{Signal Model}
Consider a transmitter equipped with ${N_{\mathrm{t}}}$ antennas and 1-bit DACs per complex dimension. The receiver, which features ${N_{\mathrm{r}}}$ antennas and a 1-bit ADC per complex dimension, observes
\begin{equation}
\label{Aleix}
{\boldsymbol{y}} = \text{sgn} \! \left( \sqrt{\frac{{\mathsf{SNR}}}{2 {N_{\mathrm{t}}}}} \, {\boldsymbol{H}} {\boldsymbol{x}} + {\boldsymbol{z}} \right)
\end{equation}
where the sign function is applied separately to the real and imaginary parts of each entry, such that $y_n \in \{\pm1\pm\mathrm{j}\}$, while ${\boldsymbol{H}}$ is the ${N_{\mathrm{r}}} \times {N_{\mathrm{t}}} $ channel matrix,
${\boldsymbol{z}} \sim \mathcal{N}_{\mathbb{C}}({\bf 0}, {\boldsymbol{I}})$ is the noise, and ${\mathsf{SNR}}$ is the signal-to-noise ratio per receive antenna. Each entry of the transmit vector ${\boldsymbol{x}}$ also takes the values $\pm1\pm\mathrm{j}$.
Each antenna in the foregoing formulation could actually correspond to a compact subarray, in which case the model subsumes array-of-subarrays structures for the transmitter and/or receiver \cite{Allerton2006,Torkildson:11,Song:152,Lin:16,ISIT} provided ${\mathsf{SNR}}$ is appropriately scaled.
For each given ${\boldsymbol{H}}$, the relationship in (\ref{Aleix}) embodies a discrete memoryless channel with $4^{{N_{\mathrm{t}}}} \times 4^{{N_{\mathrm{r}}}}$ transition probabilities determined by
\begin{align}
p_{{\boldsymbol{y}}|{\boldsymbol{x}}} & = \prod_{n=0}^{{N_{\mathrm{r}}}-1} p_{\Re\{ y_n \} | {\boldsymbol{x}}} \, p_{\Im\{y_n\} | {\boldsymbol{x}}} ,
\end{align}
where the factorization follows from the noise independence per receive antenna and complex dimension.
Each such noise component has variance $1/2$, hence
\begin{align}
p_{\Re\{y_n\} | {\boldsymbol{x}}} ( 1 | {\bm{\mathsf{x}}}) & = \text{Pr} \! \left[ \sqrt{\frac{{\mathsf{SNR}}}{2 {N_{\mathrm{t}}}}} \Re\{ {\boldsymbol{h}}_n {\bm{\mathsf{x}}} + z_n \} >0 \right] \\
& = \text{Pr} \! \left[ \Re\{ z_n \} > - \sqrt{\frac{{\mathsf{SNR}}}{2 {N_{\mathrm{t}}}}} \Re\{ {\boldsymbol{h}}_n {\bm{\mathsf{x}}} \} \right] \\
& = Q \! \left( \! - \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}\} \! \right) \label{Magda}
\end{align}
where ${\boldsymbol{h}}_n$ is the $n$th row of ${\boldsymbol{H}}$ (for $n=0,\ldots,{N_{\mathrm{r}}}-1$) and $Q(\cdot)$ is the Gaussian Q-function. Similarly,
\begin{equation}
p_{\Re\{y_n\} | {\boldsymbol{x}}} ( -1 | {\bm{\mathsf{x}}}) = Q \! \left( \! \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}\} \! \right) .
\label{Campins}
\end{equation}
From (\ref{Magda}) and (\ref{Campins}),
\begin{equation}
p_{\Re\{y_n\} | {\boldsymbol{x}}} ( \Re\{ {\mathsf{y}}_n \} | {\bm{\mathsf{x}}}) = Q \! \left( \! - \Re\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}\} \! \right)
\end{equation}
and, mirroring it, finally
\begin{align}
p_{{\boldsymbol{y}}|{\boldsymbol{x}}}({\bm{\mathsf{y}}} | {\bm{\mathsf{x}}}) & = \prod_{n=0}^{{N_{\mathrm{r}}}-1} Q \! \left( \! - \Re\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}\} \! \right) \nonumber \\
& \quad \cdot Q \! \left( \! - \Im\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}\} \! \right) .
\label{Joan}
\end{align}
The transition probabilities correspond to (\ref{Joan}) evaluated for the $4^{N_{\mathrm{r}}}$ possible values of ${\boldsymbol{y}}$ and the $4^{N_{\mathrm{t}}}$ values of ${\boldsymbol{x}}$.
If ${\boldsymbol{H}}$ is known, these transition probabilities can be readily computed. Conversely, if the transition probabilities are known, ${\boldsymbol{H}}$ can be deduced.
The $4^{{N_{\mathrm{t}}}}$ transmit vectors ${\boldsymbol{x}}$ can be partitioned into $4^{{N_{\mathrm{t}}} - 1}$ quartets,
each containing four vectors and being invariant under a $90^\circ$ phase rotation of all the entries: from any vector in the quartet, the other three are obtained by repeatedly multiplying by $\mathrm{j}$. Since a $90^\circ$ phase rotation of ${\boldsymbol{x}}$ propagates as a $90^\circ$ phase rotation of ${\boldsymbol{H}} {\boldsymbol{x}}$, and the added noise is circularly symmetric, the four vectors making up each transmit quartet are statistically equivalent and they should thus have the same transmission probability so as to convey the maximum amount of information.
Likewise, the set of $4^{N_{\mathrm{r}}}$ possible vectors ${\boldsymbol{y}}$ can be partitioned into $4^{{N_{\mathrm{r}}} - 1}$ quartets, and the four vectors ${\boldsymbol{y}}$ within each received quartet are equiprobable.
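For small antenna counts, the transition probabilities in (\ref{Joan}) can be enumerated directly; a minimal sketch (names are illustrative):

```python
import numpy as np
from itertools import product
from scipy.special import erfc

def Q(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def transition_matrix(H, snr):
    """All 4^Nt x 4^Nr transition probabilities p(y|x);
    exhaustive enumeration, feasible only for small Nt, Nr."""
    Nr, Nt = H.shape
    alphabet = (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)
    X = [np.array(x) for x in product(alphabet, repeat=Nt)]
    Y = [np.array(y) for y in product(alphabet, repeat=Nr)]
    a = np.sqrt(snr / Nt)
    P = np.empty((len(X), len(Y)))
    for i, x in enumerate(X):
        hx = H @ x
        for j, y in enumerate(Y):
            P[i, j] = np.prod(Q(-y.real * a * hx.real) *
                              Q(-y.imag * a * hx.imag))
    return P
```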
\subsection{Channel Model}
\label{Carla18}
If the channel is stable over each codeword, then every realization of ${\boldsymbol{H}}$ has operational significance and ${\mathsf{SNR}}$ is well defined under the normalization ${\rm tr}({\boldsymbol{H}}\bH^*)={N_{\mathrm{t}}}{N_{\mathrm{r}}}$. Conversely, if the coding takes place over a sufficiently broad range of channel fluctuations, that significance is acquired in an ergodic sense with $\mathbb{E} \big[ {\rm tr}({\boldsymbol{H}}\bH^*) \big]={N_{\mathrm{t}}}{N_{\mathrm{r}}}$ \cite{LozanoJindal2012}.
The following classes of channels are specifically considered.
\paragraph{Line-of-Sight (LOS) with Spherical Wavefronts}
LOS is the chief propagation mechanism at mmWave and terahertz frequencies, and the spherical nature of the wavefronts is relevant for large arrays and short transmission ranges. For uniform linear arrays (ULAs) \cite{do2020reconfigurable},
\begin{equation}
{\boldsymbol{H}} = {\boldsymbol{D}}_{\rm rx} \tilde{{\boldsymbol{H}}} {\boldsymbol{D}}_{\rm tx}
\label{Oscarinyu}
\end{equation}
where ${\boldsymbol{D}}_{\rm rx}$ and ${\boldsymbol{D}}_{\rm tx}$ are diagonal matrices with entries
\begin{align}
[{\boldsymbol{D}}_{\rm rx}]_{n,n} & = e^{-j \pi \left[ \frac{2n}{\uplambda} {d_{\rm r}} \sin \! \theta_{\rm r} \cos \! \phi + \frac{n^2}{\uplambda D} d^2_{\rm r} \, (1-\sin^2 \! \theta_{\rm r} \cos^2 \! \phi) \right] } \nonumber \\
[{\boldsymbol{D}}_{\rm tx}]_{m,m} & = e^{-j \pi \left[ \frac{2m}{\uplambda} {d_{\rm t}} \sin \! \theta_{\rm t} + \frac{m^2 }{\uplambda D} d^2_{\rm t} \right] }
\label{ventvent}
\end{align}
and with $D$ the range, $\uplambda$ the wavelength, $d_{\rm t}$ and $d_{\rm r}$ the antenna spacings at transmitter and receiver, $\theta_{\rm t}$ and $\theta_{\rm r}$ the transmitter and receiver elevations, and $\phi$ their relative azimuth angle.
In turn, $\tilde{{\boldsymbol{H}}}$ is the Vandermonde matrix
\begin{align}
\setlength\arraycolsep{2pt}
\tilde{{\boldsymbol{H}}} = \begin{bmatrix}
e^{j2\pi\eta \frac{0\times0}{{N_{\mathsf{max}}}}} & \cdots & e^{j2\pi\eta \frac{(N_{\rm t}-1)\times0}{{N_{\mathsf{max}}}}} \\
\vdots & \ddots & \vdots \\
e^{j2\pi\eta \frac{0\times (N_{\rm r}-1)}{{N_{\mathsf{max}}}}} & \cdots & e^{j2\pi\eta \frac{(N_{\rm t}-1)\times(N_{\rm r}-1)}{{N_{\mathsf{max}}}}}
\end{bmatrix}
\label{MesMessi}
\end{align}
where ${N_{\mathsf{max}}}=\max({N_{\mathrm{t}}},{N_{\mathrm{r}}})$ while
\begin{equation}
\eta=\frac{ ({d_{\rm r}} \cos \theta_{\rm r}) ({d_{\rm t}} \cos \theta_{\rm t}) {N_{\mathsf{max}}}}{\uplambda D}
\label{eq:nonPa_eta}
\end{equation}
is a parameter that concisely describes any LOS setting with ULAs.
Uniform rectangular arrays can be expressed as the Kronecker product of ULAs, and expressions deriving from (\ref{Oscarinyu}) emerge \cite{Larsson:05}.
For more complex topologies, the entries of ${\boldsymbol{H}}$ continue to be of unit magnitude, but the pattern of phase variations becomes more cumbersome.
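A minimal sketch assembling ${\boldsymbol{H}}$ from (\ref{Oscarinyu})--(\ref{eq:nonPa_eta}) for ULAs (argument names and defaults are illustrative):

```python
import numpy as np

def los_ula_channel(Nt, Nr, D, lam, dt, dr, th_t=0.0, th_r=0.0, phi=0.0):
    """Spherical-wavefront LOS channel H = D_rx * Htilde * D_tx for ULAs."""
    n = np.arange(Nr)
    m = np.arange(Nt)
    Drx = np.exp(-1j * np.pi * (2 * n / lam * dr * np.sin(th_r) * np.cos(phi)
          + n**2 / (lam * D) * dr**2 * (1 - np.sin(th_r)**2 * np.cos(phi)**2)))
    Dtx = np.exp(-1j * np.pi * (2 * m / lam * dt * np.sin(th_t)
          + m**2 / (lam * D) * dt**2))
    Nmax = max(Nt, Nr)
    eta = (dr * np.cos(th_r)) * (dt * np.cos(th_t)) * Nmax / (lam * D)
    Ht = np.exp(2j * np.pi * eta * np.outer(n, m) / Nmax)  # Vandermonde part
    return Drx[:, None] * Ht * Dtx[None, :]
```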
\paragraph{LOS with Planar Wavefronts}
For long enough transmission ranges, the planar wavefront counterpart to (\ref{Oscarinyu})
is obtained by letting $D \to \infty$, whereby the channel becomes rank-1 with
\begin{align}
h_{n,m} = e^{-\mathrm{j} \frac{2 \pi }{\uplambda} (n d_{\rm r} \sin \theta_{\rm r} \cos \phi + m d_{\rm t} \sin \theta_{\rm t} )}.
\label{antigens}
\end{align}
\paragraph{IID Rayleigh Fading}
In this model, representing situations of rich multipath propagation, the entries of ${\boldsymbol{H}}$ are IID and $h_{n,m} \sim \mathcal{N}_{\mathbb{C}}(0,1)$.
We note that the frequency-flat representation embodied by ${\boldsymbol{H}}$ is congruous for the two LOS channel models, but less so for the IID model, where the scattering would go hand in hand with frequency selectivity over the envisioned bandwidths. The analysis presented for this model intends to set the stage for more refined characterizations that account for the inevitable intersymbol interference.
In fact, even for the LOS channels, over a sufficiently broad bandwidth there is bound to be intersymbol interference because of spatial widening, i.e., because of the distinct propagation delays between the various transmit and receive antennas \cite{8443598,8354789}.
\section{1-Bit Capacity}
\label{calor2}
Denote by $p_1,\ldots,p_{4^{{N_{\mathrm{t}}}-1}}$ the activation probabilities of the transmit quartets, such that $\sum_k p_k = 1$
and $p_{\boldsymbol{x}}({\bm{\mathsf{x}}}_k)=p_k/4$ with ${\bm{\mathsf{x}}}_k$ any of the vectors in the $k$th quartet.
Letting $\mathcal{H}(\cdot)$ indicate entropy, and with all the probabilities conditioned on ${\boldsymbol{H}}$, the mutual information is
\begin{align}
\!\! \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})
& = \mathcal{H}({\boldsymbol{y}}) - \mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}}) \\
& = \sum_{\ell=1}^{4^{{N_{\mathrm{r}}}}} p_{{\boldsymbol{y}}}({\bm{\mathsf{y}}}_\ell) \log_2 \frac{1}{p_{{\boldsymbol{y}}}({\bm{\mathsf{y}}}_\ell)} - \mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}}) \\
& = 4 \! \sum_{\ell=1}^{4^{{N_{\mathrm{r}}}-1}} \! p_{{\boldsymbol{y}}}({\bm{\mathsf{y}}}_\ell) \log_2 \frac{1}{p_{{\boldsymbol{y}}}({\bm{\mathsf{y}}}_\ell)} - \mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}})
\label{NoWiFi2}
\end{align}
where (\ref{NoWiFi2}) follows from the equiprobability of the vectors in each received quartet and ${\bm{\mathsf{y}}}_\ell$ is any of the vectors in the $\ell$th such quartet
while
\begin{align}
p_{{\boldsymbol{y}}}({\bm{\mathsf{y}}}) & = \sum_{k=1}^{4^{{N_{\mathrm{t}}}-1}} \! \frac{p_k}{4} \sum_{i=0}^3 p_{{\boldsymbol{y}} | {\boldsymbol{x}}} ({\bm{\mathsf{y}}} | \mathrm{j}^i {\bm{\mathsf{x}}}_k)
\label{Leiva2}
\end{align}
with $p_{{\boldsymbol{y}} | {\boldsymbol{x}}}$ depending on ${\mathsf{SNR}}$ and ${\boldsymbol{H}}$ as per (\ref{Joan}).
Elaborating on (\ref{Leiva2}),
\begin{align}
p_{{\boldsymbol{y}}}({\bm{\mathsf{y}}}) & = \sum_{k=1}^{4^{{N_{\mathrm{t}}}-1}} \! \frac{p_k}{4} \left[ \prod_{n=0}^{{N_{\mathrm{r}}}-1} Q \! \left( \! - \Re\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \right. \nonumber \\
& \quad \cdot Q \! \left( \! - \Im\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k\} \! \right) \nonumber \\
& \quad + \prod_{n=0}^{{N_{\mathrm{r}}}-1} Q \! \left( \! \Re\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \nonumber \\
& \quad \cdot Q \! \left( \! - \Im\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \nonumber \\
& \quad + \prod_{n=0}^{{N_{\mathrm{r}}}-1} Q \! \left( \! \Re\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \nonumber \\
& \quad \cdot Q \! \left( \! \Im\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \nonumber \\
& \quad + \prod_{n=0}^{{N_{\mathrm{r}}}-1} Q \! \left( \! - \Re\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \nonumber \\
& \quad \left. \cdot Q \! \left( \! \Im\{{\mathsf{y}}_n\} \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}}} \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \right] .
\label{MalTemps2}
\end{align}
In turn, because of the factorization of $p_{{\boldsymbol{y}}|{\boldsymbol{x}}}$ in (\ref{Joan}),
\begin{align}
\mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}}) & = \! \sum_{n=0}^{{N_{\mathrm{r}}}-1} \big( \mathcal{H}(\Re\{ y_n \} | {\boldsymbol{x}}) + \mathcal{H}(\Im\{ y_n \} | {\boldsymbol{x}}) \big) \\
& = \!\! \sum_{k=1}^{4^{{N_{\mathrm{t}}}-1}} \! \frac{p_k}{4} \sum_{i=0}^3 \! \sum_{n=0}^{{N_{\mathrm{r}}}-1} \big( \mathcal{H}(\Re\{ y_n \} | {\boldsymbol{x}}=\mathrm{j}^i{\bm{\mathsf{x}}}_k) \nonumber \\
& \quad + \mathcal{H}(\Im\{ y_n \} | {\boldsymbol{x}} = \mathrm{j}^i {\bm{\mathsf{x}}}_k) \big) \\
& = \!\! \sum_{k=1}^{4^{{N_{\mathrm{t}}}-1}} \! \frac{p_k}{4} \sum_{i=0}^3 \! \sum_{n=0}^{{N_{\mathrm{r}}}-1} \! \left[ \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - {\textstyle \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}} } } \Re\{{\boldsymbol{h}}_n \mathrm{j}^i {\bm{\mathsf{x}}}_k \} \! \right) \! \right) \right. \nonumber \\
& \quad \left. + \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - {\textstyle \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}} } } \Im\{{\boldsymbol{h}}_n \mathrm{j}^i {\bm{\mathsf{x}}}_k \} \! \right) \! \right) \right]
\end{align}
where $\mathcal{H}_{\rm b}(p) = -p \log_2 p - (1-p) \log_2(1-p)$ is the binary entropy function. Since changing $i$ merely flips the sign of some of the Q-function arguments, and $Q(-\xi) = 1 - Q(\xi)$ such that $\mathcal{H}_{\rm b}(Q(-\xi)) = \mathcal{H}_{\rm b}(Q(\xi))$, it follows that
\begin{align}
\mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}}) & = \! \sum_{k=1}^{4^{{N_{\mathrm{t}}}-1}} \! p_k \sum_{n=0}^{{N_{\mathrm{r}}}-1} \! \left[ \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}} } \Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \! \right) \right. \nonumber \\
& \quad \left. + \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - \sqrt{\frac{{\mathsf{SNR}}}{{N_{\mathrm{t}}}} } \Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \! \right) \! \right) \right] .
\label{TrumpOut2}
\end{align}
The combination of (\ref{NoWiFi2}), (\ref{MalTemps2}), and (\ref{TrumpOut2}) gives $\mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})$, whose evaluation involves $\mathcal{O}(4^{{N_{\mathrm{t}}}-1} 4^{{N_{\mathrm{r}}}-1})$ terms. This becomes prohibitive even for modest ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}}$, hence the interest in analytical characterizations. From $\mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})$, the 1-bit capacity is
\begin{equation}
C({\mathsf{SNR}},{\boldsymbol{H}}) = \max_{ \{p_k \} : \sum_k p_k = 1 } \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})
\label{VAB}
\end{equation}
with maximization over $p_1,\ldots,p_{4^{{N_{\mathrm{t}}}-1}}$.
Since $\mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})$ is concave in $p_1,\ldots,p_{4^{{N_{\mathrm{t}}}-1}}$ and these probabilities define a convex set, (\ref{VAB}) can be solved with off-the-shelf convex optimization tools. Alternatively, the Blahut-Arimoto
algorithm, which alternately optimizes $p_{\boldsymbol{x}}$ and $p_{{\boldsymbol{x}}|{\boldsymbol{y}}}$,
can be applied, with convergence guarantees to any desired accuracy \cite{blahut1972computation,arimoto1972algorithm}.
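As an illustration, the following is a generic Blahut-Arimoto sketch over the full input alphabet; the quartet symmetry, which could be exploited to shrink the problem, is ignored for clarity.

```python
import numpy as np

def blahut_arimoto(P, n_iter=500, tol=1e-10):
    """Capacity in bits of the DMC with transitions P[i, j] = p(y_j | x_i),
    together with the capacity-achieving input distribution."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        q = p @ P                                   # output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            D = np.where(P > 0, P * np.log(P / q), 0.0).sum(axis=1)
        p_new = p * np.exp(D)                       # multiplicative update
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            p = p_new
            break
        p = p_new
    q = p @ P
    with np.errstate(divide="ignore", invalid="ignore"):
        I = np.where(P > 0, p[:, None] * P * np.log2(P / q), 0.0).sum()
    return I, p
```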
In ergodic settings, what applies is the ergodic spectral efficiency
\begin{equation}
\mathcal{I}({\mathsf{SNR}}) = \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}}) \big]
\label{RH}
\end{equation}
and likewise for the ergodic capacity.
Alternatively, if the channel is stable over each codeword, then $\mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})$ and $C({\mathsf{SNR}},{\boldsymbol{H}})$ themselves are meaningful for each ${\boldsymbol{H}}$.
The 1-bit capacity cannot exceed $2 \min({N_{\mathrm{t}}},{N_{\mathrm{r}}})$ b/s/Hz, with three distinct regimes:
\begin{itemize}
\item Low SNR. This is a key regime at mmWave and terahertz frequencies, given the difficulty in producing strong signals, the high propagation losses, and the noise bandwidth.
\item Intermediate SNR. Here, the spectral efficiency improves sustainedly with the SNR.
\item High SNR. This is a regime of diminishing returns, once the capacity nears $2 \min({N_{\mathrm{t}}},{N_{\mathrm{r}}})$.
\end{itemize}
\subsection{Low SNR}
The low-SNR behavior is most conveniently examined with the mutual information expressed as function of the normalized energy per bit at the receiver,
\begin{equation}
\label{ebnodef}
\frac{E_{\rm b}}{N_0} = \frac{{\mathsf{SNR}}}{\mathcal{I}({\mathsf{SNR}})} .
\end{equation}
Beyond the minimum required value of
\begin{align}
\label{eooo}
\ebno_{\rm \min} = \lim_{{\mathsf{SNR}} \to 0} \, \frac{{\mathsf{SNR}}}{\mathcal{I}({\mathsf{SNR}})} ,
\end{align}
the mutual information behaves as \cite[sec. 4.2]{Foundations:18}
\begin{equation}
\label{fumfumfum}
S_0 \, \frac{\left. \frac{E_{\rm b}}{N_0} \right |_{\rm \scriptscriptstyle dB} - \left. \ebno_{\rm \min} \right |_{\rm \scriptscriptstyle dB}}{3 \, {\rm \scriptstyle dB}} + \varepsilon ,
\end{equation}
where $\varepsilon$ is a lower-order term, $S_0$ is the slope at $\ebnoinline_{\rm min}$ in b/s/Hz/($3$ dB), and $z {|_{\mathrm{\scriptscriptstyle dB}}} = 10 \log_{10} z$.
\begin{figure*}
\begin{align}
S_0 = \frac{2 \, [\dot{\mathcal{I}}(0)]^2}{-\ddot{\mathcal{I}}(0) \log_2 e} = \frac{ \mathbb{E} \big[ {\rm tr}({\boldsymbol{H}} \bm \Sigma_{\boldsymbol{x}} {\boldsymbol{H}}^*) \big]^2 }{\frac{1}{2} \, \mathbb{E} \big[ {\rm tr} \big( ( \text{nondiag}({\boldsymbol{H}} \bm \Sigma_{\boldsymbol{x}} {\boldsymbol{H}}^*) )^2 \big) \big] + \frac{\pi-1}{3} \, \mathbb{E} \big[ \| \Re \{ {\boldsymbol{H}} {\boldsymbol{x}} \} \|^4_4 + \| \Im \{ {\boldsymbol{H}} {\boldsymbol{x}} \} \|^4_4 \big] }
\label{ForzaOriol}
\end{align}
\end{figure*}
$\ebno_{\rm \min}$ and $S_0$ descend from the first and second derivatives of $\mathcal{I}({\mathsf{SNR}})$ at ${\mathsf{SNR}}=0$, which themselves emerge from (\ref{NoWiFi2}), (\ref{MalTemps2}), and (\ref{TrumpOut2}) after a tedious derivation \cite{mezghani2007ultra,mezghani2020low}. Plugging these two derivatives into the definitions of $\ebno_{\rm \min}$ and $S_0$ \cite[sec. 4.2]{Foundations:18},
\begin{align}
\ebno_{\rm \min} & = \frac{1}{\dot{\mathcal{I}}(0)} = \frac{\pi {N_{\mathrm{t}}}}{ \mathbb{E} \big[ {\rm tr}( {\boldsymbol{H}} \bm \Sigma_{\boldsymbol{x}} {\boldsymbol{H}}^* ) \big] \log_2 e }
\label{Argimon}
\end{align}
with $\bm \Sigma_{\boldsymbol{x}} = \mathbb{E} \big[ {\boldsymbol{x}} {\boldsymbol{x}}^* \big] = \sum_k p_k \, {\bm{\mathsf{x}}}_{k} {\bm{\mathsf{x}}}^*_{k} $, while $S_0$ equals (\ref{ForzaOriol})
where $\text{nondiag}(\cdot)$ returns a matrix with its diagonal entries set to zero and $\| \cdot \|_4$ denotes L4 norm.
The expectations in (\ref{Argimon}) and (\ref{ForzaOriol}) are conditioned on ${\boldsymbol{H}}$ when there is operational significance attached to a specific such value, and unconditioned in the ergodic case.
A worthwhile exercise is to appraise the expansion in (\ref{fumfumfum}) against its exact counterpart, a contrast that Fig. \ref{Expansions} presents for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=1$ in Rayleigh fading. The characterization provided by (\ref{fumfumfum}) is indeed precise, a fact that extends to all other channels considered in the paper.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Expansions.eps}
\caption{Capacity as a function of $E_{\rm b}/N_0$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=1$ and Rayleigh fading. The solid lines are the exact capacities (1-bit and full resolution) while the dotted lines represent their respective expansions as per (\ref{fumfumfum}).}
\label{Expansions}
\end{figure}
\subsection{Intermediate SNR}
In 1-bit communication, the intermediate-SNR regime steals relevance from its high-SNR counterpart, which becomes unappealing.
In order to delineate the reach of this intermediate-SNR regime, it is of interest to establish the limiting capacity for ${\mathsf{SNR}} \to \infty$.
Let us define
\begin{equation}
C_{\infty}({\boldsymbol{H}}) = \lim_{{\mathsf{SNR}} \to \infty} C({\mathsf{SNR}},{\boldsymbol{H}})
\end{equation}
and consider channels satisfying ${\boldsymbol{h}}_n {\boldsymbol{x}} \neq 0$ with probability 1 for $n=0,\ldots,{N_{\mathrm{r}}}-1$, such that
the transition probabilities have a positive mass only at $0$ and $1$, meaning that ${\boldsymbol{y}}$ is fully determined by ${\boldsymbol{x}}$.
The vast majority of channels abide by the condition, and in particular the ones set forth in Sec. \ref{Carla18}.
For ${N_{\mathrm{t}}}=1$, a single quartet is available for transmission and, by virtue of its four equiprobable constituent vectors, $C_{\infty}({\boldsymbol{H}}) = 2$.
For ${N_{\mathrm{t}}}>1$ and ${N_{\mathrm{r}}}=1$, it can be verified that (\ref{VAB}) is maximized when a single quartet is activated, depending on ${\boldsymbol{H}}$ \cite{nam2019capacity}. Again, $C_{\infty}({\boldsymbol{H}}) = 2$.
For ${N_{\mathrm{t}}}>1$ and ${N_{\mathrm{r}}}>1$, it must hold that $C_\infty \leq 2 {N_{\mathrm{t}}}$, but this bound is generally not achievable because some vectors ${\boldsymbol{x}}$ map to the same receive vector ${\boldsymbol{y}}$ \cite{gao2017power}.
As the transition probabilities are either $0$ or $1$, every binary entropy function in (\ref{TrumpOut2}) vanishes and $\mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}}) \to 0$, hence the mutual information comes to equal $\mathcal{H}({\boldsymbol{y}})$.
Letting
\begin{equation}
\mathcal{Y}({\boldsymbol{H}}) = \Big\{ {\boldsymbol{y}} \, | \, {\boldsymbol{y}} = \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) \, \forall {\boldsymbol{x}} \in \{ \pm1 \pm \mathrm{j} \}^{{N_{\mathrm{r}}}} \Big\}
\label{Tian}
\end{equation}
denote the set of vectors ${\boldsymbol{y}}$ that can be elicited for channel ${\boldsymbol{H}}$, the maximization of $\mathcal{H}({\boldsymbol{y}})$ occurs when this set is equiprobable. Then,
\begin{equation}
C_\infty({\boldsymbol{H}}) = \log_2 |\mathcal{Y}({\boldsymbol{H}})|
\label{Tian2}
\end{equation}
with $\mathbb{E}[C_\infty({\boldsymbol{H}})]$ being the limiting ergodic capacity.
The evaluation of (\ref{Tian2}) is far simpler than that of $C({\mathsf{SNR}})$ in its full generality.
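Operationally, (\ref{Tian2}) amounts to counting the distinct noiseless outputs, which the following sketch does for small ${N_{\mathrm{t}}}$:

```python
import numpy as np
from itertools import product

def c_infinity(H):
    """Limiting capacity log2 |Y(H)|: number of distinct noiseless
    outputs sgn(Hx). Assumes h_n x != 0 for all inputs, as in the text."""
    alphabet = (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)
    outputs = set()
    for x in product(alphabet, repeat=H.shape[1]):
        hx = H @ np.array(x)
        outputs.add(tuple(np.sign(hx.real)) + tuple(np.sign(hx.imag)))
    return np.log2(len(outputs))
```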
\section{1-Bit vs Full-Resolution Capacity}
\label{calor3}
A naïve comparison of the 1-bit and full-resolution capacities would indicate that the former always trails the latter.
In terms of power, their gap in dB is at least the difference between their $\ebno_{\rm \min} {|_{\mathrm{\scriptscriptstyle dB}}}$ values; in a scalar Rayleigh-faded channel, for instance, what separates the full-resolution mark of $-1.59$ dB \cite[sec. 4.2]{Foundations:18} from its 1-bit brethren of $0.37$ dB is $1.96$ dB as noted in Fig.~\ref{Expansions}. As shall be seen in the sequel, this gap remains rather steady with MIMO and over a variety of channels.
Such naïve comparison, however, only accounts for radiated power, disregarding any other differences in power consumption between the full-resolution and 1-bit alternatives. While appropriate when the radiated power dominates, this neglect becomes misleading when the digitalization consumes sizeable power and, since this is the chief motivation for 1-bit communication, by definition the comparison is somewhat deceptive.
Indeed, whenever the excess power of a full-resolution architecture, relative to 1-bit, exceeds a $1.96$-dB backoff in ${P_{\mathrm{t}}} / \eta$,
there is going to be a range of SNRs over which, under a holistic accounting of power, the 1-bit capacity is actually higher.
For a very conservative assessment of this phenomenon, let us assume that $\kappa=2$ in (\ref{LMessi}) and that $\eta$, $\mathsf{FoM}$, and $P_{\sf other}$, are not affected by the resolution---in actuality all of these quantities shall be markedly better in the 1-bit case---to obtain the condition
\begin{equation}
\frac{1}{10^{1.96/10}} \frac{{P_{\mathrm{t}}}}{\eta} + \mathsf{FoM} \, B \, 2^{11} \geq \frac{{P_{\mathrm{t}}}}{\eta} + \mathsf{FoM} \, B \, 4
\end{equation}
where we considered two ADCs (${N_{\mathrm{r}}}=1$) and $b=10$ bits for full resolution. The above yields
\begin{equation}
B \geq \frac{0.36 \, {P_{\mathrm{t}}} / \eta}{ \mathsf{FoM} \, (2^{11}-4)} ,
\end{equation}
which, for the sensible values ${P_{\mathrm{t}}}=23$ dBm and $\eta=0.4$, and with a state-of-the-art $\mathsf{FoM}=10$ pJ/conversion \cite{murmann2016adc}, evaluates to $B \geq 8.8$ GHz. This highly conservative threshold drops rapidly as the number of digitally processed antennas grows large and thus, for bandwidths well within the scope of upcoming wireless systems, 1-bit MIMO can be viewed as information-theoretically optimum for at least some range of SNRs.
\section{Transmit Beamforming}
\label{calor4}
Transmit beamforming corresponds to $\bm \Sigma_{\boldsymbol{x}}$ being $\mbox{\text{rank-1}}$, i.e., to ${\boldsymbol{x}}$ being drawn from a single quartet, with such quartet generally dependent on ${\boldsymbol{H}}$. We examine this strategy with an ergodic perspective; for nonergodic channels, the formulation stands without the expectations over ${\boldsymbol{H}}$.
\subsection{Low SNR}
For vanishing SNR, transmit beamforming is not only conceptually appealing, but information-theoretically optimum.
Indeed, (\ref{Argimon}) can be rewritten as
\begin{align}
\ebno_{\rm \min} = \frac{\pi {N_{\mathrm{t}}}}{ \mathbb{E} \big[ \sum_{k} p_k \, \| {\boldsymbol{H}} {\bm{\mathsf{x}}}_k \|^2 \big] \log_2 e } ,
\label{Argimon2}
\end{align}
which is maximized by assigning probability $1$ to the quartet $k^\star = \arg\max \| {\boldsymbol{H}} {\bm{\mathsf{x}}}_k \|^2$ for each realization of ${\boldsymbol{H}}$.
Therefore, it is optimum to beamform, and the optimum beamforming quartet is the one maximizing the received power.
The task is then to determine $k^\star$ from within the $4^{{N_{\mathrm{t}}}-1}$ possible quartets.
For ${N_{\mathrm{t}}}=1$, there is no need to optimize over $k$---only one quartet can be transmitted---and thus
\begin{align}
\ebno_{\rm \min}
& = \frac{\pi }{2 {N_{\mathrm{r}}} \log_2 e} ,
\end{align}
which amounts to $0.37$ dB for ${N_{\mathrm{r}}}=1$ \cite{Verdu2002} and improves by $3$ dB with every doubling of ${N_{\mathrm{r}}}$ thereafter.
For ${N_{\mathrm{t}}} > 1$, it is useful to recognize that
the choices for ${\boldsymbol{x}}$ that are bound to yield high values for $\| {\boldsymbol{H}} {\boldsymbol{x}} \|^2$ are those that project maximally on the dimension of ${\boldsymbol{H}}$ that offers the largest gain, namely the maximum-eigenvalue eigenvector of ${\boldsymbol{H}}^* {\boldsymbol{H}}$. This, in turn, requires that ${\boldsymbol{x}}$ mimic, as best as possible, the structure of that eigenvector; since the magnitude of the entries of ${\boldsymbol{x}}$ is fixed, this mimicking ought to be in terms of phases only.
Formalizing this intuition, it is possible to circumvent the need to exhaustively search the entire field of $4^{{N_{\mathrm{t}}}-1}$ possibilities and
conveniently identify a subset of only ${N_{\mathrm{t}}}$ quartet candidates that is sure to contain the one best aligning with the maximum-eigenvalue eigenvector of ${\boldsymbol{H}}^* {\boldsymbol{H}}$, denoted henceforth by ${\boldsymbol{v}}_0$.
Precisely, as detailed in Appendix \ref{superlliga}, if we let $\varphi_m = \angle(v_{0,m}) + \epsilon$ for $m=0,\ldots,{N_{\mathrm{t}}}-1$, the ${N_{\mathrm{t}}}$ quartets in the subset can be determined as
\begin{equation}
{\boldsymbol{x}}_k = \text{sgn} \big( e^{\mathrm{j} \varphi_{k-1}} {\boldsymbol{v}}_0 \big) \qquad\quad k=1,\ldots,{N_{\mathrm{t}}}
\label{subset}
\end{equation}
where $\epsilon$ is a small quantity, positive or negative. If the channel is rank-1, then this subset is sure to contain the optimum ${\boldsymbol{x}}_{k^\star}$; if the rank is higher, then optimality is not guaranteed, but the best value in the above subset is bound to yield excellent performance.
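For illustration, the subset in (\ref{subset}) and the ensuing power-based selection fit in a few lines of Python. This is a minimal sketch assuming NumPy; the names \texttt{sgn} and \texttt{best\_quartet} are ad hoc labels of ours, not part of the formal development.
\begin{verbatim}
import numpy as np

def sgn(z):  # per-entry 1-bit quantization onto {1+1j,1-1j,-1+1j,-1-1j}
    return np.sign(z.real) + 1j * np.sign(z.imag)

def best_quartet(H, eps=1e-9):
    # v0: maximum-eigenvalue eigenvector of H* H (top right-singular vector)
    v0 = np.linalg.svd(H)[2][0].conj()
    # The Nt candidate quartets of (subset), one representative each
    cands = [sgn(np.exp(1j * (np.angle(v) + eps)) * v0) for v in v0]
    # Power-based selection, optimum at low SNR
    return max(cands, key=lambda x: np.linalg.norm(H @ x) ** 2)
\end{verbatim}
For a rank-1 channel the returned quartet is the exact maximizer; for higher-rank channels it is the quasi-optimum candidate discussed above.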
Turning to the $\ebnoinline_{\rm min}$ achieved by ${\boldsymbol{x}}_{k^\star}$, its explicit evaluation is complicated, yet its value can be shown (see Appendix \ref{superlliga} again) to satisfy
\begin{align}
\frac{\pi}{2 \, \mathbb{E} \big[ \lambda_0 \big] \log_2 e} \leq \ebno_{\rm \min} \leq \frac{\pi^3 {N_{\mathrm{t}}}}{16 \, \mathbb{E} \big[ \lambda_{0} \| {\boldsymbol{v}}_{0} \|^2_1 \big] \log_2 e}
\label{bounds}
\end{align}
where $\lambda_0$ is the maximum eigenvalue of ${\boldsymbol{H}}^* {\boldsymbol{H}}$
while $\| \cdot \|_1$ denotes L1 norm.
For ${N_{\mathrm{r}}}=1$, (\ref{bounds}) specializes to
\begin{align}
\frac{\pi}{2 {N_{\mathrm{t}}} \log_2 e} \leq \ebno_{\rm \min} \leq \frac{\pi^3}{16 \left(1+({N_{\mathrm{t}}}-1) \frac{\pi}{4} \right) \log_2 e} .
\nonumber
\end{align}
Finally, $S_0$ can be obtained by plugging ${\boldsymbol{x}} = {\boldsymbol{x}}_{k^\star}$ and $\bm \Sigma_{\boldsymbol{x}} = {\boldsymbol{x}}_{k^\star} {\boldsymbol{x}}^*_{k^\star}$ into (\ref{ForzaOriol}).
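Before moving on, we note that the ${N_{\mathrm{r}}}=1$ bounds above are readily tabulated; a minimal Python sketch (illustrative only):
\begin{verbatim}
import numpy as np

log2e = np.log2(np.e)   # Eb/N0_min bounds for Nr = 1, in dB
for Nt in (1, 2, 4, 8, 16):
    lower = np.pi / (2 * Nt * log2e)
    upper = np.pi**3 / (16 * (1 + (Nt - 1) * np.pi / 4) * log2e)
    print(Nt, 10 * np.log10(lower), 10 * np.log10(upper))
\end{verbatim}
For ${N_{\mathrm{t}}}=1$, the lower bound is attained with equality, recovering the $0.37$ dB quoted earlier.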
\subsection{Intermediate SNR}
The low-SNR linearity of the mutual information in the received power is the root cause of the optimality of power-based beamforming in that regime. The orientation on the complex plane of the received signals is immaterial---a rotation shifts power from the real to the imaginary part, or vice versa, but the total power is preserved. Likewise, the power split among receive antennas is immaterial to the low-SNR mutual information.
At higher SNRs, the linearity breaks down and the mutual information becomes a more intricate function of ${\boldsymbol{H}} {\boldsymbol{x}}$, such that proper signal orientations and power balances become important, to keep ${\boldsymbol{h}}_n {\boldsymbol{x}}$ away from the ADC quantization boundaries for $n=0,\ldots,{N_{\mathrm{r}}}-1$. This has a dual consequence:
\begin{itemize}
\item Transmit beamforming ceases to be generally optimum, even if the channel is rank-1.
\item Even within the confines of beamforming, solutions not based on maximizing power are more satisfying.
\end{itemize}
As exemplified in Fig. \ref{comparison} for ${N_{\mathrm{r}}}=1$, a beamforming quartet with a better complex-plane disposition at the receiver may be preferable to one yielding a larger magnitude.
This is because, after a 1-bit ADC, only $90^\circ$ rotations and no scalings are possible (in contrast with full-resolution receivers, where ${\boldsymbol{h}} {\boldsymbol{x}}$ can subsequently be rotated and scaled). The best beamforming quartet is the one that simultaneously ensures large real and imaginary parts for ${\boldsymbol{h}}_n {\boldsymbol{x}}$ in a balanced fashion for $n=0,\ldots,{N_{\mathrm{r}}}-1$, and the task of identifying this quartet is a fitting one for learning algorithms \cite{9210010,Eusipco21}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Comparison2.eps}
\caption{Complex plane representation of the four values of ${\boldsymbol{h}} {\boldsymbol{x}}$ for a given ${\boldsymbol{h}}$ and a given quartet, with the ADC quantization boundaries indicated by dashed lines. Left-hand side, for ${\bm{\mathsf{x}}}_{k}$, which has a larger magnitude but worse orientation. Right-hand side, for ${\bm{\mathsf{x}}}_{\ell}$, which has a smaller magnitude but better orientation. On this channel, quartet $k$ yields a higher mutual information at low SNR while quartet $\ell$ yields a higher mutual information beyond the low-SNR regime.}
\label{comparison}
\end{figure}
We note that, with full-resolution converters, multiple receive antennas play a role dual to that of transmit beamforming \cite[sec. 5.3]{Foundations:18}, and the spectral efficiency with $N$ transmit and one receive antenna equals its brethren with one transmit and $N$ receive antennas. With 1-bit converters, in contrast, transmit beamforming optimizes ${\boldsymbol{h}}_n {\boldsymbol{x}}$ for $n=0,\ldots,{N_{\mathrm{r}}}-1$, to mitigate the addition of noise prior to quantization, while multiple receive antennas yield a diversity of quantized observations from which better decisions can be made on which of the possible vectors was transmitted. This includes majority decisions and erasure declarations in the case of split observations.
\section{Equiprobable Signaling}
\label{calor5}
The complementary strategy to beamforming is to activate multiple quartets, increasing the rank of $\bm \Sigma_{\boldsymbol{x}}$.
Ultimately, all quartets can be activated with equal probability, such that $\bm \Sigma_{\boldsymbol{x}} = 2 {\boldsymbol{I}}$.
This renders the signals IID across the transmit antennas, i.e., pure spatial multiplexing.
We examine this strategy with an ergodic perspective.
\subsection{Low SNR}
With equiprobable signaling, (\ref{Argimon}) gives
\begin{equation}
\ebno_{\rm \min} =\frac{\pi}{2 {N_{\mathrm{r}}} \log_2 e} .
\label{Zaira}
\end{equation}
In addition \cite{mezghani2007ultra},
\begin{align}
\mathbb{E}_{\boldsymbol{x}} \big[ \| & \Re \{ {\boldsymbol{H}} {\boldsymbol{x}} \} \|^4_4 + \| \Im \{ {\boldsymbol{H}} {\boldsymbol{x}} \} \|^4_4 \big] = 6 \, {\rm tr} \big( ( \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) )^2 \big) \nonumber \\
& - 4 \sum_{n=0}^{{N_{\mathrm{r}}}-1} \sum_{m=0}^{{N_{\mathrm{t}}}-1} \left(\Re\{h_{n,m}\}^4 + \Im\{h_{n,m}\}^4 \right) ,
\label{Nil2}
\end{align}
based on which $S_0$ in (\ref{ForzaOriol}) simplifies considerably.
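The identity (\ref{Nil2}) holds for an arbitrary fixed channel and is easily corroborated numerically; the following Monte-Carlo sketch (assuming NumPy, with arbitrary dimensions and seed) is one way to do so.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, T = 3, 4, 200000
H = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))
X = rng.choice([1, -1], size=(T, Nt)) \
    + 1j * rng.choice([1, -1], size=(T, Nt))   # equiprobable signals
Y = X @ H.T                                    # row t holds (H x_t)^T
mc = np.mean(np.sum(Y.real**4 + Y.imag**4, axis=1))
cf = 6 * np.sum(np.sum(np.abs(H)**2, axis=1)**2) \
     - 4 * np.sum(H.real**4 + H.imag**4)
print(mc, cf)                                  # agree up to MC noise
\end{verbatim}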
Combining (\ref{bounds}) and (\ref{Zaira}), the low-SNR advantage of optimum beamforming over equiprobable signaling, denoted by $\Delta_{\sf BF}$, is tightly bounded as
\begin{equation}
\frac{8 \, \mathbb{E} \big[ \lambda_{0} \| {\boldsymbol{v}}_{0} \|^2_1 \big]}{\pi^2 {N_{\mathrm{t}}} {N_{\mathrm{r}}}} \leq \Delta_{\sf BF} \leq \frac{\mathbb{E} \big[ \lambda_0 \big]}{{N_{\mathrm{r}}}} .
\label{Oscar}
\end{equation}
This enables some general considerations:
\begin{itemize}
\item The low-SNR advantage of beamforming is essentially determined by the maximum eigenvalue of ${\boldsymbol{H}}^*{\boldsymbol{H}}$. The advantage is largest in rank-1 channels, and minimal if all eigenvalues are equal (on average or instantaneously, as pertains to ergodic and nonergodic settings).
\item If all eigenvalues are equal, beamforming may still yield a lingering advantage for ${N_{\mathrm{t}}}>{N_{\mathrm{r}}}$, but not otherwise. Indeed, for ${N_{\mathrm{t}}} \leq {N_{\mathrm{r}}}$, if all eigenvalues are equal then $\mathbb{E} \big[ \lambda_0 \big] = {N_{\mathrm{r}}}$ and thus $\Delta_{\sf BF} \leq 1$.
\end{itemize}
\subsection{Intermediate SNR}
While beamforming is optimum at low SNR, it is decidedly suboptimum beyond, and activating multiple quartets becomes instrumental to surpass the 2-b/s/Hz mark. This is the case even in rank-1 channels, where the activation of multiple quartets allows producing richer signals; this can be seen as the 1-bit counterpart to higher-order constellations. And, given how the curse of dimensionality afflicts the computation of the optimum quartet probabilities, equiprobable signaling is a very enticing way of going about this.
As will be seen, not only is it implementationally convenient, but also highly effective.
\section{Channels of Interest}
\label{calor6}
Capitalizing on the analytical tools set forth hitherto, let us now examine the performance of transmit beamforming and equiprobable signaling in various classes of channels, starting with the nonergodic LOS settings and progressing on to the ergodic IID Rayleigh-faded channel.
\subsection{LOS with Planar Wavefronts}
This channel is rank-1, hence the optimum $\ebnoinline_{\rm min}$ can be achieved with equality by the best beamforming quartet in subset (\ref{subset}).
More conveniently for our purposes here, we can rewrite (\ref{Oscarinyu}) as ${\boldsymbol{H}} = \sqrt{{N_{\mathrm{t}}} {N_{\mathrm{r}}}} {\boldsymbol{u}} \, {\boldsymbol{v}}^*$ where
\begin{align}
u_n & = \frac{1}{\sqrt{{N_{\mathrm{r}}}}} \, e^{-j \pi \frac{2n}{\lambda} {d_{\rm r}} \sin \! \theta_{\rm r} \cos \! \phi } \\
v_m & = \frac{1}{\sqrt{{N_{\mathrm{t}}}}} \, e^{-j \pi \frac{2m}{\lambda} {d_{\rm t}} \sin \! \theta_{\rm t} }.
\end{align}
Irrespective of the array orientations, $\lambda_0 = {N_{\mathrm{t}}} {N_{\mathrm{r}}}$ and $\| {\boldsymbol{v}}_0 \|^2_1 = {N_{\mathrm{t}}}$ such that (\ref{bounds}) reverts to
\begin{align}
\frac{\pi}{2 \, {N_{\mathrm{t}}} {N_{\mathrm{r}}} \log_2 e} \leq \ebno_{\rm \min} \leq \frac{\pi^3 }{16 \, {N_{\mathrm{t}}} {N_{\mathrm{r}}} \log_2 e} ,
\label{noies1}
\end{align}
which depends symmetrically on ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}}$.
The significance of $\ebno_{\rm \min}$ as the key measure of low-SNR performance can be appreciated in Fig. \ref{CapMIMObfRank1}, which depicts the low-SNR capacity as a function of $\frac{E_{\rm b}}{N_0}$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=1$, $2$, and $4$ in an exemplary LOS setting. Adding antennas essentially displaces the capacity by the amount by which $\ebno_{\rm \min}$ changes.
Shown in Fig. \ref{EbN0min_vs_N_THz} is how $\ebnoinline_{\rm min}$ improves with the number of antennas (${N_{\mathrm{t}}}={N_{\mathrm{r}}}$) for the same setting. Also shown are the values for equiprobable signaling, undesirable in this case as per (\ref{Oscar}).
The low-SNR advantage of beamforming accrues steadily with the numbers of antennas and the bounds in (\ref{bounds}) tightly bracket the optimum $\ebnoinline_{\rm min}$. As anticipated, the gap of 1-bit beamforming to full-resolution beamforming (included in the figure) remains small.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{CapMIMObfRank1_new.eps}
\caption{Capacity as a function of $E_{\rm b}/N_0$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=1$, ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=2$, and ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=4$, in a planar-wavefront LOS channel with half-wavelength antenna spacings, $\theta_{\rm t}=0$, $\theta_{\rm r}=\pi/6$, and $\phi=\pi/4$.
}
\label{CapMIMObfRank1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{EbN0min_vs_N_THz.eps}
\caption{Minimum $E_{\rm b}/N_0$ as a function of ${N_{\mathrm{t}}}={N_{\mathrm{r}}}$ for a planar-wavefront LOS channel with half-wavelength antenna spacings, $\theta_{\rm t}=0$, $\theta_{\rm r}=\pi/6$, and $\phi=\pi/4$: 1-bit beamforming (exact values in solid, interval spanned by the bounds in shaded) vs equiprobable signaling. Also shown is the performance with full resolution.}
\label{EbN0min_vs_N_THz}
\end{figure}
Moving up to intermediate SNRs, the beamforming and equiprobable-signaling performance on another setting is presented in Fig. \ref{CvsSNR_LOS_planar}.
Also shown is the actual capacity with $p_1,\ldots,p_{4^{{N_{\mathrm{t}}}-1}}$ optimized via Blahut-Arimoto. Up to when the $2$-b/s/Hz ceiling is approached,
beamforming performs splendidly. Past that level, and no matter the rank-1 nature of the channel, equiprobable signaling is highly superior, tracking the capacity to within a roughly constant shortfall.
This example represents well the intermediate-SNR performance in planar-wavefront LOS channels, a point that has been verified by contrasting the asymptotic performance of equiprobable signaling in a variety of such channels against the respective $C_\infty$.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{CvsSNR_LOS_planar2.eps}
\caption{Spectral efficiency as a function of ${\mathsf{SNR}}$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=2$ and ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=4$, in a planar-wavefront LOS channel with half-wavelength antenna spacings, $\theta_{\rm t}=\pi/4$, $\theta_{\rm r}=\pi/6$, and $\phi=\pi/4$. In solid, capacity and beamforming; in dashed, equiprobable signaling.}
\label{CvsSNR_LOS_planar}
\end{figure}
\subsection{LOS with Spherical Wavefronts}
The scope of channels in this class is very large, depending on the array topologies and relative orientations; for the sake of specificity, we concentrate on ULAs, and draw insights whose generalization would be welcome follow-up work.
A key property of ULA-spawned channels within this class is that \cite{do2020reconfigurable}
\begin{equation}
{\boldsymbol{H}}^* {\boldsymbol{H}} \approx \frac{{N_{\mathsf{max}}}}{\eta} {\boldsymbol{D}}^*_{\rm tx} {\boldsymbol{F}} \text{diag}( \underbrace{1,\ldots,1}_{\eta {N_{\mathsf{min}}}} , 0,\ldots,0 ) {\boldsymbol{F}}^* {\boldsymbol{D}}_{\rm tx} \nonumber
\end{equation}
where the approximation sharpens with the numbers of antennas while ${\boldsymbol{F}}$ is a unitary Fourier matrix, $ {\boldsymbol{D}}_{\rm tx}$ and $\eta$ are as introduced in Sec. \ref{Carla18}, ${N_{\mathsf{min}}} = \min({N_{\mathrm{t}}},{N_{\mathrm{r}}})$, and ${N_{\mathsf{max}}} = \max({N_{\mathrm{t}}},{N_{\mathrm{r}}})$. Therefore, $\lambda_0 \approx {N_{\mathsf{max}}}/\eta$ and $\| {\boldsymbol{v}}_0 \|^2_1 \approx {N_{\mathrm{t}}}$.
By specializing (\ref{bounds}), the optimum $\ebnoinline_{\rm min}$ attained by beamforming is seen to satisfy
\begin{align}
\frac{\pi \eta}{2 {N_{\mathsf{max}}} \log_2 e} \lesssim \ebno_{\rm \min} \lesssim \frac{\pi^3 \eta }{16 {N_{\mathsf{max}}} \log_2 e} ,
\label{SIlla}
\end{align}
which indicates that
a smaller $\eta$ is preferable at low SNR, meaning antennas as tightly spaced as possible---this renders the wavefronts maximally planar---and array orientations as endfire as possible---this shrinks their effective widths. Indeed, wavefront curvatures trim the beamforming gains, and reducing $\eta$ mitigates the extent of such curvatures.
With growing $\eta$, the low-SNR performance does degrade, but beamforming retains an edge over equiprobable signaling for $\eta<1$ or ${N_{\mathrm{t}}} > {N_{\mathrm{r}}}$. Alternatively, for $\eta=1$ and ${N_{\mathrm{t}}} = {N_{\mathrm{r}}}$, (\ref{SIlla}) is no better than the equiprobable-signaling $\ebnoinline_{\rm min}$ in (\ref{Zaira}).
In fact, for this all-important configuration whose eigenvalues are equal \cite{Haustein:03,Bohagen:05,Larsson:05,Sarris:07,Sheldon:082,do2021terahertz},
\emph{any} transmission strategy achieves this same $\ebnoinline_{\rm min}$;
this can be verified by using ${\boldsymbol{H}}^* {\boldsymbol{H}}= {N_{\mathrm{r}}} {\boldsymbol{I}}$ and $\| {\bm{\mathsf{x}}}_k \|^2 = 2 {N_{\mathrm{t}}}$ in (\ref{Argimon2}), whereby (\ref{Zaira}) emerges irrespective of $p_1,\ldots,p_{4^{{N_{\mathrm{t}}}-1}}$.
This coincidence of $\ebnoinline_{\rm min}$ for all transmission strategies when $\eta=1$ and ${N_{\mathrm{t}}} = {N_{\mathrm{r}}}$ does not translate to $S_0$, which is decidedly larger for equiprobable signaling, indicating that this is the optimum low-SNR technique for this configuration as illustrated in Fig. \ref{CapMIMObfeq}.
Precisely, applying (\ref{ForzaOriol}) and (\ref{Nil2}), a channel with $\eta=1$ and ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=N$ is seen to exhibit
\begin{equation}
S_0 = \frac{ \frac{2}{\pi-1} N^4 }{N^3-\frac{2}{3} { \displaystyle \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} } \left[ \cos^4 \! \big(2 \pi \frac{n m}{N} \big) + \sin^4 \! \big(2 \pi \frac{n m}{N} \big) \right] } .
\end{equation}
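Evaluating this expression is immediate; a minimal Python sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def S0(N):   # low-SNR slope for an LOS channel with eta = 1, Nt = Nr = N
    n, m = np.meshgrid(np.arange(N), np.arange(N))
    s = np.sum(np.cos(2*np.pi*n*m/N)**4 + np.sin(2*np.pi*n*m/N)**4)
    return (2 / (np.pi - 1)) * N**4 / (N**3 - (2/3) * s)

print([float(np.round(S0(N), 2)) for N in (2, 4, 6, 8)])
\end{verbatim}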
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{CapMIMObfeq.eps}
\caption{Spectral efficiency as a function of $E_{\rm b}/N_0$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=4$ and ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=6$ in an LOS channel with $\eta=1$. In solid, beamforming performance; in dashed, capacity with equiprobable signaling.}
\label{CapMIMObfeq}
\end{figure}
Let us now turn to intermediate SNRs, where the full-resolution wisdom is that the performance depends only on $\eta$ and it improves monotonically with $\eta$ up to $\eta=1$, where capacity is achieved by IID signaling.
All these insights, underpinned by the approximate equality of the $\eta {N_{\mathsf{min}}}$ nonzero eigenvalues of ${\boldsymbol{H}}^* {\boldsymbol{H}}$, cease to hold in the 1-bit realm due to the transmitter's inability to access those eigenvalues directly via precoding.
Indeed, when the only ability is to manipulate the quartet probabilities (see Fig. \ref{RobNur} for an example):
\begin{itemize}
\item The performance does not depend only on $\eta$, but further on $\theta_{\rm t}$, $\theta_{\rm r}$, $\phi$, $D$, $d_{\rm t}$, and $d_{\rm r}$.
\item The optimum configuration need not correspond to $\eta=1$.
\end{itemize}
The main takeaway for our purpose, though, is that at intermediate SNRs equiprobable signaling closely tracks the capacity.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{CMIMOTHz.eps}
\caption{Main plot: spectral efficiency as a function of ${\mathsf{SNR}}$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=4$, both with optimized (solid lines) and with uniform (dashed lines) quartet probabilities. Inset: spectral efficiency as a function of $\eta$ for ${\mathsf{SNR}} = 5$ dB. The channel is LOS and the arrays are broadside with $d_{\rm t} = d_{\rm r}$.}
\label{RobNur}
\end{figure}
\subsection{IID Rayleigh Fading}
For ${\boldsymbol{H}}$ having IID complex Gaussian entries, we resort to the ergodic interpretation. Shown in Fig. \ref{EbN0min_vs_N} is the evolution of $\ebnoinline_{\rm min}$ with the number of antennas for the optimum strategy (beamforming on every channel realization) as well as for equiprobable signaling.
The bounds in (\ref{bounds}) provide an effective characterization of the optimum $\ebno_{\rm \min}$. Moreover, $\lambda_0$ and ${\boldsymbol{v}}_0$ are independent \cite[lemma 5]{lozano2003multiple} and, although $\mathbb{E}[\lambda_0]$ does not lend itself to a general characterization, for growing ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}}$ it approaches $( \sqrt{{N_{\mathrm{t}}}} + \sqrt{{N_{\mathrm{r}}}} )^2$. Thus, beamforming achieves
\begin{align}
\frac{\pi}{2 \, \big( \sqrt{{N_{\mathrm{t}}}} + \sqrt{{N_{\mathrm{r}}}} \big)^{2} \log_2 e} & \lesssim \ebno_{\rm \min} \label{hamburguesada} \\
& \lesssim \frac{\pi^3 {N_{\mathrm{t}}}}{16 \, \big( \sqrt{{N_{\mathrm{t}}}} + \sqrt{{N_{\mathrm{r}}}} \big)^{2} \, \mathbb{E} \big[ \| {\boldsymbol{v}}_0 \|^2_1 \big] \log_2 e} \nonumber
\end{align}
which sharpens with ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}}$. For ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=64$, for instance, (\ref{hamburguesada}) gives $\ebno_{\rm \min} \in [-23.71, -21.77]$ dB, correctly placing the actual value of $-22.21$ dB. The term $\mathbb{E} \big[ \| {\boldsymbol{v}}_0 \|^2_1 \big]$ is readily computable for given values of ${N_{\mathrm{t}}}$ and we note,
as a possible path to taming it analytically, that ${\boldsymbol{v}}_0$ is a column of an isotropically distributed unitary matrix, uniformly distributed over an ${N_{\mathrm{t}}}$-dimensional sphere.
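Pending such a characterization, $\mathbb{E} \big[ \| {\boldsymbol{v}}_0 \|^2_1 \big]$ is easily obtained by Monte-Carlo; a minimal sketch exploiting the spherical uniformity just noted (assuming NumPy; the function name is ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def E_l1_sq(Nt, trials=100000):
    v = rng.normal(size=(trials, Nt)) + 1j * rng.normal(size=(trials, Nt))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform on the sphere
    return np.mean(np.sum(np.abs(v), axis=1) ** 2)

# The exact value is 1 + (Nt - 1) * pi / 4, as used earlier for Nr = 1
print([float(np.round(E_l1_sq(Nt), 2)) for Nt in (2, 4, 16, 64)])
\end{verbatim}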
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{EbN0min_vs_N.eps}
\caption{Minimum $\frac{E_{\rm b}}{N_0}$ vs ${N_{\mathrm{t}}}={N_{\mathrm{r}}}$ for an IID Rayleigh-faded channel: 1-bit beamforming (exact values in solid, interval spanned by the bounds in shaded) vs equiprobable signaling. Also shown is the performance with full resolution.}
\label{EbN0min_vs_N}
\end{figure}
With equiprobable signaling, $\ebno_{\rm \min}$ is given by (\ref{Zaira}) and we can further characterize $S_0$.
Starting from (\ref{ForzaOriol}) and using
\begin{align}
\big(\text{nondiag}({\boldsymbol{H}} {\boldsymbol{H}}^*)\big)^2 & = ({\boldsymbol{H}}\bH^*)^2 - {\boldsymbol{H}} {\boldsymbol{H}}^* \, \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) \nonumber \\
& \quad - \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) \, {\boldsymbol{H}} {\boldsymbol{H}}^* \nonumber \\
& \quad + \big( \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) \big)^2
\end{align}
in conjunction with \cite[lemma 4]{lozano2003multiple}
\begin{align}
\mathbb{E} \big[ {\rm tr} \big( ( {\boldsymbol{H}} {\boldsymbol{H}}^*)^2 \big) \big] & = {N_{\mathrm{t}}} {N_{\mathrm{r}}} \, ({N_{\mathrm{t}}} + {N_{\mathrm{r}}}) \\
\mathbb{E} \big[ {\rm tr} \big( {\boldsymbol{H}} {\boldsymbol{H}}^* \, \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) \big) \big] & = {N_{\mathrm{r}}} {N_{\mathrm{t}}} ({N_{\mathrm{t}}}+1) \\
\mathbb{E} \big[ {\rm tr} \big( \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) \, {\boldsymbol{H}} {\boldsymbol{H}}^* \big) \big] & = {N_{\mathrm{r}}} {N_{\mathrm{t}}} ({N_{\mathrm{t}}}+1) \\
\mathbb{E} \big[ {\rm tr} \big( ( \text{diag}({\boldsymbol{H}} {\boldsymbol{H}}^*) )^2 \big) \big] & = {N_{\mathrm{r}}} {N_{\mathrm{t}}} ({N_{\mathrm{t}}}+1)
\end{align}
we have that
\begin{equation}
\mathbb{E} \Big[ {\rm tr} \big( (\text{nondiag}({\boldsymbol{H}} {\boldsymbol{H}}^*) )^2 \big) \Big] = {N_{\mathrm{t}}} {N_{\mathrm{r}}} \, ({N_{\mathrm{r}}} - 1) .
\end{equation}
In turn, with $\| {\boldsymbol{H}} {\boldsymbol{x}} \|^4_4$ shorthand for $\| \Re \{ {\boldsymbol{H}} {\boldsymbol{x}} \} \|^4_4 + \| \Im \{ {\boldsymbol{H}} {\boldsymbol{x}} \} \|^4_4$ as in (\ref{Nil2}),
\begin{equation}
\mathbb{E} \big[ \| {\boldsymbol{H}} {\boldsymbol{x}} \|^4_4 \big] = 6 N^2_{\rm t} {N_{\mathrm{r}}}
\end{equation}
and, altogether,
\begin{equation}
S_0 = \frac{2 {N_{\mathrm{t}}} {N_{\mathrm{r}}}}{ (\pi-1) {N_{\mathrm{t}}} + {N_{\mathrm{r}}} -1} ,
\end{equation}
which is an increasing function of both ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}}$.
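The trace identities invoked above are likewise easy to corroborate; a minimal Monte-Carlo sketch (assuming NumPy; dimensions and seed are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Nt, Nr, T = 4, 3, 20000
acc = 0.0
for _ in range(T):
    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) \
        / np.sqrt(2)               # IID CN(0,1) entries
    A = H @ H.conj().T
    ND = A - np.diag(np.diag(A))   # nondiag(H H*)
    acc += np.trace(ND @ ND).real
print(acc / T, Nt * Nr * (Nr - 1)) # Monte-Carlo vs closed form
\end{verbatim}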
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{C1thruC4_MIMO_IID.eps}
\caption{Ergodic spectral efficiency vs ${\mathsf{SNR}}$ for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=2$ and $4$ both with optimized (solid lines) and with equiprobable (dashed lines) signaling. The channel is IID Rayleigh-faded.}
\label{C1thruC4_MIMO_IID}
\end{figure}
At intermediate SNRs, equiprobable signaling is remarkably effective (see Fig.~\ref{C1thruC4_MIMO_IID}).
At the same time, the complexity of computing the mutual information---for equiprobable signaling, let alone with optimized quartet probabilities---is compounded by the need to average it over the distribution of ${\boldsymbol{H}}$, to the point of becoming unwieldy even for very small antenna counts.
Analytical characterizations are thus utterly necessary, and it is shown in Appendix \ref{Topo} that
\begin{align}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}}) \big] & = \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \label{Lluc1} \\
& \!\!\!\!\!\!\!\!\!\!\!\!\!\! - \frac{2 {N_{\mathrm{r}}}}{\sqrt{2 \pi}} \! \int_{-\infty}^\infty \! \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - \sqrt{{\mathsf{SNR}} } \, \xi \! \right) \! \right) e^{-\xi^2/2} \, \mathrm{d}\xi \nonumber
\end{align}
with
\begin{align}
& 2 {N_{\mathrm{r}}} \geq \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \geq 2 {N_{\mathrm{t}}} - 2 {N_{\mathrm{r}}} \nonumber \\
& \qquad\;\;\, - \log_2 \! \Bigg[ \sum_{i=0}^{N_{\mathrm{t}}}
\left( \!\!
\begin{array}{c}
{N_{\mathrm{t}}} \\
i
\end{array}
\!\! \right)^{\!\! 2}
\left( \frac{1}{4 \pi^2} \arccos^2 \! \left( \frac{2 i}{{N_{\mathrm{t}}}} -1 \right) \! \right)^{\! {N_{\mathrm{r}}}} \nonumber \\
& \qquad\;\;\, + 2 \sum_{i=0}^{{N_{\mathrm{t}}}} \sum_{j=i+1}^{{N_{\mathrm{t}}}}
\left( \!\!
\begin{array}{c}
{N_{\mathrm{t}}} \\
i
\end{array}
\!\! \right) \!
\left( \!\!
\begin{array}{c}
{N_{\mathrm{t}}} \\
j
\end{array}
\!\! \right)
P^{N_{\mathrm{r}}}_\cap(i,j)
\Bigg] \label{XSM}
\end{align}
where $P_\cap(i,j)$ is given by (\ref{moderna}).
The bounds specified by (\ref{Lluc1})--(\ref{moderna})
are readily computable even for very large numbers of antennas. For ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=64$, for instance, a direct evaluation of $\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}}) \big]$ would require computing $ \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}})$ for many realizations of ${\boldsymbol{H}}$, with each such mutual information calculation involving over $10^{75}$ terms. In contrast, the bounds entail the single SNR-dependent integral in (\ref{Lluc1}) along with (\ref{XSM}), which does not depend on the SNR and can be precomputed; Table~\ref{LBHy} provides such precomputation for a range of antenna counts.
Also of interest is that the upper bound becomes exact for ${\mathsf{SNR}} \to 0$.
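As an illustration of how such a precomputation might proceed, the sketch below (assuming NumPy and SciPy; the helper names \texttt{P\_cap} and \texttt{Hy\_lower} are ours) evaluates $P_\cap(i,j)$ by direct numerical integration of (\ref{moderna}) and assembles the lower bound in (\ref{XSM}), invoking the closed forms of Appendix \ref{Topo} for the diagonal and corner cases.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import erfc, comb

def P_cap(i, j, Nt):
    if i == j:                     # diagonal closed form
        return np.arccos(2*i/Nt - 1)**2 / (4*np.pi**2)
    if {i, j} == {0, Nt}:          # P(0,Nt) = P(Nt,0) = 0
        return 0.0
    d = np.sqrt(i*(Nt - i) + j*(Nt - j))
    f = lambda x, g: erfc(-(g*(Nt - i - j) + x*(i - j)) / d) \
        * erfc((g*(i - j) - x*(Nt - i - j)) / d) \
        * np.exp(-2*(g**2 + x**2))
    return dblquad(f, 0, np.inf, 0, np.inf)[0] / (2*np.pi)

def Hy_lower(Nt, Nr):              # lower bound (XSM) on E[H(y)]
    s = sum(comb(Nt, i)**2 * P_cap(i, i, Nt)**Nr for i in range(Nt + 1))
    s += 2*sum(comb(Nt, i) * comb(Nt, j) * P_cap(i, j, Nt)**Nr
               for i in range(Nt + 1) for j in range(i + 1, Nt + 1))
    return 2*Nt - 2*Nr - np.log2(s)

print(Hy_lower(2, 2))              # ~3.02, as tabulated
\end{verbatim}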
\begin{figure*}[b]
\begin{align}
\!\! P_\cap (i,j) & = \frac{1}{2 \pi} \int_0^\infty \!\!\!\! \int_0^\infty \text{erfc} \! \left( - \frac{ \gamma ({N_{\mathrm{t}}}-i-j) + \xi (i-j) }{\sqrt{i({N_{\mathrm{t}}}-i)+j({N_{\mathrm{t}}}-j)}} \right) \text{erfc} \! \left( \frac{ \gamma (i-j) - \xi ({N_{\mathrm{t}}}-i-j) }{ \sqrt{i({N_{\mathrm{t}}}-i)+j({N_{\mathrm{t}}}-j)}} \right) e^{-2 \left( \gamma^2+\xi^2 \right)} \, \mathrm{d}\gamma \, \mathrm{d}\xi
\label{moderna}
\end{align}
\end{figure*}
\begin{table}
\renewcommand{\arraystretch}{1.1}
\caption{Lower bound on $\mathbb{E}_{\boldsymbol{H}} \! \big[ \mathcal{H}({\boldsymbol{y}}) \big]$ as a function of ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}}$.}
\label{LBHy}
\centering
\begin{tabular}{ |l|c|c|c|c|c|c| }
\hline
${N_{\mathrm{r}}} \downarrow$ ${N_{\mathrm{t}}} \to$ & 1 & 2 & 4 & 8 & 16 & 32 \\
\hline\hline
1 & $2$ & $2$ & $2$ & $2$ & $2$ & 2 \\
\hline
2 & $2$ & $3.02$ & $3.65$ & $3.85$ & $3.92$ & $3.96$ \\
\hline
4 & $2$ & $3.81$ & $6.07$ & $7.15$ & $7.57$ & $7.78$ \\
\hline
8 & $2$ & $4$ & $7.79$ & $12.35$ & $14.14$ & $15.03$ \\
\hline
16 & $2$ & $4$ & $8$ & $15.87$ & $24.81$ & $28.14$ \\
\hline
32 & $2$ & $4$ & $8$ & $16$ & $31.96$ & $49.6$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{C1thruC16approx.eps}
\caption{Ergodic spectral efficiency as a function of ${\mathsf{SNR}}$ with equiprobable signaling: the shaded areas are the bounding regions, the solid lines are the actual values for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=2$ and ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=4$. The channel is IID Rayleigh-faded.}
\label{C1thruC16approx}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Casymmetric.eps}
\caption{Ergodic spectral efficiency as a function of ${\mathsf{SNR}}$ with equiprobable signaling: the shaded areas are the bounding regions, the solid lines are the actual values for ${N_{\mathrm{t}}}=4,{N_{\mathrm{r}}}=1$ and ${N_{\mathrm{t}}}=8,{N_{\mathrm{r}}}=2$. The channel is IID Rayleigh-faded.}
\label{Casymmetric}
\end{figure}
The range specified by the bounds is illustrated in Fig.~\ref{C1thruC16approx} for various values of ${N_{\mathrm{t}}}={N_{\mathrm{r}}}$, alongside the actual spectral efficiencies (obtained via Monte-Carlo) for ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=2$ and ${N_{\mathrm{t}}}={N_{\mathrm{r}}}=4$.
For ${N_{\mathrm{t}}} > {N_{\mathrm{r}}}$, the lower bound approaches its upper counterpart
and, for ${N_{\mathrm{t}}} \gg {N_{\mathrm{r}}}$,
\begin{align}
& \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{I}({\mathsf{SNR}},{\boldsymbol{H}}) \big] \label{JLA} \\
& \; \approx 2 {N_{\mathrm{r}}} \left( 1 - \frac{1}{\sqrt{2 \pi}} \! \int_{-\infty}^\infty \! \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - \sqrt{{\mathsf{SNR}} } \, \xi \! \right) \! \right) e^{-\xi^2/2} \, \mathrm{d}\xi \right) \! .
\nonumber
\end{align}
Indeed, as detailed in Appendix \ref{Topo}, this approximation becomes an exact result for ${N_{\mathrm{r}}}=1$ or for ${N_{\mathrm{t}}}\to \infty$ with ${N_{\mathrm{r}}}$ arbitrary.
Some examples for ${N_{\mathrm{t}}}=4 {N_{\mathrm{r}}}$, presented in Fig.~\ref{Casymmetric}, confirm how precisely the ergodic spectral efficiency is determined when the antenna counts are somewhat skewed.
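For reference, the single integral in (\ref{JLA}) is effortless to evaluate; a minimal sketch (assuming NumPy and SciPy; the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def Hb(p):                         # binary entropy function
    p = np.clip(p, 1e-300, 1 - 1e-16)
    return -p*np.log2(p) - (1 - p)*np.log2(1 - p)

def se_skewed(snr, Nr):            # approximation (JLA), for Nt >> Nr
    # Q(-a) = Phi(a), hence the Gaussian CDF below
    f = lambda x: Hb(norm.cdf(np.sqrt(snr)*x)) * norm.pdf(x)
    return 2*Nr*(1 - quad(f, -np.inf, np.inf)[0])
\end{verbatim}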
\section{Conclusion}
\label{calor7}
A host of issues that are thoroughly understood for full-resolution settings must be tackled anew for 1-bit MIMO communication.
In particular, the computation of the capacity becomes unwieldy for even very modest dimensionalities and the derivation of general precoding solutions becomes a formidable task, itself power-consuming. Fortunately, in the single-user case such general precoding can be circumvented via a judicious switching between beamforming and equiprobable signaling, with the added benefits that these transmission strategies are much more amenable to analytical characterizations and that their requirements in terms of channel-state information at the transmitter are minimal: $\log_2 4^{{N_{\mathrm{t}}}-1} = 2 \,({N_{\mathrm{t}}}-1)$ bits for beamforming, none for equiprobable signaling.
The transition from beamforming to equiprobable signaling could be finessed by progressively activating quartets as the SNR grows, but the results in this paper suggest that there is a small margin of improvement: a direct switching at some appropriate point suffices to operate within a few dB of capacity at both low and intermediate SNRs.
It would be of interest to gauge this shortfall for more intricate channel models such as those in \cite{9405492,9411894,Undi2021}.
Channel estimation at the receiver is an important aspect, with the need for procedures that avoid having to painstakingly gauge all the transition probabilities between ${\boldsymbol{x}}$ and ${\boldsymbol{y}}$ to deduce ${\boldsymbol{H}}$. Of much interest would be to extend existing results for channel estimation with full-resolution DACs and 1-bit ADCs \cite{ivrlac2007mimo,7600443,li2017channel,wan2020generalized,atzeni2021channel}
to the complete 1-bit realm. Equally pertinent would be to establish the bandwidths over which a frequency-flat representation suffices for each channel model, and to extend the respective analyses to account for intersymbol interference. This is acutely important given the impossibility of implementing OFDM with 1-bit converters.
In those multiuser settings where orthogonal (time/frequency) multiple access is effective, switching between beamforming and equiprobable signaling is also enticing. In other cases, chiefly if the antenna numbers are highly asymmetric, orthogonal multiple access is decidedly suboptimum, and there is room for more general schemes. We hope that the results in this paper can serve as a stepping stone to such schemes.
\appendices
\section{}
\label{superlliga}
Let $\sigma_0,\ldots,\sigma_{{N_{\mathsf{min}}}-1}$ be the singular values of ${\boldsymbol{H}}$, ordered from largest to smallest, while ${\boldsymbol{u}}_m$ and ${\boldsymbol{v}}_m$ are the left and right singular vectors corresponding to $\sigma_m$.
From the singular value decomposition
\begin{equation}
{\boldsymbol{H}} = \!\!\! \sum_{m=0}^{{N_{\mathsf{min}}}-1} \!\!\! \sigma_m {\boldsymbol{u}}_m {\boldsymbol{v}}^*_m
\end{equation}
we have that
\begin{equation}
\| {\boldsymbol{H}} {\boldsymbol{x}} \|^2 = \!\! \sum_{m=0}^{{N_{\mathsf{min}}}-1} \!\! \sigma^2_m \left| {\boldsymbol{v}}^*_m {\boldsymbol{x}} \right|^2 ,
\label{dega}
\end{equation}
which is the quantity to maximize. Under full-resolution transmission, (\ref{dega}) is maximized by ${\boldsymbol{x}} \propto {\boldsymbol{v}}_0$: complete projection on the dimension exhibiting the largest gain and zero projection elsewhere \cite[sec. 5.3]{Foundations:18}.
With 1-bit transmission, perfect alignment with ${\boldsymbol{v}}_0$ is generally not possible, and the goal becomes to determine which ${\boldsymbol{x}}$ best aligns. If ${\boldsymbol{H}}$ is rank-1, then such ${\boldsymbol{x}}$ is sure to maximize (\ref{dega}).
If the rank is plural, however, optimality cannot be guaranteed from best alignment with ${\boldsymbol{v}}_0$ because some other ${\boldsymbol{x}}$ leaning further away could have a more favorable projection across the rest of dimensions. Suppose, for instance, that the rank is $3$; if the ${\boldsymbol{x}}$ best aligned with ${\boldsymbol{v}}_0$ does not further project on ${\boldsymbol{v}}_1$, but only on ${\boldsymbol{v}}_2$, there could be another ${\boldsymbol{x}}$ aligning slightly less with ${\boldsymbol{v}}_0$ but projecting also on ${\boldsymbol{v}}_1$ in a way that yields a higher metric in (\ref{dega}).
This possibility may arise when the largest singular value is not very dominant.
Even then, though, the ${\boldsymbol{x}}$ that projects maximally on ${\boldsymbol{v}}_0$ is bound to perform well.
Values of ${\boldsymbol{x}}$ that align well with ${\boldsymbol{v}}_0$ can be obtained as ${\boldsymbol{x}} = \text{sgn}\big( e^{\mathrm{j} \varphi} {\boldsymbol{v}}_0 \big)$ where $\varphi$ allows setting the absolute phase arbitrarily before quantization. Letting $\varphi $ run from $0$ to $2 \pi$, every entry of the quantized ${\boldsymbol{x}}$ changes four times and a subset of $4 {N_{\mathrm{t}}}$ vectors ${\boldsymbol{x}}$ is obtained. These $4 {N_{\mathrm{t}}}$ vectors actually belong to ${N_{\mathrm{t}}}$ quartets because, if ${\bm{\mathsf{x}}}_k$ is in the subset, $\mathrm{j} {\bm{\mathsf{x}}}_k$ is sure to be there too. Since identifying one representative per quartet suffices for our purposes, attention can be restricted to those values of $\varphi$ that trigger a change in $\text{sgn}\big( e^{\mathrm{j} \varphi} {\boldsymbol{v}}_0 \big)$, i.e., $\varphi = \angle(v_{0,m})$ for $m=0,\ldots,{N_{\mathrm{t}}}-1$. Letting $\varphi_m = \angle(v_{0,m}) + \epsilon$ with $\epsilon$ a small quantity, we obtain the subset of ${N_{\mathrm{t}}}$ quartet representatives as
\begin{equation}
{\bm{\mathsf{x}}}_k = \text{sgn}\big( e^{\mathrm{j} \varphi_{k-1}} {\boldsymbol{v}}_0 \big) \qquad\quad k=1,\ldots,{N_{\mathrm{t}}} .
\label{picanteria}
\end{equation}
The sign of $\epsilon$ is irrelevant; it merely changes which representative is selected for each quartet.
Confirming the intuition that the ${N_{\mathrm{t}}}$ quartets in (\ref{picanteria}) are good choices, it is proved in \cite{gao2018beamforming} that the quartet that best aligns with ${\boldsymbol{v}}_0$ is sure to be in this subset. Thus, searching a subset of ${N_{\mathrm{t}}}$ candidates suffices to beamform optimally in rank-1 channels, and quasi-optimally in higher-rank channels, without having to search the entire field of $4^{{N_{\mathrm{t}}}-1}$ possibilities.
Let us now turn to the performance.
An upper bound on $\| {\boldsymbol{H}} {\bm{\mathsf{x}}}_{k^\star} \|^2$ can be obtained by assuming that, on every channel realization, there is a value of ${\boldsymbol{x}}$ that aligns perfectly with ${\boldsymbol{v}}_0$. From (\ref{dega}), this gives
\begin{equation}
\| {\boldsymbol{H}} {\boldsymbol{x}} \|^2 \leq 2 {N_{\mathrm{t}}} \sigma^2_0,
\label{consellers}
\end{equation}
which, along with (\ref{Argimon2}), yields the lower bound in (\ref{bounds}).
In turn, a lower bound on $\| {\boldsymbol{H}} {\bm{\mathsf{x}}}_{k^\star} \|^2$ is provided by any specific choice of ${\boldsymbol{x}}$, and in particular by ${\boldsymbol{x}} = \text{sgn} \big( e^{\mathrm{j} \varphi} {\boldsymbol{v}}_0 \big)$ with $\varphi \in [-\pi/4,\pi/4]$, such that \cite{gao2018beamforming}
\begin{align}
\left| {\boldsymbol{v}}^*_0 {\bm{\mathsf{x}}}_{k^\star} \right| & \geq \max_{\varphi \in [-\pi/4,\pi/4]} \left| {\boldsymbol{v}}^*_0 \, \text{sgn} \big( e^{\mathrm{j} \varphi} {\boldsymbol{v}}_0 \big) \right| \\
& \geq \mathbb{E}_\varphi \!\! \left[ \left| {\boldsymbol{v}}^*_0 \, \text{sgn} \big( e^{\mathrm{j} \varphi} {\boldsymbol{v}}_0 \big) \right| \right] \\
& = \mathbb{E}_\varphi \!\! \left[ \left| \sum_{m=0}^{{N_{\mathrm{t}}}-1} v^*_{0,m} \, \text{sgn} \big( e^{\mathrm{j} \varphi} v_{0,m} \big) \right| \right] \\
& = \mathbb{E}_\varphi \!\! \left[ \left| \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| e^{-\mathrm{j} \phi_m} \, \text{sgn} \big( e^{\mathrm{j} (\varphi+\phi_m)} \big) \right| \right] \nonumber \\
& = \mathbb{E}_\varphi \!\! \left[ \left| \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| e^{\mathrm{j} \varphi} e^{-\mathrm{j} (\varphi+\phi_m)} \, \text{sgn} \big( e^{\mathrm{j} (\varphi+\phi_m)} \big) \right| \right]
\nonumber \\
& = \mathbb{E}_\varphi \!\! \left[ \left| \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| e^{-\mathrm{j} (\varphi+\phi_m)} \, \text{sgn} \big( e^{\mathrm{j} (\varphi+\phi_m)} \big) \right| \right] \nonumber .
\end{align}
For any $\theta \in [0,2\pi]$, the phase of $e^{-\mathrm{j} \theta} \text{sgn} \big( e^{\mathrm{j} \theta} \big)$ is within $[-\pi/4,\pi/4]$ while $\left| e^{-\mathrm{j} \theta} \text{sgn} \big( e^{\mathrm{j} \theta} \big) \right| = \sqrt{2}$.
Hence, letting $\theta_m=\varphi+\phi_m$,
\begin{align}
\left| {\boldsymbol{v}}^*_0 {\bm{\mathsf{x}}}_{k^\star} \right| & \geq \sqrt{2} \, \mathbb{E}_{\theta_0,\ldots,\theta_{{N_{\mathrm{t}}}-1}} \!\! \left[ \left| \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| e^{-\mathrm{j} \theta_m} \right| \right] \\
& \geq \sqrt{2} \, \, \mathbb{E}_{\theta_0,\ldots,\theta_{{N_{\mathrm{t}}}-1}} \!\! \left[ \left| \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| \cos(\theta_m) \right| \right] \\
& = \sqrt{2} \, \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| \mathbb{E} \big[ \! \cos(\theta_m) \big] \label{CEC} \\
& = \sqrt{2} \, \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| \frac{2}{\pi} \int_{-\pi/4}^{\pi/4} \!\! \cos(\xi) \, \mathrm{d}\xi \\
& = \frac{4}{\pi} \sum_{m=0}^{{N_{\mathrm{t}}}-1} \! \left|v_{0,m}\right| \\
& = \frac{4}{\pi} \, \| {\boldsymbol{v}}_0 \|_1 ,
\end{align}
where (\ref{CEC}) holds because $\cos(\theta_m) > 0$ for $\theta_m \in [-\pi/4,\pi/4]$.
Disregarding $\left| {\boldsymbol{v}}^*_m {\bm{\mathsf{x}}}_{k^\star} \right|$ for $m>0$ in (\ref{dega}),
\begin{equation}
\| {\boldsymbol{H}} {\bm{\mathsf{x}}}_{k^\star} \|^2 \geq \frac{16}{\pi^2} \, \sigma^2_0 \, \| {\boldsymbol{v}}_0 \|^2_1 ,
\label{BarPlaca}
\end{equation}
which, along with (\ref{Argimon2}), yields the upper bound in (\ref{bounds}).
\section{}
\label{Topo}
In (\ref{TrumpOut2}), for every $n$ and $k$, $\Re\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \sim \mathcal{N}(0,{N_{\mathrm{t}}})$ and $\Im\{{\boldsymbol{h}}_n {\bm{\mathsf{x}}}_k \} \sim \mathcal{N}(0,{N_{\mathrm{t}}})$. Thus, letting $r \sim \mathcal{N}(0,1)$,
\begin{align}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}} | {\boldsymbol{x}}) \big] & = 2 {N_{\mathrm{r}}} \, \mathbb{E}_r \! \left[ \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - \sqrt{{\mathsf{SNR}} } \, r \! \right) \! \right) \! \right] \\
& = \frac{2 {N_{\mathrm{r}}}}{\sqrt{2 \pi}} \! \int_{-\infty}^\infty \! \mathcal{H}_{\rm b} \! \left( Q \! \left( \! - \sqrt{{\mathsf{SNR}} } \, \xi \! \right) \! \right) e^{-\xi^2/2} \, \mathrm{d}\xi . \nonumber
\end{align}
In turn,
\begin{equation}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \leq 2 {N_{\mathrm{r}}}
\label{Lluc3}
\end{equation}
with equality for ${\mathsf{SNR}} \to 0$, when the receiver observes only noise and ${\boldsymbol{y}}$ is equiprobably binary on $2 {N_{\mathrm{r}}}$ real dimensions.
As the removal of noise can only decrease it, $\mathcal{H}({\boldsymbol{y}})$ diminishes as the SNR grows, being lower-bounded by its value for ${\mathsf{SNR}} \to \infty$. The expectation of such noiseless lower bound over ${\boldsymbol{H}}$ can be elaborated by generalizing to our complex setting a clever derivation in \cite{gao2017power}, starting from
\begin{align}
& \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] = \mathbb{E}_{\boldsymbol{H}} \!\! \left[ \sum_{\ell=1}^{4^{{N_{\mathrm{r}}}}} p_{\boldsymbol{y}}({\bm{\mathsf{y}}}_\ell) \log_2 \frac{1}{p_{\boldsymbol{y}}({\bm{\mathsf{y}}}_\ell)} \right] \\
& \qquad = \mathbb{E}_{\boldsymbol{H}} \!\! \left[ \sum_{\ell=1}^{4^{{N_{\mathrm{r}}}}} \mathbb{E}_{\boldsymbol{x}} \big[ p_{{\boldsymbol{y}}|{\boldsymbol{x}}}({\bm{\mathsf{y}}}_\ell | {\boldsymbol{x}}) \big] \log_2 \frac{1}{\mathbb{E}_{\boldsymbol{x}} \big[ p_{{\boldsymbol{y}}|{\boldsymbol{x}}}({\bm{\mathsf{y}}}_\ell | {\boldsymbol{x}}) \big]} \right] \nonumber \\
& \qquad = \mathbb{E}_{\boldsymbol{H}} \!\! \left[ \sum_{\ell=1}^{4^{{N_{\mathrm{r}}}}} \mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\boldsymbol{y}}_\ell \} \big] \right. \nonumber \\
& \qquad \qquad\quad \cdot \log_2 \frac{1}{\mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\boldsymbol{y}}_\ell \} \big]} \Bigg]
\label{Aragones}
\end{align}
where $1\{ \cdot \}$ is the indicator function. Since ${\boldsymbol{H}}$
is isotropic and ${\boldsymbol{x}}$ is equiprobable, no ${\bm{\mathsf{y}}}_\ell$ is favored over the rest in terms of the probability of $\text{sgn}({\boldsymbol{H}} {\boldsymbol{x}})$ equalling such ${\bm{\mathsf{y}}}_\ell$. Hence, (\ref{Aragones}) can be evaluated for any specific
${\bm{\mathsf{y}}}_\ell$, say ${\bm{\mathsf{y}}}_1 $ whose entries all equal $1+\mathrm{j}$. This gives
\begin{align}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] & = 4^{N_{\mathrm{r}}} \, \mathbb{E}_{\boldsymbol{H}} \! \Bigg[ \mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\bm{\mathsf{y}}}_1 \} \big] \nonumber \\
& \quad \cdot \log_2 \frac{1}{\mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\bm{\mathsf{y}}}_1 \} \big]} \Bigg] .
\end{align}
Likewise, the probability that $\text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\bm{\mathsf{y}}}_1$ is common to every realization of ${\boldsymbol{x}}$ and thus
\begin{align}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] & = - 4^{N_{\mathrm{r}}} \, \mathbb{E}_{\boldsymbol{H}} \! \Big[ 1\{ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \} \nonumber \\
& \quad \cdot \log_2 \mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\bm{\mathsf{y}}}_1 \} \big] \Big]
\end{align}
where all entries of ${\bm{\mathsf{x}}}_1$ equal $1+\mathrm{j}$ and where it is convenient to retain the second expectation over ${\boldsymbol{x}}$ in order to later solve its counterpart over ${\boldsymbol{H}}$.
Then,
\begin{align}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] & = - 4^{N_{\mathrm{r}}} \, \mathbb{E}_{{\boldsymbol{H}} | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 } \! \Big[ 1\{ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \} \nonumber \\
& \quad \cdot \log_2 \mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\bm{\mathsf{y}}}_1 \} \big] \Big] \nonumber \\
& \quad \cdot \mathbb{P} [\text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1]
\end{align}
and, since $\mathbb{P} [\text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1] = \frac{1}{4^{N_{\mathrm{r}}}}$ and the factor $1\{ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \} $ becomes immaterial once the expectation over ${\boldsymbol{H}}$ has been conditioned on $\text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1)$,
\begin{align}
& \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \\
& \; = - \mathbb{E}_{{\boldsymbol{H}} | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 } \! \Big[ \log_2 \mathbb{E}_{\boldsymbol{x}} \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\boldsymbol{x}}) = {\bm{\mathsf{y}}}_1 \} \big] \Big] \nonumber \\
& \; = - \mathbb{E}_{{\boldsymbol{H}} | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 } \!\! \left[ \log_2 \frac{1}{4^{N_{\mathrm{t}}}} \sum_{k=1}^{4^{N_{\mathrm{t}}}} 1\{ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 \} \! \right] \nonumber \\
& \; = 2 {N_{\mathrm{t}}} - \mathbb{E}_{{\boldsymbol{H}} | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 } \!\! \left[ \log_2 \sum_{k=1}^{4^{N_{\mathrm{t}}}} 1\{ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 \} \! \right] \nonumber \\
& \; \geq 2 {N_{\mathrm{t}}} - \log_2 \sum_{k=1}^{4^{N_{\mathrm{t}}}} \mathbb{E}_{{\boldsymbol{H}} | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 } \big[ 1\{ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 \} \big] \nonumber
\end{align}
where the last step follows from Jensen's inequality. Since the expectation of an indicator function yields the probability of the underlying event,
\begin{align}
& \mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \geq 2 {N_{\mathrm{t}}} \label{astra} \\
& \qquad\quad - \log_2 \sum_{k=1}^{4^{N_{\mathrm{t}}}} {\mathbb{P}} \big[ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \big] \nonumber
\end{align}
with
\begin{align}
& {\mathbb{P}} \big[ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 | \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \big] \\
& \qquad\quad = \frac{{\mathbb{P}} \big[ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 \, \cap \, \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \big] }{{\mathbb{P}} \big[ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \big]} \nonumber \\
& \qquad\quad = 4^{N_{\mathrm{r}}} \, {\mathbb{P}} \big[ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 \, \cap \, \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \big] . \nonumber
\end{align}
As the channel has IID entries, letting ${\boldsymbol{h}}$ be an arbitrary row of ${\boldsymbol{H}}$,
\begin{equation}
{\mathbb{P}} \big[ \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_k) = {\bm{\mathsf{y}}}_1 \, \cap \, \text{sgn}({\boldsymbol{H}} {\bm{\mathsf{x}}}_1) = {\bm{\mathsf{y}}}_1 \big] = (P_\cap)^{N_{\mathrm{r}}}
\end{equation}
with
\begin{align}
P_\cap & = {\mathbb{P}} \big[ \text{sgn}({\boldsymbol{h}} {\bm{\mathsf{x}}}_k) = (1+\mathrm{j}) \, \cap \, \text{sgn}({\boldsymbol{h}} {\bm{\mathsf{x}}}_1) = (1+\mathrm{j}) \big] \nonumber \\
& = {\mathbb{P}} \big[ \text{sgn}( \Re\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_k \}) = 1 \, \cap \, \text{sgn}( \Im\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_k \}) = 1 \nonumber \\
& \quad \, \cap \, \text{sgn}( \Re\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_1 \}) = 1 \, \cap \, \text{sgn}( \Im\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_1 \}) = 1 \big] \\
& = {\mathbb{P}} \big[ \Re\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_k \} >0 \, \cap \, \Im\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_k \} >0 \nonumber \\
& \quad \, \cap \, \Re\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_1 \} >0 \, \cap \, \Im\{ {\boldsymbol{h}} {\bm{\mathsf{x}}}_1 \} >0 \big] \label{Asens} .
\end{align}
Defining $a_k={\boldsymbol{h}} {\bm{\mathsf{x}}}_k $ and $a_1={\boldsymbol{h}} {\bm{\mathsf{x}}}_1$,
\begin{align}
P_\cap & = \int_0^\infty \!\!\!\! \int_0^\infty \!\!\!\! \int_0^\infty \!\!\!\! \int_0^\infty f_{\Re\{a_k\}\Im\{a_k\}\Re\{a_1\}\Im\{a_1\} } (\alpha,\beta,\gamma,\xi) \nonumber \\
& \quad\; \cdot \mathrm{d}\alpha \, \mathrm{d}\beta \,\mathrm{d}\gamma \, \mathrm{d}\xi
\label{mosquiteres}
\end{align}
with $\Re\{a_k\}$, $\Im\{a_k\}$, $\Re\{a_1\}$, and $\Im\{a_1\} $ jointly Gaussian with mean zero and covariance
\begin{equation}
\bm \Sigma_k =
\left[
\begin{array}{cc}
{N_{\mathrm{t}}} {\boldsymbol{I}} & {\boldsymbol{R}}_k \\
{\boldsymbol{R}}^{\rm T}_k & {N_{\mathrm{t}}} {\boldsymbol{I}}
\end{array}
\right]
where ${\boldsymbol{I}}$ is the $2 \times 2$ identity matrix while
\begin{equation}
{\boldsymbol{R}}_k =
\left[
\begin{array}{cc}
{N_{\mathrm{t}}} - i -j & j-i \\
i-j & {N_{\mathrm{t}}}-i-j
\end{array}
\right]
\end{equation}
with $i$ and $j$ the respective numbers of entries of $\Re\{{\bm{\mathsf{x}}}_k\}$ and $\Im\{{\bm{\mathsf{x}}}_k\}$ that equal $-1$, the rest of their entries (along with all the entries of $\Re\{{\bm{\mathsf{x}}}_1\}$ and $\Im\{{\bm{\mathsf{x}}}_1\}$) being $+1$. Most importantly, because the entries of ${\boldsymbol{h}}$ are IID, the position of those $-1$ values is immaterial and only their totals $i$ and $j$ matter. Altogether, the joint distribution of $\Re\{a_k\}$, $\Im\{a_k\}$, $\Re\{a_1\}$ and $\Im\{a_1\} $ is as in (\ref{pfizer}) and, plugging it in (\ref{mosquiteres}) and tediously
integrating over two of the dimensions, what emerges is (\ref{moderna}) with the dependence on $i$ and $j$ made explicit and with $\text{erfc}(\cdot)$ the complementary error function.
Returning to (\ref{astra}), and accounting for the number of indices $k$ that map to each $i$ and $j$,
\begin{align}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] & \geq 2 {N_{\mathrm{t}}} - 2 {N_{\mathrm{r}}} \label{flors} \\
& \quad - \log_2 \sum_{i=0}^{{N_{\mathrm{t}}}} \sum_{j=0}^{{N_{\mathrm{t}}}}
\left( \!\!
\begin{array}{c}
{N_{\mathrm{t}}} \\
i
\end{array}
\!\! \right) \!
\left( \!\!
\begin{array}{c}
{N_{\mathrm{t}}} \\
j
\end{array}
\!\! \right)
P^{N_{\mathrm{r}}}_\cap(i,j) . \nonumber
\end{align}
For $i=j$, (\ref{moderna}) can be integrated into
\begin{equation}
P_\cap(i,i)= \frac{1}{4 \pi^2} \arccos^2 \! \left( \frac{2 i }{{N_{\mathrm{t}}}} -1 \right)
\end{equation}
with $P_\cap(0,0)=1/4$ and $P_\cap({N_{\mathrm{t}}},{N_{\mathrm{t}}})=0$.
Furthermore, $P_\cap(i,j)=P_\cap(j,i)$ with $P_\cap(0,{N_{\mathrm{t}}})=P_\cap({N_{\mathrm{t}}},0)=0$.
With these relationships accounted for, (\ref{flors}) yields (\ref{XSM}).
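This diagonal closed form can be cross-checked against a direct numerical integration of (\ref{moderna}); a minimal SciPy-based sketch for an interior index (the endpoints $i=0$ and $i={N_{\mathrm{t}}}$ need no check, being immediate):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import erfc

Nt, i = 4, 1                       # any 0 < i < Nt
d = np.sqrt(2 * i * (Nt - i))      # denominator of (moderna) for i = j
f = lambda x, g: erfc(-g*(Nt - 2*i)/d) * erfc(-x*(Nt - 2*i)/d) \
    * np.exp(-2*(g**2 + x**2))
num = dblquad(f, 0, np.inf, 0, np.inf)[0] / (2*np.pi)
print(num, np.arccos(2*i/Nt - 1)**2 / (4*np.pi**2))   # both equal 1/9
\end{verbatim}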
\begin{figure*}
\begin{align}
f_{\Re\{a_k\}\Im\{a_k\}\Re\{a_1\}\Im\{a_1\} } (\alpha,\beta,\gamma,\xi) = \frac{1}{8 \pi^2} \frac{1}{i({N_{\mathrm{t}}}-i)+j({N_{\mathrm{t}}}-j)}
\exp \! \left(-\frac{1}{2} \, [\alpha \; \beta \; \gamma \; \xi] \, \bm \Sigma^{-1}_k \, [\alpha \; \beta \; \gamma \; \xi]^{\rm T} \right)
\label{pfizer}
\end{align}
\end{figure*}
Let us now consider some special cases of interest. For ${N_{\mathrm{t}}}=1$, (\ref{XSM}) reduces to
\begin{equation}
\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \geq 2 - 2{N_{\mathrm{r}}} - \log_2 \! \left( 4^{-{N_{\mathrm{r}}}} + 2 \, P^{{N_{\mathrm{r}}}}_\cap(0,1) \right)
\end{equation}
and, since $P_\cap(0,1)=0$, further to $\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] \geq 2$.
In fact, in this case $\mathcal{H}({\boldsymbol{y}}) = 2$ for every channel realization and thus $\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] = 2$.
For ${N_{\mathrm{r}}}=1$, the scalar quantized signal $y$ takes four equiprobable values---again, not only on average, but for every channel realization---and thus $\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}(y) \big] = 2$.
For fixed ${N_{\mathrm{t}}}$ and ${N_{\mathrm{r}}} \to \infty$, the key is the observation that $P_\cap(i,j)$ achieves its largest value for $i=j=0$, namely $P_\cap(0,0)=1/4$. For $i>0$ and/or $j>0$, $P_\cap(i,j) < 1/4$ because any negative sign in either the real or imaginary parts of ${\bm{\mathsf{x}}}_k$ reduces the probability in (\ref{Asens}).
The largest term in the summations within the logarithm in (\ref{XSM}) equals $4^{-{N_{\mathrm{r}}}}$ and, as ${N_{\mathrm{r}}} \to \infty$, every other term vanishes faster and the lower bound on $\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big]$ converges towards $2 {N_{\mathrm{t}}}$.
Finally, for fixed ${N_{\mathrm{r}}}$ and ${N_{\mathrm{t}}} \to \infty$, the rows of ${\boldsymbol{H}}$ become asymptotically orthogonal \cite[sec. 5.4.2]{Foundations:18} and hence, for every realization of ${\boldsymbol{H}}$, ${\boldsymbol{y}}$ consists of IID complex components. Again, $\mathbb{E}_{\boldsymbol{H}} \big[ \mathcal{H}({\boldsymbol{y}}) \big] = 2 {N_{\mathrm{r}}}$.
For ${N_{\mathrm{r}}} = 1$ and for ${N_{\mathrm{t}}} \to \infty$ with fixed ${N_{\mathrm{r}}}$, the above observations reveal that the lower and upper bounds coincide, fully determining, as per (\ref{JLA}), the ergodic spectral efficiency with equiprobable signaling and IID Rayleigh fading.
\bibliographystyle{IEEEtran}
\section{Introduction}
Multiferroic systems have been intensively studied in the past ten years as the coupling between ferroelectric and magnetic order parameters may lead to novel electronic devices. This coupling can have different microscopic origins, related either to Dzyaloshinskii-Moriya interactions\cite{Katsura2005, Sergienko2006} or to an exchange-striction mechanism, and it is still not fully understood. All multiferroics show complex and mostly non collinear magnetic orders, arising from competing interactions and/or geometrical frustration.
The hexagonal RMnO$_3$ compounds provide textbook examples for the study of multiferroicity. Their crystal structure consists of triangular Mn planes stacked along the $\it{c}$ axis and separated by layers of rare earth ions (R = Ho--Lu) or non magnetic ions such as Y or In. As shown recently \cite{Fabreges2009}, the magnetic frustration does not arise only from the triangular geometry of antiferromagnetic (AF) first neighbour interactions in the $\it{ab}$ plane, but from competing interactions between Mn of adjacent planes.
In all compounds, the Mn moments order within a triangular plane in a three-sublattice N\'eel structure, corresponding to 120$^\circ$ arrangements of the Mn moments in a triangle.
Four possible AF structures can be stabilized, described by
irreducible representations of the $P6_3cm$ space group with a $\textbf{k}=0$ propagation vector \cite{Munoz2000}. These structures differ by the orientations of the Mn moments with respect to the $\it{a, b}$ crystal axes and by the relative orientations of Mn moments in adjacent planes. As shown in Ref. \onlinecite{Fabreges2009},
the selection of a given structure is controlled by the Mn position in the unit cell, which depends on a single parameter $\it{x}$ for the $\it{6c}$ sites. The value of $\it{x}$ with respect to a critical threshold $\it{x_0}$=$1/3$ tunes the sign of the effective interaction between adjacent Mn planes. Within this frame, one can correlate the type of magnetic structure, the Mn position, and the sign of the effective exchange coupling in the compounds of the RMnO$_3$ family.
InMnO$_3$ is the only compound which does not fit simply within the above scheme. Actually, it corresponds to the peculiar situation where the Mn position is very close to $\it{x_0}$=$1/3$, so that interactions between adjacent Mn planes nearly cancel. Therefore one could expect new types of magnetic orders, either with two-dimensional behavior or stabilized by further neighbour interactions. Moreover the InMnO$_3$ crystal structure has the smallest lattice constant $\it{a}$ and the largest lattice constant $\it{c}$ of the series \cite{Greedan1995}, so that in-plane and out-of-plane interactions differ much more than in the other compounds. The pioneering measurements of Greedan {\em et al.} \cite{Greedan1995} showed that the magnetic structure of InMnO$_3$ indeed differs from those of the whole series, with a \textbf{k}=(0 0 $\frac{1}{2}$) propagation vector, corresponding to a doubled periodicity along $\it{c}$. The sample showed broad magnetic reflections, so that a two-dimensional order was postulated.
InMnO$_3$ is also interesting for its magnetoelectric properties. Ferroelectric hysteresis loop measurements performed on high quality samples showed no hysteresis below 250\,K, establishing that the pure compound is actually not ferroelectric \cite{Belik2009}, although ferroelectricity was earlier reported \cite{Serrao2006} in some samples below 500\,K. In the pure samples, low frequency permittivity exhibits an anomaly near $T_{\rm N}$, showing evidence for a magnetoelectric coupling. Studies of Fe-substituted InMnO$_3$ showed that these compounds might constitute a new class of nearly room temperature multiferroics \cite{Belik2009b}. In InMnO$_3$, the electronic structure of the In$^{3+}$ ion with a fully filled 4d shell excludes the d$_0$-ness ferroelectricity at play in YMnO$_3$. Considering the peculiar case of InMnO$_3$, a new covalent bonding mechanism was recently proposed to mediate ferroelectricity in hexagonal multiferroics \cite{Oak2011}.
Since the measurements of Greedan {\em et al.}, no neutron study has been made on InMnO$_3$. This could be due to the difficulty of synthesizing large samples of high purity, and to the high absorption and low scattering power of the In$^{3+}$ ion, which complicate the measurements. To shed more light on the peculiar behavior of InMnO$_3$, we have synthesized a powder sample of high purity in large amount under high pressure and high temperature conditions \cite{Belik2009}. We performed a high resolution neutron study of the crystal structure versus temperature. We studied the magnetic order precisely by combining neutron diffraction and M\"ossbauer spectroscopy in a $^{57}$Fe doped sample, and obtained the first results about the magnetic fluctuations. We determine the magnetic structure precisely using group theory, and we propose a possible explanation for its origin based on the influence of pseudo-dipolar interactions.
\section{Experimental Details}
Two samples were synthesized under high pressure.
The first one is a stoichiometric InMnO$_3$ sample of about $8\,g$ used for the neutron measurements. A second sample of $0.5\,g$
with chemical formula InMn$_{0.99}$$^{57}$Fe$_{0.01}$O$_3$ was prepared for the M\"ossbauer measurements.
For the synthesis, appropriate mixtures of In$_2$O$_3$ (99.9\% purity), Mn$_2$O$_3$ and Fe$_2$O$_3$ were placed in Au capsules and treated at 5\,GPa in a
belt-type high pressure apparatus at 1500\,K for 90\,min (heating rate 120\,K/min).
After the heat treatment, the samples were quenched to room temperature, and
the pressure was slowly released. The resultant samples were dense black pellets. X-ray diffraction measurements showed that they contained a small amount (1 mass \%) of cubic In$_2$O$_3$ impurity.
The crystal structure and the evolution of the atomic parameter $\it{x}$ with temperature were determined by measuring a neutron powder diffraction (NPD) pattern at 300~K and at selected temperatures on the high resolution powder diffractometer 3T2 of the Laboratoire L\'eon Brillouin (LLB) at the Orph\'ee reactor, with an incident neutron wavelength $\lambda=1.2253$~\AA. The magnetic structure was studied by collecting NPD patterns at several temperatures, between 200~K (above the magnetic transition) and 1.5~K. Both crystal and magnetic structures were refined using the Fullprof suite\cite{Rodriguez1993}. The $^{57}$Fe M\"ossbauer absorption spectra were recorded in the temperature range 4.2--140~K. We used a commercial $^{57}$Co:Rh $\gamma$-ray source, mounted on a triangular velocity electromagnetic drive.
\section{Crystal Structure}
The refined NPD pattern at 300 K is shown in Fig.~\ref{In_diff_RT}. All Bragg reflections of the pattern can be indexed within the hexagonal space group $P6_3cm$. The lattice constants $a=5.8837(1)$~\AA\ and $c=11.4829(1)$~\AA\ at 300 K are in perfect agreement with previous results \cite{Greedan1995,Belik2009}. As noticed earlier, they strongly differ from those of the hexagonal RMnO$_3$ series, which scale from one compound to another \cite{Munoz2000,Munoz2001,Xu1995}.
The refined atomic positions reported in Table \ref{position} agree with previous determinations from X-ray diffraction\cite{Greedan1995,Belik2009}. They are very close to those determined in compounds of similar ionic radius (R= Ho, Y, Yb). Each Mn atom is surrounded by oxygen ions forming a MnO$_5$ bipyramidal structure, with 3 O (2 O$_4$ and one O$_3$) ions close to the Mn plane, and two O (O$_1$ and O$_2$) ions at the apexes. Corner sharing MnO$_5$ bipyramids form layers separated along the \textit{c}-axis by In layers in which In ions occupy two distinct crystallographic sites (labelled $2a$ and $4b$).
\begin{figure}[!h]
\centering
\includegraphics[width=8cm,height=5.5cm]{fig1.png}
\caption{(Color online) Observed and Fullprof calculated NPD
pattern at room temperature. The Bragg reflections (ticks) and the
difference between the observed and calculated patterns are plotted at
the bottom.}
\label{In_diff_RT}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|l|cccc|}
\hline
Atoms & x & y & z & B$_{iso}$\\
\hline
\hline
In(2a) & 0 & 0 & 0.274(2) & 0.845(120)\\
In(4b) & $\frac{1}{3}$ & $\frac{2}{3}$ & 0.232(2) & 0.490(65)\\
Mn(6c) & 0.345(4) & 0 & 0 & 0.334(43)\\
O$_1$(6c) & 0.307(2) & 0 & 0.165(3) & 0.686(16)\\
O$_2$(6c) & 0.640(1) & 0 & 0.336(3) & 0.686(16)\\
O$_3$(4b) & 0 & 0 & 0.475(2) & 0.954(100)\\
O$_4$(2a) & $\frac{1}{3}$ & $\frac{2}{3}$ & 0.020(2) & 0.575(54)\\
\hline
\hline
Discrepancy & Bragg R-factor & 4.32\% & &\\
Factors & RF-factor & 3.21\% & &\\
\hline
\end{tabular}
\caption{Atom positions, thermal parameters and discrepancy factors at room temperature}
\label{position}
\end{table}
The thermal variation of the positional parameter $x$ of the Mn sites is reported in Fig.~\ref{In_pos_Mn}. One notices that $x$ decreases with decreasing temperature down to about 150\,K, then becomes very close to 1/3 in the 0$<$T$<$150\,K temperature range, which spans the whole ordered magnetic phase (T$_{\rm N}$=118\,K). From this observation alone, one can predict that the two possible interplane exchange paths between Mn ions are almost identical (Fig.~\ref{In_ech}), which should dramatically decrease the effective exchange coupling along the $c$-axis.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig2.png}
\caption{(Color online) Refined position $x$ of Mn versus temperature in
reduced units of the cell parameter $a$. The horizontal black line is located at $x=1/3$,
the red line is a guide to the eyes.}
\label{In_pos_Mn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{fig3.png}
\caption{(Color online) Interplane exchange paths versus Mn position. Two exchange paths Jz$_1$ and Jz$_2$ are in competition, and the $x=1/3$ Mn position corresponds to the specific case Jz$_1$ = Jz$_2$. Insert: in-plane exchange paths leading to the 120$^{\circ}$ magnetic configuration.}
\label{In_ech}
\end{figure}
\section{Magnetic Structure} \label{strumag}
The NPD pattern collected at T=1.5\,K on the high resolution diffractometer 3T2 is reported in Fig.~\ref{In_fp} (bottom), focusing on the scattering-angle range in 2$\theta$ where magnetic Bragg reflections with half-integer $\it{l}$ values can be observed. All magnetic peaks can be indexed within the hexagonal space group $P 6_3 cm$ with a propagation vector \textbf{k}=(0 0 0.50(1)). In contrast with the other members of the family, there is no magnetic contribution at the positions of the structural peaks. The (1 0 $\frac{2l+1}{2}$) Bragg reflections appear below $T_{\rm N}$=120(2)\,K, with a peak width limited by the experimental resolution, and their thermal variation is monotonic (Fig.~\ref{In_fp}, top). All these features show the onset below $T_{\rm N}$=120(2)\,K of a three dimensional order of the Mn moments, with a magnetic unit cell doubled along the $\it{c}$ axis, and without spin reorientation transition below $T_{\rm N}$.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig4.png}
\caption{(Color online) Upper panel: integrated intensity of the (1 0 1/2) Bragg reflection versus temperature. The red dashed line is a guide to the eyes. Lower panel: observed NPD pattern at low temperature (T=1.5\,K). The \textbf{k}=(0 0 $\frac{1}{2}$) propagation vector is easily observed through the existence of (1 0 $\frac{2l+1}{2}$) Bragg reflections.}
\label{In_fp}
\end{figure}
To analyze the magnetic structure we searched for all Irreducible Representations (IR)
compatible with the crystal symmetry, using the theory of group representation analysis\cite{Bertaut1963}
and the program Basireps\cite{Rodriguez2001}. The atomic position of the Mn ions in the unit cell was kept equal to (1/3 0 0),
close to the position observed experimentally. In the space group $P6_3cm$, the $\it{6c}$ site of the Mn ions allows 6
irreducible representations labelled from $\Gamma_1$ to $\Gamma_6$ (Fig. \ref{In_Gamma}). The $\Gamma_1$ and $\Gamma_4$
representations are defined by one basis vector, associated with a 120$^{\circ}$ magnetic order within the $\it{ab}$ planes,
whereas $\Gamma_2$ and $\Gamma_3$ are defined by two basis vectors, the second one
allowing an out-of-plane component. The $\Gamma_5$ and $\Gamma_6$ representations
correspond to magnetic orders with inequivalent magnetic moments on the different sites; they have not been
considered here, as for the rest of the RMnO$_3$ family\cite{Munoz2000}.
The Fourier component corresponding to the propagation vector $\bf{k}$ for a Mn site $\it{n}$ of the unit cell is expressed as $M_n(z)=M\,e^{-i\textbf{k}\cdot\textbf{r}_n}$, where $\textbf{r}_n$ denotes the position of the $n^{th}$ Mn ion in the unit cell, referred to by its $\it{z}$ coordinate along the $\it{c}$ axis.
In our particular case, the $\bf{k}=$(0 0 $\frac{1}{2}$) propagation vector yields purely real Fourier components of the magnetic moment in the z=0,1,2,\ldots\ Mn planes and purely imaginary components in the z=1/2,3/2,\ldots\ planes. In order to overcome this difficulty, and to be consistent with the presence of equivalent moments on all Mn sites deduced from the M\"ossbauer results (see below), we have introduced a global phase shift $\phi=2\pi/8$ in the expression of the structure factor. The phase and amplitude of the Fourier components were used to determine the magnitude of the ordered moment at a given Mn site.
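As an illustrative numerical check (assuming the standard convention that the physical moment on the plane at height $z$ is twice the real part of $M\,e^{i\phi}\,e^{-2i\pi k z}$), one verifies that $\phi=2\pi/8$ indeed yields equal moment magnitudes on all Mn planes:
\begin{verbatim}
import numpy as np

# Illustrative check: with k = (0 0 1/2), a global phase phi = 2*pi/8
# gives equal |moment| on the z = 0, 1/2, 1, 3/2 Mn planes.
k_l, phi, M = 0.5, 2 * np.pi / 8, 1.0
for z in [0.0, 0.5, 1.0, 1.5]:
    moment = 2 * np.real(M * np.exp(1j * phi)
                         * np.exp(-2j * np.pi * k_l * z))
    print(f"z = {z:3.1f}   moment = {moment:+.4f}")
# prints +1.4142, +1.4142, -1.4142, -1.4142: equal magnitudes on all
# planes, with the sign alternation required by k = (0 0 1/2)
\end{verbatim}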
\begin{figure}
\centering
\includegraphics[width=8cm]{fig5.png}
\caption{(Color online) Magnetic structures associated with the four one-dimensional irreducible representations of the $P6_3cm$ space group. Red arrows indicate magnetic moments with real Fourier components, black arrows indicate moments with imaginary Fourier components.}
\label{In_Gamma}
\end{figure}
As for the rest of the RMnO$_3$ family, we find that the magnetic configurations associated with the $\Gamma_1$ and $\Gamma_3$ IR are homometric (namely, they share the same structure factor), so they cannot be distinguished in a powder neutron diffraction experiment. The same holds for the $\Gamma_2$ and $\Gamma_4$ magnetic configurations. Our refinements yield a discrepancy factor $R_{mag}=12.54\,\%$ for the $\Gamma_2$ and $\Gamma_4$ IR, much better than for $\Gamma_1$ and $\Gamma_3$ ($R_{mag}=19.8\,\%$). The $R_{Bragg}$ factor in the ordered magnetic phase was close to $5\,\%$. The best fit of our data was obtained for an ordered magnetic moment of $3.25\,\mu_B$ at 1.5\,K, very similar to the moment found in the rest of the hexagonal RMnO$_3$ family\cite{Munoz2000}. We conclude that the Mn moments order in the $\it{a,b}$ planes, in bilayers ordered according to either a $\Gamma_2$ or a $\Gamma_4$ configuration, as for YbMnO$_3$ or ScMnO$_3$ with \textbf{k}=\textbf{0} propagation vector, but with antiferromagnetic relative orientations of two neighboring bilayers.
\section{$^{57}$Fe M\"ossbauer data}
Three $^{57}$Fe M\"ossbauer spectra were recorded, at T=140, 80 and 4.2\,K. The
spectra at 4.2 and 140\,K are represented in Fig.5. At 140\,K, a quadrupolar
hyperfine spectrum is observed, with a quadrupolar splitting $\vert \Delta E_Q
\vert$=0.5(1)\,mm/s, typical for Fe$^{3+}$ in the paramagnetic phase. Below $T_{\rm N}$, at
4.2 and 80\,K, a six-line spectrum is observed, attributable to a single
magnetic hyperfine field, with a small quadrupolar shift $\epsilon$=
0.26(1)\,mm/s. This indicates that all the $^{57}$Fe nuclei experience the same
hyperfine field (48.6\,T at 4.2\,K and 43\,T at 80\,K), hence all the
substituted Fe ions bear the same magnetic moment. One can conclude that the
ordered magnetic moment of the Mn ion is the same on each site.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig6.png}
\caption{$^{57}$Fe M\"ossbauer spectra in InMn$_{0.99}$Fe$_{0.01}$O$_3$
below and above $T_N$=120\,K. At $T=140$\,K a quadrupolar doublet characteristic of paramagnetic Fe$^{3+}$ is
observed. At $T=4.2$\,K the spectrum shows a six-line hyperfine pattern
perfectly reproduced by a single hyperfine magnetic field.}
\label{In_moss}
\end{figure}
It is possible to obtain information about the angle $\theta$ between the
hyperfine field and the principal axis of the electric field gradient (EFG)
tensor, responsible for the quadrupolar hyperfine interaction. Indeed, the
relationship between the quadrupolar splitting obtained in the paramagnetic
phase and the quadrupolar shift measured in the magnetically ordered phase is:
$\epsilon = \Delta E_Q\ \frac{3 \cos^2\theta - 1}{2}$. Since the sign of
$\Delta E_Q$ cannot be determined, one derives two acceptable values for $\theta$:
90$^\circ$ and 35.3$^\circ$. The local symmetry of the Fe(Mn) sites is {\it 6c},
which implies that the EFG tensor has one axis along {\bf c} and the two other
axes in the \textit{a,b} plane, but the principal axis cannot be determined only by
symmetry considerations. Assuming it lies along {\bf c}, then the solution
$\theta=90^\circ$ would be adequate, in analogy with the rest of the RMnO$_3$
family.
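For transparency, the numerical inversion of this relation can be sketched as follows (illustrative only; it uses the fitted values quoted above and makes the sign ambiguity explicit):
\begin{verbatim}
import numpy as np

# solve eps = Delta_EQ * (3 cos^2(theta) - 1) / 2 for theta,
# with eps and Delta_EQ taken from the Mossbauer fits above
eps, dEQ = 0.26, 0.5   # mm/s
for sign in (+1, -1):  # sign of Delta_EQ is undetermined
    cos2 = (2 * sign * eps / dEQ + 1) / 3
    if 0 <= cos2 <= 1:
        theta = np.degrees(np.arccos(np.sqrt(cos2)))
        print(f"sign {sign:+d}: theta = {theta:.1f} deg")
        # ~34.5 deg, i.e. the ideal 35.3 deg (cos^2 = 2/3) within errors
    else:
        print(f"sign {sign:+d}: cos^2 = {cos2:.3f}, theta ~ 90 deg "
              "within errors")
\end{verbatim}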
\section{Short range correlations in the paramagnetic phase}
The powder diffraction patterns measured on 3T2 above $T_{\rm N}$ (Fig.~\ref{In_diffus}) show a strong diffuse scattering, already observed by Greedan \textit{et al.} \cite{Greedan1995}. The asymmetric shape of this scattering is directly connected with the presence of two dimensional correlations between Mn moments of a given plane. Using a Warren-like profile \cite{Warren1941}, we refined the length scale $\xi$ of these correlations (Fig.~\ref{In_xi}). The $\xi$ values above $T_{\rm N}$ agree with those deduced previously \cite{Greedan1995}. However, in the sample studied in Ref.~\onlinecite{Greedan1995}, the 2D correlations persist below $T_{\rm N}$, coexisting with half-integer Bragg reflections of finite width, whereas in the present case $\xi$ diverges at $T_{\rm N}$, showing the onset of a purely three dimensional long range magnetic order.
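For reference, a schematic implementation of such a Warren lineshape is sketched below. It is illustrative only: it follows the functional form of Ref.~\onlinecite{Warren1941}, but overall scale factors, polarization and instrumental corrections are omitted, and the numerical values (peak position, $\xi$, $\lambda$) are placeholders rather than fit results.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def warren_F(a):
    # Warren's profile function F(a) = int_0^inf exp[-(x^2 - a)^2] dx
    return quad(lambda x: np.exp(-(x**2 - a)**2), 0.0, np.inf)[0]

def warren_profile(two_theta_deg, two_theta0_deg, xi, lam):
    # asymmetric 2D diffuse lineshape; xi = in-plane correlation length
    th = np.radians(two_theta_deg) / 2
    th0 = np.radians(two_theta0_deg) / 2
    a = (2 * np.sqrt(np.pi) * xi / lam) * (np.sin(th) - np.sin(th0))
    return (1 + np.cos(2 * th)**2) / np.sin(th)**1.5 * warren_F(a)

# placeholder values: 2theta_0 = 24 deg, xi = 15 A, lambda = 1.2253 A
tt = np.linspace(10.0, 40.0, 300)
profile = [warren_profile(t, 24.0, 15.0, 1.2253) for t in tt]
\end{verbatim}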
\begin{figure}
\centering
\includegraphics[width=8cm]{fig7.png}
\caption{(Color online) Observed and Fullprof calculated NPD
patterns at several temperatures. Above T$_{\rm N}$ a strong diffuse scattering is observed on the patterns recorded
on the 3T2 spectrometer (top), with $\lambda=1.225$~\AA. This scattering is not visible on the G6.1 patterns (bottom), for which $\lambda=4.74$~\AA.}
\label{In_diffus}
\end{figure}
Interestingly, spectra collected in the same temperature range on the G6.1 diffractometer using a large incident neutron wavelength showed no signature of this diffuse scattering (Fig.~\ref{In_diffus}, bottom). To understand this peculiarity, one should notice that a neutron diffractometer probes both elastic and inelastic signals and integrates all contributions at a given scattering angle. The energy range over which this integration is performed depends on the energy of the incident neutron. Knowing that G6.1 is a cold diffractometer with an incident energy $E_i=\hbar^2k_i^2/2m_n=4$\,meV ($\lambda=4.74$~\AA) and 3T2 a thermal one with $E_i \approx 40$\,meV ($\lambda=1.225$~\AA), one concludes that the observed diffuse scattering above $T_{\rm N}$ corresponds to dynamical short range correlations between Mn moments, involving high energy fluctuations, at a scale of tens of meV.
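These energy scales follow from the incident wavelengths through the standard free-neutron relation $E_i\,[{\rm meV}]\simeq 81.81/\lambda^2$ (with $\lambda$ in \AA); a two-line illustrative check:
\begin{verbatim}
# free-neutron kinematics: E_i [meV] ~ 81.81 / lambda^2 (lambda in A)
for name, lam in [("G6.1 (cold)", 4.74), ("3T2 (thermal)", 1.2253)]:
    print(f"{name}: lambda = {lam} A  ->  E_i ~ {81.81 / lam**2:.1f} meV")
# G6.1: ~3.6 meV; 3T2: ~54 meV, the same orders of magnitude as the
# integration windows quoted in the text (the effective window also
# depends on instrument details)
\end{verbatim}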
\begin{figure}
\centering
\includegraphics[width=8cm]{fig8.png}
\caption{(Color online) Refined correlation length versus temperature (red dots) and fit of the critical exponent $\nu$ (solid line). Insert: intensity recorded at T=130\,K on the 3T2 spectrometer. The dashed line is a fit of the diffuse intensity with a Warren function.}
\label{In_xi}
\end{figure}
The analysis of the paramagnetic scattering suggests a picture of uncorrelated Mn planes, in which dynamical magnetic correlations develop with decreasing temperature down to $T_{\rm N}$. The 3D magnetic ordering stabilized at $T_{\rm N}$ should be triggered by a weak interaction between Mn moments belonging to different planes, whose origin is discussed below.
\section{Discussion} \label{disc}
\subsection{Magnetic ordering and frustration}
We first recall the scheme of interactions used in Ref. \onlinecite{Fabreges2009} to discuss the magnetic structures observed in the hexagonal RMnO$_3$ family with $\bf{k}$=$\bf{0}$ propagation vector. In these compounds, a given magnetic structure of symmetry $\Gamma_i$ (i=1-4) is stabilized by near-neighbour exchange interactions as well as planar and uniaxial anisotropies, so that the Hamiltonian of the system is composed of three terms:
\begin{eqnarray}
\mathcal{H}_{Heis} & = & \sum_{i,j} J_{ij}\, \textbf{S}_i\cdot\textbf{S}_j + \sum_i D\,(S_i^z)^2 - \sum_i \textbf{h}_i\cdot\textbf{S}_i
\end{eqnarray}
where $\textbf{S}_i$ is the Mn spin on the $i^{th}$ site, $J_{ij}$ are the exchange constants, $D$ is a planar anisotropy, and $\textbf{h}_i$ is a local field yielding a preferential orientation for the spin $\textbf{S}_i$.
The exchange term has two distinct parts, involving in-plane and out-of-plane interactions respectively. Due to the triangular lattice, the in-plane interactions yield a two dimensional 120$^{\circ}$ order, with no preferential orientation of the magnetic moments with respect to the crystal axes. Out-of-plane interactions couple Mn moments from adjacent planes, yielding the 3D order. In this scenario, the Mn position is crucial, since two possible exchange paths compete along the $\it{c}$ axis. The selection of a given structure is controlled by the Mn position. In InMnO$_3$, the Mn position is close to the critical threshold of 1/3 for which the two exchange paths are strictly equal. This leads to a full compensation of the exchange interactions along the $\it{c}$ axis and to an effective out-of-plane exchange interaction close to zero. This specific position of the Mn ions could explain the dynamical short range 2D order observed above T$_{\rm N}$, and attributed to uncorrelated Mn planes.
The two other terms of equation (1) are respectively the planar anisotropy $D$ which confines the Mn moments in the basal plane, and the local field $\textbf{h}$ which plays the role of a uniaxial anisotropy and selects preferential directions either along or perpendicular to the crystal axes.
These terms however cannot explain the long period 3D structure with $\bf{k}$=(0 0 $\frac{1}{2}$) stabilized in InMnO$_3$. Therefore one needs to consider further neighbor interactions, with different symmetries than the exchange interactions, such as the Dzyaloshinskii-Moriya (DM) or the pseudo-dipolar interaction \cite{vanVleck1937}. A similar approach \cite{Fabreges2008} was proposed to account for the ordering of the Yb moments in YbMnO$_3$. In the following, we focus on the pseudo-dipolar interaction, since the DM interaction is hardly compatible with the long exchange paths (Mn-O-O-Mn and Mn-O-O-O-O-Mn) between Mn of different planes. The pseudo-dipolar interaction is written as:
\begin{eqnarray}
\mathcal{H}_{dip} & = & - \sum_{i,j} \textbf{S}_i\,J_{ij}^{dip}\,\textbf{S}_j \nonumber \\
& = &
-\alpha~\sum_{i}\sum_{j}
\left[
3\, \frac{(\textbf{S}_j\cdot\textbf{r}_{ij})\,\textbf{r}_{ij}}{r_{ij}^2} - \textbf{S}_j
\right] \cdot \textbf{S}_i
\end{eqnarray}
where $\alpha$ is a constant and $\textbf{r}_{ij}$ joins sites $i$ and $j$. The matrix representation of the
pseudo-dipolar interaction $J_{ij}^{dip}$ coupling two different Mn sites reads:
\begin{eqnarray}
J_{ij}^{dip} & = & \alpha
\left[\frac{3}{r_{ij}^2}~\left( \begin{array}{c c c} r_{ij}^xr_{ij}^x & r_{ij}^xr_{ij}^y & r_{ij}^xr_{ij}^z \\ r_{ij}^yr_{ij}^x & r_{ij}^yr_{ij}^y & r_{ij}^yr_{ij}^z \\ r_{ij}^zr_{ij}^x & r_{ij}^zr_{ij}^y & r_{ij}^zr_{ij}^z \end{array} \right) - \,\mbox{l\hspace{-0.50em}1} \right]
\end{eqnarray}
where $\,\mbox{l\hspace{-0.50em}1}$ is the identity matrix. Assuming the $\bf{k}$=(0 0 $\frac{1}{2}$) magnetic structure described above, we calculate the magnetic field $\textbf{B}_i$ induced on the $i^{th}$ site by the surrounding Mn at sites $j$, $\textbf{B}_i = \sum_j J_{ij}^{dip} \textbf{S}_j$. First, we find that the contribution arising from the neighbouring sites in the adjacent $z=\pm1/2$ planes is zero. Thus, there is no pseudo-dipolar coupling between adjacent layers, in agreement with the idea of purely two dimensional dynamical correlations above $T_{\rm N}$. In contrast, the contribution from sites in the $z=\pm 1$ planes is different from zero. Moreover, the classical energy calculated as $E = - \textbf{B}_i\cdot\textbf{S}_i$ is negative (assuming $\alpha$ is positive). In other words, the pseudo-dipolar interaction stabilizes the 3D magnetic structure observed in InMnO$_3$ and drives the $\bf{k}$=(0 0 $\frac{1}{2}$) propagation vector.
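The algebra above is elementary to reproduce numerically. A minimal sketch is given below (illustrative Python; it assumes the user supplies the bond vectors $\textbf{r}_{ij}$ and spins $\textbf{S}_j$ built from the refined $P6_3cm$ positions and the $\bf{k}$=(0 0 $\frac{1}{2}$) structure, which we do not reproduce here):
\begin{verbatim}
import numpy as np

def J_dip(r, alpha=1.0):
    # pseudo-dipolar coupling matrix of equation (3) for bond vector r
    r = np.asarray(r, dtype=float)
    return alpha * (3.0 * np.outer(r, r) / np.dot(r, r) - np.eye(3))

def induced_field(bonds, alpha=1.0):
    # B_i = sum_j J_ij^dip S_j, with bonds = [(r_ij, S_j), ...]
    return sum(J_dip(r, alpha) @ np.asarray(S, float) for r, S in bonds)

def site_energy(S_i, bonds, alpha=1.0):
    # classical energy E = - B_i . S_i (negative means stabilizing)
    return -np.dot(induced_field(bonds, alpha), np.asarray(S_i, float))
\end{verbatim}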
\subsection{Spin wave spectrum }
To confirm the possible role of the pseudo-dipolar coupling, we propose to carry out spin dynamics measurements, as specific features associated with the pseudo-dipolar coupling should be easily seen in the spin wave dispersion relations. This issue could be sorted out by inelastic neutron scattering experiments performed on a triple axis spectrometer.
From the interaction scheme described above, one can calculate the spectrum of the spin wave excitations in the ordered phase. We use the previous Heisenberg Hamiltonian, to which we add the pseudo-dipolar term:
\begin{eqnarray}
\mathcal{H} & = & \mathcal{H}_{Heis} - \sum_{i,j} \textbf{S}_i\,J_{ij}^{dip}\,\textbf{S}_j
\end{eqnarray}
Each term affects the spin wave spectrum in a specific way. The Heisenberg Hamiltonian $\mathcal{H}_{Heis}$ is responsible for the magnitude of the dispersion, namely the in-plane exchange interaction induces the dispersion along the (q$_h$ 0 0) and (0 q$_k$ 0) directions of the reciprocal space, whereas the out-of-plane exchange yields the dispersion along the (0 0 q$_l$) direction. Considering that the exchange interactions along $\it{c}$ nearly cancel due to the specific Mn position, one can predict that no dispersion should be observed along the (0 0 q$_l$) direction, yielding two flat modes. The anisotropy terms induce gaps in the dispersion curves. In RMnO$_3$, the planar anisotropy term induces a large gap of about 6 meV\cite{Petit2007} and the uniaxial term a smaller one, strongly dependent on temperature and likely enhanced by interaction with the rare earth moment \cite{Fabreges2011b}.
As concerns the influence of the pseudo-dipolar term on the spin wave spectrum, one notices that this term involves both diagonal and off-diagonal elements, introducing new couplings between spin components. The diagonal elements act mainly as a combination of exchange and uniaxial anisotropy. Their effect should be easily seen at the zone center, the uniaxial gap increasing with the dipolar interaction strength $\alpha$.
To illustrate this point, spin wave calculations of the dynamical structure factor were made with the following parameters: $J$=2.6\,meV, $D$=0.55\,meV and $h$=0.1\,meV, in the case of the magnetic structure of InMnO$_3$ refined above. The results along the (0 0 $q_l$) direction of the reciprocal space are reported in Fig.~\ref{In_SW} in the case of pseudo-dipolar (left) and interplane exchange (right) coupling. The coupling constants $\alpha$ and $J_{inter}$ were both taken equal to 0.01\,meV (antiferromagnetic). In both cases, the spin wave dispersion curves are characterized by two gaps, around 5\,meV and 2\,meV, induced respectively by $D$ and $h$.
Considering the shape of the dispersion curves, the pseudo-dipolar interaction induces a dispersion of both the 2\,meV and 5\,meV modes. A maximum (respectively minimum) is observed at $\bf{Q}$=(1 0 0) and a minimum (respectively maximum) is observed at $\bf{Q}$=(1 0 $\frac{1}{2}$). On the other hand, the interplane exchange induces a dispersion of the low energy mode with a maximum at $\bf{Q}$=(1 0 0) and a minimum at $\bf{Q}$=(1 0 1), whereas the 5\,meV mode remains almost flat. The pseudo-dipolar interaction is at the origin of a change in the periodicity of the dispersion in perfect agreement with the $\textbf{k}=$(0 0 $\frac{1}{2}$) propagation vector.
Inelastic neutron scattering is mandatory to confirm the interaction scheme proposed here for InMnO$_3$, as the two behaviors are easily distinguishable and should be seen on a triple axis or time-of-flight spectrometer. Up to now, precise measurements were hampered by the low intensity given by the available samples and by the powder averaging, but we hope to perform them in the future.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig9.png}
\caption{(Color online) Left: numerical calculation of the dynamical structure factor of spin waves along the (0 0 q$_l$) direction in the case of pseudo-dipolar coupling between Mn. Right: same calculation in the case of antiferromagnetic interplane exchange coupling between Mn.}
\label{In_SW}
\end{figure}
\section{Conclusion}
In conclusion, our experimental study of InMnO$_3$ by neutron powder diffraction and M\"ossbauer spectroscopy shows the onset of a three dimensional magnetic order below $T_{\rm N}$=120(2)\,K. The magnetic order with $\bf{k}$=(0 0 $\frac{1}{2}$) propagation vector shows a doubling of the magnetic unit cell along the $\it{c}$ axis, in contrast with the other compounds of the RMnO$_3$ family. This feature is directly related to the peculiar value of the Mn positional parameter in InMnO$_3$, close to the $1/3$ threshold where the effective exchange interaction along the $\it{c}$ axis cancels. We suggest that weak out-of-plane pseudo-dipolar Mn interactions are responsible for the long period of the magnetic order. This weak coupling, together with the strong in-plane coupling, yields the onset of two dimensional correlations between fluctuating moments, which settle above $T_{\rm N}$. InMnO$_3$ provides an original example of the links between magnetic frustration and multiferroicity, which should be further studied by inelastic neutron scattering.
This work was partially supported by World Premier International Research Center (WPI) Initiative on Materials Nanoarchitectonics (MEXT, Japan), by the Japan Society for the Promotion of Science (JSPS) through its Funding Program for World-Leading Innovative R\&D on Science and Technology (FIRST Program), and by the Grants-in-Aid for Scientific Research (22246083) from JSPS, Japan.
\section{Introduction}
\label{sec:intro}
Arc-analytic functions play an important role in modern real algebraic and analytic geometry (see, e.g., \cite{KuPa1} and the references therein). They are, however, hardly known outside the specialist circles, which is perhaps partly due to their rather surprising, if not pathological, behaviour in the general analytic setting (see \cite{BMP}). In the algebraic setting, on the other hand, arc-analytic functions form a very nice family, as our main result will hopefully contribute to attesting to.
Let us recall that a function $f:X\to\R$ is called \emph{arc-analytic} when $f\circ\gamma$ is an analytic function for every real analytic arc $\gamma:(-1,1)\to X$. Typically, in the literature, $X$ is assumed to be a smooth real algebraic or analytic variety, or a semialgebraic set.
\medskip
In this article, we are interested in semialgebraic arc-analytic functions in the setting in which they were originally introduced by Kurdyka \cite{Ku}, that is, on arc-symmetric semialgebraic sets. Recall that a \emph{semialgebraic} set in $\R^n$ is one that can be written as a finite union of sets of the form $\{x\in\R^n:p(x)=0,q_1(x) >0,\dots,q_r(x)>0\}$, where $r\in\N$ and $p,q_1,\dots,q_r\in\R[x_1,\dots,x_n]$. A semialgebraic set $X\subset\R^n$ is called \emph{arc-symmetric} if, for every analytic arc $\gamma:(-1,1)\to\R^n$ with $\gamma((-1,0))\subset X$, we have $\gamma((-1,1))\subset X$.
A function $f:X\to\R$ is a \emph{semialgebraic function} when its graph is a semialgebraic subset of $\R^{n+1}$. Every arc-analytic semialgebraic function on an arc-symmetric set is continuous in the Euclidean topology (\cite[Prop.\,5.1]{Ku}).
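For the reader's convenience, let us also recall a standard example from the literature showing that arc-analyticity is strictly weaker than analyticity. The function $f:\R^2\to\R$ defined by
\[
f(x,y)=\begin{cases}\dfrac{x^3}{x^2+y^2}\,, & (x,y)\neq(0,0)\,,\\[1ex] 0\,, & (x,y)=(0,0)\,,\end{cases}
\]
is semialgebraic and arc-analytic, but not analytic at the origin. Indeed, for every analytic arc $\gamma=(\gamma_1,\gamma_2):(-1,1)\to\R^2$, one checks easily that $f\circ\gamma$ is analytic (the order of vanishing of the numerator along $\gamma$ always dominates that of the denominator); on the other hand, $f$ is homogeneous of degree $1$ but not linear, hence not analytic at $0$.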
By a fundamental theorem \cite[Thm.\,1.4]{Ku}, the arc-symmetric semialgebraic sets are precisely the closed sets of a certain noetherian topology on $\R^n$. (A topology is called \emph{noetherian} when every descending sequence of its closed sets is stationary.) Following \cite{Ku}, we will call it the \emph{$\AR$ topology}, and the arc-symmetric semialgebraic sets will henceforth be called \emph{$\AR$-closed sets}.
Given an $\AR$-closed set $X$ in $\R^n$, we denote by $\A(X)$ the ring of arc-analytic semialgebraic functions on $X$.
The elements of $\A(X)$ play the role of `regular functions' in $\AR$ geometry. Indeed, it is not difficult to see (\cite[Prop.\,5.1]{Ku}) that the zero locus of every arc-analytic semialgebraic function $f:X\to\R$ is $\AR$-closed. Recently, it was also shown (\cite[\S\,1, Thm.\,1]{AS}) that every $\AR$-closed set may be realized as the zero locus of an arc-analytic function. Therefore, the $\AR$ topology is, in fact, the one defined by arc-analytic semialgebraic functions.
\medskip
In \cite{AS}, we conjectured that every arc-analytic semialgebraic function on an $\AR$-closed set $X$ in $\R^n$ is a restriction of an element of $\A(\R^n)$. Theorem~\ref{thm:main} gives an affirmative answer to this conjecture.
If $X$ is an $\AR$-closed set in $\R^n$, we denote by $\I(X)$ the ideal in $\A(\R^n)$ of the functions that vanish on $X$.
\begin{theorem}
\label{thm:main}
Let $X$ be an $\AR$-closed set in $\R^n$, and let $f:X\to\R$ be an arc-analytic semialgebraic function. Then, there exists an arc-analytic semialgebraic $F:\R^n\to\R$ such that $F|_X=f$. In other words,
\[
\A(X)\simeq\A(\R^n)/\I(X)
\]
as $\R$-algebras.
\end{theorem}
\begin{remark}
\label{rem:regulous}
The above theorem seems particularly interesting in the context of continuous rational functions.
Following \cite{KKK}, we will call $f:X\to\R$ a \emph{continuous rational function} when $f$ is continuous (in the Euclidean topology) and there exist a Zariski open dense subset $Y$ in the Zariski closure $\overline{X}^\Zar$ and a regular function $F:Y\to\R$ such that $f|_{X\cap Y}=F|_{X\cap Y}$.
Continuous rational functions have been extensively studied recently (see, e.g., \cite{Kuch}, \cite{KN}, \cite{FHMM}, \cite{KKK}).
It follows from the proof of \cite[Thm.\,1.12]{KKK} (which works also in the $\AR$ setting) that every continuous rational function on an $\AR$-closed set $X\subset\R^n$ is an element of $\A(X)$, and hence admits an arc-analytic semialgebraic extension to $\R^n$.
However, in general, a continuous rational function on $X$ cannot be extended to a continuous rational function on $\R^n$, even if $X$ is Zariski closed (see \cite[Ex.\,2]{KN}). To overcome this problem, Koll{\'a}r and Nowak introduced the notion of a \emph{hereditarily rational function}, that is, a continuous function on an algebraic set which remains rational after restriction to an arbitrary algebraic subset (see \cite{KN} for details). The main result of \cite{KN} asserts that a function $f:Z\to\R$ on an algebraic set $Z\subset\R^n$ is hereditarily rational if and only if $f$ admits a continuous rational extension to $\R^n$.
\end{remark}
\medskip
We shall prove Theorem~\ref{thm:main} in Section~\ref{sec:proof}. We show some immediate corollaries of our main result in Section~\ref{sec:corollaries}. For the reader's convenience, in Section~\ref{sec:prelim}, we recall basic notions and tools used in this note.
\section{Preliminaries}
\label{sec:prelim}
\subsection{$\AR$-closed sets}
First, we shall recall several properties of $\AR$-closed sets that will be used throughout the paper. For details and proofs we refer the reader to \cite{Ku}.
The class of $\AR$-closed sets includes, in particular, the algebraic sets as well as the Nash sets (see below). The $\AR$ topology is strictly finer than the Zariski topology on $\R^n$ (see, e.g., \cite[Ex.\,1.2]{Ku}). Moreover, it follows from the semialgebraic Curve Selection Lemma that $\AR$-closed sets are closed in the Euclidean topology on $\R^n$ (see \cite[Rem.\,1.3]{Ku}).
An $\AR$-closed set $X$ is called \emph{$\AR$-irreducible} if it cannot be written as a union of two proper $\AR$-closed subsets. It follows from noetherianity of the $\AR$ topology (\cite[Prop.\,2.2]{Ku}) that every $\AR$-closed set admits a unique decomposition $X=X_1\cup\dots\cup X_r$ into $\AR$-irreducible sets satisfying $X_i\not\subset\bigcup_{j\neq i}X_j$ for each $i=1,\dots,r$. The sets $X_1,\dots,X_r$ are called the \emph{$\AR$-components} of $X$.
Noetherianity of the $\AR$ topology implies as well that an arbitrary family of $\AR$-closed sets has a well-defined intersection. In particular, one can define the $\AR$-closure of a set $E$ in $\R^n$ as the intersection of all $\AR$-closed sets in $\R^n$ which contain $E$.
For a semialgebraic set $E$ in $\R^n$, let $\overline{E}^{\Zar}$ denote the Zariski closure of $E$, that is, the smallest real-algebraic subset of $\R^n$ containing $E$. Similarly, let $\overline{E}^{\AR}$ denote the $\AR$-closure of $E$ in $\R^n$. Consider the following three kinds of dimension of $E$:
\begin{itemize}
\item the geometric dimension $\dim_{\mathrm{g}}\!E$, defined as the maximum dimension of a real-analytic submanifold of (an open subset of) $\R^n$ contained in $E$,
\item the algebraic dimension $\dim_a\!E$, defined as $\dim\overline{E}^{\Zar}$,
\item the $\AR$ topological (or Krull) dimension $\dim_\mathrm{K}\!E$, defined as the maximum length $l$ of a chain $X_0\subsetneq X_1\subsetneq\dots\subsetneq X_l\subset\overline{E}^{\AR}$, where $X_0,\dots,X_l$ are $\AR$-irreducible.
\end{itemize}
It is well known that $\dim_{\mathrm{g}}\!E=\dim_a\!E$ (see, e.g., \cite[Sec.\,2.8]{BCR}). By \cite[Prop.\,2.11]{Ku}, we also have $\dim_a\!E=\dim_\mathrm{K}\!E$.
We shall denote this common dimension simply as $\dim{E}$. By convention, $\dim\varnothing=-1$.
\subsection{Blowings-up and desingularization}
An essential tool in the proof of Theorem~\ref{thm:main} is the blowing-up of $\R^n$ at a Nash subset. Recall that a subset $Z$ of a semialgebraic open $U\subset\R^n$ is called \emph{Nash} if it is the zero locus of a Nash function $f:U \to \R$. A function $f:U\to\R$ is called a \emph{Nash function} if it is an analytic algebraic function on $U$, that is, a real-analytic function such that there exists a non-zero polynomial $P\in\R[x,t]$ with $P(x,f(x))=0$, for every $x \in U$. We denote the ring of all Nash functions on $U$ by $\Na(U)$. We refer the reader to \cite[Ch.\,8]{BCR} for details on Nash sets and mappings.
Let $Z$ be a Nash subset of $\R^n$. Consider the ideal $\I(Z)$ in $\Na(\R^n)$ of all Nash functions on $\R^n$ vanishing on $Z$. By noetherianity of $\Na(\R^n)$ (see, e.g., \cite[Thm.\,8.7.18]{BCR}), there are $f_1,\dots,f_r \in \Na(\R^n)$ such that $\I(Z)=(f_1,\dots,f_r)$. Set
\[
\widetilde{R} \coloneqq \{(x,[u_1,\dots,u_r])\in\R^n\times\mathbb{RP}^{r-1} : \ u_if_j(x)=u_jf_i(x) \mathrm{\ for\ all\ }i,j=1,\dots,r\}\,.
\]
The restriction $\sigma:\widetilde{R}\to\R^n$ to $\widetilde{R}$ of the canonical projection $\R^n\times\mathbb{RP}^{r-1}\to\R^n$ is the \emph{blowing-up of $\R^n$ at (the centre) $Z$}. One can verify that $\widetilde{R}$ is independent of the choice of generators $f_1,\dots,f_r$ of $\I(Z)$.
Since a real projective space is an affine algebraic set (see, e.g., \cite[Thm.\,3.4.4]{BCR}), one can assume that $\widetilde{R}$ is a Nash subset of $\R^N$ for some $N\in\N$. If $X$ is a Nash subset of $\R^n$, then the smallest Nash subset $\widetilde{X}$ of $\widetilde{R}$ containing $\sigma^{-1}(X\setminus Z)$ is called the \emph{strict transform of $X$ (by $\sigma$)}. In this case, if $Z\subset X$, then we may also call $\widetilde{X}$ the \emph{blowing-up of $X$ at $Z$}.
\medskip
For a semialgebraic set $E$ and a natural number $d$, we denote by $\Reg_d(E)$ the semialgebraic set of those points $x\in E$ at which $E_x$ is a germ of a $d$-dimensional analytic manifold. If $\dim{E}=k$, then $\dim(E\setminus\Reg_k(E))<\dim{E}$.
For a real algebraic set $X$, we denote by $\Sing(X)$ the singular locus of $X$ in the sense of \cite[\S\,3.3]{BCR}. Then, $\Sing(X)$ is an algebraic set of dimension strictly less than $\dim{X}$. Note that, in general, we may have $\Sing(X)\supsetneq X\setminus\Reg_k(X)$, where $k=\dim{X}$.
Recall that every algebraic set $X$ in $\R^n$ admits an \emph{embedded desingularization}. That is, there exists a proper mapping $\pi:\widetilde{R}\to\R^n$ which is the composition of a finite sequence of blowings-up with smooth algebraic centres, such that $\pi$ is an isomorphism outside the preimage of the singular locus $\Sing(X)$ of $X$, the strict transform $\widetilde{X}$ of $X$ is smooth, and $\widetilde{X}$ and $\pi^{-1}(\Sing(X))$ simultaneously have only normal crossings. (The latter means that every point of $\widetilde{R}$ admits a (local analytic) coordinate neighbourhood in which $\widetilde{X}$ is a coordinate subspace and each hypersurface $H$ of $\pi^{-1}(\Sing(X))$ is a coordinate hypersurface.)
For details on resolution of singularities we refer the reader to \cite{BM2} or \cite{Hi}.
\subsection{Nash functions on monomial singularities}
Another key component in the proof of Theorem~\ref{thm:main} is the behaviour of Nash functions on the so-called monomial singularities, studied in \cite{BFR}. Let $M\subset\R^n$ be an \emph{affine Nash submanifold}, that is, a semialgebraic set which is a closed real analytic submanifold of an open set in $\R^n$. Let $X\subset M$ and let $\xi\in X$. We say that the germ $X_\xi$ is a \emph{monomial singularity} if there is a neighbourhood $U$ of $\xi$ in $M$ and a Nash diffeomorphism $u:U\to\R^m$, with $u(\xi)=0$, that maps $X\cap U$ onto a union of coordinate subspaces. We say that $X$ is a \emph{set with monomial singularities} if its germ at every point is a monomial singularity (possibly smooth).
Given a semialgebraic subset $E$ of an affine Nash submanifold $M$, a function $f:E\to\R$ is called a \emph{Nash function on $E$} if there exists an open semialgebraic $U$ in $M$, with $E\subset U$, and a Nash function $F\in\Na(U)$ (in the sense defined above) such that $F|_E=f$. The ring of all Nash functions on $E$ will be denoted by $\Na(E)$. If $E\subset M$ is a Nash set, then a function $f:E\to\R$ is called a \emph{c-Nash function} when its restriction to each irreducible component of $E$ is Nash. The ring of c-Nash functions will be denoted by $\!\!{~}^c\!\Na(E)$. Of course, we always have $\Na(E)\subset\!\!{~}^c\!\Na(E)$. By \cite[Thm.\,1.6]{BFR}, if $E\subset M$ is a Nash set with monomial singularities then
\begin{equation}
\label{eq:BFR}
\Na(E)=\!\!{~}^c\!\Na(E)\,.
\end{equation}
\section{Proof of Theorem~\ref{thm:main}}
\label{sec:proof}
For a semialgebraic set $S$ in $\R^n$ and an integer $k$, we will denote by $\Reg_{<k}(S)$ the semialgebraic set of those points $x\in S$ at which $S_x$ is a germ of a manifold of dimension less than $k$.
\begin{lemma}
\label{lem:complement}
If $X$ is an $\AR$-closed set of dimension $k$ in $\R^n$, then
\[
X\cap\overline{\overline{X}^{\Zar}\setminus X} \ \subset \ \Sing(\overline{X}^{\Zar})\cup\overline{\Reg_{<k}(X)}\,.
\]
\end{lemma}
\begin{proof}
Set $\Sing_k(X)\coloneqq\overline{\Reg_k(X)}\setminus\Reg_k(X)$. Then $X$ can be written as a union
\[
X=\Reg_k(X)\cup(\Sing_k(X)\cup\overline{\Reg_{<k}(X)})\,.
\]
It is evident that $\Reg_k(X)\cap\overline{\overline{X}^{\Zar}\setminus X}\ \subset\ \overline{X}^{\Zar}\setminus\Reg_k(\overline{X}^{\Zar})$, and hence
\[
\Reg_k(X)\cap\overline{\overline{X}^{\Zar}\setminus X}\ \subset \ \Sing(\overline{X}^{\Zar})\,.
\]
It thus suffices to show that $\Sing_k(X)\subset\Sing(\overline{X}^{\Zar})$.
Suppose otherwise, and pick $\xi\in\Sing_k(X)\cap\Reg_k(\overline{X}^{\Zar})$. Let $U$ be the connected component of $\Reg_k(\overline{X}^{\Zar})$ that contains $\xi$. Then, $U\cap X$ is a non-empty open subset of $X$. On the other hand, $U\setminus X\neq\varnothing$, for else $X_\xi$ would be a smooth $k$-dimensional germ. Pick any $a\in U\cap X$ and $b\in U\setminus X$, and let $\gamma:(-1,1)\to U$ be an analytic arc in $U$ passing through $a$ and $b$ (which exists, because $U$ is a connected analytic manifold). Then $\gamma^{-1}(X)$ contains a non-empty open subset
of $(-1,1)$, but $\gamma((-1,1))\not\subset X$, which contradicts the arc-symmetry of $X$.
\qed
\end{proof}
\subsubsection*{Proof of Theorem~\ref{thm:main}}
Let $X$ be an $\AR$-closed set in $\R^n$. We argue by induction on dimension of $X$.
If $\dim X=0$, then $X$ is just a finite set, and hence an extension $F:\R^n\to\R$ may even be chosen to be a polynomial.
Suppose then that $\dim X=k>0$, and every arc-analytic semialgebraic function on every $\AR$-closed set in $\R^n$ of dimension smaller than $k$ admits an arc-analytic semialgebraic extension to the whole $\R^n$.
Given $f\in\A(X)$, let $S(f)$ denote the locus of points $x\in\Reg_k(X)$ such that $f$ is not analytic at $x$. Then, $S(f)$ is semialgebraic and $\dim S(f)\leq k-2$ (see \cite{KuPa}, and cf. \cite[Thm.\,5.2]{Ku}).
Let
\[
Z:=\Sing(\overline{X}^\Zar)\cup\;\overline{S(f)\cup\Reg_{<k}(X)}^{\Zar}\,.
\]
Since taking Zariski closure of a semialgebraic set does not increase the dimension, we have $\dim(Z\cap X)\leq k-1$.
Therefore, by the inductive hypothesis, $f|_{Z\cap X}$ can be extended to an arc-analytic semialgebraic function $g:\R^n\to\R$. By replacing $f$ with $f-g|_X$, we may thus assume that
\begin{equation}
\label{eq:Z}
f|_{Z\cap X}=0.
\end{equation}
We may further extend $f$ to an arc-analytic function on $X\cup Z$, by setting $f|_Z\coloneqq0$, and hence extend it by $0$ to $\overline{X}^{\Zar}$:
\begin{equation}
\label{eq:Zar}
f|_{\overline{X}^{\Zar}\setminus X}\coloneqq0.
\end{equation}
This extension is arc-analytic. Indeed, by Lemma~\ref{lem:complement}, we have $X\cap\overline{\overline{X}^{\Zar}\setminus X}\subset Z$, which, by the arc-symmetry of $X$, implies that an analytic arc $\gamma$ in $\overline{X}^{\Zar}$ is either entirely contained in $X$ or else it intersects $X$ only at points of $Z$.
Let $\pi:\widetilde{R}\to\R^n$ be an embedded desingularization of $\overline{X}^{\Zar}$\!, and let $\widetilde{X}$ be the strict transform of $\overline{X}^{\Zar}$. By \cite[Thm.\,2.6]{Ku}, there are connected components $E_1,\dots,E_s$ of $\widetilde{X}$, each of dimension $k$, such that $\pi(E_1\cup\ldots\cup E_s)=\overline{\Reg_k(X)}$. Set $E:=E_1\cup\dots\cup E_s$.
By \eqref{eq:Z} and \eqref{eq:Zar}, we have $f\circ\pi|_T\equiv0$ for all other connected components $T$ of $\widetilde{X}$, as well as $f\circ\pi|_H\equiv0$ for every hypersurface $H$ of the exceptional locus $\pi^{-1}(\Sing(\overline{X}^\Zar))$.
By \cite[Thm.\,1.1]{BM1}, there exists a finite composition of blowings-up $\sigma:\check{R}\to\widetilde{R}$ \;(with smooth Nash centres) which converts the arc-analytic semialgebraic function $f\circ\pi|_E$ into a Nash function $f\circ\pi\circ\sigma|_{\check{E}}$, where the Nash manifold $\check{E}$ is the strict transform of $E$ by $\sigma$.
Moreover, by \cite[Thm.\,1.3]{KuPa}, the centres of the blowings-up in $\sigma$ can be chosen so that $\sigma$ is an isomorphism outside the preimage of $S(f\circ\pi)$. Consequently, one can assume that $\pi\circ\sigma$ is an isomorphism outside the preimage of $Z$.
Let $W:=(\pi\circ\sigma)^{-1}(\overline{X}^{\Zar})$. By the above, the singular locus of $W$ is contained in $(\pi\circ\sigma)^{-1}(Z)$.
Let $\tau:\widehat{R}\to\check{R}$ be an embedded desingularization of $W$ (with smooth Nash centres), and let $\widehat{W}$ be the strict transform of $W$. Further, let $\widehat{E}$ be the strict transform of $\check{E}$, and let $\Sigma$ denote the exceptional locus of $\tau$. Since the real projective space is an affine algebraic variety, we may assume that $\widehat{R}\subset\R^N$ for some $N\in\N$. Notice that, by \eqref{eq:Z} and \eqref{eq:Zar}, $f\circ\pi\circ\sigma\circ\tau$ is a continuous function on $\widehat{W}\cup\Sigma$, which vanishes identically on every irreducible component of $\widehat{W}$ which is not contained in $\widehat{E}$, as well as on every irreducible component of $\Sigma$. Since $f\circ\pi\circ\sigma\circ\tau|_{\widehat{E}}$ is Nash, by construction, it follows that $f\circ\pi\circ\sigma\circ\tau$ is Nash when restricted to every (Nash) irreducible component of $\widehat{W}\cup\Sigma$. We will write $\widehat{f}$ for $f\circ\pi\circ\sigma\circ\tau|_{\widehat{W}\cup\Sigma}$, for short.
We claim that $\widehat{f}$ can be extended to a Nash function $\widehat{F}:U\to\R$ on an open semialgebraic neighbourhood $U$ of $\widehat{W}\cup\Sigma$ in $\R^N$. Indeed, the set $\widehat{W}\cup\Sigma$ is a finite union of Nash submanifolds of $\R^N$ which simultaneously have only normal crossings. Therefore, by \eqref{eq:BFR}, $\widehat{f}$ admits a required Nash extension if and only if $\widehat{f}|_T$ can be extended to a Nash function on an open semialgebraic neighbourhood of $T$ in $\R^N$ for every irreducible component $T$ of $\widehat{W}\cup\Sigma$. Let then $T$ be such an irreducible component. Since $T$ is a Nash submanifold of $\R^N$, it has a tubular neighbourhood. That is, there exists an open semialgebraic neighbourhood $U_T$ of $T$ in $\R^N$ with a Nash retraction $\varrho_T:U_T\to T$ (see \cite[Cor.\,8.9.5]{BCR}). We may thus extend $\widehat{f}|_T$ to a Nash function $\widehat{F}_T:U_T\to\R$ by setting $\widehat{F}_T(x):=\widehat{f}(\varrho_T(x))$ for all $x\in U_T$. This proves the existence of $\widehat{F}$.
Now, by the Efroymson extension theorem (see \cite{E}, or \cite[Thm.\,8.9.12]{BCR}), the function $\widehat{F}$ admits a Nash extension to the whole $\R^N$; i.e., there exists $G\in\Na(\R^N)$ such that $G|_U=\widehat{F}$. Then, $G|_H\equiv0$ for every hypersurface $H$ of the exceptional locus of $\tau$, since this is the case for $\widehat{F}$.
Finally, we define the extension $F:\R^n\to\R$ of $f$ as
\[
F(x):=\begin{cases}
(G\circ\tau^{-1}\circ\sigma^{-1}\circ\pi^{-1})(x) & \mathrm{if\ }x\notin Z\\
0 & \mathrm{if\ }x\in Z\ .
\end{cases}
\]
To see that $F$ is arc-analytic, let $\gamma:(-1,1)\to\R^n$ be an analytic arc. Let $\widetilde{\gamma}:(-1,1)\to\widetilde{R}$ be the lifting of $\gamma$ by $\pi$, let $\check{\gamma}:(-1,1)\to\check{R}$ be the lifting of $\widetilde{\gamma}$ by $\sigma$, and let $\widehat{\gamma}:(-1,1)\to\R^N$ be the lifting of $\check{\gamma}$ by $\tau$. We claim that
\begin{equation}
\label{eq:arc}
F\circ\gamma=G\circ\widehat{\gamma}\,,
\end{equation}
which implies that $F\circ\gamma$ is analytic.
Indeed, if $\gamma(t)\notin Z$ for some $t\in(-1,1)$, then \eqref{eq:arc} holds because $(G\circ\tau^{-1}\circ\sigma^{-1}\circ\pi^{-1})(\gamma(t))=(G\circ\tau^{-1}\circ\sigma^{-1})(\widetilde{\gamma}(t))=(G\circ\tau^{-1})(\check{\gamma}(t))=G(\widehat{\gamma}(t))$. If, in turn, $\gamma(t)\in Z$, then $\gamma(t)$ lifts by $\pi\circ\sigma\circ\tau$ either to a point $z$ in $\widehat{W}\setminus\widehat{E}$ or else a point $z$ in the exceptional locus of $\tau$. In either case, $G(z)=0$, by construction, and so $G(\widehat{\gamma}(t))=0=F(\gamma(t))$, as required.
\qed
\begin{remark}
\label{rem:where-sing}
It is evident from the above proof that, in fact, one could choose the extension $F:\R^n\to\R$ to be analytic outside of \,$\Sing(\overline{X}^\Zar)\cup\overline{S(f)\cup\Reg_{<k}(X)}^{\Zar}$ (hence, in particular, outside of $\overline{X}^\Zar$).
\end{remark}
\begin{problem}
\label{prob:1}
It would be interesting to know if the extension $F$ can be chosen so that its non-analyticity locus satisfies $S(F)=S(f)$.
\end{problem}
\section{Some immediate applications}
\label{sec:corollaries}
Arc-analytic semialgebraic functions may be defined and studied on arbitrary semialgebraic sets (see, e.g., \cite{S}). It is thus natural to ask which semialgebraic sets enjoy the extension property from Theorem~\ref{thm:main}. The following result shows that, in fact, the arc-symmetric sets are uniquely characterised by the extension property.
\begin{proposition}
\label{prop:only-AR}
For a semialgebraic set $S$ in $\R^n$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $S$ is arc-symmetric.
\item[(ii)] Every arc-analytic semialgebraic function on $S$ admits an arc-analytic semialgebraic extension to the whole $\R^n$.
\end{itemize}
\end{proposition}
\begin{proof}
The implication $\mathrm{(i)}\Rightarrow\mathrm{(ii)}$ is given by Theorem~\ref{thm:main}.
For the converse, let $S$ be a semialgebraic subset of $\R^n$ that is not arc-symmetric. This means that there exists an analytic arc $\gamma:(-1,1)\to\R^n$ such that $\gamma((-1,0))\subset S$ but $\gamma((0,1))\not\subset S$. Pick a point $a=(a_1,\dots,a_n)\in\gamma((0,1))\setminus S$, and define
\[
f(x)=\frac{1}{\sum_{i=1}^n (x_i-a_i)^2}\,,\quad\mathrm{where\ }x=(x_1,\dots,x_n)\in\R^n\,.
\]
Then $f$ is an arc-analytic semialgebraic function on $S$ that has no extension to an arc-analytic function on $\R^n$. Indeed, given any such extension $F:\R^n\to\R$, we would have $F(\gamma(t))=f(\gamma(t))$ for all $t \in (-1,0)$, and hence, by the identity principle for analytic functions, for all $t \in (-1,s)$, where $s\in(0,1)$ is the smallest parameter such that $\gamma(s)=a$. But $f(\gamma(t))$ diverges as $t\to s^-$, so $F\circ\gamma$ cannot be analytic, and thus $F$ is not arc-analytic.
\qed
\end{proof}
Theorem~\ref{thm:main} implies also an arc-analytic variant of the Urysohn lemma. More precisely, we have the following.
\begin{corollary}
\label{cor:TU}
Let $X$ and $Y$ be disjoint $\AR$-closed sets in $\R^n$. Then, there exists an arc-analytic semialgebraic function $F:\R^n\to\R$ such that $F|_X\equiv0$ and $F|_Y\equiv1$. In particular, there exist disjoint open semialgebraic sets $U$ and $V$ in $\R^n$ such that $X\subset U$ and $Y\subset V$.
\end{corollary}
\begin{proof}
Given $X$ and $Y$ as above, the function $f:X\cup Y\to\R$ defined as
\[
f(x)=\begin{cases}0, &x\in X\\ 1, & x\in Y\end{cases}
\]
is arc-analytic semialgebraic, and the set $X\cup Y$ is arc-symmetric. Hence, by Theorem~\ref{thm:main}, $f$ admits an extension $F:\R^n\to\R$ with the required properties.
Since arc-analytic semialgebraic functions are continuous (\cite[Prop.\,5.1]{Ku}), the sets $U:=F^{-1}((-\infty,1/2))$ and $V:=F^{-1}((1/2,\infty))$ are open semialgebraic. Clearly, $U\cap V=\varnothing$, $X\subset U$, and $Y\subset V$.
\qed
\end{proof}
\begin{remark}
\label{rem:no-Nash-sep}
Note that, in general, disjoint arc-symmetric sets cannot be separated by a Nash function. Indeed, consider for instance
\[
X=\{(x,y,z)\in\R^3:\, z(x^2+y^2)=x^3\}\setminus\{(x,y,z)\in\R^3:\, x=y=0,z\neq0\}
\]
and $Y=\{(0,0,1)\}$ in $\R^3$. The set $X$ is $\AR$-closed, but its real analytic closure in $\R^3$ is the irreducible algebraic hypersurface $Z=\{(x,y,z)\in\R^3:\, z(x^2+y^2)=x^3\}$ (see \cite[Ex.\,1.2(1)]{Ku}). It follows that every Nash function $f:\R^3\to\R$ which is identically zero on $X$ must vanish on the whole $Z$ and thus cannot be equal to $1$ on $Y$.
Similarly, it is easy to construct disjoint $\AR$-closed sets that cannot be separated by a continuous rational function (cf. \cite[Ex.\,2.3]{S}).
\end{remark}
\section{Introduction}
The importance of the study of complex optimization problems
which involve
quenched, random, frustrated functions of many variables,
as well as the major role that statistical mechanics can
play in that study,
have been pointed out by Anderson more than ten years ago \cite{Anderson}.
Since then, standard statistical mechanics techniques have been applied
to the probabilistic analysis of several classical combinatorial
optimization problems, such as
the graph partitioning problem \cite{AF},
the traveling salesman problem
\cite{MP,Cerf}, the knapsack problem \cite{Opper,Fonta,Jap},
and the satisfiability
problem \cite{Selman,Monasson,Brian}, to mention only a few.
In fact the well-established statistical mechanics methods to
characterize ground states (global minima) and metastable
states (local minima) of spin glass models can be readily adapted to
the study of optimization problems \cite{MPV}.
In this paper we study the number partition problem (NPP) which
is stated as follows.
Given a sequence of real numbers $\{ a_1,a_2,\ldots,a_N \}$, the
NPP consists of partitioning them into two disjoint
sets ${\cal {A}}_1$ and ${\cal {A}}_2$ such that the difference
\begin{equation}
\mid \sum_{a_j \in {\cal {A}}_1 } a_j
- \sum_{a_j \in {\cal {A}}_2 } a_j \mid
\end{equation}
is minimized. Alternatively, we can search for the Ising spin
configurations ${\bf s} =
\left ( s_1,\ldots,s_N \right ) $ that minimize the
energy or cost function
\begin{equation} \label{E_1}
E \left ( {\bf s} \right ) = ~ \mid \sum_{j=1}^N a_j s_j \mid,
\end{equation}
where $s_j = 1$ if $a_j \in {\cal {A}}_1 $ and $s_j = -1$ if
$a_j \in {\cal {A}}_2 $. We can consider also the problem
of constrained partitions, in which the difference between
the cardinalities of sets
${\cal {A}}_1$ and $ {\cal {A}}_2$ is fixed, i.e.,
\begin{equation} \label{m}
m = \frac{1}{N} ~\sum_{j=1}^N s_j .
\end{equation}
The NPP may be viewed as the practical problem of finding the
fairest way to
partition a set of $N$ objects $j=1,2, \ldots,N$, each of which
has value $a_j$, between two persons. Despite its simplicity,
the NPP was shown to
belong to the NP-complete class, which basically means that there
is no known deterministic algorithm guaranteed to solve all instances
of this problem within a polynomial time bound \cite{GJ}.
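For small $N$, however, the optimal cost, with or without the constraint (\ref{m}), can be obtained by exhaustive enumeration of the $2^N$ states; a minimal illustrative script of this kind (not the actual code used for the numerical estimates reported in this paper) reads:
\begin{verbatim}
import numpy as np
from itertools import product

def best_energy(a, m=None):
    # exhaustive search over the 2^N spin configurations; if m is
    # given, only configurations with sum_j s_j = N*m are considered
    N, best = len(a), np.inf
    for s in product((-1, 1), repeat=N):
        if m is not None and sum(s) != round(m * N):
            continue
        best = min(best, abs(np.dot(a, s)))
    return best

rng = np.random.default_rng(0)
a = rng.random(16)                 # a_j independent, uniform in [0, 1]
print(best_energy(a))              # unconstrained optimum
print(best_energy(a, m=0.0))       # balanced (m = 0) partitions only
\end{verbatim}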
The fact that the NPP is frustrated can easily be
understood by squaring equation (\ref{E_1}), so that the problem of
minimizing $E$ becomes then the one of finding the ground state of the
infinite-range, random anti-ferromagnetic Ising Hamiltonian \cite{Fu}
\begin{equation}\label{H}
{\cal{H}} = \frac{1}{2} \sum_{i} \sum_{j>i} a_i a_j s_i s_j .
\end{equation}
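Explicitly, since $s_j^2 = 1$ for all $j$, one has
\[
E^2 \left ( {\bf s} \right ) = \sum_{j=1}^N a_j^2
+ 2 \sum_{i} \sum_{j>i} a_i a_j s_i s_j
= \sum_{j=1}^N a_j^2 + 4\,{\cal{H}} ,
\]
so that, apart from the constant $\sum_j a_j^2$, minimizing $E$ is equivalent to finding the ground state of ${\cal{H}}$.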
Thus we note that the problem of finding the ground
state of (\ref{H}) is NP-complete.
Although zero-cost solutions of the NPP may be of some value to
cryptography \cite{Shamir}, the interest in this problem stems
mainly from the remarkable failure of the stochastic heuristic
simulated annealing \cite{KGV,Pablo}
to find good solutions to it, as compared
with the solutions found by deterministic heuristics \cite{JAMS}.
In fact, the reason
for that failure is that the usual strategy of exploring the space
of configurations $\{{\bf s}\}$ through single spin flips leads
to changes of energy that are typically of order $1/N$, while a
theoretical analysis
indicates that the global minimum energy is of
order $\sqrt{N}~2^{-N}$ for unconstrained partitions \cite{KKLO}.
It is interesting to note that a very simple deterministic
heuristic, the differencing method
of Karmarkar and Karp \cite{KK}, can find with high probability
solutions whose energies are of
order $1/N^{\alpha \log N}$ for some $\alpha >0$.
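For completeness, this differencing heuristic admits a very compact implementation (a sketch of the standard largest-two differencing rule; recovering the actual bipartition requires extra bookkeeping, omitted here):
\begin{verbatim}
import heapq

def karmarkar_karp(a):
    # repeatedly replace the two largest numbers by their difference,
    # implicitly committing them to opposite sets; the last survivor
    # is the difference achieved by the induced partition
    heap = [-x for x in a]          # max-heap via sign trick
    heapq.heapify(heap)
    while len(heap) > 1:
        x, y = -heapq.heappop(heap), -heapq.heappop(heap)
        heapq.heappush(heap, -(x - y))
    return -heap[0]
\end{verbatim}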
More recently, it has been
shown that the performance of simulated annealing can be
greatly improved and even surpass that of the differencing method
by employing different representations for the problem \cite{Ruml}.
In this work we employ the annealed approximation \cite{TV,VM}
to derive rigorous lower bounds to the average value of the
difference or energy for the best constrained and unconstrained
partitions. For constrained partitions, we show that
the average optimal energy
is extensive for $ m > \sqrt{2}-1$ and
we calculate it exactly in this regime using the
self-averaging property of
the free energy density. The theoretical
predictions are compared with numerical estimates for the
optimal energy obtained through the
exhaustive search of the configuration space for $N \leq 24$.
Furthermore, we calculate analytically the average
number of minima in the 1-swap neighborhood and
estimate their typical energy. A minimum in the 1-swap
neighborhood is a state that has lower energy than
all the $N$ states that differ from it by a single spin only
\cite{JAMS}.
Similarly to previous studies of
the NPP \cite{JAMS,KKLO,Ruml},
we will consider the case where the $a_j$'s are statistically
independent random variables uniformly distributed in the unit interval.
The remainder of this paper is organized as follows. In section 2
we describe the annealed approximation and calculate the lower
bounds to the average value of the optimal energy.
In section 3 we present the calculation of the average number of
local minima in the 1-swap neighborhood. In section 4 we
discuss our main results and present
some concluding remarks. In particular, we
compare our approach with other theoretical studies of
the NPP \cite{Fu,KKLO}. In the appendix we present the details
of the self-averaging calculation of the average optimal
energy in the regime where this quantity is extensive.
\section{Annealed approximation}
In the canonical ensemble formalism of the statistical mechanics
the average value of the optimal energy for
constrained partitions is given by
\begin{equation}\label{E_m}
\bar{E}_m = \lim_{T \rightarrow 0} F_m (T) = -
\lim_{T \rightarrow 0} T ~ \left \langle \ln Z_m \right \rangle ,
\end{equation}
where $F_m (T)$ is the average free energy, and
$Z_m (T)$ is the partition function
\begin{equation}\label{Z_m}
Z_m (T) = \sum_{\{ {\bf s} \} } \delta \left ( N m, \sum_j s_j
\right ) \exp \left [ - \frac{E \left ( {\bf s} \right )}{T} \right ]
\end{equation}
with $m= -1, -1 +2/N, \ldots, 1-2/N, 1$.
Here
the summation is over the $2^N$ states ${\bf s}$,
$\delta (k,l)$ is the Kronecker delta and $T$
is the temperature. The notation $\langle \ldots \rangle$
stands for the average over the random variables $a_i$.
The limit $T \rightarrow 0$ in equation (\ref{E_m})
ensures that only the states that minimize $E \left ( {\bf s} \right )$
will contribute to $Z_m$.
Since the average entropy $S_m (T) = - dF_m/dT $
of a system of Ising spins is positive at all temperatures,
$F_m$ must be a decreasing function of $T$, so that
$\bar{E}_m = F_m(0) \geq F_m(T)$ for all $T$.
Defining the annealed free energy by
\begin{equation}\label{F_m^a}
F_m^a (T) = - T ~\ln ~ \left \langle Z_m (T) \right \rangle ,
\end{equation}
and using Jensen's inequality \cite{Feller},
$\ln \langle Z_m \rangle \geq \langle \ln Z_m \rangle$,
yield the following
inequalities
\begin{equation}
F_m^a (T) \leq F_m (T) \leq \bar{E}_m .
\end{equation}
Thus, the annealed
free energy calculated at any $T$ provides a rigorous lower
bound to $\bar{E}_m$ \cite{TV,VM}.
Clearly,
the tightest bound is given by $\bar{E}_m^a = F_m^a (T_m^*)$
where $T_m^*$ is
the temperature that maximizes $F_m^a (T)$, i.e.
\begin{equation}\label{derivative}
\frac{d F_m^a}{dT} \mid_{T_m^*} =0 .
\end{equation}
This procedure is very useful because, in general,
the annealed free energy
is much easier to evaluate than the quenched one.
We now proceed with the explicit evaluation of the annealed free energy.
Using the integral representations of the Dirac and Kronecker delta
functions we write
\begin{eqnarray}\label{Z_m_1}
\langle Z_m (T) \rangle & = & \int_{-\infty}^\infty
\int_{-\infty}^\infty
\frac{dx d\tilde{x}}{%
2 \pi} \, \mbox{e}^{i x \tilde{x} - \mid x \mid /T } \,
\int_{-\pi}^\pi \frac{d\tilde{m}}{2 \pi} \, \mbox{e}^{i N m
\tilde{m}} \, \nonumber \\
& & \prod_j \int_0^1 da_j \sum_{s_j = \pm 1} \exp
\left [ - i s_j \left (
a_j \tilde{x} + \tilde{m} \right ) \right ] .
\end{eqnarray}
The
integrals over $x$ and $a_j$, as well as the
summation over $s_j$, can easily be performed yielding
\begin{eqnarray}
\langle Z_m (T) \rangle & = & \int_{-\infty}^{\infty}
\frac{d\tilde{x}}{2 \pi} \, \frac{ 2 T}{ 1 + \left ( T \tilde{x}\right )^2}
\,
\left [ \frac{\sin \left (\tilde{x}/2 \right )}{\tilde{x}/2} \right ]^N
\nonumber \\
& &
\int_{-\pi}^{\pi} \frac{d\tilde{m}}{2 \pi} \mbox{e}^{i N m \tilde{m}} \,
\left [ \mbox{e}^{i \tilde{m} + i \tilde{x}/2} +
\mbox{e}^{-i \tilde{m} - i \tilde{x}/2} \right ]^N .
\end{eqnarray}
Using the binomial theorem,
the integral over $\tilde{m}$ can be readily carried out. The final
result is simply
\begin{equation}\label{Z_m_2}
\langle Z_m (T) \rangle = \left ( \! \! \begin{array}{c} N \\ n
\end{array} \! \! \right ) \, \int_{-\infty}^{\infty}
\frac{dy}{\pi} \, \frac{ 2 T}{ 1 + \left ( 2 T y \right )^2} \,
\mbox{e}^{N G_m (y)}
\end{equation}
where
\begin{equation}
n = N ~ \frac{1-m}{2} ,
\end{equation}
\begin{equation}
G_m (y) = i m y + \ln \left ( \frac{\sin y}{y} \right ) ,
\end{equation}
and we have made the change of variable $y = \tilde{x}/2$.
In the limit of large $N$, the integral over $y$ can be
evaluated using
the saddle-point method \cite{Daniels}. Since $\mid m \mid \leq 1$,
the saddle point lies on the imaginary axis at $y_s = i \zeta$, where
$\zeta$ is the real solution of the equation
\begin{equation}
m - \mbox{coth} ~\zeta + \frac{1}{\zeta} = 0 .
\end{equation}
Hence, the function $G_m (y_s) = G_m$, where
\begin{equation}
G_m = - m \zeta + \ln \frac{\sinh \zeta}{\zeta} ,
\end{equation}
is real. Finally, using Stirling's formula for the binomial coefficient
we rewrite equation (\ref{Z_m_2}) in the limit of large $N$ as
\begin{equation}\label{ZZ}
\langle Z_m (T) \rangle = \frac{2}{\pi N}
\sqrt{ \frac{1}{(1-m^2) \mid G_m^{''} \mid }}~
\frac{2 T}{1 - \left ( 2 T \zeta \right )^2}
~ \mbox{e}^{N g_m}
\end{equation}
where
\begin{equation}
g_m = G_m - \frac{1+m}{2} \ln \frac{1+m}{2}
- \frac{1-m}{2} \ln \frac{1-m}{2}
\end{equation}
and
\begin{equation}
G_m^{''} = -1 + m^2 + \frac{2m}{\zeta} .
\end{equation}
At this stage we can readily calculate the temperature $T_m^*$ that
maximizes the annealed free energy. In fact, equation
(\ref{derivative}) is written as
\begin{equation}\label{T^*_1}
\ln \left \langle Z_m \left ( T^*_m \right ) \right \rangle +
\frac{ 1 + \left ( 2 T^*_m \zeta \right )^2}
{1 - \left ( 2 T^*_m \zeta \right )^2} = 0 .
\end{equation}
We consider first the regime where
$\left \langle Z_m \left ( T^*_m \right ) \right \rangle$
is of order $1$. In this case, equation (\ref{ZZ})
implies that $T^*_m$ is vanishingly
small, so that equation (\ref{T^*_1}) reduces to
$ \left \langle Z_m \left ( T^*_m \right ) \right \rangle
= \mbox{e}^{-1}$. Inserting this result into
equation (\ref{F_m^a}) yields $\bar{E}_m^a = T_m^*$. Hence,
\begin{equation}\label{lb_m}
\bar{E}_m^a = \frac{\pi N}{4} \sqrt{ (1-m^2) \mid G^{''}_m \mid }
~\mbox{e}^{-1 - N g_m}
\end{equation}
which is consistent with the assumption that $T^*_m$ is small
for large $N$, provided that $g_m > 0$. Since $g_m$
decreases monotonically with $m$, from $g_0 = \ln 2$ to $g_1 = -\infty$,
this assumption breaks down for $\mid m \mid > 0.560$ where
$g_m$ is negative. Henceforth we will assume that $m \geq 0$.
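The saddle-point condition and the sign change of $g_m$ are easily
verified numerically. The following Python sketch (illustrative only;
the bisection bracket and the number of iterations are ad hoc choices)
inverts $m = \coth \zeta - 1/\zeta$ and locates the zero of $g_m$ near
$m^* \approx 0.560$.
\begin{verbatim}
# Illustrative sketch: solve the saddle-point equation
# m = coth(zeta) - 1/zeta by bisection, evaluate g_m, and find
# the value m* where g_m changes sign.
import math

def zeta_of(m, lo=1e-9, hi=200.0):
    f = lambda z: (1.0 / math.tanh(z) - 1.0 / z) - m
    for _ in range(100):             # f is increasing in z
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0: lo = mid
        else: hi = mid
    return 0.5 * (lo + hi)

def g(m):
    z = zeta_of(m)
    G = -m * z + math.log(math.sinh(z) / z)
    s = -((1 + m) / 2) * math.log((1 + m) / 2) \
        - ((1 - m) / 2) * math.log((1 - m) / 2)
    return G + s

lo, hi = 1e-6, 0.99                  # g(lo) > 0 > g(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0.0: lo = mid
    else: hi = mid
print("m* =", 0.5 * (lo + hi))       # ~ 0.560
\end{verbatim}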
It is instructive to consider in detail the case of even partitions
($m = 0$). In this case we find $\zeta=0$, $g_0 = \ln 2$, and
$G_0^{''} = -1/3$ so that
\begin{equation}
\langle Z_0 (T) \rangle = 2^N T \frac{4 \sqrt{3}}{\pi N}
\end{equation}
and
\begin{equation}\label{E_0^a}
\bar{E}^a_0 = 2^{-N} \frac{\pi N}{4 \, \mbox{e} \sqrt{3}}
\approx 0.167 ~2^{-N} N .
\end{equation}
In figures $1(a)$ and $1(b)$ we present the results of numerical
experiments to estimate the energy of the
global minima for even partitions
through the
exhaustive search in the configuration space for $N \leq 24$.
In all experiments discussed in this work, the symbols
represent the averages over $10^4$ realizations of the set $\{a_j \}$.
The error bars are calculated by measuring the standard deviation
of the average optimal energies obtained in $25$ experiments, each one
involving the average of $400$ realizations of the set $\{a_j \}$.
In these experiments we focus on the
$N$ dependence of the average optimal energy
$\bar{E}_m = \langle E_m \rangle $, and
of the ratio $r_m = \sqrt{\sigma_m^2}/\bar{E}_m$
where $\sigma_m^2 = \langle E_m^2 \rangle -\langle E_m \rangle^2$
is the variance of the random variable $E_m$.
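An exhaustive search of this kind can be sketched as follows in Python
(an illustration only; the experiments reported here used the averaging
protocol just described). A single pass over the $2^N$ configurations
yields the optimal energy for every magnetization at once, and hence
both the constrained and the unconstrained optima.
\begin{verbatim}
# Illustrative sketch: brute-force scan of all 2^N states for small N,
# recording the lowest energy found at each magnetization sum_j s_j.
import random

def optimal_energies(a):
    N = len(a)
    best = {}                        # best[mag] = minimal energy
    for code in range(2 ** N):
        e, mag = 0.0, 0
        for j in range(N):
            s = 1 if (code >> j) & 1 else -1
            e += s * a[j]
            mag += s
        e = abs(e)
        if mag not in best or e < best[mag]:
            best[mag] = e
    return best

a = [random.random() for _ in range(16)]
best = optimal_energies(a)
print("E_0 =", best[0], " E_u =", min(best.values()))
\end{verbatim}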
In figure $1(a)$ we show $\bar{E}_0$ as a function of $N$. The
straight line shown in this figure yields the fitting
$\bar{E}_0 = 0.80 ~2^{-N} N$.
Hence, although the annealed bound $\bar{E}_0^a$ gives
the correct scaling with $N$, it is about five times smaller than
our numerical estimate for $\bar{E}_0$.
In figure $1(b)$ we show the ratio $r_0$
as a function of $N$. Interestingly, this ratio tends to 1 for
large $N$, indicating that the optimal energy $E_0$ is not
self-averaging.
In the regime where
$\left \langle Z_m \left ( T^*_m \right ) \right \rangle$
is of order $\mbox{e}^N$ we find
$\ln \left \langle Z_m \left ( T^*_m \right ) \right \rangle
\approx N g_m$ and $T_m^* \approx 1/(2 \zeta)$
so that
\begin{equation}\label{EG}
\bar{E}_m^a = - N ~\frac{g_m}{2 \zeta} .
\end{equation}
Of course, this solution is valid only for $ m > 0.560$
where $g_m$ is negative. We note that (\ref{EG}) gives
a very poor lower bound to $\bar{E}_m$. In particular, for $m = 1$ we have
$\bar{E}_1 = N/2$ while the annealed bound yields $\bar{E}_1^ a = 0$.
Fortunately, in the regime of extensive $E_m $ we can
use the self-averaging property of the free energy density
to calculate $\bar{E}_m$ exactly for large $N$ (see Appendix).
The final result is simply
\begin{equation}\label{self}
\bar{E}_m = \frac{N}{2} \left [ \frac{ \left ( 1 + m \right )^2}{2} - 1
\right ] ,
\end{equation}
which is valid for $ m \geq \sqrt{2} - 1 \approx 0.414$.
Thus the annealed lower bound is also very poor in the region
$0.414 < m < 0.560$ since in
this region $\bar{E}_m^a$ decreases exponentially with
$N$, while $\bar{E}_m$ actually increases linearly with $N$.
To better appreciate the qualitative differences between the
regimes of distinct scalings with $N$, we present
in figure $2$ the numerical estimates for $\bar{E}_m$
as a function of $m$ for $N=24$.
The existence of two different regimes of scaling with $N$,
as well as the very good agreement with the
theoretical predictions for
$m > 0.414$, are apparent in this figure.
A noteworthy feature
of our numerical estimate for $\bar{E}_m$ shown in the inset
is that, in contrast
to the annealed lower bound (\ref{lb_m}), the even
partitions ($m=0$) do not give the lowest energy. We have
verified that this
result holds for smaller values of $N$ as well.
Furthermore, there seems to occur a rather
abrupt transition at $m \approx 0.25$ as indicated by the large
error bar and by the change of almost three orders
of magnitude in $\bar{E}_m$. Although it would be very interesting to
study these results more carefully for larger $N$,
we are not aware of any efficient heuristic to solve the NPP for
constrained partitions.
In particular, we note that the differencing method \cite{KK}
applies only to unconstrained partitions.
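For reference, the differencing heuristic itself takes only a few lines;
the following Python sketch is a minimal version (it returns only the
achieved energy, not the partition realizing it).
\begin{verbatim}
# Illustrative sketch: the Karmarkar-Karp differencing heuristic for
# unconstrained partitions. Repeatedly replace the two largest numbers
# by their difference; the last survivor is the achieved energy E.
import heapq, random

def differencing(a):
    heap = [-x for x in a]           # max-heap via negated values
    heapq.heapify(heap)
    while len(heap) > 1:
        x = -heapq.heappop(heap)     # largest
        y = -heapq.heappop(heap)     # second largest
        heapq.heappush(heap, -(x - y))
    return -heap[0]

a = [random.random() for _ in range(1000)]
print(differencing(a))               # typically of order N^(-alpha log N)
\end{verbatim}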
We turn now to the analysis of unconstrained partitions.
The average partition function in this case is given by
\begin{eqnarray} \label{Z_u}
\langle Z_u (T) \rangle & = & \sum_m ~ \langle Z_m (T) \rangle
\nonumber \\
& = & 2^N \int_{-\infty}^{\infty}
\frac{dy}{\pi} \, \frac{ 2 T}{ 1 + \left ( 2 T y \right )^2} \,
\mbox{e}^{N G_u (y)} ,
\end{eqnarray}
where
\begin{equation}
G_u (y) = \ln \left [ \frac{\sin \left (2y\right)}{2y} \right ] .
\end{equation}
As before, in the limit of large $N$
the integral over $y$ can be carried out via a saddle-point
integration. Since the saddle-point is $y_s = 0$, the final result
is simply
\begin{equation}
\langle Z_u (T) \rangle = 2^N T \sqrt{\frac{6}{\pi N}} ,
\end{equation}
which yields
\begin{equation}\label{E_u^a}
\bar{E}_u^a = 2^{-N} \sqrt{\frac{\pi N}{6 \mbox{e}^2}}
\approx 0.266 ~ 2^{-N} \sqrt{N} .
\end{equation}
It is interesting to compare this result with the average
energy of a randomly chosen configuration ${\bf s}$.
This quantity, which is defined by
\begin{equation}
\bar{E}_r = 2^{-N} \sum_{\{ {\bf s} \} } \prod_i \int_0^1 da_i ~
\mid \sum_j a_j s_j \mid ,
\end{equation}
is easily calculated and yields $\bar{E}_r = \sqrt{2N/3 \pi}$
for large $N$.
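This scaling is easily checked by direct sampling, as in the following
Python sketch (illustrative only, with ad hoc values for $N$ and the
sample size).
\begin{verbatim}
# Illustrative sketch: Monte Carlo check that a random configuration
# has average energy close to sqrt(2N/(3 pi)) for large N.
import math, random

N, samples = 1000, 2000
acc = 0.0
for _ in range(samples):
    acc += abs(sum(random.random() * random.choice((-1, 1))
                   for _ in range(N)))
print(acc / samples, math.sqrt(2 * N / (3 * math.pi)))
\end{verbatim}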
Moreover, comparing equations (\ref{E_0^a}) and (\ref{E_u^a})
we find that
the lower bound for average optimal energy of even partitions ($m=0$),
which minimizes $E_m^a$, is larger than
that of unconstrained partitions by a factor
$ N^{1/2}$. The fact that these quantities do not coincide
indicates that, for unconstrained partitions, $m$ is not a
self-averaging quantity, even in the large $N$ limit, i.e.
the values of $m$ associated with the best unconstrained partitions
depend on the specific realization of the set of random variables
$\{a_j \}$. In figure $3(a)$ we present the numerical estimate for
the average optimal energy $\bar{E}_u = \langle E_u \rangle$
obtained through the exhaustive search for $N \leq 24$.
The data are very
well fitted by $\bar{E}_u = 1.12 ~2^{-N} \sqrt{N} $.
In figure $3(b)$ we show the ratio $r_u = \sqrt{\sigma_u^2}/\bar{E}_u$
as a function of $N$. As before, the
finding that this ratio tends to 1 for increasing $N$
indicates that $E_u$ is not self-averaging.
\section{Average number of local minima}
As mentioned before, a minimum in the 1-swap
neighborhood is a state that has lower energy than
all the $N$ states that differ from it by a single spin only
\cite{JAMS}. In the statistical mechanics context, these states
are usually termed metastable states \cite{Tanaka}.
The following analysis will
be restricted to unconstrained partitions only, since for constrained
partitions we would have to consider
the simultaneous flip of two spins in order
to satisfy the cardinality constraint.
The average number of local minima with energy $E = \mid t \mid $
is defined by
\begin{equation}
\langle {\cal{M}} \left ( t \right ) \rangle =
\left \langle \sum_{\{ {\bf s} \} }
\delta \left ( t - \sum_j s_j a_j \right )
\prod_i \Theta \left ( \mid t - 2 s_i a_i \mid
- \mid t \mid \right ) \right \rangle
\end{equation}
where $\delta (x) $ is the Dirac delta function and
$\Theta (x) = 1$ if $x \geq 0$ and $0$ otherwise.
As the calculation is straightforward we will only sketch it in
the sequel. Using the integral representation of the
delta function we obtain
\begin{equation}
\langle {\cal{M}} \left ( t \right ) \rangle =
\int_{-\infty}^\infty \frac{d \tilde{t}}{2 \pi}
~\mbox{e}^{i t \tilde{t}} \prod_j \sum_{s_j= \pm 1}
\int_0^1 da_j ~\mbox{e}^{-i \tilde{t} s_j a_j} ~
\Theta \left ( \mid t - 2 s_j a_j \mid - \mid t \mid \right ) .
\end{equation}
Hence
the integral over $a_j$ and the summation over
$s_j$ can readily be performed, yielding
\begin{equation}\label{El1}
\langle {\cal{M}} \left ( t \right ) \rangle =
\int_{-\infty}^{\infty}
\frac{d \tilde{t}}{2\pi} ~\mbox{e}^{i t \tilde{t} }
\left ( \frac{\mbox{e}^{-i t \tilde{t} } - \mbox{e}^{-i \tilde{t} }
+ \mbox{e}^{i \tilde{t} } - 1}{i \tilde{t}} \right )^N
~~~~~\mbox{if}~~ E = \mid t \mid ~ < 1 ,
\end{equation}
and
\begin{equation}\label{Eg1}
\langle {\cal{M}} \left ( t \right ) \rangle =
\int_{-\infty}^{\infty}
\frac{d \tilde{t}}{2 \pi} ~\mbox{e}^{i t \tilde{t} }
\left ( \frac{\mbox{e}^{i \tilde{t}}- 1}{i \tilde{t}} \right )^N
= 0
~~~~~\mbox{if}~~ E = \mid t \mid ~ \geq 1 ,
\end{equation}
where we have used the interesting result that
the integral in equation (\ref{Eg1})
vanishes for all $N$ \cite{tabela}. Thus, there are no local minima
with
$E \geq 1$. As usual, for large $N$ the integral in equation (\ref{El1})
can be evaluated via a saddle-point integration.
The final result is
\begin{equation}
\langle {\cal{M}} \left ( t \right ) \rangle =
\sqrt{ \frac{1}{2 \pi N \mid H''(\xi) \mid } }
~ \mbox{e}^{ N H (\xi) }
\end{equation}
where
\begin{equation}
H ( \xi ) = \ln 2 + \ln \left [ \frac{\sinh \xi}{\xi} -
\mbox{e}^{- t \xi/2}~ \frac{\sinh \left ( t \xi/2 \right )}{\xi}
\right ] ,
\end{equation}
and $ H''(\xi) = - d^2 H (\xi)/d \xi^2$. Here,
$\xi$ is the solution of
\begin{equation}
\frac{2}{\xi} - \frac{2 \cosh \xi - \mbox{e}^{- t \xi} ~t }
{\sinh \xi - \mbox{e}^{- t \xi/2} ~\sinh \left ( t \xi/2 \right ) }
= 0 .
\end{equation}
The function $H (\xi)$ is a monotonically decreasing function
of $E = \mid t \mid$. In particular, it decreases from $\ln 2$ at
$E = 0$ ($\xi = 0$)
to $- \infty$ at $E = 1$ ($\xi = - \infty$).
It vanishes
at $E \approx 0.851$, so the average number of local minima with
energy larger than that value decreases
exponentially with $N$.
A more interesting quantity is
the average number of local minima regardless of their energy
values, which is defined by
\begin{equation}\label{total_1}
\langle {\cal{M}} \rangle = \int_{-\infty}^\infty dt ~
\langle {\cal{M}} \left ( t \right ) \rangle .
\end{equation}
From the above discussion,
it is clear that only the close neighborhood of
$t=0$ contributes to this integral, so we can expand the integrand
of (\ref{El1}) in powers of $t$ and $\tilde{t}$ and keep the lowest
order terms only. The final result is
\begin{equation}\label{total_2}
\langle {\cal{M}} \rangle = \sqrt{\frac{24}{\pi}}~\frac{2^N}{N^{3/2}}
\approx 2.764~\frac{2^N}{N^{3/2}} .
\end{equation}
It is interesting to estimate the dependence on
$N$ of the typical energy of a local minimum. This quantity,
denoted by $E_t$, is defined by
\begin{equation}
E_t = \left \langle \frac{ \int dt~ \mid t \mid ~ {\cal M} (t) }
{ \int dt~ {\cal M} (t)} \right \rangle ,
\end{equation}
which, in the annealed approximation framework \cite{Tanaka},
is approximated by
\begin{equation}\label{E_t_a}
E_t \approx \frac{\int dt~ \mid t \mid ~ \langle {\cal M} (t) \rangle }
{\langle {\cal M} \rangle} .
\end{equation}
The procedure to evaluate (\ref{E_t_a}) is identical to that used
in the evaluation of (\ref{total_1}) and yields
\begin{equation}\label{approx}
E_t \approx \frac{2}{N} .
\end{equation}
We note that while equation (\ref{total_2}) gives the exact
leading order term of the average number of local minima, equation
(\ref{approx}) is an uncontrolled estimate for the energy of a
typical minimum. These quantities can be easily estimated
numerically: for each value of $N$, ranging from $100$ to $3000$,
we generate $10^5$ random states ${\bf s}$ and count
the fraction of them that are local minima and measure their energies.
We find that the numerical data are very well fitted by the equations
$ \langle {\cal M} \rangle
\approx (2.81 \pm 0.02)~2^N/N^{3/2}$
and $E_t \approx (1.76 \pm 0.04)/N$, which are in quite good agreement
with the theoretical predictions.
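This sampling experiment can be sketched as follows in Python (an
illustration, not the code used above). Since flipping spin $i$ changes
the signed sum $t = \sum_j a_j s_j$ into $t - 2 s_i a_i$, the 1-swap
test requires only $N$ comparisons per state, exactly as in the
$\Theta$ factors above.
\begin{verbatim}
# Illustrative sketch: estimate the fraction of random states that are
# 1-swap minima, and their typical energy, by direct sampling.
import random

def sample(N, trials):
    count, e_sum = 0, 0.0
    for _ in range(trials):
        a = [random.random() for _ in range(N)]
        s = [random.choice((-1, 1)) for _ in range(N)]
        t = sum(aj * sj for aj, sj in zip(a, s))
        # state is a 1-swap minimum iff no flip lowers |t|
        if all(abs(t - 2 * sj * aj) >= abs(t)
               for aj, sj in zip(a, s)):
            count += 1
            e_sum += abs(t)
    return count / trials, (e_sum / count if count else float("nan"))

frac, e_typ = sample(100, 10 ** 5)
print(frac, e_typ)   # compare with 2.764/N^(3/2) and 2/N
\end{verbatim}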
\section{Conclusion}
To appreciate some of the drastic features of the energy landscape
associated with the NPP or, equivalently, with the random anti-ferromagnetic
Ising model defined by the Hamiltonian (\ref{H}),
we compare our results with those of the
SK model, which is
defined by the Hamiltonian \cite{SK}
\begin{equation}
{\cal H} = - \sum_i \sum_{j > i}J_{ij} s_i s_j ,
\end{equation}
where the couplings $J_{ij}$ are Gaussian statistically independent
random variables of zero mean and variance $1/N$. In this model the
annealed
lower bound for the
ground state energy is $E^a = - 0.833 N$ \cite{TV} and the number
of metastable states increases as $\mbox{e}^{0.199 N}$ \cite{Bray}.
Hence, in the NPP there are many more local minima
and the global minima are much deeper than in the SK model. These
findings may explain the failure of local
search techniques
to produce good solutions to the NPP.
Some comments regarding the comparison of our approach with that
of Karmarkar {\em et al.}~\cite{KKLO} are in order.
Those authors have derived bounds on the
probability of occurrence of the event
$ {\cal {N}} (E) = 0 $, where ${\cal {N}} (E)$ stands for
the number of states ${\bf s}$ with energy smaller
than $E$.
Interestingly, these bounds are related to the first two
moments of ${\cal {N}}$:
\begin{equation}
1 - \langle {\cal {N}} \rangle
\leq Pr \{ {\cal {N}} = 0 \} \leq \frac{
\langle {\cal {N}}^2 \rangle - \langle {\cal {N}} \rangle^2 }{\langle
{\cal {N}}^2 \rangle} .
\end{equation}
The first inequality follows
trivially from the fact that ${\cal {N}} \geq 0$,
while the second is an improvement of Chebyshev's inequality.
Only unconstrained and even partitions ($m=0$) were considered.
However, as acknowledged by Karmarkar
{\em et al.} \cite{KKLO}, these bounds give no information
on the average value
of the difference for the best partition, except perhaps for
its scaling with $N$.
Also, we should mention that Fu \cite{Fu} has actually carried out a
replica analysis of the NPP for the case of even partitions $m=0$.
However, since in that analysis it is assumed that
$E_0$ is extensive, it misses the low-temperature phase
completely.
\bigskip
{\bf Acknowledgments}~~JFF thanks Pablo Moscato
for useful conversations. This work was supported in part by
Conselho Nacional
de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq).
\newpage
\section*{Appendix}
In this appendix we calculate exactly the average optimal energy
in the regime where $E_m$ scales linearly with $N$.
Similarly to equation (\ref{Z_m_1}) we write the partition function
defined in (\ref{Z_m}) as
\begin{eqnarray}
Z_m (T) & = & \int_{-\infty}^\infty
\int_{-\infty}^\infty
\frac{dx d\tilde{x}}{%
2 \pi} \, \mbox{e}^{i x \tilde{x} - \beta \mid x \mid } \,
\int_{-\pi}^\pi \frac{d\tilde{m}}{2 \pi} \, \mbox{e}^{i N m
\tilde{m}} \, \nonumber \\
& & \prod_j \sum_{s_j = \pm 1} \exp
\left [ - i s_j \left (
a_j \tilde{x} + \tilde{m} \right ) \right ] ,
\end{eqnarray}
where $\beta = 1/T$ is the inverse temperature. As in the
annealed approximation,
the summation over $s_j$ can easily be performed,
yielding
\begin{eqnarray}
Z_m (T) & = & N \beta^2
\int_{-\infty}^\infty dx \int_{-i \infty}^{i \infty}
\frac{d\tilde{x}}{2 \pi i} \, \int_{-i \pi/\beta}^{i \pi/\beta}
\frac{d\tilde{m}}{2 \pi i}
\exp \left [- N \beta \left (x \tilde{x} + \mid x \mid + m \tilde{m}
\right) \right ] \nonumber \\
& & \times
\exp \left \{ N \int_0^1 da ~
\ln \left [ 2 \cosh \beta \left ( \tilde{x} a + \tilde{m} \right ) \right ]
\right \} ,
\end{eqnarray}
where we have used the self-averaging property
\begin{equation}
\frac{1}{N} \sum_j
\ln \left [ 2 \cosh \beta \left( \tilde{x} a_j + \tilde{m} \right ) \right ]
= \int_0^1 da
\ln \left [ 2 \cosh \beta \left( \tilde{x} a + \tilde{m} \right ) \right ] ,
\end{equation}
which is exact for $N \rightarrow \infty$. In this limit we
can carry out the
integrals using the saddle-point method, and so we obtain the following
equation for the average free-energy density
$\bar{f}_m = \bar{F}_m/N$:
\begin{equation}\label{f_m}
\bar{f}_m = x \tilde{x} + \mid x \mid + m \tilde{m} -
\frac{1}{\beta} ~\int_0^1 da
\ln \left [ 2 \cosh \beta \left( \tilde{x} a + \tilde{m} \right ) \right ] .
\end{equation}
In the zero-temperature limit ($\beta \rightarrow
\infty$), the saddle-point equations yield $\tilde{x} = -1$,
$\tilde{m} = (1+m)/2$ and
\begin{equation}
x = \frac{\left ( 1 + m \right )^2}{4} - \frac{1}{2} ,
\end{equation}
where we have assumed $x \geq 0$ and $m \geq 0$.
The average optimal energy
is obtained by taking
the zero-temperature limit in equation (\ref{f_m}) which yields
$\bar{f}_m \rightarrow \bar{E}_m /N = \mid x \mid$.
\section{Introduction}
\subsection{Constraint satisfaction problems} A \emph{constraint satisfaction problem} (CSP) is a computational problem in which the input consists of a finite set of variables and a finite set of \emph{constraints}, and where the question is whether there exists a mapping from the variables to some fixed domain such that all the constraints are satisfied. We can thus see the possible constraints as relations on that fixed domain, and in an instance of the CSP, we are asked to assign domain values to the variables such that certain specified tuples of variables become elements of certain specified relations.
When the domain is finite, and arbitrary constraints are permitted, then the CSP is NP-complete.
However, when only constraints from a restricted set of relations on the domain are allowed in the input, there might be a polynomial-time algorithm for the CSP.
The set of relations that is allowed to formulate the constraints in the input is often called the \emph{constraint language}. The question which constraint
languages give rise to polynomial-time solvable CSPs
has been the topic of intensive research over the past years. It has been conjectured by Feder and Vardi~\cite{FederVardi} that CSPs for constraint languages over finite domains have a complexity dichotomy: they are either in P or NP-complete. {Over the years, the conjecture was proved for substantial classes (for example when the domain has at most three elements~\cite{Schaefer,Bulatov} or when the constraint language contains a single binary relation without sources and sinks~\cite{HellNesetril,BartoKozikNiven}). Various methods, combinatorial (graph-theoretic), logical, and universal-algebraic were brought to bear on this classification project, with many remarkable consequences. A conjectured delineation for the dichotomy was given in the algebraic language in~\cite{JBK}, and finally the conjecture, and in particular this delineation, has recently been proven to be accurate~\cite{FVproofBulatov,FVproofZhuk}.}
When the domain is infinite, the complexity of the CSP can be outside NP, and even undecidable~\cite{BodirskyNesetrilJLC}.
But for natural classes of such CSPs there is often the potential for structured classifications, and this has proved to be the case for structures first-order definable over the order
$({\mathbb Q},<)$ of the rationals~\cite{tcsps-journal} or over the integers with successor~\cite{dCSPs2}.
Another classification of this type
has been obtained for CSPs where the
constraint language is first-order definable over the random (Rado) graph~\cite{BodPin-Schaefer},
making use of structural Ramsey theory.
This paper was titled `Schaefer's theorem for graphs' and it can be seen as lifting the famous classification of Schaefer~\cite{Schaefer} from Boolean logic to logic over finite graphs, since the random graph is universal for the class of finite graphs.
\subsection{Homogeneous graphs and their reducts}
The notion of \emph{homogeneity}
from model theory plays
an important role
when applying techniques from
finite-domain constraint satisfaction
to constraint satisfaction over infinite domains. A relational structure is
\emph{homogeneous} if every isomorphism
between finite induced substructures can be extended
to an automorphism of the entire structure.
Homogeneous
structures are uniquely
(up to isomorphism) given by
the class of finite structures
that embed into them.
The structure $(\mathbb Q,<)$
and the random graph are among the most prominent
examples of homogeneous structures.
The class of structures that are {first-order} definable over a homogeneous structure with finite relational signature is a very large generalization of the class of all finite structures,
and CSPs for those structures have
been studied independently in many different areas of theoretical computer science, e.g. in temporal and spatial reasoning, phylogenetic analysis, computational linguistics, scheduling, graph homomorphisms, and many more; see~\cite{Bodirsky-HDR} for references.
While homogeneous relational structures are abundant, there are remarkably few countably infinite homogeneous (undirected, irreflexive) \emph{graphs}; they have been classified by Lachlan and Woodrow~\cite{LachlanWoodrow}.
Besides the random graph mentioned earlier, an example of such a graph is
the countable homogeneous \emph{universal triangle-free} graph, one of the
fundamental structures that appears in most textbooks in model theory.
This graph is the up to isomorphism unique countable triangle-free graph $(H_3,E)$ with
the property that for every finite independent set $X \subseteq H_3$ and
for every finite set $Y \subseteq H_3$ there exists a vertex
$x \in H_3 \setminus (X \cup Y)$ such that $x$ is adjacent to
every vertex in $X$ and to no vertex in $Y$.
Further examples of homogeneous
graphs are the graphs $(H_4,E)$, $(H_5,E)$, and so forth, which together with $(H_3,E)$ are called the \emph{Henson graphs}, and their complements.
Here, $(H_n,E)$ for $n > 3$ is the generalization of the graph $(H_3,E)$ above from triangles to cliques of size $n$. Finally, the list of Lachlan and Woodrow contains only one more family of infinite graphs, namely the graphs $(C_n^s,E)$ whose reflexive closure $Eq$ is an equivalence relation with $n$ classes of equal size $s$, where $1\leq n,s\leq \omega$ and either $n$ or $s$ equals $\omega$, as well as their complements. We remark that $(C_n^s,Eq)$ is itself homogeneous and first-order interdefinable with $(C_n^s,E)$, and so we shall sometimes refer to the \emph{homogeneous equivalence relations}.
All countable homogeneous graphs, and even all structures which are first-order definable over homogeneous graphs, are \emph{$\omega$-categorical},
that is, all countable models of their first-order theory are isomorphic.
Moreover,
all countably infinite homogeneous graphs $\Gamma$ are
\emph{finitely bounded} in the sense
that the \emph{age} of $\Gamma$, i.e.,
the class of finite structures that embed into $\Gamma$, can be described by finitely many forbidden substructures.
Finitely bounded homogeneous structures also share with finite structures the property of having a finite description: up to isomorphism, they are uniquely given by the finite list of forbidden structures that describes their age.
Recent work indicates the importance of finite boundedness for complexity classification~\cite{BPT-decidability-of-definability, BP-reductsRamsey,Bodirsky-Mottet,TwoDichotomyConjectures}, and it has been conjectured that all structures with a first-order definition in a finitely bounded homogeneous structure enjoy a complexity dichotomy, i.e., their CSP is either in P or NP-complete (cf.~\cite{BPP-projective-homomorphisms, wonderland, TwoDichotomyConjectures}). The structures first-order definable in homogeneous graphs
therefore provide the most natural class on which to test further the methods developed in~\cite{BodPin-Schaefer} specifically for the random graph.
In this article we obtain a complete classification
of the computational complexity of CSPs where all constraints have a first-order definition in one of the Henson graphs. We moreover obtain such a classification for CSPs where all constraints have a first-order definition in a countably infinite homogeneous graph whose reflexive closure is an equivalence relation, expanding earlier results for the special cases of one single equivalence class (so-called \emph{equality constraints}~\cite{ecsps}) and infinitely many
infinite classes~\cite{equiv-csps}. Together with the above-mentioned result on the random graph, this completes the classification of CSPs for constraints with a first-order definition in any countably infinite homogeneous graph, by Lachlan and Woodrow's classification. {Our result is in accordance with the delineations between tractability and hardness predicted in general for structures with a first-order definition in a finitely bounded homogeneous structure~\cite{BPP-projective-homomorphisms, wonderland, TwoDichotomyConjectures}.}
\mic{Following an established convention (e.g.,~\cite{RandomReducts, BP-reductsRamsey}, and many more) we call a relational structure $\Gamma$ a \emph{reduct} of a structure $\Delta$ if it has the same domain as $\Delta$ and all relations of $\Gamma$ are first-order definable without parameters in $\Delta$. }That is, our notion of reduct coincides with the classical one, except that we first allow a first-order expansion of $\Delta$. With this terminology, the present article provides a complexity classification of the CSPs for all reducts of countably infinite homogeneous graphs. In other words, for every such reduct we determine the complexity of
deciding its \emph{primitive positive theory}, which consists of all sentences which are existentially quantified conjunctions of atomic formulas and which hold in the reduct. We remark that all reducts of such graphs can be defined by quantifier-free first-order formulas, by homogeneity and $\omega$-categoricity.
For reducts of $(H_n, E)$, the
CSPs express computational problems where the task is to decide whether
there exists a finite graph without any clique of size $n$ that meets certain constraints.
An example of a reduct whose CSP can be solved in polynomial time is $(H_n, {E}, \{(x,y,u,v) \;|\; E(x,y) \Rightarrow E(u,v)\})$, where $n\geq 3$ is arbitrary. As it turns out, for every CSP of a reduct of a Henson graph which is solvable in polynomial time, the corresponding reduct over the {random} graph, i.e., the reduct whose relations are defined by the same quantifier-free formulas, is also polynomial-time solvable. On the other hand, the CSP of the reduct $(H_n,
\{(x,y,u,v) \;|\; E(x,y) \vee E(u,v)\})$ is NP-complete for all $n\geq 3$, but the corresponding reduct over
the random graph can be decided in polynomial time.
Similarly, for reducts of the graph $(C_n^s,E)$ whose reflexive closure is an equivalence relation with $n$ classes of size $s$, where $1\leq n,s\leq \omega$, the computational problem is to decide whether there exists an equivalence relation with $n$ classes of size $s$ that meets certain constraints. For example, consider the structure $(C_\omega^2;Eq,A)$
where \begin{align*}
A := \big \{(x_1,y_1,x_2,y_2,x_3,y_3) \mid & \text{ if } Eq(x_1,y_1), Eq(x_2,y_2) \text{ and } Eq(x_3,y_3) \text{ then there is} \\
& \text{ an odd number of } i \in \{1,2,3\} \text{ such that }
x_i \neq y_i \big \}.
\end{align*}
This structure is a reduct of $(C_\omega^2;E)$ and it follows from our results in Section~\ref{thm:C-low-omega-high-2-P} that its CSP can be solved in polynomial time.
\subsection{Results} Our first result is the complexity classification of the CSPs of all reducts of Henson graphs, showing in particular that a uniform approach to infinitely many `base structures' in the same language (namely, the $n$-th Henson graph for each $n\geq 3$) is, {in principle,} possible.
\begin{theorem}\label{thm:main}
Let $n\geq 3$, and let $\Gamma$ be a finite signature reduct of the $n$-th Henson graph $(H_n, E)$. Then $\text{\rm CSP}(\Gamma)$ is either in $\ensuremath{\mathrm{P}}$ or $\ensuremath{\mathrm{NP}}$-complete.
\end{theorem}
We then obtain a similar complexity dichotomy for reducts of homogeneous equivalence relations, expanding earlier results for special cases~\cite{equiv-csps, ecsps}.
\begin{theorem}\label{thm:equiv}
Let $(C_n^s,E)$ be a graph whose reflexive closure $Eq$ is an equivalence relation with $n$ classes of size $s$, where $1\leq n, s \leq \omega$ {and one of $s$ or $n$ is $\omega$}. Then for any finite signature reduct $\Gamma$ of $(C_n^s,E)$, the problem $\text{\rm CSP}(\Gamma)$ is either in $\ensuremath{\mathrm{P}}$ or $\ensuremath{\mathrm{NP}}$-complete.
\end{theorem}
Together with the classification of countable homogeneous graphs, and the fact that the complexity of the CSPs of the reducts of the {random} graph have been classified~\cite{BodPin-Schaefer}, this completes the CSP classification of reducts of all countably infinite homogeneous graphs, confirming further instances of the open conjecture that CSPs of reducts of finitely bounded homogeneous structures are either in P or NP-complete~\cite{BPP-projective-homomorphisms,wonderland,TwoDichotomyConjectures}.
\begin{cor}\label{cor:homo}
Let $\Gamma$ be a finite signature reduct of a countably infinite homogeneous graph. Then $\text{\rm CSP}(\Gamma)$ is either in $\ensuremath{\mathrm{P}}$ or $\ensuremath{\mathrm{NP}}$-complete.
\end{cor}
\mic{We are going to provide more detailed versions of Theorems~\ref{thm:main} and~\ref{thm:equiv}, which describe in particular the delineation between the tractable and the NP-complete cases algebraically, in Sections~\ref{sect:summary_Henson} and~\ref{sect:summary_equivalence}. We would like to emphasize that our proof does not assume or use the dichotomy for CSPs of finite structures, as opposed to some other dichotomy results for CSPs of infinite structures such as~\cite{dCSPs2}.}
\subsection{The strategy} The method we employ follows {broadly} the method invented in~\cite{BodPin-Schaefer}
for the corresponding classification problem where the `base structure' is the random graph. The key component of this method
is the usage of Ramsey theory (in our case, a result of Ne\v{s}et\v{r}il and R\"odl~\cite{NesetrilRoedlPartite}) and the concept of \emph{canonical functions} introduced in~\cite{RandomMinOps}. There are, however, some interesting differences and novelties that appear in the present proof, as we now shortly outline.
\subsubsection{Henson graphs}
When studying the proofs in~\cite{BodPin-Schaefer}, one might get the impression that the complexity of the method grows with the model-theoretic complexity of the base structure, and that for the random
graph we have reached the limits of what is practically feasible when applying
the Ramsey method.
However, quite surprisingly, when we step from
the random graph to the graphs $(H_n,E)$, which are in a sense more complicated structures from a model-theoretic point of view\footnote{For example, the random graph has a \emph{simple} theory~\cite{Tent-Ziegler}, whereas the Henson graphs are {among} the most basic examples of structures whose theory is \emph{not} simple.},
the classification and its proof become easier again.
It is one of the contributions of the present article to
explain the reasons behind this effect. Essentially, certain \emph{behaviours} of canonical functions (cf.~Section~\ref{sect:prelims}) existing on the random graph cannot be realised in $(H_n, E)$. For example the behaviour `$\maxi$' (cf.~Section~\ref{sect:prelims}) plays no role for the present classification, but accounts over the random graph for the tractability of, inter alia, the 4-ary relation
defined by the formula $E(x,y) \vee E(u,v)$.
\mic{Remarkably}, we are able to reuse results about canonical functions over the random graph, since the calculus for composing behaviours of canonical functions is the same for any other structure with a smaller type space, and in particular the Henson graphs. Via this meta-argument we can, on numerous occasions, make statements about canonical functions over the Henson graphs which were proven earlier for the {random} graph, ignoring completely the actual underlying structure; even more comfortably, we can \emph{a posteriori} rule out some possibilities in those statements because of the $K_n$-freeness of the Henson graphs. {
Instances of this phenomenon appear in the analysis of canonical functions in Section~\ref{prop:getbinary}.}
On the other hand, along with these simplifications,
there are also new additional difficulties that appear
when investigating reducts of $(H_n,E)$ and
that were not present in the classification
of reducts of the random graph, which basically stem from the lower degree of symmetry of $(H_n,E)$ compared to the {random} graph. For example, in expansions of Henson graphs by finitely many constants,
not all orbits induce copies of Henson graphs; the fact that the analogous statement does hold for the {random} graph was used extensively in~\cite{BodPin-Schaefer}, for example in the {rather technical proof of} Proposition~7.18 of that paper.
\subsubsection{Equivalence relations} Similarly to the situation for the equivalence relation with infinitely many infinite classes studied in~\cite{equiv-csps}, there are two interesting sources of NP-hardness for the reducts $\Gamma$ of other homogeneous equivalence relations: namely, if the equivalence relation is invariant under the polymorphisms of $\Gamma$, then the structure obtained from $\Gamma$ by factoring by the equivalence relation might have a{n} NP-hard CSP, implying NP-hardness for the CSP of $\Gamma$ itself; or, roughly, for a fixed equivalence class the restriction of $\Gamma$ to that class might have a{n} NP-hard CSP, again implying NP-hardness of the CSP of $\Gamma$ (assuming that $\Gamma$ is a \emph{model-complete core}, see Sections~\ref{sect:polymorphisms} and~\ref{sect:polymorphisms_equivalence}). But whereas for the equivalence relation with infinitely many infinite classes both the factor structure and the restriction to a class are again infinite structures, for the other homogeneous equivalence relations one of the two is a finite structure. This obliges us to combine results about CSPs of finite structures with those of infinite structures. As it turns out, the two-element case is, not surprisingly, different from the other finite cases and, quite surprisingly, significantly more involved than the other cases.
\mic{One particularity of this case is that tractability is, for some reducts, implied by a ternary \emph{non-injective} canonical function which we obtain by our Ramsey-analysis. Among all the classification results for $\omega$-categorical structures obtained so far,
this ternary function is the first example of a non-injective canonical function leading to a maximal tractable class. The occurrence of this phenomenon is of technical interest in the quest for a proof of the CSP dichotomy conjecture for reducts of finitely bounded homogeneous structures via a reduction to the finite CSP dichotomy.}
\subsection{Overview} We organize the remainder of this article as follows. Basic notions and definitions, as well as the fundamental facts of the method we are going to use, are provided in Section~\ref{sect:prelims}.
Sections~\ref{sect:polymorphisms} to~\ref{sect:summary_Henson} deal with the Henson graphs: Section~\ref{sect:polymorphisms} is complexity-free and investigates the structure of reducts of Henson graphs via polymorphisms and Ramsey theory. In Section~\ref{sect:CSP}, we provide hardness and tractability proofs for different classes of reducts. Section~\ref{sect:summary_Henson} contains the proof of Theorem~\ref{thm:main}, and
we discuss the complexity classification in more detail, formulating in particular a tractability criterion for CSPs of reducts of Henson graphs.
We then turn to homogeneous equivalence relations in Sections~\ref{sect:polymorphisms_equivalence} to~\ref{sect:summary_equivalence}. Similarly to the Henson graphs, the first section (Section~\ref{sect:polymorphisms_equivalence}) is complexity-free and investigates the structure of reducts of homogeneous equivalence relations via polymorphisms and Ramsey theory. Section~\ref{sect:CSP_equivalence} contains the algorithms proving tractability where it applies. Finally, Section~\ref{sect:summary_equivalence} provides the proof of Theorem~\ref{thm:equiv}, and describes in detail the delineation between the tractable and the NP-complete cases.
We finish this work with further research directions in Section~\ref{sect:final}.
\section{Preliminaries}\label{sect:prelims}
\subsection{General notational conventions} We use one single symbol, namely $E$, for the edge relation of all homogeneous graphs; since we never consider several such graphs at the same time, this should not cause confusion. Moreover, we use $E$ for the symbol representing the relation $E$, for example in logical formulas. In general, we shall not distinguish between relation symbols and the relations which they denote. The binary relation $N(x,y)$ is defined by the formula $\neg E(x,y)\wedge x\neq y$.
When $E$ is the edge relation of a homogeneous graph whose reflexive closure is an equivalence relation, then we denote this equivalence relation by $Eq$; so $Eq(x,y)$ is defined by the formula $E(x,y) \vee x=y$.
When $t$ is an $n$-tuple, we refer to its entries by $t_1,\ldots,t_n$. When $f \colon A \rightarrow B$ is a function and $C \subseteq A$, we write $f[C]:=\{f(a) \; | \; a \in C\}$.
\subsection{Henson graphs} For $n \geq 2$, denote the clique on $n$ vertices by $K_n$. For $n\geq 3$, the graph $(H_n, E)$ is the up to isomorphism unique countable graph which is
\begin{itemize}
\item \emph{homogeneous}: any isomorphism between two finite induced subgraphs of $(H_n, E)$ can be extended to an automorphism of $(H_n, E)$, and
\item \emph{universal for the class of $K_n$-free graphs}:
$(H_n, E)$ contains all finite (in fact, all countable) $K_n$-free graphs as induced subgraphs.
\end{itemize}
The graph $(H_n, E)$ has the \emph{extension property}: for all disjoint finite $U, U'\subseteq H_n$ such that $U$ does not induce any isomorphic copy of $K_{n-1}$ in $(H_n,E)$, there exists $v\in H_n$ such that $v$ is adjacent in $(H_n, E)$ to all members of $U$ and to none in $U'$. Up to isomorphism, there exists a unique countably infinite $K_n$-free graph with this extension property, and hence the property can be used as an alternative definition of $(H_n, E)$.
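To give a computational flavour of this property, the following Python
sketch (illustrative only; it produces finite $K_n$-free graphs, not
$(H_n,E)$ itself, and the edge probability $p$ is an arbitrary choice)
grows a graph one vertex at a time, rejecting any proposed neighborhood
that contains a clique of size $n-1$ and would therefore create a $K_n$.
\begin{verbatim}
# Illustrative sketch: grow a finite K_n-free graph by adding one
# vertex at a time with a random neighborhood, rejecting neighborhoods
# containing a clique on n-1 vertices (which would create a K_n).
import itertools, random

def grow_kn_free(n, vertices, p=0.3):
    adj = {0: set()}
    for v in range(1, vertices):
        while True:
            nbhd = {u for u in adj if random.random() < p}
            has_clique = any(
                all(b in adj[a] for a, b in itertools.combinations(c, 2))
                for c in itertools.combinations(sorted(nbhd), n - 1))
            if not has_clique:
                break
        adj[v] = set(nbhd)
        for u in nbhd:
            adj[u].add(v)
    return adj

adj = grow_kn_free(3, 30)            # a finite triangle-free graph
print(sum(len(s) for s in adj.values()) // 2, "edges")
\end{verbatim}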
\subsection{Homogeneous equivalence relations} For $1\leq n,s \leq \omega$ the graph $(C_n^s,E)$ is the up to isomorphism unique countable graph whose reflexive closure is an equivalence relation \mic{$Eq$ with $n$ classes $C_i$, where ${0\leq i<n}$, all of which have size $s$}. Clearly, $(C_n^s,E)$ is homogeneous and universal in a similar sense as above.
\subsection{Constraint satisfaction problems} For a relational signature $\tau$, a first-order $\tau$-formula is called \emph{primitive positive} (or short \emph{pp}) if
it is of the form $$\exists x_1,\dots,x_n \, (\psi_1 \wedge \dots \wedge \psi_m)$$ where the $\psi_i$ are \emph{atomic}, i.e., of the form $y_1=y_2$ or $R(y_1,\dots,y_k)$ for a $k$-ary relation symbol
$R \in \tau$ and not necessarily distinct variables $y_i$.
Let $\Gamma$ be a structure with a finite relational signature $\tau$.
The \emph{constraint satisfaction problem for $\Gamma$}, denoted by $\Csp(\Gamma)$, is the computational problem of deciding for a given primitive positive (pp-) $\tau$-sentence
$\phi$ whether $\phi$ is true in $\Gamma$.
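Over a \emph{finite} template, $\Csp(\Gamma)$ can be decided by brute
force, which may help to fix intuitions; the structures studied in this
article are of course infinite. The following Python sketch is purely
illustrative; equality atoms can be handled by adding the diagonal of
the domain as an explicit relation.
\begin{verbatim}
# Illustrative sketch: brute-force decision of a pp-sentence over a
# finite template. A pp-sentence is given by its (existentially
# quantified) variables and a list of constraints (R, scope).
import itertools

def csp(domain, relations, variables, constraints):
    for values in itertools.product(domain, repeat=len(variables)):
        val = dict(zip(variables, values))
        if all(tuple(val[x] for x in scope) in relations[R]
               for R, scope in constraints):
            return True
    return False

# Example: 2-colorability of a triangle, phrased as CSP over K_2.
E = {(0, 1), (1, 0)}
print(csp([0, 1], {"E": E}, ["x", "y", "z"],
          [("E", ("x", "y")), ("E", ("y", "z")), ("E", ("x", "z"))]))
# prints False: the triangle is not 2-colorable
\end{verbatim}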
The following lemma was first stated in~\cite{JeavonsClosure} for finite domain structures $\Gamma$ only, but the proof there also works for arbitrary infinite structures.
\begin{lemma}\label{lem:pp-reduce}
Let $\Gamma = (D, R_1,\dots,R_\ell)$ be a relational structure,
and let $R$ be a
relation that has a primitive positive definition in $\Gamma$, i.e., a definition via a pp formula.
Then $\Csp(\Gamma)$ and
$\Csp(D, R, R_1, \dots, R_\ell)$ are polynomial-time equivalent.
\end{lemma}
When a relation $R$ has a primitive positive definition in a structure $\Gamma$, then we also say that $\Gamma$ \emph{pp-defines} $R$. Lemma~\ref{lem:pp-reduce} enables the so-called \emph{universal-algebraic approach} to constraint satisfaction, as exposed in the following.
\subsection{The universal-algebraic approach}\label{subsect:ua}
We say that a $k$-ary function
(also called \emph{operation})
$f \colon D^k \rightarrow D$ \emph{preserves} an $m$-ary relation
$R \subseteq D^m$ if for all $t_1,\dots,t_k \in R$ the tuple
$f(t_1,\dots,t_k)$, calculated componentwise, is also contained in $R$.
If an operation $f$ does not
preserve a relation $R$, we say that $f$ \emph{violates} $R$.
If $f$ preserves all relations of a structure $\Gamma$, we say that $f$ is a \emph{polymorphism} of $\Gamma$, and that $f$ \emph{preserves} $\Gamma$. We write $\Pol(\Gamma)$ for the set of all polymorphisms of $\Gamma$.
The unary polymorphisms of $\Gamma$
are just the \emph{endomorphisms} of $\Gamma$, and denoted by $\End(\Gamma)$.
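Over a finite domain, preservation is a finite check, as the following
Python sketch illustrates (again, the structures in this article are
infinite, so this serves only to fix the definition).
\begin{verbatim}
# Illustrative sketch: does a k-ary operation f preserve an m-ary
# relation R? Apply f componentwise to every k-tuple of tuples of R.
import itertools

def preserves(f, k, R):
    for rows in itertools.product(R, repeat=k):
        image = tuple(f(*col) for col in zip(*rows))
        if image not in R:
            return False             # f violates R
    return True

# Example: the binary minimum preserves the order relation on {0,1,2},
# i.e., min is a polymorphism of ({0,1,2}, <=).
LEQ = {(a, b) for a in range(3) for b in range(3) if a <= b}
print(preserves(min, 2, LEQ))        # True
\end{verbatim}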
The set of all polymorphisms $\mathrm{Pol}(\Gamma)$ of a relational structure $\Gamma$ forms an algebraic object called a \emph{function clone} (see~\cite{Szendrei}, \cite{GoldsternPinsker}), which is
a set of finitary operations defined on a fixed domain that is closed
under composition and that contains all projections.
Moreover, $\mathrm{Pol}(\Gamma)$ is closed in the \emph{topology of pointwise convergence}, i.e., an $n$-ary function $f$ is contained in $\Pol(\Gamma)$ if and only if for all finite subsets $A$ of $\Gamma^n$ there exists an $n$-ary $g\in\Pol(\Gamma)$ which agrees with $f$ on $A$. We will write $\overline{F}$ for the closure of a set $F$ of functions on a fixed domain in this topology; so $\overline{\Pol(\Gamma)}=\Pol(\Gamma)$. {This closure is sometimes referred to as \emph{local closure}, and closed sets as \emph{locally closed}. For an arbitrary set $F$ of functions on a fixed domain, when $\Gamma$ is the structure whose relations are precisely those which are preserved by all function in $F$, then $\Pol(\Gamma)$ is the smallest locally closed function clone containing $F$ (cf.~\cite{Szendrei}).}
When $\Gamma$ is a countable and $\omega$-categorical structure, then we can characterize primitive positive definable relations via $\Pol(\Gamma)$, as follows.
\begin{theorem}[from~\cite{BodirskyNesetrilJLC}]
\label{conf:thm:inv-pol}
Let $\Gamma$ be a countable $\omega$-categorical structure.
Then the relations preserved by the polymorphisms of $\Gamma$
are precisely those having a primitive positive definition in $\Gamma$.
\end{theorem}
Theorem~\ref{conf:thm:inv-pol} and Lemma~\ref{lem:pp-reduce} imply that if two countable $\omega$-categorical structures $\Gamma, \Delta$ with finite relational signatures have the same clone of polymorphisms, then their CSPs are polynomial-time
equivalent. Moreover, if $\mathrm{Pol}(\Gamma)$ is contained in $\mathrm{Pol}(\Delta)$, then $\text{\rm CSP}(\Gamma)$ is, up to polynomial time, at least as hard as $\text{\rm CSP}(\Delta)$.
Note that the \emph{automorphisms} of a structure $\Gamma$
are just the bijective unary polymorphisms of $\Gamma$ whose inverse function is also a polymorphism of $\Gamma$; the set of all automorphisms of $\Gamma$
is denoted by $\Aut(\Gamma)$. For every reduct $\Gamma$ of a structure $\Delta$ we have that $\Pol(\Gamma)\supseteq \Aut(\Gamma)\supseteq \Aut(\Delta)$. In particular, this is the case for reducts of the homogeneous graphs $(H_n,E)$ and $(C_n^s,E)$. Conversely, it follows from the $\omega$-categoricity of homogeneous graphs $(D,E)$ (in our case, $D=H_n$ or $D=C_n^s$) that every topologically closed function clone containing $\Aut(D,E)$ is the polymorphism clone of a reduct of $(D,E)$.
When $(D,E)$ is a homogeneous graph, and $F$ is a set of functions and $g$ is a function on the domain $D$, then we say that $F$ \emph{generates} $g$ if $g$ is contained in the smallest topologically closed function clone which contains $F\cup \Aut(D, E)$. This is the same as saying that for every finite $S\subseteq D$, there exists a term function over $F\cup \Aut(D, E)$ which agrees with $g$ on $S$. {By the discussion preceding Theorem~\ref{conf:thm:inv-pol}, this is equivalent to $g$ preserving all relations which are preserved by $F\cup \Aut(D, E)$.}
We finish this section with a general lemma that we will refer to on numerous occasions; it allows to restrict the arity of functions violating a relation. For a structure $\Gamma$ and a tuple $t \in \Gamma^k$, the \emph{orbit of $t$} in $\Gamma$ is the set
$\{ \alpha(t) \; | \; \alpha \in \Aut(\Gamma) \}$. We also call this the orbit of $t$ with respect to $\Aut(\Gamma)$.
\begin{lemma}[from~\cite{tcsps-journal}]\label{lem:arity-reduction}
Let $\Gamma$ be a relational structure. Suppose that $R\subseteq \Gamma^k$ intersects at most $m$ orbits of $k$-tuples in $\Gamma$. If $\Pol(\Gamma)$ contains a function violating $R$, then $\Pol(\Gamma)$ also contains an $m$-ary operation violating $R$.
\end{lemma}
\subsection{Canonical functions}
It will turn out that the polymorphisms relevant for the CSP classification show regular behaviour with respect to the underlying homogeneous graph, in a sense that we are now going to define.
\begin{definition}
Let $\Delta$ be a structure. The \emph{type} $\tp(a)$ of an $n$-tuple $a=(a_1,\ldots,a_n)$ of elements in $\Delta$ is the set of first-order formulas with
free variables $x_1,\dots,x_n$ that hold for $a$ in $\Delta$. For structures $\Delta_1,\ldots,\Delta_k$ and $k$-tuples $a^1,\ldots,a^n\in\Delta_1\times\cdots\times\Delta_k$, the type of $(a^1,\ldots,a^n)$ in $\Delta_1\times\cdots\times\Delta_k$, denoted by $\tp(a^1,\ldots,a^n)$, is the $k$-tuple containing the types of $(a^1_i,\ldots,a^n_i)$ in $\Delta_i$ for each $1\leq i\leq k$.
\end{definition}
We bring to the reader's attention the well-known fact that in $\omega$-categorical structures, in particular in $(H_n, E)$ and $(C_n^s,E)$, two $n$-tuples have the same type if and only if their orbits coincide.
\begin{definition}\label{defn:arbitrarilyLarge}
Let $\Delta_1,\ldots,\Delta_k$ and $\Lambda$ be structures. A \emph{behaviour} $B$ between $\Delta_1,\ldots,\Delta_k$ and $\Lambda$ is a partial function from the types over $\Delta_1,\ldots,\Delta_k$ to the types over $\Lambda$. Pairs $(s,t)$ with $B(s)=t$ are also called \emph{type conditions}. We say that a function $f \colon\Delta_1\times\cdots\times\Delta_k\rightarrow\Lambda$ \emph{satisfies the behaviour $B$} if whenever $B(s)=t$ and $(a^1,\ldots,a^n)$ has type $s$ in ${\Delta_1\times \cdots \times \Delta_k}$, then the $n$-tuple $(f(a^1_1,\ldots,a^1_k),\ldots,f(a^n_1,\ldots,a^n_k))$ has type $t$ in $\Lambda$. A function $f \colon\Delta_1\times\cdots\times\Delta_k\rightarrow\Lambda$ is \emph{canonical} if it satisfies a behaviour which is a total function from the types over $\Delta_1{\times\cdots\times}\Delta_k$ to the types over $\Lambda$.
\end{definition}
We remark that since our structures are homogeneous and have only binary relations, the type of an $n$-tuple $a$ is determined by its binary subtypes, i.e., the types of the pairs $(a_i, a_j)$, where $1\leq i,j\leq n$. In other words, the type of $a$ is determined by which of its components are equal, and between which of its components there is an edge.
Therefore, a function $f \colon (H_n,E)^k \rightarrow (H_n,E)$ or $f \colon (C_n^s,E)^k \rightarrow (C_n^s,E)$ is canonical iff it satisfies the condition of the definition for types of 2-tuples.
To provide immediate examples for these notions, we now define some behaviours that will appear in our proof as well as in the precise CSP classification. {For $m$-ary relations
$R_1,\dots,R_k$ over a set $D$,
we will in the following write $R_1\cdots R_k$ for the $m$-ary relation on $D^k$ defined as follows: $R_1\cdots R_k(x^1,\dots,x^m)$ holds for $k$-tuples $x^1,\dots,x^m \in D^k$ if and only if $R_i(x^1_i,\dots,x^m_i)$ holds for all $1\leq i \leq k$. For example, when $p,q\in D^3$ are triples of elements in a homogeneous graph $(D,E)$, then $\ENeq(p,q)$ holds if and only if $E(p_1,q_1)$, $N(p_2,q_2)$, and $p_3=q_3$ hold in $(D,E)$.}
We start with behaviours of binary injective functions $f$ on homogeneous graphs.
\begin{definition}\label{defn:behaviours_binary}
Let $(D,E)$ be a homogeneous graph. We say that a binary injective operation $f \colon D^2\rightarrow D$ is
\begin{itemize}
\item \emph{balanced in the first argument} if for all $u,v\in D^2$ we have that $\EEQ(u,v)$ implies $E(f(u),f(v))$ and $\NEQ(u,v)$ implies $N(f(u),f(v))$;
\item \emph{balanced in the second argument} if $(x,y) \mapsto f(y,x)$ is balanced in the first argument;
\item \emph{balanced} if $f$ is balanced in both arguments;
\item \emph{$E$-dominated ($N$-dominated) in the first argument} if for all $u,v \in D^2$ with $\NEQEQ(u,v)$
we have that $E(f(u),f(v))$ ($N(f(u),f(v))$);
\item \emph{$E$-dominated ($N$-dominated) in the second argument} if
$(x,y) \mapsto f(y,x)$ is $E$-dominated ($N$-dominated) in the first argument;
\item \emph{$E$-dominated ($N$-dominated)} if it is $E$-dominated ($N$-dominated) in both arguments;
\item \emph{of behaviour $\mini$} if for all $u,v\in D^2$ with $\NEQNEQ(u,v)$ we have
$E(f(u),f(v))$ if and only if $\EE(u,v)$;
\item \emph{of behaviour $\maxi$} if for all $u,v\in D^2$ with $\NEQNEQ(u,v)$ we have
$N(f(u),f(v))$ if and only if $\NN(u,v)$;
\item \emph{of behaviour $p_1$} if for all $u,v \in D^2$ with $\NEQNEQ(u,v)$ we have
$E(f(u),f(v))$ if and only if $E(u_1,v_1)$;
\item \emph{of behaviour $p_2$} if $(x,y) \mapsto f(y,x)$ is of behaviour $p_1$;
\item \emph{of behaviour projection} if it is of behaviour $p_1$ or $p_2$;
\item \mic{\emph{of behaviour xnor} if for all $u,v\in D^2$ with $\NEQNEQ(u,v)$ we have
$E(f(u),f(v))$ if and only if $\EE(u,v)$ or $\NN(u,v)$.}
\end{itemize}
\end{definition}
Each of these properties describes the set of all functions of a certain behaviour. We explain this for the first item defining functions which are balanced in the first argument, which can be expressed by the behaviour consisting of the following two type conditions. Let $(u,v)$ be any pair of elements $u,v\in D^2$ such that $\EEQ(u,v)$, and let $s$ be the type of the pair $(u,v)$ in $(D,E)\times (D,E)$. Let $x,y\in D$ satisfy $E(x,y)$, and let $t$ be the type of $(x,y)$ in $(D,E)$. Then the first type condition is $(s,t)$. Now
let $s'$ be the type in $(D,E)\times (D,E)$ of any pair $(u,v)$, where $u,v\in D^2$ satisfy $\NEQ(u,v)$,
and let $t'$ be the type in $(D,E)$ of any $x,y \in D$ with $N(x,y)$. The second type condition is $(s',t')$.
To justify the less obvious names of some of the above behaviours, we would like to point out that a binary injection of behaviour $\mini$ is reminiscent of the
Boolean minimum function on $\{0,1\}$,
where $E$ takes the role of $1$
and $N$ the role of $0$: for $u,v\in H_n^2$ with $\NEQNEQ(u,v)$, we have
$E(f(u),f(v))$ if $u,v$ are connected by an edge in both coordinates, and $N(f(u),f(v))$ otherwise. The names `{$\maxi$}' and `projection' can be explained similarly.\smallskip
\begin{definition}~\label{defn:behaviours_ternary}
Let $(D,E)$ be a homogeneous graph. We say that a ternary injective operation $f \colon D^3\rightarrow D$ is of behaviour
\begin{itemize}
\item \emph{majority} if for all $u,v\in D^3$ with ${\neq}{\neq}{\neq}(u,v)$ we have that $E(f(u),f(v))$ if and only if $\EEE(u,v)$, $\EEN(u,v)$, $\ENE(u,v)$, or $\NEE(u,v)$;
\item \emph{minority} if for all $u,v\in D^3$ with ${\neq}{\neq}{\neq}(u,v)$ we have $E(f(u),f(v))$ if and only if $\EEE(u,v)$, $\NNE(u,v)$, $\NEN(u,v)$, or $\ENN(u,v)$.
\end{itemize}
\end{definition}
In this article, in contrast to $\mini$ and minority, neither $\maxi$ nor majority will play a role, but we introduce them for the sake of completeness since they occur in~\cite{BodPin-Schaefer}.
{When we want to explain a type condition over a homogeneous graph $(D,E)$, we are going to express it in the form $f(R_1,\ldots,R_k)=S$ for binary relations $R_1,\ldots,R_k$ and a binary relation $S$; the meaning is that whenever $p,q\in D^k$, then $R_1\cdots R_k(p,q)$ implies $S(f(p),f(q))$. The relations we use in this notation range among $\{E,N,Eq,\neq,=\}$. Examples of type conditions expressed this way include $f(E,N)=N$ (meaning that $\EN(p,q)$ implies $N(f(p),f(q))$, for all $p,q\in D^2$), and $f(E,=)=E$. In the latter, note that the second $=$ has different semantic content from the first.
Similarly, the majority behaviour in Definition~\ref{defn:behaviours_ternary} can be expressed by writing $f(E,E,E)=f(E,E,N)=f(E,N,E)=f(N,E,E)=E$ and $f(N,N,N)=f(E,N,N)=f(N,E,N)=f(N,N,E)=N$. As another example, note that $E$-dominated in the first argument can be expressed as $f(\neq,=)=E$, or equivalently, as the conjunction of $f(E,=)=E$ and $f(N,=)=E$.}
Our notation
is justified by the fact that the type conditions satisfied by a function induce a partial function from types to types, and that in the case of homogeneous graphs, all that matters is the three types of pairs, given by the relations $E$, $N$, and $=$; {the relation $\neq$ is the union of $E$ and $N$, and used as a shortcut.}
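For instance, in this notation, balancedness in the first argument is expressed by the type conditions $f(E,=)=E$ and $f(N,=)=N$; the behaviour $\mini$ is expressed by
$$f(E,E)=E \quad\text{and}\quad f(E,N)=f(N,E)=f(N,N)=N\, ;$$
and the minority behaviour from Definition~\ref{defn:behaviours_ternary} is expressed by $f(E,E,E)=f(N,N,E)=f(N,E,N)=f(E,N,N)=E$ together with $f(N,N,N)=f(E,E,N)=f(E,N,E)=f(N,E,E)=N$.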
\begin{definition}
Let $(D,E)$ be a homogeneous graph. We say a ternary canonical injection $f \colon D^3\rightarrow D$ is \emph{hyperplanely of behaviour projection} if the functions $(u,v) \mapsto f(c,u,v)$, $(u,v) \mapsto f(u,c,v)$, and $(u,v) \mapsto f(u,v,c)$ are of behaviour projection for all $c\in D$. Other hyperplane behaviours, such as hyperplanely $E$-dominated, are defined analogously.
\end{definition}
Note that hyperplane behaviours are defined by conditions for the type functions $f(=,\cdot,\cdot)$, $f(\cdot,=,\cdot)$, and $f(\cdot,\cdot,=)$. For example, hyperplanely $E$-dominated precisely means that $$f(=,=,\neq)=f(=,\neq,=)=f(\neq,=,=)=E \, .$$
\subsection{Achieving canonicity in Ramsey structures}\label{sect:Ramsey}
The next proposition, which is an instance of more general statements from~\cite{BP-reductsRamsey, BPT-decidability-of-definability}, provides us with the main combinatorial tool for analyzing functions on Henson graphs. Equip $H_n$ with a total order $\prec$ in such a way that $(H_n, E,\prec)$ is homogeneous; up to isomorphism, there is only one such structure $(H_n, E,\prec)$, called the \emph{random ordered $K_n$-free graph}. The order $(H_n, \prec)$ is then isomorphic to the order $(\mathbb{Q}, <)$ of the rationals. By~\cite{NesetrilRoedlPartite}, $(H_n, E,\prec)$ is a \emph{Ramsey structure}, which implies the following proposition -- for more details, see the survey~\cite{BP-reductsRamsey}.
\begin{prop}\label{prop:canfct}
Let $f\colon H_n^k \rightarrow H_n$, let $c_1,\ldots,c_r\in H_n$,
and let $(H_n,E,\prec,c_1,\ldots,c_r)$ be the expansion of $(H_n,E,\prec)$ by the constants $c_1,\ldots,c_r$. Then
$$
\overline{\{\alpha\circ f\circ (\beta_1, \ldots, \beta_k)\;|\; \alpha\in \Aut(H_n, E, \prec),\; \beta_1, \ldots, \beta_k\in \Aut(H_n, E, \prec, c_1, \ldots, c_r)\}}
$$
contains a function $g$ such that
\begin{itemize}
\item $g$ is canonical as a function from $(H_n, E, \prec, c_1, \ldots, c_r)$ to $(H_n, E, \prec)$;
\item $g$ agrees with $f$ on $\{c_1,\ldots, c_r\}^k$.
\end{itemize}
In particular, $f$ generates a function $g$ with these properties.
\end{prop}
Similarly, Ramsey theory allows us to produce canonical functions on $(C_n^s,E)$, expanded with a certain linear order.
Equip $C_n^s$ with a total order $\prec$ so that the equivalence classes of $(C_n^s,Eq)$ are \emph{convex} with respect to $\prec$, i.e., whenever $Eq(u,v)$ holds and $u\prec w\prec v$, then $Eq(u,w)$.
Moreover, in the case where the size of the classes $s=\omega$, we require the order $\prec$ to be isomorphic to the order of the rational numbers on each equivalence class, and in case where the number of classes $n=\omega$, we require the order to be isomorphic to the order of the rational numbers between the classes (note that we already required convexity, so that $\prec$ naturally induces a linear order between the classes).
If the number of classes $n$ is finite and their size $s=\omega$ infinite, let $P_1,\dots,P_n$
denote unary predicates such that $P_i$ contains precisely the elements in the $i$-th equivalence class of $Eq$ with respect to the order on the classes induced by $\prec$.
The structure
$(C_n^\omega,E,\prec,P_1,\dots,P_n)$ is homogeneous {and a Ramsey structure, since its automorphism group is, as a topological group, isomorphic to $\Aut(\mathbb{Q};<)^n$, and since being a Ramsey structure is a property of the automorphism group (as a topological group)~\cite{Topo-Dynamics}. Thus, by~\cite{BP-reductsRamsey, BPT-decidability-of-definability}, we have the following statement for this structure, analogous to Proposition~\ref{prop:canfct}. In the statement, we may drop the mention of the auxiliary relations $P_1,\dots,P_n$, since these are first-order definable in $(C_n^s,E,\prec)$ and since the types over first-order interdefinable structures coincide; in other words, the relations were only needed temporarily in order to achieve homogeneity, required in~\cite{BP-reductsRamsey, BPT-decidability-of-definability}, but
not for the Ramsey property.}
{
\begin{prop}\label{prop:canfct-C-high-s-low-n}
Let $n\geq 1$ be finite. Let $f\colon {(C_n^\omega)}^k \rightarrow C_n^\omega$, and let $c_1,\ldots,c_r\in C_n^\omega$. Then
$$
\overline{\{\alpha\circ f\circ (\beta_1, \ldots, \beta_k)\;|\; \alpha\in \Aut(C_n^\omega, E, \prec),\; \beta_1, \ldots, \beta_k\in \Aut(C_n^\omega, E, \prec, c_1, \ldots, c_r)\}}
$$
contains a function $g$ such that
\begin{itemize}
\item $g$ is canonical as a function from $(C_n^\omega, E, \prec, c_1, \ldots, c_r)$ to $(C_n^\omega, E, \prec)$;
\item $g$ agrees with $f$ on $\{c_1,\ldots, c_r\}^k$.
\end{itemize}
In particular, $f$ generates a function $g$ with these properties.
\end{prop}
}
If the class size $s$ is finite {and their number $n=\omega$}, we add $s$ unary predicates $Q_1,\dots,Q_s$, where $Q_i$ contains precisely the $i$-th element of each equivalence class with respect to the order $\prec$.
Then $(C_\omega^s,E,\prec,Q_1,\dots,Q_s)$ is homogeneous
and {Ramsey, since its automorphism group is isomorphic as a topological group to $\Aut(\mathbb{Q};<)$, so that we obtain an analogue of Propositions~\ref{prop:canfct} and~\ref{prop:canfct-C-high-s-low-n} also in this case. Again, we may drop the relations $Q_1,\dots,Q_s$, which are first-order definable in $(C_\omega^s,E,\prec)$, in the statement.}
{
\begin{prop}\label{prop:canfct-C-high-n-low-s}
Let $s\geq 1$ be finite. Let $f\colon {(C_\omega^s)}^k \rightarrow C_\omega^s$, and let $c_1,\ldots,c_r\in C_\omega^s$. Then
$$
\overline{\{\alpha\circ f\circ (\beta_1, \ldots, \beta_k)\;|\; \alpha\in \Aut(C_\omega^s, E, \prec),\; \beta_1, \ldots, \beta_k\in \Aut(C_\omega^s, E, \prec, c_1, \ldots, c_r)\}}
$$
contains a function $g$ such that
\begin{itemize}
\item $g$ is canonical as a function from $(C_\omega^s, E, \prec, c_1, \ldots, c_r)$ to $(C_\omega^s, E, \prec)$;
\item $g$ agrees with $f$ on $\{c_1,\ldots, c_r\}^k$.
\end{itemize}
In particular, $f$ generates a function $g$ with these properties.
\end{prop}
}
\section{Polymorphisms over Henson graphs}\label{sect:polymorphisms}
We investigate polymorphisms of reducts of $(H_n,E)$. We start with unary polymorphisms in Section~\ref{sect:unary}, obtaining that we can assume that the relations $E$ and $N$ are pp-definable in our reducts, {since otherwise their $\text{\rm CSP}$ can be modeled by a reduct of equality and hence has already been classified in~\cite{ecsps}.}
We then turn to binary polymorphisms in Section~\ref{sect:binary}, obtaining Lemma~\ref{lem:essbin}, which tells us that, excluding in addition just one degenerate case where all polymorphisms are essentially unary functions, we may further assume the existence of a binary injective polymorphism.
Building on the results of those sections, we show in Section~\ref{subsect:H} via an analysis of ternary polymorphisms that for any reduct which pp-defines the relations $E$ and $N$, either the polymorphisms preserve a certain relation $H$ {(and hence, $H$ is pp-definable in the reduct by Theorem~\ref{conf:thm:inv-pol})}, or there is a polymorphism of behaviour $\mini$ (Proposition~\ref{prop:higherArity}).
\subsection{The unary case: model-complete cores}\label{sect:unary}
A countable $\omega$-categorical structure $\Delta$ is called a \emph{model-complete core} if $\Aut(\Delta)$ is dense in $\End(\Delta)$, or equivalently, every endomorphism of $\Delta$ is an elementary self-embedding, i.e., preserves all first-order formulas over $\Delta$. Every countable $\omega$-categorical structure $\Gamma$ is \emph{homomorphically equivalent} to an $\omega$-categorical model-complete core $\Delta$ which is unique up to isomorphism; that is, there exist homomorphisms from $\Gamma$ into $\Delta$ and vice versa~\cite{Cores-journal}. Since the CSPs of homomorphically equivalent structures are equal, it has proven fruitful in classification projects to always work with model-complete cores. The following proposition essentially calculates the model-complete cores of the reducts of Henson graphs.
\begin{proposition}\label{prop:redendo}
Let $\Gamma$ be a reduct of $(H_n, E)$. Then either $\End(\Gamma)$ contains a function whose image induces an independent set or $\End(\Gamma)=\overline{\Aut(\Gamma)}= \overline{\Aut(H_n, E)}$.
\end{proposition}
\begin{proof}
Assume that $\End(\Gamma)\neq \overline{\Aut(H_n, E)}$.
Then, {since $\Gamma$ is $\omega$-categorical and by Theorem~\ref{conf:thm:inv-pol} and Lemma~\ref{lem:arity-reduction}}, there exists an $f\in \End(\Gamma)$ which violates $E$ or $N$.
If $f$ violated $N$ but not $E$, then there would be a copy of $K_n$ in the range of $f$, a contradiction.
Thus, we may assume that $f$ violates $E$, i.e., there exists $(u,v)\in E$ such that
$(f(u), f(v))\in N$ {or $f(u)= f(v)$. If for all such $(u,v)$ we have $f(u)=f(v)$, then one can locally generate from $f$ a function whose image is an independent set. Since this is the first time we appeal to an argument with a flavour of local closure, let us give it in longhand. First fix $u,v\in H_n$ with $E(u,v)$ and $f(u)=f(v)$. Given a subset $A$ of vertices containing $m\geq 1$ edges, we argue there is a $g$ generated by $f$ so that $g[A]$ contains fewer vertices than $A$. Indeed, take any $a,b\in A$ with $E(a,b)$, and an automorphism $\alpha\in\Aut(H_n,E)$ mapping $(a,b)$ to $(u,v)$, and use $g=f\circ\alpha$. Note that $g$ maps the edge $(a,b)$ to a single vertex, so that $g[A]$ is indeed smaller than $A$. By iterating this method, we can see that for every finite subset $A$ of $H_n$, there is a function $g$ generated by $f$ so that $g[A]$ is an independent set. The conclusion that then $f$ also generates a function which sends the entire domain $H_n$ onto an independent set is achieved via a typical compactness argument which appears in one form or another in most works on polymorphism clones of $\omega$-categorical structures; it uses local closure together with $\omega$-categoricity. The modern and perhaps most elegant way to present it is to consider an equivalence relation $\sim$ on the set $F$ of all functions generated by $f$, defined by $g\sim g'$ if and only if $\overline{\{\alpha\circ g\;|\; \alpha\in \Aut(H_n,E)\}}=\overline{\{\alpha\circ g'\;|\; \alpha\in \Aut(H_n,E)\}}$. Then the factor space $F/_\sim$ is compact since $(H_n,E)$ is $\omega$-categorical. This was first observed, in slightly different form, in~\cite{Topo-Birk}; we refer to~\cite{BodPin-CanonicalFunctions} for a proof of the variant we are using here. Let $(A_i)_{i\in\omega}$ be an increasing sequence of finite sets so that $\bigcup_{i \in \omega} A_i = H_n$. Fix a function $g_i$ generated by $f$ which sends $A_i$ onto an independent set. By compactness, a subsequence of $([g_i]_\sim\;|\; i\in\omega)$ converges in $F/_\sim$ to a class $[g]_\sim$. This means that there are $\alpha_i\in\Aut(H_n,E)$, for $i\in\omega$, such that a subsequence of $(\alpha_i\circ g_i\;|\; i\in\omega)$ converges to $g$. But then $g$ maps $H_n$ onto an independent set.}
{Thus, we may assume that there exists $(u,v)\in E$ such that $(f(u), f(v))\in N$.}
By Proposition~\ref{prop:canfct}, $f$ generates a canonical function $g\colon (H_n, E,\prec, u, v)\rightarrow (H_n, E,\prec)$ such that $f(u)=g(u)$ and $f(v)=g(v)$; in fact, since $f$ is unary, we can disregard the order $\prec$ and assume that $g$ is canonical as a function from $(H_n, E, u, v)$ to $(H_n, E)$~\cite[Proposition 3.7]{Pon11}.
Let $U_{u v}:= \{x\in H_n \mid E(u, x)\wedge E(v, x)\}$, $U_{u \overline{v}}:= \{x\in H_n \mid E(u, x)\wedge N(v, x)\}$, $U_{\overline{u} v}:= \{x\in H_n \mid N(u, x)\wedge E(v, x)\}$ and $U_{\overline{u} \overline{v}}:= \{x\in H_n \mid N(u, x)\wedge N(v, x)\}$. {
As all four of these sets contain an independent set of size $n$, we cannot have $g(N)=E$ on any of them, as this would introduce a copy of $K_n$. Moreover, if all non-edges were collapsed to $=$ on any of these sets, then all edges would be collapsed as well, in which case $g$ would generate an endomorphism whose range is a single vertex, as above. Hence, we may assume that $N$ is preserved by $g$ on all four sets. }
If $g$ violates $E$ on $U_{\overline{u} \overline{v}}$, then, {since $U_{\overline{u} \overline{v}}$ induces an isomorphic copy of $(H_n,E)$ therein,} $g$ generates a function whose image is an independent set. Thus, we may assume that $g$ preserves $E$ on $U_{\overline{u} \overline{v}}$.
Then $g$ preserves $N$ between $U_{\overline{u} \overline{v}}$ and any other orbit $X$ of $\Aut(H_n, E, u, v)$, as otherwise the image of the $n$-element subgraph of $(H_n, E)$ induced by any point in $X$ together with a copy of $K_{n-1}$ in $U_{\overline{u} \overline{v}}$ would be isomorphic to $K_n$.
Assume that $g$ violates $E$ between $U_{\overline{u} \overline{v}}$ and another orbit {$X$} of $\Aut(H_n, E, u, v)$.
Let $A\subseteq H_n$ be finite with an edge $(x, y)$ in $A$.
Then there exists an $\alpha\in \Aut(H_n, E)$ such that $\alpha(x)\in X$ and $\alpha[A\setminus \{x\}]\subseteq U_{\overline{u} \overline{v}}$.
The function $(g\circ \alpha)\upharpoonright_{A}$ preserves $N$, and it maps $(x, y)$ to a non-edge.
By an iterative application of this step we can systematically delete all edges of $A$.
Hence, by topological closure, $g$ generates a function whose image is an independent set.
Thus, we may assume that $g$ preserves $E$ between $U_{\overline{u} \overline{v}}$ and any other orbit of $\Aut(H_n, E, u, v)$.
Let $X$ and $Y$ be infinite orbits of $\Aut(H_n, E, u, v)$, and assume that $g$ violates $N$ between $X$ and $Y$.
There exist vertices $x\in X$ and $y\in Y$, and a copy of $K_{n-2}$ in $U_{\overline{u} \overline{v}}$ such that $(x,y)$ is the only non-edge in the graph induced by these $n$ vertices.
Then, by the above, the image of this $n$-element set under $g$ induces a copy of $K_n$, a contradiction.
Hence, we may assume that $g$ preserves $N$ on $H_n\setminus \{u, v\}$.
If $g$ violates $E$ on $H_n\setminus \{u, v\}$, then we can systematically delete the edges of any finite subgraph of $(H_n, E)$ whilst preserving the non-edges, and conclude that $g$ generates a function whose image is an independent set.
Thus, we may assume that $g$ preserves $E$ on $H_n\setminus \{u, v\}$.
Assume that $g$ violates $E$ between $u$ and $U_{u \overline{v}}$.
Given any finite $A\subseteq H_n$ with a vertex $x\in A$, there exists a $\beta\in \Aut(H_n, E)$ such that $\beta(x)=u$ and $\beta[A\setminus \{x\}]\subseteq U_{u \overline{v}}\cup U_{\overline{u} \overline{v}}$.
{Since, as observed earlier, $g$ preserves $N$ between $U_{\overline{u} \overline{v}}$ and any other orbit of $\Aut(H_n, E, u, v)$, including the orbits $U_{u \overline{v}}$ and $\{u\}$, we conclude that $(g\circ \beta)\upharpoonright_A$ preserves $N$, and it maps edges from $x$ to non-edges. Thus, we can systematically delete the edges of $A$, and consequently, $g$ generates a function whose image is an independent set. Hence, we may assume that $g$ preserves $E$ between $u$ and $U_{u \overline{v}}$.}
There exists a vertex $x\in U_{\overline{u} v}$ and a copy of $K_{n-2}$ in $U_{u \overline{v}}$ such that $(x,u)$ is the only non-edge in the graph induced by these $n-1$ vertices together with $u$.
Thus, if $g$ violates $N$ between $\{u\}$ and $U_{\overline{u} v}$, then the image of this $n$-element set under $g$ induces a copy of $K_n$, a contradiction. Hence, $g$ preserves $N$ between $u$ and $U_{\overline{u} v}$.
By symmetry, we may assume that $g$ preserves $N$ between $v$ and $U_{u \overline{v}}$.
Thus, $g$ preserves $N$.
As $g$ deletes the edge between $u$ and $v$, we can systematically delete the edges of any finite subgraph of $(H_n, E)$.
Hence, $g$ generates a function whose image is an independent set.
\end{proof}
In the first case of Proposition~\ref{prop:redendo}, the model-complete core of the reduct is in fact a reduct of equality. Since the CSPs of reducts of equality have been classified~\cite{ecsps}, we do not have to consider any further reducts with an endomorphism whose image induces an independent set.
\begin{lemma}\label{lem:emptyendo}
Let $\Gamma$ be a reduct of $(H_n, E)$, and assume that $\End(\Gamma)$ contains a function whose image is an independent set. Then $\Gamma$ is homomorphically equivalent to a reduct of $(H_n, =)$.
\end{lemma}
\begin{proof}
Trivial.
\end{proof}
In the second case of Proposition~\ref{prop:redendo}, it turns out that all polymorphisms preserve the relations $E$, $N$, and $\neq$, by the following lemma and Theorem~\ref{conf:thm:inv-pol}.
\begin{lemma}\label{lem:neq-pp}
Let $\Gamma$ be a reduct of $(H_n, E)$. Then the following are equivalent:
\begin{itemize}
\item[(1)] $\End(\Gamma)= \overline{\Aut(H_n, E)}$.
\item[(2)] $E$ and $N$ have primitive positive definitions in $\Gamma$.
\item[(3)] $E$, $N$, and $\neq$ have primitive positive definitions in $\Gamma$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $E$ and $N$ are orbits
of pairs with respect to $\Aut(H_n,E)$, the implication from (1) to (2) is an immediate consequence of Theorem~\ref{conf:thm:inv-pol} and Lemma~\ref{lem:arity-reduction}. For the implication from (2) to (3), it is enough to observe that the primitive positive formula
$\exists z (E(x,z) \wedge N(y,z))$
defines $x \neq y$. Finally, the implication from (3) to (1) follows from the homogeneity of $(H_n,E)$.
\end{proof}
Before moving on to binary polymorphisms, we observe the following corollary of Proposition~\ref{prop:redendo},
first mentioned in~\cite{RandomReducts}.
\begin{cor}
For every $n\geq 3$, the permutation group $\Aut(H_n,E)$ is a maximal closed subgroup of the full symmetric group on $H_n$, i.e., every closed subgroup of the full symmetric group containing $\Aut(H_n,E)$ either equals $\Aut(H_n,E)$ or the full symmetric group.
\end{cor}
\begin{proof}
The closure $\overline{G}$ of any permutation group $G\supseteq \Aut(H_n,E)$ in the set of all unary functions on $H_n$ is a {closed transformation monoid, i.e., a topologically closed monoid of unary functions,} and hence the {monoid of endomorphisms} of a reduct of $(H_n,E)$ (cf.~for example~\cite{RandomMinOps}). By Proposition~\ref{prop:redendo}, it
either contains a function whose image induces an independent set, or it equals $\overline{\Aut(H_n,E)}$. In the first case, it is easy to see that $G$ equals the full symmetric group, and in the latter case, $G=\Aut(H_n,E)$.
\end{proof}
We remark that the automorphism group of the {random} graph has five closed supergroups~\cite{RandomReducts}, which leads to more cases in the corresponding CSP classification in~\cite{BodPin-Schaefer}.
\subsection{{Higher arities: generating injective polymorphisms}}\label{sect:binary}
We investigate functions of arity at least two that preserve $E$ and $N$ (and hence, {by Theorem~\ref{conf:thm:inv-pol}, also $\neq$, since this relation is pp-definable from $E$ and $N$
by Lemma~\ref{lem:neq-pp}}); our goal in this section is to show that they generate injections. Every unary function gives rise to a binary function by adding a dummy variable; the following definition rules out such ``improper'' higher-arity functions.
\begin{defn}
A finitary operation $f(x_1,\ldots,x_k)$ on a set is \emph{essential} if it depends on more than one of its variables $x_i$.
\end{defn}
\begin{lemma}\label{lem:essbin}
Let $f\colon H_n^2\rightarrow H_n$ be a binary essential function that preserves $E$ and $N$. Then $f$ generates a binary injection.
\end{lemma}
\begin{proof}
{
Let $\Delta$ be the structure with domain $H_n$ and whose relations are those preserved by $\{f\}\cup\Aut(H_n,E)$;
in particular, $E$, $N$, and $\neq$ are relations of $\Delta$. It is sufficient to show that $\Pol(\Delta)$ contains a binary injection (see Section~\ref{subsect:ua}). }
We follow the strategy of the proof of \cite[{Theorem} 38]{RandomMinOps}.
By \cite[Lemma 42]{RandomMinOps} it is enough to show that for all primitive positive formulas $\phi$ over $\Delta$ we have that whenever $\phi\wedge x\neq y$ and $\phi\wedge s\neq t$ are satisfiable in $\Delta$, then the formula $\phi\wedge x\neq y\wedge s\neq t$ is also satisfiable in $\Delta$.
Still following the proof of \cite[Theorem 38]{RandomMinOps} it is enough to show the following claim.
\textbf{Claim.} Given two $4$-tuples $a = (x, y, z, z)$ and $b = (p, p, q, r)$ in $H_n^4$ such that $x\neq y$ and $q\neq r$, there exist $4$-tuples $a'$ and $b'$ such that $\tp(a)=\tp(a')$ and $\tp(b)=\tp(b')$ in $(H_n,E)$ and such that $f(a', b')$ is a $4$-tuple whose first two coordinates are different and whose last two coordinates are different.
\textit{Proof of Claim.} We may assume that $x\neq z$ and $p\neq q$.
We may also assume that $f$ itself is not a binary injection.
In the following, we say that a point $(x,y)\in H_n^2$ is \emph{v-good} if $f(x,y)\neq f(x,z)$ for all $z\neq y$.
Assume without loss of generality that there exist $u_1\neq u_2, v\in H_n$ such that $f(u_1, v)= f(u_2, v)$.
In particular, as $f$ preserves $\neq$, the points $(u_1, v)$ and $(u_2, v)$ are v-good.
First fix any values $z',q'$ such that $(z',q')$ is v-good.
We may assume that for any $x', y', {p'}\in H_n$ with $\tp(x', y', z')=\tp(x, y, z)$ and $\tp(p', q')= \tp(p, q)$ we have $f(x', p')= f(y', p')$, otherwise the tuples $a'= (x', y', z', z')$ and $b'= (p', p', q', r')$ are appropriate with any $r'\in H_n$ with $\tp(p', q', r')= \tp(p, q, r)$.
Hence, as $f$ preserves $\neq$, all the points $(x',p')$ with $\tp(x', z') = \tp(x, z)$ and $\tp(p', q') = \tp(p, q)$ are v-good.
So we obtained that whenever the point $(s,t)$ is v-good, and $s_0,t_0\in H_n$ are such that $\tp(s, s_0) = \tp(x, z)$ and $\tp(t, t_0) = \tp(p, q)$, then $(s_0, t_0)$ is also v-good, or otherwise we are done.
We show that whatever the types $Q_1= \tp(x, z)$ and $Q_2= \tp(p, q)$ are, we can reach any point $(s_4, t_4)$ in $H_n^2$ from a given v-good point $(s_0, t_0)$ by at most four such steps.
To see this, note that $Q_1$ and $Q_2$ are different from $=$ by assumption.
Now let $s_1, s_2, s_3, t_1, t_2, t_3$ be such that
\begin{itemize}
\item $s_0, s_1, s_2, s_3, s_4$ are pairwise different except that $s_0=s_4$ is possible, and
\item $t_0, t_1, t_2, t_3, t_4$ are pairwise different except that $t_0=t_4$ is possible, and
\item $(s_0, s_1), (s_1, s_2), (s_2, s_3), (s_3, s_4)\in Q_1$ and all other pairs $(s_i, s_j)$ are in $N$ except that $s_0=s_4$ is possible, and
\item $(t_0, t_1), (t_1, t_2), (t_2, t_3), (t_3, t_4)\in Q_2$ and all other pairs $(t_i, t_j)$ are in $N$ except that $t_0=t_4$ is possible.
\end{itemize}
These rules are not in contradiction with the extension property of $(H_n,E)$, thus such vertices exist, and we can propagate the v-good property from $(s_0, t_0)$ to $(s_4, t_4)$.
Hence, every point is v-good,
or we are done.
If $f(u_1, v)= f(u_2, v)$ for all $u_1, u_2, v\in H_n$ with $\tp(u_1, u_2)= \tp(x, y)$, then $f$ would be essentially unary, since $(H_n, E)$ and its complement have diameter $2$.
As $f$ is a binary essential function, we can choose $x', y', p'\in H_n$ such that $\tp(x', y')=\tp(x, y)$ and $f(x', p')\neq f(y', p')$.
By choosing points $z', q', r'\in H_n$ such that $\tp(x', y', z') = \tp(x, y, z)$ and $\tp(p', q', r') = \tp(p, q, r)$ the tuples $a'= (x', y', z', z')$ and $b'= (p', p', q', r')$ are appropriate.
\end{proof}
{The following lemma allows us to drop the restriction to binary essential functions.}
\begin{lemma}\label{lem:essgen}
Let $k\geq 2$. Every essential function $f\colon H_n^k\rightarrow H_n$ that preserves $E$ and $N$ generates a binary injection.
\end{lemma}
\begin{proof}
By~\cite[Lemma 40]{RandomMinOps}, every essential operation generates a binary essential operation over the random graph; the very same proof works for the Henson graphs. Therefore, we may assume that $f$ itself is binary.
The assertion now follows from Lemma~\ref{lem:essbin}.
\end{proof}
\subsection{The relation $H$}\label{subsect:H}
\mic{Let us investigate the case in which $\Gamma$, a reduct of $(H_n,E)$, pp-defines $E$ and $N$ (and hence, $\neq$).} The following relation characterizes the NP-complete cases in this situation.
\begin{definition}\label{defn:H}
We define a 6-ary relation
$H(x_1,y_1,x_2,y_2,x_3,y_3)$ on $H_n$ by
\begin{align}
& \bigwedge_{i,j \in \{1,2,3\}, i \neq j, u \in \{x_i,y_i\}, v \in \{x_j,y_j\}} N(u,v) \nonumber \\
\wedge & \; \big((E(x_1,y_1) \wedge N(x_2,y_2) \wedge N(x_3,y_3))
\nonumber \\
& \vee \; (N(x_1,y_1) \wedge E(x_2,y_2) \wedge N(x_3,y_3)) \nonumber\\
& \vee \; (N(x_1,y_1) \wedge N(x_2,y_2) \wedge E(x_3,y_3)) \big)\; . \nonumber
\end{align}
\end{definition}
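In words, a $6$-tuple belongs to $H$ if and only if all pairs taken from distinct index groups $\{x_i,y_i\}$ and $\{x_j,y_j\}$ are non-edges, and exactly one of the three pairs $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ is an edge. The relation thus encodes a `1-in-3' condition on these pairs, which underlies the NP-hardness established in Section~\ref{subsect:hardnessOfH}.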
Our goal for this section is to prove the following proposition, which states that if $\Gamma$ is a reduct of $(H_n, E)$ with $E$ and $N$ primitive positive definable in $\Gamma$, then either $H$ has a primitive positive definition in $\Gamma$, in which case $\Csp(\Gamma)$ is NP-complete, or $\Pol(\Gamma)$ has a certain canonical polymorphism which will imply tractability of the CSP. NP-completeness and tractability for those cases will be shown in Section~\ref{sect:CSP}.
\begin{proposition}\label{prop:higherArity}
Let $\Gamma$ be a reduct of $(H_n, E)$ with $E$ and $N$ primitive positive definable in $\Gamma$. Then at least one of the following holds:
\begin{enumerate}
\item[(a)] There is a primitive positive definition of $H$ in $\Gamma$.
\item[(b)] $\Pol(\Gamma)$ contains a canonical binary injection of behaviour $\mini$.
\end{enumerate}
\end{proposition}
\subsubsection{\mic{Arity} reduction: down to binary} With the ultimate goal of producing a binary canonical polymorphism of behaviour $\mini$, we now show that under the assumption \mic{that $\Gamma$ has a polymorphism preserving $E$ and $N$ yet violating $H$}, it also has a binary polymorphism which is not of behaviour projection. We begin by ruling out some ternary behaviours which do play a role on the random graph.
\begin{lemma}\label{lem:nominority}
On $(H_n, E)$, there are no ternary functions of behaviour majority or satisfying the type conditions \mic{$f(N,N,E)=f(E,N,N)=E$}.
\end{lemma}
\begin{proof}
These could introduce a $K_n$ in the $K_n$-free graph $(H_n,E)$, \mic{as follows.}
\mic{Suppose $f$ has behaviour majority, and choose $x_1,\ldots,x_{n-1}\in H_n$ inducing a copy of $K_{n-1}$, as well as a distinct $x_0\in H_n$ adjacent to $x_1$ and no other $x_i$. Then $\{f(x_0,x_1,x_2),f(x_1,x_2,x_0),f(x_2,x_0,x_1)\}$ induces $K_3$, and is adjacent to any element in $\{f(x_i,x_i,x_i)\;|\; 2< i\leq {n-1}\}$ since $E$ is preserved, so that altogether we obtain a copy of $K_n$.}
\mic{Suppose now $f$ satisfies the type conditions $f(N,N,E)=f(E,N,N)=E$, and choose elements $x_1,\ldots,x_{n-1}\in H_n^3$ such that $\NNE(x_i,x_j)$ holds for distinct $1\leq i,j\leq {n-1}$. Pick furthermore $x_0\in H_{n}^3$ with $\ENN(x_0,x_i)$ for all $1\leq i \leq n-1$. Then $\{f(x_0),\ldots,f(x_{n-1})\}$ induces a $K_n$.}
\end{proof}
\begin{proposition}\label{prop:getbinary}
Let $f\colon H_n^k\rightarrow H_n$ be an operation that preserves $E$ and $N$ and violates $H$. Then $f$ generates a binary injection which is not of behaviour projection.
\end{proposition}
\begin{proof}
\mic{Since $H$ consists of three orbits of $6$-tuples in $(H_n,E)$, we may assume that $f$ is ternary, by Lemma~\ref{lem:arity-reduction}. Moreover, since $f$ preserves $E$ and $N$, it can only violate $H$ if it is essential. Thus, by Lemma~\ref{lem:essgen}, $f$ generates a binary injection $g$. If $g$ is not of behaviour projection, then we have proved the proposition. Otherwise, assume without loss of generality that it is of behaviour $p_1$. Consider the ternary function $g(g(g(f(x,y,z),x),y),z)$. This function is injective, since $g$ is. Moreover, it violates $H$: if $x^1,x^2,x^3 \in H$ are so that $t:=f(x^1,x^2,x^3) \notin H$, then $t$ has pairwise distinct entries since $f$ preserves $\neq$. Hence, because $g$ is of behaviour $p_1$, $t':=g(t,x^1)$ has the same type as $t$, and so do $t'':=g(t',x^2)$ and $t''':=g(t'',x^3)$, proving the claim. By substituting $f$ by this function, we can therefore in the following assume that $f$ is itself injective.}
\mic{We now prove the proposition by showing that a function of the form $(x,y)\mapsto f(x,y,\alpha(x))$, or $(x,y)\mapsto f(x,\alpha(x),y)$, or $(x,y)\mapsto f(y,x,\alpha(x))$, where $\alpha\in\Aut(H_n, E)$, is not of behaviour projection.}
\mic{Fix $x^1,x^2,x^3 \in H$ such that $f(x^1,x^2,x^3) \notin H$.
In the following, we will write $x_i := (x^1_i,x^2_i,x^3_i)$ for $1\leq i\leq 6$. So $(f(x_1),\dots,f(x_6)) \notin H$. If there exists $\alpha\in\Aut(H_n, E)$ such that $\alpha(x^i) = x^j$ for $1\leq i \neq j \leq 3$,
then our claim follows: for example, if $i=1$ and $j=3$, then the function $(x,y)\mapsto f(x,y,\alpha(x))$ violates $H$, and hence cannot be of behaviour projection.}
We assume henceforth that there is no such automorphism $\alpha$.
In this situation, by permuting arguments of $f$ if necessary, we can
assume without loss of generality that
\begin{align*}
\ENN(x_1,x_2),\, \NEN(x_3,x_4),\,\text{and } \NNE(x_5,x_6).
\end{align*}
We set $$S := \{ y \in H_n^3 \; | \; \NNN(x_i,y) \text{ for all } 1\leq i \leq 6 \} \; .$$
Consider the binary relations $Q_1Q_2Q_3$ on $H_n^3$, where $Q_i\in\{E,N\}$ for $1\leq i\leq 3$.
We show that either the claim above, which proves the proposition, holds, or else for each such relation $Q_1Q_2Q_3$, whether $E(f(u),f(v))$ or $N(f(u),f(v))$ holds for $u,v \in S$ with $Q_1Q_2Q_3(u,v)$ does not depend on the choice of $u,v$; that is, whenever $u,v,u',v'\in S$ satisfy $Q_1Q_2Q_3(u,v)$ and $Q_1Q_2Q_3(u',v')$, then $E(f(u),f(v))$ if and only if $E(f(u'),f(v'))$. Note that this is another way of saying that $f$ satisfies some type conditions on $S$. We go through all possibilities of $Q_1Q_2Q_3$.
\begin{enumerate}
\item[(1)] $Q_1Q_2Q_3=\ENN$. Let $\alpha \in \Aut(H_n, E)$ be such that $(x^2_1,x^2_2,u_2,v_2)$ is mapped
to $(x^3_1,x^3_2,u_3,v_3)$; such an automorphism exists since
$$
\NNN(x_1, u), \NNN(x_1, v),
\NNN(x_2, u), \NNN(x_2, v)
$$
hold, and since $(x^2_1,x^2_2)$ has the same type as
$(x^3_1,x^3_2)$, and $(u_2,v_2)$ has the same type as $(u_3,v_3)$.
We are done if the operation $g$ defined by $g(x,y):=f(x,y,\alpha(y))$ is not of behaviour projection. Otherwise, $E(g(u_1,u_2),g(v_1,v_2))$ iff $E(g(x_1^1,x_1^2),g(x_2^1,x_2^2))$. Combining this with the equations $(f(u),f(v))=(g(u_1,u_2),g(v_1,v_2))$ and
$(g(x_1^1,x_1^2),g(x_2^1,x_2^2))=(f(x_1),f(x_2))$, we get that $E(f(u),f(v))$ iff $E(f(x_1),f(x_2))$, and so our claim holds for this case.
\item[(2)] $Q_1Q_2Q_3=\NEN$ or $Q_1Q_2Q_3=\NNE$. These cases are analogous to the previous case.
\item[(3)] $Q_1Q_2Q_3=\NEE$. Let $\alpha$ be defined as in the first case.
Reasoning as above, if the operation defined by $f(x,y,\alpha(y))$ is of behaviour projection, then one gets that $E(f(u),f(v))$ iff $N(f(x_1),f(x_2))$.
\item[(4)] $Q_1Q_2Q_3=\ENE$ or $Q_1Q_2Q_3=\EEN$. These cases are analogous to the previous case.
\item[(5)] $Q_1Q_2Q_3= \EEE$ or $Q_1Q_2Q_3=\NNN$. Trivial since $f$ preserves $E$ and $N$.
\end{enumerate}
{Now we show that $f$ actually cannot satisfy the type conditions above on $S$. First note that by setting $h(x,y,z):=f(e_1(x),e_2(y),e_3(z))$ for self-embeddings $e_1,e_2,e_3$ of $(H_n,E)$ such that $(e_1,e_2,e_3)(u)\in S$ for all $u\in H_n^3$, we obtain a function $h$ which satisfies the same type conditions everywhere; such embeddings exist since, by its definition, the projection of $S$ onto any coordinate contains an induced copy of $(H_n,E)$. Now if the set $\{(f(x_1),f(x_2)),(f(x_3),f(x_4)),(f(x_5),f(x_6))\}$ contains at least two pairs in $E$, then by~(1) and~(2) we get that $h$ satisfies two type conditions from the minority behaviour, say $h(N,N,E)=E$ and $h(E,N,N)=E$ (up to permuting the arguments of $h$), contradicting Lemma~\ref{lem:nominority}. If it contains no pair in $E$, then by~(3) and~(4) $h$ is of behaviour majority, again contradicting Lemma~\ref{lem:nominority}. Thus, the set must contain precisely one pair in $E$, contradicting $f(x^1,x^2,x^3)\notin H$.
}
\end{proof}
\subsubsection{Producing min}\label{sect:producingmin} By Proposition~\ref{prop:getbinary}, it remains to show the following to obtain a proof of Proposition~\ref{prop:higherArity}.
\begin{proposition}\label{prop:nonProjGeneratesMin}
Let $f \colon H_n^2 \rightarrow H_n$ be a binary injection preserving $E$ and $N$ that is not of behaviour projection. Then $f$ generates a binary canonical injection of behaviour $\mini$.
\end{proposition}
In the remainder of this section we will prove this proposition by a Ramsey theoretic analysis of $f$, which requires the following definitions and facts from~\cite{RandomMinOps} concerning behaviours with respect to the homogeneous expansion of the graphs $(H_n,E)$ by the total order $\prec$ from Section~\ref{sect:Ramsey}. At this point, it might be appropriate to remark that canonicity of functions on $H_n$, and even the notion of behaviour, does depend on which underlying structure we have in mind, in particular, whether or not we consider the order $\prec$ (which we almost managed to ignore so far).
Let us define the following behaviours for functions from $(H_n, E,\prec)^2$ to $(H_n, E)$; we write $\succ$ for the relation $\{(a,b) \; | \; b \prec a\}$.
\begin{definition}
Let $f \colon H_n^2 \rightarrow H_n$ be injective. If for all $u,v\in H_n^2$ with $u_1\prec v_1$ and $u_2\prec v_2$
\begin{itemize}
\item $E(f(u),f(v))$ if and only if $\EE(u,v)$, then \emph{$f$ behaves like $\mini$ on input $(\prec,\prec)$}.
\item $E(f(u),f(v))$ if and only if $E(u_1,v_1)$, then \emph{$f$ behaves like $p_1$ on input $(\prec,\prec)$}.
\item $E(f(u),f(v))$ if and only if $E(u_2,v_2)$, then \emph{$f$ behaves like $p_2$ on input $(\prec,\prec)$}.
\end{itemize}
Analogously, we define behaviours on input $(\prec,\succ)$ using pairs $u,v\in H_n^2$ with $u_1 \prec v_1$ and $u_2\succ v_2$.
\end{definition}
\begin{proposition}\label{prop:binaryBehaviourOnInputBLaBla}
Let $f \colon H_n^2 \rightarrow H_n$ be an injection which is canonical as a function from $(H_n, E,\prec)^2$ to $(H_n, E,\prec)$ and suppose $f$ preserves $E$ and $N$. Then it behaves like $\mini$, $p_1$ or $p_2$ on input $(\prec ,\prec )$ (and similarly on input $(\prec ,\succ)$).
\end{proposition}
\begin{proof}
This follows from the definition of the term canonical: one only needs to enumerate all possible types of pairs $(u,v)$, where $u,v \in H_n^2$, and recall that $(H_n, E)$ does not contain any clique of size $n$, which rules out the remaining behaviours for $f$.
\end{proof}
\begin{definition}
If an injection $f \colon H_n^2 \rightarrow H_n$ behaves like $X$ on input $(\prec ,\prec )$ and like $Y$ on input $(\prec ,\succ )$, where $X,Y\in\{\mini, p_1,p_2\}$, then we say that $f$ is of \emph{behaviour $X / Y$}.
\end{definition}
In the following lemmas, we
show that every injective canonical binary function which
behaves differently on input $(\prec ,\prec )$
and on input $(\prec,\succ)$ generates a function
which behaves the same way on both inputs, allowing us to ignore the order again.
\begin{lemma}\label{lem:mixtyp:minp}
Suppose that $f \colon H_n^2 \rightarrow H_n$ is injective and canonical as a function from $(H_n, E,\prec)^2$ to $(H_n, E,\prec)$, and suppose that it is of behaviour $\mini / p_i$ or of behaviour $p_i / \mini$, where $i\in\{1,2\}$. Then $f$ generates a binary injection of behaviour $\mini$.
\end{lemma}
\begin{proof}
Since the calculus for behaviours on the Henson graphs is the same as that on the random graph, the same proof as in \cite{BodPin-Schaefer} works.
\end{proof}
\begin{lemma}\label{lem:p1p2impossible}
No binary injection $f \colon H_n^2 \rightarrow H_n$ can have behaviour $p_1/p_2$.
\end{lemma}
\begin{proof}
Such a behaviour would introduce a $K_n$ in a $K_n$-free graph.
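In more detail, one possible configuration witnessing this is the following. Pick $a_1\prec\dots\prec a_n$ in $H_n$ such that $a_1,\dots,a_{n-1}$ induce a copy of $K_{n-1}$ and $a_n$ is adjacent to none of them, and pick $b_n\prec b_1\prec\dots\prec b_{n-1}$ in $H_n$ such that $b_1,\dots,b_{n-1}$ form an independent set all of whose elements are adjacent to $b_n$; both ordered graphs are $K_n$-free and hence can be found in the random ordered $K_n$-free graph $(H_n,E,\prec)$. Set $p_i:=(a_i,b_i)$ for $1\leq i\leq n$. For $i<j<n$ the pair $(p_i,p_j)$ is on input $(\prec,\prec)$, so behaving like $p_1$ there forces $E(f(p_i),f(p_j))$; for $i<n$ the pair $(p_i,p_n)$ is on input $(\prec,\succ)$, so behaving like $p_2$ there forces $E(f(p_i),f(p_n))$. Hence $f(p_1),\dots,f(p_n)$ would induce a copy of $K_n$.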
\end{proof}
Having ruled out some behaviours without constants, we finally introduce constants to the language to prove Proposition~\ref{prop:nonProjGeneratesMin}.
{
\begin{proof}[Proof of Proposition~\ref{prop:nonProjGeneratesMin}]
We use Proposition~\ref{prop:getbinary} to
observe that $f$ generates a binary injection $t$ which is not of
behaviour projection.
Fix a finite set $\{c_1,\ldots, c_m\}\subseteq H_n$ on which the
latter fact is witnessed. Invoking Proposition~\ref{prop:canfct}, we
may henceforth assume that $t$ is canonical as a function from $(H_n,
E,\prec, c_1,\ldots,c_m)^2$ to $(H_n, E,\prec)$. We are going to show that $t$ generates a binary injection $g$ of behaviour $\mini$. Then another application of Proposition~\ref{prop:canfct} to $g$ yields a canonical function $g'$; this function is still of behaviour $\mini$ because any function of the form $\alpha(g(\beta(x),\gamma(y))$ is of type $\mini$, for automorphisms $\alpha,\beta,\gamma$ of $(H_n,E)$, and $g'$ is generated from operations of this type by local closure.
To obtain $g$, consider in the structure ${(H_n, E,\prec, c_1,\ldots,c_m)}$ the orbit
$$
O:=\{v\in H_n\; |\; N(v,c_i) \text{ and } v\prec c_i \text{ for all }
1\leq i\leq m\}.
$$
Then $O$ induces a structure isomorphic to $(H_n, E,\prec)$, as it
satisfies the extension property for totally ordered $K_n$-free
graphs: the same extensions can be realized in $O$ as in $(H_n,
E,\prec)$. Therefore, by
Proposition~\ref{prop:binaryBehaviourOnInputBLaBla}, $t$ has one of the
three mentioned behaviours on input $(\prec,\prec)$ and on input $(\prec,\succ)$. By Lemmas~\ref{lem:mixtyp:minp}
and~\ref{lem:p1p2impossible}, we may assume that
$t$ behaves like a projection on $O$, for any other combination of behaviours implies that it generates a binary injection of behaviour $\mini$.
Assume without loss of generality that $t$ behaves like $p_1$ on $O$. Let $u\in O^2$ and $v\in (H_n\setminus \{c_1,\ldots, c_m\})^2$ satisfy
$\neq\neq(u,v)$; we claim that $t$ behaves like $p_1$ or like $\mini$
on $\{u,v\}$. Otherwise we must have $\NE(u,v)$, and $t$ behaves like
$p_2$ on $\{u,v\}$. Pick $q_1,\ldots, q_{n-1}\in O^2$ forming a
clique in the first coordinate, an independent set in the second
coordinate, and such that the type of $(q_i,v)$ equals the type of
$(u,v)$. Then by canonicity, the image of $\{q_1,\ldots,q_{n-1},v\}$
under $t$ forms a clique of size $n$, a contradiction.
Suppose next that there exist $u\in O^2$ and $v\in (H_n\setminus \{c_1,\ldots,
c_m\})^2$ such that $t$ does not behave like $p_1$ (and hence, by the above, like $\mini$) on $\{u,v\}$. This means that $\EN(u,v)$ and $N(t(u),t(v))$. We use local closure to show that $t$ generates a binary injection which behaves like $\mini$.
To this end, set $$S:=\{p\in H_n^2\;|\; \tp(p,v)=\tp(u,v) \text{ in } (H_n,E,\prec,c_1,\ldots,c_m)\}\subseteq O^2\; .$$
Now let $q_0\in H_n^2$ be arbitrary. Pick a self-embedding $e$ of $(H_n,E)$ whose range is contained in $O$. Then the function $r\colon H_n^2\rightarrow H_n^2$ defined by $(x,y)\mapsto (t(e(x),e(y)),t(e(y),e(x)))$ has the property that $\EN(p,q)$ implies $\EN(r(p),r(q))$ and $\NE(p,q)$ implies $\NE(r(p),r(q))$, for all $p,q\in H_n^2$, since $t$ behaves like $p_1$ on $O$. Moreover, since $t$ is injective, we have that $p\neq q$ implies $\NEQNEQ(r(p),r(q))$. By the latter property, there exist self-embeddings $e_1,e_2$ of $(H_n,E)$ such that for the function $r'\colon H_n^2\rightarrow H_n^2$ defined by $r':=(e_1,e_2)\circ r$ we have that $r'(q_0)=v$, that $r'(p)\in O^2$ for all $p\in H_n^2\setminus \{q_0\}$, and that $r'(p)\in S$ for all $p\in H_n^2$ with $\EN(p,q_0)$. Then the function $h\colon H_n^2\rightarrow H_n^2$ defined by $h(x,y):=(t(r'(x,y)),y)$ has the property that $\NN(h(p),h(q_0))$ holds for all $p\in H_n^2$ with $\EN(p,q_0)$, since $t$ behaves like $\mini$ between $S$ and $v$. Moreover, $\NE(h(p),h(q_0))$ holds for all $p\in H_n^2$ with $\NE(p,q_0)$, since $t$ behaves like $p_1$ or like $\mini$ between $O^2$ and $v$. Finally, for any $p,p'\in H_n^2$ distinct from $q_0$ we have that $\EN(p,p')$ implies $\EN(h(p),h(p'))$ and $\NE(p,p')$ implies $\NE(h(p),h(p'))$, since $t$ behaves like $p_1$ on $O$. Similarly, one can construct a function $h'$ on $H_n^2$ which preserves $\EN$ and $\NE$ between any $p,p'\in H_n^2$ distinct from $q_0$, and such that $\NE(p,q_0)$ implies $\NN(h'(p),h'(q_0))$.
Iterating such functions for different choices of $q_0$, we obtain, for every finite subset $A\subseteq H_n^2$, a function $r_A$ on the plane such that $\EN(p,p')$ or $\NE(p,p')$ implies $\NN(r_A(p),r_A(p'))$ for all $p,p'\in A$. By local closure (cf.~Proposition~\ref{prop:redendo}), one then gets a function $r$ on the plane which has this property everywhere, and then $t\circ r$ is the desired binary injection of behaviour $\mini$.
So we assume henceforth that $t$ behaves like $p_1$ on $\{u,v\}$ for
all $u\in O^2$ and all $v\in (H_n\setminus \{c_1,\ldots, c_m\})^2$. We
then claim that $t$ must behave like $p_1$ or like
$\mini$ on $\{u,v\}$ for all $u,v\in (H_n\setminus \{c_1,\ldots, c_m\})^2$
with $\neq\neq(u,v)$. Otherwise we must have $\NE(u,v)$, and $t$
behaves like $p_2$ on $\{u,v\}$. Pick $q_1,\ldots, q_{n-2}\in O^2$
forming a clique in the first coordinate, an independent set in the
second coordinate, and adjacent to $u$ and $v$ in the first
coordinate. Applying $t$ we get a clique of size $n$, a
contradiction. By applying the same argument again, we now get that
$t$ must behave like $p_1$ or like $\mini$ on $\{u,v\}$ for all
$u,v\in H_n^2$ with $\neq\neq(u,v)$ (picking $q_1,\ldots,q_{n-2}\in
H_n^2$ rather than $O^2$ this time, but with the same properties relative to $u,v$).
Since $t$ is not of behaviour projection, somewhere $t$ does not behave like $p_1$ but like $\mini$, and so by a
standard iterating argument, similar to the one above (or the one given in detail in the proof of Proposition~\ref{prop:redendo}), it generates a binary injection $g$ of behaviour
$\mini$.
\end{proof}
}
\section{CSPs over Henson graphs}\label{sect:CSP}
\subsection{Hardness of $H$}\label{subsect:hardnessOfH}
We now show that any reduct of $(H_n,E)$ which has $H$ among its relations, and hence by Lemma~\ref{lem:pp-reduce} every reduct which pp-defines $H$, has an NP-hard CSP. \mic{We first show hardness directly by reduction from positive 1-in-3-SAT; then, we provide another proof via \emph{h1 clone homomorphisms} which gives further insight into the mathematical structure of such reducts, and draws connections to the general dichotomy conjecture for reducts of finitely bounded homogeneous structures.}
\subsubsection{Reduction from positive 1-in-3-SAT} \mic{We start by showing hardness directly, which however does not tell us anything about the structure of the polymorphism clones of reducts which pp-define $H$.}
{
\begin{proposition}\label{prop:Hhardnew}
$\Csp(H_n, H)$ is NP-hard.
\label{prop:new-henson-hardness}
\end{proposition}
\begin{proof}
We reduce positive 1-in-3-SAT to $\Csp(H_n, H)$. Each variable $v$ in an instance $\phi$ of the former becomes two variables $v,v'$ in the corresponding instance $\psi$ of the latter. Each clause $(u,v,w)$ from $\phi$ becomes the constraint $H(u,u',v,v',w,w')$ in $\psi$. Setting a variable $v$ of $\phi$ to $1$ corresponds to making $(v,v')$ an edge: the graphs arising from assignments that set exactly one variable per clause to $1$ contain no triangle, and hence embed into $(H_n,E)$ by universality. Hence $\phi$ is a yes-instance of 1-in-3-SAT if and only if $\psi$ is a yes-instance of $\Csp(H_n, H)$, and the result follows.
\end{proof}
}
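To make the reduction concrete, the following minimal Python sketch computes $\psi$ from $\phi$; the representation of variables and constraints is an illustrative choice of ours and not taken from any existing library.
\begin{verbatim}
def one_in_three_to_csp(clauses):
    """Translate a positive 1-in-3-SAT instance, given as a list of
    clauses (u, v, w) over variable names, into an instance of
    CSP(H_n, H): each variable v becomes the two variables v, v',
    and each clause (u, v, w) becomes the constraint
    H(u, u', v, v', w, w')."""
    def primed(x):
        return x + "'"  # the companion variable v' of v

    variables, constraints = set(), []
    for (u, v, w) in clauses:
        scope = (u, primed(u), v, primed(v), w, primed(w))
        variables.update(scope)
        constraints.append(("H", scope))
    return variables, constraints

# Example: two clauses over x, y, z, u yield two H-constraints
# over the eight variables x, x', y, y', z, z', u, u'.
variables, constraints = one_in_three_to_csp([("x", "y", "z"),
                                              ("x", "y", "u")])
\end{verbatim}
A satisfying 1-in-3 assignment of $\phi$ then corresponds to the graph on the variables of $\psi$ whose only edges are the pairs $(v,v')$ with $v$ set to $1$; such a graph is triangle-free and hence embeds into $(H_n,E)$.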
\subsubsection{Clone homomorphisms}
\mic{We will now show another way to prove NP-hardness of $\text{\rm CSP}(H_n,H)$ via a structural property of $\Pol(H_n,H)$, using general results from~\cite{wonderland} (a strengthening of the structural hardness proof in~\cite{Topo-Birk}). This will allow us to show that the dichotomy for the Henson graphs is in line with the dichotomy conjecture for CSPs of reducts of finitely bounded homogeneous structures from~\cite{wonderland}, as well as with the earlier dichotomy conjecture for the same class, due to Bodirsky and Pinsker (cf.~\cite{BPP-projective-homomorphisms}), which has recently been proved equivalent to it~\cite{TwoDichotomyConjectures}.}
\begin{defn}\label{defn:clonehomo}
Let $\Gamma$ be a structure. A \emph{projective clone homomorphism} of $\Gamma$ (or $\Pol(\Gamma)$) is a mapping from $\Pol(\Gamma)$ onto its projections which
\begin{itemize}
\item preserves arities;
\item fixes each projection;
\item preserves composition.
\end{itemize}
A \emph{projective strong h1 clone homomorphism} of $\Gamma$ is a mapping as above, where the third condition is weakened to preservation of composition of any function in $\Pol(\Gamma)$ with projections only.
\end{defn}
Recall that $\Pol(\Gamma)$ is equipped with the topology of pointwise convergence, for any structure $\Gamma$.
\begin{thm}[from \cite{wonderland}]\label{thm:wonderland}
Let $\Gamma$ be a countable $\omega$-categorical structure in a finite relational language which has a uniformly continuous strong h1 clone homomorphism. Then $\text{\rm CSP}(\Gamma)$ is NP-hard.
\end{thm}
\begin{proposition}\label{prop:h-hard}
The structure $(H_n, H)$ has a uniformly continuous strong h1 clone homomorphism. Consequently, $\Csp(H_n, H)$ is NP-hard.
\end{proposition}
\begin{proof}
{
Note that $H$ consists of three orbits of $6$-tuples with respect to $(H_n,E)$. Let $a^1,a^2,a^3\in H$ be representatives of those three orbits. By reshuffling the $a^i$ we may assume that $\ENN(a_1,a_2)$, $\NEN(a_3,a_4)$, $\NNE(a_5,a_6)$ (where $a_i$ denotes the $i$-th row of the matrix $(a^1,a^2,a^3)$, for $1\leq i\leq 6$).
We claim that whenever $f\in\Pol(H_n,H)$ is ternary, and $b^1,b^2,b^3\in H$ are so that $\tp(b^1,b^2,b^3)=\tp(a^1,a^2,a^3)$, then $\tp(f(b^1,b^2,b^3))=\tp(f(a^1,a^2,a^3))$ in $(H_n,E)$. To see this, let $c^1,c^2,c^3\in H$ be so that $\tp(c^1,c^2,c^3)=\tp(b^1,b^2,b^3)$, and such that no entry of any $c^i$ is adjacent to any entry of any $b^j$ or $a^j$. Suppose that $f(b^1,b^2,b^3)$ and $f(a^1,a^2,a^3)$ do not have the same type; then one of them, say $f(a^1,a^2,a^3)$, does not have the same type as $f(c^1,c^2,c^3)$. Without loss of generality, this is witnessed on the first two components of the 6-tuples $f(c^1,c^2,c^3)$ and $f(a^1,a^2,a^3)$. For $1\leq i\leq 3$, consider the $6$-tuple $d^i:=(c_1^i,c_2^i,a_3^i,\ldots,a_6^i)$, i.e., in $a^i$ we replace the first two components by the components from $c^i$. Then $d^i\in H$, but $f(d^1,d^2,d^3)\notin H$, a contradiction.
Let $f\in\Pol(H_n,H)$. Then precisely one out of $(f(a_1),f(a_2))$, $(f(a_3),f(a_4))$, and $(f(a_5),f(a_6))$ is contained in $E$. If this is the case for the first pair, then it follows from the claim above that $f$ satisfies the three type conditions $f(E,N,N)=E$ and $f(N,E,N)=f(N,N,E)=N$; in the other two cases we obtain similar type conditions.
The mapping which sends every ternary $f\in\Pol(H_n,H)$ to the ternary projection which is consistent with the type conditions satisfied by $f$ (in the case considered above, the projection onto the first coordinate) is a strong h1 clone homomorphism from the ternary functions of $\Pol(H_n,H)$, and is uniformly continuous since the value of every $f$ under the mapping can be seen on any test matrix $(a^1,a^2,a^3)$ as above. It is easy to see and well-known that any such mapping from the ternary functions of a function clone extends to the entire clone.}
\end{proof}
\ignore{
\begin{proof}
The proof is a reduction from positive 1-in-3-3SAT (one of the hard
problems in Schaefer's classification; also see~\cite{GareyJohnson}).
Let $\Phi$ be an instance of positive 1-in-3-3SAT, that is, a
set of clauses, each having three positive literals.
We create from $\Phi$ an instance $\Psi$ of $\Csp(H_n, H)$ as follows.
For each variable $x$ in $\Phi$ we have a pair $u_x,v_x$ of
variables in $\Psi$. When $\{x,y,z\}$ is a clause in $\Phi$, then
we add the conjunct $H(u_x,v_x,u_y,v_y,u_z,v_z)$ to $\Psi$. Finally, we existentially quantify all variables of the conjunction in order to obtain a sentence.
Clearly, $\Psi$ can be computed from $\Phi$ in linear time.
Suppose now that $\Phi$ is satisfiable, i.e., there exists a mapping $s$ from the variables of $\Phi$ to $\{0,1\}$ such that in each clause exactly one of the literals is set to $1$; we claim that $(H_n, H)$ satisfies $\Psi$. To show this, let $F$ be the graph
whose vertices are the variables of $\Psi$, and that has an
edge between $u_x$ and $v_x$ if $x$ is set to 1 under the mapping $s$, and that has no other edges. By universality of $(H_n, E)$ we may assume that $F$ is a subgraph of it, since clearly $F$ does not contain any cliques of size greater than two. It is then enough to show that $F$ satisfies the conjunction of $\Psi$ in order to show that $(H_n, H)$ satisfies $\Psi$.
Indeed, let $H(u_x,v_x,u_y,v_y,u_z,v_z)$ be a clause from $\Psi$. By definition of $F$, the conjunction in the first line of the definition of $H$ is clearly
satisfied; moreover, from the disjunction in the remaining lines
of the definition of $H$ exactly one disjunct will be true,
since in the corresponding clause $\{x,y,z\}$ of $\Phi$ exactly
one of the values $s(x),s(y),s(z)$ equals $1$.
This argument can easily be inverted to see that every
solution to $\Psi$ can be used to define a solution to $\Phi$ (in which for a variable $x$ of $\Phi$ one sets $s(x)$ to $1$ iff in the solution to $\Psi$ there is an edge between $u_x$ and $v_x$).
\end{proof}
}
\subsection{Tractability of min}\label{subsect:tractabilityOfMin}
{
We now show that if a reduct $\Gamma$ of $(H_n,E)$ with finite relational signature has
a polymorphism which is of behaviour $\mini$,
then $\Csp(\Gamma)$ is in P.
We are going to apply Theorem~\ref{thm:maximal} below for the structure $\Delta := (H_n,E)$.
In the theorem, $\hat \Delta$ denotes the
expansion of $\Delta$ by the inequality relation $\neq$ and
by the complement $\hat R$
of each relation $R$ in $\Delta$.
\begin{theorem}[Proposition~14 in~\cite{Maximal}]\label{thm:maximal}
Let $\Delta$ be an $\omega$-categorical structure,
and let $\Gamma$ be a reduct of $\Delta$. If $\Gamma$ has a polymorphism $e$ which is
an embedding of $\Delta^2$ into $\Delta$,
and if $\Csp(\hat \Delta)$ is in P, then $\Csp(\Gamma)$
is in P as well.
\end{theorem}
\begin{prop}\label{prop:mintractable}
Let $\Gamma$ be a reduct of $(H_n,E)$ which has a polymorphism of behaviour $\mini$. Then $\text{\rm CSP}(\Gamma)$ is in P.
\end{prop}
\begin{proof}
To apply Theorem~\ref{thm:maximal} to $\Delta=(H_n,E)$, we first show that the CSP for
$\hat \Delta = (H_n,E,\hat E,\neq)$ can be solved
in polynomial time. But this is easy: an instance
of this CSP is satisfiable if and only if the following three conditions hold:
\begin{itemize}
\item there are no variables $x_1,\dots,x_n$ such that $E(x_i,x_j)$ is in the input for all distinct $i,j \in \{1,\dots,n\}$ (in particular, taking $x_1= \dots = x_n$, the input must not contain constraints of the form $E(x,x)$);
\item there is no constraint of the form $x \neq x$;
\item there is no pair of constraints $E(x,y)$ and $\hat E(x,y)$ on the same pair of variables.
\end{itemize}
Since $n$ is fixed, it is clear that these conditions can be checked in polynomial time.
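For illustration, the following sketch implements this check; the representation of instances as three collections of variable pairs, as well as all names, are ours and purely illustrative.
\begin{verbatim}
from itertools import combinations

def hat_delta_satisfiable(n, variables, E_cons, hatE_cons, neq_cons):
    # E_cons, hatE_cons, neq_cons: collections of pairs (x, y) of
    # variables constrained by E, by its complement, and by "distinct".
    E = {frozenset(p) for p in E_cons}
    # a loop E(x,x) is the degenerate clique x_1 = ... = x_n
    if any(len(p) == 1 for p in E):
        return False
    # no constraint of the form x != x
    if any(x == y for (x, y) in neq_cons):
        return False
    # no pair constrained by both E and its complement
    if any(frozenset(p) in E for p in hatE_cons):
        return False
    # no n variables pairwise constrained by E (a K_n); brute
    # force is polynomial since n is a fixed constant
    for clique in combinations(set(variables), n):
        if all(frozenset(e) in E for e in combinations(clique, 2)):
            return False
    return True
\end{verbatim}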
Now let $f\in\Pol(\Gamma)$ be a canonical binary injection of behaviour $\mini$. Each of the type conditions $f(N,{=})=E$ and $f({=},N)=E$ is impossible, because it would introduce a $K_n$. Further, $f(E,{=})=N$ or $f({=},E)=N$: by injectivity the only alternative is $f(E,{=})=f({=},E)=E$, in which case $f$ would map the product of two cliques of size $n-1$ injectively onto a clique of size $(n-1)^2\geq n$, again introducing a $K_n$. But then $g(x,y):=f(f(x,y),f(y,x))$ is of behaviour $\mini$ and $N$-dominated, and therefore an embedding from $(H_n,E)^2$ into $(H_n,E)$. Hence, $\text{\rm CSP}(\Gamma)$ is in P by Theorem~\ref{thm:maximal}.
\end{proof}
}
\section{Summary for the Henson graphs}\label{sect:summary_Henson}
\subsection{Proof of the complexity dichotomy}
We are ready to assemble our results to prove the dichotomy for the CSPs of reducts of Henson graphs.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Let $\Gamma$ be a reduct of $(H_n,E)$. If $\End(\Gamma)$ contains a
function whose image is an independent set,
then $\Csp(\Gamma)$ equals the CSP
for a reduct of $(H_n,=)$ by Lemma~\ref{lem:emptyendo}, and such CSPs are either in P or NP-complete~\cite{ecsps}.
Otherwise, $\End(\Gamma) = \overline{\Aut(H_n,E)}$ by Proposition~\ref{prop:redendo}.
Lemma~\ref{lem:neq-pp}
shows that $E$, $N$,
and $\neq$ are pp-definable
in $\Gamma$.
If also the relation $H$ is pp-definable in $\Gamma$, then
$\Csp(\Gamma)$ is NP-hard by Proposition~\ref{prop:h-hard} (or Proposition~\ref{prop:Hhardnew}); it is in NP since $\Gamma$ is a reduct of $(H_n,E)$, which is a finitely bounded homogeneous structure.
So let us assume that $H$ is not pp-definable in $\Gamma$; then Proposition~\ref{prop:higherArity} shows that $\Pol(\Gamma)$
contains a canonical binary injection $f$ of behaviour $\mini$. Hence, $\text{\rm CSP}(\Gamma)$ is in P by Proposition~\ref{prop:mintractable}.
\end{proof}
\subsection{Discussion} We can restate Theorem~\ref{thm:main} in a more detailed fashion as follows.
\begin{thm}\label{thm:main2}
Let $\Gamma$ be a reduct of a Henson graph $(H_n,E)$. Then one of the following holds.
\begin{itemize}
\item[(1)] $\Gamma$ has an endomorphism inducing an independent set, and is homomorphically equivalent to a reduct of $(H_n,=)$.
\item[(2)] $\Pol(\Gamma)$ has a uniformly continuous projective clone homomorphism.
\item[(3)] $\Pol(\Gamma)$ contains a binary canonical injection which is of behaviour $\mini$ and $N$-dominated.
\end{itemize}
Items~(2) and~(3) cannot simultaneously hold, and when $\Gamma$ has a finite relational signature, then $(2)$ implies NP-completeness and (3) implies tractability of its CSP.
\end{thm}
The first statement of Theorem~\ref{thm:main2} follows directly from the proof of Theorem~\ref{thm:main}, with the additional observation that the strong h1 clone homomorphism defined in Proposition~\ref{prop:h-hard} is in fact a clone homomorphism. When~(3) holds for a reduct, then~(2) cannot hold, because~(3) implies the existence of $f(x,y)\in\Pol(\Gamma)$ and $\alpha\in\overline{\Aut(\Gamma)}$ satisfying the equation $f(x,y)=\alpha f(y,x)$, an equation impossible to satisfy by projections. In fact, by further analyzing case~(1), using what is known about reducts of equality, one can easily show that it also implies either~(2) or~(3), so that we have the following.
\begin{cor}
For every reduct $\Gamma$ of a Henson graph $(H_n,E)$, precisely one of the following holds.
\begin{itemize}
\item $\Pol(\Gamma)$ has a uniformly continuous projective clone homomorphism.
\item There exist a binary $f\in\Pol(\Gamma)$ and $\alpha\in\overline{\Aut(\Gamma)}$ such that $f(x,y)=\alpha f(y,x)$.
\end{itemize}
When $\Gamma$ has a finite relational signature, then the first case implies NP-completeness and the second case implies tractability of its CSP.
\end{cor}
\section{Polymorphisms over homogeneous equivalence relations}\label{sect:polymorphisms_equivalence}
We now investigate polymorphisms of reducts of the graphs $(C_n^s,E)$, for $2\leq n,s\leq\omega$, with precisely one of $n,s$ equal to $\omega$. Recall from Section~\ref{sect:prelims} that we write $Eq$ for the reflexive closure of $E$, that $Eq$ is an equivalence relation with $n$ classes of size $s$, and that we denote its equivalence classes by $C_i$ for $0\leq i<n$.
Similarly to the case of the Henson graphs, we start with unary polymorphisms in Section~\ref{sect:eq:unary}, reducing the problem to model-complete cores.
We then turn to higher-arity polymorphisms; here, the organization somewhat differs from the case of the Henson graphs. The role of the NP-hard relation $H$ from the Henson graphs is now taken by the two sources of NP-hardness mentioned in the introduction: the first source being that factoring by the equivalence relation $Eq$ yields a structure with an NP-hard problem, and the second source being that restriction to some equivalence class yields a structure with an NP-hard problem. In Section~\ref{sect:eq:other}, we show that in fact, one of the two sources always applies for model-complete cores when $2<n<\omega$ or $2<s<\omega$. Consequently, only the higher-arity polymorphisms of the reducts of $(C_2^\omega,E)$ and $(C_\omega^2,E)$ require deeper investigation using Ramsey theory; this will be dealt with in Sections~\ref{sect:eq:2infinity} and~\ref{sect:eq:infinity2}, respectively.
\subsection{The unary case: model-complete cores}\label{sect:eq:unary} \
\begin{prop}\label{prop:EndEq1}
Let $\Gamma$ be a reduct of $(C_n^s, E)$, where $1\leq n,s\leq \omega$, and at least one of $n,s$ equals $\omega$.
Then either $\End(\Gamma)=\overline{\Aut(\Gamma)}=\overline{\Aut(C_n^s,E)}$, or $\End(\Gamma)$ contains an endomorphism onto a clique or an independent set.
\end{prop}
\begin{proof}
Assume that $\End(\Gamma)\neq \overline{\Aut(C_n^s,E)}$, so there is an endomorphism $f$ of $\Gamma$ violating either $E$ or $N$. \smallskip
\textbf{Case 0.} If $n=1$ or $s=1$ then the statement is trivial.\smallskip
\textbf{Case 1.} If $n=s=\omega$, so $Eq$ has infinitely many infinite classes, we can refer to \cite{equiv-csps}.\smallskip
\textbf{Case 2.} Assume that $1<n<\omega$ and $s=\omega$.
Suppose that $f$ violates $Eq$ and preserves $N$; then clearly, iterating applications of automorphisms of $(C_n^\omega,E)$ and $f$, we could send any finite subset of $C_n^\omega$ to an independent set in $(C_n^\omega,E)$, contradicting that the number of equivalence classes is the fixed finite number $n$.
If $f$ preserves both $Eq$ and $N$, then there exist $a,b$ with $E(a,b)$ and $f(a)=f(b)$. Via a standard iterative argument using local closure, one then sees that $f$ generates a function whose range is an independent set.
Therefore, it remains to consider the case where $f$ violates $N$. Fix $u,v\in C_n^\omega$ with $N(u,v)$ and $Eq(f(u),f(v))$.
\mic{Without loss of generality we may assume $u\in C_0$ and $v\in C_1$. By Proposition~\ref{prop:canfct-C-high-s-low-n}, we may assume that $f$ is canonical as a function from $(C_n^\omega, E, \prec, u,v)$ to $(C_n^\omega, E, \prec)$. Clearly, $f$ must preserve $Eq$ on each class $C_i$ with $i>1$, as otherwise canonicity would imply the existence of an infinite independent set in $(C_n^\omega,E)$. For the same reason, $f$ preserves $Eq$ on each of the four sets $C_0^-:=\{a\in C_0\;|\;a\prec u\}$, $C_0^+:=\{a\in C_0\;|\;u\prec a\}$, $C_1^-:=\{a\in C_1\;|\;a\prec v\}$, and $C_1^+:=\{a\in C_1\;|\;v\prec a\}$. If $N$ is not preserved between two sets among $S:=\{C_0^-,C_0^+,C_1^-,C_1^+,C_2,C_3,\ldots\}$, then we pick these two sets along with $n-2$ further sets from $S$ belonging to distinct equivalence classes. The union of this collection induces a copy of $(C_n^\omega,E)$ on which $f$ preserves $Eq$ but not $N$, and a standard iterative argument shows that $f$ generates a function whose range is contained in a single equivalence class. Hence, we may assume that $N$ is preserved between any two sets in $S$. Since $n$ is finite, this is only possible if $Eq$ is preserved on $C_0^-\cup C_0^+$ and on $C_1^-\cup C_1^+$. By composing $f$ with an automorphism of $(C_n^\omega,E)$, we may thus assume that $f[C_i^-\cup C_i^+]\subseteq C_i$ for $i\in\{0,1\}$ and that $f$ preserves the classes $C_i$ for $i>1$. Either $f(u)\notin C_0$ or $f(v)\notin C_1$. Assume without loss of generality that $f(u)\in C_i$ where $i>0$. Let $e$ be a self-embedding of $(C_n^\omega,E)$ with range $C_n^\omega\setminus\{v\}$. Then $f\circ e$ preserves all equivalence classes except for the element $u$, which it moves from $C_0$ to $C_i$. Iterating applications of $f\circ e$ and automorphisms, and using local closure, we obtain a function which joins $C_0$ and $C_i$. By further iteration, we obtain a function which joins all classes.
}
\smallskip
\textbf{Case 3.} Assume that $s<\omega$ and $n=\omega$.
Suppose that $f$ violates $N$ and preserves $Eq$; then, by local closure, $f$ generates a mapping onto a clique. If it preserves both $Eq$ and $N$, then as above, $f$ generates a function whose range is an independent set.
\mic{Therefore, we may assume that $f$ violates $Eq$. Fix $u,v\in C_\omega^s$ with $E(u,v)$ such that $N(f(u),f(v))$. By Proposition~\ref{prop:canfct-C-high-n-low-s}, we may assume that $f$ is canonical as a function from $(C_\omega^s, E, \prec, u,v)$ to $(C_\omega^s, E, \prec)$. If $f$ preserves $N$, then by local closure $f$ generates a function whose range induces an independent set. Otherwise, there exist $a,b\in C_\omega^s$ with $N(a,b)$ and $Eq(f(a),f(b))$. Without loss of generality, $a$ is not contained in the class of $u$ and $v$. Then $\{a'\in C_\omega^s\;|\; \tp(a',b)=\tp(a,b) \text{ in } (C_\omega^s, E, \prec, u, v)\}$ contains an infinite independent set $S$. By canonicity, we have $Eq(f(a'),f(b))$ for all $a'\in S$, so that $S$ is mapped into a single class. Since this class is finite, there exist distinct $a',a''\in S$ with $f(a')=f(a'')$, and so by local closure, we can generate a function from $f$ whose range is contained in a single equivalence class.}
\end{proof}
If the second case of
Proposition~\ref{prop:EndEq1} applies to a reduct $\Gamma$ of $(C_n^s,E)$, then $\Gamma$ is \mic{homomorphically equivalent to} a reduct of equality,
and its CSP \mic{is} understood. In the following sections, we investigate essential polymorphisms of reducts $\Gamma$ of $(C_n^s,E)$ satisfying $\End(\Gamma)=\overline{\Aut(\Gamma)}=\overline{\Aut(C_n^s,E)}$. In particular, such reducts are model-complete cores. The following proposition implies that in \mic{the situation where $s\geq 3$} the equivalence relation $Eq$ is invariant under $\Pol(\Gamma)$.
\begin{prop}\label{prop:eqpreserved}
Let $\Gamma$ be a reduct of $(C_n^s, E)$, where $1\leq n\leq \omega$ {and $3 \leq s \leq \omega$}.
If $\End(\Gamma)=\overline{\Aut(C_n^s,E)}$, then $E$, $N$ and $Eq$ are preserved by the polymorphisms of $\Gamma$.
\end{prop}
\begin{proof}
By Lemma~\ref{lem:arity-reduction}, the condition $\End(\Gamma)=\overline{\Aut(C_n^s,E)}$ implies that all polymorphisms of $\Gamma$ preserve $E$ and $N$, and hence also $Eq$ since $Eq(x,y)$ has the primitive positive definition $\exists z\; (E(x,z)\wedge E(z,y))$. \mic{Note that we need that the classes contain at least three elements for this definition to work.}
\end{proof}
{If $s=1$, then $Eq$ is pp-definable as equality, but if $s=2$ then $Eq$ is not in general pp-definable; this will account for an additional non-trivial (tractable) case in our analysis.}
Since in the situation of Proposition~\ref{prop:eqpreserved}, $Eq$ is an equivalence relation which is invariant under $\Pol(\Gamma)$, it follows that $\Pol(\Gamma)$ acts naturally on the equivalence classes of $Eq$: \mic{for $f(x_1,\ldots,x_n)\in\Pol(\Gamma)$ and classes $C_{i_1},\ldots,C_{i_n}$ of $Eq$, the class $f(C_{i_1},\ldots,C_{i_n})$ is then defined as the equivalence class of $f(c_{i_1},\ldots,c_{i_n})$, where $c_{i_1}\in C_{i_1},\ldots,c_{i_n}\in C_{i_n}$ are arbitrary.}
\mic{Moreover, if we fix any class $C$ of $Eq$ and expand the structure $\Gamma$ by the predicate $C$ to a structure $(\Gamma,C)$, then $\Pol(\Gamma, C)$ acts naturally on $C$ via restriction of its functions. Since $\Aut(C_n^s,E)$ can flip any two equivalence classes, all such actions are isomorphic, i.e., for any two classes $C,C'$ there exists a bijection $i\colon C\rightarrow C'$ such that $\Pol(\Gamma,C')=\{i(f(i^{-1}(x_1),\ldots,i^{-1}(x_n)))\;|\; f\in\Pol(\Gamma,C)\}$ and $\Pol(\Gamma,C)=\{i^{-1}(f(i(x_1),\ldots,i(x_n)))\;|\; f\in\Pol(\Gamma,C')\}$ (in fact any bijection $i$ works, since any permutation on $C$ extends to an automorphism of $(C_n^s,E)$ which fixes the elements of $C'$ pointwise). It is for this reason that in the following, it will not matter if we make statements about all such actions, or a single action.
}
In the following sections, we analyze these two types of actions.
\subsection{The case $2<n<\omega$ or $2<s<\omega$}\label{sect:eq:other} It turns out that in these cases, one of the two types of actions always yields hardness of the CSP. We are going to use the following fact about function clones on a finite domain.
\begin{prop}[from~\cite{HaddadRosenberg}]\label{prop:finite3}
Every function clone on a finite domain of at least three elements which contains all permutations as well as an essential function contains a unary constant function.
\end{prop}
We can immediately apply this fact to the action of $\Pol(\Gamma)$ on the equivalence classes, when there are more than two, but finitely many classes.
\begin{prop}\label{prop:n>2}
Let $\Gamma$ be a reduct of $(C_n^\omega, E)$, where $2<n<\omega$, such that $\End(\Gamma)=\overline{\Aut(C_n^\omega,E)}$. Then the action of $\Pol(\Gamma)$ on the equivalence classes of $Eq$ has no essential and no constant operation.
\end{prop}
\begin{proof}
The action has no constant operation because $N$ is preserved. Therefore, it cannot have an essential operation either, by Proposition~\ref{prop:finite3}.
\end{proof}
\mic{Similarly, we can apply the same fact to the action of $\Pol(\Gamma,C)$ on any equivalence class $C$ on $C_\omega^s$, when this class is finite and has more than two elements.}
\begin{prop}\label{prop:s>2}
Let $\Gamma$ be a reduct of $(C_\omega^s, E)$, where $2<s<\omega$, such that $\End(\Gamma)=\overline{\Aut(C_\omega^s,E)}$. \mic{Then for any equivalence class $C$ of $Eq$, the action of $\Pol(\Gamma,C)$ on $C$ has no essential and no constant operation.}
\end{prop}
\begin{proof}
The action has no constant operation because $E$ is preserved. Therefore, it cannot have an essential operation either, by Proposition~\ref{prop:finite3}.
\end{proof}\smallskip
\subsection{The case of two infinite classes: $n=2$ and $s=\omega$}~\label{sect:eq:2infinity} The following proposition states that either one of the two sources of hardness applies, or $\Pol(\Gamma)$ contains a ternary canonical function with a certain behaviour.
\begin{prop}\label{prop:2omega}
Let $\Gamma$ be a reduct of $(C_2^\omega, E)$ such that $\End(\Gamma)=\overline{\Aut(C_2^\omega,E)}$.
Then one of the following holds:
\begin{itemize}
\item the action of $\Pol(\Gamma)$ on the equivalence classes of $Eq$ has no essential function;
\item the action of $\Pol(\Gamma,{C})$ on some \mic{(or any)} class {$C$} has no essential function;
\item $\Pol(\Gamma)$ contains a canonical ternary injection of behaviour minority which is hyperplanely of behaviour balanced xnor.
\end{itemize}
\end{prop}
To prove the proposition, we need to recall a special case of Post's classical result about function clones acting on a two-element set. Comparing this statement with Proposition~\ref{prop:finite3} sheds light on why the case of this section is more involved than the cases of the preceding section.
\begin{proposition}[Post~\cite{Post}]\label{prop:Post}
Every function clone with domain $\{0,1\}$ containing both permutations of $\{0,1\}$ as well as an essential function contains a unary constant operation or the ternary addition modulo 2.
\end{proposition}
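For concreteness, the ternary addition modulo 2 is the function $m(x,y,z)=x+y+z \bmod 2$ on $\{0,1\}$; since $m(x,x,y)=m(x,y,x)=m(y,x,x)=y$ for all $x,y\in\{0,1\}$, it is precisely the Boolean minority operation.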
We moreover require the following result on polymorphism clones on a countable set.
\begin{proposition}[from~\cite{ecsps}]\label{prop:BK}
Every polymorphism clone on a countably infinite set which contains all permutations as well as an essential operation contains a binary injection.
\end{proposition}
We now combine these two results to a proof of Proposition~\ref{prop:2omega}.
\begin{proof}[Proof of Proposition~\ref{prop:2omega}]
\mic{Recall that the equivalence classes of $Eq$ are denoted by $C_0$ and $C_1$, and that $E$, $N$, and $Eq$ are preserved by the functions of $\Pol(\Gamma)$, by Proposition~\ref{prop:eqpreserved}.}
Suppose that the first statement of the proposition does not hold. Then by Proposition~\ref{prop:Post}, the action of $\Pol(\Gamma)$ on $\{C_0,C_1\}$ contains a unary constant operation, or a function which behaves like ternary addition modulo 2. The first case is impossible since the unary functions in $\Pol(\Gamma)$ preserve $N$, so the latter case holds and $\Pol(\Gamma)$ contains a ternary function $g$ which acts like $x+y+z$ modulo $2$ on the classes.
Suppose now in addition that the second statement of the proposition does not hold either, and \mic{fix some equivalence class $C$}. Since the action of $\Pol(\Gamma,{C})$ on $C$ contains all permutations of \mic{$C$}, by Proposition~\ref{prop:BK} it also contains a binary injection. Therefore $\Pol(\Gamma)$ contains for each $i\in\{0,1\}$ a binary function $f_i$ whose restriction to $C_i$ is an injection on this set.
We claim that there is a single function $f\in\Pol(\Gamma)$ which has this property for both $C_0$ and $C_1$. Note that since $N$ is preserved by $f_0$, it maps $C_1$ into itself. If $f_0$ is essential on $C_1$, then Proposition~\ref{prop:BK} implies that together with all permutations which fix the classes, it generates a function which is injective on $C_1$; this function is then injective on both classes $C_0, C_1$. So assume that $f_0$ is not essential on $C_1$, say without loss of generality that it depends only on the first coordinate (and injectively so, since it preserves $E$). Then $f_0(f_1(x,y),f_0(x,y))$ preserves both classes and is injective on each of them. In either case, we denote the resulting function by $f$.
By Proposition~\ref{prop:canfct}, we may assume that $f$ is canonical as a function from $(C_2^\omega,E,\prec)\times (C_2^\omega,E,\prec)$ to $(C_2^\omega,E,\prec)$. We claim that $f$ is also canonical as a function from $(C_2^\omega,E)\times(C_2^\omega,E)$ to $(C_2^\omega,E)$. To prove this, it suffices to show that if $u,v,u',v'\in C_2^\omega\times C_2^\omega$ are so that $(u,v)$ and $(u',v')$ have the same type in $(C_2^\omega,E)\times (C_2^\omega,E)$, then $(f(u),f(v))$ and $(f({u'}),f(v'))$ have the same type in $(C_2^\omega,E)$.
There exist $u'',v''\in C_2^\omega\times C_2^\omega$ such that $(u',v')$ and $(u'',v'')$ have the same type in $(C_2^\omega,E,\prec)\times (C_2^\omega,E,\prec)$ and such that ${Eq}{Eq}(u,u'')$ and ${Eq}{Eq}(v,v'')$; by the canonicity of $f$ as a function from $(C_2^\omega,E,\prec)\times (C_2^\omega,E,\prec)$ to $(C_2^\omega,E,\prec)$, it suffices to show that $(f(u),f(v))$ and $(f({u''}),f(v''))$ have the same type in $(C_2^\omega,E)$. Since $Eq$ is preserved, we have $Eq(f(u),f(u''))$ and $Eq(f(v),f(v''))$, and so $Eq(f(u),f(v))$ implies $Eq(f(u''),f(v''))$ and vice-versa, by the transitivity of $Eq$. Failure of canonicity can therefore only happen if $Eq(f(u),f(v))$ and $Eq(f(u''),f(v''))$, and precisely one of $f(u)=f(v)$ and $f(u'')=f(v'')$ holds, say without loss of generality the former. But then picking any $v'''\in C_2^\omega\times C_2^\omega$ distinct from $v$ such that ${Eq}{Eq}(v,v''')$ and such that the type of $(u,v)$ equals the type of $(u,v''')$ in $(C_2^\omega,E,\prec)\times (C_2^\omega,E,\prec)$ shows that $f(v)=f(u)=f(v''')$ by canonicity, contradicting the fact that $f$ is injective on each equivalence class.
We analyze the behaviour of the canonical function $f\colon (C_2^\omega,E)\times (C_2^\omega,E) \rightarrow(C_2^\omega,E)$. Because $E$ and $N$ are preserved, we have $f(E,E)=E$ and $f(N,N)=N$. Moreover, because $f$ is injective on the classes, and because $Eq$ is preserved, we have $f(=,E)=f(E,=)=E$.
We next claim that either $f(\cdot ,N)=N$ or $f(N,\cdot)=N$. Otherwise, there exist $Q,P\in\{E,=\}$ such that $f(Q,N)\neq N$ and $f(N,P)\neq N$. Pick $u,v,w\in (C_2^\omega)^2$ such that $\QN(u,v), \NPe(v,w)$, and $\NN(u,w)$. Then $Eq(f(u),f(w))$ and $N(f(u),f(w))$, a contradiction.
Assume henceforth without loss of generality that $f(N,\cdot)=N$.
Then $f(P,N)\neq N$ for $P\neq N$, because there are only two equivalence classes.
Moreover, $f(E,N)= {=}$ or $f(=,N)= {=}$ would imply that $f$ is not injective on the classes, so we have $f(E,N) = f(=,N) = E$.
Summarizing, $f$ is a binary injection of behaviour $p_1$, balanced in the first argument, and $E$-dominated in the second argument.
{
Let $q\in\Pol(\Gamma)$ be any ternary injection (for example, $(x,y,z)\mapsto f(x,f(y,z))$), and set $h(x,y,z):=f(g(x,y,z), q(x,y,z))$.
We now show that $h$ is canonical by establishing all type conditions satisfied by it. To this end, we use the behaviour of $f$ and the fact that $g$ acts like $x+y+z$ modulo $2$ on the classes. The latter fact implies that $g$ satisfies certain type conditions as well, as is easily verified: $g(Eq,Eq,N)=g(Eq,N,Eq)=g(N,Eq,Eq)=N$, $g(Eq,Eq,Eq)=Eq$, and moreover $g(Eq,N,N)=Eq$, $g(N,Eq,N)=Eq$, and $g(N,N,Eq)=Eq$. In the following table, $u,v,w\in (C_2^\omega)^2$ are three pairs for which $\eqeqeq(u,v,w)$ does not hold, and according to the type of $(u,v,w)$ in $(C_2^\omega,E)\times (C_2^\omega,E)$ the type of $h(u,v,w)$ in $(C_2^\omega,E)$ is computed. By the symmetry of the type conditions of $g$ listed above, and since all we use about $q$ is its injectivity, which guarantees that ${\neq}(q(u,v,w))$ holds, the value of a triple of types does not change if its components are permuted. Therefore, we list the possible types of $(u,v,w)$ only up to permutation.
\[
\begin{array}{ccc}
\mbox{$\tp(u,v,w)$} & \mbox{$\tp(g(u,v,w), q(u,v,w))$} & \mbox{$\tp(h(u,v,w))$} \\
\EEE & (E,\neq) & E\\
\NNN & (N,\neq) & N\\
\EEN & (N,\neq)& N\\
\ENN & (Eq,\neq) & E\\
\eqEE & (Eq,\neq) & E \\
\eqNN & (Eq,\neq) & E \\
\eqEN & (N,\neq) & N \\
\eqeqE & (Eq,\neq) & E \\
\eqeqN & (N,\neq) & N \\
\end{array}
\]
So $h$ acts like a minority which is hyperplanely of behaviour balanced xnor.
}
\ignore{
{
Set $q(x,y,z):=f(x,f(y,z))$, and $h(x,y,z):= g(q(x,y,z),q(y,z,x),q(z,x,y))$. We now establish some type conditions satisfied by these functions, using the behaviour of $f$ and the fact that $g$ acts like $x+y+z$ modulo $2$ on the classes. This fact implies that $g$ satisfies some type conditions as well, as is easily verified: $g(Eq,Eq,N)=g(Eq,N,Eq)=g(N,Eq,Eq)=N$, and moreover $g(Eq,N,N)=Eq$, $g(N,Eq,N)=Eq$, and $g(Eq,N,N)=Eq$. In the following table, $u,v,w\in (C_2^\omega)^2$ are three pairs, and according to the type of $(u,v,w)$ in $(C_2^\omega,E)\times (C_2^\omega,E)$ the type of $h(u,v,w)$ in $(C_2^\omega,E)$ is computed. By the symmetry of the function $g$ in its action on the classes, and by the cyclically symmetric construction of $q$, the result of a triple types does not change if its components are permuted cyclically, so that we need not list all possibilities.
\[
\begin{array}{ccc}
\mbox{$\tp(u,v,w)$} & \mbox{$\tp(q(u,v,w), q(v,w,u), q(w,u,v))$} & \mbox{$\tp(h(u,v,w))$} \\
\EEE & \EEE & E\\
\NNN & \NNN & N\\
\EEN & \EEN& N\\
\ENN & \ENN & E\\
\eqEE & \EEE & E \\
\eqNN & \ENN & E \\
\eqEN & \EEN & N \\
\eqNE & \ENE & N \\
\eqeqE & \EEE & E \\
\eqeqN & \EEN & N \\
\end{array}
\]
So $h$ acts like a minority which is hyperplanely of behaviour balanced xnor.
}
}
\end{proof}\smallskip
\subsection{The case of infinitely many classes of size two: $n=\omega$ and $s=2$}~\label{sect:eq:infinity2} \mic{Recall that in this situation, Proposition~\ref{prop:eqpreserved} does not apply, \mic{and $Eq$ might not be pp-definable in a reduct $\Gamma$ of $(C^2_\omega,E)$, even if $\Gamma$ is a model-complete core.} We first show that if this happens, then $\Pol(\Gamma)$ contains a certain binary canonical function (Proposition~\ref{prop:al-jabr}). We then show, in Proposition~\ref{prop:omega2}, that if $Eq$ does have a primitive positive definition in $\Gamma$, then either one of the two sources of hardness applies, or $\Pol(\Gamma)$ contains a ternary function of a certain behaviour.}
{
\begin{proposition}
Let $\Gamma$ be a reduct of $(C_\omega^2, E)$ such that $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, and such that $Eq$ is not pp-definable. Then $\Gamma$ enjoys a binary canonical polymorphism of behaviour $\mini$ which is $N$-dominated.
\label{prop:al-jabr}
\end{proposition}
\begin{proof}
By Theorem~\ref{conf:thm:inv-pol}, $\Gamma$ has a polymorphism $f$ which violates $Eq$. By the assumption, all endomorphisms preserve $E$ and $N$, and hence, by Lemma~\ref{lem:arity-reduction}, so does $f$. By the same lemma, because $Eq$ consists of two orbits with respect to the action of the automorphism group of $(C_\omega^2,E)$ on pairs, we may assume that $f$ is binary.
Recall that we denote the equivalence classes of $Eq$ by $C_i$, where $i\in\omega$. We refer to sets of the form $C_i \times C_j$ as \emph{squares}. Note that each square is the disjoint union of precisely two edges in the product graph $({C_\omega^2},E)^2$, and that each of these edges is mapped by $f$ to an edge in $({C_\omega^2},E)$, since $f$ preserves $E$. We say that $f$ \emph{splits} a square when it does not map this square into a single class; in this case, it necessarily maps it into two classes, by the previous observation.
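For instance, writing $C_0=\{u,v\}$ and $C_1=\{a,b\}$, the square $C_0\times C_1$ consists of the two edges $\{(u,a),(v,b)\}$ and $\{(u,b),(v,a)\}$ of the product graph, and $f$ splits this square precisely if it maps these two edges into distinct equivalence classes.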
By composing $f$ with automorphisms from the inside, we may assume that $f$ violates $Eq$ on $C_0$, i.e., it splits the square $C_0\times C_0$. Writing $C_0=\{u,v\}$, we may invoke Proposition~\ref{prop:canfct-C-high-n-low-s} and assume that $f$ is canonical when viewed as a function from $({C_\omega^2},E,\prec,u,v)\times ({C_\omega^2},E,\prec,u,v)$ to $({C_\omega^2},E)$. We write $S:= \bigcup_{i>0} C_i$.
We now distinguish two cases to show the following.\smallskip
{\bf Claim.} $f$ generates a binary function $f'$ which still splits $C_0\times C_0$ and satisfies either $f'(N,\cdot)=N$ or $f'(\cdot, N)=N$.\smallskip
{\bf Case 1:} We first assume that $f$ splits all squares within $S\times S$. In that case, by replacing $f(x,y)$ by the function $f(e(x),e(y))$, where $e$ is a self-embedding from $({C_\omega^2},E,\prec)$ onto the structure induced therein by $S$, we may assume that
$f$ is canonical as a function from $({C_\omega^2},E,\prec)\times ({C_\omega^2},E,\prec)$ to $({C_\omega^2},E)$ whilst splitting all squares; the constants $u,v$, which were introduced to witness the occurrence of a splitting, play no further role in the discussion of this case.
The function $g$ on $({C_\omega^2})^2$ sending every pair $(x,y)$ to the pair $(f(x,y),f(y,x))$ is canonical when viewed as a function
$$
({C_\omega^2},E,\prec)\times ({C_\omega^2},E,\prec)\rightarrow(({C_\omega^2})^2,\EE,\EN,\NE,\NN,\Eeq,\eqE,\Neq,\eqN)\; ,
$$
by the canonicity of $f$. In the following, we analyse the behaviour of $g$.
We start by observing that every square consists of an \emph{upward edge} and a \emph{downward edge} in $(C_\omega^2,E,\prec)^2$, the orientation being induced by the order $\prec$: by the upward edge $(p,q)\in \EE$ we refer to the one on which the order $\prec$ agrees in both coordinates between $p$ and $q$, and by the downward one we refer to the other edge in the square (on which $\prec$ disagrees between the coordinates). Let $U$ be the set of points contained in an upward edge, and $V$ the set of points contained in a downward edge, so that $({C_\omega^2})^2$ is the disjoint union of $U$ and $V$. We are going to verify the following properties of $g$:
\begin{itemize}
\item[(i)] Each of $g[U], g[V]$ is either contained in $U$ or in $V$.
\item[(ii)] $\Eeq(p,q)$, $\eqE(p,q)$, and $\NN(p,q)$ all imply $\NN(g(p),g(q))$, for all $p,q\in ({C_\omega^2})^2$.
\item[(iii)] $\NN(g(p),g(q))$ for all $p\in U$ and all $q\in V$.
\item[(iv)] On $U$ as well as on $V$, either $f(N,\cdot)=N$ or $f(\cdot,N)=N$ holds.
\end{itemize}
Property~(i) is a direct consequence of the canonicity of $g$ and the fact that it sends edges to edges in $({C_\omega^2},E)^2$, since $f$ preserves $E$. Property~(ii) follows since $f$ preserves $N$ and because $f$ splits all squares.
For~(iii), suppose that $\NN(g(p),g(q))$ does not hold for some $p\in U$ and $q\in V$. We cannot have $\EE(p,q)$ since $p$ is contained in an upward and $q$ in a downward edge, so by~(ii), $p$ and $q$ must be related by $N$ in one coordinate. Say we have $\Neq(p,q)$; the other situations are handled similarly. Pick $q'\in V$ distinct from $q$ such that the types of $(p,q)$ and $(p,q')$ in $({C_\omega^2},E,\prec)\times ({C_\omega^2},E,\prec)$ coincide. Then, by canonicity, we have that $g(p),g(q)$ are equivalent with respect to $Eq$ in the same coordinate as $g(p),g(q')$; hence, so are $g(q),g(q')$, by the transitivity of $Eq$. By canonicity, we then know that for the unique $q''\in V$ with $\Eeq(p,q'')$, we have that $g(q)$ and $g(q'')$ are equivalent in that very same coordinate, since the types of $(q,q')$ and either $(q,q'')$ or $(q'',q)$ agree. Again by transitivity, $g(p),g(q'')$ are then equivalent in that coordinate, contradicting~(ii).
Property~(iv) is clear from canonicity and since $f$ preserves $N$.
Now suppose that $f(N,\cdot)=N$ on both $U$ and $V$. Then the function $f'(x,y):=f(g(x,y))=f(f(x,y),f(y,x))$ has the same property by~(iii), and moreover it splits all squares, so we are done. If $f(\cdot,N)=N$ on both $U$ and $V$, then by symmetry $f'(x,y):=f(f(y,x),f(x,y))$ has the same property everywhere and splits all squares. It remains to consider the case where, say, $f(N,\cdot)=N$ on $U$ and $f(\cdot,N)=N$ on $V$.
Let $P_U$ and $P_V$ be the projections of $g[U]$ and $g[V]$ onto the first coordinate; by~(iii), the two sets are disjoint. Let $\alpha\in \Aut({C_\omega^2},E)$ be so that it preserves all equivalence classes, that it flips the two elements of each class on $P_U$ if and only if $g[U]\subseteq V$, and it flips the two elements of each class on $P_V$ if and only if $g[V]\subseteq U$. Denoting the identity function on $C_\omega^2$ by $\id$, we then have that $h(x,y):=(\alpha,\id)(g(x,y))$ has all of the above properties of $g$, but in addition satisfies $h[U]\subseteq U$ and $h[V]\subseteq V$. Moreover, $f'(x,y):=f\circ h$ satisfies $f'(N,\cdot)=N$. To see this, let $p,q\in ({C_\omega^2})^2$ be related by $N$ in the first coordinate. If $p,q\in U$, then $h(p), h(q)$ are related by $N$ in the first coordinate, and because $h[U]\subseteq U$, we have $N(f(h(p)), f(h(q)))$. When $p\in U$ and $q\in V$, then $\NN(h(p),h(q))$ by~(iii), and so $N(f'(p), f'(q))$ since $f$ preserves $N$. Finally, if $p,q \in V$, then $h(p),h(q)$ are related by $N$ in the second coordinate, and using $h[V]\subseteq V$, we see that $N(f'(p), f'(q))$. Since $f'$ moreover splits all squares by~(ii), we are done.\smallskip
{\bf Case 2:} If $f$ does not split all squares of equivalence classes in $S$, then by canonicity it splits no such square. Then $f(N,\cdot)=N$ or $f(\cdot,N)=N$ on $S$: otherwise, there would exist $p,q,p',q' \in S^2$ such that $p,q$ are related by $N$ in the first coordinate, $p',q'$ are related by $N$ in the second coordinate, and $Eq(f(p),f(q))$ and $Eq(f(p'),f(q'))$ hold. But then we could pick $q''\in S^2$ such that $\tp(p,q'')=\tp(p',q')$ in $({C_\omega^2},E,\prec,u,v)\times ({C_\omega^2},E,\prec,u,v)$, so that by canonicity we would have $Eq(f(p),f(q''))$. By transitivity, this would imply $Eq(f(q),f(q''))$, a contradiction since $\NN(q,q'')$ and since $f$ preserves $N$. We assume without loss of generality that $f(N,\cdot)=N$ on $S$.
We now distinguish two subcases to show that $f$ generates a binary function $f'$ which splits $C_0\times C_0$ and such that $f'(N,\cdot)=N$ everywhere, thus proving the claim.
{Case 2.1:} If $f(N,\cdot)=N$ on $S \times C_0$, then
by canonicity one easily concludes $N(f(p),f(q))$ for all $p\in C_0\times C_0$ and all $q\in S \times C_0$, so that altogether $f(N,\cdot)=N$ everywhere. Hence, setting $f':=f$ we have achieved our goal.
{Case 2.2:} If $f(N,\cdot)=N$ does not hold on $S \times C_0$, then there exists $c\in S\times C_0$ such that $N(f(c),f(q))$ for all $q\in S^2$. To see this, we can pick any $c\in S\times C_0$ so that there exists $q'\in S\times C_0$ related to $c$ by $N$ in the first coordinate. Then, if there existed $q\in S^2$ with $Eq(f(c),f(q))$, we would have $Eq(f(q),f(q'))$; replacing $q'$ by $q''\in S\times C_0$ such that $\tp(c,q')=\tp(c,q'')$ in $({C_\omega^2},E,\prec,u,v)\times({C_\omega^2},E,\prec,u,v)$ and such that $q',q''$ are related by $N$ in both coordinates, this would yield a contradiction to the preservation of $N$.
We are going to check the following properties of the function $g$ on $({C_\omega^2})^2$ defined by $(x,y)\mapsto (x,f(x,y))$.
\begin{itemize}
\item[(i)] Whenever $p,q\in ({C_\omega^2})^2$ are related by $N$ in the first coordinate, then so are $g(p), g(q)$.
\item[(ii)] If $p\in ({C_\omega^2})^2$, and $q\in S^2$ is related to $p$ by $N$ in the first coordinate, then $\NN(g(p),g(q))$.
\item[(iii)] Writing $a:=(u,u)$ and $b:=(v,u)$, we have $\Eeq(a,b)$ and $\EN(g(a),g(b))$.
\end{itemize}
Property~(i) is obvious from the definition of $g$. Property~(ii) is clear if $p\in {C_\omega^2}\times C_0$, since in that case $\NN(p,q)$ and since $f$ preserves $N$. If $p\in S^2$, then it follows from the fact that $f(N,\cdot)=N$ on $S$. Finally, consider the case where $p\in C_0\times S$. If we had $Eq(f(p),f(q))$, then picking $q'\in S^2$ such that $\Neq(q,q')$ and such that $\tp(p,q)=\tp(p,q')$ in $({C_\omega^2},E,\prec)\times ({C_\omega^2},E,\prec)$, we would get $Eq(f(p),f(q'))$ by canonicity, and so $Eq(f(q),f(q'))$, contradicting that $f(N,\cdot)=N$ on $S$. Property~(iii) just restates that $f$ splits $C_0\times C_0$.
Let $e_1,e_2$ be self-embeddings of $({C_\omega^2},E,\prec)$ such that the range of $(e_1,e_2)\circ g$
is contained in $S\times {C_\omega^2}$ and such that $(e_1,e_2)\circ g(a)=c$. With this choice, $g':=g\circ (e_1,e_2)\circ g$ clearly also satisfies~(i) and~(ii). Moreover, since $(e_1,e_2)\circ g(a)=c$, and since $\EN((e_1,e_2)\circ g(a),(e_1,e_2)\circ g(b))$, we have $(e_1,e_2)\circ g(b)\in S^2$; this implies $\EN(g'(a),g'(b))$, since $N(f(c),f(q))$ for all $q\in S^2$. Hence, $g'$ still satisfies~(iii).
We then pick a pair $(e_1',e_2')$ of self-embeddings of $({C_\omega^2},E,\prec)$ with $(e_1',e_2')\circ g'(b)=c$, and consequently $(e_1',e_2')\circ g'(a)\in S^2$. Then $g'':=g\circ (e_1',e_2')\circ g'=g\circ (e_1',e_2')\circ g\circ (e_1,e_2)\circ g$ has the property that whenever $p,q\in ({C_\omega^2})^2$ are related by $N$ in the first coordinate, then $\NN(g''(p),g''(q))$; this is because every point went
through $S^2$ in one of the applications of $g$, and because of~(ii). Moreover, we have $\EN(g''(a),g''(b))$.
Picking another pair $(e_1'',e_2'')$ of embeddings so that $(e_1'',e_2'')\circ g''(a)=c$, we have that $f'(x,y):=f\circ (e_1'',e_2'')\circ g''(x,y)$ preserves $N$ in the first coordinate and splits $C_0\times C_0$, finishing our proof of the claim.\\
{\bf Wrap-up.} Replacing $f$ by $f'$ from the claim, we thus henceforth assume that $f(N,\cdot)=N$. For the function $h$ on $({C_\omega^2})^2$ defined by $(x,y)\mapsto (f(x,y),f(y,x))$, we are going to prove the following properties.
\begin{itemize}
\item[(i)] If $p,q\in ({C_\omega^2})^2$ are related by $N$ in some coordinate, then $h(p), h(q)$
are related by $N$ in the same coordinate.
\item[(ii)] There are $p',q'\in ({C_\omega^2})^2$ with $\Eeq(p',q')$ such that $h(p'), h(q')$ are related
by $N$ in the first coordinate.
\item[(iii)] There are $p'',q''\in ({C_\omega^2})^2$ with $\EN(p'',q'')$ such that $\NN(h(p''), h(q''))$.
\item[(iv)] There are $p''',q''' \in ({C_\omega^2})^2$ with $\eqN(p''',q''')$ such that $\NN(h(p'''), h(q'''))$.
\end{itemize}
Property~(i) is obvious because $f(N,\cdot)=N$, and~(ii) follows because $f$ splits a square. To see~(iii), we first observe that there exist $p,q\in ({C_\omega^2})^2$ with equal first coordinate and such that $h(p),h(q)$ are related by $N$ in the first coordinate: simply pick $p,p'$ with $\eqE(p,p')$ within the square that is split; then $\NN(h(p),h(p'))$, and so for any $q\in ({C_\omega^2})^2$ with $\eqN(p,q)$ and $\eqN(p',q)$ we have that $h(q)$ must be related by $N$ in the first coordinate to either $h(p)$ or $h(p')$, showing the observation. Now fix $p,q$ with this property, and pick $w\in ({C_\omega^2})^2$ with
$\EN(p,w)$ and $\EN(q,w)$. Then $h(w)$ is related to $h(p)$ and $h(q)$ by $N$ in the
second coordinate by~(i), but also necessarily to one of them in the first coordinate, showing~(iii). The proof of~(iv) is similar.
Using these properties, we construct, by composition and local closure, a function $h'$ on $({C_\omega^2})^2$ which yields $\NN(h'(p),h'(q))$ for all $p,q\in ({C_\omega^2})^2$ which are related by $N$ in at least one coordinate, as well as for $(p,q)=(p',q')$, the pair from~(ii). To do this, set $(p_0,q_0):=(p',q')$, and let $\{(p_i,q_i)\;|\; i>0\}$ be an enumeration of all pairs in $({C_\omega^2})^2$ which are related by $N$ in at least one coordinate. We proceed inductively, constructing functions $h_0,h_1,\ldots$ with the property that $\NN(h_n(p_j),h_n(q_j))$ for all $j<n$. For the base case, we set $h_0:=h$. Suppose we have already constructed $h_n$. Then $h_n(p_n)$ and $h_n(q_n)$ are related by $N$ in at least one coordinate. If $\NN(h_n(p_n),h_n(q_n))$, then we set $h_{n+1}:=h_n$. If $\EN(h_n(p_n),h_n(q_n))$, then let $(\alpha,\beta)$ be a pair of automorphisms of $({C_\omega^2},E)$ such that $(\alpha,\beta)(h_n(p_n))=p''$ (from~(iii)), and $(\alpha,\beta)(h_n(q_n))=q''$. Setting $h_{n+1}:=h\circ (\alpha,\beta)\circ h_n$ then yields the desired property for $(p_n,q_n)$. If $\NE(h_n(p_n),h_n(q_n))$, then $\EN(h_n(q_n),h_n(p_n))$, and we proceed as before. The cases $\eqN(h_n(p_n),h_n(q_n))$ and $\Neq(h_n(p_n),h_n(q_n))$ are treated similarly, using~(iv) instead of~(iii). By local closure, we obtain the function $h'$.
The function $g_0:=f\circ h'$ then satisfies $g_0(N,\cdot)=g_0(\cdot,N)=N$, and moreover satisfies $N(g_0(p'),g_0(q'))$, since $\NN(h'(p'),h'(q'))$ and since $f$ preserves $N$.
Let $\{(p_i,q_i)\;|\; i\geq 0\}$ be an enumeration of all pairs in $({C_\omega^2})^2$ related by $\Eeq$, where $(p_0,q_0)=(p',q')$. As above, we obtain, by composition and local closure, for every $i\geq 0$ a function $g_i$ which satisfies $g_i(N,\cdot)=g_i(\cdot,N)=N$ and such that $N(g_i(p_i),g_i(q_i))$. Setting $t_0:=g_0$, and $t_{n+1}:=f(t_n(x,y),g_{n+1}(x,y))$ for all $n\geq 0$, we obtain binary functions $t_0,t_1,\ldots$ satisfying $t_i(N,\cdot)=t_i(\cdot,N)=N$ and with the property that $N(t_i(p_j),t_i(q_j))$ for all $j\leq i$. By local closure, we obtain a binary function $t$ satisfying $t(N,\cdot)=t(\cdot,N)=N$ and $N(t(p),t(q))$ for all $p,q\in ({C_\omega^2})^2$ with $\Eeq(p,q)$. This function clearly has behaviour $\mini$ and is $N$-dominated in the first argument; since it preserves $E$, these properties also imply that it is $N$-dominated in the second argument.
\end{proof}
}
\mic{We now turn to the case where $Eq$ is pp-definable in a reduct $\Gamma$, so that $\Pol(\Gamma)$ acts on its equivalence classes.}
\begin{prop}\label{prop:omega2}
Let $\Gamma$ be a reduct of $(C_\omega^2, E)$ such that $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$ \mic{and such that $Eq$ is pp-definable}.
Then one of the following holds:
\begin{itemize}
\item the action of $\Pol(\Gamma)$ on the equivalence classes of $Eq$ has no essential function;
\item the action of $\Pol(\Gamma,\mic{C})$ on some (or any) equivalence class $C$ of $Eq$ has no essential function;
\item $\Pol(\Gamma)$ contains a ternary canonical function $h$ such that $h(N,\cdot,\cdot)=h(\cdot,N,\cdot)=h(\cdot,\cdot,N)=N$ which behaves like a minority on $\{E,=\}$ (so $h(E,=,=)=E$ etc.).
\end{itemize}
\end{prop}
To prove the proposition, we are again going to make use of Propositions~\ref{prop:Post} and~\ref{prop:BK}, and the following lemma. \mic{We are going to say that a ternary function $f$ on $C_\omega^2$ behaves like $x+y+z$ modulo 2 on an equivalence class $C=\{0,1\}$ of $Eq$ if the restriction of $f$ to $C$ is of the form $\alpha\circ g_C$, where $\alpha\in\Aut(C_\omega^2,E)$ and $g_C$ is the ternary function on $C$ defined by $g_C(x,y,z)=x+y+z$ modulo 2. Note that this property can be expressed in terms of type conditions satisfied on $C$: namely, $f$ behaves like $x+y+z$ modulo 2 on $C$ if and only if it satisfies $f(E,E,E)=E$, $f(E,E,=)=f(E,=,E)=f(=,E,E)={=}$, and $f(E,=,=)=f(=,=,E)=f(=,E,=)={E}$ on $C$. In other words, $f$ behaves like a minority on the types $\{E,=\}$.
}
{
\begin{lem}\label{lem:propagating+}
Let $\Gamma$ be a reduct of $(C_\omega^2, E)$ such that $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, {$Eq$ is pp-definable}, and $\Pol(\Gamma)$ contains a ternary function which behaves like $x+y+z$ modulo 2 on some equivalence class. Then $\Pol(\Gamma)$ contains a ternary function which behaves like $x+y+z$ modulo 2 on all equivalence classes.
\end{lem}
\begin{proof}
Let $C_0, C_1,\ldots$ be the equivalence classes of $Eq$. We show, by induction over $n$, that for all $n\in\omega$, $\Pol(\Gamma)$ contains a function $g_n$ which equals $x+y+z$ modulo 2 on each class $C_0,\ldots,C_n$. The lemma then follows by a standard compactness argument: by $\omega$-categoricity, there exist $\alpha_n\in\Aut(C_\omega^2,E)$, for $n\in\omega$, such that $(\alpha_n\circ g_n)_{n\in\omega}$ converges to a function $g\in\Pol(\Gamma)$ (cf.~for example the proof of Proposition~\ref{prop:redendo}). That function then has the desired property: for every $i\in\omega$, there exists $n>i$ such that $g$ agrees with $\alpha_n \circ g_n$ on $C_i$, and hence it behaves like $x+y+z$ modulo 2 on $C_i$.
For the base case $n=0$, the statement follows from the assumption of the lemma. Now suppose it holds for $n$. By the assumption that $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, we may assume that $g_n(x,x,x)=x$ for all $x\in C_0\cup \cdots \cup C_{n+1}$, and in particular that $g_n$ preserves each of the classes $C_0, \ldots, C_{n+1}$. Consequently, the restriction of $g_n$ to any $C_i$ with $0\leq i\leq n$ actually equals the function $x+y+z$ modulo 2 on that class.
Assume first that $g_n$ is not essential on $C_{n+1}$; by composing it with an automorphism of $(C_\omega^2,E)$, we may assume it is a projection, without loss of generality to the first coordinate, on $C_{n+1}$. Let $g_n'\in\Pol(\Gamma)$ be a ternary function which has the properties of $g_n$, but with the roles of $C_n$ and $C_{n+1}$ switched. Then $$g_{n+1}(x,y,z):=g_n(g_n'(x,y,z),g_n'(y,z,x),g_n'(z,x,y))$$ has the desired property.
Next assume that $g_n$ is essential on $C_{n+1}$, and write $g_n'$ for its restriction to $C_{n+1}$. Let $\alpha\in \Aut(C_\omega^2,E)$ flip the two elements of $C_{n+1}$, and fix all other elements of $C_\omega^2$; then the restriction $\alpha'$ of $\alpha$ to $C_{n+1}$ is the only non-trivial permutation of $C_{n+1}$. By Proposition~\ref{prop:Post}, there exists a term $h'(x,y,z)$ over $\{g_n',\alpha'\}$ which induces either a constant function or the function $x+y+z$ modulo 2 on $C_{n+1}$. The term $h(x,y,z)$ obtained from $h'$ by replacing all occurrences of $\alpha'$ by $\alpha$, and all occurrences of $g_n'$ by $g_n$ induces a ternary function on $C_\omega^2$ whose restriction to $C_{n+1}$ equals $h'$. Since $h$ preserves $E$, it cannot be constant on $C_{n+1}$, and hence it is equal to $x+y+z$ modulo 2 on $C_{n+1}$. For each $0\leq i\leq n$, since $g_n$ equals $x+y+z$ modulo 2 on $C_i$, and since $\alpha$ is the identity on $C_i$, it is easy to see that the term function $h$, restricted to $C_i$, is of the form $\beta'\circ g$, where $\beta'$ is a permutation on $C_i$ and $g$ either equals $x+y+z$ modulo 2 or a projection on $C_i$. Hence, iterating the preceding case we obtain the desired function.
\end{proof}
}
\begin{proof}[Proof of Proposition~\ref{prop:omega2}] Suppose that neither of the first two items hold.
Then by Proposition~\ref{prop:BK}, $\Pol(\Gamma)$ contains a binary function $f$ acting injectively on the classes of $Eq$; moreover, using Proposition~\ref{prop:Post} and since $E$ is preserved, we see that $\Pol(\Gamma)$ contains a ternary function which equals $x+y+z$ modulo 2 on some equivalence class. Hence, by Lemma~\ref{lem:propagating+} it contains a ternary function $g$ which behaves like $x+y+z$ modulo 2 on all equivalence classes.
By Proposition~\ref{prop:canfct}, we may assume that $f$ is canonical as a function from $(C_\omega^2,E,\prec)^2$ to $(C_\omega^2,E,\prec)$. As in Proposition~\ref{prop:2omega}, this implies that $f$ is also canonical as a function from $(C_\omega^2,E)^2$ to $(C_\omega^2,E)$.
\mic{Observe first that since $f$ acts injectively on the classes of $Eq$, we have that whenever $p,q\in (C_\omega^2)^2$ are not equivalent with respect to $Eq$ in at least one coordinate, then $Eq(f(p),f(q))$ cannot hold. In other words, we have the type conditions $f(N,Eq)=f(Eq,N)=f(N,N)=N$.}
\mic{We next argue that on each class $C$, $f$ is essentially unary. Write $C=\{0,1\}$. Since $E$ is preserved, we have $E(f(0,0),f(1,1))$; similarly, we know that $E(f(0,1),f(1,0))$. Since $f$ moreover preserves $Eq$, the four values are contained in a single class. Hence either $f(0,1)=f(0,0)$ and $f(1,0)=f(1,1)$, or $f(1,0)=f(0,0)$ and $f(0,1)=f(1,1)$. In the first case, the restriction of $f$ to $C$ only depends on its first argument, and in the second case on its second argument. Assume without loss of generality the former, i.e., $f(E,=)=E$ and $f(=,E)={=}$ on $C$. Then, by canonicity, it satisfies these type conditions everywhere.}
The function $q(x,y,z):=f(x,f(y,z))$ satisfies $q(N,\cdot,\cdot)=q(\cdot,N,\cdot)=q(\cdot,\cdot,N)=N$, and $q(P,Q,R)=P$ when $P,Q,R\in\{E,=\}$.
Consider the function $t$ on $(C_\omega^2)^3$ which sends every triple $(x,y,z)$ to the triple $(q(x,y,z),q(y,z,x),q(z,x,y))$. Then,
whenever $P,Q,R\in\{E,=\}$ and $p,q\in (C_\omega^2)^3$ satisfy ${P}{Q}{R}(p,q)$, then also ${P}{Q}{R}(t(p),t(q))$, by the properties of $q$. Moreover, whenever $p,q\in (C_\omega^2)^3$ are related by $N$ in at least one coordinate, then $\NNN(t(p),t(q))$. By the latter property of $t$, there exist $\alpha,\beta,\gamma\in\overline{\Aut(C_\omega^2,E)}$ such that the function
$$(\alpha,\beta,\gamma)\circ t(x,y,z):= (\alpha(q(x,y,z)),\beta(q(y,z,x)),\gamma(q(z,x,y)))$$ sends any product $C_i\times C_j\times C_k$ of three equivalence classes into the cube $C^3$ of a single equivalence class; moreover, this function still has the properties of $t$ mentioned above. Set $h(x,y,z):=g\circ (\alpha,\beta,\gamma)\circ t(x,y,z)=
g(\alpha (q(x,y,z)),\beta(q(y,z,x)),\gamma(q(z,x,y)))$.
Then $h(N,\cdot,\cdot)=h(\cdot,N,\cdot)=h(\cdot,\cdot,N)=g(N,N,N)=N$. Moreover, recall that because $g$ behaves like $x+y+z$ modulo 2 on each equivalence class, it behaves like a minority on $\{E,=\}$ on each equivalence class. Hence, when $P,Q,R\in\{E,=\}$, then since $h(P,Q,R)=g(P,Q,R)$, since $(\alpha,\beta,\gamma)\circ t(x,y,z)$ maps the product of three equivalence classes into the cube of a single equivalence class, and since $g$ behaves like a minority on $\{E,=\}$ on each equivalence class, we have that $h$ behaves like a minority on $\{E,=\}$.
\end{proof}
\section{Polynomial-time tractable CSPs over homogeneous equivalence relations}\label{sect:CSP_equivalence}
\mic{We provide two polynomial-time algorithms: the first one is designed for the $\text{\rm CSP}$s of reducts of $(C_2^\omega,E)$ with a ternary injective canonical polymorphism of behaviour minority which is hyperplanely of behaviour \mic{balanced xnor} (Section~\ref{thm:C-low-2-high-omega-P}), and the second one for reducts of $(C_\omega^2,E)$ with a ternary canonical polymorphism $h$ such that $$h(N,\cdot,\cdot)=h(\cdot,N,\cdot)=h(\cdot,\cdot,N)=N$$ and which behaves like a minority on $\{=,E\}$ (Section~\ref{thm:C-low-omega-high-2-P}).}
\subsection{Two infinite classes}
\label{thm:C-low-2-high-omega-P}
We consider the case where
$\Gamma$ is a reduct of $(C_2^\omega,E)$ which is preserved by a canonical
injection $h$ of behaviour minority which is
hyperplanely of behaviour \mic{balanced xnor} (cf.~Proposition~\ref{prop:2omega}). Our algorithm for
CSP$(\Gamma)$ is an adaptation
of an algorithm for reducts of the random graph~\cite{BodPin-Schaefer}.
We first reduce CSP$(\Gamma)$ to the CSP of a structure that we call the \emph{injectivization} of $\Gamma$, which can then be reduced to a {tractable} CSP over a Boolean domain.
\begin{definition}\label{def:injective}
A tuple is called \emph{injective} if its entries are pairwise distinct.
A relation is called \emph{injective} if all its tuples are injective.
A structure is called \emph{injective} if all its relations are injective.
\end{definition}
\begin{definition}\label{def:inj}
We define \emph{injectivizations} for relations, atomic formulas, and structures.
\begin{itemize}
\item Let $R$ be any relation. Then the \emph{injectivization of $R$}, denoted by $\inj(R)$, is the (injective) relation consisting of all injective tuples of $R$.
\item Let $\phi(x_1,\ldots,x_n)$ be an atomic formula in the language of $\Gamma$, where $x_1,\ldots,x_n$ is a list of the variables that appear in $\phi$. Then
the \emph{injectivization of $\phi(x_1,\dots,x_n)$} is the formula $R^{\inj}_\phi(x_1,\ldots,x_n)$, where $R^{\inj}_\phi$ is a relation symbol which stands for the injectivization of the relation defined by $\phi$.
\item The \emph{injectivization} of a relational structure $\Gamma$, denoted by $\inj(\Gamma)$, is the relational structure with the same domain as $\Gamma$ whose relations are the injectivizations of the atomic formulas over $\Gamma$, i.e., the relations $R^{\inj}_\phi$.
\end{itemize}
\end{definition}
To state the reduction to the CSP of an injectivization, we also need the following operations on instances of $\Csp(\Gamma)$.
Here, it will be convenient to view instances of $\Csp(\Gamma)$ as primitive positive $\tau$-sentences.
\begin{definition}
Let $\Phi$ be an instance of $\Csp(\Gamma)$. Then
the \emph{injectivization of $\Phi$}, denoted by $\inj(\Phi)$, is the instance
of $\Csp(\inj(\Gamma))$ obtained from $\Phi$ by replacing each conjunct
$\phi(x_1,\dots,x_n)$ of $\Phi$
by $R^{\inj}_\phi(x_1,\ldots,x_n)$.
\end{definition}
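As an illustration (under the assumption that $Eq$ is among the relations of $\Gamma$): the atomic formula $\phi(x,y)=Eq(x,y)$ defines the relation $Eq$, whose non-injective tuples are exactly those of the form $(a,a)$, so that $R^{\inj}_\phi=E$; accordingly, a conjunct $Eq(x,y)$ of an instance $\Phi$ is replaced by $E(x,y)$ in $\inj(\Phi)$.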
We say that a constraint in an instance of $\Csp(\Gamma)$ is \emph{false} if it defines an empty relation in $\Gamma$.
Note that a constraint
$R(x_1,\dots,x_k)$ might be false
even if the relation
$R$ is non-empty (simply because some of the variables from $x_1,\dots,x_k$
might be equal).
The proof of the following statement is identical to the proof of the corresponding statement for the random graph in~\cite{BodPin-Schaefer}, with $(C^\omega_2,Eq)$ taking the role of the random graph.
\begin{proposition}[{Lemma 71 in \cite{BodPin-Schaefer}}]
\label{prop:inj}
Let $\Gamma$ be preserved by
a binary injection $f$ of {behaviour} $E$-dominated projection. Then $\Csp(\Gamma)$ can be
reduced to $\Csp(\inj(\Gamma))$
in polynomial time.
\end{proposition}
{ We are now in a position to give our reduction.
\begin{proposition}
Let $\Gamma$ be a reduct of $(C_2^\omega, E)$ such that $\End(\Gamma)=\overline{\Aut(C_2^\omega,E)}$ and $\Gamma$ has a ternary injection $f$ which behaves like a minority. Further, let $\Delta$ be $(\{0,1\};0,1,\{(x,y,z):x+y+z=1 \bmod 2\})$. There is a polynomial time reduction from $\Csp(\inj(\Gamma))$ to $\Csp(\Delta)$.
\label{prop:bool}
\end{proposition}
\begin{proof}
Firstly, we note that from $f$ one can derive a polymorphism $f'$ of the two-element structure obtained from $\Gamma$ by factoring by the equivalence classes of $Eq$, which behaves like the ternary minority function on domain $\{0,1\}$.
Take an instance $\phi$ of $\Csp(\inj(\Gamma))$ and build an instance $\phi'$ of $\Csp(\Delta)$ in the following manner. The variable set remains the same, and every constraint $R(x_1,\ldots,x_k)$ of $\phi$ becomes $R'(x_1,\ldots,x_k)$ in $\phi'$, where $R'$ consists of all tuples $(a_1,\ldots, a_k)\in\{0,1\}^k$ for which there is $(b_1,\ldots,b_k)\in R$ with $b_i \in C_{a_i}$ for all $i$. From Proposition~\ref{prop:Post}, through the presence of $f'$ and the lack of a polymorphism of $\Gamma$ whose action on the equivalence classes is constant, we can assume that the relations of $\phi'$ are preserved by $x+ y + z \bmod 2$, and can thus be taken to be pp-definable in the relation $(x + y + z =1 \bmod 2)$ (see \mbox{e.g.} \cite{Creignou}).
Suppose $\phi$ is a yes-instance of $\Csp(\inj(\Gamma))$; then $\phi'$ is a yes-instance of $\Csp(\Delta)$, by application of the polymorphism $f'$.
Suppose $\phi'$ is a yes-instance of $\Csp(\Delta)$, with solution $s\colon V\rightarrow \{0,1\}$. Then we can build a satisfying assignment for $\phi$ by choosing any injective function from $V$ to $C_2^\omega$ which sends each variable $x$ into the class $C_{s(x)}$.
\end{proof}
}
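Since the relations of $\phi'$ are pp-definable in $(x+y+z=1 \bmod 2)$, the instance $\phi'$ can, possibly after introducing auxiliary variables for the existentially quantified ones, be rewritten as a system of linear equations over the two-element field and decided by Gaussian elimination. The following sketch of this standard final step is ours; equations are encoded as 0/1 coefficient vectors together with a right-hand side, so that, e.g., the constraint $x+y+z=1 \bmod 2$ on three variables becomes the pair $((1,1,1),1)$.
\begin{verbatim}
def solve_gf2(equations, num_vars):
    # Gaussian elimination over GF(2).  Each equation is a pair
    # (coeffs, rhs) with coeffs a 0/1 list of length num_vars.
    # Returns a satisfying 0/1 assignment, or None if none exists.
    rows = [(list(coeffs), rhs) for coeffs, rhs in equations]
    pivots, r = [], 0
    for col in range(num_vars):
        pivot = next((i for i in range(r, len(rows))
                      if rows[i][0][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[r][0])],
                           rows[i][1] ^ rows[r][1])
        pivots.append((r, col))
        r += 1
    if any(rhs and not any(coeffs) for coeffs, rhs in rows):
        return None          # a row reads 0 = 1: inconsistent
    assignment = [0] * num_vars
    for row, col in pivots:  # free variables default to 0
        assignment[col] = rows[row][1]
    return assignment
\end{verbatim}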
\ignore{
\begin{proof}
In the main loop, when the algorithm detects a constraint that is false and therefore rejects, then $\phi$ cannot hold in $\Gamma$, because
the algorithm only contracts variables $x$ and $y$
when $x=y$ in all solutions to $\phi$ -- and contractions are the
only modifications performed on the input formula $\phi$.
So suppose that the algorithm does not reject, and let $\psi$ be
the instance of $\Csp(\Gamma)$ computed by the
algorithm when it reaches the final line of the algorithm.
By the observation we just made it suffices to show that
$\psi$ holds in $\Gamma$
if and only if $\inj(\psi)$ holds in $\inj(\Gamma)$.
It is clear that when $\inj(\psi)$ holds
in $\inj(\Gamma)$ then $\psi$ holds in $\Gamma$ (since the constraints in $\inj(\psi)$ have been made stronger).
We now prove that if $\psi$ has a solution $s$ in $\Gamma$,
then there is also a solution for $\inj(\psi)$ in $\inj(\Gamma)$.
Let $s'$ be any mapping from the variable set $V$ of $\psi$
to $C^\omega_2$ such that for all distinct $x,y \in V$ we have that
\begin{itemize}
\item if $E(s(x),s(y))$ then $E(s'(x),s'(y))$;
\item if $N(s(x),s(y))$ then $N(s'(x),s'(y))$;
\item if $s(x)=s(y)$ then $E(s'(x),s'(y))$.
\end{itemize}
By universality of $(C^\omega_2,E)$, such a mapping exists. We claim that $s'$ is a solution to $\psi$
in $\Gamma$. Since $s'$ is injective, it is then clearly
also a solution to $\inj(\psi)$.
To prove the claim, let $\gamma$ be a constraint of $\psi$ on the variables
$x_1,\dots,x_k \in V$. Since we are at the final stage of the algorithm, we can conclude that
$\gamma(x_1,\dots,x_k)$ does not imply equality of any of the variables $x_1,\dots,x_k$,
and so there is for all $1 \leq i < j \leq k$ a tuple $t^{(i,j)}$ such that $R(t^{(i,j)})$ and
$t^{(i,j)}_i \neq t^{(i,j)}_j$ hold. Since $\gamma(x_1,\ldots,x_k)$ is preserved by a binary injection, it is also preserved by injections of arbitrary arity (it is straightforward to build such terms from a binary injection). Application of an injection of arity $\binom{k}{2}$ to the tuples $t^{(i,j)}$ shows that $\gamma(x_1,\ldots,x_k)$ is satisfied by an injective tuple $(t_1,\dots,t_k)$.
Consider the mapping $r \colon \{x_1,\dots,x_k\} \rightarrow D$
given by $r(x_l) := f(s(x_l),t_l)$.
This assignment has the property that for all $i,j$,
if $E(s(x_i),s(x_j))$, then $E(r(x_i),r(x_j))$,
and if $N(s(x_i),s(x_j))$ then $N(r(x_i),r(x_j))$, because $f$ is of type $p_1$.
Moreover, if $s(x_i)=s(x_j)$ then $E(r(x_i),r(x_j))$ because $f$ is $E$-dominated in the second argument.
Therefore, $(s'(x_1),\dots,s'(x_k))$ and $(r(x_1),\dots,r(x_k))$ have the same type in $(C^\omega_2,E)$.
Since $f$ is a polymorphism of $\Gamma$, we have that $(r(x_1),\dots,r(x_k))$ satisfies the constraint $\gamma(x_1,\ldots,x_k)$. Hence, $s'$ satisfies
$\gamma(x_1,\ldots,x_k)$ as well.
We conclude that $s'$ satisfies all the constraints of $\psi$, proving our claim.
\end{proof}
}
\ignore{
To reduce the CSP for injective structures to Boolean CSPs,
we need the following definitions.
Let $t$ be a $k$-tuple of distinct vertices of $(C^\omega_2,E)$, and let $q$ be ${k}\choose{2}$.
Then $\Bool(t)$ is the $q$-tuple $(a_{1,2},a_{1,3},\dots,a_{1,k}$,
$a_{2,3},\dots,a_{k-1,k}) \in \{0,1\}^q$
such that $a_{i,j}=0$ if $N(t_i,t_j)$
and $a_{i,j} = 1$ if $E(t_i,t_j)$.
If $R$ is a $k$-ary injective relation, then $\Bool(R)$ is the $q$-ary Boolean relation $\{ \Bool(t) \; | \; t \in R \}$.
Note that if an injective relation $R$ is preserved by
a ternary operation of type minority,
then $B:=\Bool(R)$ is preserved
by the ternary minority function.
It is well-known that $B$ then has
a definition by a set of linear equations
over $\{0,1\}$ \cite{Schaefer}.
\begin{definition}\label{def:bool}
Let $\Phi$ be an instance
of $\Gamma$ with variables $V$.
Then $\Bool(\Phi)$ is the linear equation
system with variables ${V \choose 2}$
(that is, two-element subsets $\{u,v\}$ of $V$, denoted by $uv$)
that contains
\begin{enumerate}
\item for each conjunct
$\phi(x_1,\dots,x_k)$ of $\Phi$
all linear equations with variables
${\{x_1,\dots,x_k\} \choose 2}$ that
define $\Bool(R^{\inj}_{\phi})$, and
\item
all equations of the form
$xy + yz + xz = 1$ for all distinct $x,y,z \in V$.
\end{enumerate}
\end{definition}
}
\ignore{
\begin{proposition}\label{prop:bool}
The formula $\inj(\Phi)$ is satisfiable over $\inj(\Gamma)$ if and only if $\Bool(\Phi)$ is satisfiable over $\{0,1\}$.
\end{proposition}
\begin{proof}
Let $V$ be the variables of $\inj(\Phi)$
so that $V \choose 2$ are the variables
of $\Bool(\Phi)$.
First suppose that $\inj(\Phi)$ has
a solution $s \colon V \to C^\omega_2$; we may choose $s$ injective. Then $s' \colon {V \choose 2} \to \{0,1\}$ defined by $s'(xy) := 0$ if
$N(s(x),s(y))$ and $s'(xy) := 1$ if
$E(s(x),s(y))$ is a solution to $\Bool(\Phi)$. Conversely, if $s' \colon {V \choose 2} \to \{0,1\}$ is a solution to
$\Bool(\Phi)$, then define
$s \colon V \to C^\omega_2$ as follows.
Choose $x \in V$ and $v \in C^\omega_2$ arbitrarily, and define $s(x) := v$.
For any $y \in V \setminus \{x\}$,
if $s'(xy) = 1$, then pick
$u \in C^\omega_2$ with $E(u,v)$
and if $s'(xy) = 0$, then pick $u \in C^\omega_2$ with $N(u,v)$, and set $s(y) := u$; in both cases,
choose values from $C^\omega_2$ that are
distinct from all previously
picked values from $C^\omega_2$. We claim
that $s$ satisfies all conjuncts $\phi$
of $\inj(\Phi)$. Let $R$ be the relation
defined by $\phi$ {and let $\oplus$ signify the Boolean xor.}
Then it suffices to show that $s$
satisfies all expressions of the
form $E(x_1,y_1) \oplus \cdots \oplus
E(x_k,y_k)$ or $\neg(E(x_1,y_1) \oplus \cdots \oplus
E(x_k,y_k))$ that correspond to the
Boolean equations defining $\Bool(R^{\inj}_\phi)$.
But
\begin{align*}
& E(s(x_1),s(y_1)) \oplus \cdots \oplus
E(s(x_k),s(y_k)) \\
\Leftrightarrow \; & (s'(xx_1)+s'(xy_1) = 1) \oplus \cdots \oplus (s'(xx_k) + s'(xy_k) = 1) && \text{(by definition of $s$)} \\
\Leftrightarrow \; & s'(x_1y_1) \oplus \cdots \oplus s'(x_ky_k) && \text{(by (2) in Definition~\ref{def:bool})}
\end{align*}
which is true because $s'$ satisfies
the equations from $(1)$ of Definition~\ref{def:bool}.
\end{proof}
}
\begin{corollary}
Let $\Gamma$ be a reduct of $(C_2^\omega,E)$ which is preserved by a
ternary injection $h$ of behaviour minority which is hyperplanely of behaviour balanced xnor. Then
$\Csp(\Gamma)$ can be
solved in polynomial time.
\end{corollary}
\begin{proof}
Note that the binary function $h(x,y,y)$ is of type $p_1$ and $E$-dominated in the second argument. So the statement
is a consequence of Propositions~\ref{prop:inj} and~\ref{prop:bool}.
\end{proof}
\subsection{Infinitely many classes of size two}\label{thm:C-low-omega-high-2-P}
We now prove tractability of $\Csp(\Gamma)$ for reducts $\Gamma$ of $(C^2_\omega,Eq)$ in a finite language
such that $\Pol(\Gamma)$ contains a ternary canonical function $h$ such that $$h(N,\cdot,\cdot)=h(\cdot,N,\cdot)=h(\cdot,\cdot,N)=N$$ which behaves like a minority on $\{=,E\}$.
\begin{prop}\label{prop:syntax}
A relation $R$ with a first-order definition
in $(C^2_\omega,Eq)$ is preserved by $h$
if and only if it
can be defined by a conjunction of formulas of the form
\begin{align}\label{eq:one}
N(x_1,y_1) \vee \cdots \vee N(x_k,y_k) \vee Eq(z_1,z_2)
\end{align}
for $k \geq 0$, or of the form
\begin{align}
N(x_1,y_1) \vee \cdots \vee N(x_k,y_k) \, \vee & \, (|\{i \in S : x_i \neq y_i\}| \equiv_2 p) \label{eq:two}
\end{align}
where $p \in \{0,1\}$ and $S \subseteq \{1,\dots,k\}$.
\end{prop}
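For example, the relation $E$ is definable in this form: $E(z_1,z_2)$ is equivalent to the conjunction of $Eq(z_1,z_2)$ (an instance of~$(\ref{eq:one})$ with $k=0$) and $N(x_1,y_1) \vee (|\{i \in \{1\} : x_i \neq y_i\}| \equiv_2 1)$ with $x_1 = z_1$ and $y_1 = z_2$ (an instance of~$(\ref{eq:two})$ with $k=1$, $S=\{1\}$ and $p=1$): the first conjunct forces $Eq(z_1,z_2)$, in which case the literal $N(z_1,z_2)$ fails and the parity condition forces $z_1 \neq z_2$.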
The proof is inspired by a proof for tractable phylogeny constraints~\cite{Phylo-Complexity}.
\begin{proof}
For the backwards implication, it
suffices to verify that formulas
of the form in the statement are preserved
by $h$. Let $u,v,w \in R$, and
let $r := h(u,v,w)$. Assume that
$R$ has a definition by a formula
$\phi$ of the form as described in the statement. Suppose for contradiction that
$r$ does not satisfy $\phi$.
For any conjunct of $\phi$ violated by $r$, of the form
$N(x_1,y_1) \vee \dots \vee N(x_k,y_k) \vee \theta$, the tuple
$r$ must therefore satisfy
$Eq(x_1,y_1) \wedge \cdots \wedge Eq(x_k,y_k)$.
Since $h$ has the property
that $h(N,\cdot,\cdot) = h(\cdot,N,\cdot) = h(\cdot,\cdot,N) = N$, this means
that each of $u$, $v$, and $w$
also satisfies this formula. This in
turn implies that $u$, $v$, and $w$
must satisfy the formula $\theta$.
It suffices to prove that $r$ satisfies $\theta$, too, since
this contradicts the assumption that
$r$ does not satisfy $\phi$.
Suppose first that $\theta$
is of the form $Eq(z_1,z_2)$.
In this case, $r$ must also satisfy $Eq(z_1,z_2)$ since $h$ preserves $Eq$. So assume that $\theta$ is of the form
$|\{i \in S: x_i \neq y_i\}| \equiv_2 p$ for $S \subseteq \{1,\dots,k\}$ and $p \in \{0,1\}$. Since each of $u$, $v$, $w$
satisfies this formula and $h$ behaves like a minority on $\{E,=\}$, we have
that $r$ satisfies this formula, too.
For the forwards implication,
let $R$ be a relation with a first-order definition in $(C^2_\omega,Eq)$ that is preserved by $h$. Define $\sim$ to be the equivalence relation on $(C^2_\omega)^n$ where $a \sim b$ iff $Eq(a_i,a_j) \Leftrightarrow Eq(b_i,b_j)$ for all $i,j \leq n$. Note that $h$ preserves $\sim$.
For $a \in (C^2_\omega)^n$, let $R_a$ be the relation
that contains all $t \in R$ with $t \sim a$. Let $\psi_a$ be the formula
$$\bigwedge_{i < j \leq n, Eq(a_i,a_j)} Eq(x_i,x_j)$$
and $\psi_a'$ be the formula
$$\bigwedge_{i < j \leq n, N(a_i,a_j)} N(x_i,x_j) \, .$$
Note that $t \in (C^2_\omega)^n$ satisfies
$\psi_a \wedge \psi_a'$ if and only if
$t \sim a$, and hence
a tuple from
$R$
is in $R_a$ if and only it satisfies $\psi_a \wedge \psi_a'$.
Pick representatives $a_1,\dots,a_m$ for all orbits of $n$-tuples in $R$.
\medskip \noindent
{\bf Claim 1.} $\bigvee_{i \leq m} (\psi_{a_i} \wedge \psi'_{a_i})$
is equivalent to a conjunction of formulas of the forms $(\ref{eq:one})$ and $(\ref{eq:two})$ from the statement.
Rewrite the formula into a formula $\psi_0$
in conjunctive normal form of minimal size where every literal is either of the form $Eq(x,y)$ or of the form $N(x,y)$. Suppose that $\psi_0$ contains a conjunct with literals $Eq(a,b)$ and $Eq(c,d)$.
Since $\psi_0$ is of minimal size there
exists $r \in (C^2_\omega)^n$ that satisfies
$Eq(a,b)$ and none of the other literals in the conjunct, and similarly there exists $s \in (C^2_\omega)^n$ that
satisfies $Eq(c,d)$ and none of the other literals. By assumption, $r \sim r' \in R$ and $s \sim s' \in R$.
Since $R$ is preserved by $h$,
we have $t' := h(r',s',s') \in R$. Set $t := h(r,s,s)$.
Then $t \sim t'$ since $h$ preserves $\sim$, and hence $t$ satisfies $\psi_0$.
But $t$ satisfies none of the literals in the conjunct, a contradiction.
Hence, every conjunct of $\psi_0$ contains at most one $Eq$-literal; conjuncts containing an $Eq$-literal are of the form $(\ref{eq:one})$, and purely disjunctive $N$-clauses are of the form $(\ref{eq:two})$ with $S = \emptyset$ and $p = 1$.
Let $t \in (C^2_\omega)^n$, set $l := {n \choose 2}$, and let $i_1j_1,\dots,i_lj_l$ be an enumeration of ${\{1,\dots,n\} \choose 2}$. The tuple
$b \in \{0,1\}^{n \choose 2}$
with $b_s = 1$ if $t_{i_s} \neq t_{j_s}$
and $b_s = 0$ otherwise is called
the \emph{split vector} of $t$.
We associate to $R_a$ the Boolean relation $B_a$ consisting of all split
vectors of tuples in $R_a$.
Since $R$ and $R_a$ are preserved by $h$, the relation $B_a$ is preserved by
the Boolean minority operation, and
hence has a definition by a Boolean system of equations. Therefore,
there exists a conjunction $\theta_a$
of equations of the form
$|\{s \in S : x_{i_s} \neq x_{j_s}\}| \equiv_2 p$, where $S \subseteq \{1,\dots,l\}$ and $p \in \{0,1\}$, such that $\theta_a \wedge \psi_a \wedge \psi_a'$ defines $R_a$.
\medskip
\noindent
{\bf Claim 2.} The following formula
$\phi$ defines $R$:
$$\phi := \psi_0 \wedge \bigwedge_{a \in \{a_1,\dots,a_m\}} (\neg \psi_a \vee \theta_a)$$
It is straightforward to see that this
formula can be rewritten into a formula of the required form: $\neg \psi_a$ is equivalent to the disjunction of the formulas $N(x_i,x_j)$ over all $i < j$ with $Eq(a_i,a_j)$, and distributing this disjunction over the conjuncts of $\theta_a$ yields formulas of the form $(\ref{eq:two})$.
To prove the claim, we first show
that every $t \in R$ satisfies $\phi$.
Clearly, $t$ satisfies $\psi_0$.
Let $a \in \{a_1,\dots,a_m\}$ be arbitrary; we have to verify that $t$ satisfies $\neg \psi_a \vee \theta_a$. If there are indices $i,j \in \{1,\dots,n\}$ such that $N(t_i,t_j)$
and $Eq(a_i,a_j)$, then $t$ satisfies $\neg \psi_a$. We are left with the case that for all $i,j \in \{1,\dots,n\}$ if $Eq(a_i,a_j)$ then $Eq(t_i,t_j)$.
In order to show that $t$ satisfies
$\theta_a$, it suffices to show that
there exists a $t' \in R_a$ such
that
for all $i,j \leq n$ with $Eq(a_i,a_j)$
we have $t_i = t_j$ iff $t'_i = t'_j$.
Note that $t' := h(a,a,t) \sim a$
since $h(N,\cdot,\cdot) = h(\cdot,N,\cdot) = h(\cdot,\cdot,N) = N$ and $h$ preserves $Eq$.
Moreover, $t' \in R$ and thus $t' \in R_a$. Finally, for all $i,j \leq n$ with $Eq(a_i,a_j)$
we have $t_i = t_j$ iff $t'_i = t'_j$
because $h$ behaves as a minority on $\{E,=\}$. Hence, $t$ satisfies $\phi$.
\medskip
Next, we show that every tuple
$t$ that satisfies $\phi$ is in $R$.
Since $t$ satisfies $\psi_0$ we have that $t \sim a$
for some $a \in \{a_1,\dots,a_m\}$.
Thus,
$t \models \psi_a \wedge \psi_a'$.
By assumption, $t$ satisfies
$\neg \psi_a \vee \theta_a$
and hence
$t \models \theta_a$. Therefore,
$t \in R_a$ and in particular $t \in R$.
\end{proof}
\begin{prop}\label{prop:algorithm}
There is a polynomial-time algorithm
that decides whether a given set
$\Phi$
of formulas as in the statement of
Proposition~\ref{prop:syntax}
is satisfiable.
\end{prop}
\begin{proof}
Let $X$ be the set of variables that appear in $\Phi$.
Create
a graph $G$ with vertex set $X$
that contains an edge
between $z_1$ and $z_2$
if
$\Phi$ contains a formula
of the form $Eq(z_1,z_2)$.
Eliminate all literals of the form
$N(x_i,y_i)$ in formulas from $\Phi$ when $x_i$ and $y_i$ lie in the same connected component of $G$.
Repeat this procedure until no
more literals get removed.
We then create a Boolean system of equations
$\Psi$
with variable set ${X \choose 2}$ as follows.
For each formula
$|\{i \in S \mid x_i \neq y_i \}| \equiv_2 p$
we add the Boolean equation
$\sum_{i \in S} x_iy_i = p$.
We additionally add, for all
distinct $x,y,z \in X$, the equation $xy+yz = xz$.
If the resulting system of equations $\Psi$ does not have
a solution
over $\{0,1\}$, reject the instance.
Otherwise accept.
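To illustrate, consider the instance consisting of the two formulas $Eq(x,y)$ and $N(x_1,y_1) \vee (|\{i \in \{1\} : x_i \neq y_i\}| \equiv_2 1)$ with $x_1 = x$ and $y_1 = y$. The graph $G$ has the single edge $\{x,y\}$, so the literal $N(x,y)$ is eliminated, and $\Psi$ consists of the single equation $xy = 1$. This system is satisfiable and the algorithm accepts; indeed, mapping $x$ and $y$ to the two distinct points of one equivalence class satisfies both formulas.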
\medskip
To see that this algorithm is correct,
observe that the literals that have been removed in the first part of the algorithm are false in all solutions,
so removing them from the disjunctions does not change the set of solutions.
\medskip
If the algorithm rejects,
then there is indeed no solution to
$\Phi$.
To see this, suppose that $s \colon X \to C^2_\omega$ is a solution to $\Phi$.
Define
$s' \colon {X \choose 2} \to \{0,1\}$ as follows.
Note that $s$ maps each connected component $C$ of $G$ into a single
equivalence class, since every edge of $G$ comes from a conjunct
$Eq(z_1,z_2)$ which $s$ must satisfy. Fix one element $v_C$ of this class,
for $x \in C$ set $g(x) := 0$ if $s(x) = v_C$ and $g(x) := 1$ otherwise,
and define $s'(xy) := g(x) + g(y) \bmod 2$.
The equations $xy + yz = xz$ then hold by construction. Moreover, for every
pair $\{x_i,y_i\}$ that appears in some parity equation in $\Psi$, a literal
$N(x_i,y_i)$ has been deleted in the first phase of the algorithm, so $x_i$
and $y_i$ lie in the same connected component of $G$; hence $s(x_i)$ and
$s(y_i)$ lie in a common class of size two, and $s'(x_iy_i) = 1$ if and only
if $s(x_i) \neq s(y_i)$. Since $s$ satisfies the corresponding formula
$|\{i \in S : x_i \neq y_i\}| \equiv_2 p$, the equation
$\sum_{i \in S} x_iy_i = p$ holds as well.
Hence $s'$ is a satisfying assignment for $\Psi$, and the algorithm accepts.
\medskip
We still have to show that there exists
a solution to $\Phi$ if the algorithm
accepts. Let $s' \colon {X \choose 2} \to \{0,1\}$ be a solution to $\Psi$.
For each connected component
$C$ in the graph $G$ at the final stage of the algorithm we pick two values
$a_C,b_C \in C^2_\omega$ such that $Eq(a_C,b_C)$, and such that
$N(a_C,d)$ and $N(b_C,d)$ for all
previously picked values $d \in C^2_\omega$.
Moreover, for each connected component $C$ of $G$ we pick a representative $r_C$.
Define $s(r_C) := a_C$,
and for $x \in C$ define $s(x) := a_C$
if $s'(xr_C) = 0$,
and $s(x) := b_C$ otherwise.
Then $s$ satisfies all formulas in $\Phi$
that still contain disjuncts of the form
$N(x_i,y_i)$, since these disjuncts are satisfied by $s$.
Formulas of the form
$|\{i \in S : x_i \neq y_i\}| \equiv_2 p$
are satisfied, too: for each $i$ the variables $x_i$ and
$y_i$ lie in a common connected component $C$,
and hence $s(x_i) \neq s(y_i)$ iff
$s'(x_ir_C) \neq s'(y_ir_C)$,
which is the case iff
$s'(x_ir_C) + s'(y_ir_C) = s'(x_iy_i) = 1$ because of the additional equations we have added to $\Psi$. Therefore,
$|\{i \in S : s(x_i) \neq s(y_i)\}| \equiv_2 \sum_{i \in S} s'(x_iy_i) \equiv_2 p$,
since $s'$ satisfies the equation $\sum_{i \in S} x_iy_i = p$ of $\Psi$.
\end{proof}
\begin{corollary}\label{cor:tractability}
Let $\Gamma$ be a reduct of $(C^2_\omega,Eq)$ with finite signature and such that
$\Pol(\Gamma)$ contains a ternary canonical injection $h$ as described in the beginning of Section~\ref{thm:C-low-omega-high-2-P}. Then $\Csp(\Gamma)$ is in P.
\end{corollary}
\begin{proof}
Direct consequence of Propositions~\ref{prop:syntax} and~\ref{prop:algorithm}.
\end{proof}
\section{Summary for the homogeneous equivalence relations}\label{sect:summary_equivalence}
\begin{thm}
\label{thm:equivalence-above-2}
Let $\Gamma$ be a finite signature reduct of $(C_n^s,E)$, where either $3\leq n<\omega$ or $3\leq s<\omega$, and either $n$ or $s$ equals $\omega$. Then one of the following holds.
\begin{itemize}
\item[(1)] $\Gamma$ is homomorphically equivalent to a reduct of $(C_n^s,=)$, and $\text{\rm CSP}(\Gamma)$ is in P or NP-complete by~\cite{ecsps}.
\item[(2)] $\End(\Gamma)=\overline{\Aut(C_n^s,E)}$, $\Pol(\Gamma)$ has a uniformly continuous h1 clone homomorphism, and $\text{\rm CSP}(\Gamma)$ is NP-complete.
\end{itemize}
\end{thm}
\begin{proof}
If $\Gamma$ has an endomorphism whose image is a clique or an independent set, then $\Gamma$ is homomorphically equivalent to a reduct of $(C_n^s,=)$ and the complexity classification is known from~\cite{ecsps}. Otherwise, courtesy of Propositions~\ref{prop:EndEq1} and~\ref{prop:eqpreserved}, we may assume that $\End(\Gamma)=\overline{\Aut(C_n^s,E)}$, and that there is a pp-definition of $E$, $N$, and $Eq$ in $\Gamma$.
In the first case, that $Eq$ has a finite number $n\geq 3$ of classes, we use Proposition~\ref{prop:n>2} to see that the action of $\Pol(\Gamma)$ on the classes of $Eq$ has no essential and no constant operation. It follows that this action has a uniformly continuous projective clone homomorphism as in Definition~\ref{defn:clonehomo}. The mapping which sends every function in $\Pol(\Gamma)$ to the function it becomes in the action on the classes of $Eq$ is a uniformly continuous clone homomorphism~\cite{Topo-Birk}, and hence the original action of $\Pol(\Gamma)$ has a uniformly continuous projective clone homomorphism as well. This implies NP-completeness of $\text{\rm CSP}(\Gamma)$ (Theorem~\ref{thm:wonderland}).
In the second case, that $Eq$ has classes of finite size $s \geq 3$, we use Proposition~\ref{prop:s>2} to see that the action of $\Pol(\Gamma,C)$ on some equivalence class $C$ has no essential and no constant operation, and hence has a uniformly continuous projective clone homomorphism. Picking any $c\in C$, we have that $\Pol(\Gamma,c)\subseteq \Pol(\Gamma,C)$ since $C$ is pp-definable from $c$ and $Eq$. Consequently, $\Pol(\Gamma,c)$ has a uniformly continuous projective clone homomorphism as well. Because $\Gamma$ is a model-complete core, this implies that $\Pol(\Gamma)$ has a uniformly continuous projective h1 clone homomorphism~\cite{wonderland}, and hence $\text{\rm CSP}(\Gamma)$ is NP-complete by Theorem~\ref{thm:wonderland}.
\end{proof}
\begin{thm}
\label{thm:C-low-2-high-omega-complexity}
Suppose $\Gamma$ is a finite signature reduct of $(C_2^\omega,E)$. Then one of the following holds.
\begin{itemize}
\item[(1)] $\Gamma$ is homomorphically equivalent to a reduct of $(C_2^\omega,=)$, and $\text{\rm CSP}(\Gamma)$ is in P or NP-complete by~\cite{ecsps}.
\item[(2)] $\End(\Gamma)=\overline{\Aut(C_2^\omega,E)}$, $\Pol(\Gamma)$ contains a canonical ternary injection of behaviour minority which is hyperplanely of behaviour balanced xnor, and $\text{\rm CSP}(\Gamma)$ is in P.
\item[(3)] $\End(\Gamma)=\overline{\Aut(C_2^\omega,E)}$, $\Pol(\Gamma)$ has a uniformly continuous h1 clone homomorphism, and $\text{\rm CSP}(\Gamma)$ is NP-complete.
\end{itemize}
\end{thm}
\begin{proof}
As in the proof of Theorem~\ref{thm:equivalence-above-2} we may assume that $\End(\Gamma)=\overline{\Aut(C_2^\omega,E)}$, and that $E$, $N$ and $Eq$ are pp-definable. We apply Proposition~\ref{prop:2omega}. The first two cases from that proposition imply a uniformly continuous projective h1 clone homomorphism, and hence NP-completeness of the CSP, as in the proof of Theorem~\ref{thm:equivalence-above-2}. The third case in Proposition~\ref{prop:2omega} yields case~(2) here, and tractability as detailed in Section~\ref{thm:C-low-2-high-omega-P}.
\end{proof}
\begin{thm}
\label{thm:C-low-omega-high-2-complexity}
Suppose $\Gamma$ is a finite signature reduct of $(C_\omega^2,E)$. Then one of the following holds.
\begin{itemize}
\item[(1)] $\Gamma$ is homomorphically equivalent to a reduct of $(C_\omega^2,=)$, and $\text{\rm CSP}(\Gamma)$ is in P or NP-complete by~\cite{ecsps}.
\item[(2)] $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, $Eq$ is not pp-definable, $\Pol(\Gamma)$ contains a canonical binary injective polymorphism of behaviour $\mini$ that is $N$-dominated, and $\text{\rm CSP}(\Gamma)$ is in P.
\item[(3)] $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, $Eq$ is pp-definable, $\Pol(\Gamma)$ contains a ternary canonical function $h$ with $h(N,\cdot,\cdot)=h(\cdot,N,\cdot)=h(\cdot,\cdot,N)=N$ and which behaves like a minority on $\{E,=\}$, and $\text{\rm CSP}(\Gamma)$ is in P.
\item[(4)] $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, $\Pol(\Gamma)$ has a uniformly continuous h1 clone homomorphism, and $\text{\rm CSP}(\Gamma)$ is NP-complete.
\end{itemize}
\end{thm}
\begin{proof}
As in Theorem~\ref{thm:equivalence-above-2} we may assume that $\End(\Gamma)=\overline{\Aut(C_\omega^2,E)}$, and that therefore $E$ and $N$ are pp-definable. If $Eq$ is not pp-definable, then by Proposition~\ref{prop:al-jabr} we have a binary injective polymorphism of behaviour $\mini$ that is $N$-dominated, and we have a polynomial-time algorithm from Theorem~\ref{thm:maximal}, similarly to Proposition~\ref{prop:mintractable} for reducts of $(H_n,E)$. Suppose now that $Eq$ is pp-definable.
We apply Proposition~\ref{prop:omega2}. As before, the first two cases imply NP-completeness of $\text{\rm CSP}(\Gamma)$. The third case from Proposition~\ref{prop:omega2} yields tractability as detailed in Section~\ref{thm:C-low-omega-high-2-P}.
\end{proof}
Summarizing, we obtain a proof of Theorem~\ref{thm:equiv}.
\begin{proof}[Proof of Theorem~\ref{thm:equiv}]
The statement follows from the preceding three theorems, together with~\cite{equiv-csps} (for $C_\omega^\omega$) and~\cite{ecsps} (for $C_\omega^1$ and $C_1^\omega$).
\end{proof}
\ignore{
We close the section with a more detailed variant of Theorem~\ref{thm:equiv}.
\begin{theorem}\label{thm:main2equiv}
Let $(C_n^s,E)$ be an infinite graph whose reflexive closure $Eq$ is an equivalence relation with $n$ classes of size $s$, where $1\leq n, s \leq \omega$.
Let $\Gamma$ be a reduct of $(C_n^s,E)$.
Then one of the following holds.
\begin{itemize}
\item[(1)] $\Gamma$ has an endomorphism whose image induces a clique or an independent set, and is homomorphically equivalent to a reduct of $(C_n^s,=)$.
\item[(2)] $\Gamma$ is a model complete core, $\End(\Gamma)=\overline{\Aut(C_n^s,E)}$, and $\Pol(\Gamma)$ has a uniformly continuous projective h1 clone homomorphism.
\item[(3)] $n=2, s=\omega$, $\Gamma$ is a model complete core, and $\Pol(\Gamma)$ contains a canonical ternary injection of behaviour minority which is hyperplanely of behaviour E-dominated projection.
\item[(4)] $n=\omega, s=2$, $\Gamma$ is a model complete core, and $\Pol(\Gamma)$ contains a ternary canonical function $h$ with $h(N,\cdot,\cdot)=h(\cdot,N,\cdot)=h(\cdot,\cdot,N)=N$ and which behaves like a minority on $\{E,=\}$.
\end{itemize}
Neither items~(2) and~(3), nor items~(2) and~(4) can simultaneously hold, and when $\Gamma$ has a finite relational signature, then $(2)$ implies NP-completeness and both (3) and (4) imply tractability of its CSP.
\end{theorem}
}
\section{Outlook}\label{sect:final}
We have classified the computational
complexity of CSPs
for reducts of the infinite homogeneous graphs.
Our proof shows that the scope
of the classification method from~\cite{BodPin-Schaefer} is much larger
than one might expect at first sight.
The general research goal here is
to identify larger and larger classes
of infinite-domain CSPs where systematic
complexity classification is possible; two dichotomy conjectures for CSPs of reducts of finitely bounded homogeneous structures are given in~\cite{BPP-projective-homomorphisms} and~\cite{wonderland}, and these have since been proved equivalent in~\cite{TwoDichotomyConjectures}. We have given additional evidence for these conjectures by proving that they hold for all reducts of homogeneous graphs. The next step in this direction
might be to
show a general complexity dichotomy
for reducts of homogeneous structures whose
age is finitely bounded
and has the \emph{free amalgamation property} (the Henson graphs provide natural examples for such structures).
The present paper
indicates that this problem might be within reach.
\bibliographystyle{alpha}
\section{Introduction}\label{intro}
The well-bounded operators of Smart and Ringrose \cite{dS,jR} are an important class of operators defined in
terms of a functional calculus. These are the operators which possess an ${AC}(J)$ functional calculus where
${AC}(J)$ is the algebra of absolutely continuous functions defined on some compact interval $J \subseteq \mathbb{R}$.
Self-adjoint operators on Hilbert spaces are examples of well-bounded operators. Originally they were studied in
the context of operators with conditionally convergent spectral expansions. As is the case for self-adjoint
operators, the spectrum of such an operator is always a subset of the real axis. Furthermore these operators
have an integral representation with respect to a family of projections (see \cite{jR2}) known as a
decomposition of the identity. This theory is of somewhat restricted usefulness since the decomposition of the
identity acts on the dual of the underlying Banach space and is in general not unique (see \cite{hD} for
examples of this non-uniqueness).
In \cite{BD} a subclass of the well-bounded operators, the type~(B)
well-bounded operators, was introduced. The type~(B) well-bounded
operators, which include those well-bounded operators acting on
reflexive spaces, possess a theory of integration with respect to a
family of projections which act on the original space. This family
of projections, known as the spectral family, is uniquely determined
by the operator. The integration theory provides an extension of the
${AC}(J)$ functional calculus to a ${BV}(J)$ functional calculus where
${BV}(J)$ is the algebra of functions of bounded variation on the
interval $J$.
The main obstacle to overcome if one wishes to extend the theory
of well-bounded operators to cover operators whose spectrum may
not lie in the real line, is that of obtaining a suitable
concept of bounded variation for functions defined on a subset
of the plane. Many such concepts exist in the literature. In
\cite{BG}, Berkson and Gillespie used a notion of variation
ascribed to Hardy and Krause to define the ${AC}$ operators.
These are the operators which have an ${AC}_{HK}(J \times K)$
functional calculus where ${AC}_{HK}(J \times K)$ is the algebra
of absolutely continuous functions in the sense of Hardy and
Krause defined on a rectangle $J \times K \subset \mathbb{R}^2 \cong
\mathbb{C}$. They showed \cite[Theorem 5]{BG} that an operator $T \in
B(X)$ is an ${AC}$ operator if and only if $T = R + i S$ where
$R$ and $S$ are commuting well-bounded operators. In \cite{BDG}
it is shown that this splitting is not necessarily unique.
Furthermore even if $T$ is an ${AC}$ operator on a Hilbert space
$H$, it does not necessarily follow that $\alpha T$ is an ${AC}$
operator for all $\alpha \in \mathbb{C}$. On the positive side,
the ${AC}$ operators include the trigonometrically well-bounded
operators which have found applications in harmonic analysis and
differential equations (see \cite{BG2} and \cite{BG3}). An
operator $T \in B(X)$ is said to be trigonometrically
well-bounded if there exists a type (B) well-bounded operator $A
\in B(X)$ such that $\sigma(A) \subset [0, 2 \pi)$ and such that
$T = \exp(i A)$.
One of the problems in the theory of well-bounded and ${AC}$ operators is that
the functional calculus of these operators is based on an algebra of
functions whose domain is either an interval in the real axis or a rectangle
in the plane. From an operator theory point of view a much more natural
domain is the spectrum, or at least a neighbourhood of the spectrum.
Secondly, as we have already mentioned, the class of ${AC}$ operators is not
closed under multiplication by scalars. This is undesirable from a spectral
theory point of view since if one has structural information about an
operator $T$, this clearly gives similar information about $\alpha T$. To
overcome these problems, in \cite{AD} we defined ${AC}(\sigma)$, the
absolutely continuous functions whose domain is some compact set $\sigma$ in
the plane. In this paper we look at those operators which have an
${AC}(\sigma)$ functional calculus, which we call ${AC}(\sigma)$ operators.
Section~2 summarizes some of the main results from \cite{AD}
concerning the function algebras ${BV}(\sigma)$ and
${AC}(\sigma)$.
In Section~3 we give some results which illustrate the extent of the
class of ${AC}(\sigma)$ operators. In particular, we note that this
class contains all scalar-type spectral operators, all well-bounded
operators and all trigonometrically well-bounded operators.
In Section~\ref{lbl:309} we develop some of the main spectral
properties of ${AC}(\sigma)$ operators. Here we show that the
${AC}(\sigma)$ operators form a proper subclass of the ${AC}$
operators and hence such operators have a splitting into real and
imaginary well-bounded parts. The natural conjecture that every
${AC}(\sigma)$ operator is in fact an ${AC}(\sigma(T))$ operator
remains open. Resolving this question depends on being able to
answer some difficult questions about the relationships between
${AC}(\sigma_1)$ and ${AC}(\sigma_2)$ for different compact sets
$\sigma_1$ and $\sigma_2$. These issues are discussed in
Section~\ref{support}.
In Section~\ref{lbl:SpecRes} we examine the case where the
${AC}(\sigma)$ functional calculus for $T$ is weakly compact. In this
case one can construct a family of spectral projections associated
with $T$ which is rich enough to recover $T$ via an integration
process. This `half-plane spectral family' is a generalization of
the spectral family associated with a well-bounded operator of
type~(B). A full integration theory for this class of operators has,
however, yet to be developed. In particular, it is not known whether
one can always extend a weakly compact ${AC}(\sigma)$ functional
calculus to a ${BV}(\sigma)$ functional calculus. The final section
discusses some of the progress that has been obtained in pursuing
such a theory, and lists some of the major obstacles that remain.
Throughout this paper let $\sigma \subset \mathbb{C}$ be compact and
non-empty. For a Banach space $X$ we shall denote the bounded linear
operators on $X$ by $B(X)$ and the bounded linear projections on $X$
by ${\rm Proj}(X)$. Given $T \in B(X)$ with the single valued extension
property (see \cite{nD}) and $x \in X$ we denote the local spectrum
of $x$ (for $T$) by $\sigma_T(x)$. We shall write $\boldsymbol{\lambda}$ for the
identity function $\boldsymbol{\lambda} : \sigma \rightarrow \mathbb{C}, z \mapsto z$.
\section{${BV}(\sigma)$ and ${AC}(\sigma)$}\label{bv-definitions}
We shall briefly look at ${BV}(\sigma)$ and ${AC}(\sigma)$. In
particular we look at how two dimensional variation is defined. More
details may be found in \cite{AD}.
To define two dimensional variation we first need to look at
variation along curves. Let $\Gamma = C([0, 1], \mathbb{C})$ be the set of
curves in the plane. Let $\Gamma_L \subset \Gamma$ be the curves
which are piecewise line segments. Let $S = \set{z_i}_{i=1}^n
\subset \mathbb{C}$. We write $\Pi(S) \in \Gamma_L$ for the (uniform speed)
curve consisting of line segments joining the vertices at $z_1, z_2,
\dots, z_n$ (in the given order). For $\gamma \in \Gamma$ we say
that $\set{s_i}_{i=1}^n \subset \sigma$ is a \emph{partition of
$\gamma$ over $\sigma$} if there exists a partition
$\set{t_i}_{i=1}^n$ of $[0, 1]$ such that $t_1 \leq t_2 \leq \dots
\leq t_n$ and such that $s_i = \gamma(t_i)$ for all $i$. We shall
denote the partitions of $\gamma$ over $\sigma$ by $\Lambda(\gamma,
\sigma)$. For $\gamma \in \Gamma$ and $S \in \Lambda(\gamma,
\sigma)$ we denote by $\gamma_S$ the curve $\Pi(S) \in \Gamma_L$.
The variation along $\gamma \in \Gamma$ for a function $f : \sigma
\rightarrow \mathbb{C}$ is defined as
\begin{equation} \label{lbl:298}
\cvar(f, \gamma) = \sup_{\set{s_i}_{i=1}^n \in \Lambda(\gamma,
\sigma)} \sum_{i=1}^{n-1} \abs{f(s_{i+1}) - f(s_i)}.
\end{equation}
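For example, if $[a,b] \subseteq \sigma \cap \mathbb{R}$ and $\gamma$ traverses $[a,b]$ monotonically, then the partitions of $\gamma$ over $\sigma$ are exactly the finite increasing sequences in $[a,b]$, and so $\cvar(f, \gamma)$ is the classical variation of $f$ over $[a,b]$.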
To each curve $\gamma \in \Gamma$ we define a weight factor $\rho$.
For $\gamma \in \Gamma$ and a line $l$ we let $\vf(\gamma, l)$
denote the number of times that $\gamma$ crosses $l$ (for a precise
definition of a crossing see Section~3.1 of \cite{AD}). Set
$\vf(\gamma)$ to be the supremum of $\vf(\gamma, l)$ over all lines
$l$. We set $\rho(\gamma) = \frac{1}{\vf(\gamma)}$. Here we take the
convention that if $\vf(\gamma) = \infty$ then $\rho(\gamma) = 0$. We
can extend the definition of $\rho$ to include functions in $C[a,
b]$ in the obvious way.
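Informally, a curve which traces out a straight line segment crosses any line at most once, so that $\vf(\gamma) = 1$ and $\rho(\gamma) = 1$, whereas a curve which traverses the same segment back and forth $n$ times crosses a transversal line $n$ times and so has $\rho(\gamma) = 1/n$.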
The two dimensional variation of a function $f : \sigma
\rightarrow \mathbb{C}$ is defined to be
\begin{equation} \label{lbl:994}
\var(f, \sigma) = \sup_{\gamma \in \Gamma}
\rho(\gamma) \cvar(f, \gamma).
\end{equation}
We have the following properties of two dimensional variation which
were shown in \cite{AD}.
\begin{prop}\label{Gamma-L}
Let $\sigma \subseteq \mathbb{C}$ be compact, and suppose that $f: \sigma
\to \mathbb{C}$. Then
\begin{align*}
\var(f,\sigma)
&= \sup_{\gamma \in \Gamma_L} \rho(\gamma) \cvar(f, \gamma)
\\
& = \sup \Bigl\{ \rho(\gamma_S) \sum_{i=1}^{n-1} \abs{f(s_{i+1}) - f(s_i)}
\,:\, \hbox{$S = \set{s_i}_{i=1}^n \subseteq \sigma$}
\Bigr\}.
\end{align*}
\end{prop}
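As a simple illustration, take $\sigma = \set{0,1}$ and $f(0) = 0$, $f(1) = 1$. If $S = \set{s_i}_{i=1}^n \subseteq \sigma$ alternates between $0$ and $1$ then $\sum_{i=1}^{n-1} \abs{f(s_{i+1}) - f(s_i)} = n-1$, but $\gamma_S$ traverses the segment $[0,1]$ $n-1$ times, so $\rho(\gamma_S) = \frac{1}{n-1}$ and the corresponding term in the supremum is $1$. Non-alternating sequences give no larger terms, so $\var(f, \sigma) = 1 = \abs{f(1) - f(0)}$; the factor $\rho$ compensates for the variation accumulated by retracing.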
\begin{prop} \label{lbl:541}
Let $\sigma_1 \subset \sigma \subset \mathbb{C}$ both be compact. Let $f,
g : \sigma \rightarrow \mathbb{C}$, $k \in \mathbb{C}$. Then
\begin{enumerate}
\item $\var(f + g, \sigma) \leq \var(f, \sigma) + \var(g, \sigma)$,
\item\label{item2} $\var(f g, \sigma) \leq \norminf{f} \var(g, \sigma)
+ \norminf{g} \var(f, \sigma)$,
\item $\var(k f, \sigma) = \abs{k} \var(f, \sigma)$,
\item $\var(f, \sigma_1) \leq \var(f, \sigma)$.
\end{enumerate}
\end{prop}
For $f : \sigma \rightarrow \mathbb{C}$ set
\begin{equation} \label{lbl:268}
\normbv{f} = \norminf{f} + \var(f, \sigma).
\end{equation}
The functions of bounded variation with domain $\sigma$ are
defined to be
\begin{equation*} \label{lbl:415}
{BV}(\sigma) = \set{f : \sigma \rightarrow \mathbb{C} : \normbv{f} < \infty}.
\end{equation*}
To aid the reader we list here some of the main results from
\cite{AD} and \cite{AD2}. The affine invariance of these algebras
(Theorem \ref{lbl:847} and Proposition \ref{lbl:408}) is one of the
main features of this theory and will be used regularly without
comment.
\begin{prop} \label{lbl:199}
If $\sigma = [a, b]$ is an interval then the above definition of
variation agrees with the usual definition of variation. Hence the
above definition of ${BV}(\sigma)$ agrees with the usual definition
of ${BV}[a, b]$ when $\sigma = [a, b]$.
\end{prop}
\begin{thm} \label{lbl:333}
Let $\sigma \subset \mathbb{C}$ be compact. Then ${BV}(\sigma)$ is a Banach
algebra using the norm given in Equation \eqref{lbl:268}.
\end{thm}
\begin{thm} \label{lbl:847}
Let $\alpha, \beta \in \mathbb{C}$ and suppose that $\alpha \neq 0$. Then
${BV}(\sigma) \cong {BV}( \alpha \sigma + \beta)$.
\end{thm}
\begin{lem}
Let $f : \sigma \rightarrow \mathbb{C}$ be a Lipschitz function with
Lipschitz constant $L(f) = \sup_{z \neq w \in \sigma} \abs{\frac{f(z) -
f(w)}{z - w}}$. Then $\var(f, \sigma) \leq L(f) \var(\boldsymbol{\lambda}, \sigma)$.
Hence $f \in {BV}(\sigma)$.
\end{lem}
We define ${AC}(\sigma)$ to be the closed subalgebra of ${BV}(\sigma)$
generated by the functions $1$, $\boldsymbol{\lambda}$ and $\overline{\boldsymbol{\lambda}}$. (Note
that $\boldsymbol{\lambda}$ and $\overline{\boldsymbol{\lambda}}$ are always in ${BV}(\sigma)$.) We
call functions in ${AC}(\sigma)$ the \emph{absolutely continuous
functions with respect to $\sigma$}. By Proposition \ref{lbl:199}
this coincides with the usual notion of absolute continuity if
$\sigma = [a, b] \subset \mathbb{R}$ is an interval. In \cite{AD} the
following properties of ${AC}(\sigma)$ are shown.
\begin{prop} \label{lbl:996}
Let $\sigma = [a, b]$ be a compact interval. Let $g \in
{BV}(\sigma) \cap C(\sigma)$. Suppose that $\rho(g) > 0$. Then
$\normbv{f \circ g} \leq \frac{1}{\rho(g)}
\norm{f}_{{BV}(g(\sigma))}$ for all $f \in {BV}(g(\sigma))$.
\end{prop}
\begin{prop} \label{lbl:408}
Let $\alpha, \beta \in \mathbb{C}$ and suppose that $\alpha \neq 0$. Then
${AC}(\sigma) \cong {AC}( \alpha \sigma + \beta)$.
\end{prop}
\begin{prop} \label{lem:ac compact:4590}
If $f \in {AC}(\sigma)$ and $f(z) \ne 0$ on $\sigma$ then
$\frac{1}{f} \in {AC}(\sigma)$.
\end{prop}
\begin{comment}
Denoting the absolutely continuous functions defined on the
rectangle $J \times K$ in the sense of Hardy and Krause by ${AC_{HK}}(J
\times K)$ we have the following.
\begin{thm}
Let $J \times K$ be a rectangle in the complex plane with edges
parallel to the axes. Then ${AC_{HK}}(J \times K) \subset {AC}(J \times
K)$ and the inclusion map ${AC_{HK}}(J \times K) \hookrightarrow {AC}(J
\times K)$ is continuous. If the rectangle is non-degenerate then
the containment is proper.
\end{thm}
\end{comment}
We shall also need some properties of ${AC}(\sigma)$ and
${BV}(\sigma)$ which were not included in \cite{AD}.
\begin{prop} ${BV}(\sigma)$ is a lattice. If $f, g \in {BV}(\sigma)$, then
\[ \hbox{$\normbv{f \lor g} \le \normbv{f} + \normbv{g}$ and
$\normbv{f \land g} \le \normbv{f} + \normbv{g}$.} \]
\end{prop}
\begin{proof}
Suppose that $\gamma \in \Gamma$ and that $\{s_i\}_{i=1}^n \in
\Lambda(\gamma,\sigma)$. Note that for any $a,a',b,b'$,
\begin{equation}\label{max-inequality}
|(a \lor a') - (b \lor b')| \le |a-b| \lor | a'-b' | \le |a-b| + |a'-b'|
\end{equation}
and so
\[
\sum_{i=1}^{n-1} |(f \lor g)(s_{i+1}) - (f \lor g)(s_i)|
\le \sum_{i=1}^{n-1} |f(s_{i+1}) - f(s_i)| + |g(s_{i+1}) - g(s_i)|. \]
Thus
$ \cvar(f \lor g,\gamma) \le \cvar(f ,\gamma) + \cvar(g,\gamma)
$
and so
\begin{align*}
\normbv{f \lor g}
&= \norm{f \lor g}_\infty + \sup_\gamma \rho(\gamma) \cvar(f \lor g,\gamma)
\\
&\le \norm{f}_\infty + \norm{g}_\infty
+ \sup_\gamma \rho(\gamma) \{ \cvar(f ,\gamma) + \cvar(g,\gamma) \} \\
&\le \norm{f}_\infty + \sup_\gamma \rho(\gamma) \cvar(f,\gamma) +
\norm{g}_\infty + \sup_\gamma \rho(\gamma) \cvar(g,\gamma)\\
&= \normbv{f} + \normbv{g}.
\end{align*}
The proof for $f \land g$ is almost identical.
\end{proof}
Note that ${BV}(\sigma)$ is not a \emph{Banach} lattice, even in the
case $\sigma = [0,1]$.
The set $\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ of functions on $\sigma$ which are
continuous and piecewise triangularly planar relative to $\sigma$
was introduced in \cite{AD}. It is easy to see that $\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$
is a sublattice of ${BV}(\sigma)$.
\begin{cor} ${AC}(\sigma)$ is a sublattice of ${BV}(\sigma)$.
\end{cor}
\begin{proof}
It suffices to show that if $f,g \in {AC}(\sigma)$, then $f \lor g
\in {AC}(\sigma)$. Suppose then that $f,g \in {AC}(\sigma)$. Then
there exist sequences $\{f_n\},\{g_n\} \subseteq \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ such
that $f_n \to f$ and $g_n \to g$ in ${BV}(\sigma)$. As
$\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ is a lattice, $f_n \lor g_n \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ for
each $n$ and, using (\ref{max-inequality}), one can see that $(f_n
\lor g_n) \to (f \lor g)$. This implies that $f \lor g$ lies in the
closure of $\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$, namely ${AC}(\sigma)$.
\end{proof}
If one wishes to apply the results of local spectral theory, it is
important that ${AC}(\sigma)$ forms an admissible algebra of
functions in the sense of Colojoar{\v a} and Foia{\c s} \cite{CF2}.
The first step is to show that ${AC}(\sigma)$ admits partitions of
unity.
\begin{lem} Let $\sigma \subset \mathbb{C}$ be compact. Then ${AC}(\sigma)$ is a
normal algebra. That is, given any finite open cover
$\{U_i\}_{i=1}^n$ of $\sigma$, there exist functions
$\{f_i\}_{i=1}^n \subseteq {AC}(\sigma)$ such that
\begin{enumerate}
\item $f_i(\sigma) \subset [0,1]$, for all $1 \le i \le n$,
\item $\mathrm{supp} f_i \subseteq U_i$ for all $1 \le i \le n$,
\item $\sum_{i=1}^n f_i = 1$ on $\sigma$.
\end{enumerate}
\end{lem}
\begin{proof} This follows from the fact that $C^\infty(\sigma)
\subseteq {AC}(\sigma)$ \cite[Proposition 4.7]{AD}. More precisely,
let $\{U_i\}_{i=1}^n$ be a finite open cover of $\sigma$ and let $U
= \cup_{i=1}^n U_i$. Choose an open set $V$ with $\sigma \subseteq V
\subseteq \overline{V} \subseteq U$. Then there exist non-negative
$f_1,\dots,f_n \in C^\infty(V)$ such that $\sum_{i=1}^n f_i = 1$ on
$V$ (and hence on $\sigma)$, and $\mathrm{supp} f_i \subseteq U_i$
for all $1 \le i \le n$ (see \cite[page 44]{LM}).
\end{proof}
For $f \in {AC}(\sigma)$ and $\xi \not\in \mathrm{supp} f$, define
\[ f_\xi(z) = \begin{cases}
\frac{f(z)}{z-\xi}, & z \in \sigma\setminus \{\xi\}, \\
0, & z \in \sigma \cap \{\xi\}.
\end{cases}
\]
Recall that an algebra $\mathcal{A}$ of functions (defined on some
subset of $\mathbb{C}$) is admissible if it contains the polynomials, is
normal, and $f_\xi \in \mathcal{A}$ for all $f \in \mathcal{A}$ and
all $\xi \not\in \mathrm{supp} f$.
\begin{prop} \label{lbl:656}
Let $\sigma \subset \mathbb{C}$ be compact. Then ${AC}(\sigma)$ is an
admissible inverse-closed algebra.
\end{prop}
\begin{proof} All that remains is to show that the last property
holds in ${AC}(\sigma)$. Suppose then that $f \in {AC}(\sigma)$ and
$\xi \not\in \mathrm{supp} f$. Given that $\mathrm{supp} f$ is
compact, there exists $h \in C^\infty(\mathbb{C})$ such that $h(z) =
(z-\xi)^{-1}$ on $\mathrm{supp} f$ and $h(z) \equiv 0$ on some
neighbourhood of $\xi$. Again using \cite[Proposition 4.7]{AD} we
have that $h|\sigma \in {AC}(\sigma)$ and hence that $f_\xi = f h \in
{AC}(\sigma)$.
\end{proof}
\begin{comment}
\begin{thm} \label{lbl:605}
Let $\sigma_1 \subset \sigma_2 \subset \mathbb{C}$ both be compact. Then
${AC}(\sigma_1) \cong {AC}(\sigma_2) / \mathcal{J}_{\sigma_1}$ where
$\mathcal{J}_{\sigma_1}$ is the closed ideal of ${AC}(\sigma_2)$
given by $\mathcal{J}_{\sigma_1} = \set{f \in {AC}(\sigma_2) :
f(\sigma_1) = \set{0}}$.
\end{thm}
\end{comment}
The relationship between $\var(f,\sigma_1)$, $\var(f,\sigma_2)$ and
$\var(f,\sigma_1 \cup \sigma_2)$ is in general rather complicated.
The following theorem will allow us to patch together functions
defined on different sets.
\begin{thm}\label{variation-join}
Suppose that $\sigma_1, \sigma_2 \subseteq \mathbb{C}$ are
nonempty closed sets which are disjoint except at their boundaries. Suppose
that $\sigma = \sigma_1 \cup \sigma_2$ is convex. If $f:\sigma \to \mathbb{C}$,
then
\[ \max\{\var(f,\sigma_1),\var(f,\sigma_2)\}
\le \var(f,\sigma)
\le \var(f,\sigma_1) + \var(f,\sigma_2) \]
and hence
\[ \max\{\norm{f}_{{BV}(\sigma_1)},\norm{f}_{{BV}(\sigma_2)} \}
\le \normbv{f}
\le \norm{f}_{{BV}(\sigma_1)} + \norm{f}_{{BV}(\sigma_2)}. \]
Thus, if $f|\sigma_1 \in {BV}(\sigma_1)$ and $f|\sigma_2 \in
{BV}(\sigma_2)$, then $f \in {BV}(\sigma)$.
\end{thm}
\begin{proof} The left-hand inequalities are obvious.
Note that given any points $z \in \sigma_1 \setminus \sigma_2$ and
$w \in \sigma_2 \setminus \sigma_1$ there exists a point $u$ on the
line joining $z$ and $w$ with $u$ in $\sigma_1 \cap \sigma_2$. To
see this, let $\alpha(t) = (1-t)z + t w$ and let $t_0 = \inf\{ t \in
[0,1] \,:\, \alpha(t) \in \sigma_2\}$. By the convexity of $\sigma$,
$\alpha(t) \in \sigma_1$ for all $0 \le t < t_0$. The closedness of
the subsets then implies that $u = \alpha(t_0) \in \sigma_1 \cap
\sigma_2$.
Suppose then that $S = \{z_0,z_1,\dots,z_n\} \subseteq \sigma$. For
any $j$ for which $z_j$ and $z_{j+1}$ lie in different subsets, then
using the above remark, expand $S$ to add an extra vertex on the
line joining $z_j$ and $z_{j+1}$ which lies in both $\sigma_1$ and
$\sigma_2$. (Note that the addition of these extra vertices does not
change the value of $\rho(\gamma_S)$ and can only increase the
variation of $f$ between the vertices.) Write the vertices of
$S$ which lie in $\sigma_1$ as $S_1 =
\{z_0^1,z_1^1,\dots,z_{k_1}^1\}$ and those which lie in $\sigma_2$
as $S_2=\{z_0^2,z_1^2,\dots,z_{k_2}^2\}$, preserving the original
ordering. Note that for every $j$, $\{z_j,z_{j+1}\}$ is a subset of at
least one of the sets $S_1$ and $S_2$. Thus
\[ \sum_{j=1}^n |f(z_j)-f(z_{j-1})| \le
\sum_{i=1}^2 \sum_{j=1}^{k_i} |f(z_j^i)-f(z_{j-1}^i)|
\]
where an empty sum is interpreted as having value $0$. Recall that
if $S' \subseteq S$ then $\rho(\gamma_{S'}) \ge \rho(\gamma_S)$.
Thus
\begin{align*}
\rho(\gamma_S) \sum_{j=1}^n |f(z_j)-f(z_{j-1})|
& \le \sum_{i=1}^2 \rho(\gamma_{S_i})
\sum_{j=1}^{k_i} |f(z_j^i)-f(z_{j-1}^i)| \\
& \le \sum_{i=1}^2 \rho(\gamma_{S_i}) \cvar(f,\gamma_{S_i}) \\
& \le \sum_{i=1}^2 \var(f,\sigma_i).
\end{align*}
The result follows on taking a supremum over finite $S \subseteq
\sigma$.
\end{proof}
Note that the convexity of $\sigma$ is vital in
Theorem~\ref{variation-join}. Without this condition it is easy to construct
examples where $\var(f,\sigma_1) + \var(f,\sigma_2) = 0$
for a non-constant function $f$.
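For instance, if $\sigma_1 = \set{0}$ and $\sigma_2 = \set{1}$ then $\var(f,\sigma_1) = \var(f,\sigma_2) = 0$ for every $f$, whereas $\var(f,\set{0,1}) = \abs{f(1)-f(0)}$, which is nonzero whenever $f$ is non-constant.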
Later, we will need to show that we can patch two absolutely
continuous functions together.
\begin{lem}\label{pasting-lemma}
Suppose that $\sigma_1 = [0,1] \times [0,1]$, that $\sigma_2 = [1,2]
\times [0,1]$ and that $\sigma = \sigma_1 \cup \sigma_2$. Suppose
that $f: \sigma \to \mathbb{C}$ and that $f_i = f|\sigma_i$ ($i=1,2$). If
$f_1 \in {AC}(\sigma_1)$ and $f_2 \in {AC}(\sigma_2)$, then $f \in
{AC}(\sigma)$ and
\[ \normbv{f}\le \norm{f_1}_{{BV}(\sigma_1)} +
\norm{f_2}_{{BV}(\sigma_2)}. \]
\end{lem}
\begin{proof} By replacing $f$ with the function
$(x,y) \to f(x,y)-f(1,y)$ we may assume that $f|(\sigma_1 \cap
\sigma_2) = 0$. (Note that $(x,y) \to f(1,y)$ is always in
${AC}(\sigma)$.)
Suppose first that $f_2 = 0$. Fix $\epsilon > 0$. As $f_1 \in
{AC}(\sigma_1)$ there exists $p \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma_1)$ with
$\norm{f_1-p}_{{BV}(\sigma_1)} < \epsilon/4$. By the definition of
$\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma_1)$ there is a triangulation $\{A_i\}_{i=1}^n$ of
$\sigma_1$ such that $p|A_i$ is planar (see \cite[Section 4]{AD}).
Note that $b(y)= p(1,y)$ is a piecewise linear function on $[0,1]$
with $\norm{b}_{{BV}[0,1]} = \norm{f_1 - p}_{{BV}(\sigma_1 \cap
\sigma_2)} < \epsilon/4$. Extend $p$ to $\sigma_2$ by setting
$p(x,y) = b(y)$. Note that $p \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ and by
\cite[Proposition 4.4]{AD}, $\norm{p|\sigma_2}_{{BV}(\sigma_2)} <
\epsilon/4$. Thus, using Theorem~\ref{variation-join},
\[ \norm{f - p}_{{BV}(\sigma)}
\le \norm{f - p}_{{BV}(\sigma_1)} + \norm{f - p}_{{BV}(\sigma_2)}
< \frac{\epsilon}{2}. \]
For arbitrary $f_2$, the same argument will produce a function $q
\in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ which approximates to within $\epsilon/2$ the
function which is $f_2$ on $ \sigma_2$ and zero on $\sigma_1$. Thus
the piecewise planar function $p+q$ approximates $f$ to within
$\epsilon$ on $\sigma$. It follows that $f \in {AC}(\sigma)$. The
norm estimate is given by Theorem~\ref{variation-join}.
\end{proof}
The conditions on $\sigma_1$ and $\sigma_2$ in
Lemma~\ref{pasting-lemma} could be relaxed considerably. Since we
will not need this greater generality in this paper, we have not
attempted to determine the most general conditions on these sets for
which the above proof works. It is worth noting that one does need
\emph{some} conditions on $\sigma_1$ and $\sigma_2$ or else the
pasted function need not even be of bounded variation.
\section{${AC}(\sigma)$ operators: definition and examples}
\label{lbl:id-s3}
\begin{defn} Suppose that $\sigma \subseteq \mathbb{C}$ is a nonempty compact set and
that $T$ is a bounded operator on a Banach space $X$. We say that
$T$ is an $AC(\sigma)$ operator if $T$ admits a bounded
$AC(\sigma)$ functional calculus. That is, $T$ is an ${AC}(\sigma)$
operator if there exists a bounded unital Banach algebra
homomorphism $\psi: {AC}(\sigma) \to B(X)$ for which $\psi(\boldsymbol{\lambda}) =
T$.
\end{defn}
Where there seems little room for confusion we shall often say that
$T$ is an ${AC}(\sigma)$ operator where one should more properly say
that $T$ is an ${AC}(\sigma)$ operator \emph{for some $\sigma$}.
Before proceeding to give some of the general properties of
$AC(\sigma)$ operators, it is appropriate to give the reader some
idea of how this class is related to other standard classes of
operators which arise in spectral theory.
\begin{ex} \label{lbl:189} {\rm
Let $H$ be a Hilbert space and let $T \in B(H)$ be normal. Then $T$
has a $C(\sigma(T))$ functional calculus $\psi$, and $\psi \vert
{AC}(\sigma(T))$ is a linear homomorphism from ${AC}(\sigma(T))$ into
$B(H)$. Furthermore $\norm{\psi(f)} \leq \norm{\psi}\norminf{f} \leq
\norm{\psi}\norm{f}_{BV(\sigma(T))}$ for all $f \in {AC}(\sigma(T))$ and
so $\psi \vert {AC}(\sigma(T))$ is continuous from ${AC}(\sigma(T))$
into $B(H)$. Hence $T$ is an ${AC}(\sigma(T))$ operator. Indeed, by
the same argument any scalar type spectral operator (or even
scalar-type prespectral operator) $T$ on a Banach space $X$ is also
an ${AC}(\sigma(T))$ operator. (See \cite{hD} for the definitions of
these latter classes of operators.)}
\end{ex}
The operators in the previous example are associated with spectral
expansions which are of an unconditional nature. The motivation for
the present theory is of course to cover operators such as
well-bounded operators, which admit less constrained types of
spectral expansion.
\begin{lem} \label{lbl:909}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Suppose that
$\sigma \subset \sigma'$ where $\sigma' \subset \mathbb{C}$ is compact.
Then $T$ is an ${AC}(\sigma')$ operator.
\end{lem}
\begin{proof}
Let $\psi$ be an ${AC}(\sigma)$ functional calculus for $T$. Define
$\psi_{\sigma'} : {AC}(\sigma') \rightarrow B(X) : f \mapsto \psi(f
\vert \sigma)$. Then $\psi_{\sigma'}$ is a unital linear
homomorphism. Furthermore $\psi_{\sigma'}(\boldsymbol{\lambda}) = \psi(\boldsymbol{\lambda} \vert
\sigma) = T$. Finally we note from the inequality $\norm{f \vert
\sigma}_{{BV}(\sigma)} \leq \norm{f}_{{BV}(\sigma')}$ that
$\psi_{\sigma'}$ is continuous. Hence $\psi_{\sigma'}$ is an
${AC}(\sigma')$ functional calculus for $T$.
\end{proof}
The following result was announced in \cite[Section 2]{AD}.
\begin{prop} \label{lbl:508}
Let $T \in B(X)$. The following are equivalent.
\begin{enumerate}
\item $T$ is well-bounded,
\item $T$ is an ${AC}(\sigma)$ operator for some $\sigma
\subset \mathbb{R}$,
\item $\sigma(T) \subset \mathbb{R}$ and $T$ is an ${AC}(\sigma(T))$
operator.
\end{enumerate}
\end{prop}
\begin{proof}
Trivially (3) implies (2). Lemma \ref{lbl:909} shows that (2)
implies (1). Say $T$ is well-bounded with functional calculus $\psi
: {AC}(J) \rightarrow B(X)$ for some interval $J$. In \cite{AD} we
define a linear isometry $\iota : {AC}(\sigma(T)) \rightarrow
{AC}(J)$. Define $\psi_{\sigma(T)} : {AC}(\sigma(T)) \rightarrow B(X)
: f \mapsto \psi(\iota(f))$. We show that $\psi_{\sigma(T)}$ is an
${AC}(\sigma(T))$ functional calculus for $T$ which will complete the
proof. Clearly $\psi_{\sigma(T)}$ is linear and continuous.
Furthermore, since $\iota(\boldsymbol{\lambda} \vert \sigma(T)) = \boldsymbol{\lambda}$, we have
that $\psi_{\sigma(T)}(\boldsymbol{\lambda}) = T$. To see that $\psi_{\sigma(T)}$ is
a homomorphism we note that if $f, g \in {AC}(\sigma(T))$ then
$(\iota(f g) - \iota(f) \iota(g))(\sigma(T)) = \set{0}$. Theorem
4.4.4 of \cite{bA} says we can find a sequence
$\set{h_n}_{n=1}^\infty \subset {AC}(J)$ such that $\lim_n \norm{h_n
- (\iota(f g) - \iota(f) \iota(g))}_{{BV}(J)} = 0$ and such that for
each $n$, $h_n$ is zero on a neighbourhood of $\sigma(T)$. This
last condition, by Proposition 3.1.12 of \cite{CF}, implies that
$\psi(h_n) = 0$ for all $n$. Hence $\psi(\iota(f g) - \iota(f)
\iota(g)) = \lim_n \psi(h_n) = 0$, which shows that
$\psi_{\sigma(T)}$ is a homomorphism as claimed.
\end{proof}
As a result of the last proposition we prefer to use the term `real
${AC}(\sigma)$ operator' rather than the term well-bounded operator.
As well as being less descriptive, the term well-bounded operator
also suffers from the fact that it is used for quite a different
concept in the local theory of Banach spaces (see \cite{MTJ} for
example.) We shall however stick with the traditional term for the
remainder of this paper.
The next theorem shows that some important classes of ${AC}$
operators are also ${AC}(\sigma)$ operators.
\begin{thm} \label{lbl:732}
Let $A \in B(X)$ be well-bounded with functional calculus $\psi :
AC(J) \rightarrow B(X)$ for some interval $J$. Let $f \in AC(J)$
be such that $\rho(f) > 0$. Then $\psi(f)$ is an ${AC}(f(J))$
operator.
\end{thm}
\begin{proof}
Define $\psi_f : {AC}(f(J)) \rightarrow B(X) : g \mapsto \psi(g \circ
f)$. Then $\psi_f$ is a unital linear homomorphism and $\psi_f(\boldsymbol{\lambda})
= \psi(f)$. By Proposition \ref{lbl:996}, $\psi_f$ is continuous.
\end{proof}
\begin{cor} \label{lbl:744}
Let $A \in B(X)$ be well-bounded and $p$ be a polynomial of one
variable. Then $p(A)$ is an ${AC}(p(\sigma(A)))$ operator.
\end{cor}
\begin{cor} \label{lbl:213}
Let $A \in B(X)$ be a well-bounded operator. Then $\exp(i A)$ is
an ${AC}(\exp(i \sigma(A)))$ operator.
\end{cor}
We noted earlier that the trigonometrically well-bounded operators
are those operators which can be expressed in the form $\exp(i A)$
where $A \in B(X)$ is a well-bounded operator of type (B) such that
$\sigma(A) \subset [0, 2 \pi)$.
\begin{cor} \label{lbl:829}
Let $T \in B(X)$ be trigonometrically well-bounded. Then $T$ is an
${AC}(\mathbb{T})$ operator where $\mathbb{T}$ is the unit circle in $\mathbb{C}$.
\end{cor}
We end this section with a more concrete example.
\begin{ex} {\rm Suppose that $1 < p < \infty$ and that $X$ is the usual
Hardy space $H^p(\mathbb{D})$ of analytic functions on the unit disk.
Consider the unbounded operator $Af(z) = z f'(z)$, $f \in H^p(\mathbb{D})$
(with natural domain $\{f \,:\, Af \in H^p(\mathbb{D})\})$. This operator
arises, for example, as the analytic generator of a semigroup of
composition operators, $T_tf(z) = f(e^{-t} z)$; see \cite{Si}, which
includes a summary of many of the spectral properties of $A$. The
spectrum of $A$ is $\sigma(A) = \mathbb{N} = \{0,1,2,\dots\}$ with the
corresponding spectral projections $P_k(\sum a_n z^n) = a_k z^k$ ($k
\in \mathbb{N}$) giving just the usual Fourier components. Suppose then
that $\mu \not\in \sigma(A)$. The resolvent operator $R(\mu,A) =
(\mu I - A)^{-1}$ is a compact operator with spectrum
$\sigma(R(\mu,A)) = \Bigl\{\frac{1}{\mu - k}\Bigr\}_{k=0}^\infty
$\cup \{0\}$. Using \cite[Theorem 3.3]{CD}, it follows easily from the
properties of Fourier series that if $x \in \mathbb{R}\setminus\mathbb{N}$, then
$R(x,A)$ is well-bounded. If we fix such an $x$ and take $\mu
\not\in \mathbb{R}$, then $R(\mu,A) = f(R(x,A))$ where $f(t) =
t/(1+(\mu-x)t)$ is a M{\" o}bius transformation. If $J$ is any
compact interval containing $\sigma(R(x,A))$ then $\rho(f) =
\frac{1}{2}$. Thus $R(\mu,A)$ is an $AC(f(J))$ operator. Hence all
the resolvents of $A$ are compact $AC(\sigma)$ operators (for some
$\sigma$). Note that none of the resolvents is scalar-type spectral
unless $p=2$.}
\end{ex}
\section{Properties of ${AC}(\sigma)$ operators}
\label{lbl:309}
All ${AC}(\sigma)$ operators belong to the larger class of
decomposable operators (in the sense of \cite{CF2}).
\begin{prop} \label{lbl:741}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Then
\begin{enumerate}
\item $\sigma(T) \subseteq \sigma$.
\item $T$ is decomposable.
\end{enumerate}
\end{prop}
\begin{proof}
This follows from the admissibility of ${AC}(\sigma)$
(Proposition~\ref{lbl:656}).
\end{proof}
In general it is easy to pass between spectral properties of an
operator $T$ and those of affine translations of $T$. One of the
main motivations for developing this theory was to provide a
suitably broad class of operators which is closed under such
transformations. From Proposition \ref{lbl:408} we get the following.
\begin{thm} \label{lbl:814}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Let $\alpha, \beta
\in \mathbb{C}$. Then $\alpha T + \beta I$ is an ${AC}(\alpha \sigma +
\beta)$ operator.
\end{thm}
\begin{proof}
Let $\theta : {AC}(\sigma) \rightarrow {AC}(\alpha \sigma + \beta)$ be
the isomorphism of Proposition \ref{lbl:408}. Let $\psi$ be the
${AC}(\sigma)$ functional calculus for $T$. Then it is routine to
check that the map $\psi_{\alpha, \beta} : {AC}(\alpha \sigma +
\beta) \rightarrow B(X) : f \mapsto \psi(\theta^{-1}(f))$ is an
${AC}(\alpha \sigma + \beta)$ functional calculus for $\alpha T +
\beta I$.
\end{proof}
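As a simple illustration (this is nothing more than an instance of the
theorem), taking $\alpha = e^{i\theta}$ and $\beta = 0$ shows that the
class of ${AC}(\sigma)$ operators is invariant under rotations:
\[ T \ \hbox{an ${AC}(\sigma)$ operator}
   \implies
   e^{i\theta} T \ \hbox{an ${AC}(e^{i\theta}\sigma)$ operator}
   \qquad (\theta \in \mathbb{R}). \]
This stands in contrast to the ${AC}$ operators of Berkson and
Gillespie which, as noted below (Corollary~\ref{lbl:781}), are not
closed under multiplication by scalars.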
\begin{thm} \label{lbl:329}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Then $T = R + i S$
where $R, S$ are commuting well-bounded operators. Further,
$\sigma(R) = {\rm Re}(\sigma(T))$ and $\sigma(S) = {\rm Im}(\sigma(T))$.
\end{thm}
\begin{proof}
Let $\psi$ be an ${AC}(\sigma)$ functional calculus for $T$. In
\cite{AD} it is shown in Proposition 5.4 that the map $u :
{AC}({\rm Re}(\sigma)) \rightarrow AC(\sigma)$ defined by $u(f)(z) =
f({\rm Re}(z))$ is a norm-decreasing linear homomorphism. Then the map
$\psi_{{\rm Re}(\sigma)} : {AC}({\rm Re}(\sigma)) \rightarrow B(X) : f
\mapsto \psi(u(f))$ is a continuous linear unital homomorphism.
Hence $R := \psi_{{\rm Re}(\sigma)}(\boldsymbol{\lambda} \vert {\rm Re}(\sigma)) =
\psi({\rm Re}(\boldsymbol{\lambda}))$ is well-bounded. Similarly $S :=
\psi({\rm Im}(\boldsymbol{\lambda}))$ is well-bounded. Then $T = \psi(\boldsymbol{\lambda}) =
\psi({\rm Re}(\boldsymbol{\lambda}) + i\,{\rm Im}(\boldsymbol{\lambda})) = R + i S$. Finally we note that
$R$ and $S$ commute since ${AC}(\sigma)$ is a commutative algebra and
$\psi$ is a homomorphism.
The identification of $\sigma(R)$ and $\sigma(S)$ follows
immediately from the spectral mapping theorem for admissible
inverse-closed algebras of functions \cite[Theorem 2.1]{CF2}.
\end{proof}
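Before proceeding, it may help to record the finite-dimensional
picture, which we include purely as an illustration: if $X$ is finite
dimensional and $T = \sum_j \lambda_j P_j$ is diagonalisable, with
spectral projections $P_j$, then $\psi(f) = \sum_j f(\lambda_j) P_j$
defines an ${AC}(\sigma(T))$ functional calculus for $T$, and the
splitting of Theorem~\ref{lbl:329} is the obvious one:
\[ R = \psi({\rm Re}(\boldsymbol{\lambda})) = \sum_j {\rm Re}(\lambda_j)\, P_j, \qquad
   S = \psi({\rm Im}(\boldsymbol{\lambda})) = \sum_j {\rm Im}(\lambda_j)\, P_j. \]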
Splittings which arise from an ${AC}(\sigma)$ functional calculus
we call \emph{functional calculus splittings}.
\begin{cor} \label{lbl:781}
The ${AC}(\sigma)$ operators are a proper subset of the ${AC}$
operators of Berkson and Gillespie.
\end{cor}
\begin{proof}
By Theorem \ref{lbl:329} and \cite[Theorem 5]{BG}, every
${AC}(\sigma)$ operator is an ${AC}$ operator. The containment is
proper: by Theorem \ref{lbl:814} the class of ${AC}(\sigma)$
operators is closed under multiplication by scalars, whereas
Example 4.1 of \cite{BDG} shows that the class of ${AC}$ operators is
not, even on Hilbert spaces.
\end{proof}
Not all splittings into commuting real and imaginary well-bounded
parts arise from an ${AC}(\sigma)$ functional calculus. This is
shown by the following example, which first appeared in \cite{BDG}.
\begin{ex} \label{lbl:690} {\rm
Let $X = L^\infty[0, 1] \oplus L^1[0, 1]$. Define $A \in B(X)$ by $A(f, g) =
(\boldsymbol{\lambda} f, \boldsymbol{\lambda} g)$. It is not difficult to see that $A$ is well-bounded and
that $\sigma(A) = [0, 1]$. Let $T = (1 + i)A = A + i A$. By Theorem
\ref{lbl:814}, $T$ is an ${AC}(\sigma(T))$ operator where $\sigma(T)$ is the
line segment from $0$ to $1 + i$.
The operator $T$ has an infinite number of splittings. Define $Q \in
B(X)$ by $Q(f, g) = (0, f)$. In \cite{BDG} it is shown that $A +
\alpha Q$ is well-bounded for any $\alpha \in \mathbb{C}$. But then $T =
A + i A = (A + Q) + i(A + i Q)$.
The second splitting cannot come from an ${AC}(\sigma)$ functional
calculus. Say $T$ has an ${AC}(\sigma)$ functional calculus $\psi$.
Since $\sigma(T)$ is a line segment we can use similar reasoning as
to that in Proposition \ref{lbl:508} to conclude that if $f \in
{AC}(\sigma)$ is such that $f(\sigma(T)) = \set{0}$ then $\psi(f) =
0$. Hence if $g \vert \sigma(T) = h \vert \sigma(T)$ then $\psi(g) =
\psi(h)$. In particular since ${\rm Re}(\boldsymbol{\lambda}) \vert \sigma(T) =
{\rm Im}(\boldsymbol{\lambda}) \vert \sigma(T)$ we can only have ${AC}(\sigma)$
functional calculus splittings of the form $T = R + i R$. }
\end{ex}
We do not know if it is possible to have several splittings each
arising from an ${AC}(\sigma)$ functional calculus. The following
tells us to what extent we can expect splittings to be unique.
\begin{prop} \label{lbl:337}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Suppose that $T =
R_1 + i S_1 = R_2 + i S_2$ where $R_1, S_1$ and $R_2, S_2$ are
pairs of commuting well-bounded operators. Then $R_1$ and $R_2$
are quasinilpotent equivalent in the sense of \cite{CF} (as are
$S_1$ and $S_2$). Suppose that $\set{R_1, S_1, R_2, S_2}$ is a
commuting set. Then $(R_1 - R_2)^2 = (S_1 - S_2)^2 = 0$.
Furthermore suppose that $\set{R_1, S_1, R_2, S_2}$ are all type
(B) well-bounded operators. Then $R_1 = R_2$ and $S_1 = S_2$.
\end{prop}
\begin{proof}
This follows by combining Theorem 3.2.6 of \cite{CF2} and Theorem 3.7 of \cite{BDG}.
\end{proof}
\section{The support of the functional calculus}\label{support}
Suppose that $\psi: {AC}(\sigma) \to B(X)$ is the functional calculus map for
an ${AC}(\sigma)$ operator $T$. The support of $\psi$ is defined as the
smallest closed set $F \subseteq \mathbb{C}$ such that if $\mathrm{supp} f :=
\mathrm{cl}\{z \,:\, f(z) \ne 0\}$ is disjoint from $F$, then $\psi(f) = 0$.
Since ${AC}(\sigma)$ is an admissible algebra of functions, it follows from
\cite[Theorem~3.1.6]{CF2} that the support of $\psi$ is $\sigma(T)$.
It is natural therefore to ask whether such an operator $T$ must
admit an ${AC}(\sigma(T))$ functional calculus. By Proposition
\ref{lbl:508}, this is certainly the case if $T$ is well-bounded,
but the general case remains open.
A major issue in addressing this question is whether one can always
extend an ${AC}(\sigma)$ function to a larger domain.
\begin{quest}\label{extension-quest} Suppose that $\sigma_1 \subseteq \sigma_2$ are
nonempty compact sets. Does there exist $C = C(\sigma_1,\sigma_2)$
such that for every $f \in {AC}(\sigma_1)$ there exists ${\tilde f}
\in {AC}(\sigma_2)$ such that ${\tilde f}|\sigma_1 = f$ and
$ \bigl\Vert {\tilde f} \bigr\Vert_{{BV}(\sigma_2)} \le C
\norm{f}_{{BV}(\sigma_1)}$?
\end{quest}
We shall now give a partial answer to this question, and show that
one may at least shrink $\sigma$ down to be a compact set not much
bigger than $\sigma(T)$. The following theorem will allow us to form
an absolutely continuous function on a square (or rectangle) with
given boundary values.
\begin{thm}\label{fill-in-square}
Let $\sigma$ denote the closed square $[0,1] \times
[0,1]$, and let $\partial \sigma$ denote the boundary of $\sigma$.
Suppose that $b \in {AC}(\partial \sigma)$. Then there exists $f \in
{AC}(\sigma)$ such that $f|\partial \sigma = b$ and $\normbv{f} \le
28 \norm{b}_{{BV}(\partial \sigma)}$.
\end{thm}
\begin{proof} Recall that by \cite[Proposition 4.4]{AD},
if $h \in {AC}[0,1]$ is any absolutely continuous function of one
variable, then its extension to the square, $\widehat h (x,y) =
h(x)$, is in ${AC}(\sigma)$ with $\ssnorm{\widehat h} =
\norm{h}_{{BV}[0,1]}$.
Define $f_s: \sigma \to \mathbb{C}$ by $f_s(x,y) = (1-y)\,b(x,0)$. Since
$f_s$ is the product of ${AC}$ functions of one variable, it is
absolutely continuous on $\sigma$ and
\[ \normbv{f_s} \le 2 \norm{b(\cdot,0)}_{{BV}[0,1]}
\le 2 \norm{b}_{{BV}(\partial \sigma)}. \]
Similarly, we define
\begin{align*}
f_e(x,y) &= (1-x)\, b(0,y),\\
f_n(x,y) &= y\, b(x,1), \\
f_w(x,y) &= x\, b(1,y).
\end{align*}
Let $g = f_s+f_e+f_n+f_w$. Then $g \in {AC}(\sigma)$ and $\normbv{g}
\le 8 \norm{b}_{{BV}(\partial \sigma)}$.
Let $\Delta_\ell = \{(x,y) \,:\, 0\le y \le x \le 1\}$ and $\Delta_u
= \{(x,y) \,:\, 0 \le x < y \le 1\}$ denote the lower and upper
triangles inside $\sigma$. Now let $p_\ell$ be the affine
function determined by the condition that it agrees with $b-g$ at
the points $(0,0),(1,0)$ and $(1,1)$. Similarly, let $p_u$ be the
affine function which agrees with $b-g$ at the points $(0,0),(0,1)$
and $(1,1)$. Note that $p_\ell(x,x) = p_u(x,x)$ for all $x$. Let
\[ p(x,y) = \begin{cases}
p_\ell(x,y), & (x,y) \in \Delta_\ell, \\
p_u(x,y), & (x,y) \in \Delta_u.
\end{cases}
\]
Then $p \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma) \subseteq {AC}(\sigma)$. Now (using the
facts about ${AC}(\sigma)$ functions which only vary in one
direction)
\[ \var(p,\Delta_\ell) \le \max\{ |p(0,0)-p(1,0)|,
|p(0,0)-p(1,1)|, |p(1,0)-p(1,1)| \}. \]
Note that
\begin{align*}
|p(0,0)-p(1,0)| &\le |b(0,0)-b(1,0)| + |g(0,0)-g(1,0)| \\
& \le \var(b,\partial \sigma) + \var(g,\sigma) \\
& \le 9 \norm{b}_{{BV}(\partial \sigma)} .
\end{align*}
This bound also holds for the other terms and hence
$\norm{p}_{{BV}(\Delta_\ell)} \le 10 \norm{b}_{{BV}(\partial
\sigma)}$. Applying the same argument in the upper triangle, and
then using Theorem~\ref{variation-join} gives that $\normbv{p} \le
20 \norm{b}_{{BV}(\partial \sigma)}$.
Let $f = g+p$. Clearly $f \in {AC}(\sigma)$ and $\normbv{f} \le 28
\norm{b}_{{BV}(\partial \sigma)}$. Note that $f_e(x,0), f_n(x,0),
f_w(x,0)$ and $p(x,0)$ are all affine functions of $x$, and hence
$f(x,0) - b(x,0)$ is an affine function. But $f(0,0) =
g(0,0)+b(0,0)-g(0,0) = b(0,0)$ and $f(1,0) = b(1,0)$ and so it
follows that $f(x,0) = b(x,0)$ for all $x \in [0,1]$. Similar
arguments hold for the remaining three sides and so $f|\partial
\sigma = b$ as required.
\end{proof}
At the expense of lengthening the reasoning, one could reduce the
constant $28$ in the above theorem. It would be interesting to know
the optimal constant; it seems unlikely that the above construction
would provide this.
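In the other direction, restriction can only decrease the norm
(Proposition~\ref{lbl:541}), so any extension $f$ of $b$ satisfies
\[ \normbv{f} = \norminf{f} + \var(f,\sigma)
   \ge \norminf{b} + \var(b,\partial\sigma)
   = \norm{b}_{{BV}(\partial\sigma)}, \]
and hence the optimal constant lies somewhere in the interval
$[1,28]$.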
\begin{defn} A set $G \subseteq \mathbb{C}$ is said to be \emph{gridlike} if it is
a closed polygon with sides parallel to the axes.
\end{defn}
Note that we do not require that a gridlike set be convex, or even
simply connected.
\begin{lem}\label{edges-of-square}
Let $\sigma$ denote the boundary of the square $[0,1]\times[0,1]$.
Denote the four edges of the square as $\{\sigma_i\}_{i=1}^4$. Let
$J$ be a nonempty subset of $\{1,2,3,4\}$ and let $\sigma_J =
\cup_{i \in J} \sigma_i$. Then given any $b \in {AC}(\sigma_J)$ there
exists $\hat{b} \in {AC}(\sigma)$ with $\hat{b}|\sigma_J = b$ and
$\ssnorm{\hat{b}}_{{BV}(\sigma)} \le 4 \norm{b}_{{BV}(\sigma_J)}$.
\end{lem}
\begin{proof} Let $T$ denote the circle passing through the 4
vertices of $\sigma$, and let $\pi$ denote the map from $\sigma$ to
$T$ defined by projecting along the rays coming out of the centre of
$\sigma$. Consider a finite list of points $S = \{z_1,\dots,z_n\}
\subseteq \sigma$ with corresponding path $\gamma_S =
\Pi(z_1,\dots,z_n)$. Choose a line $\ell$ in $\mathbb{C}$ for which
$\gamma_S$ has $\vf(\gamma_S)$ entry points on $\ell$. Note that one
can always do this with $\ell$ passing through the interior of
$\sigma$, and hence $\ell$ is determined by two points $w_1,w_2 \in
\sigma$. Let $\ell_\pi$ denote the line through $\pi(w_1)$ and
$\pi(w_2)$. Since the projection $\pi$ preserves which side of a
line points lie on, $\gamma_{\pi(S)}$ has $\vf(\gamma_S)$ entry
points on $\ell_\pi$. Conversely, if $\gamma_{\pi(S)}$ has
$\vf(\gamma_{\pi(S)})$ entry points on a line $\ell$, then $\gamma_S$
must have at least $\vf(\gamma_{\pi(S)})/2$ entry points on the
inverse image of $\ell$ under $\pi$. (The factor of $\frac{1}{2}$
comes from the fact that the inverse image of $\ell$ may lie along one of
the edges of $\sigma$.) It follows then that
\begin{equation}\label{square-to-circle}
\frac{1}{2}\, \rho(\gamma_S) \le \rho(\gamma_{\pi(S)}) \le
\rho(\gamma_S).
\end{equation}
Suppose then that $f \in {BV}(\sigma)$. Let $f_\pi: T \to \mathbb{C}$ be
$f_\pi = f \circ \pi^{-1}$. From (\ref{square-to-circle}) it is
clear that
\[ \frac{1}{2} \var(f_\pi,T) \le \var(f,\sigma) \le \var(f_\pi,T)
\]
and so $f_\pi \in {BV}(T)$. The same estimate holds when comparing
the variation of $f \in {BV}(\sigma_J)$ and that of $f_\pi$ on the
corresponding subset $T_J$ of $T$. But, by \cite[Corollary
5.6]{AD2}, ${BV}(T)$ is $2$-isomorphic to the subset of ${BV}[0,1]$
consisting of functions which agree at the endpoints. In this final
space, one can extend an ${AC}$ function from a finite collection of
subintervals $K$ to the whole of $[0,1]$ by linear interpolation,
without increasing the norm. Note that absolute continuity is
preserved by the isomorphisms between these function spaces. The
factor $4$ comes from collecting together the norms along the
following composition of maps
\[
\begin{CD}
{AC}(\sigma_J) @. {AC}(\sigma) \\
@V{2}V{\pi}V @A{1}A{\pi^{-1}}A \\
{AC}(T_J) @. {AC}(T) \\
@V{2}VV @A{1}AA \\
{AC}(K) @>1>\hbox{extend}> {AC}[0,1]
\end{CD}
\]
\end{proof}
Note that if $\sigma_J$ consists of either one side, or else $2$
contiguous sides, then one may extend $b$ to all of $\sigma$ without
increasing the norm, using \cite[Proposition~4.4]{AD}. We do not know
whether this is true if, for example, $\sigma_J$ consists of $2$
opposite sides of the square.
\begin{prop}\label{quotient-prop}
Suppose that $V$ is a gridlike set, that $\sigma$ is compact and
that $V \subseteq \sigma$. Let $I_V = \{f \in {AC}(\sigma) \,:\,
\hbox{$f \equiv 0$ on $V$}\}$. Then ${AC}(\sigma)/I_V \cong {AC}(V)$
as Banach algebras.
\end{prop}
\begin{proof} Define $\Theta: {AC}(\sigma)/I_V \to {AC}(V)$ by
$\Theta([f]) = f|V$. Then clearly
\[ \Theta([f]) = \Theta([g]) \iff f|V \equiv g|V \iff f-g \in I_V \]
and so $\Theta$ is well-defined and one-to-one. It is also easy to
see that $\Theta$ is an algebra homomorphism. Since
\begin{align*}
\norm{\Theta([f])}
& = \norm{f|V}_{{BV}(V)} \\
& = \inf_{g \in I_V} \norm{(f+g)|V}_{{BV}(V)} \\
& \le \inf_{g \in I_V} \normbv{f+g} \\
&= \norm{[f]}_{{AC}(\sigma)/I_V}
\end{align*}
the map $\Theta$ is bounded.
The hard part of the proof is to show that $\Theta$ is onto. That
is, given $f \in {AC}(V)$, there exists $F \in {AC}(\sigma)$ so that
$F|V = f$.
Choose then a square $J \times K$ containing $\sigma$. Extending the
edges of $V$ produces a grid on $J \times K$, determining $N$ closed
subrectangles $\{\sigma_k\}_{k=1}^{N}$.
Suppose now that $f \in {AC}(V)$. Our aim is to define $\hat{f} \in
{AC}(J \times K)$ with $\hat{f}|V = f$ and $\ssnorm{\hat{f}}_{{BV}(J
\times K)} \le C \norm{f}_{{BV}(V)}$.
Fix an ordering of the rectangles $\sigma_k$ so that
\begin{enumerate}
\item there exists $k_0$ such that $\sigma_k \subseteq V$ if and
only if $k \le k_0$, and
\item for all $\ell \ge 2$, $\sigma_\ell$ intersects $\cup_{k < \ell}
\sigma_k$ on at least one edge of $\sigma_\ell$.
\end{enumerate}
Let $E_0$ denote the union of the edges of the rectangles $\sigma_k$
for $k \le k_0$ and let $b$ be the restriction of $f$ to $E_0$. Note
that $b$ is absolutely continuous on $E_0$ and if $e$ is any edge of
any rectangle $\sigma_k$ ($k \le k_0$), then $b|e \in {AC}(e)$ with
$\norm{b|e}_{{BV}(e)} \le \norm{b}_{{BV}(E_0)} \le
\norm{f}_{{BV}(V)}$. Now apply Lemma~\ref{edges-of-square}
to recursively extend $b$ to the set $E$ of all edges of rectangles
$\sigma_k$, $1 \le k \le N$, so that $b \in {AC}(E)$ and
$\norm{b}_{{BV}(E)} \le C_N \norm{f}_{{BV}(V)}$.
For $1 \le k \le k_0$, let $f_k = f|\sigma_k$, so that $f_k \in
{AC}(\sigma_k)$ and $\norm{f_k}_{{BV}(\sigma_k)} \le
\norm{f}_{{BV}(V)}$. Suppose alternatively that $k_0 < k \le N$. By
Theorem~\ref{fill-in-square} we can find $f_k \in {AC}(\sigma_k)$
with $f_k|\partial \sigma_k = b|\partial\sigma_k$ and
$\norm{f_k}_{{BV}(\sigma_k)} \le 28
\norm{b|\partial\sigma_k}_{{BV}(\partial\sigma_k)} \le 28 C_N
\norm{f}_{{BV}(V)}$.
Define $\hat{f}:J \times K \to \mathbb{C}$ such that $\hat{f}|\sigma_k =
f_k$. That $\hat{f}$ is in ${AC}(J \times K)$ with
$\ssnorm{\hat{f}}_{{BV}(J \times K)} \le 28 C_N N \norm{f}_{{BV}(V)}$
follows from Lemma~\ref{pasting-lemma} (first patching together all
the squares in each row, and then all the rows together). We can now
let $F = \hat{f}|\sigma$.
It follows then that $\Theta$ is onto, and hence, by the open
mapping theorem, $\Theta$ is a Banach algebra isomorphism.
\end{proof}
\begin{thm} \label{lbl:662}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator for some $\sigma
\subset \mathbb{C}$. Let $U$ be a bounded open neighbourhood of $\sigma(T)$. Then
$T$ is an ${AC}(\overline{U})$ operator.
\end{thm}
\begin{proof} Suppose that $T$, $\sigma$ and $U$ are as stated.
Choose a rectangle $J \times K$ containing $U$. By
Lemma~\ref{lbl:909}, $T$ admits an ${AC}(J \times K)$ functional
calculus $\psi$.
Consider an equispaced grid on $J \times K$, determining $n^2$
subsquares $\{\sigma_k\}_{k=1}^{n^2}$. Let $V = V(n)$ be the union
of all those $\sigma_k$ which intersect $\sigma(T)$. For $n$ large
enough
\[ \sigma(T) \subseteq \mathrm{int}(V) \subseteq V \subseteq U. \]
For the rest of the proof, fix such an $n$.
As in Proposition~\ref{quotient-prop}, let $I_V = \{ f \in {AC}(J
\times K) \,:\, f|V \equiv 0\}$, so that ${AC}(J\times K)/I_V \cong
{AC}(V)$ via the isomorphism $\Theta$. Note that $I_V \subseteq
\mathrm{ker} (\psi)$ since if $f \in I_V$, then $\mathrm{supp}f \cap
\sigma(T) = \emptyset$. Thus the map $\tilde{\psi}: {AC}(J\times
K)/I_V \to B(X)$,
\[ \tilde{\psi}([f]) = \psi(f) \]
is a well-defined algebra homomorphism with $\ssnorm{\tilde{\psi}}
\le \norm{\psi}$.
We may therefore define $\hat{\psi}: {AC}(\overline{U}) \to B(X)$ by
$\hat{\psi}(f) = \tilde{\psi}([\Theta^{-1}(f|V)])$. Note that
$\hat{\psi}$ is a bounded algebra homomorphism and that, since
$\Theta([\boldsymbol{\lambda}]) = \boldsymbol{\lambda}|V$, $\hat{\psi}(\boldsymbol{\lambda}) = \psi(\boldsymbol{\lambda}) = T$. Thus
$\hat{\psi}$ is an ${AC}(\overline{U})$ functional calculus for $T$.
\end{proof}
\begin{cor}\label{lbl:cor-to-662}
Let $T \in B(X)$ be an ${AC}(\sigma_0)$ operator for some compact set
$\sigma_0$. Then
\[ \sigma(T) = \bigcap \{ \sigma \,:\,
\hbox{$T$ has an ${AC}(\sigma)$ functional
calculus}\}. \]
\end{cor}
The proof of Theorem~\ref{lbl:662} depends on two vital facts. The
first is that the map $\Theta$ is an isomorphism. The second is that
$I_V \subseteq \mathrm{ker}(\psi)$. To show that every ${AC}(\sigma)$
operator is an ${AC}(\sigma(T))$ operator, it would suffice to show
that
\begin{enumerate}
\item\label{Q1} the restriction map ${AC}(\sigma) \to {AC}(\sigma(T))$, $f
\mapsto f|\sigma(T)$ is onto. This is basically equivalent to
answering Question~\ref{extension-quest}.
\item\label{Q2} given any $f \in {AC}(\sigma)$ with $f|\sigma(T) \equiv 0$,
there exists a sequence $\{f_n\} \subseteq {AC}(\sigma)$ with
$\normbv{f-f_n} \to 0$ and $\mathrm{supp} f_n \cap \sigma(T) =
\emptyset$ for all $n$.
\end{enumerate}
Proving (\ref{Q1}) and (\ref{Q2}) when $\sigma(T)$ is a complicated
compact set would appear to require new ways of estimating the
two-dimensional variation used in our definitions.
If $T \in B(X)$ is an ${AC}(\sigma(T))$ operator then one obtains
spectral theorems for $T$ similar to those for normal operators.
Recall from \cite{nD} the definition of the local spectrum
$\sigma_T(x)$ of $x \in X$ for an operator $T \in B(X)$ with the
single-valued extension property. From \cite{LV}, if $T \in B(X)$ is
an ${AC}(\sigma)$ operator (and hence decomposable) then the set of
those $x \in X$ such that $\sigma_T(x) = \sigma(T)$ is of the second
category in $X$.
\begin{thm} \label{lbl:091}
Suppose that $T \in B(X)$ is an ${AC}(\sigma(T))$ operator with
functional calculus $\psi : {AC}(\sigma(T)) \rightarrow B(X)$. Then
$\psi$ is injective. Hence we can identify ${AC}(\sigma(T))$ with a
subalgebra of $B(X)$. Furthermore suppose that $x \in X$ is such
that $\sigma_T(x) = \sigma(T)$. Then the map ${AC}(\sigma(T))
\rightarrow X : f \mapsto \psi(f)x$ is injective, and so we can
identify ${AC}(\sigma(T))$ with a subspace of $X$.
\end{thm}
\begin{proof}
Let $x \in X$ be such that $\sigma_T(x) = \sigma(T)$. To prove the
theorem it suffices to show that if $f \in {AC}(\sigma(T))$ and $f
\neq 0$ then $\psi(f)x \neq 0$. Let $\lambda_0 \in \sigma(T)$ be
such that $f(\lambda_0) \neq 0$. Since $f$ is continuous we can find
an open neighbourhood $V$ of $\lambda_0$ such that $0 \not \in
f(V)$. We can choose $g \in {AC}(\sigma(T))$ such that $(f g)(V) =
\set{1}$. If we show $\psi(f g) x \neq 0$ this will imply, since
$\psi$ is a homomorphism, that $\psi(f)x \neq 0$. Hence we can
assume that $f(V) = \set{1}$. Let $U$ be an open set such that
$\set{U, V}$ is an open cover of $\sigma(T)$ and such that
$\lambda_0 \not \in U$. By Lemma 5.2.3 of \cite{bA} we can find
non-zero $x_U, x_V \in X$ such that $x = x_U + x_V$ and where
$\sigma_T(x_U) \subset U$ and $\sigma_T(x_V) \subset V$. Since
$\sigma_T(x) \subset \sigma_T(x_U) \cup \sigma_T(x_V)$ we have that
$\lambda_0 \in \sigma_T(x_V)$ and $\lambda_0 \not \in
\sigma_T(x_U)$. Assume that $\psi(f)x = 0$. Then $0 = \psi(f)(x_U +
x_V) = \psi(f)x_U + x_V$ since $f$ is one on $V$. It follows that
$\sigma_T(x_V) = \sigma_T(-\psi(f)x_U) = \sigma_T(\psi(f)x_U)
\subset \sigma_T(x_U)$. Then we have the contradiction that
$\lambda_0 \in \sigma_T(x_V) \subset \sigma_T(x_U) \not \ni
\lambda_0$. Hence $\psi(f)x \neq 0$.
\end{proof}
Since every ${AC}(\sigma)$ operator is also an ${AC}$ operator, the
results of \cite{DW} give a representation theorem for compact
${AC}(\sigma)$ operators. Specifically, if $T \in B(X)$ is a compact
${AC}(\sigma)$ operator with nonzero eigenvalues $\{\mu_j\}$ and
corresponding Riesz projections $\{P_j\}$, then
\begin{equation}\label{comp-sum}
T = \sum_j \mu_j P_j
\end{equation}
where the sum converges in norm under a particular specified
ordering of the eigenvalues. Given a sequence of real numbers
$\{\mu_j\}$ and disjoint projections $\{P_j\} \subseteq B(X)$,
necessary and sufficient conditions are known which ensure that the
operator defined via (\ref{comp-sum}) is well-bounded (\cite[Theorem
3.3]{CD}). At present an analogous result for compact ${AC}(\sigma)$
operators is unknown. These questions are pursued more fully in
\cite{AD3} where, for example, various sufficient conditions for
(\ref{comp-sum}) to define a compact ${AC}(\sigma)$ operator are
given.
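By way of illustration (a sketch only, tying this back to the Hardy
space example above): for $x \in \mathbb{R}\setminus\mathbb{N}$ the compact
well-bounded resolvent considered there has the expansion
\[ R(x,A) = \sum_{k=0}^{\infty} \frac{1}{x-k}\, P_k, \]
where the $P_k$ are the Fourier-component projections. In this case a
summation-by-parts argument, using the uniform boundedness of the
partial sums $\sum_{k \le n} P_k$ on $H^p(\mathbb{D})$ for $1 < p < \infty$,
shows that the natural ordering of the eigenvalues already gives norm
convergence, although for $p \neq 2$ the series need not converge
unconditionally.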
\section{Spectral resolutions}
\label{lbl:SpecRes}
The theory of well-bounded operators is at its most powerful if one
adds the additional assumption that the functional calculus map for
$T$ is `weakly compact'. That is, for all $x \in X$, the map
$\psi_x: {AC}(\sigma(T)) \to X$, $f \mapsto \psi(f)x$ is weakly
compact. In this case $T$ admits an integral representation with
respect to a spectral family of projections $\{E(\mu)\}_{\mu \in
\mathbb{R}}$. The integration theory for spectral families allows one to
define
\[ f(T) = \widehat{\psi}(f)
= \int_{\sigma(T)}^\oplus f(\mu) \, dE(\mu) \]
for all $f \in {BV}(\sigma(T))$, giving an extended functional calculus
map. (This integral is more usually written as $\int_{J}^\oplus \mu
\, dE(\mu)$, where $J$ is some compact interval containing
$\sigma(T)$. We have written it in the above form to stress that the
value of the integral only depends on the values of $f$ on
$\sigma(T)$.) If $\psi$ is not weakly compact, then there may be no
spectral resolution consisting of projections on $X$. A suitable
family of projections on $X^*$, known as a decomposition of the
identity, does always exist, but the theory here is much less
satisfactory.
Obviously extending this theory to cover general ${AC}(\sigma)$
operators with a weakly compact functional calculus is highly
desirable. At present a full analogue of the well-bounded theory has
not been found, but we are able to show that each such operator does
admit a nice spectral resolution from which the operator may be
recovered. The following definition extends the definition for
well-bounded operators.
\begin{defn} Let $T \in B(X)$ be an ${AC}(\sigma)$ operator with
functional calculus map $\psi$. Then $T$ is said to be of type~(B)
if for all $x \in X$, the map $\psi_x: {AC}(\sigma) \to X$, $f
\mapsto \psi(f)x$ is weakly compact.
\end{defn}
Obviously every ${AC}(\sigma)$ operator on a reflexive Banach space
is of type~(B),
as is every scalar-type spectral operator on a
general Banach space (see \cite{K}). The weak compactness of the
functional calculus removes one of the potential complications with
studying ${AC}(\sigma)$ operators.
\begin{lem} \label{lbl:129}
Let $T \in B(X)$ have a weakly compact ${AC}(\sigma)$ functional
calculus. Then it has a unique splitting $T = R + i S$ where $R$ and
$S$ are commuting type (B) well-bounded operators.
\end{lem}
\begin{proof}
Recall that if we set $R = \psi({\rm Re}(\boldsymbol{\lambda}))$ and $S = \psi({\rm Im}(\boldsymbol{\lambda}))$
then $R$ and $S$ are commuting well-bounded operators. The
${AC}({\rm Re}(\sigma(T)))$ functional calculus for $R$, given by $f
\mapsto \psi(u(f))$, is clearly weakly compact. Hence $R$ is type
(B). Similarly $S$ is type (B). Uniqueness follows from Proposition
\ref{lbl:337}.
\end{proof}
If $T$ is a well-bounded operator of type~(B) with spectral family
$\{E(\mu)\}_{\mu \in \mathbb{R}}$, then, for each $\mu$, $E(\mu)$ is the
spectral projection for the interval $(-\infty,\mu]$. The natural
analogue of this in the ${AC}(\sigma)$ operator setting is to index
the spectral resolution by half-planes. Modelling the plane as
$\mathbb{R}^2$, each closed half-plane is specified by a unit vector
$\theta \in \mathbb{T}$ and a real number $\mu$:
\[ H(\theta,\mu) = \{ z \in \mathbb{R}^2 \,:\, z\cdot \theta \le \mu \} .\]
Let $\HP$ denote the set of all closed half-planes in $\mathbb{R}^2$. The
following provisional definition contains the minimal conditions one
would require of a spectral resolution for an ${AC}(\sigma)$
operator.
\begin{defn}\label{HPSF} Let $X$ be a Banach space. A half-plane spectral family on
$X$ is a family of projections $\{E(H)\}_{H \in \HP}$ satisfying:
\begin{enumerate}
\item $E(H_1)\, E(H_2) = E(H_2)\, E(H_1)$ for all $H_1,H_2 \in
\HP$;
\item there exists $K$ such that $\norm{E(H)} \le K$ for all $H
\in \HP$;
\item\label{HPSF-3} for all $\theta \in \mathbb{T}$,
$\{E(H(\theta,\mu))\}_{\mu \in \mathbb{R}}$ forms a spectral family of
projections;
\item\label{HPSF-4} for all $\theta \in \mathbb{T}$, if $\mu_1 < \mu_2$, then
$E(H(\theta,\mu_1))\, E(H(-\theta,-\mu_2)) = 0$.
\end{enumerate}
The radius of $\{E(H)\}$ is the (possibly infinite) value
\[ r(\{E(H)\}) = \inf \{ r \,:\,
\hbox{$E(H(\theta,\mu)) = I$ for all $\mu > r$} \}. \]
\end{defn}
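To see what the definition says in the simplest case, consider the
coordinate directions (we are still modelling the plane as $\mathbb{R}^2$):
\[ H((1,0),\mu) = \{ (x,y) \,:\, x \le \mu \}, \qquad
   H((0,1),\mu) = \{ (x,y) \,:\, y \le \mu \}, \]
so condition (\ref{HPSF-3}) requires, in particular, that the
projections attached to the half-planes to the left of each vertical
line form a spectral family in the classical sense, and condition
(\ref{HPSF-4}) says that the spectral projections for the disjoint
half-planes $\{x \le \mu_1\}$ and $\{x \ge \mu_2\}$ ($\mu_1 < \mu_2$)
annihilate each other.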
Suppose that $\sigma \subset \mathbb{R}^2$ is a nonempty compact set. Given
any unit direction vector $\theta$, let $\sigma_\theta = \{ z\cdot
\theta \,:\, z \in \sigma\} \subseteq \mathbb{R}$. Define the subalgebra of
all ${AC}(\sigma)$ functions which only depend on the component of
the argument in the direction $\theta$,
\[ {AC}_\theta(\sigma) = \{ f \in {AC}(\sigma) \,:\,
\hbox{there exists $u \in {AC}(\sigma_\theta)$ such that $f(z) =
u(z \cdot \theta)$}\} .\]
By Proposition~3.9 and Lemma~3.10 of \cite{AD}, there is a norm 1
isomorphism $U_\theta: {AC}(\sigma_\theta) \to {AC}_\theta(\sigma)$.
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator of type~(B), with
functional calculus map $\psi$. The algebra homomorphism
$\psi_\theta: {AC}(\sigma_\theta) \to B(X)$, $u \mapsto \psi(U_\theta
u)$ is clearly bounded and weakly compact. It follows then from the
spectral theorem for well-bounded operators of type~(B) (see, for
example, \cite{qBD}) that there exists a spectral family
$\{E(H(\theta,\mu))\}_{\mu \in \mathbb{R}}$, with $\norm{E(H(\theta,\mu))}
\le 2 \norm{\psi}$ for all $\mu$. We have thus constructed a
uniformly bounded family of projections $\{E(H)\}_{H \in \HP}$. To
show that this family is a half-plane spectral family it only
remains to verify (\ref{HPSF-3}) and (\ref{HPSF-4}).
Suppose then that $E_1 = E(\theta_1,\mu_1)$ and $E_2 =
E(\theta_2,\mu_2)$, where for brevity we write $E(\theta,\mu)$ for
$E(H(\theta,\mu))$. For $\mu \in \mathbb{R}$ and $\delta > 0$, let
$g_{\mu,\delta}: \mathbb{R} \to \mathbb{R}$ be the function which is $1$ on
$(-\infty,\mu]$, is $0$ on $[\mu+\delta,\infty)$ and which is linear
on $[\mu,\mu+\delta]$. Let $h_\delta = U_{\theta_1}
(g_{\mu_1,\delta})$ and $k_\delta = U_{\theta_2}(
g_{\mu_2,\delta})$. The proof of the spectral theorem for
well-bounded operators shows that $E_1 = \lim_{\delta \to 0^+}
\psi(h_\delta)$ and $E_2 = \lim_{\delta \to 0^+} \psi(k_\delta)$,
where the limits are taken in the weak operator topology in $B(X)$.
Thus, if $x \in X$ and $x^* \in X^*$,
\begin{align*}
\ipr<E_1 E_2 x,x^*>
&= \lim_{\delta \to 0^+} \ipr<\psi(h_\delta)\, E_2 x,x^*> \\
&= \lim_{\delta \to 0^+} \ipr<E_2 x, \psi(h_\delta)^*x^*> \\
&= \lim_{\delta \to 0^+} \left( \lim_{\beta \to 0^+}
\ipr<\psi(h_\delta) \psi(k_\beta) x,x^*> \right)\\
&=\lim_{\delta \to 0^+} \left( \lim_{\beta \to 0^+}
\ipr<\psi(k_\beta) \psi(h_\delta) x,x^*> \right)\\
&=\lim_{\delta \to 0^+} \ipr<\psi(h_\delta) x, E_2^*x^*> \\
&= \ipr<E_2 E_1 x,x^*>,
\end{align*}
and so $E_1 E_2 = E_2 E_1$.
Verifying (\ref{HPSF-4}) is similar. Fix $\theta \in \mathbb{T}$ and $\mu_1
< \mu_2$. Let $E_1 = E(\theta,\mu_1)$ and $E_2 = E(-\theta,-\mu_2)$.
Let $h_\delta = U_{\theta} (g_{\mu_1,\delta})$ and $k_\delta =
U_{-\theta}( g_{-\mu_2,\delta})$ so that $E_1 = \lim_{\delta \to
0^+} \psi(h_\delta)$ and $E_2 = \lim_{\delta \to 0^+}
\psi(k_\delta)$. The result follows by noting that for $\delta$
small enough, $h_\delta k_\delta = 0$.
We have shown then that $\{E(H)\}_{H \in \HP}$ is a half-plane
spectral family.
\smallskip
For $\theta \in \mathbb{T}$, the spectral family $\{E(\theta,\mu)\}_{\mu
\in \mathbb{R}}$ defines a well-bounded operator of type~(B)
\begin{equation}\label{T-theta}
T_\theta = \int_{\sigma_\theta} \mu \, dE(\theta,\mu).
\end{equation}
Note that, in particular, $r(T_\theta) \le r(T)$, where $r(\cdot)$
denotes the spectral radius. Since there exists $\theta \in \mathbb{T}$ for
which $r(T_\theta) = r(T)$, we have the following result.
\begin{prop} With $T$ and $\{E(H)\}$ as above, $r(\{E(H)\}) = r(T)$.
\end{prop}
For notational convenience, we shall identify the direction vector
$\theta \in \mathbb{R}^2$ with the corresponding complex number on the unit
circle. Thus, for example, we identify $(0,1)$ with $i$.
By Theorem~\ref{lbl:329} and Proposition~\ref{lbl:337} we have
that $T$ has the unique splitting into real and imaginary parts
\begin{equation}\label{recon-formula}
T = T_1
+ \,i\, T_i .
\end{equation}
That is, $T$ can be recovered from the half-plane spectral family
produced by the above construction. Indeed, if $\theta \in \mathbb{T}$,
then
\begin{equation}\label{theta-decomp}
T = \theta T_\theta + \,i\theta\, T_{i\theta}.
\end{equation}
Note that if we define $f_\theta \in {AC}(\sigma)$ by $f_\theta(z) =
z\cdot \theta$, then $T_\theta = \psi(f_\theta) = f_\theta(T)$. In
particular, if $\omega = (1/\sqrt{2},1/\sqrt{2})$, then $f_\omega =
(f_1 + f_i)/\sqrt{2}$, and hence
\[ T_\omega = \psi(f_\omega) = (T_1+T_i)/\sqrt{2}. \]
This proves the following proposition. Note that in general the sum
of two commuting well-bounded operators need not be well-bounded.
\begin{prop} Let $T$ be an ${AC}(\sigma)$ operator of type~(B), with
unique splitting $T = R+iS$. Then $R+S$ is also well-bounded.
\end{prop}
\begin{quest}
Suppose that $R$ and $S$ are commuting well-bounded operators whose
sum is well-bounded. Is $R+iS$ an ${AC}(\sigma)$ operator?
\end{quest}
It is clear that given any half-plane spectral family $\{E(H)\}_{H
\in \HP}$ with finite radius, Equation~(\ref{recon-formula}) defines
$T \in B(X)$ which is an ${AC}$~operator in the sense of Berkson and
Gillespie. It is not clear however, that $T$ need be an
${AC}(\sigma)$ operator. In particular, if we define $T_\theta$ via
Equation~(\ref{T-theta}), then it is not known whether the identity
(\ref{theta-decomp}) holds.
\begin{quest}
Is there a one-to-one correspondence between ${AC}(\sigma)$ operators
of type~(B) and half-plane spectral families with finite radius? If
not, can one refine Definition~\ref{HPSF} so that such a
correspondence exists?
\end{quest}
\section{Extending the functional calculus}
\label{lbl:454}
Given an ${AC}(\sigma)$ operator of type~(B) and its associated
half-plane spectral family (as constructed above), it is natural to
ask whether one can develop an integration theory which would enable
the functional calculus to be extended to a larger algebra than
${AC}(\sigma)$.
The spectral family associated to a well-bounded operator $T$ of
type~(B) allows one to associate a bounded projection with any set
of the form $\bigcup_{j=1}^n \sigma(T) \cap I_j$, where
$I_1,\dots,I_n$ are disjoint intervals of $\mathbb{R}$. Let $\sigma =
\sigma(T) \subset \mathbb{R}$ and let $\NP(\sigma(T))$ denote the algebra
of all such sets. It is easy to check the following.
\begin{thm} \label{lbl:777}
Let $T \in B(X)$ be a type (B) well-bounded operator with functional
calculus $\psi$. Then there is a unique map $E : \NP(\sigma(T))
\rightarrow {\rm Proj}(X)$ satisfying the following
\begin{enumerate}
\item $E(\emptyset) = 0$, $E(\sigma(T)) = I$,
\item $E(A \cap B) = E(A)E(B) = E(B)E(A)$ for all $A, B \in \NP(\sigma(T))$,
\item $E(A \cup B) = E(A) + E(B) - E(A \cap B)$
for all $A, B \in \NP(\sigma(T))$,
\item\label{NP-norm-bound}
$\norm{E(A)} \leq \norm{\psi} \normbvj{\chi_A}$
for all $A \in \NP(\sigma(T))$,
\item if $S \in B(X)$ is such that $T S = S T$ then $E(A) S = S E(A)$
for all $A \in \NP(\sigma(T))$,
\item ${\rm Range}(E(A)) = \set{x \in X : \sigma_T(x) \subseteq A}$.
\end{enumerate}
\end{thm}
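For orientation, here is a sketch of what $E$ does on the simplest
sets, using only the standard properties of a spectral family: for
$\mu_1 < \mu_2$,
\[ E\bigl(\sigma(T) \cap (-\infty,\mu_1]\bigr) = E(\mu_1), \qquad
   E\bigl(\sigma(T) \cap (\mu_1,\mu_2]\bigr) = E(\mu_2) - E(\mu_1), \]
and the latter is indeed a projection since $E(\mu_1)E(\mu_2) =
E(\mu_2)E(\mu_1) = E(\mu_1)$.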
For general ${AC}(\sigma)$ operators, the natural algebra of sets is
that generated by the closed half-planes. This algebra has been
studied in various guises, particularly in the setting of
computational geometry. The sets that can be obtained by starting
with closed half-planes and applying a finite number of unions,
intersections and set complements are sometimes known as Nef
polygons. The set of Nef polygons in the plane, $\NP$, clearly
contains all polygons, lines and points in the plane. For more
information about Nef polygons, or more generally their
$n$-dimensional analogues, Nef polyhedra, we refer the reader to
\cite{BIC}, \cite{HKM} or \cite{Nef}.
Let $\sigma$ be a nonempty compact subset of $\mathbb{C}$. Define
\[ \NP(\sigma) = \{ A \,:\, \hbox{$A = \sigma \cap P$ for some $P
\in \NP$}\}. \]
It is clear that given an $AC(\sigma)$ operator of type~(B), one may
use the half-plane spectral family constructed in the previous
section to associate a projection $E(A) \in B(X)$ with each set $A$
\in \NP(\sigma)$. The major obstacle in developing a suitable
integration theory in this setting is in providing an analogue of
condition (\ref{NP-norm-bound}) in Theorem~\ref{lbl:777}.
Note that if $A \in \NP(\sigma)$, then $\chi_A \in {BV}(\sigma)$.
Rather than forming $E(A)$ by a finite combination of algebra
operations, one might try to define $E(A)$ directly as we did when
$A$ was a half-plane. That is, one may try to write
\[ E(A) = \mathrm{WOT}\hbox{-}\lim_\alpha \psi(h_\alpha) \]
where $\{h_\alpha\}$ is a suitable uniformly bounded net of
functions in ${AC}(\sigma)$ which approximates $\chi_A$ pointwise.
It is shown in \cite{bA} that if $A$ is a closed polygon then this
may be done, but only under the bound $\norm{h_\alpha} \le V_A$.
Here $V_A$ is a constant depending on $A$. This allows one to prove
a weaker version of Theorem~\ref{lbl:777}, with condition
(\ref{NP-norm-bound}) replaced by $\norm{E(A)} \le V_A \norm{\psi}$.
It remains an open question as to whether one can do this with $V_A
\le 2 \norm{\chi_A}$. However, if $A$ is a closed convex polygon
contained in the interior of $\sigma$, then this is possible.
\begin{quest} Does every ${AC}(\sigma)$ operator of type~(B) admit a
${BV}(\sigma)$ functional calculus?
\end{quest}
It might be noted in this regard that all the examples of
${AC}(\sigma)$ operators of type~(B) given in Section~\ref{lbl:id-s3}
do admit such a functional calculus extension.
\section{Introduction}\label{intro}
A Banach space operator is said to be well-bounded if it admits a functional calculus for ${AC}(J)$, the algebra of absolutely continuous functions on some compact interval $J \subseteq \mathbb{R}$. The motivation for the introduction of this class was to provide a theory which extended the spectral representation results which apply to self-adjoint operators to Banach space operators which may possess a conditionally rather than unconditionally convergent spectral expansion. Smart and Ringrose
\cite{dS,jR,jR2} showed that well-bounded operators always have an integral representation with respect to a family of projections known as a
decomposition of the identity. The usefulness of this most general form of the theory is somewhat restricted however since the decomposition of the
identity acts on the dual of the underlying Banach space and is in general not unique (see \cite{hD} for examples of this non-uniqueness).
In \cite{BD} a subclass of the well-bounded operators, the
well-bounded operators of type~(B), were introduced. The type~(B) well-bounded
operators, which include those well-bounded operators acting on
reflexive spaces, possess a theory of integration with respect to a
family of projections which act on the original space. This family
of projections, known as the spectral family, is uniquely determined
by the operator. The integration theory provides an extension of the
${AC}(J)$ functional calculus to a ${BV}(J)$ functional calculus where
${BV}(J)$ is the algebra of functions of bounded variation on the
interval $J$.
As is the case for a self-adjoint operator, the spectrum of a well-bounded operator must lie in the real line.
The main obstacle to overcome if one wishes to extend the theory
of well-bounded operators to cover operators whose spectrum may
not lie in the real line, is that of obtaining a suitable
concept of bounded variation for functions defined on a subset
of the plane. Many such concepts exist in the literature. In
\cite{BG}, Berkson and Gillespie used a notion of variation
ascribed to Hardy and Krause to define the ${AC}$ operators.
These are the operators which have an ${AC}_{HK}(J \times K)$
functional calculus where ${AC}_{HK}(J \times K)$ is the algebra
of absolutely continuous functions in the sense of Hardy and
Krause defined on a rectangle $J \times K \subset \mathbb{R}^2 \cong
\mathbb{C}$. They showed \cite[Theorem 5]{BG} that an operator $T \in
B(X)$ is an ${AC}$ operator if and only if $T = R + i S$ where
$R$ and $S$ are commuting well-bounded operators. In \cite{BDG}
it is shown that this splitting is not necessarily unique.
Furthermore even if $T$ is an ${AC}$ operator on a Hilbert space
$H$, it does not necessarily follow that $\alpha T$ is an ${AC}$
operator for all $\alpha \in \mathbb{C}$. On the positive side,
the ${AC}$ operators include the trigonometrically well-bounded
operators which have found important applications in harmonic analysis and
differential equations (see \cite{BG2} and \cite{BG3}). An
operator $T \in B(X)$ is said to be trigonometrically
well-bounded if there exists a type~(B) well-bounded operator $A
\in B(X)$ such that
$T = \exp(i A)$.
One of the problems in the theory of well-bounded and ${AC}$ operators is that
the functional calculus of these operators is based on an algebra of
functions whose domain is either an interval in the real axis or a rectangle
in the plane. From an operator theory point of view, a much more natural
domain is the spectrum, or at least a neighbourhood of the spectrum.
Secondly, as we have already mentioned, the class of ${AC}$ operators is not
closed under multiplication by scalars. This is also undesirable,
since if one has structural information about an
operator $T$, this clearly gives similar information about $\alpha T$. To
overcome these problems, in \cite{AD} we defined ${AC}(\sigma)$, the
Banach algebra of absolutely continuous functions whose domain
is some compact set $\sigma$ in
the plane. In this paper we look at those operators which have an
${AC}(\sigma)$ functional calculus, which we call ${AC}(\sigma)$ operators.
Section~2 summarizes some of the main results from \cite{AD}
concerning the function algebras ${BV}(\sigma)$ and
${AC}(\sigma)$. The question as to how one may patch together absolutely
continuous functions defined on different domains is addressed in Section~3.
These results will be needed in order to show that ${AC}(\sigma)$ operators are
decomposable in the sense of \cite{CF2}.
In Section~4 we give some results which illustrate the extent of the
class of ${AC}(\sigma)$ operators. In particular, we note that this
class contains all scalar-type spectral operators, all well-bounded
operators and all trigonometrically well-bounded operators.
In Section~\ref{lbl:309} we develop some of the main spectral
properties of ${AC}(\sigma)$ operators. Here we show that the
${AC}(\sigma)$ operators form a proper subclass of the ${AC}$
operators and hence such operators have a splitting into real and
imaginary well-bounded parts. The natural conjecture that every
${AC}(\sigma)$ operator is in fact an ${AC}(\sigma(T))$ operator
remains open. Resolving this question depends on being able to
answer some difficult questions about the relationships between
${AC}(\sigma_1)$ and ${AC}(\sigma_2)$ for different compact sets
$\sigma_1$ and $\sigma_2$. These issues are discussed in
Section~\ref{support}.
In Section~\ref{lbl:SpecRes} we examine the case where the
${AC}(\sigma)$ functional calculus for $T$ is weakly compact. In this
case one can construct a family of spectral projections associated
with $T$ which is rich enough to recover $T$ via an integration
process. This `half-plane spectral family' is a generalization of
the spectral family associated with a well-bounded operator of
type~(B). A full integration theory for this class of operators is,
however, yet to be developed. In particular, it is not known whether
one can always extend a weakly compact ${AC}(\sigma)$ functional
calculus to a ${BV}(\sigma)$ functional calculus. The final section
discusses some of the progress that has been obtained in pursuing
such a theory, and lists some of the major obstacles that remain.
Throughout this paper let $\sigma \subset \mathbb{C}$ be compact and
non-empty. For a Banach space $X$ we shall denote the bounded linear
operators on $X$ by $B(X)$ and the bounded linear projections on $X$
by ${\rm Proj}(X)$. Given $T \in B(X)$ with the single valued extension
property (see \cite{nD}) and $x \in X$ we denote the local spectrum
of $x$ (for $T$) by $\sigma_T(x)$. We shall write $\boldsymbol{\lambda}$ for the
identity function $\boldsymbol{\lambda} : \sigma \rightarrow \mathbb{C}, z \mapsto z$.
We would like to thank the referee for their careful reading of the manuscript.
\section{${BV}(\sigma)$ and ${AC}(\sigma)$}\label{bv-definitions}
We shall briefly look at ${BV}(\sigma)$ and ${AC}(\sigma)$. In
particular we look at how two dimensional variation is defined. More
details may be found in \cite{AD}.
To define two dimensional variation we first need to look at
variation along curves. Let $\Gamma = C([0, 1], \mathbb{C})$ be the set of
curves in the plane. Let $\Gamma_L \subset \Gamma$ be the curves
which are piecewise line segments. Let $S = \set{z_i}_{i=1}^n
\subset \mathbb{C}$. We write $\Pi(S) \in \Gamma_L$ for the (uniform speed)
curve consisting of line segments joining the vertices at $z_1, z_2,
\dots, z_n$ (in the given order). For $\gamma \in \Gamma$ we say
that $\set{s_i}_{i=1}^n \subset \sigma$ is a \emph{partition of
$\gamma$ over $\sigma$} if there exists a partition
$\set{t_i}_{i=1}^n$ of $[0, 1]$ such that $t_1 \leq t_2 \leq \dots
\leq t_n$ and such that $s_i = \gamma(t_i)$ for all $i$. We shall
denote the partitions of $\gamma$ over $\sigma$ by $\Lambda(\gamma,
\sigma)$. For $\gamma \in \Gamma$ and $S \in \Lambda(\gamma,
\sigma)$ we denote by $\gamma_S$ the curve $\Pi(S) \in \Gamma_L$.
The variation along $\gamma \in \Gamma$ for a function $f : \sigma
\rightarrow \mathbb{C}$ is defined as
\begin{equation} \label{lbl:298}
\cvar(f, \gamma) = \sup_{\set{s_i}_{i=1}^n \in \Lambda(\gamma,
\sigma)} \sum_{i=1}^{n-1} \abs{f(s_{i+1}) - f(s_i)}.
\end{equation}
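For example (immediately from the definitions): if $\sigma = [a,b]
\subset \mathbb{R}$ and $\gamma(t) = a + t(b-a)$, then the partitions of
$\gamma$ over $\sigma$ are exactly the finite nondecreasing sequences
in $[a,b]$, and so
\[ \cvar(f, \gamma) = \sup_{a \le s_1 \le \dots \le s_n \le b}
   \sum_{i=1}^{n-1} \abs{f(s_{i+1}) - f(s_i)}, \]
which is the classical variation of $f$ over $[a,b]$ (compare
Proposition~\ref{lbl:199} below).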
To each curve $\gamma \in \Gamma$ we define a weight factor $\rho$.
For $\gamma \in \Gamma$ and a line $l$ we let $\vf(\gamma, l)$
denote the number of times that $\gamma$ crosses $l$ (for a precise
definition of a crossing see Section~3.1 of \cite{AD}). Set
$\vf(\gamma)$ to be the supremum of $\vf(\gamma, l)$ over all lines
$l$. We set $\rho(\gamma) = \frac{1}{\vf(\gamma)}$. Here we take the
convention that if $\vf(\gamma) = \infty$ then $\rho(\gamma) = 0$. We
can extend the definition of $\rho$ to include functions in $C[a,
b]$ in the obvious way.
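To fix ideas (using the crossing conventions of \cite[Section
3.1]{AD}, under which a curve lying inside a line is counted as
entering it once): for a two-point list $S = \{z, w\}$ with $z \neq
w$, the segment $\gamma_S$ crosses any line at most once, so
\[ \vf(\gamma_S) = 1, \qquad \rho(\gamma_S) = 1 . \]
In particular, by Proposition~\ref{Gamma-L} below, $\var(f,\sigma)
\ge \abs{f(z) - f(w)}$ for all $z, w \in \sigma$.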
The two dimensional variation of a function $f : \sigma
\rightarrow \mathbb{C}$ is defined to be
\begin{equation} \label{lbl:994}
\var(f, \sigma) = \sup_{\gamma \in \Gamma}
\rho(\gamma) \cvar(f, \gamma).
\end{equation}
We have the following properties of two dimensional variation which
were shown in \cite{AD}.
\begin{prop}\label{Gamma-L}
Let $\sigma \subseteq \mathbb{C}$ be compact, and suppose that $f: \sigma
\to \mathbb{C}$. Then
\begin{align*}
\var(f,\sigma)
&= \sup_{\gamma \in \Gamma_L} \rho(\gamma) \cvar(f, \gamma)
\\
& = \sup \Bigl\{ \rho(\gamma_S) \sum_{i=1}^{n-1} \abs{f(s_{i+1}) - f(s_i)}
\,:\, \hbox{$S = \set{s_i}_{i=1}^n \subseteq \sigma$}
\Bigr\}.
\end{align*}
\end{prop}
\begin{prop} \label{lbl:541}
Let $\sigma_1 \subset \sigma \subset \mathbb{C}$ both be compact. Let $f,
g : \sigma \rightarrow \mathbb{C}$, $k \in \mathbb{C}$. Then
\begin{enumerate}
\item $\var(f + g, \sigma) \leq \var(f, \sigma) + \var(g, \sigma)$,
\item\label{item2} $\var(f g, \sigma) \leq \norminf{f} \var(g, \sigma)
+ \norminf{g} \var(f, \sigma)$,
\item $\var(k f, \sigma) = \abs{k} \var(f, \sigma)$,
\item $\var(f, \sigma_1) \leq \var(f, \sigma)$.
\end{enumerate}
\end{prop}
For $f : \sigma \rightarrow \mathbb{C}$ set
\begin{equation} \label{lbl:268}
\normbv{f} = \norminf{f} + \var(f, \sigma).
\end{equation}
The functions of bounded variation with domain $\sigma$ are
defined to be
\begin{equation*} \label{lbl:415}
{BV}(\sigma) = \set{f : \sigma \rightarrow \mathbb{C} : \normbv{f} < \infty}.
\end{equation*}
To aid the reader we list here some of the main results from
\cite{AD} and \cite{AD2}. The affine invariance of these algebras
(Theorem \ref{lbl:847} and Proposition \ref{lbl:408}) is one of the
main features of this theory and will be used regularly without
comment.
\begin{prop} \label{lbl:199}
If $\sigma = [a, b]$ is an interval then the above definition of
variation agrees with the usual definition of variation. Hence the
above definition of ${BV}(\sigma)$ agrees with the usual definition
of ${BV}[a, b]$ when $\sigma = [a, b]$.
\end{prop}
\begin{thm} \label{lbl:333}
Let $\sigma \subset \mathbb{C}$ be compact. Then ${BV}(\sigma)$ is a Banach
algebra using the norm given in Equation \eqref{lbl:268}.
\end{thm}
\begin{thm} \label{lbl:847}
Let $\alpha, \beta \in \mathbb{C}$ and suppose that $\alpha \neq 0$. Then
${BV}(\sigma) \cong {BV}( \alpha \sigma + \beta)$.
\end{thm}
\begin{lem}
Let $f : \sigma \rightarrow \mathbb{C}$ be a Lipschitz function with
Lipschitz constant $L(f) = \sup_{z, w \in \sigma,\, z \neq w} \abs{\frac{f(z) -
f(w)}{z - w}}$. Then $\var(f, \sigma) \leq L(f) \var(\boldsymbol{\lambda}, \sigma)$.
Hence $f \in {BV}(\sigma)$.
\end{lem}
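For example, $\boldsymbol{\lambda}$ and $\overline{\boldsymbol{\lambda}}$ are Lipschitz with constant
$1$, and any polynomial in $z$ and $\bar{z}$ is Lipschitz on the
compact set $\sigma$, so
\[ p(z, \bar{z}) = \sum_{j,k} c_{jk}\, z^j \bar{z}^k
   \qquad\Longrightarrow\qquad
   \var(p, \sigma) \le L(p)\, \var(\boldsymbol{\lambda}, \sigma) < \infty, \]
where $L(p)$ is the Lipschitz constant of $p$ on $\sigma$; hence
every such polynomial lies in ${BV}(\sigma)$. By the definition
below, these functions are in fact absolutely continuous.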
We define ${AC}(\sigma)$ as the closed subalgebra of ${BV}(\sigma)$
generated by the functions $1$, $\boldsymbol{\lambda}$ and $\overline{\boldsymbol{\lambda}}$. (Note
that $\boldsymbol{\lambda}$ and $\overline{\boldsymbol{\lambda}}$ are always in ${BV}(\sigma)$.) We
call functions in ${AC}(\sigma)$ the \emph{absolutely continuous
functions with respect to $\sigma$}. By Proposition \ref{lbl:199}
this coincides with the usual notion of absolute continuity if
$\sigma = [a, b] \subset \mathbb{R}$ is an interval. In \cite{AD} the
following properties of ${AC}(\sigma)$ are shown.
\begin{prop} \label{lbl:996}
Let $\sigma = [a, b]$ be a compact interval. Let $g \in
{BV}(\sigma) \cap C(\sigma)$. Suppose that $\rho(g) > 0$. Then
$\normbv{f \circ g} \leq \frac{1}{\rho(g)}
\norm{f}_{{BV}(g(\sigma))}$ for all $f \in {BV}(g(\sigma))$.
\end{prop}
\begin{prop} \label{lbl:408}
Let $\alpha, \beta \in \mathbb{C}$ and suppose that $\alpha \neq 0$. Then
${AC}(\sigma) \cong {AC}( \alpha \sigma + \beta)$.
\end{prop}
\begin{prop} \label{lem:ac compact:4590}
If $f \in {AC}(\sigma)$ and $f(z) \ne 0$ on $\sigma$ then
$\frac{1}{f} \in {AC}(\sigma)$. Indeed, if $M = \ds \inf_{z \in \sigma} |f(z)|$, then
$\norm{1/f}_{{AC}(\sigma)} \le \ds \frac{1}{M} + \frac{\var(f,\sigma)}{M^2}$.
\end{prop}
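The variation estimate here comes from an elementary calculation (a
sketch; note that $M > 0$ since $f$ is continuous and non-vanishing
on the compact set $\sigma$): for $z, w \in \sigma$,
\[ \Bigl| \frac{1}{f(z)} - \frac{1}{f(w)} \Bigr|
   = \frac{\abs{f(w) - f(z)}}{\abs{f(z)}\,\abs{f(w)}}
   \le \frac{\abs{f(z) - f(w)}}{M^2}, \]
so every sum appearing in the definition of $\var(1/f,\sigma)$ is
dominated by $1/M^2$ times the corresponding sum for $f$, giving
$\var(1/f,\sigma) \le \var(f,\sigma)/M^2$, while $\norminf{1/f} \le 1/M$.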
We shall also need some properties of ${AC}(\sigma)$ and
${BV}(\sigma)$ which were not included in \cite{AD}.
\begin{prop} ${BV}(\sigma)$ is a lattice. If $f, g \in {BV}(\sigma)$, then
\[ \hbox{$\normbv{f \lor g} \le \normbv{f} + \normbv{g}$ and
$\normbv{f \land g} \le \normbv{f} + \normbv{g}$.} \]
\end{prop}
\begin{proof}
Suppose that $\gamma \in \Gamma$ and that $\{s_i\}_{i=1}^n \in
\Lambda(\gamma,\sigma)$. Note that for any $a,a',b,b'$,
\begin{equation}\label{max-inequality}
|(a \lor a') - (b \lor b')|
\le |(a \lor a') - (b \lor a')| + |(b \lor a') - (b \lor b')|
\le |a-b| + |a'-b'|
\end{equation}
and so
\[
\sum_{i=1}^{n-1} |(f \lor g)(s_{i+1}) - (f \lor g)(s_i)|
\le \sum_{i=1}^{n-1} |f(s_{i+1}) - f(s_i)| + |g(s_{i+1}) - g(s_i)|. \]
Thus
$ \cvar(f \lor g,\gamma) \le \cvar(f ,\gamma) + \cvar(g,\gamma)
$
and so
\begin{align*}
\normbv{f \lor g}
&= \norm{f \lor g}_\infty + \sup_\gamma \rho(\gamma) \cvar(f \lor g,\gamma)
\\
&\le \norm{f}_\infty + \norm{g}_\infty
+ \sup_\gamma \rho(\gamma) \{ \cvar(f ,\gamma) + \cvar(g,\gamma) \} \\
&\le \norm{f}_\infty + \sup_\gamma \rho(\gamma) \cvar(f,\gamma) +
\norm{g}_\infty + \sup_\gamma \rho(\gamma) \cvar(g,\gamma)\\
&= \normbv{f} + \normbv{g}.
\end{align*}
The proof for $f \land g$ is almost identical.
\end{proof}
Note that ${BV}(\sigma)$ is not a \emph{Banach} lattice, even in the
case $\sigma = [0,1]$.
The set $\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ of functions on $\sigma$ which are
continuous and piecewise triangularly planar relative to $\sigma$
was introduced in \cite{AD}. It is easy to see that $\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$
is a sublattice of ${BV}(\sigma)$.
\begin{cor} ${AC}(\sigma)$ is a sublattice of ${BV}(\sigma)$.
\end{cor}
\begin{proof}
It suffices to show that if $f,g \in {AC}(\sigma)$, then $f \lor g
\in {AC}(\sigma)$. Suppose then that $f,g \in {AC}(\sigma)$. Then
there exist sequences $\{f_n\},\{g_n\} \subseteq \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ such
that $f_n \to f$ and $g_n \to g$ in ${BV}(\sigma)$. As
$\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ is a lattice, $f_n \lor g_n \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ for
each $n$ and, using (\ref{max-inequality}), one can see that $(f_n
\lor g_n) \to (f \lor g)$. This implies that $f \lor g$ lies in the
closure of $\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$, namely ${AC}(\sigma)$.
\end{proof}
If one wishes to apply the results of local spectral theory, it is
important that ${AC}(\sigma)$ forms an admissible algebra of
functions in the sense of Colojoar{\v a} and Foia{\c s} \cite{CF2}.
The first step is to show that ${AC}(\sigma)$ admits partitions of
unity.
\begin{lem} Let $\sigma \subset \mathbb{C}$ be compact. Then ${AC}(\sigma)$ is a
normal algebra. That is, given any finite open cover
$\{U_i\}_{i=1}^n$ of $\sigma$, there exist functions
$\{f_i\}_{i=1}^n \subseteq {AC}(\sigma)$ such that
\begin{enumerate}
\item $f_i(\sigma) \subset [0,1]$, for all $1 \le i \le n$,
\item $\mathrm{supp} f_i \subseteq U_i$ for all $1 \le i \le n$,
\item $\sum_{i=1}^n f_i = 1$ on $\sigma$.
\end{enumerate}
\end{lem}
\begin{proof} This follows from the fact that $C^\infty(\sigma)
\subseteq {AC}(\sigma)$ \cite[Proposition 4.7]{AD}. More precisely, let $\{U_i\}_{i=1}^n$ be a finite open cover
of $\sigma$ and let $U = \cup_{i=1}^n U_i$. Choose an open set $V$ with $\sigma \subseteq V \subseteq
\overline{V} \subseteq U$. Then there exist non-negative $f_1,\dots,f_n \in C^\infty(V)$ such that $\sum_{i=1}^n
f_i = 1$ on $V$ (and hence on $\sigma$), and $\mathrm{supp} f_i \subseteq U_i$ for all $1 \le i \le n$ (see
\cite[page 44]{LM}).
\end{proof}
For $f \in {AC}(\sigma)$ and $\xi \not\in \mathrm{supp} f$, define
\[ f_\xi(z) = \begin{cases}
\frac{f(z)}{z-\xi}, & z \in \sigma\setminus \{\xi\}, \\
0, & z \in \sigma \cap \{\xi\}.
\end{cases}
\]
Recall that an algebra $\mathcal{A}$ of functions (defined on some
subset of $\mathbb{C}$) is admissible if it contains the polynomials, is
normal, and $f_\xi \in \mathcal{A}$ for all $f \in \mathcal{A}$ and
all $\xi \not\in \mathrm{supp} f$.
\begin{prop} \label{lbl:656}
Let $\sigma \subset \mathbb{C}$ be compact. Then ${AC}(\sigma)$ is an
admissible inverse-closed algebra.
\end{prop}
\begin{proof} All that remains is to show that the last property
holds in ${AC}(\sigma)$. Suppose then that $f \in {AC}(\sigma)$ and
$\xi \not\in \mathrm{supp} f$. Given that $\mathrm{supp} f$ is
compact, there exists $h \in C^\infty(\mathbb{C})$ such that $h(z) =
(z-\xi)^{-1}$ on $\mathrm{supp} f$ and $h(z) \equiv 0$ on some
neighbourhood of $\xi$. Again using \cite[Proposition 4.7]{AD} we
have that $h|\sigma \in {AC}(\sigma)$ and hence that $f_\xi = f h \in
{AC}(\sigma)$.
\end{proof}
\section{Patching theorems}
The relationship between $\var(f,\sigma_1)$, $\var(f,\sigma_2)$ and
$\var(f,\sigma_1 \cup \sigma_2)$ is in general rather complicated.
The following theorem will allow us to patch together functions
defined on different sets.
\begin{thm}\label{variation-join}
Suppose that $\sigma_1, \sigma_2 \subseteq \mathbb{C}$ are
nonempty compact sets which are disjoint except at their boundaries. Suppose
that $\sigma = \sigma_1 \cup \sigma_2$ is convex. If $f:\sigma \to \mathbb{C}$,
then
\[ \max\{\var(f,\sigma_1),\var(f,\sigma_2)\}
\le \var(f,\sigma)
\le \var(f,\sigma_1) + \var(f,\sigma_2) \]
and hence
\[ \max\{\norm{f}_{{BV}(\sigma_1)},\norm{f}_{{BV}(\sigma_2)} \}
\le \normbv{f}
\le \norm{f}_{{BV}(\sigma_1)} + \norm{f}_{{BV}(\sigma_2)}. \]
Thus, if $f|\sigma_1 \in {BV}(\sigma_1)$ and $f|\sigma_2 \in
{BV}(\sigma_2)$, then $f \in {BV}(\sigma)$.
\end{thm}
\begin{proof} The left-hand inequalities are obvious.
Note that given any points $z \in \sigma_1 \setminus \sigma_2$ and
$w \in \sigma_2 \setminus \sigma_1$ there exists a point $u$ on the
line joining $z$ and $w$ with $u$ in $\sigma_1 \cap \sigma_2$. To
see this, let $\alpha(t) = (1-t)z + t w$ and let $t_0 = \inf\{ t \in
[0,1] \,:\, \alpha(t) \in \sigma_2\}$. By the convexity of $\sigma$,
$\alpha(t) \in \sigma_1$ for all $0 \le t < t_0$. The closedness of
the subsets then implies that $u = \alpha(t_0) \in \sigma_1 \cap
\sigma_2$.
Suppose then that $S = \{z_0,z_1,\dots,z_n\} \subseteq \sigma$. For
any $j$ for which $z_j$ and $z_{j+1}$ lie in different subsets, use
the above remark to expand $S$ by adding an extra vertex on the
line joining $z_j$ and $z_{j+1}$ which lies in both $\sigma_1$ and
$\sigma_2$. (Note that the addition of these extra vertices does not
change the value of $\rho(\gamma_S)$ and can only increase the
variation of $f$ between the vertices.) Write the vertices of
$\gamma_S$ which lie in $\sigma_1$ as $S_1 =
\{z_0^1,z_1^1,\dots,z_{k_1}^1\}$ and those which lie in $\sigma_2$
as $S_2=\{z_0^2,z_1^2,\dots,z_{k_2}^2\}$, preserving the original
ordering. Note that for every $j$, $\{z_j,z_{j+1}\}$ is a subset of at
least one of the sets $S_1$ and $S_2$. Thus
\[ \sum_{j=1}^n |f(z_j)-f(z_{j-1})| \le
\sum_{i=1}^2 \sum_{j=1}^{k_i} |f(z_j^i)-f(z_{j-1}^i)|
\]
where an empty sum is interpreted as having value $0$. Recall that
if $S' \subseteq S$ then $\rho(\gamma_{S'}) \ge \rho(\gamma_S)$.
Thus
\begin{align*}
\rho(\gamma_S) \sum_{j=1}^n |f(z_j)-f(z_{j-1})|
& \le \sum_{i=1}^2 \rho(\gamma_{S_i})
\sum_{j=1}^{k_i} |f(z_j^i)-f(z_{j-1}^i)| \\
& \le \sum_{i=1}^2 \rho(\gamma_{S_i}) \cvar(f,\gamma_{S_i}) \\
& \le \sum_{i=1}^2 \var(f,\sigma_i).
\end{align*}
The result follows on taking the supremum over finite $S \subseteq
\sigma$.
\end{proof}
Note that the convexity of $\sigma$ is vital in
Theorem~\ref{variation-join}. Without this condition it is easy to construct
examples where $\var(f,\sigma_1) + \var(f,\sigma_2) = 0$
for a non-constant function $f$, as the example below shows.
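For instance (a degenerate but legitimate choice of the $\sigma_i$,
which are trivially disjoint except at their boundaries), take
\[ \sigma_1 = \{0\}, \qquad \sigma_2 = \{1\}, \qquad
   f(0) = 0, \quad f(1) = 1 . \]
Then $\var(f,\sigma_1) = \var(f,\sigma_2) = 0$, since singletons
admit no nontrivial partitions, while $\var(f,\sigma_1 \cup
\sigma_2) = 1$.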
Later, we will need to show that we can patch two absolutely
continuous functions together. For notational simplicity, the following lemma
is stated in terms of specific sets $\sigma_1$ and $\sigma_2$, but the affine invariance result (Proposition~\ref{lbl:408}) implies that this immediately also applies to any two rectangles that meet along an edge.
\begin{lem}\label{pasting-lemma}
Suppose that $\sigma_1 = [0,1] \times [0,1]$, that $\sigma_2 = [1,2]
\times [0,1]$ and that $\sigma = \sigma_1 \cup \sigma_2$. Suppose
that $f: \sigma \to \mathbb{C}$ and that $f_i = f|\sigma_i$ ($i=1,2$). If
$f_1 \in {AC}(\sigma_1)$ and $f_2 \in {AC}(\sigma_2)$, then $f \in
{AC}(\sigma)$ and
\[ \normbv{f}\le \norm{f_1}_{{BV}(\sigma_1)} +
\norm{f_2}_{{BV}(\sigma_2)}. \]
\end{lem}
\begin{proof} By replacing $f$ with the function
$(x,y) \to f(x,y)-f(1,y)$ we may assume that $f|(\sigma_1 \cap
\sigma_2) = 0$. (Note that $(x,y) \to f(1,y)$ is always in
${AC}(\sigma)$.)
Suppose first that $f_2 = 0$. Fix $\epsilon > 0$. As $f_1 \in
{AC}(\sigma_1)$ there exists $p \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma_1)$ with
$\norm{f_1-p}_{{BV}(\sigma_1)} < \epsilon/4$. By the definition of
$\hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma_1)$ there is a triangulation $\{A_i\}_{i=1}^n$ of
$\sigma_1$ such that $p|A_i$ is planar (see \cite[Section 4]{AD}).
Note that $b(y)= p(1,y)$ is a piecewise linear function on $[0,1]$
with $\norm{b}_{{BV}[0,1]} = \norm{f_1 - p}_{{BV}(\sigma_1 \cap
\sigma_2)} \le \norm{f_1 - p}_{{BV}(\sigma_1)} < \epsilon/4$. Extend $p$ to $\sigma_2$ by setting
$p(x,y) = b(y)$. Note that $p \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ and by
\cite[Proposition 4.4]{AD}, $\norm{p|\sigma_2}_{{BV}(\sigma_2)} <
\epsilon/4$. Thus, using Theorem~\ref{variation-join},
\[ \norm{f - p}_{{BV}(\sigma)}
\le \norm{f - p}_{{BV}(\sigma_1)} + \norm{f - p}_{{BV}(\sigma_2)}
< \frac{\epsilon}{2}. \]
For arbitrary $f_2$, the same argument produces a function $q
\in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma)$ which approximates to within $\epsilon/2$ the
function which is $f_2$ on $ \sigma_2$ and zero on $\sigma_1$. Thus
the piecewise planar function $p+q$ approximates $f$ to within
$\epsilon$ on $\sigma$. It follows that $f \in {AC}(\sigma)$. The
norm estimate is given by Theorem~\ref{variation-join}.
\end{proof}
The conditions on $\sigma_1$ and $\sigma_2$ in
Lemma~\ref{pasting-lemma} could be relaxed considerably. Since we
will not need this greater generality in this paper, we have not
attempted to determine the most general conditions on these sets for
which the above proof works. It is worth noting that one does need
\emph{some} conditions on $\sigma_1$ and $\sigma_2$ or else the
pasted function need not even be of bounded variation.
A major issue in much of this paper will be whether one can always
extend an ${AC}(\sigma)$ function to a larger domain.
\begin{quest}\label{extension-quest} Suppose that $\sigma_1 \subseteq \sigma_2$ are
nonempty compact sets. Does there exist $C = C(\sigma_1,\sigma_2)$
such that for every $f \in {AC}(\sigma_1)$ there exists ${\tilde f}
\in {AC}(\sigma_2)$ such that ${\tilde f}|\sigma_1 = f$ and
$ \bigl\Vert {\tilde f} \bigr\Vert_{{BV}(\sigma_2)} \le C
\norm{f}_{{BV}(\sigma_1)}$?
\end{quest}
The following special case will be needed in Section~\ref{lbl:309} to show that ${AC}(\sigma)$ operators are decomposable.
\begin{thm}\label{fill-in-square}
Let $\sigma$ denote the closed square $[0,1] \times
[0,1]$, and let $\partial \sigma$ denote the boundary of $\sigma$.
Suppose that $b \in {AC}(\partial \sigma)$. Then there exists $f \in
{AC}(\sigma)$ such that $f|\partial \sigma = b$ and $\normbv{f} \le
28 \norm{b}_{{BV}(\partial \sigma)}$.
\end{thm}
\begin{proof} Recall that by \cite[Proposition 4.4]{AD},
if $h \in {AC}[0,1]$ is any absolutely continuous function of one
variable, then its extension to the square, $\widehat h (x,y) =
h(x)$, is in ${AC}(\sigma)$ with $\ssnorm{\widehat h} =
\norm{h}_{{BV}[0,1]}$.
Define $f_s: \sigma \to \mathbb{C}$ by $f_s(x,y) = (1-y)\,b(x,0)$. Since
$f_s$ is the product of ${AC}$ functions of one variable, it is
absolutely continuous on $\sigma$ and
\[ \normbv{f_s} \le 2 \norm{b(\cdot,0)}_{{BV}[0,1]}
\le 2 \norm{b}_{{BV}(\partial \sigma)}. \]
Similarly, we define
\begin{align*}
f_w(x,y) &= (1-x)\, b(0,y),\\
f_n(x,y) &= y\, b(x,1), \\
f_e(x,y) &= x\, b(1,y).
\end{align*}
Let $g = f_s+f_e+f_n+f_w$. Then $g \in {AC}(\sigma)$ and $\normbv{g}
\le 8 \norm{b}_{{BV}(\partial \sigma)}$.
Let $\Delta_\ell = \{(x,y) \,:\, 0\le y \le x \le 1\}$ and $\Delta_u
= \{(x,y) \,:\, 0 \le x \le y \le 1\}$ denote the lower and upper
closed triangles inside $\sigma$. Now let $p_\ell$ be the affine
function determined by the condition that it agrees with $b-g$ at
the points $(0,0),(1,0)$ and $(1,1)$. Similarly, let $p_u$ be the
affine function which agrees with $b-g$ at the points $(0,0),(0,1)$
and $(1,1)$. Note that $p_\ell(x,x) = p_u(x,x)$ for all $x$. Let
\[ p(x,y) = \begin{cases}
p_\ell(x,y), & (x,y) \in \Delta_\ell, \\
p_u(x,y), & (x,y) \in \Delta_u.
\end{cases}
\]
Then $p \in \hbox{$CT\kern-0.2ex{P}\kern-0.2ex{P}$}(\sigma) \subseteq {AC}(\sigma)$. Now (using the
facts about ${AC}(\sigma)$ functions which only vary in one
direction)
\[ \var(p,\Delta_\ell) \le \max\{ |p(0,0)-p(1,0)|,
|p(0,0)-p(1,1)|, |p(1,0)-p(1,1)| \}. \]
Note that
\begin{align*}
|p(0,0)-p(1,0)| &\le |b(0,0)-b(1,0)| + |g(0,0)-g(1,0)| \\
& \le \var(b,\partial \sigma) + \var(g,\sigma) \\
& \le 9 \norm{b}_{{BV}(\partial \sigma)} .
\end{align*}
This bound also holds for the other terms and hence
$\norm{p}_{{BV}(\Delta_\ell)} \le 10 \norm{b}_{{BV}(\partial
\sigma)}$. Applying the same argument in the upper triangle, and
then using Theorem~\ref{variation-join} gives that $\normbv{p} \le
20 \norm{b}_{{BV}(\partial \sigma)}$.
Let $f = g+p$. Clearly $f \in {AC}(\sigma)$ and $\normbv{f} \le 28
\norm{b}_{{BV}(\partial \sigma)}$. Note that $f_e(x,0), f_n(x,0),
f_w(x,0)$ and $p(x,0)$ are all affine functions of $x$, and hence
$f(x,0) - b(x,0)$ is an affine function. But $f(0,0) =
g(0,0)+b(0,0)-g(0,0) = b(0,0)$ and $f(1,0) = b(1,0)$ and so it
follows that $f(x,0) = b(x,0)$ for all $x \in [0,1]$. Similar
arguments hold for the remaining three sides and so $f|\partial
\sigma = b$ as required.
\end{proof}
At the expense of lengthening the reasoning, one could reduce the
constant $28$ in the above theorem. It would be interesting to know
the optimal constant; it seems unlikely that the above construction
would provide this.
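To make the construction concrete, the following minimal sketch (in
Python; the particular boundary function, and all names, are merely
illustrative choices) assembles $f = g + p$ exactly as in the proof and
checks numerically that $f$ agrees with $b$ on $\partial\sigma$.
\begin{verbatim}
import numpy as np

# Sample boundary data: restrict a smooth function to the boundary
# of the unit square (any choice of b would do).
def b(x, y):
    return np.cos(3.0 * x) + x * y

def g(x, y):
    # f_s + f_w + f_n + f_e: the four one-directional interpolants
    return ((1 - y) * b(x, 0) + (1 - x) * b(0, y)
            + y * b(x, 1) + x * b(1, y))

def affine_through(pts):
    # coefficients of alpha + beta*x + gamma*y matching b - g at pts
    A = np.array([[1.0, x, y] for (x, y) in pts])
    vals = [b(x, y) - g(x, y) for (x, y) in pts]
    return np.linalg.solve(A, np.array(vals))

c_lower = affine_through([(0, 0), (1, 0), (1, 1)])  # p_l on lower triangle
c_upper = affine_through([(0, 0), (0, 1), (1, 1)])  # p_u on upper triangle

def f(x, y):
    c = c_lower if y <= x else c_upper
    return g(x, y) + c[0] + c[1] * x + c[2] * y

# f reproduces the boundary data (up to rounding error)
for s in np.linspace(0.0, 1.0, 11):
    for (x, y) in [(s, 0), (s, 1), (0, s), (1, s)]:
        assert abs(f(x, y) - b(x, y)) < 1e-10
\end{verbatim}
Naturally this is only a finite-precision check at sample points; the
argument above is what establishes the exact boundary agreement and the
norm bound.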
In building up ${AC}$ functions in Section~\ref{support}, we shall need to make use of the following straightforward extension lemma.
\begin{lem}\label{edges-of-square}
Let $\sigma$ denote the boundary of the square $[0,1]\times[0,1]$.
Denote the four edges of the square as $\{\sigma_i\}_{i=1}^4$. Let
$J$ be a nonempty subset of $\{1,2,3,4\}$ and let $\sigma_J =
\cup_{i \in J} \sigma_i$. Then given any $b \in {AC}(\sigma_J)$ there
exists $\hat{b} \in {AC}(\sigma)$ with $\hat{b}|\sigma_J = b$ and
$\ssnorm{\hat{b}}_{{BV}(\sigma)} \le 4 \norm{b}_{{BV}(\sigma_J)}$.
\end{lem}
\begin{proof} Let $T$ denote the circle passing through the 4
vertices of $\sigma$, and let $\pi$ denote the map from $\sigma$ to
$T$ defined by projecting along the rays coming out of the centre of
$\sigma$. Consider a finite list of points $S = \{z_1,\dots,z_n\}
\subseteq \sigma$ with corresponding path $\gamma_S =
\Pi(z_1,\dots,z_n)$. Choose a line $\ell$ in $\mathbb{C}$ for which
$\gamma_S$ has $\vf(\gamma_S)$ entry points on $\ell$. Note that one
may always choose such a line $\ell$ passing through the interior of
$\sigma$, and hence $\ell$ is determined by two points $w_1,w_2 \in
\sigma$. Let $\ell_\pi$ denote the line through $\pi(w_1)$ and
$\pi(w_2)$. Since the projection $\pi$ preserves which side of a
line points lie on, $\gamma_{\pi(S)}$ has $\vf(\gamma_S)$ entry
points on $\ell_\pi$. Conversely, if $\gamma_{\pi(S)}$ has
$\vf(\gamma_{\pi(S)})$ entry points on a line $\ell$, then $\gamma_S$
must have at least $\vf(\gamma_{\pi(S)})/2$ entry points on the
inverse image of $\ell$ under $\pi$. (The factor of $\frac{1}{2}$
comes from the fact that the inverse image of $\ell$ may lie along one of
the edges of $\sigma$.) It follows then that
\begin{equation}\label{square-to-circle}
\frac{1}{2}\, \rho(\gamma_S) \le \rho(\gamma_{\pi(S)}) \le
\rho(\gamma_S).
\end{equation}
Suppose then that $f \in {BV}(\sigma)$. Let $f_\pi: T \to \mathbb{C}$ be
$f_\pi = f \circ \pi^{-1}$. From (\ref{square-to-circle}) it is
clear that
\[ \frac{1}{2} \var(f_\pi,T) \le \var(f,\sigma) \le \var(f_\pi,T)
\]
and so $f_\pi \in {BV}(T)$. The same estimate holds when comparing
the variation of $f \in {BV}(\sigma_J)$ and that of $f_\pi$ on the
corresponding subset $T_J$ of $T$. But, by \cite[Corollary
5.6]{AD2}, ${BV}(T)$ is $2$-isomorphic to the subset of ${BV}[0,1]$
consisting of functions which agree at the endpoints. In this final
space, one can extend an ${AC}$ function from a finite collection of
subintervals $K$ to the whole of $[0,1]$ by linear interpolation,
without increasing the norm. Note that absolute continuity is
preserved by the isomorphisms between these function spaces. The
factor $4$ comes from collecting together the norms along the
following composition of maps
\[
\begin{CD}
{AC}(\sigma_J) @. {AC}(\sigma) \\
@V{2}V{\pi}V @A{1}A{\pi^{-1}}A \\
{AC}(T_J) @. {AC}(T) \\
@V{2}VV @A{1}AA \\
{AC}(K) @>1>\hbox{extend}> {AC}[0,1]
\end{CD}
\]
\end{proof}
Note that if $\sigma_J$ consists of either one side, or else $2$
contiguous sides, then one may extend $b$ to all of $\sigma$ without
increase of norm using \cite[Proposition~4.4]{AD}. We do not know
whether this is true if, for example, $\sigma_J$ consists of $2$
opposite sides of the square.
\section{${AC}(\sigma)$ operators: definition and examples}
\label{lbl:id-s3}
\begin{defn} Suppose that $\sigma \subseteq \mathbb{C}$ is a nonempty compact set and
that $T$ is a bounded operator on a Banach space $X$. We say that
$T$ is an $AC(\sigma)$ operator if $T$ admits a bounded
$AC(\sigma)$ functional calculus. That is, $T$ is an ${AC}(\sigma)$
operator if there exists a bounded unital Banach algebra
homomorphism $\psi: {AC}(\sigma) \to B(X)$ for which $\psi(\boldsymbol{\lambda}) =
T$.
\end{defn}
Where there seems little room for confusion we shall often say that
$T$ is an ${AC}(\sigma)$ operator where one should more properly say
that $T$ is an ${AC}(\sigma)$ operator \emph{for some $\sigma$}.
Before proceeding to give some of the general properties of
$AC(\sigma)$ operators, it is appropriate to give the reader some
idea of how this class is related to other standard classes of
operators which arise in spectral theory.
\begin{ex} \label{lbl:189} {\rm
Let $H$ be a Hilbert space and let $T \in B(H)$ be normal. Then $T$
has a $C(\sigma(T))$ functional calculus $\psi$. Then $\psi \vert
{AC}(\sigma(T))$ is a linear homomorphism from ${AC}(\sigma(T))$ into
$B(H)$. Furthermore $\norm{\psi(f)} \leq \norm{\psi}\norminf{f} \leq
\norm{\psi}\norm{f}_{BV(\sigma(T))}$ for all $f \in {AC}(\sigma(T))$ and
so $\psi \vert {AC}(\sigma(T))$ is continuous from ${AC}(\sigma(T))$
into $B(H)$. Hence $T$ is an ${AC}(\sigma(T))$ operator. Indeed, by
the same argument any scalar type spectral operator (or even
scalar-type prespectral operator) $T$ on a Banach space $X$ is also
an ${AC}(\sigma(T))$ operator. (See \cite{hD} for the definitions of
these latter classes of operators.)}
\end{ex}
The operators in the previous example are associated with spectral
expansions which are of an unconditional nature. The motivation for
the present theory is of course to cover operators such as
well-bounded operators, which admit less constrained types of
spectral expansion.
\begin{lem} \label{lbl:909}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Suppose that
$\sigma \subset \sigma'$ where $\sigma' \subset \mathbb{C}$ is compact.
Then $T$ is an ${AC}(\sigma')$ operator.
\end{lem}
\begin{proof}
Let $\psi$ be an ${AC}(\sigma)$ functional calculus for $T$. Define
$\psi_{\sigma'} : {AC}(\sigma') \rightarrow B(X) : f \mapsto \psi(f
\vert \sigma)$. Then $\psi_{\sigma'}$ is a unital linear
homomorphism. Furthermore $\psi_{\sigma'}(\boldsymbol{\lambda}) = \psi(\boldsymbol{\lambda} \vert
\sigma) = T$. Finally we note from the inequality $\norm{f \vert
\sigma}_{{BV}(\sigma)} \leq \norm{f}_{{BV}(\sigma')}$ that
$\psi_{\sigma'}$ is continuous. Hence $\psi_{\sigma'}$ is an
${AC}(\sigma')$ functional calculus for $T$.
\end{proof}
The following result was announced in \cite[Section 2]{AD}.
\begin{prop} \label{lbl:508}
Let $T \in B(X)$. The following are equivalent.
\begin{enumerate}
\item $T$ is well-bounded,
\item $T$ is an ${AC}(\sigma)$ operator for some $\sigma
\subset \mathbb{R}$,
\item $\sigma(T) \subset \mathbb{R}$ and $T$ is an ${AC}(\sigma(T))$
operator.
\end{enumerate}
\end{prop}
\begin{proof}
Trivially (3) implies (2). Lemma \ref{lbl:909} shows that (2)
implies (1). Say $T$ is well-bounded with functional calculus $\psi
: {AC}(J) \rightarrow B(X)$ for some interval $J$. In \cite{AD} we
define a linear isometry $\iota : {AC}(\sigma(T)) \rightarrow
{AC}(J)$. Define $\psi_{\sigma(T)} : {AC}(\sigma(T)) \rightarrow B(X)
: f \mapsto \psi(\iota(f))$. We show that $\psi_{\sigma(T)}$ is an
${AC}(\sigma(T))$ functional calculus for $T$ which will complete the
proof. Clearly $\psi_{\sigma(T)}$ is linear and continuous.
Furthermore, since $\iota(\boldsymbol{\lambda} \vert \sigma(T)) = \boldsymbol{\lambda}$, we have
that $\psi_{\sigma(T)}(\boldsymbol{\lambda}) = T$. To see that $\psi_{\sigma(T)}$ is
a homomorphism we note that if $f, g \in {AC}(\sigma(T))$ then
$(\iota(f g) - \iota(f) \iota(g))(\sigma(T)) = \set{0}$. Theorem
4.4.4 of \cite{bA} says we can find a sequence
$\set{h_n}_{n=1}^\infty \subset {AC}(J)$ such that $\lim_n \norm{h_n
- (\iota(f g) - \iota(f) \iota(g))}_{{BV}(J)} = 0$ and such that for
each $n$, $h_n$ is zero on a neighbourhood of $\sigma(T)$. This
last condition, by Proposition 3.1.12 of \cite{CF}, implies that
$\psi(h_n) = 0$ for all $n$. Hence $\psi(\iota(f g) - \iota(f)
\iota(g)) = \lim_n \psi(h_n) = 0$, which shows that
$\psi_{\sigma(T)}$ is a homomorphism as claimed.
\end{proof}
As a result of the last proposition we prefer to use the term `real
${AC}(\sigma)$ operator' rather than the term well-bounded operator.
As well as being less descriptive, the term well-bounded operator
also suffers from the fact that it is used for quite a different
concept in the local theory of Banach spaces (see \cite{MTJ} for
example.) We shall however stick with the traditional term for the
remainder of this paper.
The next theorem shows that some important classes of ${AC}$
operators are also ${AC}(\sigma)$ operators.
\begin{thm} \label{lbl:732}
Let $A \in B(X)$ be well-bounded with functional calculus $\psi :
AC(J) \rightarrow B(X)$ for some interval $J$. Let $f \in AC(J)$
be such that $\rho(f) > 0$. Then $\psi(f)$ is an ${AC}(f(J))$
operator.
\end{thm}
\begin{proof}
Define $\psi_f : {AC}(f(J)) \rightarrow B(X) : g \mapsto \psi(g \circ
f)$. Then $\psi_f$ is a unital linear homomorphism and $\psi_f(\boldsymbol{\lambda})
= \psi(f)$. By Proposition \ref{lbl:996}, $\psi_f$ is continuous.
\end{proof}
\begin{cor} \label{lbl:744}
Let $A \in B(X)$ be well-bounded and $p$ be a polynomial of one
variable. Then $p(A)$ is an ${AC}(p(\sigma(A)))$ operator.
\end{cor}
\begin{cor} \label{lbl:213}
Let $A \in B(X)$ be a well-bounded operator. Then $\exp(i A)$ is
an ${AC}(\exp(i \sigma(A)))$ operator.
\end{cor}
We noted earlier that the trigonometrically well-bounded operators
are those operators which can be expressed in the form $\exp(i A)$
where $A \in B(X)$ is a well-bounded operator of type (B). (Indeed one can also insist that
$\sigma(A) \subset [0, 2 \pi]$.) As usual, we denote the unit circle in $\mathbb{C}$ by $\mathbb{T}$.
\begin{cor}[\cite{AD2}, Theorem 6.2] \label{lbl:829}
If $T \in B(X)$ is trigonometrically well-bounded, then $T$ is an
${AC}(\mathbb{T})$ operator. Indeed, if $X$ is reflexive,
then $T$ is a trigonometrically well-bounded operator if and only if it
is an $AC(\mathbb{T})$ operator.
\end{cor}
We end this section with a more concrete example.
\begin{ex} {\rm Suppose that $1 < p < \infty$ and that $X$ is the usual
Hardy space $H^p(\mathbb{D})$ of analytic functions on the unit disk.
Consider the unbounded operator $Af(z) = z f'(z)$, $f \in H^p(\mathbb{D})$
(with natural domain $\{f \,:\, Af \in H^p(\mathbb{D})\})$. This operator
arises, for example, as the analytic generator of a semigroup of
composition operators, $T_tf(z) = f(e^{-t} z)$; see \cite{Si}, which
includes a summary of many of the spectral properties of $A$. The
spectrum of $A$ is $\sigma(A) = \mathbb{N} = \{0,1,2,\dots\}$ with the
corresponding spectral projections $P_k(\sum a_n z^n) = a_k z^k$ ($k
\in \mathbb{N}$) giving just the usual Fourier components. Suppose then
that $\mu \not\in \sigma(A)$. The resolvent operator $R(\mu,A) =
(\mu I - A)^{-1}$ is a compact operator with spectrum
$\sigma(R(\mu,A)) = \Bigl\{\frac{1}{\mu - k}\Bigr\}_{k=0}^\infty
$\cup \{0\}$. Using \cite[Theorem 3.3]{CD}, it follows easily from the
properties of Fourier series that if $x \in \mathbb{R}\setminus\mathbb{N}$, then
$R(x,A)$ is well-bounded. If we fix such an $x$ and take $\mu
\not\in \mathbb{R}$, then $R(\mu,A) = f(R(x,A))$ where $f(t) =
t/(1+(\mu-x)t)$ is a M{\" o}bius transformation. (Indeed, writing $R =
R(x,A)$, we have $I + (\mu-x)R = (\mu I - A)(xI-A)^{-1}$, so that
$f(R) = R\bigl(I+(\mu-x)R\bigr)^{-1} = (\mu I - A)^{-1} = R(\mu,A)$.)
If $J$ is any
compact interval containing $\sigma(R(x,A))$ then $\rho(f(J)) =
\frac{1}{2}$. Thus $R(\mu,A)$ is an $AC(f(J))$ operator. Hence all
the resolvents of $A$ are compact $AC(\sigma)$ operators (for some
$\sigma$). Note that none of the resolvents is scalar-type spectral
unless $p=2$.}
\end{ex}
\section{Properties of ${AC}(\sigma)$ operators}
\label{lbl:309}
All ${AC}(\sigma)$ operators belong to the larger class of
decomposable operators (in the sense of \cite{CF2}). This will follow immediately once we show that the functional calculus map $\psi: {AC}(\sigma) \to B(X)$ is what Colojoar{\v a} and Foia{\c s} term an ${AC}(\sigma)$-spectral function. Recall that by Proposition~\ref{lbl:656}, ${AC}(\sigma)$ is an admissible algebra.
Suppose that $f \in {AC}(\sigma)$. Let $\Omega_f \subseteq \mathbb{C}$ be the open set $\mathbb{C} \setminus \supp f$. By Proposition~\ref{lbl:656}, $\Phi_f(\xi) = f_\xi$ is a well-defined map from $\Omega_f$ to ${AC}(\sigma)$.
Following \cite[Section 3.1]{CF2}, the functional calculus map $\psi: {AC}(\sigma) \to B(X)$ is an \textbf{${AC}(\sigma)$-spectral function} if, for all $f \in {AC}(\sigma)$, the map
$\psi\circ \Phi_f: \Omega_f \to B(X)$ is analytic on $\Omega_f$.
Since $\psi$ is linear, it suffices to show that the map $\Phi_f$ is differentiable at each point $\xi_0 \in \Omega_f$. To establish this we shall need a technical lemma.
As in \cite{AD3}, let $|x+iy|_\infty = \max(|x|,|y|)$. For $\xi_0 \in \mathbb{C}$ and $\delta > 0$ let
\[ B_\infty(\xi_0,\delta) = \{z \in \mathbb{C} \,:\, |\xi_0 - z|_\infty < \delta\}. \]
\begin{lem}\label{resolvents}
Suppose that $f \in {AC}(\sigma)$, $\xi_0 \in \Omega_f$ and that $\delta >0$ is chosen so that $B_\infty(\xi_0,3\delta) \subseteq \Omega_f$. Then there exists a constant $C(\delta,\sigma)$ such that for all $\xi \in B_\infty(\xi_0,\delta)$, there exists $r_\xi \in {AC}(\sigma)$ which satisfies
\begin{enumerate}
\item $r_\xi(z) = \frac{1}{\xi-z}$ for all $z \in \sigma \setminus B_\infty(\xi_0,2\delta)$, and
\item $\norm{r_\xi}_{{AC}(\sigma)} \le C(\delta,\sigma)$.
\end{enumerate}
\end{lem}
\begin{proof}
Suppose first that $\xi_0 \in \sigma$. (The case where $\xi_0 \not\in \sigma$ is similar, but with slightly different norm bounds. The details are left to the reader.)
Let $\sigma_0$ denote the smallest closed square (with sides parallel to the axes) containing $\sigma$ and $B_\infty(\xi_0,3\delta)$. Let $\sigma_1$ denote the closed ball $\overline B_\infty(\xi_0,2\delta)$ and let $\sigma_2$ denote $\sigma_0 \setminus B_\infty(\xi_0,2\delta)$.
Suppose that $\xi \in B_\infty(\xi_0,\delta)$. The function $z \mapsto \xi - z$ is absolutely continuous on $\sigma_2$ with variation equal to $d = d(\sigma,\delta)$, the length of the diagonal of $\sigma_0$. Since $|\xi - z| \ge \delta$ on $\sigma_2$, Lemma~\ref{lem:ac compact:4590} implies that $r_\xi: \sigma_2 \to \mathbb{C}$, $z \mapsto (\xi-z)^{-1}$ is in ${AC}(\sigma_2)$ with
\[ \norm{r_\xi}_{{AC}(\sigma_2)} \le \frac{1}{\delta} + \frac{d}{\delta^2}. \]
Clearly $\partial \sigma_1 \subseteq \sigma_2$ so by \cite[Lemmas 3.9 and 4.5]{AD}, $r_\xi|\partial \sigma_1 \in {AC}(\partial \sigma_1)$. Using Theorem~\ref{fill-in-square}, we can extend $r_\xi$ to $\sigma_1$ so that $r_\xi| \sigma_1 \in {AC}(\sigma_1)$ and $\norm{r_\xi}_{{AC}(\sigma_1)} \le 28 \norm{r_\xi}_{{AC}(\partial \sigma_1)} \le 28 \norm{r_\xi}_{{AC}(\sigma_2)}$.
By splitting $\sigma_0$ into $9$ smaller rectangles and then repeatedly using Lemma~\ref{pasting-lemma}, one can deduce that $r_\xi \in {AC}(\sigma_0)$, and that one has a bound on $\norm{r_\xi}_{{AC}(\sigma_0)}$ which depends only on $\sigma$ and $\delta$. Taking the restriction of this function to the original domain $\sigma$ completes the construction.
\end{proof}
\begin{prop}\label{ac-spectral-function}
The functional calculus map $\psi$ for an ${AC}(\sigma)$ operator $T \in B(X)$ is an ${AC}(\sigma)$-spectral function.
\end{prop}
\begin{proof}
Fix $f \in {AC}(\sigma)$, $\xi_0 \in \Omega_f$ and $\delta >0$ so that $B_\infty(\xi_0,3\delta) \subseteq \Omega_f$. Using Lemma~\ref{resolvents}, choose a family of functions $r_\xi$ for $\xi \in B_\infty(\xi_0,\delta)$. Note that $\Phi_f(\xi) = r_\xi f \in {AC}(\sigma)$. Thus
\begin{align*}
\frac{\Phi_f(\xi) - \Phi_f(\xi_0)}{\xi-\xi_0}
&= \frac{(r_\xi - r_{\xi_0})f}{\xi-\xi_0} \\
&= -r_\xi r_{\xi_0} f \\
&= -r_{\xi_0}^2f + r_{\xi_0} (r_{\xi_0}-r_\xi) f \\
&= -r_{\xi_0}^2f + r_{\xi_0} (\xi-\xi_0) r_\xi r_{\xi_0} f \\
&\to -r_{\xi_0}^2f
\end{align*}
as $\xi \to \xi_0$, by the uniform bound on the norms of the functions $r_\xi$. Composing $\Phi_f$ with the bounded linear map $\psi$ preserves differentiability, so $\psi \circ \Phi_f: \Omega_f \to B(X)$ is analytic.
\end{proof}
\begin{prop} \label{lbl:741}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Then
\begin{enumerate}
\item $\sigma(T) \subseteq \sigma$.
\item $T$ is decomposable.
\end{enumerate}
\end{prop}
\begin{proof}
This follows from Proposition~\ref{ac-spectral-function} using Theorems 3.1.6 and 3.1.16 in \cite{CF2}.
\end{proof}
In general it is easy to pass between spectral properties of an
operator $T$ and those of affine translations of $T$. One of the
main motivations for developing this theory was to provide a
suitably broad class of operators which is closed under such
transformations. From Theorem \ref{lbl:408} we get the following.
\begin{thm} \label{lbl:814}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Let $\alpha, \beta
\in \mathbb{C}$. Then $\alpha T + \beta I$ is an ${AC}(\alpha \sigma +
\beta)$ operator.
\end{thm}
\begin{proof}
Let $\theta : {AC}(\sigma) \rightarrow {AC}(\alpha \sigma + \beta)$ be
the isomorphism of Theorem \ref{lbl:408}. Let $\psi$ be the
${AC}(\sigma)$ functional calculus for $T$. Then it is routine to
check that the map $\psi_{\alpha, \beta} : {AC}(\alpha \sigma +
\beta) \rightarrow B(X) : f \mapsto \psi(\theta^{-1}(f))$ is an
${AC}(\alpha \sigma + \beta)$ functional calculus for $\alpha T +
\beta I$.
\end{proof}
\begin{thm} \label{lbl:329}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Then $T = R + i S$
where $R, S$ are commuting well-bounded operators. Further,
$\sigma(R) = {\rm Re}(\sigma(T))$ and $\sigma(S) = {\rm Im}(\sigma(T))$.
\end{thm}
\begin{proof}
Let $\psi$ be an ${AC}(\sigma)$ functional calculus for $T$. In
\cite{AD} it is shown in Proposition 5.4 that the map $u :
{AC}({\rm Re}(\sigma)) \rightarrow AC(\sigma)$ defined by $u(f)(z) =
f({\rm Re}(z))$ is a norm-decreasing linear homomorphism. Then the map
$\psi_{{\rm Re}(\sigma)} : {AC}({\rm Re}(\sigma)) \rightarrow B(X) : f
\mapsto \psi(u(f))$ is a continuous linear unital homomorphism.
Hence $R := \psi_{{\rm Re}(\sigma)}(\boldsymbol{\lambda} \vert {\rm Re}(\sigma)) =
\psi({\rm Re}(\boldsymbol{\lambda}))$ is well-bounded. Similarly $S :=
\psi({\rm Im}(\boldsymbol{\lambda}))$ is well-bounded. Then $T = \psi(\boldsymbol{\lambda}) =
\psi({\rm Re}(\boldsymbol{\lambda}) + i\,{\rm Im}(\boldsymbol{\lambda})) = R + i S$. Finally we note that
$R$ and $S$ commute since ${AC}(\sigma)$ is a commutative algebra and
$\psi$ is a homomorphism.
The identification of $\sigma(R)$ and $\sigma(S)$ follows
immediately from the spectral mapping theorem \cite[Theorem 3.2.1]{CF2}.
\end{proof}
We call splittings which arise from an ${AC}(\sigma)$ functional
calculus \emph{functional calculus splittings}.
\begin{cor} \label{lbl:781}
The ${AC}(\sigma)$ operators are a proper subset of the ${AC}$
operators of Berkson and Gillespie.
\end{cor}
\begin{proof}
By Theorem \ref{lbl:329}, every ${AC}(\sigma)$ operator is an ${AC}$
operator. On the other hand, not all ${AC}$ operators are
${AC}(\sigma)$ operators: Example 4.1 of \cite{BDG} shows that the
class of ${AC}$ operators is not closed under multiplication by
scalars, even on Hilbert spaces, whereas by Theorem \ref{lbl:814} the
class of ${AC}(\sigma)$ operators is.
\end{proof}
Not all splittings into commuting real and imaginary well-bounded
parts arise from an ${AC}(\sigma)$ functional calculus. This is
shown by the following example, which first appeared in \cite{BDG}.
\begin{ex} \label{lbl:690} {\rm
Let $X = L^\infty[0, 1] \oplus L^1[0, 1]$. Define $A \in B(X)$ by $A(f, g) =
(\boldsymbol{\lambda} f, \boldsymbol{\lambda} g)$. It is not difficult to see that $A$ is well-bounded and
that $\sigma(A) = [0, 1]$. Let $T = (1 + i)A = A + i A$. By Theorem
\ref{lbl:814}, $T$ is an ${AC}(\sigma(T))$ operator where $\sigma(T)$ is the
line segment from $0$ to $1 + i$.
The operator $T$ has an infinite number of splittings. Define $Q \in
B(X)$ by $Q(f, g) = (0, f)$. In \cite{BDG} it is shown that $A +
\alpha Q$ is well-bounded for any $\alpha \in \mathbb{C}$. But then, since
$i(iQ) = -Q$, we have $T = A + i A = (A + Q) + i(A + i Q)$.
The second splitting cannot come from an ${AC}(\sigma)$ functional
calculus. Say $T$ has an ${AC}(\sigma)$ functional calculus $\psi$.
Since $\sigma(T)$ is a line segment we can use reasoning similar to
that in Proposition \ref{lbl:508} to conclude that if $f \in
{AC}(\sigma)$ is such that $f(\sigma(T)) = \set{0}$ then $\psi(f) =
0$. Hence if $g \vert \sigma(T) = h \vert \sigma(T)$ then $\psi(g) =
\psi(h)$. In particular since ${\rm Re}(\boldsymbol{\lambda}) \vert \sigma(T) =
{\rm Im}(\boldsymbol{\lambda}) \vert \sigma(T)$ we can only have ${AC}(\sigma)$
functional calculus splittings of the form $T = R + i R$. }
\end{ex}
We do not know if it is possible to have several splittings each
arising from an ${AC}(\sigma)$ functional calculus. The following
tells us to what extent we can expect splittings to be unique.
\begin{prop} \label{lbl:337}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator. Suppose that $T =
R_1 + i S_1 = R_2 + i S_2$ where $R_1, S_1$ and $R_2, S_2$ are
pairs of commuting well-bounded operators. Then $R_1$ and $R_2$
are quasinilpotent equivalent in the sense of \cite{CF} (as are
$S_1$ and $S_2$). Suppose that $\set{R_1, S_1, R_2, S_2}$ is a
commuting set. Then $(R_1 - R_2)^2 = (S_1 - S_2)^2 = 0$.
Furthermore suppose that $\set{R_1, S_1, R_2, S_2}$ are all type
(B) well-bounded operators. Then $R_1 = R_2$ and $S_1 = S_2$.
\end{prop}
\begin{proof}
This is Theorem 3.2.6 of \cite{CF2} and Theorem 3.7 of \cite{BDG}.
\end{proof}
\section{The support of the functional calculus}\label{support}
Suppose that $\psi: {AC}(\sigma) \to B(X)$ is the functional calculus map for
an ${AC}(\sigma)$ operator $T$. The support of $\psi$ is defined as the
smallest closed set $F \subseteq \mathbb{C}$ such that if
$\mathrm{supp}f \cap F = \emptyset$, then $\psi(f) = 0$.
It follows from Proposition~\ref{ac-spectral-function} and
\cite[Theorem~3.1.6]{CF2} that the support of $\psi$ is $\sigma(T)$.
It is natural therefore to ask whether such an operator $T$ must
admit an ${AC}(\sigma(T))$ functional calculus. By Proposition
\ref{lbl:508}, this is certainly the case if $T$ is well-bounded (that is, if $\sigma(T) \subseteq \mathbb{R}$),
but the general case remains open.
We shall now give a partial answer to this question, and show that
one may always at least shrink $\sigma$ down to be a compact set not much
bigger than $\sigma(T)$.
\begin{defn} A set $G \subseteq \mathbb{C}$ is said to be gridlike if it is
a closed polygon with sides parallel to the axes.
\end{defn}
Note that we do not require that a gridlike set be convex, or even
simply connected.
\begin{prop}\label{quotient-prop}
Suppose that $V$ is a gridlike set, that $\sigma$ is compact and
that $V \subseteq \sigma$. Let $I_V = \{f \in {AC}(\sigma) \,:\,
\hbox{$f \equiv 0$ on $V$}\}$. Then ${AC}(\sigma)/I_V \cong {AC}(V)$
as Banach algebras.
\end{prop}
\begin{proof} Define $\Theta: {AC}(\sigma)/I_V \to {AC}(V)$ by
$\Theta([f]) = f|V$. Then clearly
\[ \Theta([f]) = \Theta([g]) \iff f|V \equiv g|V \iff f-g \in I_V \]
and so $\Theta$ is well-defined and one-to-one. It is also easy to
see that $\Theta$ is an algebra homomorphism. Since
\begin{align*}
\norm{\Theta([f])}
& = \norm{f|V}_{{BV}(V)} \\
& = \inf_{g \in I_V} \norm{(f+g)|V}_{{BV}(V)} \\
& \le \inf_{g \in I_V} \normbv{f+g} \\
&= \norm{[f]}_{{AC}(\sigma)/I_V}
\end{align*}
the map $\Theta$ is bounded.
The hard part of the proof is to show that $\Theta$ is onto. That
is, given $f \in {AC}(V)$, there exists $F \in {AC}(\sigma)$ so that
$F|V = f$.
Choose then a square $J \times K$ containing $\sigma$. Extending the
edges of $V$ produces a grid on $J \times K$, determining $N$ closed
subrectangles $\{\sigma_k\}_{k=1}^{N}$.
Suppose now that $f \in {AC}(V)$. Our aim is to define $\hat{f} \in
{AC}(J \times K)$ with $\hat{f}|V = f$ and $\ssnorm{\hat{f}}_{{BV}(J
\times K)} \le C \norm{f}_{{BV}(V)}$.
Fix an ordering of the rectangles $\{\sigma_k\}$ so that
\begin{enumerate}
\item there exists $k_0$ such that $\sigma_k \subseteq V$ if and
only if $k \le k_0$, and
\item for all $\ell$, $\sigma_\ell$ intersects $\cup_{k < \ell}
\sigma_k$ on at least one edge of $\sigma_\ell$.
\end{enumerate}
Let $E_0$ denote the union of the edges of the rectangles $\sigma_k$
for $k \le k_0$ and let $b$ be the restriction of $f$ to $E_0$. Note
that $b$ is absolutely continuous on $E_0$ and if $e$ is any edge of
any rectangle $\sigma_k$ ($k \le k_0$), then $b|e \in {AC}(e)$ with
$\norm{b|e}_{{BV}(e)} \le \norm{b}_{{BV}(E_0)} \le
\norm{f}_{{BV}(V)}$. Now apply Lemma~\ref{edges-of-square}
to recursively extend $b$ to the set $E$ of all edges of rectangles
$\sigma_k$, $1 \le k \le N$, so that $b \in {AC}(E)$ and
$\norm{b}_{{BV}(E)} \le C_N \norm{f}_{{BV}(V)}$.
For $1 \le k \le k_0$, let $f_k = f|\sigma_k$, so that $f_k \in
{AC}(\sigma_k)$ and $\norm{f_k}_{{BV}(\sigma_k)} \le
\norm{f}_{{BV}(V)}$. Suppose alternatively that $k_0 < k \le N$. By
Theorem~\ref{fill-in-square} we can find $f_k \in {AC}(\sigma_k)$
with $f_k|\partial \sigma_k = b|\partial\sigma_k$ and
$\norm{f_k}_{{BV}(\sigma_k)} \le 28
\norm{b|\partial\sigma_k}_{{BV}(\partial\sigma_k)} \le 28 C_N
\norm{f}_{{BV}(V)}$.
Define $\hat{f}:J \times K \to \mathbb{C}$ such that $\hat{f}|\sigma_k =
f_k$. That $\hat{f}$ is in ${AC}(J \times K)$ with
$\ssnorm{\hat{f}}_{{BV}(J \times K)} \le 28 C_N N \norm{f}_{{BV}(V)}$
follows from Lemma~\ref{pasting-lemma} (first patching together all
the rectangles in each row, and then all the rows together). We can now
let $F = \hat{f}|\sigma$.
It follows then that $\Theta$ is onto and hence is a Banach algebra
isomorphism.
\end{proof}
\begin{thm} \label{lbl:662}
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator for some $\sigma
\subset \mathbb{C}$. Let $U$ be an open neighbourhood of $\sigma(T)$. Then
$T$ is an ${AC}(\overline{U})$ operator.
\end{thm}
\begin{proof} Suppose that $T$, $\sigma$ and $U$ are as stated.
Choose a square $J \times K$ containing $U \cup \sigma$. By
Lemma~\ref{lbl:909}, $T$ admits an ${AC}(J \times K)$ functional
calculus $\psi$.
Consider an equispaced grid on $J \times K$, determining $n^2$
subsquares $\{\sigma_k\}_{k=1}^{n^2}$. Let $V = V(n)$ be the union
of all those $\sigma_k$ which intersect $\sigma(T)$. For $n$ large
enough
\[ \sigma(T) \subseteq \mathrm{int}(V) \subseteq V \subseteq U. \]
For the rest of the proof, fix such an $n$.
As in Proposition~\ref{quotient-prop}, let $I_V = \{ f \in {AC}(J
\times K) \,:\, f|V \equiv 0\}$, so that ${AC}(J\times K)/I_V \cong
{AC}(V)$ via the isomorphism $\Theta$. Note that $I_V \subseteq
\mathrm{ker} (\psi)$ since if $f \in I_V$, then $\mathrm{supp}f \cap
\sigma(T) = \emptyset$. Thus the map $\tilde{\psi}: {AC}(J\times
K)/I_V \to B(X)$,
\[ \tilde{\psi}([f]) = \psi(f) \]
is a well-defined algebra homomorphism with $\ssnorm{\tilde{\psi}}
\le \norm{\psi}$.
We may therefore define $\hat{\psi}: {AC}(\overline{U}) \to B(X)$ by
$\hat{\psi}(f) = \tilde{\psi}(\Theta^{-1}(f|V))$. Note that
$\hat{\psi}$ is a bounded algebra homomorphism and that, since
$\Theta([\boldsymbol{\lambda}]) = \boldsymbol{\lambda}|V$, $\hat{\psi}(\boldsymbol{\lambda}) = \psi(\boldsymbol{\lambda}) = T$. Thus
$\hat{\psi}$ is an ${AC}(\overline{U})$ functional calculus for $T$.
\end{proof}
\begin{cor}\label{lbl:cor-to-662}
Let $T \in B(X)$ be an ${AC}(\sigma_0)$ operator for some compact set
$\sigma_0$. Then
\[ \sigma(T) = \bigcap \{ \sigma \,:\,
\hbox{$T$ has an ${AC}(\sigma)$ functional
calculus}\}. \]
\end{cor}
The proof of Theorem~\ref{lbl:662} depends on two vital facts. The
first is that the map $\Theta$ is an isomorphism. The second is that
$I_V \subseteq \mathrm{ker}(\psi)$. To show that every ${AC}(\sigma)$
operator is an ${AC}(\sigma(T))$ operator, it would suffice to show
that
\begin{enumerate}
\item\label{Q1} the restriction map ${AC}(\sigma) \to {AC}(\sigma(T))$, $f
\mapsto f|\sigma(T)$ is onto. This is basically equivalent to
answering Question~\ref{extension-quest}.
\item\label{Q2} given any $f \in {AC}(\sigma)$ with $f|\sigma(T) \equiv 0$,
there exists a sequence $\{f_n\} \subseteq {AC}(\sigma)$ with
$\normbv{f-f_n} \to 0$ and $\mathrm{supp} f_n \cap \sigma(T) =
\emptyset$ for all $n$.
\end{enumerate}
Proving (\ref{Q1}) and (\ref{Q2}) when $\sigma(T)$ is a complicated
compact set would appear to require new ways of estimating the
two-dimensional variation used in our definitions.
If $T \in B(X)$ is an ${AC}(\sigma(T))$ operator then $T$ has
spectral theorems similar to those for normal operators. Recall
from \cite{nD} the definition of the local spectrum $\sigma_T(x)$
of $x \in X$ for an operator $T \in B(X)$ with the single-valued
extension property. From \cite{LV} if $T \in B(X)$ is an
${AC}(\sigma)$ operator (and hence decomposable) then those $x \in
X$ such that $\sigma_T(x) = \sigma(T)$ form a set of the second
category in $X$.
\begin{thm} \label{lbl:091}
Suppose that $T \in B(X)$ is an ${AC}(\sigma(T))$ operator with
functional calculus $\psi : {AC}(\sigma(T)) \rightarrow B(X)$. Then
$\psi$ is injective. Hence we can identify ${AC}(\sigma(T))$ with a
subalgebra of $B(X)$. Furthermore suppose that $x \in X$ is such
that $\sigma_T(x) = \sigma(T)$. Then the map ${AC}(\sigma(T))
\rightarrow X : f \mapsto \psi(f)x$ is injective, and so we can
identify ${AC}(\sigma(T))$ with a subspace of $X$.
\end{thm}
\begin{proof}
Let $x \in X$ be such that $\sigma_T(x) = \sigma(T)$. To prove the
theorem it suffices to show that if $f \in {AC}(\sigma(T))$ and $f
\neq 0$ then $\psi(f)x \neq 0$. Let $\lambda_0 \in \sigma(T)$ be
such that $f(\lambda_0) \neq 0$. Since $f$ is continuous we can find
an open neighbourhood $V$ of $\lambda_0$ such that $0 \not \in
f(V)$. We can choose $g \in {AC}(\sigma(T))$ such that $(f g)(V) =
\set{1}$. If we show $\psi(f g) x \neq 0$ this will imply, since
$\psi$ is a homomorphism, that $\psi(f)x \neq 0$. Hence we can
assume that $f(V) = \set{1}$. Let $U$ be an open set such that
$\set{U, V}$ is an open cover of $\sigma(T)$ and such that
$\lambda_0 \not \in U$. By Lemma 5.2.3 of \cite{bA} we can find
non-zero $x_U, x_V \in X$ such that $x = x_U + x_V$ and where
$\sigma_T(x_U) \subset U$ and $\sigma_T(x_V) \subset V$. Since
$\sigma_T(x) \subset \sigma_T(x_U) \cup \sigma_T(x_V)$ we have that
$\lambda_0 \in \sigma_T(x_V)$ and $\lambda_0 \not \in
\sigma_T(x_U)$. Assume that $\psi(f)x = 0$. Then
$0 = \psi(f)(x_U +x_V) = \psi(f)x_U + x_V$
since $f$ is one on $V$. It follows that
$\sigma_T(x_V) = \sigma_T(-\psi(f)x_U) = \sigma_T(\psi(-f)x_U)
\subset \sigma_T(x_U)$. Then we have the contradiction that
$\lambda_0 \in \sigma_T(x_V) \subset \sigma_T(x_U) \not \ni
\lambda_0$. Hence $\psi(f)x \neq 0$.
\end{proof}
Since every ${AC}(\sigma)$ operator is also an ${AC}$ operator, the
results of \cite{DW} give a representation theorem for compact
${AC}(\sigma)$ operators. Specifically, if $T \in B(X)$ is a compact
${AC}(\sigma)$ operator with nonzero eigenvalues $\{\mu_j\}$ and
corresponding Riesz projections $\{P_j\}$, then
\begin{equation}\label{comp-sum}
T = \sum_j \mu_j P_j
\end{equation}
where the sum converges in norm under a particular specified
ordering of the eigenvalues. Given a sequence of real numbers
$\{\mu_j\}$ and disjoint projections $\{P_j\} \subseteq B(X)$,
necessary and sufficient conditions are known which ensure that the
operator defined via (\ref{comp-sum}) is well-bounded (\cite[Theorem
3.3]{CD}). At present an analogous result for compact ${AC}(\sigma)$
operators is unknown. These questions are pursued more fully in
\cite{AD3} where, for example, various sufficient conditions for
(\ref{comp-sum}) to define a compact ${AC}(\sigma)$ operator are
given.
\section{Spectral resolutions}
\label{lbl:SpecRes}
The theory of well-bounded operators is at its most powerful if one
adds the additional assumption that the functional calculus map for
$T$ is `weakly compact'. That is, for all $x \in X$, the map
$\psi_x: {AC}(\sigma(T)) \to X$, $f \mapsto \psi(f)x$ is weakly
compact. In this case $T$ admits an integral representation with
respect to a spectral family of projections $\{E(\mu)\}_{\mu \in
\mathbb{R}}$. The integration theory for spectral families allows one to
define
\[ f(T) = \widehat{\psi}(f)
= \int_{\sigma(T)}^\oplus f(\mu) \, dE(\mu) \]
for all $f \in {BV}(\sigma(T))$, giving an extended functional calculus
map. (This integral is more usually written as $\int_{J}^\oplus f(\mu)
\, dE(\mu)$, where $J$ is some compact interval containing
$\sigma(T)$. We have written it in the above form to stress that the
value of the integral only depends on the values of $f$ on
$\sigma(T)$.) If $\psi$ is not weakly compact, then there may be no
spectral resolution consisting of projections on $X$. A suitable
family of projections on $X^*$, known as a decomposition of the
identity, does always exist, but the theory here is much less
satisfactory.
Obviously extending this theory to cover general ${AC}(\sigma)$
operators with a weakly compact functional calculus is highly
desirable. At present a full analogue of the well-bounded theory has
not been found, but we are able to show that each such operator does
admit a nice spectral resolution from which the operator may be
recovered. The following definition extends the definition for
well-bounded operators.
\begin{defn} Let $T \in B(X)$ be an ${AC}(\sigma)$ operator with
functional calculus map $\psi$. Then $T$ is said to be of type~(B)
if for all $x \in X$, the map $\psi_x: {AC}(\sigma(T)) \to X$, $f
\mapsto \psi(f)x$ is weakly compact.
\end{defn}
Obviously every ${AC}(\sigma)$ operator on a reflexive Banach space
is of type~(B),
as is every scalar-type spectral operator on a
general Banach space (see \cite{K}). The weak compactness of the
functional calculus removes one of the potential complications with
studying ${AC}(\sigma)$ operators.
\begin{lem} \label{lbl:129}
Let $T \in B(X)$ have a weakly compact ${AC}(\sigma)$ functional
calculus. Then it has a unique splitting $T = R + i S$ where $R$ and
$S$ are commuting type (B) well-bounded operators.
\end{lem}
\begin{proof}
Recall that if we set $R = \psi({\rm Re}(\boldsymbol{\lambda}))$ and $S = \psi({\rm Im}(\boldsymbol{\lambda}))$
then $R$ and $S$ are commuting well-bounded operators. The
${AC}({\rm Re}(\sigma(T)))$ functional calculus for $R$ is given by $f
\mapsto \psi(u(f))$ and so is clearly weakly compact. Hence $R$ is type
(B). Similarly $S$ is type (B). Uniqueness follows from Proposition
\ref{lbl:337}.
\end{proof}
If $T$ is a well-bounded operator of type~(B) with spectral family
$\{E(\mu)\}_{\mu \in \mathbb{R}}$, then, for each $\mu$, $E(\mu)$ is the
spectral projection for the interval $(-\infty,\mu]$. The natural
analogue of this in the ${AC}(\sigma)$ operator setting is to index
the spectral resolution by half-planes. Modelling the plane as
$\mathbb{R}^2$, each closed half-plane is specified by a unit vector
$\theta \in \mathbb{T}$ and a real number $\mu$:
\[ H(\theta,\mu) = \{ z \in \mathbb{R}^2 \,:\, z\cdot \theta \le \mu \} .\]
Let $\HP$ denote the set of all half-planes in $\mathbb{R}^2$. The
following provisional definition contains the minimal conditions one
would require of a spectral resolution for an ${AC}(\sigma)$
operator.
\begin{defn}\label{HPSF} Let $X$ be a Banach space. A half-plane spectral family on
$X$ is a family of projections $\{E(H)\}_{H \in \HP}$ satisfying:
\begin{enumerate}
\item\label{HPSF-1} $E(H_1)\, E(H_2) = E(H_2)\, E(H_1)$ for all $H_1,H_2 \in
\HP$;
\item there exists $K$ such that $\norm{E(H)} \le K$ for all $H
\in \HP$;
\item for all $\theta \in \mathbb{T}$,
$\{E(H(\theta,\mu))\}_{\mu \in \mathbb{R}}$ forms a spectral family of
projections.
\item\label{HPSF-4} for all $\theta \in \mathbb{T}$, if $\mu_1 < \mu_2$, then
$E(H(\theta,\mu_1))\, E(H(-\theta,-\mu_2)) = 0$.
\end{enumerate}
The radius of $\{E(H)\}$ is the (possibly infinite) value
\[ r(\{E(H)\}) = \inf \{ r \,:\,
\hbox{for all $\theta$, $E(H(\theta,\mu)) = I$ for all $\mu > r$} \} . \]
\end{defn}
Suppose that $\sigma \subset \mathbb{R}^2$ is a nonempty compact set. Given
any unit direction vector $\theta$, let $\sigma_\theta = \{ z\cdot
\theta \,:\, z \in \sigma\} \subseteq \mathbb{R}$. Define the subalgebra of
all ${AC}(\sigma)$ functions which only depend on the component of
the argument in the direction $\theta$,
\[ {AC}_\theta(\sigma) = \{ f \in {AC}(\sigma) \,:\,
\hbox{there exists $u \in {AC}(\sigma_\theta)$ such that $f(z) =
u(z \cdot \theta)$}\} .\]
By Proposition~3.9 and Lemma~3.10 of \cite{AD}, there is a norm 1
isomorphism $U_\theta: {AC}(\sigma_\theta) \to {AC}_\theta(\sigma)$.
Let $T \in B(X)$ be an ${AC}(\sigma)$ operator of type~(B), with
functional calculus map $\psi$. The algebra homomorphism
$\psi_\theta: {AC}(\sigma_\theta) \to B(X)$, $u \mapsto \psi(U_\theta
u)$ is clearly bounded and weakly compact. It follows then from the
spectral theorem for well-bounded operators of type~(B) (see, for
example, \cite{qBD}) that there exists a spectral family
$\{E(H(\theta,\mu))\}_{\mu \in \mathbb{R}}$, with $\norm{E(H(\theta,\mu))}
\le 2 \norm{\psi}$ for all $\mu$. We have thus constructed a
uniformly bounded family of projections $\{E(H)\}_{H \in \HP}$. To
show that this family is a half-plane spectral family it only
remains to verify (\ref{HPSF-1}) and (\ref{HPSF-4}).
Suppose then that $E_1 = E(\theta_1,\mu_1)$ and $E_2 =
E(\theta_2,\mu_2)$. For $\mu \in \mathbb{R}$ and $\delta > 0$, let
$g_{\mu,\delta}: \mathbb{R} \to \mathbb{R}$ be the function which is $1$ on
$(-\infty,\mu]$, is $0$ on $[\mu+\delta,\infty)$ and which is linear
on $[\mu,\mu+\delta]$. Let $h_\delta = U_{\theta_1}
(g_{\mu_1,\delta})$ and $k_\delta = U_{\theta_2}(
g_{\mu_2,\delta})$. The proof of the spectral theorem for
well-bounded operators shows that $E_1 = \lim_{\delta \to 0^+}
\psi(h_\delta)$ and $E_2 = \lim_{\delta \to 0^+} \psi(k_\delta)$,
where the limits are taken in the weak operator topology in $B(X)$.
Thus, if $x \in X$ and $x^* \in X^*$,
\begin{align*}
\ipr<E_1 E_2 x,x^*>
&= \lim_{\delta \to 0^+} \ipr<\psi(h_\delta) E_2 x,x^*> \\
&= \lim_{\delta \to 0^+} \ipr< E_2 x, \psi(h_\delta)^*x^*> \\
&= \lim_{\delta \to 0^+} \left( \lim_{\beta \to 0^+}
\ipr<\psi(h_\delta) \psi(k_\beta) x,x^*> \right)\\
&=\lim_{\delta \to 0^+} \left( \lim_{\beta \to 0^+}
\ipr<\psi(k_\beta) \psi(h_\delta) x,x^*> \right)\\
&=\lim_{\delta \to 0^+} \ipr<\psi(h_\delta) x, E_2^*x^*> \\
&= \ipr<E_2 E_1 x,x^*>.
\end{align*}
Verifying (\ref{HPSF-4}) is similar. Fix $\theta \in \mathbb{T}$ and $\mu_1
< \mu_2$. Let $E_1 = E(\theta,\mu_1)$ and $E_2 = E(-\theta,-\mu_2)$.
Let $h_\delta = U_{\theta} (g_{\mu_1,\delta})$ and $k_\delta =
U_{-\theta}( g_{-\mu_2,\delta})$ so that $E_1 = \lim_{\delta \to
0^+} \psi(h_\delta)$ and $E_2 = \lim_{\delta \to 0^+}
\psi(k_\delta)$. The result follows by noting that for $\delta$
small enough, $h_\delta k_\delta = 0$.
We have shown then that $\{E(H)\}_{H \in \HP}$ is a half-plane
spectral family.
\smallskip
For notational convenience, we shall identify the direction vector
$\theta \in \mathbb{R}^2$ with the corresponding complex number on the unit
circle. Thus, for example, we identify $(0,1)$ with $i$.
For $\theta \in \mathbb{T}$, the spectral family $\{E(\theta,\mu)\}_{\mu
\in \mathbb{R}}$ defines a well-bounded operator of type~(B)
\begin{equation}\label{T-theta}
T_\theta = \int_{\sigma_\theta} \mu \, dE(\theta,\mu).
\end{equation}
Clearly the map $\boldsymbol{\lambda}_\theta : z \mapsto z \cdot \theta$ lies in ${AC}_\theta(\sigma) \subseteq {AC}(\sigma)$ and the construction of the spectral family ensures that $\psi(\boldsymbol{\lambda}_\theta) = T_\theta$.
Since $\boldsymbol{\lambda} = \theta \boldsymbol{\lambda}_\theta + i\theta \boldsymbol{\lambda}_{i \theta}$ we have that
\begin{equation}\label{theta-decomp}
T = \theta T_\theta + \,i\theta\, T_{i\theta}.
\end{equation}
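Here we have used the elementary identity $z = \theta\,(z\cdot\theta)
+ i\theta\,(z\cdot i\theta)$, valid for all $z \in \mathbb{C}$ and $|\theta| =
1$: since $z\cdot\theta = {\rm Re}(\bar\theta z)$ and $z \cdot i\theta =
{\rm Im}(\bar\theta z)$,
\[ \theta\,(z\cdot\theta) + i\theta\,(z\cdot i\theta)
   = \theta\bigl( {\rm Re}(\bar\theta z) + i\,{\rm Im}(\bar\theta z) \bigr)
   = \theta \bar\theta z = z. \]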
In particular, using Theorem~\ref{lbl:329} and Proposition~\ref{lbl:337} we have
that $T$ has the unique splitting into real and imaginary parts
\begin{equation}\label{recon-formula}
T = T_1
+ \,i\, T_i .
\end{equation}
One consequence of these identities is that $T$ may be recovered from the half-plane spectral family
produced by the above construction.
Note that Theorem~\ref{lbl:329} and the fact that $\theta^{-1} T = T_\theta + i T_{i\theta}$ imply that $\sigma(T_\theta) = {\rm Re}(\sigma(\theta^{-1}T))$. Thus, if $r(\cdot)$
denotes the spectral radius, then $r(T_\theta) \le r(\theta^{-1}T) = r(T)$.
Since there exists $\theta \in \mathbb{T}$ for
which $r(T_\theta) = r(T)$, we have the following result.
\begin{prop} With $T$ and $\{E(H)\}$ as above, $r(\{E(H)\}) = r(T)$.
\end{prop}
Note that if we define $f_\theta \in {AC}(\sigma)$ by $f_\theta(z) =
z\cdot \theta$, then $T_\theta = \psi(f_\theta) = f_\theta(T)$. In
particular, if $\omega = (1/\sqrt{2},1/\sqrt{2})$, then $f_\omega =
(f_1 + f_i)/\sqrt{2}$, and hence
\[ T_\omega = \psi(f_\omega) = (T_1+T_i)/\sqrt{2}. \]
This proves the following proposition. Note that in general the sum
of two commuting well-bounded operators need not be well-bounded.
\begin{prop} Let $T$ be an ${AC}(\sigma)$ operator of type~(B), with
unique splitting $T = R+iS$. Then $R+S$ is also well-bounded.
\end{prop}
\begin{quest}
Suppose that $R$ and $S$ are commuting well-bounded operators whose
sum is well-bounded. Is $R+iS$ an ${AC}(\sigma)$ operator?
\end{quest}
It is clear that given any half-plane spectral family $\{E(H)\}_{H
\in \HP}$ with finite radius, Equation~(\ref{recon-formula}) defines
$T \in B(X)$ which is an ${AC}$~operator in the sense of Berkson and
Gillespie. It is not clear however, that $T$ need be an
${AC}(\sigma)$ operator. In particular, if we define $T_\theta$ via
Equation~(\ref{T-theta}), then it is not known whether the identity
(\ref{theta-decomp}) holds.
\begin{quest}
Is there a one-to-one correspondence between ${AC}(\sigma)$ operators
of type~(B) and half-plane spectral families with finite radius? If
not, can one refine Definition~\ref{HPSF} so that such a
correspondence exists?
\end{quest}
\section{Extending the functional calculus}
\label{lbl:454}
Given an ${AC}(\sigma)$ operator of type~(B) and its associated
half-plane spectral family (as constructed above), it is natural to
ask whether one can develop an integration theory which would enable
the functional calculus to be extended to a larger algebra than
${AC}(\sigma)$.
The spectral family associated to a well-bounded operator $T$ of
type~(B) allows one to associate a bounded projection with any set
of the form $\bigcup_{j=1}^n \sigma(T) \cap I_j$, where
$I_1,\dots,I_n$ are disjoint intervals of $\mathbb{R}$. Let $\sigma =
\sigma(T) \subset \mathbb{R}$ and let $\NP(\sigma(T))$ denote the algebra
of all such sets. It is easy to check the following.
\begin{thm} \label{lbl:777}
Let $T \in B(X)$ be a type (B) well-bounded operator with functional
calculus $\psi$. Then there is a unique map $E : \NP(\sigma(T))
\rightarrow {\rm Proj}(X)$ satisfying the following
\begin{enumerate}
\item $E(\emptyset) = 0$, $E(\sigma(T)) = I$,
\item $E(A \cap B) = E(A)E(B) = E(B)E(A)$ for all $A, B \in \NP(\sigma(T))$,
\item $E(A \cup B) = E(A) + E(B) - E(A \cap B)$
for all $A, B \in \NP(\sigma(T))$,
\item\label{NP-norm-bound}
$\norm{E(A)} \leq \norm{\psi} \normbvj{\chi_A}$
for all $A \in \NP(\sigma(T))$,
\item if $S \in B(X)$ is such that $T S = S T$ then $E(A) S = S E(A)$
for all $A \in \NP(\sigma(T))$,
\item ${\rm Range}(E(A)) = \set{x \in X : \sigma_T(x) \subseteq A}$.
\end{enumerate}
\end{thm}
For general ${AC}(\sigma)$ operators, the natural algebra of sets is
that generated by the closed half-planes. This algebra has been
studied in various guises, particularly in the setting of
computational geometry. The sets that can be obtained by starting
with closed half-planes and applying a finite number of unions,
intersections and set complements are sometimes known as Nef
polygons. The set of Nef polygons in the plane, $\NP$, clearly
contains all polygons, lines and points in the plane. For more
information about Nef polygons, or more generally their
$n$-dimensional analogues, Nef polyhedra, we refer the reader to
\cite{BIC}, \cite{HKM} or \cite{Nef}.
Let $\sigma$ be a nonempty compact subset of $\mathbb{C}$. Define
\[ \NP(\sigma) = \{ A \,:\, \hbox{$A = \sigma \cap P$ for some $P
\in \NP$}\}. \]
It is clear that given an $AC(\sigma)$ operator of type~(B), one may
use the half-plane spectral family constructed in the previous
section to associate a projection $E(A) \in B(X)$ with each set $A
\in \NP(\sigma)$. The major obstacle in developing a suitable
integration theory in this setting is in providing an analogue of
condition (\ref{NP-norm-bound}) in Theorem~\ref{lbl:777}.
Note that if $A \in \NP(\sigma)$, then $\chi_A \in {BV}(\sigma)$.
Rather than forming $E(A)$ by a finite combination of algebra
operations, one might try to define $E(A)$ directly as we did when
$A$ was a half-plane. That is, one may try to write
\[ E(A) = \hbox{WOT-}\lim_\alpha \psi(h_\alpha) \]
where $\{h_\alpha\}$ is a suitable uniformly bounded net of
functions in ${AC}(\sigma)$ which approximates $\chi_A$ pointwise.
It is shown in \cite{bA} that if $A$ is a closed polygon then this
may be done, but only under the bound $\norm{h_\alpha} \le V_A$.
Here $V_A$ is a constant depending on $A$. This allows one to prove
a weaker version of Theorem~\ref{lbl:777}, with condition
(\ref{NP-norm-bound}) replaced by $\norm{E(A)} \le V_A \norm{\psi}$.
It remains an open question as to whether one can do this with $V_A
\le 2 \norm{\chi_A}$. However, if $A$ is a closed convex polygon
contained in the interior of $\sigma$, then this is possible.
\begin{quest} Does every ${AC}(\sigma)$ operator of type~(B) admit a
${BV}(\sigma)$ functional calculus?
\end{quest}
It might be noted in this regard that all the examples of
${AC}(\sigma)$ operators of type~(B) given in Section~\ref{lbl:id-s3}
do admit such a functional calculus extension.
\section{Introduction}
The history of traffic dynamics began early in the 1950s. Two different
models were proposed: one is the car-following
model\cite{Gazis,Newell,Pipes}, which describes the motion of vehicles
by many-variable differential equations, and the other is the fluid
dynamical model\cite{Fluid}, which regards traffic flow as something like
a fluid. These two models have been further investigated by many
authors. However, neither model succeeded in explaining the
behavior of real traffic flow in some respects. In particular, those
traditional models cannot attain a unified understanding of the most
remarkable fact that traffic flow has two phases: one is free flow
with low car-density and high average speed, and the other is congested
flow with high car-density and low average speed.
Recently, new mechanisms which explain the existence of these two
phases have been found in both models. In car-following models,
a modified model was proposed by introducing an optimal velocity\cite{AichiA}.
(We call this the `Optimal Velocity Model' (OVM) hereafter.)
The equation of motion of OVM is given by
\begin{equation}
\ddot x_n(t) = a\{ V(\Delta x_n(t))-\dot x_n(t)\}\ \ \ \ n=1,2 \cdots N,
\label{eq:ovm0}
\end{equation}
where the notation is as follows: car number $n$, time $t$, position of
the $n$-th car $x_n$ and its headway $\Delta x_n$.
A dot on a variable denotes differentiation with respect to time $t$,
and the sensitivity $a$ is a constant parameter. The essential difference
from traditional car-following models is the introduction of an optimal
velocity $V(\Delta x)$ of a vehicle, whose value is a function
of the headway distance. A driver reacts according to the difference
between the vehicle's velocity and the optimal velocity $V(\Delta x)$,
and controls the velocity by accelerating (or decelerating) the
vehicle in proportion to this velocity difference.
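As an aside, this mechanism is easy to observe numerically. The
following minimal sketch (in Python; the hyperbolic-tangent form of $V$
and all parameter values are illustrative choices only) integrates
Eq.(\ref{eq:ovm0}) for $N$ vehicles on a circuit by a simple Euler
scheme.
\begin{verbatim}
import numpy as np

# Illustrative parameters only: N vehicles on a circuit of length L.
N, L, a, dt, steps = 100, 200.0, 1.0, 0.02, 50000

def V(h):
    # an illustrative optimal velocity function of the headway h
    return np.tanh(h - 2.0) + np.tanh(2.0)

rng = np.random.default_rng(0)
x = np.arange(N) * L / N + 0.01 * rng.standard_normal(N)  # perturbed
v = np.full(N, V(L / N))       # start near the homogeneous flow

for _ in range(steps):
    dx = np.roll(x, -1) - x    # headways: x_{n+1} - x_n
    dx[-1] += L                # close the circuit
    v += a * (V(dx) - v) * dt  # equation of motion of OVM
    x += v * dt                # Euler update of positions
\end{verbatim}
Depending on the density $N/L$, the small initial perturbation either
decays back to the homogeneous flow or grows into the congested pattern
discussed below.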
The dynamical equation of OVM has two different kinds of solutions.
One is a homogeneous flow solution, and the other is a congested flow
solution which consists of two distinct regions: congested
regions (high density) and free regions (low density).
In OVM, if the density of vehicles is above some critical value,
traffic congestion occurs spontaneously, which can be
understood as a sort of phase transition from the homogeneous flow
state to the congested flow state\cite{AichiA,AichiB}.
Also in fluid dynamical models, the following equation has been
proposed\cite{Kerner}.
\begin{equation}
\frac{\partial v}{\partial t}+v\frac{\partial v}{\partial x}=
\frac{V(\rho)-v}{\tau}-\frac{c_0^2}{\rho}\frac{\partial \rho}{\partial x}+
\frac{l^2}{\rho}\frac{\partial^2 v}{\partial x^2},
\label{eq:fluid0}
\end{equation}
where $c_0$, $l$ and $\tau$ are constant parameters of the system, and
the density $\rho$ and velocity $v$ of vehicles are functions of location
$x$ and time $t$. They call the function $V(\rho)$ the `safe velocity'. To
see the similarity of these two models, let us ignore the second and third
terms of the rhs of Eq.(\ref{eq:fluid0}) and use $D/Dt=\partial/\partial t
+ v\partial/\partial x$, the differentiation acting on the variables
of individual vehicles. The fluid dynamical equation
(\ref{eq:fluid0}) becomes
\begin{equation}
\frac{Dv}{Dt} = \frac{V(\rho)-v}{\tau} .
\end{equation}
This is quite similar to Eq.(\ref{eq:ovm0}), with the optimal
velocity $V(\Delta x)$ and sensitivity $a$ replaced by the `safe
velocity' $V(\rho)$ and the parameter $1/\tau$,
respectively. The reason why these two models can explain the
formation of traffic congestion lies in the form of the dynamical
equation and the introduction of the `optimal' or `safe' velocity.
Contrary to this, the original equation of motion
of traditional car-following models\cite{Kometani}
\begin{equation}
\ddot x_n(t+\tau)=\lambda\{\dot x_{n-1}(t)-\dot x_n(t)\}\ ,
\label{eq:hist2}
\end{equation}
is essentially a first-order differential equation, apart from the delay
time $\tau$ of response. Because traffic flow
governed by a first-order differential equation is
always stable, the delay time $\tau$ plays a crucial role
in describing the behavior of traffic flow.
The origin of the delay time $\tau$ has been thought to
be a physiological delay of response. In fact, it is well known
that the motion of a vehicle is accompanied by some delay time
in response to the motion of its preceding vehicle.
There remain, however, questions to be discussed: what is the role
of the delay time $\tau$, and whether or not the delay time $\tau$
is equal to the value of the observed one.
Here we should remark that it is necessary to distinguish two different
types of definition concerning the notion of ``delay time''. There is
a time lag until the driver begins an action after being conscious of
a stimulus. It may also take a finite time for a vehicle to change
its velocity after the operation of the driver. We define such
physiological and mechanical time lags as the ``delay time of response''.
This can be measured in principle although it may vary depending on
the type of the stimulus, individual driver and performance of
vehicle. On the other hand, we can define ``delay time of vehicle
motion'' as a period from the time of velocity change of a vehicle to
that of the following one. The delay time of vehicle motion can also be
estimated in observations of real traffic and/or numerical simulations
using traffic models. However, in the general case, it is difficult to find
a good definition of the delay time of motion of two successive
vehicles. Only in the restricted case in which two vehicles behave
quite similarly, for example,
\begin{equation}
v_n(t)\simeq v_{n+1}(t-T),
\label{eq:def_delay}
\end{equation}
we can define the delay time of motion as $T$.
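When Eq.(\ref{eq:def_delay}) holds, $T$ can be extracted from sampled
velocity records by shifting the follower's record against the
leader's and choosing the shift with the smallest mismatch. A minimal
sketch (in Python; the function name and the least-squares criterion
are our own illustrative choices) reads:
\begin{verbatim}
import numpy as np

# Estimate T from velocity records v_lead and v_follow sampled on a
# uniform grid with time step dt, by minimising the mean-square
# mismatch between v_follow(t) and v_lead(t - T) over trial shifts.
def delay_time(v_lead, v_follow, dt, max_shift):
    shifts = range(1, max_shift)
    errors = [np.mean((v_follow[s:] - v_lead[:-s]) ** 2)
              for s in shifts]
    return dt * shifts[int(np.argmin(errors))]
\end{verbatim}
For records satisfying Eq.(\ref{eq:def_delay}) only approximately, the
minimising shift still provides a natural estimate of the delay time of
motion.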
In the car-following models, there are two other types of ``delay
time'' to be distinguished. One is the delay time $\tau$ explicitly
introduced as a parameter in the equation of motion (see
Eq.(\ref{eq:hist2})), which we call ``explicit delay time'' in this
paper. This may correspond to the delay time of response. The other is
the delay time emerging as a result of the dynamics of traffic flow,
which is quite different notion from the explicit delay time. This
will correspond to the delay time of vehicle motion stated above.
However, this delay time includes not only the contribution from the
explicit delay time but also that from purely dynamical origin. We
call the latter the ``effective delay time''. Obviously, the effective
delay time is not zero even if we introduce no explicit delay time.
In this paper, we discuss the delay time of motion in OVM with
no explicit delay time. Because the
resulting delay time is then equal to the effective delay time,
we can investigate the delay time of purely dynamical origin.
This quantity may be compared with observed delay
times of vehicle motion in various cases. It will become clear that
the effective delay time obtained by our procedure is sufficient to
explain the delay times observed in actual situations of traffic flow.
In section 2, we define the effective delay time of vehicle motion in
OVM in terms of the vehicle motions of a leader and its follower and
make an analytical study within linear approximation. Then we carry
out numerical
simulations to obtain the effective delay time in several cases.
We show the results in actual traffic
situations: the effective delay times under traffic lights
(section 3) and those in
uniform traffic flow and in congested flow (section 4). Discussions
and further prospects are given in section 5.
\section{Delay Time for Leader and Follower Case}
First let us make a definition of the effective delay time of vehicle
motion. Consider a pair of vehicles, a leader and its follower. This
pair of vehicles may be either separate two vehicles or any pair of
vehicles picked up from a series of vehicles in highways or from a
queue waiting to start under traffic lights.
When a leader moves with velocity $v(t)$ and its follower
replicates the motion of the leader with some delay time $T$ (so that the
follower's velocity is given by $v(t-T)$), we can define the delay time of
vehicle motion as $T$. It must be remarked that we define the
delay time not by the follower's replication of the leader's
position but by that of its velocity.
Let the positions of a leader and its follower be $y(t)$ and $x(t)$.
In this case Eq.(\ref{eq:ovm0}) is written as
\begin{equation}
\ddot{x}(t) = a\{ V(y(t) - x(t))-\dot{x}(t)\}.
\label{eq:leader_follower_eq}
\end{equation}
Uniform motion is described by
\begin{equation}
y_0(t)=V(b)t + b,\ \ x_0(t)=V(b)t,
\label{eq:const_motion}
\end{equation}
where $b$ is the headway and $V(b)$ is the corresponding constant velocity.
To investigate the response of the follower vehicle to the leader
vehicle, we introduce a small perturbation $\lambda(t)$ and its
response $\xi(t)$:
\begin{equation}
y(t)=y_0(t) + \lambda (t), \ x(t)=x_0(t) + \xi(t).
\label{eq:per_to_const_motion}
\end{equation}
Inserting Eq.(\ref{eq:per_to_const_motion})
into Eq.(\ref{eq:leader_follower_eq}) and taking a linear
approximation, we get
\begin{equation}
\ddot{\xi}(t) + a\dot{\xi}(t) + af \xi(t) = af \lambda(t) ,
\label{eq:leader_follower_leq}
\end{equation}
where $f = V'(b)$. This is equivalent to the well-known equation of
motion for forced oscillation with a damping term caused by friction.
In order to find a solution, we first write $\lambda(t)$
by a Fourier expansion
\begin{equation}
\lambda(t) = \int \tilde {\lambda}(\omega)e^{i\omega t} d\omega \ .
\end{equation}
For the Fourier component $\lambda_0e^{i\omega t}$,
the solution of Eq.(\ref{eq:leader_follower_leq}) is given by
\begin{equation}
\xi(t) = \frac{\lambda_0}{1 + i\omega/f - \omega^2/af}
\ e^{i\omega t}.
\label{eq:E8}
\end{equation}
This is rewritten as
\begin{equation}
\xi(t) = |\eta|\lambda_0 e^{i\omega (t - T)},
\label{eq:E9}
\end{equation}
where
\begin{eqnarray}
|\eta|^2&=&\frac{|\xi|^2}{\lambda_0^2}
=\frac{(af)^2}{(af-\omega^2)^2+(a\omega)^2},
\label{eq:amp_LA}\\
T &=& \frac{1}{\omega}\tan^{-1}\frac{a\omega}{af-\omega^2}.
\label{eq:delay_LA}
\end{eqnarray}
When $f<a/2$, the amplitude $|\eta|$ is a monotonically decreasing function of
$\omega$. On the other hand, when $f>a/2$, $|\eta|$ takes its maximum
at $\omega=\omega_0$;
\begin{equation}
\omega_0^2 = a(f -a/2),
\label{eq:omega0}
\end{equation}
so we call this $\omega_0$ the ``enhanced mode''
(see Fig.\ref{fig:eta-delay}(a)). Note that $f=a/2$ is
the critical point for the instability condition of homogeneous flow
as we found in the previous paper \cite{AichiA}.
Eq.(\ref{eq:omega0}) shows that the enhanced mode
$\omega_0\ (\neq0)$ exists so
far as the instability condition $f > a/2$ is satisfied.
Let us examine some characteristic cases. For
$f<a/2$, where low-frequency modes $|\omega|\ll a,\ f$ dominate, we have
\begin{equation}
|\eta| \sim 1, \ T \sim \frac{1}{f}.
\label{eq:xiandt}
\end{equation}
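These limiting forms follow directly from Eqs.(\ref{eq:amp_LA}) and
(\ref{eq:delay_LA}): for $|\omega|\ll a,\ f$ the denominator of
$|\eta|^2$ reduces to $(af)^2$, and
\begin{equation}
T = \frac{1}{\omega}\tan^{-1}\frac{a\omega}{af-\omega^2}
\simeq \frac{1}{\omega}\cdot\frac{a\omega}{af} = \frac{1}{f}.
\end{equation}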
In this case the response $\xi(t)$ to the perturbation $\lambda(t)$
becomes
\begin{equation}
\xi(t)= \int \tilde {\lambda}(\omega)e^{i\omega (t - T)} d\omega \
= \lambda(t - T),
\end{equation}
which leads to
\begin{equation}
\dot x(t)=V(b)+\dot\xi(t)=V(b)+\dot\lambda(t - T)=\dot y(t-T) .
\label{eq:repli}
\end{equation}
Thus for a sufficiently slow perturbation, the delay time of vehicle motion
becomes $T$ in Eq.(\ref{eq:xiandt}), which is approximately
the inverse of the derivative
of the optimal velocity function at the corresponding headway.
In the other case, $f>a/2$, the amplitude $|\eta|$ takes its maximum
value at $\omega=\omega_0$. Then we have
\begin{equation}
T_{\rm enhanced} = \frac{1}{\omega_0}\tan^{-1}\frac{2\omega_0}a,
\label{eq:enhanced}
\end{equation}
which indicates that the delay time $T$ for this enhanced mode depends
on the sensitivity $a$ in contrast to the previous case
Eq.(\ref{eq:xiandt}). One can easily confirm that $T$ tends to $1/f$ when
$a$ is close to its critical value $2f$. We should remark that an
exact replication indicated in Eq.(\ref{eq:repli}) is not realized in
this case because $|\eta|$ is not always equal to 1 in
Eq.(\ref{eq:E9}). (See also Figs.\ref{fig:LF1} and the discussions
below Eq.(\ref{eq:lam0}).)
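As a numerical cross-check of Eqs.(\ref{eq:amp_LA}) and
(\ref{eq:delay_LA}), the following minimal Python sketch evaluates
$|\eta|$ and $T$ over a range of frequencies and locates the enhanced
mode; the parameter values ($a=2.0~{\rm s}^{-1}$ and $f=V'(b)$ at
$b=25$~m, computed with the rounded coefficients of the optimal
velocity function given below) are our illustrative choices.
\begin{verbatim}
import numpy as np

a = 2.0                  # sensitivity [1/s]
f = 16.8 * 0.0860        # V'(b) at the inflection point b = 25 m
omega = np.linspace(1e-3, 3.0, 3000)

# amplitude and delay of the linear response, Eqs.(amp_LA)/(delay_LA)
eta = a * f / np.sqrt((a * f - omega**2)**2 + (a * omega)**2)
T = np.arctan2(a * omega, a * f - omega**2) / omega

w0 = np.sqrt(a * (f - a / 2.0))  # enhanced mode, Eq.(omega0)
print(w0)                 # ~0.94 1/s with these rounded coefficients
print(T[np.argmax(eta)])  # delay time at the enhanced mode
\end{verbatim}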
Now let us see the results of numerical simulations and compare them
with those of our analytical consideration. Here and hereafter we use
the following form of the optimal velocity function \cite{AichiC},
\begin{equation}
V(\Delta x)=\left\{
\begin{array}{ll}
16.8\left[\tanh{0.0860\left(\Delta x - 25\right)} + 0.913\right]
&({\rm for}\ \Delta x > 7~{\rm m}) \\
0 &({\rm for}\ \Delta x \le 7~{\rm m})
\end{array}
\right.
\end{equation}
whose parameters are determined from the
car-following experiment on Chuo Motorway \cite{Oba,KoshiIO}:
the inflection point is $(\Delta x,\dot x)=(25~{\rm m},55~{\rm km/h})$,
the maximal velocity is $V_{\rm max}=115~{\rm km/h}$ and the minimal
headway is $\Delta x_{\rm min}=7.0~{\rm m}$, which includes the length
(5~m) of the vehicles used in the experiment.
In numerical simulations, we prepare a pair of vehicles with their
unperturbed motions of Eq.(\ref{eq:const_motion}) with headway $b$. If
the leader changes its motion by $\lambda(t)$ then the function
$\xi(t)$ can be obtained by the numerical simulation from which the
delay time can be read off. We choose $\lambda(t)$ of the leader's
motion as
\begin{equation}
y(t) = V(b)t + b - \lambda_0 \cos{\omega t}, \quad
\dot y(t) = V(b) + \lambda_0 \omega \sin{\omega t} \quad
{\rm for}\ t\ge0
\label{eq:lam0}
\end{equation}
with various $\omega$ setting $\omega\lambda_0=0.1$~m/s, which means
that $|\dot y(t) - V(b)| \le 0.1$~m/s.
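A minimal simulation of this leader--follower setup can be written as
in the following Python sketch; the explicit Euler integration, the
step size, and the read-off of $T$ by shifting the follower's velocity
curve are our own implementation choices.
\begin{verbatim}
import numpy as np

def V(dx):               # optimal velocity function defined above
    return np.where(dx > 7.0,
                    16.8 * (np.tanh(0.0860 * (dx - 25.0)) + 0.913),
                    0.0)

a, b, omega = 2.0, 25.0, 0.938   # sensitivity, headway, frequency
lam0 = 0.1 / omega               # omega * lam0 = 0.1 m/s
dt = 1e-3
t = np.arange(0.0, 60.0, dt)
y = V(b) * t + b - lam0 * np.cos(omega * t)  # leader, Eq.(lam0)

x, v = 0.0, float(V(b))          # follower starts unperturbed
vs = np.empty_like(t)
for k in range(len(t)):          # explicit Euler integration
    vs[k] = v
    acc = a * (V(y[k] - x) - v)
    x += v * dt
    v += acc * dt
# T and |eta| are read off by rescaling vs and shifting it onto
# the leader's velocity curve, as described in the text
\end{verbatim}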
\begin{figure}[htb]
\hspace*{-0.6cm}
\epsfxsize=7.5cm
\epsfbox{h25w01.eps}
\epsfxsize=7.5cm
\epsfbox{h25w0938.eps}\\
\hspace*{-0.6cm}
\epsfxsize=7.5cm
\epsfbox{h25w15.eps}
\caption{Numerical results for the motion of leader and follower,
where $\Delta x=$25~m and $a=2.0~{\rm s}^{-1}$.
Frequencies of leader's motion are (a) $\omega=0.1~{\rm s}^{-1}$,
(b)~$\omega=\omega_0$ and (c) $\omega=1.5~{\rm s}^{-1}$.}
\label{fig:LF1}
\end{figure}
\begin{figure}[htb]
\hspace*{-0.3cm}
\epsfxsize=7cm
\epsfbox{amp.eps}
\hspace*{0.3cm}
\epsfxsize=7cm
\epsfbox{delay.eps}
\caption{Each curve shows the behavior of (a) $|\eta|$
(Eq.(\protect{\ref{eq:amp_LA}}))
and (b) $T$ (Eq.(\protect{\ref{eq:delay_LA}}))
for $b=25,\ 30,\ 35,\ 45$~m with $a=2.0~{\rm s}^{-1}$.
Plotted marks on the curves show numerical results.}
\label{fig:eta-delay}
\end{figure}
As illustrations, we show the behaviors of $\lambda(t)$ and its
response $\xi(t)$ for $a=2.0~{\rm s}^{-1}$, $b=25$~m (therefore
$\omega_0 =0.938~{\rm s}^{-1}$). Figures \ref{fig:LF1} are the cases
for $\omega = 0.1,\ 0.938,\ 1.5~{\rm s}^{-1}$.
To find the values of $T$, we first rescale $\dot\xi(t)$
and then translate it so as to coincide with the curve $\dot\lambda(t)$
in Figures \ref{fig:LF1}.
The value of $|\eta|$ is this scale factor.
The numerical simulations have been performed for $b=25,\ 30,\ 35,\ 45$~m.
In Figures \ref{fig:eta-delay},
we show the numerical values for $|\eta|$ and $T$.
The corresponding analytical results
Eqs.(\ref{eq:amp_LA}) and (\ref{eq:delay_LA}) are also shown in these
figures. In both figures, the analytical and numerical results
agree quite well.
\section{Delay Time under Traffic Lights}
The delay time of vehicle motion is clearly recognized in motions of a
series of vehicles under traffic lights. Consider the situation in
which every vehicle waits until a red light changes to green. The
initial conditions are as follows:
\begin{eqnarray}
&&\Delta x_1(0)=\infty,\quad\quad\, \dot x_1(0)=0 ,\\
&&\Delta x_n(0)=7~({\rm m}),\quad \dot x_n(0)=0\qquad(n=2,3,\dots).
\end{eqnarray}
When the light changes to green, the top vehicle first accelerates
to start, followed by the succeeding vehicles according to the
equations of motion.
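A sketch of this simulation in Python might look as follows; the step
size and integration length are illustrative, and the effectively
infinite headway of the top vehicle is represented by a large number.
\begin{verbatim}
import numpy as np

V = lambda dx: np.where(dx > 7.0,
        16.8 * (np.tanh(0.0860 * (dx - 25.0)) + 0.913), 0.0)

N, a, dt = 10, 2.0, 1e-3
x = -7.0 * np.arange(N)     # queue at rest, headway 7 m
v = np.zeros(N)
for _ in range(int(60.0 / dt)):
    dx = np.empty(N)
    dx[0] = 1e9             # top vehicle: infinite headway
    dx[1:] = x[:-1] - x[1:]
    v += a * (V(dx) - v) * dt
    x += v * dt
# shifting the velocity curves of downstream vehicles onto each
# other yields the common delay time T
\end{verbatim}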
\begin{figure}[htb]
\hspace*{-0.6cm}
\epsfxsize=7.5cm
\epsfbox{fig1.eps}
\epsfxsize=7.5cm
\epsfbox{fig3.eps}
\caption{(a) Behavior of velocities of first ten vehicles
under traffic lights with $a=2.0~{\rm s}^{-1}$.
(b) Figure of shifted curves $\dot x_n(t-(n-10)T)$ with $T=1.10$~s.}
\label{signal.2.0}
\end{figure}
\begin{figure}[htb]
\hspace*{-0.6cm}
\epsfxsize=7.5cm
\epsfbox{fig2.eps}
\epsfxsize=7.5cm
\epsfbox{fig4.eps}
\caption{(a) Behavior of velocities of first ten vehicles under
traffic lights with $a=2.8~{\rm s}^{-1}$.
(b) Figure of shifted curves $\dot x_n(t-(n-10)T)$ with $T=1.03$~s.}
\label{signal.2.8}
\end{figure}
We perform the numerical simulation for the cases $a=2.0,\ 2.8\ {\rm
s}^{-1}$ and obtain the time dependence of velocity $\dot x_n(t)$ of
$n$-th vehicles in a queue. Figures \ref{signal.2.0}(a) and \ref
{signal.2.8}(a) show the behavior of motion for the first ten
vehicles. In the figures, it seems that the behaviors of motion for
vehicles downstream converge into a common shape. Thus, except for the
first several vehicles, every vehicle in the queue almost replicates
the behavior of its preceding one with a certain delay time. In this
case, the relation (\ref{eq:def_delay}) can be applied
to the $n$-th vehicle with sufficiently large $n$. In this sense
we may say that a replicative pattern of vehicle motion
is realized ``asymptotically'', and only on this occasion
can we define a delay time of motion $T$ in just the same
way as we defined in section 2. Note that
this definition of the delay time slightly differs from, for example,
the time lag with which the successive vehicles start.
From Figures \ref{signal.2.0}(a) and \ref{signal.2.8}(a), we can
estimate the delay time $T = 1.10$~s for $a=2.0~{\rm s}^{-1}$ and $T
= 1.03$~s for $a=2.8~{\rm s}^{-1}$. To confirm this similarity of
vehicle motion, we also show plots of translated data $\dot
x_n(t-(n-10)T)$ for the 7th, 8th, 9th and 10th vehicles in Figures \ref
{signal.2.0}(b) and \ref{signal.2.8}(b).
\section{Delay Time in Highway Traffic Flow}
Next we investigate the delay time under a simple situation where $N$
vehicles move on a single lane circuit with circumference $L$. Of
course we assume that road conditions are uniform along the circuit
and drivers are identical. Numerical calculations are made with the
initial condition: $x_n = 2\delta_{n1}- b(n-1) \ ( 1\le n\le N)$,
$\dot x_n=V(b)$. The first term of the right hand side of this
condition means that a small perturbation is added to the first
vehicle $x_1$.
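In code, the periodic circuit can be set up as in the following
sketch, where vehicle 1 follows vehicle $N$ across the seam of the
circuit; the values of $N$, $L=Nb$ and the integration time are
illustrative choices.
\begin{verbatim}
import numpy as np

V = lambda dx: np.where(dx > 7.0,
        16.8 * (np.tanh(0.0860 * (dx - 25.0)) + 0.913), 0.0)

N, b, a, dt = 100, 25.0, 2.0, 1e-3
L = N * b                        # circumference fixes the density
x = -b * np.arange(N)
x[0] += 2.0                      # small perturbation on vehicle 1
v = np.full(N, float(V(b)))
for _ in range(int(600.0 / dt)): # relax to the stationary pattern
    dx = np.roll(x, 1) - x       # headway to the preceding vehicle
    dx[0] += L                   # vehicle 1 follows vehicle N
    v += a * (V(dx) - v) * dt
    x += v * dt
\end{verbatim}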
In the previous papers \cite{AichiA}, we have shown that a homogeneous flow
changes into congested flow spontaneously if the density of vehicles
is greater than the critical value. The results of simulations
indicate that after enough time the traffic flow on a circuit
creates an alternating pattern of high and low density regions. The
motion of vehicles in this flow is visualized by plotting them
in the `phase space'
($\Delta x$, $\dot{x}$). After the traffic flow becomes stationary,
the trajectory of every vehicle in this `phase space' draws a kind of
limit cycle which we named `hysteresis loop' in Ref.\cite{AichiA} (see
also Fig. \ref{limit cycle}).
Now let us estimate delay times for two cases under this traffic flow: (A)
the first stage and (B) the final stationary-state stage.
\begin{figure}[htb]
\hspace*{-0.3cm}
\epsfxsize=7cm
\epsfbox{h25s20.eps}
\epsfxsize=7cm
\epsfbox{h40s20.eps}
\caption{Motions of 10th and 11th vehicles in the first stage:
(a) $\Delta x=$25~m for 50 seconds and (b) $\Delta x=$40~m for 100 seconds
with $a=2.0\ {\rm s}^{-1}$.}
\label{fig:highway.caseA}
\end{figure}
\noindent
$\cdot$ Case A
In this case the traffic flow is almost homogeneous. Let us
pick up a pair of vehicles, $n=10,11$. A small perturbation of the
first vehicle $x_1$ propagates backward, and after several seconds
this pair of vehicles changes their velocities. The typical behaviors
are demonstrated in Figures \ref{fig:highway.caseA}.
In the same way as section 2, the delay
time of motion can be estimated from numerical results.
The obtained values of delay times are shown in Table
\ref{tab:highway.delay2.0} for the case of $a=2.0~{\rm s}^{-1}$
and Table
\ref{tab:highway.delay2.8} for the case of $a=2.8~{\rm s}^{-1}$.
As references, the delay times for low frequency limit and
for the enhanced mode $\omega_0$ are also shown
in Tables \ref{tab:highway.delay2.0}, \ref{tab:highway.delay2.8}.
From these results, the low frequency limit is a good approximation
and the delay time $T$ is almost independent of the sensitivity $a$ for
stable traffic flow. For the unstable case, there exist contributions from
blow-up modes, which depend on the sensitivity.
\begin{table}[hbt]
\begin{center}
\caption{Delay times for various headways with $a=2.0\ {\rm s}^{-1}$.
The second column indicates that traffic flow is stable (-) or unstable
(+). The third and fourth columns show analytical results given in
section 2.}
\label{tab:highway.delay2.0}
\vspace{0.3cm}
\begin{tabular}{|c|c|c|c|c|}\hline
$\Delta x$ (m) & $f-a/2$ & $T_0=f^{-1}$ (s)
& $T_{\rm enhanced}$ (s) & $T_{\rm simulation}$ (s) \\ \hline
10 & $-$ & 2.6427 & - & 2.6 \\ \hline
15 & $-$ & 1.3434 & - & 1.35 \\ \hline
20 & $+$ & 0.8282 & 0.8884 & 0.95 \\ \hline
25 & $+$ & 0.6921 & 0.8017 & 0.85 \\ \hline
30 & $+$ & 0.8282 & 0.8884 & 0.95 \\ \hline
35 & $-$ & 1.3434 & - & 1.35 \\ \hline
40 & $-$ & 2.6427 & - & 2.6 \\ \hline
50 & $-$ & 13.101 & - & 13 \\ \hline
\end{tabular}
\caption{Delay times for various headways with $a=2.8\ {\rm s}^{-1}$.
The second column indicates that traffic flow is stable (-) or unstable
(+). The third and fourth columns show analytical results given in
section 2.}
\label{tab:highway.delay2.8}
\vspace{0.3cm}
\begin{tabular}{|c|c|c|c|c|}\hline
$\Delta x$ (m) & $f-a/2$ & $T_0=f^{-1}$ (s)
& $T_{\rm enhanced}$ (s) & $T_{\rm simulation}$ (s)\\ \hline
10 & $-$ & 2.6427 & - & 2.6 \\ \hline
15 & $-$ & 1.3434 & - & 1.35 \\ \hline
20 & $-$ & 0.8282 & - & 0.85 \\ \hline
25 & $+$ & 0.6921 & 0.6996 & 0.75 \\ \hline
30 & $-$ & 0.8282 & - & 0.85 \\ \hline
35 & $-$ & 1.3434 & - & 1.35 \\ \hline
40 & $-$ & 2.6427 & - & 2.6 \\ \hline
50 & $-$ & 13.101 & - & 13 \\ \hline
\end{tabular}
\end{center}
\vspace*{-0.4cm}
\end{table}
\noindent
$\cdot$ Case B
After a sufficiently long time, traffic flow forms
stationary patterns of high and low density regions.
In this situation a vehicle does not change its velocity unless it
encounters a boundary between high and low density regions. Let us observe
the motion of a vehicle at the boundary. A vehicle which encounters
the boundary changes its velocity. After some delay time, the following vehicle
comes to the boundary and changes its velocity in the same way as
the previous vehicle. The typical behavior of vehicles is shown in
Figure \ref{fig:highway.caseB}.
\begin{figure}[htb]
\hspace*{3cm}
\epsfxsize=7cm
\epsfbox{h25cong.eps}
\caption{Motion of successive two vehicles in congested flow.
Initial condition is $\Delta x=$25~m with $a=2.0~{\rm s}^{-1}$.}
\label{fig:highway.caseB}
\end{figure}
The delay time of vehicle motion in this case can be derived as follows.
Consider two vehicles: one enters into a congested
region from a free region and,
after a certain interval $T$, the next one follows.
In the `phase space' ($\Delta x$, $\dot{x}$) (Fig.\ref{limit cycle}),
the free region is denoted by
a point F$(\Delta x_F,v_F)$; that is, vehicles move with
velocity $v_F$ and headway $\Delta x_F$.
The delay time $T$ is defined as the time needed for the next
vehicle to reach the boundary and enter into the congested region.
At the time when the first vehicle reaches the boundary of the free region,
the distance between the next vehicle and this boundary is of course
$\Delta x_F$. The next vehicle runs with velocity $v_F$ and the
boundary itself moves backward with velocity $v_B$,
so if the vehicle and the boundary meet after a time interval $T$,
we can write the following relation
\begin{equation}
v_FT+v_BT=\Delta x_F\ .
\label{eq:Tf}
\end{equation}
A similar relation can be written at the boundary where a vehicle exits
from the congested region.
\begin{equation}
v_CT+v_BT=\Delta x_C\ .
\label{eq:Tc}
\end{equation}
This can be confirmed if one recalls that the pattern of the flow
is already stationary, so the inflow of vehicles at one boundary of
a congested region must be equal to the outflow at the other boundary.
The above two equations, (\ref{eq:Tf}) and (\ref{eq:Tc}), indicate that the
time interval $T$ and $v_B$ can be graphically expressed as the slope
and the intercept with the vertical axis of the line connecting F and C for
$a=2.0\ {\rm s}^{-1}$, and F' and C' for $a=2.8\ {\rm s}^{-1}$, in Figure
\ref{limit cycle}. If we write the velocity of the first vehicle as
$v(t)$, then that of the following vehicle is $v(t-T)$, which
implies that $T$ is just the delay time of vehicle motion defined in
the previous section.
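Eliminating $v_B$ from Eqs.(\ref{eq:Tf}) and (\ref{eq:Tc}) gives the
explicit expressions
\begin{equation}
T = \frac{\Delta x_F - \Delta x_C}{v_F - v_C}, \qquad
v_B = \frac{v_F\Delta x_C - v_C\Delta x_F}{\Delta x_F - \Delta x_C},
\end{equation}
so that both quantities are fixed once the end points F and C of the
limit cycle are known.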
\begin{figure}[hbt]
\hspace*{3cm}
\epsfxsize=8cm
\epsfbox{limit.eps}
\caption{The limit cycles for $a=2.0~{\rm s}^{-1}$ and $a=2.8~{\rm
s}^{-1}$.}
\label{limit cycle}
\end{figure}
It may be convenient to adopt the coordinate moving with the
congestion pattern. Let $x(t)$ and $X(t)$ be the positions of a
vehicle measured in a fixed and a moving (with constant velocity
$v_B$) coordinates, respectively;
\begin{equation}
x(t) = X(t) + v_Bt.
\label{eq:y2x}
\end{equation}
Then we get
\begin{eqnarray}
\ddot{X}(t)
&=& a\{ V(\Delta X(t))-(\dot{X}(t)+v_B)\} \nonumber \\
&=& a\{ W(\Delta X(t))-\dot{X}(t)\},
\label{eq:E33}
\end{eqnarray}
where $W(\Delta x)$ is the optimal velocity function in the co-moving
frame;
\begin{equation}
W(\Delta x) = V(\Delta x) - v_B.
\label{eq:E34}
\end{equation}
Let us concentrate on the motion of vehicles in this co-moving
coordinate. Every vehicle passes the same position with a certain time
interval $T$; in the high density region, the time interval is given by
\begin{equation}
T_C = \left(\frac{\Delta X}{\dot X}\right)_C,
\end{equation}
and (similarly) on the point $F$, we have,
\begin{equation}
T_F = \left(\frac{\Delta X}{\dot X}\right)_F.
\end{equation}
These two time intervals should be equal, since otherwise the
congestion pattern would move. We write this time interval as $T$;
\begin{equation}
T = T_C = T_F,
\end{equation}
which is of course equivalent to Eqs.(\ref{eq:Tf}) and (\ref{eq:Tc}).
We have carried out numerical simulations. For sensitivity $a=2.0~{\rm
s}^{-1}$, we find $C(12.51,2.05)$ and $F(37.50,28.55)$, yielding the
delay of vehicle motion $T=0.943$~s and the backward velocity of the
boundary $v_B=11.2$~m/s. As for $a=2.8~{\rm s}^{-1}$, we find
$C'(21.89,10.92)$ and $F'(28.11,19.68)$, yielding $T=0.711$~s and
$v_B=19.9$~m/s.
It is interesting to find that the main contribution to the resultant
delay of vehicle motion comes from the structure of the Optimal
Velocity Model and not
from the explicit delay $\tau$.
\section{Summary and Discussions}
The notion of the delay time of response $\tau$ has played a significant
role in the history of traffic dynamics. Indeed, delays of vehicle
motion are observed in many cases, in queues waiting at traffic lights
or in highway traffic, and the delay times are usually observed
to be of order 1 second. We should take account of the effect of the
observed delay time,
and it has long been thought that it must be introduced in the
equation of motion as an explicit delay time, most of which is caused
by the driver's physiological delay and the mechanical delay of response
of vehicles. However, it is known that the physiological response time
is of order 0.1 second, not of order 1 second. We should be aware
that the delay time of vehicle motion comes from another
origin, that is, from the equation of motion itself,
which we have investigated intensively here. The results are
summarized as follows:
\begin{enumerate}
\item{ The case of a leader vehicle and its follower}\\
As is seen in Figure \ref{fig:eta-delay}(b), if the headway distance is
around 25~m, at which drivers are sensitive to the behavior of the
motion of the preceding vehicle, we clearly recognize that the delay time is
around 1 second independently of the frequency of the leader's
velocity-change function $\lambda(t)$. However, if the headway
distance is more than 40~m, the delay time is estimated to be larger than
1 second, and we obtain about 6 seconds for the case $\Delta x=45$~m
in the low frequency limit ($\omega \sim 0$).
This is because of the structure of the
optimal velocity function. If the slope of the optimal velocity
function is very
small, drivers are insensitive to the behavior of the preceding
vehicle. This can be easily understood if one considers the extreme
case in which the function $V$ is independent of $\Delta x$ (and so
$f=0$). In this case, a follower never reacts to its preceding vehicle and
accordingly its delay time of motion becomes infinite.
\item{ A queue of vehicles controlled by traffic lights}\\
In this case, except for the first several vehicles, most of the
succeeding vehicles behave almost similarly, as seen in Figures \ref
{signal.2.0} and \ref {signal.2.8}. From those figures,
delay times are read off: $T=1.10$~s for $a=2.0~{\rm s}^{-1}$ and
$T=1.03$~s for $a=2.8~{\rm s}^{-1}$. Although $T$ depends on the
sensitivity adopted, the results obtained are again of order 1 second
for reasonable realistic sensitivities.
\item{Many-vehicle case of highway traffic flow}\\
In Figures \ref{fig:highway.caseA} we show the typical behaviors of a
pair of vehicles, and in Tables \ref{tab:highway.delay2.0} and \ref
{tab:highway.delay2.8} delay times obtained by numerical simulations
are summarized for various values of the headway $\Delta x$. Again in
the case of $\Delta x =25$~m, the estimated delay time is found to be
of order 1 second, and the larger $\Delta x$ is, the larger the value
of the delay time obtained. In the case where the congested flow
becomes stable, $T=0.71~{\rm s}$ for $a=2.8~{\rm s}^{-1}$ and
$T=0.943~{\rm s}$ for $a=2.0~{\rm s}^{-1}$. Since the end point C (F) on
the limit cycle becomes larger (smaller) as $a$ becomes larger (see
for example Fig.7 in ref.\cite{AichiC}), $T$ of course becomes
smaller for larger $a$ (high sensitivity).
\end{enumerate}
All our results
show that the estimated delay time in our OVM is sufficient to
reproduce the order of the observed delay times. It now becomes obvious
that the delay time of motion arises as an effect of the dynamical
equation itself, without any explicit introduction of $\tau$.
It is interesting to find that the resultant delays
of vehicle motion are of the same order of magnitude in all cases.
This may come from the structure of the optimal velocity
function itself, which we have determined phenomenologically.
However, we believe that this remarkable fact has
a more profound reason, which will be made clearer by further
analytical study of the structure of OVM.
\section{Introduction}
\label{introduction}
Touch dynamics systems use distinctive touchscreen gestures for authentication. These interactions include both common gestures like swipes and scrolls and more advanced ones like pinch-and-zoom. Touch dynamics have been proposed as a way to improve the security of login-time authentication mechanisms and to enable continuous authentication while a device is being used. The field has been growing rapidly since the first papers were published in 2012, with 30 papers collecting unique swipe-and-scroll datasets published so far.
Despite the growth in the field, no standard set of methods has been established to enable comparison between published work and transition to real-world deployment. While authors largely report the Equal Error Rate (EER) as a metric of average system performance, there are vast differences in methodological choices when evaluating systems on a static dataset. The goal of this paper is to identify these methodological choices, investigate how common they are in published work, and quantify their effect on reported system performance. These steps are crucial to enable fair comparisons between papers, ensure reproducibility of results and obtain results that are compatible with a real-world system- and threat model.
Through our analysis of the existing work, we identify six pitfalls where design flaws in the experiment, data collection, or analysis impede comparability or lead to unrealistic results.
To examine the impact of each of these pitfalls on a touch dynamics system we collect our own longitudinal large-scale dataset of swipes.
Specifically, we investigate the effects of sample and model size, mixing different phone models in the analysis, using non-contiguous training data, including attacker data in training, using arbitrary aggregation windows, and the implications of code and data availability.
We quantify the effect of each pitfall with their effect on the system equal error rate, showing that pitfalls lead to conspicuous changes in the resulting performance.
The dataset and code from our study are openly accessible to advance the field further.
\vspace{3px}
\noindent\textbf{In this study we make the following key contributions:}
\begin{itemize}
\item We identified six evaluation pitfalls: small sample size, phone model mixing, selecting non-contiguous training data, including attacker samples, swipe aggregation, and code/dataset availability. We conducted a systematic analysis of the touch-based authentication literature, showing that all published studies overlook at least one of the pitfalls.
\item We quantified the effects stemming from these pitfalls in terms of resulting EER; to do so we collected a new 470-user touch dynamics dataset comprised of daily interactions over 31 days. The dataset and our code are available online.\footnote{https://github.com/ssloxford/evaluation-pitfalls-touch}
\item We outlined a set of best practices to avoid the identified pitfalls. These practices include both recommendations for experimental design and methods and also recommendations to allow for reproducibility and comparability of results in the field.
\end{itemize}
\input{pgfplots/cep_visualizations}
\section{Common evaluation pitfalls}
\label{sec:flawed_methods}
In this section, we present our identified evaluation pitfalls in touch authentication systems.
\myp{P1: \ponelabel} Sample size can refer both to the number of users in a study and the amount of data collection sessions recorded per user.
Due to various experimental limitations, touch authentication methods are often evaluated on limited amounts of data, with a median of $\sim$40 distinct users and two data collection sessions.
Nevertheless, the accuracy of the measured performance may benefit from a larger number of users.
In fact, sampling negative training data from larger pools of users can lead to differences in the performance of the recognition model, affecting the mean system performance.
On the other hand, collecting longitudinal data is also necessary to estimate the effect of changing user behavior over time, as this may change across different sessions.
These sample size effects are non-trivial to measure and hinder a robust generalization of results found on smaller samples.
\myp{P2: \ptwolabel} Many studies in the field perform data collection on multiple distinct device models.
This can be a result of convenience (especially in remote studies) or an attempt to demonstrate system performance on different hardware.
While phone models might look similar, slightly different specifications cause fundamental differences when devices are used to collect swipes.
These differences are caused by various factors, including the shape of the phone, its resolution, how it is held, touchscreen sampling rate, and the value range of its pressure and area sensors.
In general, an attacker would use the same phone model as their victim, since an in-person attack is carried out on the victim's own device.
Mixing phone models in testing violates this requirement as attackers and victims use different device models.
It is worth noting that this pitfall does not apply in the case of remote authentication where the attacker can send data from any device model.
\myp{P3: \pthreelabel} In practice, a biometric authentication system has an enrollment (training) phase which precedes the use of the system (or its evaluation).
However, when using the randomized training data selection method, swipes are randomly sampled from the whole user data as shown in Figure~\ref{fig:data_selection_issues} (right).
This does not resemble how a deployed system works, as it essentially evaluates the system by testing on samples from the past.
As a consequence, randomized training data selection leads to biased performance estimation.
\myp{P4: \pfourlabel} While there are several ways to design an authentication method, a common approach is to use a binary classifier that discriminates between legitimate and non-legitimate user samples.
In this case, the negative samples (non-legitimate) are generally gathered from the available pool of users, the same user pool is then used to test the system recognition rates.
However, most stated threat models rule out the possibility that the classifier was trained with negative training data belonging to an attacker: attacker samples should be \textit{unknown}.
Figure~\ref{fig:include_attacker_issues} illustrates this problem: including the attacker samples in the training data provides a significant benefit against attacks compared to what happens when the attacker is excluded from training.
This property has been initially addressed in~\cite{evaluating-behavioral-biometrics}, where it is shown that it artificially reduces the zero-effort attack success rates.
The inclusion of an attacker in training data is incompatible with a realistic threat model.
It is important to clarify that the attacker data we use to delineate the negative class consists of legitimate swipes of other users. While active attacks are interesting to examine, we limit our analysis to zero-effort attackers.
\myp{P5: \pfivelabel} Intuitively, the use of multiple swipes when evaluating a particular model leads to an increase in performance~\cite{touchalytics, which-verifiers-work, unobservable-re-authentication, statistical-touch-images,fusing-typing-swiping-movement}.
While aggregating multiple swipes for an authentication decision is a legitimate approach in general (e.g., it mitigates occasional erratic behavior and improves recognition), it has two important drawbacks.
Firstly, it impedes straightforward comparison between different approaches when the aggregation window size is different.
Secondly, in a realistic threat scenario, it allows the attacker a non-negligible time to perform their attack, as the anomalous attacker behavior will only be identified after a certain number of swipes (depending on the aggregation window size).
\myp{P6: \psixlabel} Datasets and codebases of touch-based authentication systems are rarely made publicly available.
This is a major impediment to reproducibility and progress in the field.
Sharing datasets would enable researchers to reliably separate the effects of different models from those of the collected data.
Sharing the code used to obtain the results is especially important in light of the pitfalls investigated in this paper: oftentimes unstated assumptions are made which are not trivial to spot.
\section{Related work}
\label{sec:rw}
The focus of our work is on mobile continuous authentication systems based on swiping and scrolling behavior.
While our work concentrates on the use of swipes as the most widespread touch method, there are other types of touch gestures used for authentication (e.g., ``pinch to zoom''~\cite{towards-continuous-passive}, screen taps~\cite{tapping-behaviors}).
In this paper, we consider \textit{swipes} and \textit{scrolls} - horizontal and vertical displacements on a touch-capacitive display done using a single finger.
\subsection{Background}
\myp{Origin of touch-based authentication}
Feng et al. developed one of the earliest systems in touch-based continuous authentication on smartphones~\cite{glove-touch}.
Soon after, other systems solely based on the data provided by the phone were developed~\cite{touch-based-first, touchalytics, unobservable-re-authentication}.
Many hybrid approaches for touch-based authentication have also been proposed.
For instance, some research includes sensor data coming from the accelerometer and gyroscope~\cite{touch-sensor-1,touch-sensor-2}.
Deb et al. include 30 different modalities including GPS and magnetometer~\cite{touch-sensor-3} and Rahul et al. have even taken into account the power usage of the device~\cite{power-usage}.
\myp{Data collection modalities}
There are varying approaches for data collection in touch-based authentication.
Frank et al. use text-reading to collect vertical scrolls and a ``spot the difference'' game to gather horizontal swipes~\cite{touchalytics}.
Similarly, Antal et al. use text reading and image gallery tasks~\cite{information-revealed}.
Others include social media interactions~\cite{power-usage}, zooming on pictures~\cite{tips} and questionnaires~\cite{which-verifiers-work}.
Buschek et al. evaluate the influence of GUI elements and hand postures on the performance of touch dynamic systems \cite{touch-usability}.
In order to analyze the time stability of the biometric, some recent studies collect data over multiple sessions or days.
Watanabe et al. specifically look into the long-term performance of touch-based systems by collecting data over 6 months~\cite{long-term-influence}.
They demonstrate promising results for the time-stability of the biometric.
While the data from some experiments is collected in a restricted environment during lab sessions, Feng et al.~\cite{tips} recruited 100 users to use their data collection application over the course of 3 weeks to provide a more realistic environment when performing everyday tasks.
\myp{Feature extraction and classification modalities}
Most feature extraction methods in touch authentication systems focus on describing the geometrical attributes of swipes such as coordinates, duration, acceleration, deviation, and direction~\cite{touchalytics, unobservable-re-authentication}.
Zhao et al., however, use a method to convert the stroke information into an image that can be used for statistical feature model extraction~\cite{statistical-touch-images}.
There is a vast variability in the classification approaches in touch-based authentication.
Some studies have focused on systematizing and comparing knowledge within the field.
Fierrez et al.~\cite{benchmark-touch} analyze and compare recent efforts in the field in terms of datasets, classifiers, and performance.
Serwadda et al. compare the most common machine learning algorithms in the context of touch-based authentication~\cite{which-verifiers-work}.
The studies suggest that Support Vector Machine (SVM) and Random Forest perform the best for touch-based tasks.
Fierrez et al. provide insights into model and design choice performance by benchmarking open-access datasets ~\cite{touch-performance}.
They find that landscape phone orientation and horizontal gestures prove to be more stable and discriminative.
\myp{Performance and metrics} The difference in data collection and classification approaches leads to significant variability in the results reported in the field, with authors claiming EERs between 0\%~\cite{touchalytics, silent-sense} and 22.1\%~\cite{touch-sensor-2}.
Studies also vary in their evaluation metrics as results are reported in False Acceptance Rate (FAR), False Rejection Rate (FRR), Equal Error Rate (EER), Receiver Operating Characteristics (ROC) curve, and Accuracy.
While it has been argued that EER does not adequately describe systematic errors~\cite{eberz2017evaluating}, it is generally accepted as a good measure of average system performance.
Furthermore, \cite{robust-performance} argues the importance of considering the ROC curve for performance as the EER metric could be misleading depending on TPR (True Positive Rate) and FPR (False Positive Rate) system requirements.
In this paper, we abstract from the variety of experimental choices outlined in this section and investigate fundamental effects of evaluation pitfalls on the EER and ROC curve.
\input{tables/related_work}
\subsection{Prevalence of evaluation pitfalls}
To check how prevalent the pitfalls are, we analyzed the touch-authentication literature.
We report an overview of our findings on 30 studies from the last decade, each of which introduces a new touch-based dataset, in Table~\ref{tab:papers}.
We only selected studies with experiments containing natural swiping behavior such as navigating through specific tasks.
We did not consider studies that only rely on mobile keystroke dynamics, sensors, tapping, and one-time gestures for authentication.
Patterns that emerge are discussed throughout the paper.
Table~\ref{tab:papers} shows that all of the studies included in the table are subject to at least one of the pitfalls described in Section~\ref{sec:flawed_methods}.
Our set of studies has a close-to-equal split in study environment, with 15 studies done in a lab and 13 remotely -- the collection environment was unclear for the 2 remaining studies.
We find that the median number of participants is 40, who complete a median of 2 sessions.
This relatively low number of median sessions is concerning and we analyze the impact of this (P1) in section~\ref{sec:anal:sample_size_p1}.
Seven of the studies hand out devices to participants for a period of time without specific instructions on how often to use them, meaning that the precise number of sessions is not known.
Of our analyzed studies, 28\% mix device models in their data collection and do not discuss splitting them in the evaluation, falling into P2.
Likewise, 30\% of the studies do not clearly explain the way they select their training and testing data, with a further 18\% using a randomized approach to select data, and are thus snared by P3.
For those that do not explain their selection process, the code is also not shared, making it impossible to know how the selection was performed.
In terms of attacker modeling, an overwhelming majority (80\%) of the studies investigated use an unrealistic attacker modeling approach and include attacker data into the training set, falling victim to P4.
A much smaller number of studies succumb to P5, with 17\% reporting their results only on the analysis of an aggregation group of more than one swipe, hindering comparability across studies.
P6 also captures many works, with only 8 studies (27\%) sharing their datasets upon publication, two of which no longer have functional web pages.
Furthermore, none of the studies we examined share a complete codebase of their work.
One study, \cite{touchalytics}, does share the feature extraction code files but not the rest of the analysis.
Recent studies have gathered large amounts of data by making collection apps available on public app stores
\cite{brain-run,be-captcha}.
This is a step in the right direction in terms of dataset sizes but presents other challenges.
For instance, in the case of \cite{brain-run} there is data from 2218 users collected on 2418 different devices and in \cite{be-captcha} there is data from 600 users on 278 distinct devices.
There is likely a large variation in the unique device models used as well, especially considering the large fragmentation of the Android ecosystem.
Furthermore, multiple people may perform the tasks on the same account (e.g., a parent handing the phone to a child to play the game).
\section{Study Design}
\label{sec:studydesign}
We designed our data collection experiment to enable us to thoroughly measure the effects of each of the pitfalls described in Section~\ref{sec:flawed_methods}.
As a consequence, we have a few notable differences from previous datasets.
We collected all data remotely on a carefully constrained set of devices.
Furthermore, we obtained data from 470 participants (well above the median of 40) and collected data of up to 31 sessions (compared to the median of 2).
In the remainder of this section, we discuss the designs of the key parts of our data collection experiment.
\subsection{Remote collection}
\label{remote_collection}
Remote data collection provides two major benefits.
Firstly, it allows for the collection of large amounts of data which is impractical for a lab study due to the difficulty of recruiting participants with particular qualities at scale.
Furthermore, external factors such as the COVID-19 pandemic may prevent lab studies altogether, leaving remote collection as the only viable option.
For our study, we utilized Amazon Mechanical Turk (MTurk) - a popular crowdsourcing platform, where workers perform Human Intelligence Tasks (HITs) in exchange for payment.
The platform gives access to a large population of potential subjects and allows for targeting by age, gender, and other demographic criteria.
We created an MTurk HIT, which described the requirements and details of the study and guided the subjects to install the data collection app which was distributed through TestFlight - an online service for over-the-air installation on the iOS platform, which does not allow the general public to install the application.
The HIT also contained the participant information sheet, as required by our institutional review board.
This study received ethical approval.
\myp{Application Onboarding}
Upon opening the application, users were required to complete a consent form and provide demographic information as they would in a lab study.
Users were then required to complete their first pair of tasks once.
This established a connection between MTurk and the application, providing users with the first payment, and allowing payments to be automatically generated for subsequent completions of the task.
\myp{Study Duration}
Within the study, participants were either invited to participate for 7 or 31 days.
Each day participants were prompted with a notification (if they allowed notifications from the application) to complete the task at 9 am, and again at 7 pm if the task had not yet been completed that day.
Not all users, however, completed their tasks consistently as further discussed in Appendix~\ref{appx:demographics}.
\subsection{Devices}
\label{devices}
We selected the iOS platform to carry out our data collection efforts in order to ensure the consistency of hardware and software throughout experimentation.
The other major mobile operating system, Android, includes a much higher number of device models with varying screen sizes and sensors making it impractical for our analysis.
Moreover, the majority of Android devices approximate their reported touch pressure values by considering the size of the touchpoint, while the iPhone models we have chosen support ``3D touch'' - a true pressure sensor built into the screen of the devices.
Due to these restrictions, we have narrowed down our efforts to the nine devices shown in Appendix~\ref{appx:iphone_models}.
These design choices left us with a large number of users using a limited number of models, but still let us make comparisons in terms of phone size, resolution, and even hardware differences.
To our knowledge, there is only one other paper~\cite{touch-gesture-based} in the field which focuses on iOS devices for touch-based authentication and the dataset is not publicly available.
While we have placed specific restrictions on our data collection and experimentation, the dataset can be used for developing systems beyond the specifics of this study.
\subsection{Application}
To facilitate our study we developed an iOS application that collects touch and sensor data as users perform common smartphone interactions.
We collected coordinates and pressure data for each user interaction with the screen at the maximum rate of 60Hz.
Furthermore, we also recorded the accelerometer and gyroscope data at their maximum frequency of 100Hz.
The application required users to complete two tasks: a social media style task and an image gallery task.
The design and intention of these tasks are described in Section~\ref{sec:tasks}.
We optimized the number of rounds of each task to equalize the completion time and the number of swipes and scrolls collected per task.
Both tasks were intended to be completed with the phone in a vertical position, and thus we did not allow a change in the layout when the device was rotated.
The application home page included elements such as completion streak and earning potential in order to increase user retention throughout the study.
The order in which the two tasks were presented was randomly determined before each session, and the instructions for completing each task were provided before each task began.
The user was required to perform five rounds of each task, with the correctness of answers being validated to ensure the legitimacy of the data and avoid abuse.
If the user made a mistake, they were prompted to repeat that round of the task.
On completion of both tasks, the touch and sensor data was transmitted to a remote server.
\subsection{Task Design}
\label{sec:tasks}
\myp{Social media task}
The goal of the social media game is to gather touch data by simulating how users tend to use their phones on common vertical scrolling tasks such as browsing a social media feed or looking through a list of news articles.
In this task, users were required to scroll through a feed in order to find articles or posts which fit a given description.
The articles and corresponding images were gathered from the copyright-free content of NewsUSA~\cite{newsonline} and we manually created a non-ambiguous corresponding description for each one of them.
Each description was associated with one unique article or post and there were 600 such pairs available in the system.
The feed was 20 items long and the correct description-answer pair was randomly chosen and mixed with arbitrary decoys pooled from the rest of the pairs.
\myp{Image gallery task}
The goal of the image gallery game is to gather touch data by simulating how users tend to use their phones on common horizontal swiping tasks such as browsing a list of photos or application screens.
In this task, users were presented with a horizontal list of pictures in which only a single image was visible at any given time.
Users were required to count the number of occurrences of a specific object while swiping through the gallery.
For instance, the objects could be animals such as dogs and cats or food items such as pizza.
All the images were gathered from the open computer vision ``Common Objects in Context'' (COCO) dataset~\cite{coco-dataset}.
There were a total of 200 unique images in the system and each challenge presented 20 images in the gallery while ensuring that between 2 and 6 of them contain the target object.
At the end of the round users were required to enter the number of objects they have counted.
The application's source code is available with the rest of the data and code from the project.
\subsection{Limitations}
As with any remote data collection experiment, the lack of direct experimenter involvement poses challenges.
The two actions that could compromise the quality of the dataset are participants completing the study twice or participants asking others to complete some of their sessions.
The first case is highly problematic since the user would appear twice in the data under different labels.
However, to do so the participant would require two MTurk accounts, two Apple accounts, two physical devices, and the capability to accept and complete the HIT twice before it expires.
The second case of participants handing their phones to someone else for some of their sessions is harder to rule out entirely.
However, we reminded participants not to do so at the start of each session, and the impact of participants disregarding this would be limited to individual sessions.
Lastly, data may have been collected under varying uncontrolled conditions that differ both between users and sessions of the same user.
For instance, a user could be sitting or walking, holding the phone, or having it on a table.
While this may hinder the overall performance (as it adds variability), it should be considered a more realistic representation of the way a touch-based system will be used in practice.
\section{Dataset}
In total, we collected data from 470 users amounting to 6,017 unique sessions and 1,166,092 unique swipes.
On average, users completed 13 sessions with cumulative distribution function plots for each study duration group shown in Appendix~\ref{appx:demographics}.
The majority of the users that completed the first few sessions continued throughout the whole duration of the experiment.
On average, an image gallery task took 1:54 minutes to complete and resulted in the collection of 124 swipes.
The social media task took 1:48 minutes to complete and resulted in the collection of 79 scrolls.
The average duration of a swipe was 58ms and the average flight time between swipes was 630ms.
The demographics of our participants can be found in Appendix~\ref{appx:demographics}.
\section{Machine Learning Pipeline}
Here we present our data and machine learning pipeline and we describe how we investigate the effect of the pitfalls P2, P3, and P4, which require specific steps.
P1 and P5 are analyzed directly by varying the sample size and the aggregation window size, respectively.
Our implementation is available online.
\myp{Division by phone model}
As outlined in Section~\ref{devices}, our participants used 9 distinct phone models for data collection.
While their hardware and sensors are likely to be very similar, there are differences in their screen size, resolution, and shape.
In order to control for the effect of P2, we create distinct data subsets by isolating data collected by each phone model (which we refer to with the phone model name, e.g., \textsc{xs max}).
We compare the performance on these phone model-specific subsets with the performance computed on the entire dataset containing data from all models, which we refer to as \textsc{combined}.
\myp{Preprocessing and feature extraction}
As the first step, we aggregate individual touch samples (consisting of X/Y coordinates and touch pressure) within a game into horizontal swipes (image gallery task) and vertical scrolls (social media task).
In all future steps, scrolls and swipes are classified separately and independently.
In order to avoid including taps, we remove swipes shorter than 3 samples and the ones that do not deviate by more than 5 pixels from the starting point.
For each remaining swipe and scroll, we calculate a set of features directly taken from~\cite{touchalytics}.
All positional features are normalized to the screen resolution.
We also distinguish between the direction (left/right or up/down) of both swipes and scrolls.
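A minimal sketch of this filtering step is given below; representing a
gesture as an array of X/Y samples and measuring the deviation as the
Euclidean distance from the first sample are our own reading of the
criterion.
\begin{verbatim}
import numpy as np

def keep_gesture(xy):
    """xy: array of shape (n, 2), the X/Y samples of one gesture.
    Taps are dropped: fewer than 3 samples, or no point further
    than 5 pixels from the starting point."""
    if len(xy) < 3:
        return False
    return np.max(np.linalg.norm(xy - xy[0], axis=1)) > 5.0
\end{verbatim}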
\myp{Training data selection}\label{train_test_split}
In order to control for the effect of P3, we consider four methods of dividing the target user's data into training and testing sets; a minimal sketch of the two simplest methods follows the list.
In the following, $U$ denotes the set comprising all users, $N_i$ denotes the number of samples (swipes) belonging to user $i$, and $f_{train}$ and $f_{test}$ refer to the fractions of samples used for training and testing, respectively.
\begin{itemize}[leftmargin=5.5mm]
\item \textsc{random}: we choose training samples for a user out of all the available samples at random, i.e., all sessions are merged, testing uses the remaining samples. This process is repeated independently for each user.
\item \textsc{contiguous}: we combine all samples of a user and we select the first portion (in chronological order) of samples for training and the remainder for testing.
\item \textsc{dedicatedSessions}:
for a user, we select a subset of their sessions for training and test on the remaining sessions.
This ensures that each session is used for either training or testing and that training and testing samples are never drawn from the same session.
We investigate selecting sessions both contiguously (in chronological order, with first sessions used for training, later sessions used for testing) and randomly.
\item \textsc{intraSession}: for a user, we select a specific session and use the first half of samples for training and the remainder for testing.
Only samples from the chosen session are used.
\end{itemize}
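The sketch below illustrates the \textsc{contiguous} and
\textsc{random} selection methods; the array layout and the fixed
random seed are illustrative choices.
\begin{verbatim}
import numpy as np

def split_user(swipes, f_train, method="contiguous", seed=0):
    """swipes: one user's samples in chronological order."""
    n_train = int(len(swipes) * f_train)
    if method == "contiguous":  # earliest samples train, rest test
        return swipes[:n_train], swipes[n_train:]
    if method == "random":      # P3: training may contain future samples
        idx = np.random.default_rng(seed).permutation(len(swipes))
        return swipes[idx[:n_train]], swipes[idx[n_train:]]
    raise ValueError(method)
\end{verbatim}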
\myp{Attacker modeling}\label{attacker_modeling}
To evaluate the effect of P4, we examine two different scenarios, one where attacker samples are included in training data and one where they are not.
In both cases, we train a binary model where the user's samples are labeled as positive and multiple other users are combined into a single negative class; a sketch of the \textsc{excludeAtk} split follows the list.
\begin{itemize}[leftmargin=5.5mm]
\item \textsc{excludeAtk}: For each user we randomly divide the remaining users into two equally-sized sets $U_{1}$ and $U_{2}$.
For training, we select positive class data from the available data from the user and negative class data from $U_{1}$.
We ensure the two classes are balanced.
For testing, we treat all users from $U_{2}$ as attackers and classify their samples along with the user's testing samples.
This ensures that there is no overlap in the attackers used for training and testing.
We use this approach over the leave-one-out method proposed in~\cite{eberz2017evaluating} to avoid overfitting when a separate threshold is chosen for each user-attacker pair.
\item \textsc{includeAtk}:
We select a user and split the remaining users into $U_{1}$ and $U_{2}$.
We first train and test the system on $U_{1}$.
This involves training a model for each user $i$ where $N_i f_{train}$ of the user's samples and $\frac{N_i f_{train}}{|U_{1}|}$
of each attacker's samples are used for training and the rest for testing.
This ensures that the negative and positive classes are balanced in the training data.
This process is then separately repeated with $U_{2}$.
\end{itemize}
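A sketch of the \textsc{excludeAtk} split is given below; the
dictionary-based data layout and the balancing of the two classes by
truncation are simplifications of our pipeline.
\begin{verbatim}
import numpy as np

def exclude_atk_split(user_train, user_test, others, seed=0):
    """others: {user_id: swipe array} for all remaining users."""
    rng = np.random.default_rng(seed)
    ids = list(others)
    rng.shuffle(ids)
    u1, u2 = ids[:len(ids) // 2], ids[len(ids) // 2:]
    # negative training data only from U1, balanced with the user
    neg = np.concatenate([others[i] for i in u1])[:len(user_train)]
    X = np.concatenate([user_train, neg])
    y = np.concatenate([np.ones(len(user_train)), np.zeros(len(neg))])
    # U2 users act as unseen attackers at test time
    attackers = np.concatenate([others[i] for i in u2])
    return X, y, user_test, attackers
\end{verbatim}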
\myp{Scaling}
Following the division of data into training and testing batches along with the inclusion or exclusion of attacker data, we standardize each feature by computing the mean and standard deviation of the training data.
The training and testing samples of both the user and the attackers are scaled by subtracting the mean and dividing by the standard deviation of this training data.
\myp{Classification}
Following scaling, we fit a classifier to our training data for each user.
We then classify the samples in the testing set, which gives us a probability for each sample.
This probability is in turn used for both sample aggregation and threshold selection.
\myp{Sample aggregation}
\label{sample_aggregation}
For this optional step, instead of treating samples independently, we group a set of consecutive samples together and take their mean probability estimation, which we use instead of individual probability estimation for threshold selection and final decision.
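A sketch of this aggregation step follows; whether windows overlap is
an implementation choice, and this sketch uses non-overlapping windows
of consecutive swipes.
\begin{verbatim}
import numpy as np

def aggregate(probs, window):
    """Mean probability over non-overlapping windows of swipes."""
    n = (len(probs) // window) * window
    return np.asarray(probs[:n]).reshape(-1, window).mean(axis=1)
\end{verbatim}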
\myp{Threshold selection}
Taking the distance scores for the testing samples (both user and attacker samples), we compute the EER for each user.
This is done by finding the distance score threshold where the FAR and FRR are equal.
The mean EER for a given system is the average EER across all users.
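The per-user EER computation can be sketched as a threshold sweep; here higher scores are assumed to indicate the genuine user, which is a convention of this sketch rather than a detail given in the text.
\begin{verbatim}
import numpy as np

def eer(user_scores, attacker_scores):
    # Sweep candidate thresholds and return the operating point where
    # FAR (attackers accepted) and FRR (genuine rejected) are closest.
    ts = np.sort(np.concatenate([user_scores, attacker_scores]))
    gaps = []
    for t in ts:
        far = np.mean(attacker_scores >= t)
        frr = np.mean(user_scores < t)
        gaps.append((abs(far - frr), (far + frr) / 2))
    return min(gaps)[1]
\end{verbatim}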
\section{Analysis}
\label{sec:analysis}
To quantify each pitfall's effect on the evaluation performance, we analyze their effect one at a time.
Our system implementation is based on one of the seminal papers in the field \cite{touchalytics}.
We report our results from the SVM classifier as it is the best performing method in the study but also experiment with other classifiers (Random Forest, Neural Network, and k-Nearest Neighbors (kNN)).
We discuss classifier differences at the end of this section.
When investigating one pitfall, we control the remaining experimental choices, estimating a baseline performance as follows: (i) \textsc{combined}, (ii) \textsc{contiguous}, (iii) \textsc{excludeAtk} and no sample aggregation.
We chose this specific configuration as a default in our experiments for the following reasons.
For phone model mixing and training data selection, we chose the most common configurations in Table~\ref{tab:papers}: \textsc{combined} and \textsc{contiguous}, respectively.
However, we chose \textsc{excludeAtk} as previous work on the topic has already shown the negative effects of using the unrealistic \textsc{includeAtk} approach \cite{evaluating-behavioral-biometrics}.
We do not use an aggregation of samples in our default configuration as it adds another dimension to the data and results, thus making comparison within experiments and previous work more complicated.
Unless differently specified, we focus on the effect of pitfalls on the \textit{mean EER}, i.e., for an experiment configuration, we train the system, then use the test set to estimate each user's EER (\textit{per-user EER}) and report the average of those.
We also report the mean ROC curve with 95\% confidence intervals where appropriate.
The baseline system resulted in a mean EER of 8.4\% and a standard deviation of $\pm$5.57.
As our goal is to investigate the fundamental effects of evaluation pitfalls, we focus on the most populous left swipe type to limit sources of variability.
Details about the per-user EER distribution and effects of swipe direction on performance can be found in Appendix~\ref{appx:general}.
\input{pgfplots/roc_ABCD}
\subsection*{P1: \ponelabel} %
\label{sec:anal:sample_size_p1}
\label{sample_size}
\input{pgfplots/cep_sample_size_distributions}
\input{pgfplots/cep_sessions_length}
Here we investigate non-trivial effects of user sample size and the effect of the amount of available data per user on the resulting mean EER.
\subsubsection{User sample size}
Oftentimes it is assumed that the EER of a given authentication method can be reliably estimated by sampling roughly 40 users (the median number of users in Table~\ref{tab:papers}).
To investigate this, we randomly sample $n<470$ users from our dataset and compute the mean EER of the system fit on those $n$ and the standard deviation of each sample's per-user EER distribution.
We focus on the standard deviation of the per-user EER distribution as it is a proxy to the evaluation of systematic errors and EER outliers: certain users with high per-user EER are responsible for a larger proportion of the resulting mean EER~\cite{eberz2017evaluating}.
The sampling procedure is repeated 1,000 times for each $n$.
We then use $n$=40 (median user sample size in Table~\ref{tab:papers}) as a reference: we test whether the metrics obtained at $n$=40 reliably predict the behavior for different $n$.
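This resampling procedure can be sketched as follows; \texttt{run\_pipeline} stands in for the full train/test evaluation and is hypothetical.
\begin{verbatim}
import numpy as np

def subsample_metrics(all_users, n, repeats=1000,
                      rng=np.random.default_rng(0)):
    # Draw n users without replacement, evaluate, and average the
    # resulting mean EER and per-user EER standard deviation.
    means, stds = [], []
    for _ in range(repeats):
        users = rng.choice(all_users, size=n, replace=False)
        per_user_eer = run_pipeline(users)  # hypothetical: one EER per user
        means.append(np.mean(per_user_eer))
        stds.append(np.std(per_user_eer))
    return np.mean(means), np.mean(stds)
\end{verbatim}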
\myp{Effect on mean EER}
The left-hand of Figure~\ref{fig:subsampling} reports the difference in behavior between the EER measured
empirically for various $n$ and the EER extrapolated from the performance of the $n$=40 subset.
The figure shows that increasing the number of users in the model has a non-negligible effect on the EER: while we obtain EER=9.14\% for $n$=40, increasing the number of users has a large benefit, reaching EER=8.41\% for $n$=400.
\myp{Effect on per-user EER standard deviation}
The right-hand of Figure~\ref{fig:subsampling} reports the difference in behavior between the empirical per-user EER standard deviation for various $n$ and the standard deviation extrapolated from the performance of the $n$=40 subset.
Given the effect described in the previous paragraph, to allow for meaningful comparison we adjust the extrapolated standard deviation to account for the reduction in mean EER (which reduces the per-user EER standard deviation).
We do so by adjusting the standard deviation extrapolated at each $n$ with the scaling ratio between the empirical mean EER measured at $n$ and the one measured at 40;\footnote{Given the empirical per-user EER standard deviation and EER mean measured at $n$, $\sigma_{n}$ and $\mu_{n}$, we estimate $\hat{\sigma}_{n}$ using $n$=40 as $\hat{\sigma}_{n} = \frac{\mu_{n}}{\mu_{40}} \sigma_{40}$.} this moves the two distributions to the same mean EER.
Figure~\ref{fig:subsampling} (right) shows how for increasing $n$ there is a notable decrement in the per-user EER standard deviation, which is not solely explained by EER mean reduction presented above.
Overall, we find that increasing the user sample size greatly benefits the machine learning model (at least in our general method and SVM), thanks to the added variety of negative samples coming from larger pools of users.
Larger sample sizes not only lead to lower and more accurate measurement of underlying EER but also have a regularizing effect on the resulting per-user EER distribution, leading to fewer outliers.
This also challenges previous findings regarding the usage of error distribution metrics~\cite{eberz2017evaluating}, as the user sample size will also have an effect on the EER distribution across users.
\subsubsection{Number of sessions and swipes}
Increasing the amount of data collected per user may lead to differences in performance: (i) across several data collection sessions users may get acclimatized to the task (leading to better stability of the collected swipes) and (ii) larger amount of data per user may generally benefit the performance of the machine learning model.
In the following paragraphs, we test both factors separately.
\myp{Effect of user acclimatization}
We use data from the 68 users who completed the full 31 sessions; given a number of sessions $s$, we split the data into the earliest collected $s$ sessions (\textit{Early}) and the latest collected $s$ sessions (\textit{Late}).
If users gradually get used to the experimental settings (i.e., their behavior exhibits reduced variation), then \textit{Early} sessions will perform worse than \textit{Late} sessions when the user has acclimatized after many repetitions.
We apply our authentication pipeline on both early and late sets, making several splits with $s$ ranging from 3 to 15.
We report the results in Figure~\ref{fig:early_late}, showing no significant difference between the performance of early and late sessions.
Therefore the data shows no evidence of task acclimatization leading to changes in performance.
\myp{Effect of amount of data per-user}
We again use data from the 68 users who completed the full 31 sessions and consider the effect of an increasing amount of data per user by evaluating the system performance as the number of sessions grows.
Figure~\ref{fig:session_length_performance} shows the resulting EER for growing number of sessions.
We found that no specific trends emerge as the session count varies.
We extend the analysis to the remaining users as well by considering the number of swipes per-user rather than the number of sessions.
Figure~\ref{fig:siwpes_performance} shows the relationship between number of swipes and resulting per-user EER, points are labeled by \textit{Short} or \textit{Long} batch depending on whether the user belonged to either study batch (see Section~\ref{sec:studydesign}).
We found that there is not a clear distinction or trend based on the number of swipes, reinforcing the previous results of Figure~\ref{fig:session_length_performance}.
Both figures indeed suggest that the number of swipes or sessions does not necessarily affect the performance of our model which contradicts hypothesis (ii).
While long-term studies are necessary to investigate the stability of the biometric, the availability of long-term data does not affect EER in a significant way.
\subsection*{P2: \ptwolabel} %
\label{model_bias}
In this section, we compare the system performance on data belonging to individual phone models and when merging together data from various phone models (\textsc{combined}).
We then explore this concept further by measuring how accurately we can predict the phone model a swipe originated from.
\myp{Effect of combining phone models}
As evidenced in the previous Section~\ref{sample_size}, increasing $n$ leads to an EER reduction (see Figure~\ref{fig:subsampling}).
To account for this, we compare each single-phone subset to a \textsc{combined} subsample from all phone models, with an equal number of users as for each respective phone model.
Table~\ref{tab:model_bias} presents the results for \textsc{combined} dataset and single-phone model subsets.
The table shows that the \textsc{combined} approach leads to an overestimation of performance.
We observed a decrease in EER for each of the phone models.
Furthermore, we performed a $t$-test and found that the EER difference between a single phone model and a subsample is statistically significant (\textit{P}~$<$~.05) except for \textsc{6s Plus}, \textsc{7 Plus} and \textsc{XS MAX}.
Figure~\ref{fig:roc_phone_7} shows the complete ROC curves for the iPhone 7 model (which includes the most number of users in our dataset) and its respective \textsc{combined} model.
The overestimation of performance is present throughout the whole of the ROC curves apart from extreme TPR and FPR values.
The ROC curves for the other phone models can be found in Appendix~\ref{appx:roc_phone_model}.
\input{tables/device_comparison}
\myp{Phone model identifiability}
We create a phone model classifier whose aim is to identify the iPhone model of a given swipe.
We merge all the available data and label each swipe with its originating phone model; data is then divided into 80/20 train-test splits.
The data is balanced such that each phone model had an equal number of swipes in the training split.
We make sure that users which were used in training were not considered in testing and vice versa (to avoid biasing the prediction with the users' identities).
We fit an SVM classifier with the data.
We perform this experiment once using all 9 phone models and again only with the \textsc{6s}, \textsc{7} and \textsc{8} models as these three have equal screen sizes, resolutions, and pixel densities.
The classifier achieves 44\% accuracy where a random baseline model would yield 11.1\%.
When considering only \textsc{6s}, \textsc{7} and \textsc{8}, we achieved an accuracy of 49\% compared to a baseline of 33.3\%.
A complete confusion matrix for the classification of the experiments including all nine phone models can be found in Appendix~\ref{appx:confusion}.
This shows that differences in the properties of the devices are reflected in the identification outcome, i.e., swipes belonging to similar phone models tend to be more similar.
These results indicate that it is undesirable to mix different phone models in data collection and analysis for touch-based authentication.
Furthermore, this holds even when the mixed models have similar screen sizes, dimensions, or display pixel densities.
The practice of mixing phone models can lead to an artificial increase of performance between 2.5\% and 4.5\% EER.
\subsection*{P3: \pthreelabel}\label{sec:results_p3} %
We compared the classification performance of our model under the conditions described in Section~\ref{train_test_split}: (i) \textsc{random}, (ii) \textsc{contiguous}, (iii) \textsc{dedicatedSessions} and (iv) \textsc{intraSession}.
For a fair comparison, we only used data from the 409 users which have completed 2 or more sessions as this is a prerequisite for the \textsc{dedicatedSessions} modality.
We present our findings in Table~\ref{data_selection}.
\input{pgfplots/cep_include_exclude_aggregation}
As expected the \textsc{intraSession} method yielded the best performance as users have a more stable interaction pattern during a single session than through time~\cite{touchalytics}.
The fact that the model performed well in this category is hopeful, but in practice, users carry out many sessions throughout time and the \textsc{intraSession} result should not be considered an accurate metric for touch-based authentication systems.
Mixing and randomizing samples from all sessions (\textsc{random} approach) provided a similar effect as the model learns on information about users' interactions throughout all sessions.
\textsc{contiguous} training also allows the model to learn from an overlapping session, which yields better performance.
The \textsc{dedicatedSessions} scenario is the most realistic one for a touch authentication system, as it relies on self-contained training sessions, just as they would be performed in a deployed system.
\input{tables/training_data_comparison}
We found that results between all of the methods vary considerably and performance seems to be overestimated compared to the realistic \textsc{dedicatedSessions} approach.
An unrealistic training data selection can lead to an increase in performance of 3.8\% EER when using a \textsc{random} approach compared to the \textsc{dedicatedSessions} approach.
The complete ROC curves resulting from this experiment are available in Figure~\ref{fig:include_exclude_40}.
The ROC curve results are mostly consistent with the EER reported in Table~\ref{data_selection} apart from \textsc{random} and \textsc{intraSession} curves where \textsc{random} selection has a higher TPR above 0.08\% FPR.
\subsection*{P4: \pfourlabel}\label{sec:results_p4} %
We compared different attack modeling choices as described in Section~\ref{attacker_modeling}: (i) \textsc{excludeAtk} and (ii) \textsc{includeAtk}.
To do so, we randomly subsampled $n$ users from our dataset at various $n$, for each $n$ we apply our pipeline and compute the resulting EER for the two approaches.
This procedure is repeated 10 times; Figure~\ref{fig:attacker_modeling} and Figure~\ref{fig:include_exclude_difference} illustrate the results.
We find that \textsc{includeAtk} results in consistently lower mean EER when compared to \textsc{excludeAtk}, see Figure~\ref{fig:attacker_modeling}.
However, Figure~\ref{fig:include_exclude_difference} shows how the EER difference between the two approaches decreases exponentially as the number of users ($n$) increases.
This is expected as the fewer users are considered the more the presence of attacker data impacts the classifier (e.g., 10\% of negative training data for $n$=11 users, $<$1\% of negative training data for $n>101$ users).
This diminishing return also explains why in \textsc{includeAtk} the EER increases when more users are included, despite the expectation that more data might result in better performance.
Figure~\ref{fig:include_exclude_difference} shows that at $n$=40, the EER difference between the two approaches is 2.55\%.
As pointed out in Table~\ref{tab:papers}, 80\% of our reported studies fall into P4, meaning that these might not present performance metrics appropriate for the specified threat model.
Overall, depending on the user sample size considered, \textsc{includeAtk} can lead to an artificial performance gain of between 0.3\% and 6.9\%.
Figure~\ref{fig:include_exclude_40} shows the ROC curves of \textsc{includeAtk} and \textsc{excludeAtk} models for 40 users (the average number of users from Table~\ref{tab:papers}).
The ROC curves for 20, 100, 200, 300 and 400 users are also available in Appendix~\ref{appx:roc_include_exclude}.
\subsection*{P5: \pfivelabel} %
When reporting their results, many studies~\cite{touchalytics, which-verifiers-work, unobservable-re-authentication, statistical-touch-images,fusing-typing-swiping-movement} consider the performance of a group of consecutive swipes instead of a single one as we have done so far.
Figure \ref{fig:aggregation} shows the performance of our pipeline when we use an aggregation of consecutive swipes as described in Section~\ref{sample_aggregation}.
The procedure was repeated 10 times and shaded areas show the 95\% confidence interval across the ten repetitions.
As expected, increasing the aggregation window size leads to lower EERs: an EER of 8.2\% obtained on single swipes drops by more than a quarter (to 5.9\%) when aggregating two swipes, and drops to less than 3\% at 12 swipes.
Touch-based authentication studies should be clear when and how they use such aggregations as they evidently have an impact on performance.
It should also be noted that each swipe action takes time to perform which can leave a system at risk.
For instance, our dataset suggests that on the tasks considered, performing 20 swipes would take 14 seconds during which the system would be vulnerable.
Therefore a balance between usability and security should be sought.
\subsection*{Cumulative effects of evaluation choices}
In this subsection, we quantify the difference between \textit{realistic} (pitfall-free) and \textit{unrealistic} (with all pitfalls) evaluation choices for touch authentication systems.
We repeated the following two procedures 100 times and report the mean of all runs and the confidence interval at 95\%.
In the unrealistic methods experiment, we combined phone models (\textsc{combined}), included the attacker into the training data (\textsc{includeAtk}), used the \textsc{random} data selection method and each round randomly subsampled our dataset to the median of $n$=40 participants taken from Table~\ref{tab:papers} (to even out the effect of P1).
This resulted in a 4.9\% EER with a confidence interval of $\pm$0.09.
In the realistic method experiment, again we selected $n$=40 users from the most commonly used iPhone \textsc{7} phone model, used \textsc{excludeAtk} and the \textsc{dedicatedSessions} training data selection.
In each round, we randomly select which users act as attackers.
This approach resulted in a much worse EER of 13.8\% with a confidence interval of $\pm$0.14.
Figure~\ref{fig:roc_realistic_unrealistic} illustrates the overestimation of performance throughout the ROC curves of these experiments.
The results clearly illustrate that flawed methods have strong effects on the resulting performance and can lead to an artificial boost to performance by 8.9\% EER.
\subsection*{Effects of classifiers on evaluation choices}
In this subsection, we quantify the impact of pitfalls on performance on four of the most widely used machine learning algorithms in the field.
Implementation details for each individual classifier can be found in Appendix~\ref{appx:classifiers}.
The results of our experiments are presented in Table~\ref{tab:classifiers}.
All of the examined pitfalls introduce an overestimation of performance regardless of the classifier chosen.
However, there are differences in individual performance across chosen classifiers.
For instance, the kNN classifier relies heavily on individual swipes similar to the target one, hence the impact of including the attacker data into training is much more pronounced.
These results suggest that the pitfalls apply to a wide range of touch dynamics system implementations.
\input{tables/classifiers}
\section{Best practices}
In order to facilitate better comparison between future studies and achieve unbiased performance evaluation, we propose a standard set of practices to follow when evaluating touch-based authentication systems, derived from our set of common evaluation pitfalls.
\myp{P1: \ponelabel}
While it is hard to advocate for a specific minimum number of users to be required by a study, we recommend researchers to be aware of the effects of user sample sizes in pipelines similar to the one analyzed in this paper.
Based on the findings in Section~\ref{sample_size}, we found that increasing sample size has two important effects: it reduces the resulting mean EER and smooths the variance of the per-user EER distribution.
It is advisable that an analysis of the effect of sample size is included in new studies, and that results for a sample size of $n$=40 are also reported (when applicable).
This best practice must be accounted for at the study design phase, to ensure enough data is initially collected.
\myp{P2: \ptwolabel}
A single phone model should be used to train and test a proposed system.
While this might not always be the final use case (e.g., in other scenarios, one might want to test the generalization performance of a device-specific classifier on a different device), this avoids the bias introduced by data collected on a specific phone model.
Isolating data belonging to different phone models when training will produce more accurate performance measurements.
Care must be taken in data collection to ensure there are enough samples for each phone model that will be studied.
\myp{P3: \pthreelabel}
Randomized swipes selection should not be used to separate training and testing data.
Test data must always have been collected at a time after the training data was collected, to mimic real-world usage, and to account for behavior drift.
For comparison between works, only an initial training phase (enrollment) should be included, as training updates increase the difficulty of comparing figures.
Ideally, at least two sessions should be used to collect training and test data, as the bulk of real-world usage occurs with a time interval between enrollment and authentication.
\myp{P4: \pfourlabel}
Studies should always exclude the attacker from the training set, as one shall never assume they have information about the attacker in a deployed system.
In particular, care should be taken so that any attacker of a model was not included as a negative example when training the model.
Excluding the attacker is particularly important with studies with a limited number of users, where the effect of such an attacker modeling approach greatly affects the resulting performance.
\myp{P5: \pfivelabel}
Using aggregation of consecutive swipes is beneficial to performance, particularly when using the mean of their distances to the decision boundary as shown in Figure~\ref{fig:aggregation}.
However, researchers should report the performance of a single swipe model in order to ensure comparability with other studies, as well as other reasonable numbers of swipes that other similar papers have proposed.
Furthermore, information about the flight time between swipes and their duration should also be shared, as these directly relate to the time the system is vulnerable to an attacker.
\myp{P6: \psixlabel}
Historically, in this field, it has been rare for authors to share their data -- see Table~\ref{tab:papers} -- and none of the studies examined in the related work share their analysis code.
This leads to uncertainty when reproducing results; in fact, for some studies it was unclear from the paper alone whether the study made certain choices regarding the experiments (e.g., we could not clearly determine whether 30\% of studies fell into P3).
The code and datasets of touch authentication studies should be made freely available.
This ensures that results can be reproduced by others, and reduces barriers to entry of those wishing to build upon existing work.
\myp{Generality of results}
Although this paper focuses on touch-based authentication, we believe these best practices apply in similar ways to other types of biometric systems such as facial recognition and keystroke authentication.
In particular, non-contiguous training data selection (P3), and inclusion of attacker data in training (P4) are fundamentally flawed and should be avoided in all biometric system evaluations.
However, the effect of mixing similar devices (P2) may vary across different modalities.
Similarly, the sample size implications (P1) might differ in other systems from what we found in our experimentation.
Nevertheless, these points should be examined with caution by the relevant literature.
Further work is required to examine to what extent these pitfalls are prevalent in the study of other biometric authentication systems.
\section{Conclusion}
In this work, we explored the impacts of evaluation choices on touch-based authentication methods.
We investigated performance differences in approaches related both to data gathering and choices in the way classifiers are trained with a certain data split.
For the purpose of this study, we collected a large open-source dataset for touch-based mobile authentication consisting of 470 users, which we made publicly available.
We confirmed large variations in performance based on phone model mixing (up to 5.8\% EER), training data selection (up to 3.8\% EER), user sample size (up to 4\% EER), and attacker modeling (up to 6.9\% EER).
Finally, combining all evaluation pitfalls results in overestimation of performance by 8.9\% EER.
The results are largely similar regardless of the chosen classifier.
We also note that, aside from some extreme threshold settings, these effects are observable throughout the ROC curve.
Based on these findings, we proposed a set of good practices to be considered in order to enable accurate reporting of results and to allow comparability across studies.
\bibliographystyle{paper}
\section{Introduction}
A communication network integrates multiple functionalities such as modulation, filtering, frequency discrimination, integration, differentiation, etc., to transmit data. To upscale the data rate and signal processing speed while simultaneously lowering power consumption, on-chip multiprocessing cores allowing parallel operations set a new trend in the commercial electronic market \cite{ref1}, \cite{ref2}. However, with the advances made in the field of communication, the realization of scalable on-chip electronic communication networks faced a critical challenge in terms of bandwidth capacity and electromagnetic interference (EMI), which was solved by switching to optical communication \cite{ref3}, \cite{ref4}. Initially, bulky optical-fiber-based systems were used to scale up network performance; these were replaced by scalable on-chip photonic integrated circuits (PICs) in the last decade \cite{ref5} - \cite{ref8}. PICs offer large bandwidth, low EMI, low losses, small footprints and high power efficiency, thus solving the problems associated with electronic and fiber-based networks. However, today most of the PICs available are application-specific (ASPICs). Thus, to implement multiple functionalities, different ASPICs are connected through optical interconnects, leading to high interconnection losses and a larger footprint \cite{ref9}. ASPICs are fabricated using a cost-sharing mechanism, where multiple projects from different users with different designs are fabricated on the same wafer, leading to a long development period \cite{ref10}. Another drawback associated with ASPICs is the non-reconfigurability of these devices, leading to a new fabrication run each time the functionality changes even a little. In the era of IoT (Internet of Things), to meet modern communication demands, PICs need to follow the footprints of their electronic counterparts. Recently, a reconfigurable general-purpose photonic processor (GPPP) was demonstrated as a programmable photonic integrated circuit (PPIC) for implementing multiple functions on the same chip \cite{ref11}-\cite{ref13}.
PPICs can be broadly categorized into two categories: $(i)$ feed-forward PPICs, used to implement arbitrary matrix operations \cite{ref14}, \cite{ref15}, and $(ii)$ feedback PPICs, implemented using tunable couplers acting as routers and waveguide meshes arranged in a particular topology (rectangular, triangular or hexagonal) \cite{ref12}, \cite{ref16}. The latter is used as a GPPP, as it allows the implementation of densely integrated resonant and non-resonant architectures. Light distribution in these circuits is governed by manipulating the phase of tunable couplers (generally implemented using MZIs due to their enhanced fabrication tolerance) to reconfigure the circuit for implementing multiple functions in parallel, thus enabling parallel processing. Implementing a required functionality in these circuits involves the following steps:
\begin{enumerate}
\item \textbf{Mapping: }Given the properties of coupler and interconnection waveguides, find how many units (couplers) are required to implement a given functionality.
\item \textbf{Routing: }Performs path searching between inputs and outputs to satisfy the criteria obtained in the mapping.
\item \textbf{Optimization: }Optimizes the individual device performance in the obtained route to maximize the required output \cite{ref17}.
\end{enumerate}
This article is limited to the routing part among these steps. To program a function in these circuits, the user can manually choose the path between the available set of inputs and outputs. However, this method becomes cumbersome in cases where large networks are employed and in processes demanding fast reconfiguration. To resolve this issue, these circuits can be represented using graph networks, where a node or a collection of nodes along with edges represents a device in the graph, thus enabling graph and congestion-solving
algorithms to automate the path search in the circuit for implementing the required functionality. Routing and congestion-solving in an integrated photonic network utilize concepts and strategies similar to electronic networks \cite{ref18} but with different constraints due to the reciprocal nature of photonic devices.
Recently, authors have reported routing algorithms based on modified Dijkstra's shortest path and heuristic approaches for feedback PPIC networks \cite{ref19}, \cite{ref20}. However, routing using a modified depth-first search (DFS) algorithm has not been explored in these circuits yet. The DFS algorithm \cite{ref21} - \cite{ref23} is capable of handling complex graphs with negative cycles, is more suitable for decision-making, and is faster, especially in an extensive network \cite{ref24}. Also, all-path and cycle search algorithms, crucial for implementing hash lists and resonant architectures in GPPPs, have not been discussed previously. Thus, in this paper, we propose DFS-based all-path, shortest-path and cycle search algorithms for GPPPs and arbitrary photonic switching networks. The proposed all-path search algorithm enables the user to search all the existing paths between a source-target pair, thus allowing the implementation of a hash list. For dynamic path allocation in live circuits, the speed of path search becomes a critical criterion. The proposed shortest-path search algorithm allows speedy path allocation, as the search speed scales up during multipath search. Also, in the case of signal processing, cycle search becomes essential to implement simple and higher-order filters based on resonant architectures, and the proposed cycle search algorithm enables the implementation of resonant architectures in GPPPs. This paper also discusses the faster BFS-based modified bidirectional search algorithm, along with its limitations. The broad applicability of the proposed algorithms to arbitrary PIC networks is demonstrated by applying them to an N\texttimes N photonic switching network. A comparison with existing algorithms in terms of complexity, longest path searched, and execution time for various networks under different situations has been discussed.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2in]{fig1a.png}%
\label{fig1a}}
\hfil
\subfloat[]{\includegraphics[width=2in]{fig1b.png}%
\label{fig1b}}
\caption{(a) An MZI represented using a Bipartite graph and (b) Dummy nodes added to bipartite graph at both the inputs and outputs with corresponding weights.}
\label{fig1}
\end{figure*}
\section{Graph Representation}
\subsection{Representation of a Single MZI}
Directional couplers/MZIs acting as routers in a general-purpose photonic processor (GPPP) form the fundamental unit of the circuit and can be represented using a directed or undirected bipartite graph, as shown in Figure \ref{fig1a}. These MZIs can stay in a cross, bar, or tunable state, and conventional graph search algorithms can be employed to find the paths between the nodes of an individual MZI or a connected network of MZIs. However, the searched paths will consist of both physical and nonphysical paths. For example, the two paths obtained between nodes 'a' and 'c' are: 'a'-'c', which is a physical path, and 'a'-'d'-'b'-'c', which is a nonphysical path, as back-reflected paths are not allowed/considered in optical devices. Also, while searching multiple paths in a network, an MZI, if assigned a particular state (bar or cross) after searching a path, must remain in the same state for another path as well. This is called the bar cross condition. Thus, various methods can be employed to eliminate the nonphysical paths while simultaneously maintaining the bar cross condition when designing the network and algorithms \cite{ref19}, \cite{ref20}:
\begin{enumerate}
\item The algorithm can be modified such that the same device cannot be accessed twice in a path by setting the maximum number of nodes traversed per device to 2. This method works fine for finding the shortest path but fails to detect cycles and paths of a desired length (number of MZIs) in the network.
\item The network can be implemented using directed graphs; in this case, two layered graphs are needed, one for each direction, for accessing the network from both directions. Also, every time a path is found in one direction, it needs to be engaged in the other layer to avoid contradictory assignments of MZI states while searching multiple paths in a network. This increases the space and time complexity of the path-searching algorithms.
\item Another method is to include dummy nodes with large negative weights at both the inputs and outputs, as shown in Figure \ref{fig1b}. In this case, if any nonphysical path is encountered, the weight grows large, whereas for a physical path the total weight for each traversed unit is 1. This doubles the space complexity of the graph compared to a simple bipartite graph but increases the accuracy of the path search algorithm.
\end{enumerate}
Penalties such as delay, insertion loss and power consumption can be added as edge weights in each unit while applying any of the above-mentioned solutions, and the algorithms can be modified to optimize any of these penalties, either one at a time or simultaneously.
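As an illustration, a single MZI with dummy nodes can be built as a weighted graph in a few lines. The weight values below ($-1000$ on dummy edges and $+2001$ on internal edges) are an assumption chosen only to reproduce the totals quoted in the text: 1 per physically traversed unit, 4003 for a nonphysical traversal, and a running weight of 3002 as soon as a back-reflected node is entered.
\begin{verbatim}
import networkx as nx

W_DUMMY, W_INTERNAL = -1000, 2001  # assumed values; see text

def add_mzi(g, u):
    # Primary nodes a,b (left) and c,d (right); dummies e,f,g,h outside.
    a, b, c, d, e, f, gg, h = (f"{x}{u}" for x in "abcdefgh")
    for x, y in [(a, c), (a, d), (b, c), (b, d)]:  # internal arms
        g.add_edge(x, y, weight=W_INTERNAL)
    for x, y in [(e, a), (f, b), (c, gg), (d, h)]:  # dummy stubs
        g.add_edge(x, y, weight=W_DUMMY)

g = nx.Graph()
add_mzi(g, 0)  # 'e0'-'a0'-'c0'-'g0' sums to 1;
               # 'e0'-'a0'-'d0'-'b0'-'c0'-'g0' sums to 4003
\end{verbatim}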
\begin{figure*}[ht]
\centering
\subfloat[]{\includegraphics[width=2.2in]{fig2a.png}%
\label{fig2a}}
\hfil
\subfloat[]{\includegraphics[width=2.2in]{fig2b.png}%
\label{fig2b}}
\hfil
\subfloat[]{\includegraphics[width=2.2in]{fig2c.png}%
\label{fig2c}}
\caption{Fundamental units arranged in (a) inner circle and (b) outer circle, and (c) combination of the two circles forming a unit cell.}
\label{fig2}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.2in]{fig3a.png}%
\label{fig3a}}
\hfil
\subfloat[]{\includegraphics[width=2.2in]{fig3b.png}%
\label{fig3b}}
\hfil
\subfloat[]{\includegraphics[width=2.2in]{fig3c.png}%
\label{fig3c}}
\caption{Combination of 2 unit cells to form a network while deleting the overlapping nodes from one of the cells (overlapping nodes marked in red).}
\label{fig3}
\end{figure*}
\subsection{Network Design}
Given the advantages of hexagonal topology, the hexagonal arrangement of MZIs is utilized to implement the GPPP architecture. The fundamental unit (MZI) is defined using the dummy nodes as shown in Figure \ref{fig1b}. Here, large negative and positive weights are allocated to the edges connecting the dummy nodes and primary nodes, respectively. As a result of this allocation, the total weight for a physical path (e.g., 'e'-'a'-'c'-'g', 'e'-'a'-'d'-'h') in the MZI becomes 1, whereas the weight for a nonphysical path (e.g., 'e'-'a'-'d'-'b'-'c'-'g', 'e'-'a'-'c'-'b'-'d'-'h') becomes 4003. Based on these allocations, as soon as a node of a nonphysical path is encountered in the search algorithm, the total weight becomes $\geq 3002$. Thus, a threshold weight can be assigned for eliminating nonphysical paths in search algorithms. MZIs are arranged across an inner and outer circle to form a unit cell, as shown in Figures \ref{fig2a} and \ref{fig2b}, respectively. Nodes of MZIs along the inner and outer circles are named using 'i' and 'o', respectively. A unit cell is formed by combining the two circular arrangements, as shown in Figure \ref{fig2c}. To implement a multi-cell network, the number of cells required in the network is entered through an input prompt, based on which cells are distributed around the central cell. The centre positions of these cells are calculated using the inner and outer circle radii. Unit cells formed at their respective positions are added to the network, and overlapping nodes between the cells are found and deleted from one cell while connecting the individual cells, as shown in Figure \ref{fig3}. Nodes are numbered according to the cell number, starting from 0 for the central cell. For example, a node of an inner MZI at position 'r' in cell 'q' is represented as 'ixrq', where 'x' can take alphabets between 'a' and 'h' (referring to the fundamental unit's nodes). The position of each node is saved in a list corresponding to its cell and updated every time a node is added or deleted in the network. A 5-cell and an 11-cell network with 36 and 70 MZIs, respectively, are prepared for implementing the path and cycle search algorithms.
\section{Algorithms}
\subsection{Shortest Path Bidirectional Search} The bidirectional search algorithm is a BFS-based algorithm utilized to find the shortest path in a graph. In this algorithm, two simultaneous searches are performed step by step, one in the forward direction initiating from the start node and another in the backward direction initiating from the target node, while adding weights at each node. Two visited dictionaries are maintained for the searches in both directions, and nodes visited in the forward and backward searches are added to these dictionaries. To eliminate the nonphysical paths, a condition is added to the search algorithm: a new node is added to the dictionary only if the weight at that node is smaller than the threshold weight and the node is not in the visited list. An intersection search is employed in each search step to find the intersection between the two dictionaries. If an intersection is found, the path between the start and target nodes is created from the intersecting node using predecessors and successors in the forward and backward visited dictionaries, respectively, and the total weight is calculated alongside the path. However, this algorithm works well with non-cyclic simple graphs but poses a problem with cyclic or complex graphs.
\subsection{Depth First Search Path Finding}
DFS is a recursive algorithm that finds all the paths between a source and a target node. It can also be employed to search for a path of a given weight in the network by a simple modification of the return condition. The search starts from the source node and recursively calls the searching function at each neighbouring node. A visited set is maintained, holding the nodes being visited, and a weight set is created to hold the weight at the current node, which is updated at each searched node. To eliminate the nonphysical paths, the threshold weight condition is added to the recursive calling loop such that a node is added to the search only if the total weight at the node is less than the threshold and the node is not in the visited list. Otherwise, the search returns to the previous node, thus eliminating the nonphysical path simultaneously while searching. If the current and target nodes are the same, then a path with its respective weight is returned and added to the list of paths.
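A minimal recursive sketch of this all-path search over the weighted graph built above, pruning any branch whose running weight reaches the nonphysical threshold:
\begin{verbatim}
THRESHOLD = 3002  # assumed value consistent with the weights above

def all_paths(g, node, target, visited=None, w=0, path=None, out=None):
    visited = visited if visited is not None else set()
    path = (path or []) + [node]
    out = out if out is not None else []
    if node == target:
        out.append((path, w))
        return out
    visited.add(node)
    for nxt in g.neighbors(node):
        step = g[node][nxt]["weight"]
        if nxt not in visited and w + step < THRESHOLD:
            all_paths(g, nxt, target, visited, w + step, path, out)
    visited.remove(node)  # backtrack so other branches may reuse the node
    return out
\end{verbatim}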
\subsection{Shortest and Fixed Length Path DFS Search}\label{sec3.3}
The DFS algorithm performs path search with higher accuracy and speed in complex and cyclic graphs. As DFS is a recursive algorithm, in order to stop the search it needs to come out of all the loops the algorithm has entered, step by step, returning to the nodes it has gone through previously. For DFS to return the shortest path, the algorithm is modified such that, once the first path is searched and stored in the path variable along with its weight, the algorithm checks the path weight at each new node. If the weight exceeds the weight of the previously searched path, it does not search further from that node and returns to the previous node; if the weight is smaller than the stored weight, it carries on the search either until the target node is found or until the weight exceeds the saved weight. Thus, if a new path with a smaller weight is found, the stored path is updated with the new path alongside its weight.
A similar modification is also applied to search for a path with a given weight. In this case, the stored path is updated only when a path with the required weight is found. In order to return the path, the recursive calling is stopped by continuously returning the path until the return functions roll back to the first node where the calling started.
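The shortest-path variant can be sketched by carrying the best path found so far and abandoning branches that cannot improve on it. Because the running weight overshoots inside a unit, this sketch applies the bound only at dummy (boundary) nodes, where the weight equals the number of traversed units; this restriction is our implementation choice and relies on the node naming of the earlier sketch.
\begin{verbatim}
def shortest_path(g, node, target, visited=None, w=0,
                  path=None, best=None):
    visited = visited if visited is not None else set()
    path = (path or []) + [node]
    if node == target:
        return (path, w) if best is None or w < best[1] else best
    if best is not None and node[0] in "efgh" and w >= best[1]:
        return best  # prune at a unit boundary: cannot beat best path
    visited.add(node)
    for nxt in g.neighbors(node):
        step = g[node][nxt]["weight"]
        if nxt not in visited and w + step < THRESHOLD:
            best = shortest_path(g, nxt, target, visited,
                                 w + step, path, best)
    visited.remove(node)
    return best
\end{verbatim}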
\subsection{Cycle Search Algorithm}
BFS and Bellman-Ford algorithms support cyclic graphs but are limited by graphs containing negative cycles. Thus, a recursive search algorithm with a concept similar to DFS is employed to implement the cycle search algorithm. A visited set and a total weight set are created and updated on a successful search at each node. Here, a parent node is selected, and a recursive search is called at each node starting from the parent node. The search at a node progresses if the threshold weight condition is met and the node is not in the visited list, as in DFS. If the current node is in the visited list and is the same as the parent node, then a cycle with its respective weight is added to the list of cycles. Otherwise, the search returns to the previous node, simultaneously eliminating nonphysical cycles. To find the shortest and fixed-length cycles, the return conditions are modified using the concept used in Section \ref{sec3.3}.
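A sketch of the cycle search in the same style: a branch is recorded as a cycle whenever it returns to the parent node, and duplicates found from the two traversal directions can be removed afterwards.
\begin{verbatim}
def cycles_from(g, parent, node=None, visited=None, w=0,
                path=None, out=None):
    node = node if node is not None else parent
    visited = visited if visited is not None else set()
    path = (path or []) + [node]
    out = out if out is not None else []
    visited.add(node)
    for nxt in g.neighbors(node):
        step = g[node][nxt]["weight"]
        if w + step >= THRESHOLD:
            continue  # would enter a nonphysical (back-reflected) branch
        if nxt == parent and len(path) > 2:
            out.append((path + [parent], w + step))  # closed a cycle
        elif nxt not in visited:
            cycles_from(g, parent, nxt, visited, w + step, path, out)
    visited.remove(node)
    return out
\end{verbatim}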
\section{Results}
Considering the search cases that may occur in a GPPP, the proposed algorithms are utilized for searching single source-target shortest path, single source-target all path, multiple source-target paths and cycle search from a given parent node. To assess the performance of the algorithms in terms of execution time, mean time with different source-target and parent nodes for path and cycle search algorithms, respectively, has been calculated.
\subsection{Single Source-Target Shortest Path} For searching the shortest path between a pair of source and target nodes, the modified bidirectional search and modified DFS algorithms are employed. A path between nodes 'of33' and 'of01' was found where the total number of traversed units was 9, and the time taken to search the path using the modified bidirectional search was 1.17 ms. The same path was searched using modified DFS in 5.74 ms. Figures \ref{fig4a} and \ref{fig4b} show two paths searched between the source-target pairs 'oe23'-'of42' and 'oe33'-'of01'. Although the modified bidirectional search algorithm is faster than the modified DFS while searching the shortest path, it fails while accessing parallel nodes of the same unit as input and output, as shown in Figure \ref{fig4c} (marked with a red oval). An intersection is found at the node 'oe0', which is a corner node of the MZI. When paths are created from this node, we can see that the partial paths in the forward direction from the source node and in the backward direction from the target node are both physical, but the overall path is nonphysical. This problem is not encountered in modified DFS, as shown in Figure \ref{fig4d}. The mean execution time of the DFS shortest-path search algorithm for the 5-cell and 11-cell networks was calculated to be 7.16 ms and 88 ms, respectively. A 7-cell architecture similar to the one reported in \cite{ref20} was also analyzed, and the mean execution time for the DFS shortest-path search was 7.52 ms.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.5in]{fig4a.png}%
\label{fig4a}}
\hfil
\subfloat[]{\includegraphics[width=2.5in]{fig4b.png}%
\label{fig4b}}
\hfil
\subfloat[]{\includegraphics[width=2.5in]{fig4c.png}%
\label{fig4c}}
\hfil
\subfloat[]{\includegraphics[width=2.5in]{fig4d.png}%
\label{fig4d}}
\caption{Single source-target paths (a) 'oe23' to 'of42', (b) 'oe33' to 'of01', (c) 'of33' to 'oe33' case where bidirectional search fails (failure point is marked in red oval), and (d) 'of33' to 'oe33' case where DFS search pass.}
\label{fig4}
\end{figure*}
\subsection{Single Source-Target All Paths}
DFS search is also employed for finding all the existing physical paths between a pair of source and target nodes. Case I in Figure \ref{fig5a} shows the paths existing between the two parallel nodes of the same unit, 'of3'-'oe3', in a 3-unit-cell network. A total of 20 paths were found, and the time taken to find these paths was 4 ms; the figure shows 3 of these paths with weights of 8 (orange), 8 (green) and 12 (violet). In case II, shown in Figure \ref{fig5b}, paths were found between 'of33' and 'oe21'. A total of 484 paths with weights ranging from 7 to 33 were found, and the time taken to search these paths was 218.3 ms. The figure shows 3 of these paths with weights of 7 (orange), 11 (green) and 33 (violet). The mean execution time to search paths over different source-target pairs in a 5-cell network was 220 ms.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.5in]{fig5a.png}%
\label{fig5a}}
\hfil
\subfloat[]{\includegraphics[width=2.5in]{fig5b.png}%
\label{fig5b}}
\caption{All paths between source and target using DFS (a) case I 'of3'-'oe3' with weight 8 (orange), 8 (green) and 12 (violet) and (b) case II 'of33'-'oe21' with weight 7 (orange), 11 (green) and 33 (violet).}
\label{fig5}
\end{figure*}
\subsection{Multiple Source-Target Path}
Due to its higher accuracy, modified DFS is employed to find the shortest paths between multiple source-target pairs. To prevent traversing the same unit twice during path search and to maintain the bar cross condition, the visited list is updated with the list of nodes engaged in the previous paths. Three different cases were taken for finding 4, 6, and 7 paths between different pairs of source-target nodes in the network. In the first case, paths were found between 'of33'-'oe01', 'oe33'-'of01', 'of11'-'oe02' and 'oe21'-'of02', as shown in Figure \ref{fig6a}. The time taken to search these paths was 16.7 ms. Here we can observe that, in comparison to the path found between the nodes 'oe33'-'of01' in the single source-target shortest path case, the path found in this case is different. Since a path exists between the nodes 'of33'-'oe01', an appropriate path is found to prevent violation of the bar and cross condition of the individual units. In case II, paths were found between the nodes 'of33'-'oe01', 'oe33'-'of01', 'of11'-'of02', 'oe21'-'oe02', 'oe23'-'of42' and 'of13'-'oe52', as shown in Figure \ref{fig6b}. Here again, comparing the paths obtained for the nodes 'oe23'-'of42' in the single source-target and multi source-target shortest path cases, we can observe that instead of obtaining the shortest path as in the prior case, a path maintaining the bar and cross condition for the individual units was found. The time taken to search 6 paths was 17.8 ms. Figure \ref{fig6c} shows case III, where paths are found between the nodes 'of33'-'oe01', 'oe33'-'of01', 'of11'-'of52', 'oe21'-'oe42', 'of34'-'oe02', 'oe34'-'of02' and 'oe13'-'oe54'. Here, the time taken to search 7 paths was 18.4 ms. To further evaluate the accuracy of the algorithm, another case of searching paths between 7 source-target pairs was taken. This case was run two times with different orders of source-target pairs. The first order was 'oe23'-'of42', 'of13'-'oe52', 'of33'-'oe01', 'oe33'-'of01', 'oe21'-'oe02', 'of11'-'oe52' and 'of34'-'oe11', and the second order was 'of33'-'oe01', 'oe33'-'of01', 'oe21'-'oe02', 'of11'-'of02', 'of34'-'oe11', 'oe23'-'of42' and 'of13'-'oe52'. The paths obtained for the two cases are shown in Figures \ref{fig6d} and \ref{fig6e}, respectively. Since searching paths between multiple source-target pairs is a sequential process, where paths are searched in the given order of source-target pairs, we can observe that for the first order 5 paths were found out of 7, whereas for the second order 6 paths were found out of 7. In all the paths found here, no nonphysical path or violation of the bar cross condition was observed, and the algorithm returned empty if no physical path existed, which proves the accuracy of the algorithm. A 7-cell structure similar to the ones reported in \cite{ref19}, \cite{ref20} has also been analyzed in Figure \ref{fig6f}; the execution time for searching 6 paths between 'of44'-'oe11', 'oe33'-'oe02', 'of111'-'of412', 'of23'-'of01', 'of211'-'of512' and 'of34'-'of52' was 18.5 ms. For the same network, 8 paths were searched with an execution time of 19.1 ms.
\begin{figure*}[]
\centering
\subfloat[]{\includegraphics[width=2.7in]{fig6a.png}%
\label{fig6a}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig6b.png}%
\label{fig6b}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig6c.png}%
\label{fig6c}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig6d.png}%
\label{fig6d}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig6e.png}%
\label{fig6e}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig6f.png}%
\label{fig6f}}
\caption{Multiple source-target paths (a) case I with four paths, (b) case II with six paths, (c) case III with seven paths, (d) and (e) case with different order of source-target pairs, and (f) six paths in a 7 cell architecture similar to \cite{ref20}.}
\label{fig6}
\end{figure*}
\subsection{Cycle Search From a given Parent Node}
The cycle search algorithm is employed to search all the cycles from a given parent node. The parent node is selected as the starting node of any MZI. Two cases, one for a smaller network of three cells and another for a larger network, are considered. For case I, shown in Figure \ref{fig7a}, all possible cycles with 'ih2' as the parent node were searched, and ten cycles with weights ranging from 6 to 18 were found. The figure shows two of these cycles with weights 6 (orange) and 10 (green). The network consists of 24 MZIs, and the time taken to search all the cycles was 7.56 ms. It is worth noting that, while searching, a total of 20 cycles is found, accounting for searches in both directions from the parent node; cycles consisting of the same nodes are then removed. In case II, all the cycles were searched along the node 'ih33' in the larger network, and 96 cycles with weights ranging from 6 to 30 were found. Figure \ref{fig7b} shows the cycles of weight 6 (orange) and 10 (green), and Figure \ref{fig7c} shows the cycle of weight 30. From the figures, it is observed that no nonphysical path is traversed in any of the cycles, which proves the accuracy of the cycle search algorithm. The mean execution time of the cycle search algorithm over various parent nodes was calculated to be 163.9 ms.
\subsection{N\texttimes N Photonic Switching Network}
N\texttimes N photonic switching networks are commonly employed for fast switching at data centres. The DFS algorithm can also be used to find all the possible combinations of paths available in these networks. Two methods can be applied to find these paths: one is listing all the paths between the inputs and outputs and saving them to a hash list, and another is dynamically searching for a path given a few required input-output combinations. In both cases, the requirement is to ensure that each unit's bar and cross condition remains intact. Since the DFS algorithm proves to be highly accurate, the condition always remains intact. Figure \ref{fig7d} shows one such possible combination of inputs and outputs found in a 4$\times$4 switching network.
\begin{figure*}[]
\centering
\subfloat[]{\includegraphics[width=2.7in]{fig7a.png}%
\label{fig7a}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig7b.png}%
\label{fig7b}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig7c.png}%
\label{fig7c}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{fig7d.png}%
\label{fig7d}}
\hfil
\caption{Cycle search algorithm (a) case I cycles from parent node 'ih2' with weight 6 (orange) and 10 (green), (b) case II cycles from parent node 'ih33' with weight 6 (orange) and 10 (green) (c) case II cycle of weight 30 from the parent node 'ih33'(orange), starting nodes represented in the red circles and (d) possible combination of a $4 \times 4$ switching network found using Modified DFS algorithm.}
\label{fig7}
\end{figure*}
\subsection{Eliminating the Malfunctioning Units}
Cases of malfunctioning units are very common in a large network, so it becomes necessary to eliminate those units from the path search. Malfunctioning may manifest in terms of power consumption, a non-functional unit, or insertion loss. Weights in each unit can be modified to correspond to any of these terms or a combination of them. Since the cost of traversing a physical path in an individual unit is 1, assigning weights in terms of any malfunctioning term becomes easy. Based on these weighted units, if any individual unit malfunctions, it can be assigned a higher weight, and the path may then be optimized for the required purpose. Another way of eliminating these malfunctioning units is to remove the nodes of the malfunctioning unit from the graph, thus not allowing the search algorithms to encounter the nodes of a malfunctioning unit. Both these ways are accurate but demand changes to be made in the graph, either in terms of weights or in terms of removal of nodes, which is a time-consuming process. Another way to avoid traversing a malfunctioning unit is to simply add the nodes corresponding to the malfunctioning unit to the visited list in the search algorithms, thus preventing the algorithms from traversing these units. This method is much more dynamic and faster than the first two methods but prevents the network from accessing these units entirely.
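In the sketches above, the third method amounts to pre-seeding the visited set with the nodes of the faulty unit, e.g. (node names hypothetical):
\begin{verbatim}
bad_unit = {f"{x}0" for x in "abcdefgh"}  # unit 0 is faulty
paths = all_paths(g, "e1", "g2", visited=set(bad_unit))
\end{verbatim}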
\begin{table*}[h]
\begin{center}
\caption{Performance comparison table for various algorithms in terms of complexity, maximum units traversed (longest path), the number of paths found, and mean/ total execution time taken in the search.}
\label{tab1}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\centering
\multirow{2}{10em}{\centering Algorithm} & \multirow{2}{5em}{Complexity} & \multirow{2}{5em}{Total MZIs} & \multirow{2}{7em}{\centering Maximum MZIs Traversed} & \multirow{2}{7em}{\centering Number of Paths Searched} & \multirow{2}{5em}{Time Taken}\\
& & & & & \\
\hline
\hline
\multirow{4}{4em}{\centering Modified Dijkstra} & \multirow{4}{5em}{\centering O(V\textsuperscript{2})} & 30 & 9 & 1 & 56 ms\\
& & 81 & 14 & 1 & 703 ms \\
& & 42 & 9 & 1 & 31 ms \\
& & 42 & 9 & 6 & 78.9 s \\
\hline
\multirow{2}{10em}{\centering Modified Bidirectional Search} & \multirow{2}{5em}{\centering O(2\textsuperscript{d/2})} & 36 & 9 & 1 & 1.13 ms\\
& & 36 & 10 & 7 & 7.91 ms \\
\hline
\multirow{7}{4em}{\centering Modified DFS} & \multirow{7}{5em}{\centering O(V+E)} & 36 & 9 & 1 & 7.2 ms\\
& & 36 & 11 & 7 & 18.4 ms \\
& & 36 & 31 & 484 & 220 ms \\
& & 42 & 9 & 1 & 7.5 ms \\
& & 42 & 9 & 6 & 18.5 ms \\
& & 42 & 9 & 8 & 19.1 ms \\
& & 70 & 17 & 1 & 88 ms \\
& & 70 & 17 & 6 & 320 ms \\
\hline
Cycle Search & O(V+E) & 36 & 30 & 96 & 163.9 ms \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Discussion and Conclusion}
The algorithms discussed here provide fast and dynamically controlled routing routines for arbitrary photonic networks, allowing the search for the shortest path, all paths and cycles in a given network. The proposed algorithms can be further extended to search for paths of a desired length by varying the return condition in the proposed shortest-path algorithm. Using these algorithms, various functionalities such as arbitrary delay lines, beam-forming networks, linearized modulation units, resonant architectures, higher-order filters etc., can be programmed in general-purpose photonic processors. The complexity, maximum units (MZIs) traversed in the path/paths searched and mean execution time (total execution time in the case of multipath search) for searching the paths through the different networks for various algorithms are listed in Table \ref{tab1}. The performance of the proposed algorithms in terms of the mean execution time required to find paths/cycles has been calculated using the Google Colab servers. Performance can be further enhanced using more powerful processors. We also analyzed the time consumption using an Intel Core i7 7\textsuperscript{th} Gen laptop processor (12 GB RAM, clock frequency 2.8 GHz) and observed that the time consumption of the algorithms was reduced to half compared to the Colab servers. Observations in Table \ref{tab1} show that a single-path search is executed in 7.2 ms, whereas a multipath search of 7 paths is executed in 18 ms (only 2.5 times the time of a single-path search), which indicates that the algorithm speed scales up as more paths are searched in the multipath search. Similar observations are made for an extensive network, where 6 paths were searched in 320 ms. Another way to enhance the speed of these search algorithms is to create a hash list consisting of all the paths between each input and output using the proposed DFS all-path search algorithm. This will reduce the complexity of the graph search to O(E). However, each time a path is engaged, it needs to be updated in the hash list by creating a copy without the engaged nodes, which might affect the performance. Methods to eliminate or counter malfunctioning units have also been discussed, where the user can either eliminate the malfunctioning unit from the search by listing the nodes of such units in the visited list or modify the weight of the malfunctioning unit, allowing the user to choose a path consisting of the malfunctioning unit if required.
\section{Introduction}
Most of the multi-loop analytical calculations in quantum field theories
have been done for so-called single-scale problems. This means that the
evaluated integrals are basically expressed as numerical constants
up to a trivial scale factor. Examples of such problems include
almost all renormalization group calculations, evaluations of the
critical exponents, anomalous magnetic moments of the electron and the muon,
matching calculations in effective theories (e.g., HQET, NRQED), and many others.
Usually analytical results involve the so-called Euler--Zagier (EZ) sums
of the form
\begin{equation}
\sum_{n_1>n_2>\dots>n_k\ge1}
\frac{(\pm1)^{n_1}}{n_1^{a_1}}
\dots
\frac{(\pm1)^{n_k}}{n_k^{a_k}}
\end{equation}
or more generally multiple polylogarithms
\begin{equation}
\sum_{n_1>n_2>\dots>n_k\ge1}
\frac{z_1^{n_1}}{n_1^{a_1}}
\dots
\frac{z_k^{n_k}}{n_k^{a_k}}
\end{equation}
where $z_1,\,\dots z_k$ are some parameters and $a_1,\,\dots,a_k$
are positive integers. The sum $a_1 + a_2 + \dots + a_k$
is called the {\it weight} in such a case.
The above definitions include, e.g., well-known irrationalities like
$\zeta$ functions $\zeta(a),\,\zeta(a,b),\,\dots$, (poly)logarithms
${\rm Li}_a(1/2),\,\ln 2,\,\dots$, and ``sixth root of unity'' constants
${\rm Ls}_j^{(k)}(\pi/3),\,{\rm Ls}_j^{(k)}(2\pi/3),\,\dots$.
There is no doubt that, by consideration of more complicated problems
and in higher loops, some new constants will appear.
As examples we can mention some elliptic integrals (see e.g. \cite{I1,I2,I3}).
In this paper we concentrate on a very important single-scale problem:
the total width of positronium decay in QED. Positronium (Ps),
the lightest known atom, provides an ultra-pure laboratory for
high-precision tests of QED. In fact, thanks to the smallness of the
electron mass $m$ relative to the typical hadronic mass scale, its
theoretical description is not plagued by strong-interaction uncertainties,
and its properties, such as decay widths and energy levels,
can be calculated perturbatively in non-relativistic QED (NRQED) \cite{Caswell:1985ui}
with very high precision.
Ps comes in two ground states, ${}^1S_0$ parapositronium ($p$-Ps)
and ${}^3S_1$ orthopositronium ($o$-Ps), which decay to two and three
photons, respectively.
\section{Orthopositronium}
In this section we are concerned with the lifetime of $o$-Ps,
which has been the subject of a vast number of theoretical and
experimental investigations. Its first precision measurement \cite{BH},
of 1968, had to wait nine years to be compared with the first complete
one-loop calculation \cite{Caswell:1976nx}, which came two decades after the
analogous calculation for $p$-Ps \cite{HB}, the latter being considerably simpler
owing to the two-body final state. In the year 1987, the Ann Arbor
group \cite{I8} published a measurement that exceeded the theoretical
prediction available by ten experimental standard deviations.
This so-called $o$-Ps lifetime puzzle triggered an avalanche of
both experimental and theoretical activities, which eventually
resulted in what now appears to be the resolution of this puzzle.
In fact, the 2003 measurements at Ann Arbor \cite{I9} and
Tokyo \cite{I10}
\begin{eqnarray}
\Gamma(\mbox{Ann Arbor}) &=&
7.0404(10)(8)~\mu s^{-1},
\nonumber\\
\Gamma(\mbox{Tokyo}) &=&
7.0396(12) (11)~\mu s^{-1},
\end{eqnarray}
agree mutually and with the present theoretical prediction,
\begin{equation}
\Gamma(\mbox{theory}) = 7.039979(11)~\mu s^{-1}.
\end{equation}
The latter is evaluated from
\begin{eqnarray}
\Gamma(\mbox{theory}) &=& \Gamma_0\left[1 + A \frac{\alpha}{\pi}
+\frac{\alpha^2}{3} \ln\alpha \right.
\nonumber\\
&+& B \left(\frac{\alpha}{\pi}\right)^2
- \frac{3\alpha^3}{2\pi} \ln^2 \alpha
\nonumber\\
&+& \left. C \frac{\alpha^3}{\pi} \ln \alpha \right],
\label{Gamma}
\end{eqnarray}
where \cite{Ore:1949te}
\begin{equation}
\Gamma_0 = \frac{2}{9}(\pi^2-9)\frac{m\alpha^6}{\pi}
\end{equation}
is the LO result.
The leading logarithmically enhanced ${\mathcal O}(\alpha^2\ln\alpha)$ and
${\mathcal O}(\alpha^3\ln^2\alpha)$ terms were found in
Refs.~\cite{Caswell:1978vz,Khriplovich:1990eh} and Ref.~\cite{Kar},
respectively.
The coefficients $A=-10.286606(10)$
\cite{Caswell:1976nx,Caswell:1978vz,SH,Adkins:2000fg,Adkins:2005eg},
$B=45.06(26)$ \cite{Adkins:2000fg}, and $C=-5.51702455(23)$
\cite{Kniehl:2000dh} are only available in numerical form so far.
Comprehensive reviews of the present experimental and theoretical status of
$o$-Ps may be found in Ref.~\cite{AFS}.
Given the fundamental importance of Ps for atomic and particle physics, it is
desirable to complete our knowledge of the QED prediction in
Eq.~(\ref{Gamma}).
Since the theoretical uncertainty is presently dominated by the errors in the
numerical evaluations of the coefficients $A$, $B$, and $C$, it is an urgent
task to find them in analytical form, in terms of irrational numbers,
which can be evaluated with arbitrary precision.
In this Letter, this is achieved for $A$ and $C$.
The case of $B$ is beyond the scope of presently available technology, since
it involves two-loop five-point functions to be integrated over a three-body
phase space.
The quest for an analytic expression for $A$ is a topic of old vintage:
about 25 years ago, some of the simpler contributions to $A$, due to
self-energy and outer and inner vertex corrections, were obtained analytically
\cite{Stroscio:1982wj}, but further progress then soon came to a grinding halt.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{diapic.eps}
\end{center}
\caption{
Feynman diagrams contributing to the total decay width of $o$-Ps at
${\mathcal O}(\alpha)$.
Self-energy diagrams are not shown.
Dashed and solid lines represent photons and electrons, respectively.}
\end{figure}
The ${\mathcal O}(\alpha)$ contribution in Eq.~(\ref{Gamma}),
$\Gamma_1=\Gamma_0A\alpha/\pi$, is due to the Feynman diagrams where a virtual
photon is attached in all possible ways to the tree-level diagrams, with three
real photons linked to an open electron line with threshold kinematics.
Such diagrams are shown in Fig.~1.
After angular integration over three-photon phase space
\begin{equation}
\int [dk_1] [dk_2] [dk_3] \delta(k_1+k_2+k_3-q)
\end{equation}
we can rewrite the one-loop contribution to the width as (see \cite{Adkins:2005eg})
\begin{eqnarray}
\Gamma_1 &=& \frac{m\alpha^7}{36\pi^2}
\int\limits^1_0\frac{{\mathrm d}x_1}{x_1}\,\frac{{\mathrm d}x_2}{x_2}\,
\frac{{\mathrm d}x_3}{x_3}\delta(2-x_1-x_2-x_3)
\nonumber\\
&& {}\times[F(x_1,x_3) + {\mathrm{perm.}}],
\label{eq:org}
\end{eqnarray}
where $x_i$, with $0\le x_i\le 1$, is the energy of photon $i$ in the $o$-Ps
rest frame normalized by its maximum value, the delta function ensures energy
conservation, and {\it perm.}\ stands for the other five permutations of
$x_1,x_2,x_3$.
The function $F$ includes dilogarithm and arc\-tangent functions as given in \cite{Adkins:2005eg}.
For illustration, we just mention that the above expression, after re-parametrization,
consists of integrals of the following types
\begin{eqnarray}
\frac{P(x_1,x_2,x_3)}{Q(x_1,x_2,x_3)}
\int\limits_0^1 \frac{dy\, \ln(x_1+(1-x_1)y^2)}{(1-x_1)x_3-x_1(1-x_3)y^2}\,,
\nonumber\\
\frac{P'(x_1,x_2,x_3)}{Q'(x_1,x_2,x_3)}
\int\limits_0^1 \frac{dy\, \ln(x_1+(1-x_1)y^2)}{x_1 x_3-(1-x_1)(1-x_3)y^2}\,,
\nonumber
\end{eqnarray}
with $P,\,Q,\,P',\,Q'$ being some polynomials.
The analytical integration of the above expressions is rather tedious and
requires a number of tricks, e.g., expansion in series. Only a few integrals
could be done straightforwardly, e.g., with {\it Mathematica} or {\it Maple}.
However, we established all irrational constants in terms of which the complete
one-loop correction can be expressed. These include, among others, the usual EZ sums
up to weight four, such as
\begin{eqnarray}
\ln2 \,, \qquad \zeta(n) \,, \qquad {\rm Li}_4 \left( \frac{1}{2} \right)
\,, \quad\mbox{etc.} \nonumber
\end{eqnarray}
and some additional constants of new type. At weight one, we have
\begin{eqnarray}
\ln(R) \,, \quad \mbox{where} \quad R = \frac{\sqrt{2}-1}{\sqrt{2}+1} \nonumber
\end{eqnarray}
and up to weight four our basis includes the following constants
\begin{eqnarray}
&& {\rm Li}_2 \left( \frac{1}{3} \right) \,, \qquad
{\rm Li}_4 \left( \frac{1}{3} \right) \,, \qquad
{\rm Li}_4 \left( -\frac{1}{3} \right) \,, \nonumber \\
&& {\rm Li}_3 \left( \frac{1}{\sqrt{2}} \right) \,, \qquad
{\rm Li}_3 \left( R \right) \,, \qquad
{\rm S}_{1,2} \left( R \right) \,, \qquad \nonumber \\
&& {\rm Li}_4 \left( \pm R \right) \,, \qquad
{\rm S}_{1,3} \left( \pm R \right) \,, \qquad
{\rm S}_{2,2} \left( \pm R \right) \,,\nonumber
\end{eqnarray}
with ${\rm S}_{a,b}$ being the generalized polylogarithm
\begin{eqnarray}
{\rm S}_{a,b}(x) = \frac{(-1)^{a+b-1}}{(a-1)!b!} \int\limits_0^1
\frac{dt}{t} \ln^{a-1}t\, \ln^b(1-tx) \,.\nonumber
\end{eqnarray}
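For orientation, we recall the standard special case relating these Nielsen polylogarithms to the classical ones,
\begin{eqnarray}
{\rm S}_{a,1}(x) = {\rm Li}_{a+1}(x) \,, \nonumber
\end{eqnarray}
so that, e.g., ${\rm S}_{2,1}(R)={\rm Li}_{3}(R)$.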
Unfortunately, not all integrals can be computed analytically.
In more complicated cases, the integrations are not separated
after expansion into infinite series. We then rely on the PSLQ
algorithm \cite{PSLQ}, which allows one to reconstruct the
representation of a numerical result known to very high precision
in terms of a linear combination of a set of constants with
rational coefficients, if that set is known beforehand. The experience
gained with the explicit solution of the simpler integrals helps us
to exhaust the relevant set.
In order for PSLQ to work in our applications, the numerical values of the
integrals must be known up to typically 150 decimal figures.
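For illustration, a toy version of such a reconstruction can be run with the {\tt mpmath} Python library; the sketch below recovers the relation $6\zeta(2)-\pi^2=0$. (This is our own illustration of the method: in the actual analysis, $\sim$150-digit values and much larger sets of candidate constants are required.)
\begin{verbatim}
from mpmath import mp, pslq, zeta, pi

mp.dps = 60                   # working precision in decimal digits
basis = [zeta(2), pi**2, 1]   # candidate constants
print(pslq(basis))            # -> [6, -1, 0] (up to an overall sign),
                              #    i.e. 6*zeta(2) - pi^2 = 0
\end{verbatim}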
\section{Parapositronium}
Let us now turn to the case of parapositronium. Its total width
was recently measured to be \cite{I28}
\begin{equation}
\Gamma_p({\rm exp}) = 7990.9~\mu s^{-1} \,.
\end{equation}
At present, the following radiative corrections within NRQED are
available:
\begin{eqnarray}
&&
\Gamma_p ~=~ \frac{\alpha^5\,m_e}{2} \Biggl\{
1
+ \frac{\alpha}{\pi} \, \left(\frac{\pi^2-20}{4} \right)
\nonumber\\
&&
+ \frac{\alpha^2}{\pi^2} \, \left( - 2\pi^2 \ln\alpha + A_p \right)
+ \frac{\alpha^3}{\pi} \left(
- \frac{3}{2} \ln^2\alpha \right.
\nonumber\\
&&
\left.
+ \left( \frac{533}{90} - \frac{\pi^2}{2} + 10\ln2 \right) \ln\alpha \right)
\Biggr\} \,.
\nonumber
\end{eqnarray}
The first-order corrections were obtained in \cite{HB}, while the
logarithmically enhanced terms were computed in \cite{Caswell:1978vz,Khriplovich:1990eh}.
Here the constant $A_p=5.12443(33)$ is known only numerically \cite{I20}
and our next goal is to establish the irrational constants that contribute
to this quantity.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig2.eps}
\end{center}
\caption{
Diagrams contributing to the decay width of $p$-Ps at $O(\alpha^2)$
and their reduction to simpler master integrals.
Dashed and solid lines represent massless and massive lines, respectively.}
\end{figure}
This quantity receives contributions from two-loop diagrams of $e^+e^-$
annihilation into two photons in threshold kinematics. However, the
generic planar and non-planar diagrams (see Fig.~2, upper row) can be
reduced via integration by parts to simpler integrals (Fig.~2, middle row).
These, in turn, as we shall see, contain constants that are related to the
sunset diagram (Fig.~2, bottom row) at very special kinematics, namely when
the external momentum $q$ is restricted by $q^2=-m^2$. The sunset diagrams with
such kinematics have been considered in great detail in \cite{I1}.
In particular, the result for the sunset is expressed in terms of special sums
of elliptic nature,
\begin{eqnarray}
\sum_{n=1}^{\infty} (-1)^n
\frac{\binom{2n}{n}}{\binom{4n}{2n}}
\left\{ \phi,\, \frac{\phi}{n},\, \frac{1}{n^2}
\right\} \,,
\label{aa}
\end{eqnarray}
which we can call $a_\phi,\, a_{\phi 1}$ and $a_2$, respectively, and other sums
\begin{eqnarray}
\sum_{n=1}^{\infty}
\frac{(-16)^n}{\binom{2n}{n}\binom{4n}{2n}}
\left\{ 1,\, \frac{1}{n}
\right\} \,,
\label{bb}
\end{eqnarray}
which we call $b_0$ and $b_1$. In (\ref{aa}), $\phi$ stands for
$$
\phi = S_1(n-1) - 3S_1(2n-1)+2S_1(4n-1) \,,
$$
with $S_a(n)=\sum_{j=1}^n 1/j^a$ being a harmonic sum.
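Since the ratio of binomial coefficients in (\ref{aa}) decays like $4^{-n}$, all of these sums converge geometrically and can be evaluated to double precision from a few dozen terms. A minimal Python sketch (our own illustration, not the code used for the high-precision runs) reads:
\begin{verbatim}
from math import comb

def S1(n):    # harmonic sum S_1(n) = sum_{j=1}^{n} 1/j
    return sum(1.0 / j for j in range(1, n + 1))

def phi(n):   # phi = S_1(n-1) - 3 S_1(2n-1) + 2 S_1(4n-1)
    return S1(n - 1) - 3.0 * S1(2 * n - 1) + 2.0 * S1(4 * n - 1)

N = 60        # terms fall off like 4^(-n)
r = lambda n: comb(2 * n, n) / comb(4 * n, 2 * n)
a_phi = sum((-1) ** n * r(n) * phi(n) for n in range(1, N))
a_2   = sum((-1) ** n * r(n) / n ** 2 for n in range(1, N))
b_0   = sum((-16) ** n / (comb(2 * n, n) * comb(4 * n, 2 * n))
            for n in range(1, N))
\end{verbatim}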
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.20\textwidth]{fig3.eps}
\end{center}
\caption{
Example of vertex diagram $J$, contributing to the decay width of $p$-Ps.
All lines have mass $m$. The dot on a line means the square of the
propagator.
}
\end{figure}
Starting from (\ref{aa}) and (\ref{bb}), one can construct sums of higher weights,
e.g., $a_3,\, a_{\phi2},\, b_3$, etc. With such constructed sums, we evaluate
more complicated diagrams, including vertices and boxes. We illustrate this by
evaluating the diagram $J$ shown in Fig.~3. The result is
\begin{equation}
J = \frac{9}{16} \zeta(3) - \frac{1}{8} a_3 - \frac{1}{8} a_{\phi2}
- \frac{1}{32} b_3
\label{J}
\end{equation}
and a similar result follows for the box diagrams of Fig.~2. Formula (\ref{J})
shows the deep relation of the vertex diagram with the sunset diagram
(in fact, such a relation follows from the differential equations).
Concluding this section, we want to mention that there are relations
among the above sums, and also that their relation to elliptic integrals
has been found in \cite{I1}.
\section{Conclusions}
Thus, we established the analytical structure of the results for the next unknown
corrections to both the ortho- and parapositronium lifetimes. We found that new
constants, which are not related to the Euler--Zagier sums, appear in both cases.
We are grateful to G. S. Adkins for providing us with the computer code
employed for the numerical analysis in \cite{Adkins:2005eg}, and to M. Yu. Kalmykov
for fruitful discussions.
\section{Introduction}
Let $\Omega$ be a bounded domain of class $C^3$ and $f:\mathbb{R}\to\mathbb{R}$ be a positive, nondecreasing, locally Lipschitz, and superlinear function in the sense that
\begin{align}
\lim_{u\to \infty}\frac{f(u)}{u} = \infty. \notag
\end{align}
We consider the semilinear elliptic problem
\begin{equation}
\label{gelfandf}
\left\{
\begin{alignedat}{4}
-\Delta u_\lambda&=\lambda f(u_{\lambda})&\hspace{2mm} &\text{in } \Omega,\\
u_\lambda&>0 & &\text{in } \Omega, \\
u_\lambda&=0 & &\text{on } \partial \Omega,
\end{alignedat}
\right.
\end{equation}
where $\lambda$ is a positive parameter.
Equation (\ref{gelfandf}) is known as the Gelfand problem. It was first studied by Barenblatt in relation to combustion theory in a volume edited by Gelfand \cite{Gelf}. Later, it was studied by many authors.
We say that a solution $u_\lambda \in C^2_0(\Omega)$ of (\ref{gelfandf}) is stable if
\begin{align}
\int_{\Omega}\lambda f'(u_{\lambda})\xi^2\,dx\le\int_{\Omega}|\nabla\xi|^2\,dx\hspace{6mm}\text{for all $\xi \in C^\infty_0(\Omega)$}.\notag
\end{align}
If we define the associated energy $E$ as
\begin{align}
E(u_\lambda):=\int_{\Omega}\left(\frac{|\nabla u_{\lambda}|^2}{2}-\lambda F(u_\lambda)\right) \,dx, \notag
\end{align}
where $F(u):=\int_{0}^{u}f(s)\,ds$, then the stability of a solution $u$ is interpreted as the nonnegativity of the second variation $E''$ at the critical point $u$. In particular, any local minimizer of $E$ is a stable solution.
The following theorem is the fundamental result to deal with this problem.
\begin{theorem}[see \cite{B,Br,Dup}]
Let $f$ be a positive, nondecreasing, locally Lipschitz, and superlinear function. Then there exists a constant $\lambda^* = \lambda^*(\Omega,N,f) \in(0,\infty)$ such that
\begin{itemize}
\item For $0<\lambda<\lambda^*$, there exists a minimal classical solution $u_\lambda\in C^2(\overline{\Omega})$ of $(\ref{gelfandf})$.
In particular, $u_\lambda$ is stable and $u_\lambda<u_{\lambda'}$ for $\lambda<\lambda'$.
\item For $\lambda>\lambda^*$, there exists no classical solution $u\in C^2(\overline{\Omega})$ of $(\ref{gelfandf})$.
\item For $\lambda=\lambda^*$, we define $u^*:=\lim_{\lambda \uparrow \lambda^* } u_\lambda$. Then the function $u^*$ is an $L^1$-weak solution in the sense that $u^*\in L^1(\Omega)$, $f(u^*)\mathrm{dist}(\cdot,\partial \Omega)\in L^1(\Omega)$, and
\begin{equation}
-\int_{\Omega}u^*\Delta \xi \,dx = \int_{\Omega}\lambda^*f(u^*)\xi \,dx\hspace{8mm} \text{for all}\hspace{2mm}\xi\in C^2_0(\overline{\Omega}).\notag
\end{equation}
This solution is called the extremal solution of $(\ref{gelfandf})$.
\end{itemize}
\end{theorem}
We are interested in the regularity of the extremal solution. To deal with the regularity problem, the dimension $N$ is a very important factor. Indeed, when $N\ge10$, Joseph and Lundgren \cite{JL} constructed the singular extremal solution $u^*(x)=\log(1/|x|^2)$ for $f(u)=e^u$, $\lambda^*=2(N-2)$, and $\Omega=B_1$. On the other hand, when $N\le9$, Crandall and Rabinowitz \cite{CR} proved that the extremal solution
is bounded if the nonlinearity $f\in C^2(\mathbb{R})$ is a positive, increasing, and convex function which satisfies
\begin{equation}
\lim_{u\to \infty} \frac{f(u)f''(u)}{f'(u)^2}<\infty. \notag
\end{equation}
Typical examples are $f(u)=(1+u)^p$ or $f(u)=e^u$.
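Indeed, for these two examples a direct computation gives
\begin{equation}
\frac{f(u)f''(u)}{f'(u)^2}=\frac{p-1}{p}\hspace{2mm}\text{for}\hspace{2mm} f(u)=(1+u)^p,
\qquad
\frac{f(u)f''(u)}{f'(u)^2}=1\hspace{2mm}\text{for}\hspace{2mm} f(u)=e^u, \notag
\end{equation}
so the limit above is finite in both cases.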
Based on these results, Brezis asked in \cite{B} whether extremal solutions are bounded for all $N\le9$, $\Omega$, and convex nonlinearities
$f$.
In the last few decades, many studies have been done to prove the open problem (see \cite{Ned,cabrecapella,cabre2010,cabre2019,Vil,CR-O,CSS}). Finally, recently, Cabr\'e, Figalli, Ros-Oton, and Serra \cite{CFRS} solved the open problem positively. Moreover, they proved the boundedness of extremal solutions for general nonlinearities and convex domains.
The case in which the nonlinearities and domains are not convex is a more challenging problem. For this case, Cabr\'e \cite{cabre2010} showed the boundedness of extremal solutions when $N=2$. By extending his method, Castorina and Sanch\'on \cite{CS} showed it when $N\le4$ and $f(u)$ is convex after a large $u$. Sanch\'on \cite{san} also proved it when $N\le6$ and $\liminf_{u\to\infty}f(u)f''(u)/f'(u)^2>0$. Moreover, Aghajani \cite{A} proved it if the nonlinearities satisfy the following:
\begin{align}
\label{Aghajani}
\frac{1}{2}<\beta_{-}:=\liminf_{u\to\infty}\frac{f'(u)F(u)}{f(u)^2}\le \beta_{+}:=\limsup_{u\to\infty}\frac{f'(u)F(u)}{f(u)^2}<\frac{7}{10}\notag
\end{align}
or
\begin{equation}
\frac{1}{2}<\beta_{-}=\beta_{+}<\infty.\notag
\end{equation}
Our goal is to relax the convexity of nonlinearities. In order to achieve this goal, we focus on the framework of power convexity. For $m\in \mathbb{N}$, we say that a nonnegative function $f$ is $m$-convex if $f^m$ is convex. We note that if $f$ is $m$-convex, then $f$ is $l$-convex for all $m\le l$. Power convexity is a useful extension of convexity and it has become one of the well-known objects of study in elliptic and parabolic equations. For a systematic study of power convexity, see for instance \cite{i,Ishige}.
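For $f\in C^2(\mathbb{R})$ with $f>0$, $m$-convexity can be checked from the identity
\begin{equation}
(f^m)''=mf^{m-2}\left((m-1)f'^2+f''f\right), \notag
\end{equation}
which we use again in the proof of Corollary \ref{cor}: $f$ is $m$-convex whenever $(m-1)f'^2+f''f\ge0$. In this sense, $m$-convexity relaxes ordinary convexity by allowing the ratio $f''(u)f(u)/f'(u)^2$ to be negative, provided it stays above $-(m-1)$.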
In this paper, we succeed in generalizing the result in \cite{CFRS} to all $m$-convex nonlinearities. More precisely, we prove the following:
\begin{theorem}
\label{th1}
Let $N\le 9$, $m\in \mathbb{N}$, and $\Omega\subset \mathbb{R}^N$ be any bounded domain of class $C^3$. Assume that $f$ is a positive, nondecreasing, $m$-convex, and superlinear function. Then the extremal solution of $(\ref{gelfandf})$ is bounded.
\end{theorem}
The following corollary follows immediately from Theorem \ref{th1}.
\begin{corollary}
\label{cor}
Let $N\le 9$ and $\Omega\subset \mathbb{R}^N$ be any bounded domain of class $C^3$. Assume that $f \in C^2(\mathbb{R})$ is a positive, increasing, and superlinear function which satisfies
\begin{equation}
\label{ratio}
\liminf_{u\to \infty} \frac{f(u)f''(u)}{f'(u)^2}>-\infty.
\end{equation}
Then the extremal solution of $(\ref{gelfandf})$ is bounded.
\end{corollary}
We provide a unified viewpoint on the results of previous studies by considering $m$-convexity. As mentioned before, the ratio $f(u)f''(u)/f'(u)^2$ was introduced by Crandall and Rabinowitz \cite{CR} to state a condition that is much stronger than convexity. Then, after years of research, it was eliminated by Cabr\'e, Figalli, Ros-Oton, and Serra \cite{CFRS}. Corollary \ref{cor} revives an important feature of the ratio in the
boundedness of extremal solutions, and our results thus clarify the relation between the results in \cite{CR}, \cite{CS}, \cite{san}, and \cite{CFRS} through consideration of $m$-convexity.
The idea of the proof is as follows. Since an extremal solution $u^*\in L^1(\Omega)$ is approximated by smooth stable solutions, it is sufficient to provide an a priori $L^\infty$ estimate of classical stable solutions. In order to provide the estimate, we apply the method used in \cite{CFRS}. In \cite{CFRS}, the authors first provided an interior $L^\infty$ estimate and a global $W^{1,2+\gamma}$ estimate of the classical stable solutions with all nonlinearities. Thanks to the global $W^{1,2+\gamma}$ estimate, they constructed a closedness result for stable solutions with convex nonlinearities. As a consequence, they provided a Liouville-type result and, by using a blow-up argument, proved a boundary $L^\infty$ estimate. Our attempt is to extend the closedness result to a larger class of solutions. We define the class of $m$-convex functions and provide a new closedness result for stable solutions with $m$-convex nonlinearities. As a result, we prove the main theorem as an analogue of \cite{CFRS}.
\section{A closedness result for stable solutions with \texorpdfstring{$m$}{Lg}-convex nonlinearities}
The aim of this section is to provide a closedness result for solutions with $m$-convex nonlinearities. As mentioned in the introduction, this result is necessary to prove a Liouville-type result and apply a blow-up argument.
Let $m$ be a natural number. We say that a nonnegative function $f:\mathbb{R}\to [0,\infty]$ is $m$-convex if $f^m$ is convex in
$(-\infty, \sup_{f(t)<\infty} t)$ and define
\begin{align}
C^m:=\left\{f:\mathbb{R}\to [0,\infty]:
\begin{array}{ll}
f\text{ is lower semi-continuous, nonnegative, }\\
\text{nondecreasing, and $m$-convex.}
\end{array}
\right\}. \notag
\end{align}
For $f\in C^m$, we define $f'_{-}(t)$ as
\begin{align}
f'_{-}(t):=
\begin{cases}
\lim_{k\downarrow 0} \frac{f(t)-f(t-k)}{k} &\text{if}\hspace{2mm} f(t)<\infty, \\
\infty & \text{if}\hspace{2mm} f(t)=\infty.\\
\end{cases}
\notag
\end{align}
We note the following two properties. The first observation is that the inclusion $C^{m_1}\subset C^{m_2}$ holds for $m_1\le m_2$. Indeed, let $f\in C^{m_1}$ and define $F(g):=g^\frac{m_2}{m_1}$. Since $F$ is convex and nondecreasing and $f^{m_1}$ is convex, we have
\begin{equation}
F\left(f^{m_1}(\frac{s_1+s_2}{2})\right)\le F\left(\frac{f^{m_1}(s_1)+f^{m_1}(s_2)}{2}\right)\le \frac{F(f^{m_1}(s_1))+F(f^{m_1}(s_2))}{2}\notag
\end{equation}
for all $s_1,s_2\in (-\infty, \sup_{f(t)<\infty} t)$. Therefore, we have $f\in C^{m_2}$.
The second observation is that $C^m$ is closed under the scaling $f\mapsto af(b\cdot)$ for all $a>0$, $b>0$. This observation is important since we rescale the nonlinearity $f$ when we use the blow-up analysis.
Next, we give the definition of stable solutions. Let $\Omega$ be a domain and consider the equation
\begin{equation}
\label{eqf}
-\Delta u =f(u) \hspace{3mm} \text{in $\Omega$}.
\end{equation}
\begin{definition}
\label{stable}
Let $m\in \mathbb{N}$ and $f\in C^m$. Then we say that $u\in W^{1,2}_{\rm{loc}}(\Omega)$ is a weak solution of (\ref{eqf})
if $f(u)\in L^1_{\rm{loc}}(\Omega)$ and
\begin{equation}
\int_{\Omega}\nabla u \cdot \nabla \varphi \,dx = \int_{\Omega}f(u) \varphi \,dx \hspace{5mm} \text{for all $\varphi \in C^{\infty}_0 (\Omega)$}.\notag
\end{equation}
Then, we say that $u$ is a stable solution if $f'_{-}(u)\in L^1_{\rm{loc}}(\Omega)$ and
\begin{equation}
\int_{\Omega}f'_{-}(u){\xi}^2\,dx \leq \int_{\Omega}{|\nabla \xi |}^2\,dx \hspace{5mm} \text{for all $\xi \in C^{\infty}_0 (\Omega)$}.\notag
\end{equation}
Moreover, we define
\begin{align}
S^m(\Omega):= \left\{u\in W^{1,2}_{\rm{loc}}(\Omega): u\text{ is a stable solution of (\ref{eqf}) for some }f\in C^m\right\}. \notag
\end{align}
\end{definition}
The following proposition is the closedness result.
\begin{proposition}
\label{closedness}
Let $m\in \mathbb{N}$, $\gamma>0$, and let $\{u_k\}_{k\in \mathbb{N}}\subset S^m(\Omega)$ be a $W^{1,2+\gamma}$ locally bounded sequence. Then, there exist a subsequence $\{u_{k_j}\}_{j\in \mathbb{N}}\subset \{u_k\}_{k\in \mathbb{N}}$ and $u\in S^m(\Omega)$ such that $u_{k_j}\to u$ in $W^{1,2}_{\rm{loc}}(\Omega)$ as $j\to \infty$.
\end{proposition}
\begin{remark}
\label{remark}
\rm{We note that if $u$ is a weak solution of (\ref{eqf}) for some $f\in C^m$, then $f(t)$ is finite for $t<\sup_{\Omega} u$. It follows from $f(u)\in L^1_{\rm{loc}}(\Omega)$. In particular, if $\{u(x)=\sup_{\Omega} u\}$ has a positive measure, we have $f(\sup_{\Omega}u)<\infty$.}
\end{remark}
\begin{proof}
By assumption, we have a $W^{1,2+\gamma}$ locally bounded sequence $u_k\in S^m(\Omega)$ of stable solutions of $-\Delta u_k=f_k(u_k)$ with $f_k\in C^m$.
Thanks to the local $W^{1,2+\gamma}$ boundedness of $u_k$, by using the Rellich--Kondrachov theorem and a diagonal argument, we verify that there exists a function $u\in L^{2}_{\rm{loc}}(\Omega)$ such that, up to a subsequence if necessary, $u_k\to u$ in $L^2_{\rm{loc}}(\Omega)$ in the sense that for any domain $\Omega'\subset\subset \Omega$, $u_k\to u$ in $L^2(\Omega')$. Furthermore, thanks to an interpolation inequality and the local $W^{1,2+\gamma}$ boundedness of $u_k$, we get $u\in W^{1,2}_{\rm{loc}}(\Omega)$, $u_k\to u$ in $W^{1,2}_{\rm{loc}}(\Omega)$, and $u_k\to u$ for a.e. $x\in \Omega$ (if necessary, by taking a subsequence).
The following proof generalizes the method of \cite[Theorem 4.1]{CFRS} by focusing on the convexity of $f_{k}^m$ instead of the convexity of $f_{k}$. There are three main differences as follows. The first is that in Step 1, we prove the boundedness of $f_k(l)$ for all $l<\sup_{\Omega}u$ without using the condition of $f$. The second is that in Step 2, we use the $m$-convexity of $f_k$ and apply an algebraic argument to show that $\lVert f_k (u_k)-f_k (u_k-\delta)\rVert_{L^1(\Omega')}\to 0$ uniformly in $k$ as $\delta\to 0$ for all subdomains $\Omega'\subset\subset\Omega$. The last point is that in Step 3, we prove the inequality (\ref{stab}) by using the $(m+1)$-convexity of $f_k$ and show that $f^m_k(u_k)$ on the left side of (\ref{stab}) can be replaced by $f^m_k(u_k-2\delta)$.
\vspace{10pt}
\noindent
\textbf{Step 1.} A compactness estimate on $f_k$.
Set $L:=\sup_{\Omega}u \in (-\infty,\infty]$ and let $l<L$. We claim that
\begin{align}
\label{limsup}
\limsup_{k\to \infty} f_k(l)<\infty.
\end{align}
Indeed, let $l<L$ and $\Omega'\subset\subset \Omega$ be a domain such that $\{u(x)>l\}\cap \Omega'$ has a positive measure.
We choose a nonnegative cut-off function $\eta\in C^\infty_{0}(\Omega)$ such that $\eta=1$ in $\Omega'$.
Since $u_k(x)\to u(x)$ for a.e. $x\in \Omega$, by applying Fatou's Lemma to the sequence $1_{\{u_k>l\}\cap \Omega'}$, we get
\begin{equation}
\label{menseki}
\liminf_{k\to\infty}|\{u_k>l\}\cap \Omega'|\ge|\{u(x)>l\}\cap \Omega'|>0.
\end{equation}
On the other hand, since $f_k$ is nonnegative and nondecreasing, we have
\begin{align}
\label{bound of fk}
\begin{split}
f_k(l)&\le\frac{1}{|\{u_k>l\}\cap \Omega'|}\int_{\{u_k>l\}\cap \Omega'}f_k(u_k) \, dx\\
&\le \frac{1}{|\{u_k>l\}\cap \Omega'|}\int_{\Omega}f_k(u_k)\eta\, dx \\
&\le\frac{1}{|\{u_k>l\}\cap \Omega'|}\int_{\Omega}\nabla u_k \cdot \nabla \eta \, dx \le\frac{C}{|\{u_k>l\}\cap \Omega'|} \lVert\nabla u_k\rVert_{L^2(\Omega)}
\end{split}
\end{align}
for some constant $C$ independent of $k$ and all $k$ sufficiently large. Combining (\ref{menseki}) and (\ref{bound of fk}), we obtain (\ref{limsup}).
Furthermore, by the convexity of $f^{m}_{k}$, we have, for all $k$ sufficiently large,
\begin{equation}
(f^{m}_{k})'_{-}(l)<\frac{f^{m}_{k} (l+\delta)-f^{m}_{k} (l)}{\delta}<\limsup_{k\to \infty}\frac{f^{m}_{k}(l+\delta)}{\delta}<\infty \hspace{5mm} \text{for all $l<L$}.\notag
\end{equation}
Since $m$-convex functions are $(m+1)$-convex, in a similar way, it follows that $\limsup_{k\to\infty}(f^{m+1}_{k})'_{-}(l)<\infty$ for all $l<L$.
Therefore, $f^{m}_k$ is uniformly bounded and equicontinuous. By the Arzel\`a--Ascoli theorem and a diagonal argument, there exists a function $g: (-\infty,L)\to \mathbb{R}$ such that $f^{m}_k\to g$ locally uniformly on $(-\infty,L)$.
Define $g(L):=\lim_{l\uparrow L}g(l)$, $g(l):=\infty$ for $l>L$, and $f:=g^{\frac{1}{m}}$. Then it is easy to check $f\in C^m$ and $f_k\to f$ locally uniformly on $(-\infty,L)$.
\vspace{10pt}
\noindent
\textbf{Step 2.} $-\Delta u = f(u)$ in $\Omega$.
For every nonnegative test function $\xi\in C^{0,1}_{0}(\Omega)$, we have
\begin{align}
\int_{\Omega} \nabla u \cdot \nabla \xi \,dx = \lim_{k \to \infty} \int_{\Omega} \nabla u_k \cdot \nabla \xi \,dx =\lim_{k \to \infty}\int_{\Omega} f_k(u_k) \xi \,dx. \notag
\end{align}
We claim that $f_k(u_k) \to f(u)$ for a.e. $x \in\{u<L\}$ as $k\to \infty$.
Indeed, let $x \in \{u<L\}$ and let $l$ be a positive constant satisfying $u(x)<l<L$. Then we have
\begin{align}
|f^{m}_k(u_k(x))-g(u(x))| \le& |g(u(x))-f^{m}_k(u(x))|+|f^{m}_k(u(x))-f^{m}_k(u_k(x))| \notag \\
\le & |g(u(x))-f^{m}_k(u(x))|+(f^{m}_k)'_{-}(l)|u(x)-u_k(x)| \notag \\
\to &0 \hspace{2mm}\text{as $k \to \infty$}\notag.
\end{align}
Thus, this claim holds.
In the following, $\eta \in C^\infty_0(\Omega)$ denotes a nonnegative cut-off function such that $\eta=1$ on the support of $\xi$.
\vspace{10pt}
\noindent
\textbf{Case 1.} $L=\infty$.
We have
\begin{align}
\int_{\rm{supp} (\xi)} f_k (u_k) u_k\, dx \le \int_{\Omega} f_k (u_k) u_k \eta\, dx
= \int_{\Omega} \nabla u_k \cdot \nabla (u_k \eta)\,dx \le C \notag
\end{align}
for some constant $C$ independent of $k$, where the last bound follows from the local $W^{1,2}$ boundedness of $u_k$. We take a continuous function $\varphi : \mathbb{R} \to [0,1]$ such that $\varphi=0$ on $(-\infty,0]$ and $\varphi=1$ on $[1,\infty)$. Then we deduce that
\begin{align}
\label{ineqj}
\begin{split}
\int_{\rm{supp}(\xi)} \varphi(u_k-j) f_k (u_k) \,dx \le&\int_{\rm{supp}(\xi)\cap\{u_k>j\}} f_k (u_k) \,dx \\
\le& \frac{1}{j}\int_{\rm{supp} (\xi) \cap\{u_k>j \}} f_k (u_k)u_k \,dx \le \frac{C}{j}.
\end{split}
\end{align}
Therefore, by Fatou's Lemma, we also have
\begin{align}
\label{fatou}
\int_{\rm{supp}(\xi)} f(u)\varphi(u-j)\, dx \le \frac{C}{j}\hspace{5mm}\text{for all $j\ge 1$}.
\end{align}
Especially, we get $f(u) \in L^1_{\rm{loc}}(\Omega)$.
Furthermore, since
\begin{align}
&f_k (u_k)[1-\varphi(u_k-j)]\le f_k (j+1) \le C_j, \notag\\
&f_k (u_k)[1-\varphi(u_k-j)] \to f(u)[1-\varphi(u-j)] \hspace{5mm}\text{for a.e. $x\in \Omega$ as $k\to \infty$}, \notag
\end{align}
by combining (\ref{ineqj}) and (\ref{fatou}) and applying the dominated convergence theorem, we have
\begin{align}
\int_{\rm{supp}(\xi)}&|f_k (u_k)- f(u)|\,dx \notag \\
\le& \int_{\rm{supp}(\xi)}|f_k (u_k)(1-\varphi(u_k-j))- f(u)(1-\varphi(u-j))|\, dx \notag \\
&+ \int_{\rm{supp}(\xi)}|f_k (u_k)\varphi(u_k-j)+f(u)\varphi(u-j)|\,dx\notag \\
\le& \frac{C}{j} + o(1). \notag
\end{align}
By letting first $k\to \infty$ and then $j\to \infty$, we get the result.
\vspace{10pt}
\noindent
\textbf{Case 2.} $L<\infty$.
Let $\delta>0$. Since $(u_k-L-\delta)_{+} \ge \delta $ in $\{u_k>L+2\delta\}$, we have
\begin{align}
\delta&\int_{\rm{supp}(\xi) \cap \{u_k>L+2\delta\}}f_k (u_k)\,dx \notag \\
&\le\int_{\rm{supp}(\xi) \cap \{u_k>L+2\delta\}}f_k (u_k)(u_k-L-\delta)_{+}\,dx \notag \\
&\le \int_{\Omega}f_k(u_k)(u_k-L-\delta)_{+}\eta\,dx \notag\\
&\le \int_{\Omega}\left(|\nabla u_k|^2\eta+\nabla u_k \cdot \nabla\eta(u_k-L-\delta)_{+}\right)1_{\{u_k>L+\delta\}}\,dx. \notag
\end{align}
Since $u_k$ are uniformly bounded in $W^{1,2+\gamma}(\rm{supp}(\eta))$ and $1_{\{u_k>L+\delta\}} \to 0$ and
$(u_k-L-\delta)_{+} \to 0$ for a.e. $x\in \Omega$ as $k\to \infty$, we deduce from H\"older's inequality that,
\begin{align}
\lim_{k\to \infty}\int_{\rm{supp}(\xi)\cap\{u_k>L+2\delta\}} f_k(u_k)\,dx=0. \notag
\end{align}
Hence, we get
\begin{align}
\label{estimate}
\begin{split}
\int_{\Omega}f_k(u_k)\xi \,dx=&\int_{\Omega \cap \{u_k\le L+2\delta\}}f_k(u_k)\xi \,dx + o(1) \\
\le&\hspace{2mm} \mathrm{I} + \mathrm{II}+o(1),
\end{split}
\end{align}
where
\begin{align}
&\mathrm{I} = \int_{\Omega\cap\{u_k\le L+2\delta\}}f_k(u_k-3\delta)\xi \,dx , \notag \\
&\mathrm{II} = \int_{\Omega\cap\{u_k\le L+2\delta\}}\left(f_k(u_k)-f_k(u_k-3\delta)\right)\xi \,dx. \notag
\end{align}
We note that $f_k(u_k-3\delta) \to f(u-3\delta)$ for a.e. $x\in \Omega$ and $f_k (u_k-3\delta)\le f_k (L-\delta)\le C_\delta$
in $\rm{supp}(\xi) \cap \{u_k\le L+2\delta\}$ for some constant $C_{\delta}$ depending on $\delta$ but not on $k$. Thus we deduce from the dominated convergence theorem that
\begin{align}
\label{estimatei}
\mathrm{I} = \int_{\Omega}f(u-3\delta)\xi \,dx + o(1).
\end{align}
On the other hand, we prove $|\mathrm{II}|<C\delta$ for some $C$ independent of $k$ and $\delta$. Indeed, let $k>0$. We assume $\{x: u_k(x)=\sup_{\Omega}u_k \}$ has a positive measure.
As mentioned in Remark \ref{remark}, we have $f_k(\sup_{\Omega} u_k)<\infty$. Thus we have
\begin{align}
f_k(u_k(x))-f_k(u_k(x) - 3\delta) =& \frac{f_k(u_k(x))-f_k(u_k(x) - 3\delta)}{f^{m}_k(u_k(x))-f^{m}_k(u_k(x) - 3\delta)}\int^{u_k(x)}_{u_k(x)-3\delta} (f^{m}_{k})'_{-} (s) \,ds \notag\\
\le& 3m\delta\frac{f_k(u_k(x))-f_k(u_k(x) - 3\delta)}{f^{m}_k (u_k(x))-f^{m}_k(u_k(x) - 3\delta)}f^{m-1}_k (u_k) (f_k)'_{-}(u_k) \notag\\
\le& 3m\delta (f_k)'_{-}(u_k(x)) \notag
\end{align}
for all $x\in \Omega$. In a similar way, if $\{x: u_k(x)=\sup_{\Omega}u_k\}$ is a null set, the above estimate holds for a.e. $x \in \Omega$.
Thanks to the stability of $u_k$, we get
\begin{align}
\label{estimateii}
|\mathrm{II}| \le C\delta \int_{\Omega\cap\{u_k\le L+2\delta\}}(f_{k})'_{-}(u_k)\xi\,dx <C\delta
\end{align}
for some constant $C$ independent of $k$ and $\delta$. Combining (\ref{estimate}), (\ref{estimatei}), and (\ref{estimateii}), we have
\begin{align}
\int_{\Omega}f(u-3\delta)\xi\,dx -C\delta&\le\liminf_{k\to \infty} \int_{\Omega}f_k(u_k)\xi\,dx \notag \\
&\le\limsup_{k\to \infty} \int_{\Omega}f_k(u_k)\xi\, dx\le\int_{\Omega}f(u-3\delta)\xi\,dx+C\delta,\notag
\end{align}
for some constant $C$ independent of $\delta$. Since $\delta$ is arbitrary, by applying the monotone convergence theorem,\footnote{It is sufficient to prove only when $\xi>0$. Indeed, every $\xi\in C^\infty_{0}(\Omega)$ is represented as a difference between two positive functions belonging to $C^{0,1}_{0}(\Omega)$.} we have $f(u)\in L^1_{\rm{loc}}(\Omega)$ and
\begin{align}
\lim_{k\to \infty} \int_{\Omega}f_k(u_k)\xi \,dx =\int_{\Omega}f(u) \xi \,dx.\notag
\end{align}
\vspace{10pt}
\noindent
\textbf{Step 3.} The stability of $u$.
Define $T:=\sup_{f(t)=0}t$. Then without loss of generality, we assume $T<L$. Let $0<\varepsilon<\min\{1, (L-T)/8\}$. Since $f_k\to f$ locally uniformly on $(-\infty, L)$, there exists a large number $K>0$ such that
\begin{align}
\label{a}
f_k(T+2\varepsilon)\ge\frac{f(T+2\varepsilon)}{2}>0\hspace{5mm} \text{for all $k>K$}.
\end{align}
Let $j\in \mathbb{N}$ and $0<\delta<\varepsilon$. We denote $L_j:=\min\{j,L\}$ and $E_k:=\{T+4\varepsilon<u_k<L_{j}+\delta \}$. We note that since $f_k$ is $m$-convex (and thus $(m+1)$-convex), we have
\begin{equation}
f_k^{m+1} (u_k-2\varepsilon)-f_k^{m+1}(u_k-2\varepsilon-\delta)\le (m+1)\delta f_k^{m}(u_k)(f_k)'_{-}(u_k).\notag
\end{equation}
Thanks to this observation and the stability inequality, we have, for all $\xi\in C^{\infty}_{0}(\Omega)$ and $k>K$,
\begin{align}
\label{stab}
\begin{split}
\int_{E_k} \frac{f^{m+1}_k(u_k-2\varepsilon)-f^{m+1}_k(u_k-2\varepsilon-\delta)}{(m+1)\delta f_k^{m}(u_k)}\xi^2\,dx \le \int_{\Omega} |\nabla \xi|^2 \,dx.
\end{split}
\end{align}
Our attempt is to replace $f^m_k(u_k)$ on the left side of (\ref{stab}) with $f^m_k(u_k-2\delta)$. In order to justify this, we claim that we have
\begin{align}
\label{aa}
\begin{split}
\int_{E_k}\frac{f^{m+1}_k(u_k-2\varepsilon)-f^{m+1}_k(u_k-2\varepsilon-\delta)}{(m+1)\delta}\cdot&\frac{f_k^{m}(u_k)-f_k^{m}(u_k-2\delta)}{f_k^{m}(u_k)f_k^{m}(u_k-2\delta)}\xi^2\,dx\\
&<C\delta
\end{split}
\end{align}
for all $k>K$ and some constant $C$ independent of $\delta$ and $k$. Indeed, since $f_k$ is $(m+1)$-convex and $(f^{m+1}_k)'_{-}(l)$ is uniformly bounded for all $l<L$, thanks to (\ref{a}), we get for all $k>K$,
\begin{equation}
\label{b}
\frac{f^{m+1}_k(u_k-2\varepsilon)-f^{m+1}_k(u_k-2\varepsilon-\delta)}{(m+1)\delta f_k(u_k)f_k^{m}(u_k-2\delta)}<C(f^{m+1}_k)'_{-}(u_k-2\varepsilon)<C,
\end{equation}
where $C$ is a constant independent of $\delta$ and $k$.
On the other hand, since $f_k$ is $m$-convex, we have for all $k>K$,
\begin{equation}
\label{c}
\int_{E_k}\frac{f_k^{m}(u_k)-f_k^{m}(u_k-2\delta)}{f_k^{m-1}(u_k)}\xi^2\,dx\le C\delta\int_{\Omega}(f_k)'_{-}(u_k)\xi^2\, dx<C\delta
\end{equation}
for some constant $C$ independent of $\delta$ and $k$. Therefore, this claim holds by combining (\ref{b}) and (\ref{c}).
Moreover, since $f_k\to f$ locally uniformly on $(-\infty, L)$ and $u_k\to u$ a.e.\ in $\Omega$, by combining (\ref{aa}) and (\ref{stab}) and applying Fatou's lemma, we have
\begin{align}
\int_{\{T+6\varepsilon<u\le L_j \}}&\frac{f^{m+1}(u-2\varepsilon)-f^{m+1}(u-2\varepsilon-\delta)}{(m+1)\delta f^{m}(u)}\xi^2\,dx\notag\\
&\le\int_{\{T+6\varepsilon< u\le L_j \}}\frac{f^{m+1}(u-2\varepsilon)-f^{m+1}(u-2\varepsilon-\delta)}{(m+1)\delta f^{m}(u-2\delta)}\xi^2\,dx\notag\\
&\le\liminf_{k\to\infty}\int_{E_k} \frac{f^{m+1}_k(u_k-2\varepsilon)-f^{m+1}_k(u_k-2\varepsilon-\delta)}{(m+1)\delta f_k^{m}(u_k-2\delta)}\xi^2\,dx\notag\\
&\le C\delta+\int_{\Omega} |\nabla \xi|^2 \,dx\notag
\end{align}
for some constant $C$ independent of $\delta$ and $k$. Since
\begin{equation}
\frac{f^{m+1}(t-2\varepsilon)-f^{m+1}(t-2\varepsilon-\delta)}{(m+1)\delta}\uparrow f^m(t-2\varepsilon)f'_{-}(t-2\varepsilon) \hspace{5mm} \text{as $\delta\downarrow0$ for $t<L_j$}\notag
\end{equation}
and
\begin{equation}
f^m(t-2\varepsilon)f'_{-}(t-2\varepsilon) \uparrow f^m(t)f'_{-}(t) \hspace{5mm} \text{as $\varepsilon\downarrow0$ for $t<L_j$}\notag,
\end{equation}
by letting first $\delta\to 0$ and then $\varepsilon\to0$ and $j\to\infty$, we get
\begin{equation}
\int_{\{T<u\}} f'_{-}(u)\xi^2\, dx\le \int_{\Omega}|\nabla \xi|^2\,dx. \notag
\end{equation}
Since $f'_{-}(u)=0$ in $\{u\le T\}$, we get the result.
\end{proof}
\section{A Liouville-type result}
In this section, we provide a Liouville-type result. To begin with, we denote $B_r:=\{|x|<r\}$, $\mathbb{R}^{N}_{+}:=\mathbb{R}^{N}\cap \{x_N>0\}$, and $B_{r}^{+}:= B_r \cap \{x_N>0\}$.
The following proposition states a Liouville-type result. Thanks to Proposition \ref{closedness}, this proposition is proved as a modification of \cite[Proposition 6.3]{CFRS}.
\begin{proposition}
\label{LT}
Assume $3\le N \le 9$ and let $m\in \mathbb{N}$. There exists a dimensional constant $\alpha_{N}>0$ such that the following holds.
Assume that a nonnegative function $u\in W^{1,2}_{\rm{loc}}(\overline{\mathbb{R}^{N}_{+}}) \cap C^{0}_{\rm{loc}}(\mathbb{R}^{N}_{+})$ satisfies $u_R \in S^{m}(B^{+}_{2})$ and $u_R = 0$ on $\partial B^{+}_{2}\cap\{x_N =0 \}$ in the trace sense for all $R\ge 1$, where $u_R (x):=u(Rx)$.
Suppose in addition that, for some $\alpha \in (0,\alpha_N)$ and $\gamma>0$, we have
\begin{align}
\label{ass}
\lVert \nabla u_R \rVert_{L^{2+\gamma}(B^{+}_{3/2})} \le C_1 \lVert \nabla u_R \rVert_{L^{2}({B^{+}_2})}\le C_2 R^\alpha \hspace{2mm}\text{for all $R\ge1$}
\end{align}
with constants $C_1$ and $C_2$ independent of $R$, and that $u$ satisfies
\begin{equation}
\label{cp}
\int_{\mathbb{R}^{N}_{+}} (\{ (N-2)\eta +2x \cdot \nabla \eta \}\eta |\nabla u|^2 - 2(x\cdot \nabla{u})\nabla u\cdot \nabla(\eta^2)-|x\cdot \nabla u|^2|\nabla \eta|^2) \,dx \le 0
\end{equation}
for all $\eta \in C^{0,1}_0 (\overline{\mathbb{R}^{N}_{+}})$. Then, $u\equiv 0$.
\end{proposition}
\begin{proof}
For the reader's convenience, we sketch the proof as follows. We first use an appropriate function as a test function in (\ref{cp}) and obtain an inequality, which tells us that the radial derivative of $u$ in a half-ball is controlled by the gradient term in a half-annulus.
Next, we prove the following statement: there exists a dimensional constant $C>0$ such that if for some $R\ge1$ we have
\begin{equation}
\int_{B_1^+}|\nabla u_R|^2\,dx \ge \frac{1}{2}\int_{B_2^+}|\nabla u_R|^2\,dx \notag,
\end{equation}
then
\begin{equation}
\label{RI}
\int_{B_{3/2}^+}|\nabla u_R|^2\,dx\le C\int_{B_{3/2}^+ \setminus B_{1}^+}|x|^{-N}|x\cdot\nabla u_R|^2\,dx.
\end{equation}
This statement tells us that the gradient term in a half-ball is controlled by the radial derivative of $u$ in a half-annulus under the condition that the mass of $|\nabla u|^2$ in a half-annulus is not large with respect to that in a half-ball.
By combining the previous two results, using an iteration argument, and controlling the gradient term by using (\ref{ass}), we have $x\cdot \nabla u_R\equiv 0$ in $B^{+}_{2}$ for all $R\ge1$. As a result, we have $u_R\equiv 0$ in $B^{+}_{2}$ for all $R\ge1$\footnote{This fact can be proved by imitating the following method.} and we get the result. Therefore, it is sufficient to prove (\ref{RI}).
In this proof, we assume by contradiction the existence of a sequence $v_k:=u_{R_k}/\lVert\nabla u_{R_k} \rVert_{L^2(B^{+}_{3/2})} \in W^{1,2}_{\rm{loc}}(\overline{\mathbb{R}^{N}_{+}})\cap C^{0}_{\rm{loc}}(\mathbb{R}^{N}_{+})$ with $v_k=0$ on $\partial B^+_2 \cap\{x_N=0\}$ such that
\begin{equation}
\int_{B_1^{+}}|\nabla v_k|^2\, dx \ge \frac{1}{2}\int_{B_2^{+}}|\nabla v_k|^2\, dx, \notag
\end{equation}
\begin{equation}
\int_{B_{3/2}^{+}}|\nabla v_k|^2\, dx =1,\hspace{5mm}\text{and}\hspace{5mm} \int_{B_{3/2}^{+}\setminus B_1^{+}}|x|^{-N}|x\cdot\nabla v_k|^2 \,dx \to 0 \hspace{2mm} \text{as $k\to\infty$. \notag}
\end{equation}
Since
\begin{equation}
\lVert \nabla v_k \rVert_{L^{2+\gamma}(B^{+}_{3/2})}\le C_1\lVert \nabla v_k \rVert_{L^{2}({B^{+}_2})}\le 2C_1 \notag
\end{equation}
and
\begin{equation}
\lVert v_k \rVert_{L^{2+\gamma}({B^{+}_{3/2}})}\le C\lVert \nabla v_k \rVert_{L^{2+\gamma}({B^{+}_{3/2}})}\le C\hspace{2mm} \text{by the Poincar\'e inequality}, \notag
\end{equation}
similarly to the beginning of the proof of Proposition \ref{closedness}, we have $v_k\to v$ in $W^{1,2}(B_{3/2}^{+})$ for some $v\in W^{1,2}(B_{3/2}^{+})$, by taking a subsequence if necessary.
Then we have
\begin{equation}
\int_{B_{3/2}^{+}}|\nabla v|^2\,dx=1\hspace{5mm}\text{and} \hspace{5mm} x\cdot\nabla v\equiv0\hspace{2mm} \text{in}\hspace{2mm}B_{3/2}^{+}\setminus B_{1}^{+}.\notag
\end{equation}
Moreover, we verify $v\in S^m(B_{3/2}^{+})$ by using Proposition \ref{closedness} and $v=0$ on $\{x_N=0\}\cap B_{3/2}^+$ by the continuity of the trace operator.
In particular, $v$ is a weak solution of $-\Delta v = g(v)$ in $B^{+}_{3/2}$ with some $g\in C^m$.
Thanks to the $0$-homogeneity of $v$, we know that $\Delta v$ is $(-2)$-homogeneous and $g(v)$ is $0$-homogeneous. Thus we have $g(v)\equiv 0$. In particular, $v$ is a harmonic $0$-homogeneous function in $B_{3/2}^{+}\setminus{B_{1}^+}$ satisfying $v=0$ on $\{x_N=0\}\cap B_{3/2}^+$. As a consequence, $v$ attains its infimum at an interior point. Thanks to the strong maximum principle, we get $v\equiv0$ in $B_{3/2}^{+}\setminus{B_{1}^+}$. Thanks to the superharmonicity of $v$ and the strong maximum principle, we have $v\equiv0$ in $B_{3/2}^{+}$. This result contradicts $\int_{B^{+}_{3/2}}|\nabla v|^2\,dx=1$.
\end{proof}
\section{A blow-up argument}
In this section, we provide an a priori $L^\infty$ estimate of $u$ by using a blow-up argument. First, we introduce the notion of a small deformation of a half-ball.
\begin{definition}[see \cite{CFRS}] Let $\vartheta\ge 0$. We define that a domain $\Omega\subset\mathbb{R}^N$ is a $\vartheta$-deformation of $B^{+}_2$ if $\Omega = \Phi(B^{+}_2)$
for some $\Phi \in C^3(B_2;\mathbb{R}^{N})$ such that $\Phi(0)=0$, $D\Phi(0)=\rm{Id}$, and
\begin{equation}
\lVert D^2 \Phi \rVert_{L^{\infty}(B_2)} +\lVert D^3 \Phi \rVert_{L^{\infty}(B_2)}\le\vartheta, \notag
\end{equation}
where the norms of $D^2 \Phi$ and $D^3 \Phi$ are computed with respect to the operator norm.
\end{definition}
We note that given a bounded $C^3$ domain, we can cover its boundary with finitely many small balls. Then, by rescaling and rotating these balls, we can regard its boundary as a finite union of $\vartheta$-deformations of $B^{+}_2$ with $\vartheta$ sufficiently small.
Therefore, it is enough to provide an a priori $L^\infty$ bound only in the case where $\Omega$ is a small deformation of $B^{+}_2$.
The following theorem states an a priori $L^\infty$ estimate of $u$ and we prove it by modifying the proof of \cite[Theorem 6.1]{CFRS}.
\begin{theorem} [see \cite{CFRS}]
\label{blowup}
Let $3\le N \le 9$, $0\le\vartheta\le\frac{1}{100}$, $m\in \mathbb{N}$, and $\Omega\subset\mathbb{R}^N$ be a $\vartheta$-deformation of $B^+_2$. Assume that $u\in C^2(\overline{\Omega \cap B_1})$ is a nonnegative stable solution of
\begin{equation}
\label{eqx}
-\Delta u=f(u)\hspace{5mm}\text{in $\Omega\cap B_1$}\hspace{5mm}\text{and}\hspace{5mm}u=0\hspace{5mm}\text{on $\partial \Omega \cap B_1$} \notag
\end{equation}
for a positive, nondecreasing, $m$-convex, and superlinear function $f$. Then, there are some constants $\alpha=\alpha(m,N)>0$ and $C=C(m,N)>0$ such that
\begin{equation}
\lVert u \rVert_{C^{\alpha}(\overline{\Omega}\cap B_{1/2})}\le C \lVert u \rVert_{L^1(\Omega\cap B_1)}.\notag
\end{equation}
\end{theorem}
\begin{proof}
In the proof of \cite[Theorem 6.1]{CFRS}, the convexity of $f$ is used only to apply the closedness result for $S^1(\Omega)$ and a Liouville-type result.
By replacing $S^1(\Omega)$ with $S^m(\Omega)$ and using Proposition \ref{closedness} and Proposition \ref{LT}, we can prove this theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th1} and Corollary \ref{cor}]
Let $u_\lambda$ be the minimal solution of (\ref{gelfandf}) for some $\lambda<\lambda^*$ and $f\in C^m$. Since $C^m$ is closed under scaling, by applying Theorem \ref{blowup}, a covering argument, and an interior $L^{\infty}$ estimate of $u_\lambda$ (see \cite[Theorem 1.2]{CFRS}), we get
\begin{equation}
\lVert u_{\lambda} \rVert_{L^{\infty}(\Omega)}\le C \lVert u_{\lambda} \rVert_{L^1(\Omega)} \notag
\end{equation}
for some constant $C>0$ dependent only on $N$, $m$, and $\Omega$.
By applying the monotone convergence theorem, we prove Theorem \ref{th1}.
If $f\in C^2(\mathbb{R})$ satisfies (\ref{ratio}), there exists a large number $m\in \mathbb{N}$ such that $f''f+(m-1)f'^2>0$. Since
\begin{equation}
(f^m)''=mf^{m-2}((m-1)f'^2+f''f)>0,\notag
\end{equation}
the function $f$ is $m$-convex and this result follows from Theorem \ref{th1}.
\end{proof}
\section*{Acknowledgments}
The author would like to thank Associate Professor Michiaki Onodera for his valuable advice and supervision.
\bibliographystyle{plain}
\section{Introduction}
Variations in the layer stacking of quasi-two-dimensional (quasi-2D) materials can sometimes have important effects on material properties. For example, the chromium trihalides CrX$_{3}$ (X=Cl, Br, I) have interlayer magnetic coupling that changes with layer stacking \cite{klein_enhancement_2019,chen_direct_2019,li_pressure-controlled_2019}, and MoTe$_{2}$ is reported to be a Weyl semimetal in its low-temperature $T_{d}$ phase but not in its higher-temperature $1T^{\prime}$ phase \cite{sun_prediction_2015,deng_experimental_2016}. These materials are also examples where stacking changes can be conveniently induced by modifying an external parameter such as temperature \cite{clarke_low-temperature_1978,mcguire_crystal_2017}.
Unfortunately, theoretical investigation of these transitions and the stacking dependence of properties is hindered by the weakness of the interlayer van der Waals (vdW) interactions, which results in small energy differences between stacking variations and increases the precision needed for calculations.
Even in a material as simple and as frequently studied as graphite, there have been scant experimental and contradictory theoretical studies on whether the rhombohedral or Bernal stacking has a lower free energy at room temperature \cite{nery_ab-initio_2021}.
Experiments where properties are measured across stacking variations could provide much needed insight into interlayer interactions and stacking energetics.
In MoTe$_{2}$, one can switch between three different layer stacking orders by changes in temperature \cite{tao_appearance_2019,schneeloch_evolution_2020}. MoTe$_{2}$ crystallizes in the monoclinic $1T^{\prime}$ phase, which can be preserved at room temperature over the more stable 2H phase by quenching \cite{clarke_low-temperature_1978}. On cooling $1T^{\prime}$ below $\sim$280 K, disordered stacking appears, with a gradual transition into the orthorhombic $T_{d}$ phase. On warming above $\sim$260 K, $T_{d}$ abruptly transitions into the pseudo-orthorhombic $T_{d}^{*}$ phase, and further warming results in disordered stacking with a gradual transition back into the $1T^{\prime}$ phase. W substitution up to $x \sim 0.2$ results in increased transition temperatures but similar transitions \cite{schneeloch_evolution_2020}.
The interlayer interaction between neighboring layers can be thought of as a double-well potential \cite{heikes_mechanical_2018}, where the minima correspond to two stacking options, which we label ``A'' and ``B'', that are accessible by layer sliding along the $a$-axis (Fig.\ \ref{fig:1}(a,b)).
The multitude of stacking configurations accessible from $1T^{\prime}$ via temperature changes are all constructible by an A/B sequence of stacking operations \cite{schneeloch_evolution_2020,schneeloch_emergence_2019}.
For example, repeated AA...\ stacking yields $T_{d}$, AABB...\ yields $T_{d}^{*}$, and AB...\ yields $1T^{\prime}$.
Performing an inversion operation reverses the A/B stacking sequence while swapping every ``A'' with ``B'' and vice versa; for example, inversion of the $T_{d}$ twin with AA...\ stacking results in the other $T_{d}$ twin, which has BB...\ stacking.
Thus, for $T_{d}$, the A and B stacking operations are symmetry-equivalent, and this statement can be extended to all A/B stacking sequences under the assumption of identical and centrosymmetric layers \cite{schneeloch_evolution_2020}. (This assumption is justified by the fact that differences in the intralayer positioning of atoms between, e.g., $1T^{\prime}$-MoTe$_{2}$ and $T_d$-MoTe$_{2}$ are $\lesssim$ 0.5\% of the lattice constants, as seen from reported coordinates in, e.g., Ref.\ \cite{heikes_mechanical_2018}.) Thus, to a first approximation, we should expect interlayer vibrational coupling between neighboring layers to be similar regardless of overall stacking.
The $a$-axis interlayer shear mode (ISM) has been studied for its relevance in identifying the $T_d$ phase and in modulating its Weyl semimetal properties.
These studies include Raman spectroscopy in MoTe$_{2}$ \cite{zhang_raman_2016, ma_raman_2016, chen_activation_2016, cao_barkhausen_2018, cheon_structural_2021} and WTe$_{2}$ \cite{xiao_berry_2020, kim_determination_2016, jiang_raman_2016}, and various ultrafast spectroscopy techniques in MoTe$_{2}$ \cite{zhang_light-induced_2019,fukuda_ultrafast_2020,qi_photoinduced_2021,rivas_generation_2019} and WTe$_{2}$ \cite{he_coherent_2016,soranzio_ultrafast_2019,hein_mode-resolved_2020,drueke_observation_2021,qi_photoinduced_2021,ji_manipulation_2021,sie_ultrafast_2019}.
Raman spectroscopy, however, is limited to measuring the zone-center energy $\hbar \omega_m$ (i.e., the maximum of the ISM dispersion), and only in the $T_d$ phase (for bulk samples) is this mode Raman-active.
The ultrafast spectroscopy techniques involve firing a femtosecond light pulse at the sample, then measuring the picosecond-scale changes in the intensity of electron diffraction, reflectivity, second harmonic generation, angle-resolved photoemission spectroscopy (ARPES), etc., frequently in the form of oscillations of angular frequency $\omega_m$.
These techniques have provided much insight into the connection between the electronic topology and the structure; for instance, modulations in electronic states near the Weyl node locations with the oscillation of the interlayer shear mode in MoTe$_{2}$ have been observed via ARPES \cite{hein_mode-resolved_2020}, and a link between the Weyl fermions and relaxation dynamics of this mode has been suggested \cite{drueke_observation_2021}.
However, ultrafast spectroscopy techniques may have complications such as the fluence- and pump-frequency-dependence of observed mode frequencies \cite{sie_ultrafast_2019}.
Meanwhile, theoretical studies on MoTe$_{2}$ and WTe$_{2}$ have had wide discrepancies on properties relevant to the interlayer interactions, such as values of $\omega_{m}$ or the $a$-axis displacement between the A/B stacking options. Experimentally, $\hbar \omega_{m}$ for MoTe$_{2}$ has been reported from Raman spectroscopy as 1.61 meV (10 K, $T_d$) \cite{ma_raman_2016} or 1.56 meV (78 K, $T_d$) \cite{chen_activation_2016}, and from ultrafast spectroscopy as 1.61 meV (300 K, $1T^{\prime}$) \cite{fukuda_ultrafast_2020} and 1.74 meV ($\leq$ 240 K, $T_d$) \cite{zhang_light-induced_2019}. Density functional theory (DFT) calculations, on the other hand, have resulted in much wider variation, with values of 1.40 \cite{heikes_mechanical_2018}, 1.28 \cite{ma_raman_2016}, and 1.14 meV \cite{chen_activation_2016} for $T_d$-MoTe$_{2}$, and 1.09 \cite{ma_raman_2016} and 1.90 meV \cite{chen_activation_2016} for $1T^{\prime}$-MoTe$_{2}$. %
The elastic constant $C_{55}$ describes the resistance to shear strain in the long-wavelength limit of the ISM. For $T_d$-MoTe$_{2}$, $C_{55}$ has been calculated as 24.3 \cite{rano_ab_2020} and 3.9 GPa \cite{singh_engineering_2020}, and for $1T^{\prime}$-MoTe$_{2}$ as 2.9 GPa \cite{singh_engineering_2020}, which imply (via the linear chain model, to be discussed below) $\hbar \omega_m$ values of 3.34, 1.34, and 1.15 meV, respectively.
The layer-sliding distance $\epsilon$ between the A/B stacking options
also tends to be underestimated in DFT calculations (e.g., the calculated $\beta$ angles of $1T^{\prime}$-MoTe$_{2}$ and $1T^{\prime}$-WTe$_{2}$ in Ref.\ \cite{kim_origins_2017} are lower than the experimental values \cite{clarke_low-temperature_1978,tao_t_d_2020}.)
Inelastic neutron scattering (INS) is uniquely useful as a probe of phonons across a range of momentum transfers, and can yield insights on the interlayer phonons of Mo$_{1-x}$W$_{x}$Te$_{2}$ beyond that estimated via DFT or reported in Raman or ultrafast spectroscopy measurements.
We present inelastic neutron scattering measurements on a Mo$_{0.91}$W$_{0.09}$Te$_{2}$ crystal, measuring the ISM in the $T_d$, $T_d^*$, and $1T^{\prime}$ phases.
The phonon energies are consistent with a linear chain model (LCM), but the interlayer force constants for $T_d^*$ and $1T^{\prime}$ are, respectively, about 76(3)\% and 83(3)\% that of the $T_d$ phase.
The large change in the force constants for different stacking orders, in contrast to the minimal change in the relative positioning of neighboring layers regardless of stacking, suggests that stacking-induced electronic band structure changes may play a substantial role in the interlayer vibrational coupling.
\section{Experimental Details}
Inelastic neutron scattering was performed on a $\sim$0.6 g Mo$_{0.91}$W$_{0.09}$Te$_{2}$ crystal, labeled ``MWT1'' and measured in previous neutron scattering studies \cite{tao_appearance_2019, schneeloch_evolution_2020}. The W fraction in Mo$_{1-x}$W$_{x}$Te$_{2}$ was estimated to be $x \approx 0.09(1)$ from the interlayer spacing obtained from the position of the $(004)$ peak in neutron scattering measurements, roughly consistent with the $x \approx 0.06(1)$ value obtained via energy-dispersive x-ray spectroscopy measurements of the surface. A second $\sim$0.1 g crystal, labeled MT2 and having composition Mo$_{1-x}$W$_{x}$Te$_{2}$ with $x \leq 0.01$ \cite{tao_appearance_2019,schneeloch_evolution_2020}, was used for a single measurement. MWT1 and MT2 were grown from a Te flux; details can be found in Ref.\ \cite{schneeloch_evolution_2020,tao_appearance_2019}.
Cold-neutron triple axis spectrometer measurements were performed at the CTAX instrument at the High Flux Isotope Reactor of Oak Ridge National Laboratory, and on the SPINS instrument at the NIST Center for Neutron Research at the National Institute of Standards and Technology. Final neutron energy was fixed at 4.5 and 5.0 meV for CTAX and SPINS, respectively. The collimations were 48$^{\prime}$-40$^{\prime}$-S-40$^{\prime}$-120$^{\prime}$ for CTAX and open-80$^{\prime}$-S-80$^{\prime}$-open for SPINS. For CTAX, a Be filter was used after the sample. For SPINS, Be filters were used before and after the sample. For all analyzer and monochromator crystals, the $(002)$ plane of pyrolytic graphite was used.
For simplicity, we present all data in the $T_{d}$-phase reciprocal space coordinates based on an orthorhombic unit cell with $a \approx 6.3$ \AA, $b \approx 3.47$ \AA, and $c \approx 13.8$ \AA, regardless of the phase being measured. The intensities for the data from a particular instrument share the same arbitrary units. Error bars denote a standard deviation of statistical uncertainty.
\section{Results}
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]
{Figure1.pdf}
\end{center}
\caption{(a) Crystal structure of Mo$_{1-x}$W$_{x}$Te$_{2}$, with A/B stacking options displayed. (b) Diagram of interlayer interaction energy as a function of relative displacement of neighboring layers along the $a$-axis.
(c) A depiction of the dispersion along $(2,0,L)$ for the $a$-axis interlayer shear mode based on the linear chain model for a four-layer unit cell (i.e., $T_d^*$). One sub-branch of the LCM dispersion is made bold. The sets of blue circles, red squares, and green triangles each mark a particular vibrational mode on the LCM curve, and are accompanied by diagrams of the polarization of the interlayer vibrations, depicting the relative phases $(...,1,1,1,1,...)$, $(...,1,-i,-1,i,...)$, and $(...,1,-1,1,-1,...)$, respectively.
}
\label{fig:1}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]
{Figure2.pdf}
\end{center}
\caption{
Calculated inelastic neutron scattering intensity for each phase as determined by the LCM, setting $T= 270$ K and $\hbar \omega_{m}$ to the values for each phase listed in Table \ref{tab:freqAve}. The intensity is convoluted with an energy FWHM of 0.3 meV. The left (right) panel shows the intensity for the $T_d^*$/$1T^{\prime}$ twin fractions derived from elastic scans taken on the SPINS (CTAX) instrument. The blue and pink bars denote scans taken on CTAX and SPINS, respectively. The letters refer to the data sets in Fig.\ \ref{fig:3}.
}
\label{fig:2}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]
{Figure3.pdf}
\end{center}
\caption{
(a-h) Scans of inelastic neutron scattering intensity vs.\ $\hbar \omega$ taken on (a-d) CTAX and (e-h) SPINS, as labeled in Fig.\ \ref{fig:2}.
Blue and magenta curves are resolution-convoluted $S(\mathbf{Q},\omega)$ calculations. For the blue curves, intensity, twin fraction, and $\hbar \omega_m$ were allowed to vary. For magenta curves, intensities in (b-d) and (f-h) were constrained by the LCM and fitted intensities of (a) and (e); twin fractions were set to values consistent with elastic $(2,0,L)$ scans; and $\hbar \omega_m$ was set to the average values for each phase listed in Table \ref{tab:freqAve}. Dashed lines are background. Gray points are data not included in fit.
}
\label{fig:3}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]
{Figure4.pdf}
\end{center}
\caption{Comparison of linear chain model with Mo$_{0.91}$W$_{0.09}$Te$_{2}$ neutron scattering data. The data points are $\hbar \omega_m \sin(\frac{\pi}{2} q)$ plotted against $q$, where $q$ is the LCM wavevector corresponding to the branch that dominates the contribution to the intensity, and $\hbar \omega_m$ are the values obtained from fits which are shown in Table \ref{tab:maxFreq}.
The LCM curves are $\hbar \omega = \hbar \omega_{m} \sin{\frac{\pi}{2} q}$ for each phase, with $\hbar \omega_m$ given by the values in Table \ref{tab:freqAve}.
The side curves show changes in the LCM curve by a standard deviation in $\hbar \omega_m$.
}
\label{fig:4}
\end{figure}
The linear chain model is often used in studying interlayer vibrational modes of quasi-2D materials, especially in the context of Raman spectroscopy measurements on few-layer crystals \cite{liang_low-frequency_2017}. The LCM represents interlayer vibrational coupling as if the layers were particles coupled by springs to their neighbors.
For an infinite chain, the dispersion is given by
\begin{equation}
\label{eq:LCM}
\hbar \omega_q = 2 \hbar \sqrt{ \frac{K_x}{\mu}} \left|\sin{\frac{\pi q}{2}}\right|,
\end{equation}
where $q$ is the LCM wavevector (scaled such that $q=1$ at the BZ boundary, with $q$ in the same r.l.u.\ as $L$), $\hbar \omega_q$ is the phonon energy, $K_x$ is the interlayer force constant for the ISM, $\mu$ is the areal mass density per layer, and $\hbar$ is Planck's constant divided by $2 \pi$. The only free parameter in this model is the ratio $K_x/\mu$.
The LCM dispersion measured by neutron scattering has complications over the $\left|\sin \frac{\pi q}{2}\right|$ form due to layers having differing orientation and in-plane positioning. To illustrate, Fig.\ \ref{fig:1} depicts the dispersion for the four-layer unit cell of $T_d^*$ along $(2,0,L)$, in which the $\left|\sin \frac{\pi q}{2}\right|$ dispersion is ``folded back'' every half-integer $L$, resulting in four different sub-branches repeated every half-integer $L$. (This dispersion can also be interpreted as joined acoustic/optic branches.)
To compute the expected phonon intensity for $T_d^*$, we employ our core LCM assumption, which is that the polarization vectors are uniform within each layer, are aligned exclusively along the $a$-axis, and have the LCM phases $\frac{1}{\sqrt{N}} e^{-i \pi l q} = \frac{1}{\sqrt{N}} e^{-i \pi l (L-L_0)}$, where $l = 0,...,N-1$ is the layer index, $N$ is the number of layers in the unit cell, and $L_0$ is a multiple of 1/2 corresponding to a $T_d^*$ Bragg peak location $(2,0,L_0)$.
The integrated intensity of a phonon peak in a constant-$\mathbf{Q}$ scan for $\hbar \omega > 0$ at temperature $T$ is proportional to $\frac{1}{\omega} |F(\mathbf{Q})|^2 (n(\omega,T) + 1)$ \cite{shirane_neutron_2002}. The quantity $n(\omega, T)$ is the Bose factor, and $F(\mathbf{Q})$ is the dynamic structure factor, given by
\begin{equation}
F(\mathbf{Q}) = \sum_j \frac{b_j}{\sqrt{m_j}} (\mathbf{Q} \cdot \mathbf{\xi}^s_j) e^{i \mathbf{Q} \cdot \mathbf{d_j}}.
\end{equation}
The index $j$ runs over each atom in the unit cell; $b_j$ are the nuclear scattering lengths; $m_j$ and $\mathbf{d}_j$ are the masses and positions for atom $j$; $s$ labels a sub-branch; and $\mathbf{\xi}^s_j$ are the phonon polarization vectors.
(We neglect the Debye-Waller factor, which is $\sim$1 in the region of interest.)
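To illustrate how the stacking sequence enters $F(\mathbf{Q})$, the following toy Python version collapses each layer to a single point scatterer, so that the $b_j/\sqrt{m_j}$ weights, the intralayer basis, and thermal factors all drop out and only the stacking-dependent interlayer interference survives. The stacking string and the per-layer shift $\delta$ are input assumptions (the example uses the AABB sequence quoted below for the $T_d^*$ twin, and $\delta=(\epsilon+1)/2$ with $\epsilon \approx 0.147$); the calculation behind Figs.\ \ref{fig:2} and \ref{fig:3} additionally includes the full atomic basis and the instrument resolution.
\begin{verbatim}
import numpy as np

def ism_modes(H, L, steps, delta=0.5735, homega_m=1.49):
    """Toy LCM sub-branch energies and |F(Q)|^2 at Q = (H, 0, L).

    L is in the T_d-style r.l.u. of the text (c = two layers); `steps`
    is a string of 'A'/'B' operations, one per layer of the stacking
    cell, where 'A' ('B') shifts the next layer by +delta (-delta)
    along a, in units of a. Each layer is one point scatterer here.
    """
    N = len(steps)
    shift = np.where(np.array(list(steps)) == 'A', delta, -delta)
    x = np.concatenate(([0.0], np.cumsum(shift[:-1])))  # a-axis offsets
    l = np.arange(N)
    out = []
    for s in range(N):  # the N folded LCM sub-branches at this L
        theta = -np.pi * L + 2 * np.pi * s / N        # interlayer phase
        energy = homega_m * abs(np.sin(theta / 2.0))  # meV, LCM form
        F = np.sum(np.exp(2j * np.pi * (H * x + s * l / N))) / np.sqrt(N)
        out.append((energy, abs(F) ** 2))
    return out

# Four-layer AABB stacking (the T_d* twin label used below):
for E, I in ism_modes(H=2, L=-2.5, steps='AABB'):
    print(f"hbar*omega ~ {E:.2f} meV   relative |F|^2 ~ {I:.3f}")
\end{verbatim}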
The expected LCM-derived INS intensity for the $T_d$, $T_d^*$, and $1T^{\prime}$ phases is shown in Fig.\ \ref{fig:2}.
The $T_d$ and $1T^{\prime}$ phases fold back every integer $L$ away from their Bragg peaks due to their two-layer unit cells, but $1T^{\prime}$ has the additional complication that the intensity for each twin is shifted along $L$ by $\pm 2 \epsilon$ due to its monoclinic symmetry, with $\epsilon$ ($\sim$ 0.147 at 320 K \cite{schneeloch_evolution_2020}) being the $a$-axis displacement between the two stacking options.
The $T_d^*$ phase also has differing INS intensity for each twin, though the dispersion overlaps since the structure is pseudo-orthorhombic. For $T_d$, meanwhile, both twins produce identical INS intensity.
\begin{table}[t]
\caption{Values of $\hbar \omega_{m}$ obtained from fitting. ``Label'' corresponds to one of the data sets in Fig.\ \ref{fig:3}, except for ``MT2'' which denotes the data set corresponding to the MT2 sample. Nominal coordinates, phase, temperature, and the instrument used are also tabulated.}
\label{tab:maxFreq}
\begin{ruledtabular}
\begin{tabular}{llllll}
label & coordinates & phase & T (K) & inst. & $\hbar \omega_{m}$ (meV) \\
\hline
(a) & (2,0,1.47) & $T_d$ & 272 & CTAX & 1.71(3)\\
& (2,0,1.46) & $T_d$ & 194 & CTAX & 1.76(6) \\
MT2 & (2,0,1.49) & $T_d$ & 232 & CTAX & 1.77(9) \\
(b) & (2,0,1.25) & $T_d$ & 260 & CTAX & 1.74(5)\\
(e) & (2,0,-1.53) & $T_d$ & 270 & SPINS & 1.694(29)\\
\hline
(f) & (2,0,-3.0) & $T_d^*$ & 285 & SPINS & 1.49(4)\\
(g) & (2,0,-2.5) & $T_d^*$ & 285 & SPINS & 1.48(6)\\
\hline
(c) & (2,0,1.0) & $1T^{\prime}$ & 326 & CTAX & 1.57(3) \\
(d) & (2,0,0.79) & $1T^{\prime}$ & 326 & CTAX & 1.512(20)\\
(h) & (2,0,-2.23) & $1T^{\prime}$ & 320 & SPINS & 1.55(3)\\
& (2,0,-2.23) & $1T^{\prime}$ & 500 & SPINS & 1.472(21) \\
& (2,0,-2.23) & $1T^{\prime}$ & 600 & SPINS & 1.422(14) \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[t]
\caption{Values of $\hbar \omega_m$ for each phase, obtained from averaging within each phase the values of $\hbar \omega_m$ listed in Table \ref{tab:maxFreq}. Interlayer force constants $K_x$ and the ratios $K_x / K_x^{T_d}$ are also included.}
\label{tab:freqAve}
\begin{ruledtabular}
\begin{tabular}{llllll}
phase & $\hbar \omega_m$ (meV) & $K_x$ (10$^{19}$ N/m$^{3}$) & $K_x / K^{T_d}_x$ \\
\hline
$T_d$ & 1.709(22) & 0.919(24) & \\
$T_d^*$ & 1.486(26) & 0.694(25) & 76(3)\% \\
$1T^{\prime}$ & 1.554(25) & 0.760(24) & 83(3)\% \\
\end{tabular}
\end{ruledtabular}
\end{table}
We conducted scans of neutron scattering intensity along energy transfer $\hbar \omega$ at various points $L$ along $(2,0,L)$, as shown in Fig.\ \ref{fig:3}. (A few additional scans at different temperatures and on the MT2 crystal are shown in the Supplemental Materials \cite{supplement}.) Elastic scans along $(2,0,L)$ \cite{supplement} were taken before or after the inelastic scans to account for errors due to thermal expansion or changes in alignment. The curves in Fig.\ \ref{fig:3} show calculated $S(\mathbf{Q},\omega)$ convoluted with the instrument resolution. The blue curves are the result of fits where the overall intensity, $1T^{\prime}$/$T_d^*$ twin fraction, and dispersion maximum $\hbar \omega_m$ were allowed to vary, except for (c), in which the twin fraction was set to 100\% of the BA-stacked $1T^{\prime}$ twin. There is no obvious sign of broadening beyond the instrument resolution.
(In these calculations, we relied on computed elastic constants \cite{rano_ab_2020} to estimate the dispersion of the ISM perpendicular to the $(2,0,L)$ line. We also estimated the in-plane sample mosaic from an analysis of our elastic $(2,0,L)$ scans. Inaccuracies in these assumptions could introduce systematic errors in the fitted $\hbar \omega_m$ values, though the ratios of $\hbar \omega_m$ between the phases are largely unchanged. See the Supplemental Materials for these and other fitting details, as well as why the layer breathing longitudinal acoustic mode can be neglected \cite{supplement}.)
The fitted $\hbar \omega_m$ values are shown in Table \ref{tab:maxFreq}, and show remarkable consistency within each phase.
This consistency can be better seen in the plot of $\hbar \omega_m \sin(\frac{\pi q}{2})$ vs.\ $q$ in Fig.\ \ref{fig:4}, where $q$ is the LCM wavevector from the LCM sub-branch with the dominant contribution to the intensity.
The two $T_d$ points near $q=0.53$ (corresponding to data sets (a) and (e)) overlap, and the point near $q=0.25$ (from (b)) is also consistent with the LCM curve. The two $T_d^*$ scans result in overlapping points near $q=0.5$, and the three $1T^{\prime}$ points are all consistent with the same curve.
Averages within each phase of the fitted $\hbar \omega_m$ values are shown in Table \ref{tab:freqAve}, with $\hbar \omega_m = 1.709(22)$, 1.486(26), and 1.554(25) meV for the $T_d$, $T_d^*$, and $1T^{\prime}$ phases, respectively.
The nearly-undoped crystal MT2 in its $T_d$ phase has a value of $\hbar \omega_m = 1.77(9)$ meV, consistent with $T_d$-MWT1. (The W-fraction dependence of $\hbar \omega_m$ can be estimated assuming a linear relation from reported values on MoTe$_{2}$ and WTe$_{2}$ \cite{ma_raman_2016}, yielding an expected decrease of $\sim$0.06 meV from MoTe$_{2}$ to MWT1, consistent with observations.)
The interlayer force constants $K_x$ are also listed, and we see that the $T_d^*$ and $1T^{\prime}$ phases have values of $K_x$ which are, respectively, $\sim$76\% and 83\% that of $T_d$.
Thus, the vibrational coupling of the ISM is substantially weaker in $T_d^*$ and $1T^{\prime}$ than in $T_d$. This is remarkable considering that the $a$-axis displacement $\epsilon$ between the two stacking options is almost unchanged between $T_d$ and $1T^{\prime}$ (as can be seen from the discussion of the parameter $\delta=(\epsilon+1)/2$ in Ref.\ \cite{schneeloch_evolution_2020}).
Some temperature-induced phonon softening can be seen in our data, but the rate is far too low to account for the changes in $\hbar \omega_m$ between the phases.
From the decrease in $\hbar \omega_m$ on warming from 320 to 600 K (in the $1T^{\prime}$ phase) for data taken near $(2,0,-2.23)$, we can estimate the relative softening rate to be -3.3(7)$\cdot$10$^{-4}$ K$^{-1}$.
Softening of the interlayer phonons is expected due to the known anharmonicity of the interlayer interaction \cite{heikes_mechanical_2018}, and would be consistent with the gradual reduction with warming in the spacing between the local minima (i.e., in $\epsilon$) \cite{schneeloch_evolution_2020}.
Softening of the ISM modes has also been observed in WTe$_{2}$, where the relative change in $\omega_{m}$ per Kelvin is roughly $-4 \cdot 10^{-4}$ K$^{-1}$ within the range $0 \leq T \leq 300$ K \cite{he_coherent_2016}, a magnitude comparable to that in our data on $1T^{\prime}$-Mo$_{0.91}$W$_{0.09}$Te$_{2}$.
Interestingly, a substantially greater softening was seen for the layer-breathing longitudinal acoustic mode in thin film MoTe$_{2}$, at $-2.0(1) \cdot 10^{-3}$ K$^{-1}$ \cite{rivas_generation_2019}.
In any case, a softening rate of the magnitude seen from 320 to 600 K is insufficient to explain the energy difference in the phonons between the $T_d$ and $T_d^*$ phases. If the relative rate were, say, $-4 \cdot 10^{-4}$ K$^{-1}$, we would only expect $\hbar \omega_{m}$ to decrease by about 0.01 meV from 270 to 285 K, or 0.03 meV from 270 to 320 K, far less than the 0.22 and 0.16 meV differences seen between $T_d$ and the other two phases. Thus, it is clear that the large changes in the interlayer force constant are due to changes in stacking.
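As a minimal numerical version of this estimate (treating the $-4\cdot10^{-4}$~K$^{-1}$ figure as a fractional rate applied to the $T_d$ value of $\hbar\omega_m$, the more generous reading):
\begin{verbatim}
rate = 4e-4          # assumed fractional softening per kelvin
homega_Td = 1.709    # meV, averaged T_d value
for dT, diff, label in [(15, 0.22, "270->285 K vs T_d - T_d*"),
                        (50, 0.16, "270->320 K vs T_d - 1T'")]:
    print(f"{label}: ~{rate * dT * homega_Td:.3f} meV "
          f"vs {diff} meV observed")
\end{verbatim}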
Such abrupt changes in $\hbar \omega_m$ can also be seen in Raman spectroscopy data on 22 and 155 nm thick MoTe$_{2}$ crystals
\cite{cao_barkhausen_2018}. (We note that, for the $T_d^*$ phase, \emph{two} interlayer force constants are allowed by symmetry in the LCM, but we expect little difference in intensity from a single-spring-constant model with an average value $K_x = \sqrt{K^1_x K^2_x}$, even if $K_x^1$ and $K_x^2$ differed by $\sim$20\%; see Supplemental Materials for details \cite{supplement}.)
While the energies are largely consistent with the LCM, the LCM-calculated intensities are not fully consistent with the data, which is especially evident for the $1T^{\prime}$ phase. The magenta curves in Fig.\ \ref{fig:3} are the ideal LCM $S(\mathbf{Q},\omega)$, in which the intensity for each instrument was set to the value determined from the $T_d$ measurements in sets (a) and (e), and kept fixed for the remaining data sets in (b-d) and (f-h). The twin fractions were set according to an analysis of elastic $(2,0,L)$ scan intensity \cite{supplement}, and $\hbar \omega_m$ was set for each phase to the values listed in Table \ref{tab:freqAve}.
For the $T_d^*$ data, predicted intensities in the fitting ranges are somewhat greater than observed, though changes in the sample mosaic between phases could plausibly explain an overall decrease in intensity. There is a significant difference between the effective $T_d^*$ twin fractions needed to reproduce the (f) and (g) data (54(5)\% and 84(8)\% of the AABB twin, respectively), and the 70\% fraction that is consistent with the elastic data.
However, for $1T^{\prime}$, the effective twin fractions needed to reproduce the inelastic intensity ($\sim$100\%, 70(6)\%, and 81(3)\% of the BA twin for data sets (c), (d), and (h), respectively) are much different from the twin fractions consistent with the elastic data (25\%, 25\%, and 65\%), suggesting a substantial deviation from the linear chain model.
Such a deviation may be especially clear in $1T^{\prime}$ due to the twins of that phase having distinct peaks in much of the inelastic data, as opposed to the overlapping intensities of the $T_d^*$ twins.
Regardless, such a large deviation suggests that the polarization vectors deviate from our assumption of uniformity within each layer, with a significant degree of intralayer vibrational motion, even if the mode energies remain consistent with the linear chain model.
\section{Discussion}
In a way, the structure of Mo$_{1-x}$W$_{x}$Te$_{2}$ is simple: Given identical, centrosymmetric layers, the layers are stacked according to an A/B sequence which determines whether the inversion symmetry centers of each layer are displaced by $+\delta$ or $-\delta$ along the $a$-axis relative to those of the layer below.
Differences in intralayer positioning between layers are small ($\leq$0.5\% of the lattice constants, as mentioned in the Introduction), and the parameter $\delta = (\epsilon + 1)/2$ appears to be practically unchanged between $T_d$ and $1T^{\prime}$ after accounting for an overall trend of a decrease in $\delta$ (or $\epsilon$) on warming \cite{schneeloch_evolution_2020}.
Nevertheless, our results indicate a large ($\sim$20\%) change in the interlayer shear vibrational coupling $K_x$ between $T_d$ and the other two phases.
Presumably, while steric short-range interactions determine the relative $a$-axis displacement of the layers, the vibrational coupling depends strongly on the overall stacking of the layers, possibly through changes in the electronic band structure.
(There may be an interesting correlation between the interlayer vibrational coupling and the resistivity. The (in-plane) resistivity appears to jump during the $T_d$$\rightarrow$$T_d^*$ transition, while being largely unchanged on further warming into $1T^{\prime}$ \cite{tao_appearance_2019}, which mimics the trends in $K_x$.)
The possibility of the band structure determining the interlayer vibrational coupling has implications for the stacking energetics.
The free energy is a function of the vibrational and electronic band structure. However, if the interlayer vibrational coupling can be modified by $\sim$20\% by stacking changes, then the effect of stacking-dependent changes in the band structure may need to be carefully considered (i.e., with calculations precise enough to compute realistic values of $\hbar \omega_m$) before the vibrational contribution to the free energy can be properly evaluated.
Of course, with our use of the linear chain model, we have made some assumptions that should be investigated further. First, are the layers essentially identical, or are deviations in intralayer atomic positions between the layers important for the stacking energetics or other properties?
Are the intralayer vibrations that may complement the LCM modes the same in every layer?
Second, our results may hold in the bulk, but how do the properties of surface layers and thin films of Mo$_{1-x}$W$_{x}$Te$_{2}$ differ from bulk samples? It is known that the transition of MoTe$_{2}$ is broadened or suppressed for thin samples \cite{cao_barkhausen_2018,he_dimensionality-driven_2018,paul_tailoring_2020}. There is some evidence for weaker interlayer vibrational coupling for few-layer samples; the interlayer force constants $K_x$ from Raman measurements on $\leq$8-layer MoTe$_{2}$ are 0.673(11) and 0.604(15) (in units of 10$^{19}$ N/m$^{3}$) for $T_d$- and $1T^{\prime}$-MoTe$_{2}$, respectively \cite{cheon_structural_2021}, both substantially smaller than our values of 0.919(24) and 0.760(24) for bulk Mo$_{0.91}$W$_{0.09}$Te$_{2}$. Also, bilayer WTe$_{2}$ shows signs of a transition above $\sim$340 K (in the disappearance of a second harmonic generation signal \cite{fei_ferroelectric_2018}); if the intralayer positions are unchanged, then the only explanation for the arrival of inversion symmetry in a bilayer structure would be a structure with $\delta=0.5$ (analogous to the hypothetical $T_0$ phase discussed in Ref.\ \cite{huang_polar_2019}), which would require a substantial change of interlayer vibrational coupling compared to bulk samples. (A transition from $T_d$ to $1T^{\prime}$ in bulk WTe$_{2}$ has been observed near $\sim$560 and 613 K \cite{tao_t_d_2020,dahal_tunable_2020}, but the $\delta$ parameter is largely unchanged across this transition \cite{tao_t_d_2020}.)
Of course, the tendency for the transition to be suppressed due to insufficient thickness, and the gradual suppression of stacking-related diffuse scattering on either warming into $1T^{\prime}$ or cooling into $T_d$ \cite{tao_appearance_2019}, further indicate the importance of long-range interlayer interactions to the stacking energetics.
Stacking energetics are of prime importance for many quasi-2D materials, but they are still poorly understood.
Ideally, we could obtain insight from studies on graphite, which is another layered semimetal that can have multiple stacking variations, and where the relative positioning of neighboring layers is the same regardless of overall stacking.
It is curious how Bernal-stacked graphite is dominant in nature, despite the weakness of the interlayer interactions.
However, despite the attention that graphite/graphene has received and the simplicity of its structure, the energy differences between different stacking possibilities in graphite are not well understood. For example, DFT calculations have been inconsistent on whether Bernal or rhombohedral graphite has the lower free energy at room temperature \cite{charlier_first-principles_1994,anees_ab_2014, savini_bending_2011,taut_electronic_2013,nery_ab-initio_2021}.
There has been some focus on how changes in the electronic structure affect the free energy, with electronic temperature argued to be essential to determining which graphite stacking is preferred at a certain temperature \cite{nery_ab-initio_2021}. Meanwhile, the vibrational contribution to the stacking-dependence of the free energy in graphite tends to be neglected. There is some evidence that the interlayer modes of trilayer graphene are $\sim$1-2\% softer for rhombohedral-like than Bernal-like stacking \cite{lui_stacking-dependent_2015}, so it would be interesting to see how changes in the vibrational spectra with stacking affect the free energy in graphite.
Indeed, our results show that the interlayer vibrational coupling of a van der Waals layered material can change substantially between phases of different stacking.
The possible connection between the band structure and interlayer vibrational coupling may yield insight into how the transitions in Mo$_{1-x}$W$_{x}$Te$_{2}$ are effected by optical or electronic means; such means include pulses of light in ultrafast spectroscopy \cite{sie_ultrafast_2019}, an electron beam \cite{huang_polar_2019}, and an applied electric field (for few-layer WTe$_{2}$) \cite{fei_ferroelectric_2018, xiao_berry_2020}. Additionally, there are other materials that exhibit stacking transitions in few-layer films even when not seen in the bulk; such transitions can be induced with an applied electric field on bilayer hexagonal boron nitride \cite{yasuda_stacking-engineered_2021}, and with laser irradiation on trilayer graphene \cite{zhang_light-induced_2020}.
Given the difficulty of calculating properties that depend on the weak interlayer interactions, our finding that the interlayer vibrational coupling can change by $\sim$20\% between differently-stacked phases should provide insight into how stacking transitions may occur in a wide range of other systems.
It seems unusual that there is such a large change in an elastic constant (namely, $C_{55} = K_x t$, where $t$ is the interlayer spacing \cite{grzeszczyk_raman_2016}) between phases. Comparable changes have been seen in NiTi in the vicinity of its martensitic transition \cite{grabec_evolution_2021}, and graphite does have a greatly increased $C_{55}$ constant after irradiation \cite{ayasse_softening_1979}, but Mo$_{1-x}$W$_{x}$Te$_{2}$ may be unique in being a van der Waals layered system with reversible changes in $C_{55}$ of the magnitude observed. Furthermore, since a sufficiently strong applied electric field can induce stacking changes in few-layer WTe$_{2}$ \cite{fei_ferroelectric_2018,xiao_berry_2020}, it may be worth investigating if a smaller electric field can modulate the interlayer vibrational coupling, which would open an avenue of research into whether elastic properties can be modulated by electrical means.
Additionally, if changes in the band structure are responsible for the changes in the elastic constant $C_{55}$, then it may, conversely, be possible to modify the Weyl dispersion by applying a shear stress to Mo$_{1-x}$W$_{x}$Te$_{2}$.
Our results suggest a coupling between the elastic/vibrational properties and the interlayer electronic structure which should prove a fruitful avenue for future exploration.
\section{Conclusion}
We performed inelastic neutron scattering measurements to observe the $a$-axis interlayer shear mode phonons in the $T_d$, $T_d^*$, and $1T^{\prime}$ phases. The phonon peak positions were consistent with the linear chain model, though there is a substantial difference in the interlayer force constants between the phases, with the $K_x$ values of $T_d^*$ and $1T^{\prime}$ about 76(3)\% and 83(3)\% that of $T_d$.
The large change in $K_x$, in contrast to the small changes in the $\delta$ (or $\epsilon$) parameters or the intralayer positions, suggests that stacking-induced changes in the electronic band structure may be responsible for the change in vibrational properties.
\section*{Acknowledgements}
This work has been supported by the Department of Energy, Grant number
DE-FG02-01ER45927. A portion of this research used resources at the High Flux Isotope Reactor and the Spallation Neutron Source, which are DOE Office of Science User Facilities operated by Oak Ridge National Laboratory. We acknowledge the support of the National Institute of Standards and Technology, US Department of Commerce, in providing neutron research facilities used in this work.
\nocite{fobes_neutronpy_2020,zheludev_reslib_2007-2,cooper_resolution_1967,rano_ab_2020,singh_engineering_2020,tao_appearance_2019}
\section{Introduction}
Study of full QCD thermodynamics with the Kogut-Susskind quark action has
been pursued over a number of years.
A basic question for this system is the
order of chiral phase transition for light quarks. For the case of two
flavors, this question was examined by finite-size scaling studies
carried out around
1989-1990\cite{founf2,columbianf2}.
On lattices with the temporal size $N_t=4$ and the quark mass in the
range $m_q=0.025-0.01$,
it was found that the peak height of
susceptibilities increases up to a spatial lattice size $L=12$,
but stays constant within errors between $L=12$ and 16.
The conclusion then was that a phase transition is
absent down to $m_q \approx$ 0.01, which was thought consistent with the
transition being of second-order at $m_q=0$ as suggested by the sigma model
analysis\cite{sigmamodel}.
A more detailed study based on universality argument was recently
attempted\cite{karsch,karschlaermann}.
Critical exponents were extracted from the quark mass dependence of the
critical coupling and the peak height of various susceptibilities
on an $8^3 \times 4$ lattice with $m_q$=0.075, 0.0375 and 0.02.
It was found that the magnetic exponent is
in reasonable agreement with that of the $O(4)$ spin model expected
from universality arguments\cite{sigmamodel}, while the thermal
exponent shows a sizable deviation from the $O(4)$ value.
We have attempted to systematically extend the previous studies both
regarding the spatial volume dependence
and the quark mass dependence to further examine the universality nature
of the transition. For this purpose
we have carried out simulations on lattices of spatial size $L=8, 12$ and $16$
at the quark mass of $m_q=0.075, 0.0375, 0.02$ and $0.01$ in lattice units.
In this article we report on
results of scaling analyses based on these runs\cite{preliminary}.
Studies similar to ours are being carried out by other
groups\cite{laermann,toussaint}.
\section{Simulation}
The full QCD system we study is defined by the partition function
\begin{equation}
Z=\int\prod {\rm d} U_l \exp(S_g)\det(D)^{N_f/4}
\end{equation}
with $S_g$ the standard single-plaquette gauge action, and $D$ the
Kogut-Susskind quark operator. Simulations are made
on $L^3 \times 4$ lattices with $L$ = 8, 12 and 16. For the quark mass
$m_q$, we employ $m_q$ = 0.075, 0.0375, 0.02 and 0.01 for each spatial lattice
size $L$.
The hybrid $R$ algorithm\cite{hybridR} is adopted to update gauge
configurations. In Table~\ref{tab:runs},
we list the values of $\beta$ where our runs are
made. To control systematic errors of the algorithm,
we choose the molecular dynamics step size to be $\delta\tau\approx m_q/2$
as listed in Table~\ref{tab:runs}.
For each run, 10000 trajectories of unit length are generated starting
from an ordered configuration. Two runs are made for
$m_q=0.01$ on a $12^3\times 4$ lattice since the first run at $\beta=5.266$
appears to be predominantly in the low-temperature phase
(see Fig.~\ref{fig:histories} below).
Critical exponents we obtain for $L=12$ using two runs separately,
however, agree within our statistical errors.
We therefore show results obtained with the first run in this article.
Inversion of the quark operator
is made with the conjugate gradient algorithm,
reducing the number of floating point operations
by half through the even-odd decimation procedure. The stopping condition
for the even part of the source vector $b_e$ is
$\sqrt{{\vert\vert b_e-(D^\dagger Dx)_e\vert\vert^2}/3V} < 10^{-6}$
with $V$ the space-time volume
$V=L^3\times 4$.
Observables are calculated at every trajectory.
For computing average values of observables we discard the initial 2000
trajectories of each run. The errors are estimated by the Jackknife method
with a bin size of 800 trajectories.
Values of observables in the region of $\beta$ around the
simulation point are evaluated by the standard reweighting
technique\cite{reweighting}.
The numerical calculations have been performed on the Fujitsu VPP500/80
supercomputer at KEK.
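For concreteness, a minimal Python sketch of these two statistical tools is given below (the array names and the moment-by-moment treatment are ours, not the production analysis code). Since the quark mass is held fixed, only the gauge action depends on $\beta$, so the reweighting factor is $\exp[(\beta'-\beta)\,6V P]$ with $P$ the average plaquette; susceptibilities are reweighted by treating each moment (e.g., $\langle\overline{\psi}\psi\rangle$ and $\langle(\overline{\psi}\psi)^2\rangle$) separately and then combining.
\begin{verbatim}
import numpy as np

def reweight(obs, plaq, beta0, beta, n_plaq):
    """Single-histogram reweighting of a time series from beta0 to beta.

    obs, plaq : per-trajectory observable and average plaquette P
    n_plaq    : number of plaquettes, 6*V with V = L^3 * 4
    """
    obs, plaq = np.asarray(obs), np.asarray(plaq)
    logw = (beta - beta0) * n_plaq * plaq
    logw -= logw.max()          # stabilize before exponentiating
    w = np.exp(logw)
    return np.sum(w * obs) / np.sum(w)

def jackknife(series, bin_size=800, estimator=np.mean):
    """Binned jackknife error, with the 800-trajectory bins used here."""
    series = np.asarray(series)
    nbins = len(series) // bin_size
    series = series[:nbins * bin_size].reshape(nbins, bin_size)
    drops = np.array([estimator(np.delete(series, i, axis=0).ravel())
                      for i in range(nbins)])
    err = np.sqrt((nbins - 1) / nbins
                  * np.sum((drops - drops.mean()) ** 2))
    return estimator(series.ravel()), err
\end{verbatim}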
\begin{table}
\begin{center}
\caption{Parameters of our runs.}
\label{tab:runs}
\vspace*{2mm}
\begin{tabular}{lllll}
\hline
$L$&$m_q=0.075$&$0.0375$&$0.02$&$0.01$\\
&$\ \delta\tau=0.05$&0.02&0.01&0.005\\
\hline
8 &$\ \ \beta=5.35$&5.306&5.282&5.266\\
12 &\ \ \ \ \ \ \ \ 5.348&5.306&5.282&5.266\\
& & & &5.2665\\
16 &\ \ \ \ \ \ \ \ 5.345&5.306&5.282&5.266\\
\hline
\end{tabular}
\end{center}
\vspace*{-0.7cm}
\end{table}
\section{Observables}
In the course of our simulation, we measure the following susceptibilities:
\begin{eqnarray}
\chi_m&=&V\left[\langle\left(\overline{\psi}\psi\right)^2\rangle-
\langle\overline{\psi}\psi\rangle^2\right],\\
\chi_{t,f}&=&V\left[\langle \left(\overline{\psi}\psi\right)
\left(\overline{\psi}D_0\psi\right)\rangle
-\langle\overline{\psi}\psi\rangle
\langle\overline{\psi}D_0\psi\rangle\right]
\label{eq:sustebegin}\\
\chi_{t,i}&=&V\left[\langle \left(\overline{\psi}\psi\right)P_i\rangle
-\langle\overline{\psi}\psi\rangle
\langle P_i\rangle\right],\\
\chi_{e,f}&=&V\left[\langle\left(\overline{\psi}D_0\psi\right)^2\rangle-
\langle\overline{\psi}D_0\psi\rangle^2\right],\\
\chi_{e,i}&=&V\left[\langle \left(\overline{\psi}D_0\psi\right)P_i\rangle
-\langle\overline{\psi}D_0\psi\rangle
\langle P_i\rangle\right],\\
\chi_{e,ij}&=&V\left[\langle P_iP_j\rangle
-\langle P_i\rangle\langle P_j\rangle\right],
\label{eq:susteend}
\end{eqnarray}
where $D_0$ denotes the temporal component of the Dirac operator,
$i,j=\sigma, \tau$,
and $P_{\sigma,\tau}$ the spatial and temporal plaquette.
Calculation of the fermionic susceptibilities $\chi_m$, $\chi_{t,f}$
and $\chi_{e,f}$ is non-trivial because of the presence of disconnected
double quark loop contributions.
We use the volume source method without gauge
fixing\cite{kuramashi} to evaluate these susceptibilities.
Let us illustrate our procedure for $\chi_m$.
Performing quark contractions and correcting for the flavor factor
arising from the four-flavor nature of the Kogut-Susskind quark field,
we find
\begin{eqnarray}
\chi_m&=&\chi_{disc}+\chi_{conn},\\
\chi_{disc}&=&\left(\frac{N_f}{4}\right)^2\frac{1}{V}\Bigl[
\langle\left(\mbox{Tr}D^{-1}\right)^2\rangle\nonumber\\
&&\qquad\qquad\qquad -\langle\mbox{Tr}D^{-1}\rangle^2
\Bigr],\\
\chi_{conn}&=&-\frac{N_f}{4}\frac{1}{V}\sum_{x,y}\langle
D_{x,y}^{-1}D_{y,x}^{-1}\rangle.
\end{eqnarray}
Let us define the quark propagator for unit source placed at every
space-time site with a given color $b$ by
\begin{equation}
G_x^{a,b}\equiv\sum_y \left(D^{-1}\right)_{x,y}^{a,b}.
\end{equation}
From $G_x^{a,b}$, we calculate four quantities
$O_i (i=1,4)$ defined by
\begin{eqnarray}
O_1&=&\sum_{x,y}\sum_{a,b}G_x^{a,a}G_y^{b,b},\\
O_2&=&\sum_{x,y}\sum_{a,b}G_x^{a,b}G_y^{b,a},\\
O_3&=&\sum_{x}\sum_{a,b}G_x^{a,a}G_x^{b,b},\\
O_4&=&\sum_{x}\sum_{a,b}G_x^{a,b}G_x^{b,a}.
\end{eqnarray}
It is then straightforward to show that
\begin{eqnarray}
\left(\mbox{Tr}D^{-1}\right)^2&=&+\frac{9}{8}O_1-\frac{3}{8}O_2
-\frac{1}{8}O_3\nonumber\\
&&\hfill +\frac{3}{8}O_4,\\
\sum_{x,y}D_{x,y}^{-1}D_{y,x}^{-1}&=&-\frac{3}{8}O_1+\frac{9}{8}O_2
+\frac{3}{8}O_3\nonumber\\
&&\hfill -\frac{1}{8}O_4,
\end{eqnarray}
up to terms which are gauge non-invariant, and hence do not contribute
to the average over gauge configurations.
We note that $O_1$ contains connected contributions in addition to the
dominant disconnected double quark loop contribution,
and {\it vice versa} for $O_2$.
The terms $O_3$ and $O_4$ represent contact contributions in
which the source and sink points of quark coincide.
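A schematic NumPy translation of this bookkeeping is sketched below; it assumes the propagators $G_x^{a,b}$ have already been computed (one conjugate gradient solve per source color) and are stored per configuration as complex arrays of shape $(V,3,3)$. The per-configuration estimate of $\mbox{Tr}D^{-1}$ is contaminated by gauge non-invariant terms that average to zero over configurations, and all error analysis is omitted.
\begin{verbatim}
import numpy as np

def chi_m_from_volume_sources(G_list, V, Nf=2):
    """Assemble chi_m from volume-source propagators (sketch).

    G_list : iterable over configurations; each entry is G[x, a, b],
             a complex array of shape (V, 3, 3).
    """
    est_sq, est_conn, est_tr = [], [], []
    for G in G_list:
        tr = np.einsum('xaa->x', G)   # color trace of G at each site
        S = G.sum(axis=0)             # 3x3 matrix sum_x G_x
        O1 = tr.sum() ** 2
        O2 = np.trace(S @ S)
        O3 = np.sum(tr ** 2)
        O4 = np.einsum('xab,xba->', G, G)
        # gauge-averaged combinations given above:
        est_sq.append((9*O1 - 3*O2 - O3 + 3*O4) / 8)    # (Tr D^-1)^2
        est_conn.append((-3*O1 + 9*O2 + 3*O3 - O4) / 8) # sum D^-1 D^-1
        est_tr.append(tr.sum())                         # noisy Tr D^-1
    f = Nf / 4.0
    chi_disc = f**2 / V * (np.mean(est_sq) - np.mean(est_tr) ** 2)
    chi_conn = -f / V * np.mean(est_conn)
    return chi_disc + chi_conn
\end{verbatim}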
\section{Finite-size scaling analysis}
\begin{figure}[bt]
\centerline{\epsfxsize=75mm \epsfbox{fig1.epsf}}
\vspace*{-9mm}
\caption{Peak height of the chiral susceptibility $\chi_m$ as a function of
spatial volume $L^3$. For $L=12$ and $m_q=0.01$ the upper point is from the
run at $\beta=5.266$ and the lower one from $\beta=5.2665$. }
\label{fig:heightvsvolume}
\vspace*{-0.7cm}
\end{figure}
\begin{figure}[bt]
\centerline{\epsfxsize=75mm \epsfbox{fig2.epsf}}
\vspace*{-9mm}
\caption{Time history of the chiral order parameter $\overline{\psi}\psi$
for the runs with $m_q$=0.01.}
\label{fig:histories}
\vspace*{-0.7cm}
\end{figure}
We start examination of our data with an analysis of spatial volume dependence
of susceptibilities for each quark mass.
Let $\chi_m^{max}$ be the peak height of $\chi_m$ as a function of $\beta$
evaluated with the reweighting technique.
In Fig.~\ref{fig:heightvsvolume}
we plot the peak height $\chi_m^{max}$ as a function of the spatial volume.
For the heavier quark masses of $m_q=0.075$ and 0.0375 the peak height
increases little over the sizes $L=8-16$, clearly showing that
a phase transition is absent for these masses. For $m_q=0.02$
an increase of the peak height is seen
between $L=8$ and 12. The increase, however does not continue beyond
$L=$12; the peak height stays constant within errors between $L=12$ and 16.
We conclude absence of a phase transition also for $m_q=0.02$
confirming the previous work\cite{founf2,columbianf2}.
For the lightest quark mass $m_q=0.01$ employed in our simulation,
we observe a large increase of the peak height
between $L=8$ and 12. Furthermore, the increase continues up to $L=16$.
The size dependence is consistent with a linear behavior in spatial volume,
which one expects for a first-order phase transition. Other susceptibilities
exhibit a similar size dependence as the quark mass is decreased from
$m_q=0.075$ to 0.01.
This behavior contrasts with the results of a previous study\cite{columbianf2}
which found that the peak height of susceptibilities for $L=16$
stays consistent with those for $L=12$ at
$m_q\approx 0.01$\cite{founf2}.
It is likely that a smaller statistics (2500 trajectories\cite{columbianf2}
as compared to 10000 employed here)
led to an underestimate of susceptibilities in ref.~\cite{columbianf2}.
An important question is whether a linear increase seen in
Fig.~\ref{fig:heightvsvolume} could be regarded as evidence for a first-order
phase transition at $m_q=0.01$. We think that this is not so for several
reasons. Looking at the time histories
of the chiral order parameter $\overline{\psi}\psi$ shown in
Fig.~\ref{fig:histories},
we observe an apparent flip-flop behavior between two different values of
$\overline{\psi}\psi$ for $L=8$.
However, the time histories for $L=12$ and 16
are more dominated by irregular patterns, and the width of fluctuation
is smaller. These features are also reflected in the histograms.
While we clearly see a double-peak distribution for $L=8$, it is less
evident for $L=12$ and barely visible for $L=16$.
Furthermore, the width of the
distribution is smaller for larger lattice sizes and the distance
between the position of two possible peaks is narrower.
These observations suggest the possibility that the increase of
the peak height seen for $m_q=0.01$ up to $L=16$ is a transient
phenomenon due to insufficient spatial volume, similar to an increase
observed between $L=8$ and $12$ for $m_q=0.02$.
In order to check this point, we attempt to normalize the lattice volume by
a relevant length
scale, which we take to be the pion correlation length
$\xi_\pi=1/m_\pi$ at zero temperature.
Using a parametrization of available data for pion
mass as a function of $\beta$ and $m_q$ by the MILC
Collaboration\cite{milcthermo}, we
find $\xi_\pi\approx 3.0$ for $m_q=0.02$ and $\xi_\pi\approx 4.4$ for
$m_q=0.01$. Hence the size $L=8$ for $m_q=0.02$ roughly corresponds to
$L=12$ for $m_q=0.01$, and $L=12$ to $L=16$. When compared in this
correspondence the histograms for $m_q=0.02$ and 0.01 are similar in shape.
It is quite possible that
the peak height for $m_q=0.01$ levels off if measured on a larger lattice,
{\it e.g.,} $L=24$.
While a definitive conclusion has to await simulations on larger spatial
sizes, we think it likely that a first-order phase transition is
absent also at $m_q=0.01$.
\section{Analysis of quark mass dependence}
\subsection{Scaling laws and exponents}
\label{sec:scaling}
We have seen in the previous section that the spatial volume
dependence of our data do not show
clear evidence of a phase transition down to $m_q=0.01$. In the
present section we assume that the two-flavor chiral transition is of
second-order occurring at $m_q=0$. Various scaling laws follow from
this assumption for the quark mass dependence of the susceptibilities,
from which we can extract information about critical exponents.
For a given quark mass $m_q$, let $g_c^{-2}(m_q)$ be the peak position of
the chiral susceptibility $\chi_m$ as a function of the coupling constant
$g^{-2}$ and let $\chi_m^{max}(m_q)$ be the peak height.
These quantities are expected to scale toward $m_q=0$ as
\begin{eqnarray}
g_c^{-2}(m_q)&=&g_c^{-2}(0)+c_gm_q^{z_g} \label{eq:zg} \\
\chi_m^{max}(m_q)&=&c_mm_q^{-z_m}. \label{eq:zm}
\end{eqnarray}
The peak height of other susceptibilities similarly scales as
\begin{eqnarray}
\chi_{t,i}^{max}(m_q)&=&c_{t,i}\ m_q^{-z_{t,i}},\qquad i=f,\sigma,\tau\\
\chi_{e,i}^{max}(m_q)&=&c_{e,i}\ m_q^{-z_{e,i}},\qquad i=f,\sigma,\tau\\
\chi_{e,ij}^{max}(m_q)&=&c_{e,ij}\ m_q^{-z_{e,ij}},\qquad i,j=\sigma,\tau
\label{eq:zte}
\end{eqnarray}
We note that $\chi_{t,i}$ form three singular parts of the thermal
susceptibility
$\chi_t$=$V\left[\langle \left(\overline{\psi}\psi\right)\epsilon\rangle
-\langle\overline{\psi}\psi\rangle \langle \epsilon\rangle\right]$
with $\epsilon$ the energy density, and $\chi_{e,i}$ and $\chi_{e,ij}$ form
six singular parts of the specific heat
$C=V\left[\langle\epsilon^2\rangle-\langle\epsilon\rangle^2\right]$.
The leading exponents $z_t$ and $z_e$
for $\chi_t$ and $C$ are then given by
$z_t=\mbox{Max}\{z_{t,i}\}$ and
$z_e=\mbox{Max}\{z_{e,i},z_{e,ij}\}$.
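The exponents reported below are extracted from fits of these single-power forms; as an illustration, a least-squares version in log-log coordinates is sketched here (Python; the actual fits weight the points by their jackknife errors, and the synthetic peak heights in the self-test are fake numbers used only to verify the fitter).
\begin{verbatim}
import numpy as np

def fit_exponent(mq, chi_max):
    """Fit chi^max = c * mq^(-z) by least squares in log-log form."""
    slope, intercept = np.polyfit(np.log(mq), np.log(chi_max), 1)
    return -slope, np.exp(intercept)   # (z, c)

# Self-test on synthetic data with a known exponent z = 1:
rng = np.random.default_rng(0)
mq = np.array([0.075, 0.0375, 0.02, 0.01])  # quark masses of this study
chi = 0.3 * mq ** -1.0 * rng.normal(1.0, 0.05, size=4)  # fake peaks
print(fit_exponent(mq, chi))                # recovers z ~ 1 within noise
\end{verbatim}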
\begin{table}[t]
\begin{center}
\setlength{\tabcolsep}{0.2pc}
\caption{Critical exponents extracted by fits of critical coupling and
peak height of susceptibilities for fixed spatial size $L$ as compared to
$O(2), O(4)$\protect\cite{baker,guillou,kanaya} and mean-field (MF) values. }
\label{tab:exponents}
\vspace*{2mm}
\begin{tabular}{lllllll}
\hline
&$O(2)$ &$O(4)$ &MF &$L=8$ &$L=12$ &$L=16$\\
\hline
$z_g$ &0.60 &0.54 &2/3 &0.70(11)&0.74(6)&0.64(5)\\
\hline
$z_m$ &0.79 &0.79 &2/3 &0.70(4)&0.99(8)&1.03(9)\\
\hline
$z_t$ &0.39 &0.33 &1/3 \\
$z_{t,f}$ & & & &0.42(5)&0.75(9)&0.78(10)\\
$z_{t,\sigma}$ & & & &0.47(5)&0.81(10) &0.82(12)\\
$z_{t,\tau}$ & & & &0.47(5)&0.81(9) &0.83(12)\\
\hline
$z_e$ &-0.01 &-0.13 &0 \\
$z_{e,f}$ & & & &0.21(4)&0.28(7)&0.38(7)\\
$z_{e,\sigma}$ & & & &0.25(6)&0.56(11) &0.58(13)\\
$z_{e,\tau}$ & & & &0.22(6)&0.52(10) &0.55(12)\\
$z_{e,\sigma\sigma}$ & & & &0.18(5)&0.46(8) &0.43(10)\\
$z_{e,\sigma\tau}$ & & & &0.20(5)&0.51(9) &0.50(12)\\
$z_{e,\tau\tau}$ & & & &0.19(5)&0.48(9) &0.47(11)\\
\hline
\end{tabular}
\end{center}
\vspace*{-0.7cm}
\end{table}
For a second-order chiral phase transition, we expect the exponents to be
expressed in terms of the thermal and magnetic exponents $y_t$ and $y_m$;
\begin{eqnarray}
z_g&=&y_t/y_h, \\
z_m&=&2-d/y_h, \\
z_t&=&1+y_t/y_h-d/y_h, \\
z_e&=&2y_t/y_h-d/y_h.
\end{eqnarray}
Therefore two relations exist among the four exponents $z_g, z_m, z_t$ and
$z_e$, which we take to be
\begin{eqnarray}
z_g+z_m&=&z_t+1 \label{eq:consistency1}\\
2z_t-z_m&=&z_e.
\label{eq:consistency2}
\end{eqnarray}
The natural values to expect for the exponents are those of $O(2)$
corresponding to exact $U(1)$ symmetry of the Kogut-Susskind quark
action at finite lattice spacing. Sufficiently close to the continuum
limit, we may also expect the $O(4)$ values as predicted by the effective
sigma model analysis.
The possibility of mean-field exponents arbitrarily close to the critical
point has also been suggested\cite{kocickogut}.
\subsection{Results for exponents}
Our results for the exponents are tabulated in Table~\ref{tab:exponents}.
The exponent $z_g$ that governs the
scaling behavior of the critical coupling $g_c^{-2}(m_q)$ is extracted
from the fit of form (\ref{eq:zg}).
We observe that $z_g$ does not have a clear size dependence within our
error of
about 10\%, and that the values are similar to $O(2)$, $O(4)$ or
mean-field predictions, also listed in the Table, within one to two standard
deviations.
\begin{figure}[bt]
\centerline{\epsfxsize=75mm \epsfbox{fig3.epsf}}
\vspace*{-9mm}
\caption{Peak height of the chiral susceptibility $\chi_m$ as a function
of $m_q$ for fixed spatial size $L$. Solid lines are fits to a single power
(\protect\ref{eq:zm}). Dashed line indicates the slope expected for
$O(2)$ and $O(4)$ exponents which are very similar.}
\label{fig:peakheight}
\vspace*{-0.7cm}
\end{figure}
Let us turn to the exponents determined from the peak height of
susceptibilities. The values in Table~\ref{tab:exponents} are
extracted by fits employing a scaling behavior with a single
power as given in (\ref{eq:zm}--\ref{eq:zte}).
In Fig.~\ref{fig:peakheight} we illustrate the fit for the quark mass
dependence of the peak height of the chiral susceptibility $\chi_m$.
We observe in Table~\ref{tab:exponents}
that all the exponents $z_m, z_t$ and $z_e$
increase as we increase the spatial lattice size $L$.
The value of $z_m$ for the smallest size $L=8$ is not so different from
the $O(2)$ and $O(4)$ values. It deviates from the theoretical
prediction for $L$ = 12 and 16, however, and takes a value about 20 \%
larger, which amounts to a two standard deviation difference.
For $z_t$ and $z_e$ various susceptibilities defined
in (\ref{eq:sustebegin})-(\ref{eq:susteend}) generally give consistent
results.
We observe, however, a $10-20$\% larger value of $z_t$ compared
with the theoretical prediction already for $L=8$, and the discrepancy
increases to a factor two difference for $L=12$ and 16.
The disagreement is more apparent for the exponent $z_e$ for which
values in the range $z_e\approx 0.5-0.6$ are obtained for larger sizes
in contrast to a negative value for the $O(2)$ and $O(4)$ theories.
\begin{figure}[bt]
\centerline{\epsfxsize=75mm \epsfbox{fig4.epsf}}
\vspace*{-9mm}
\caption{Consistency check of exponents for a given spatial size $L$;
$z_t+1$ against $z_g+z_m$, and $z_e$ against $2z_t-z_m$. Lines are predictions
for $O(2)$ symmetry. Values for $O(4)$ are similar.}
\label{fig:consistency}
\vspace*{-0.7cm}
\end{figure}
We have noted in Sec.~\ref{sec:scaling} that the four exponents $z_g, z_m, z_t$
and $z_e$ should satisfy two consistency equations
reflecting the fact
that two relevant operators govern a second-order phase transition.
In Fig.~\ref{fig:consistency} we plot the two sides of the equations
(\ref{eq:consistency1}) and (\ref{eq:consistency2}) using the
values of exponents given in Table~\ref{tab:exponents}.
For $z_t$ and $z_e$ we take an average over operator combinations since
the values are mutually in agreement within the error.
We observe that the consistency is well satisfied
for each spatial volume even though values of individual exponents deviate from
those of $O(2), O(4)$ or mean-field theory predictions.
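A direct numerical check of the two relations, using the $L=16$ entries of Table~\ref{tab:exponents}, is sketched below (Python); the averaging of the $z_t$ and $z_e$ entries and the error combination are deliberately crude, ignoring correlations among the susceptibilities.
\begin{verbatim}
import numpy as np

zg, zm = (0.64, 0.05), (1.03, 0.09)     # L = 16 entries
zt_all = [(0.78, 0.10), (0.82, 0.12), (0.83, 0.12)]
ze_all = [(0.38, 0.07), (0.58, 0.13), (0.55, 0.12),
          (0.43, 0.10), (0.50, 0.12), (0.47, 0.11)]

def avg(pairs):  # mean value; conservative error = mean of errors
    return (np.mean([p[0] for p in pairs]),
            np.mean([p[1] for p in pairs]))

zt, ze = avg(zt_all), avg(ze_all)
print("z_g + z_m  = %.2f(%.2f)  vs  z_t + 1 = %.2f(%.2f)"
      % (zg[0] + zm[0], np.hypot(zg[1], zm[1]), zt[0] + 1, zt[1]))
print("2z_t - z_m = %.2f(%.2f)  vs  z_e     = %.2f(%.2f)"
      % (2*zt[0] - zm[0], np.hypot(2*zt[1], zm[1]), ze[0], ze[1]))
\end{verbatim}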
We have also attempted fits allowing for a constant term in the fitting
function $\chi_i^{max}=c_{0i}+c_{1i}m_q^{-z_i}$. We are not able to
obtain reliable fits taking $z_i$ as a free parameter,
since the errors of fitted values are too large. Fixing the exponent
$z_i$ to the theoretical $O(2)$ or $O(4)$ values, we find that
the quality of fit
generally worsens compared with the single power fit.
In particular, the fit tends to miss the point for the smallest quark mass
$m_q=0.01$ for $L=16$.
We are led to conclude that the exponents show deviation from
$O(2)$ or $O(4)$ values, at least in the range of quark mass
$m_q=0.075-0.01$ explored in our simulation.
\subsection{Results for scaling function}
\begin{figure}[bt]
\centerline{\epsfxsize=75mm \epsfbox{fig5.epsf}}
\vspace*{-9mm}
\caption{Scaling function $F_m(x)$ normalized as
$\chi_m(g^2,m_q)\cdot (m_q/0.01)^{z_m}$ as a function of
$x=(6/g_c^2(m_q)-6/g_c^2(0))\cdot (m_q/0.01)^{-z_g}$ for $L=16$ with measured
values $z_g=0.6447, z_m=1.033, 6/g_c^2(0)=5.2353$.}
\label{fig:scalingfunction}
\vspace*{-6mm}
\end{figure}
For a second-order phase transition, the singular part of the chiral
susceptibility $\chi_m(g^2,m_q)$ is expected to scale as
\begin{equation}
\chi_m(g^2,m_q)=m_q^{-z_m}\cdot F_m(x),
\end{equation}
where $F_m(x)$ is a function of scaling variable $x$ which we take to be
\begin{equation}
x=\left(6/g_c^2(m_q)-6/g_c^2(0)\right)\cdot m_q^{-z_g}.
\end{equation}
We show in Fig.~\ref{fig:scalingfunction} estimates of the scaling function
using data for the size $L=16$. Both $F_m(x)$ and $x$ are normalized by
the values for $m_q=0.01$, and the measured values are employed for the
exponents: $z_g=0.6447$, $z_m=1.033$ and $6/g_c^2(0)=5.2353$.
Given the magnitude of statistical error which increases from 10\% to 20\%
as $m_q$ decreases from $m_q=0.075$ to $0.01$, we find scaling with respect
to quark mass to be reasonably satisfied.
We have also calculated the scaling function $F_m(x)$ using
the $O(4)$ values for
the exponents\cite{kanaya} $z_g=0.538$, $z_m=0.794$ and the value of
$6/g_c^2(0)$ obtained with a fit of $g_c^2(m_q)$ with the $O(4)$ value for
$z_g$. We find that scaling worsens. In particular the curve for the
smallest quark mass $m_q=0.01$ is too high in this case.
\section{Conclusions}
In this article we have reported results of our study of the
two-flavor chiral phase transition with the Kogut-Susskind quark action on an
$N_t=4$ lattice. Our analysis of the spatial volume dependence of the
peak height of susceptibilities confirms the absence of a phase transition
for $m_q\geq 0.02$ as reported previously\cite{founf2,columbianf2}.
At $m_q=0.01$ the peak height exhibits
an almost linear increase over the sizes
$L=8-16$ contradicting a previous work\cite{columbianf2}.
We have argued, based on an examination of fluctuations of
observables and a consideration of spatial volume normalized by the
zero-temperature pion mass, that the increase is a transient phenomenon
arising from an insufficient spatial volume.
We conclude that a first-order transition is likely to be absent also
at $m_q=0.01$.
We have also found that the quark mass dependence of susceptibilities
is consistent with a second-order transition located at $m_q=0$;
the critical exponents we have obtained satisfy required consistency
conditions, and the susceptibility $\chi_m$ reasonably scales in terms of
variable defined with the measured exponents.
However, the values of exponents
themselves deviate from either $O(2), O(4)$ or mean-field theory predictions.
Further work is needed to elucidate the universality nature of the
two-flavor chiral phase transition in finite-temperature QCD.
\section*{Acknowledgements}
This work is supported by the Supercomputer Project (No.~1) of High Energy
Accelerator Research Organization (KEK), and also in part by the Grants-in-Aid
of the Ministry of Education (Nos. 08640349, 08640350, 08640404,
08740189, 08740221).
\section{Introduction}
The (discrete) normalized Fourier transform (DFT) is a complex mapping sending input $x\in \C^n$ to $Fx\in \C^n$, where $F$ is a unitary matrix defined by
\begin{equation}\label{dft} F(k,\ell) = n^{-1/2}e^{-i2\pi k\ell/n}\ .
\end{equation}
The Walsh-Hadamard transform is a real orthogonal mapping in $\R^n$ (for $n$ an integer power of $2$) sending
an input $x$ to $Fx$, where $$F(k,\ell)=\frac 1 {\sqrt n} (-1)^{\langle [k-1], [\ell-1]\rangle}\ ,$$ with $\langle\cdot,\cdot\rangle$ denoting the dot product, and $[p]$ denoting (here only) the bit
representation of the integer $p\in \{0,\dots, n-1\}$ as a vector of $\log_2 n$ bits.
Both transformations are special (and most important) cases of abstract Fourier transforms defined with respect to corresponding Abelian groups.
The Fast Fourier Transform (FFT) of Cooley and Tukey \cite{CooleyT64} is a method for computing the DFT of $x\in \C^n$
in time $O(n\log n)$. The fast Walsh-Hadamard transform computes the Walsh-Hadamard
transform in time $O(n\log n)$. Both fast transformations perform a sequence of rotations on pairs of coordinates, and
are hence special cases of so-called linear algorithms, as defined in
\cite{Morgenstern:1973:NLB:321752.321761}.
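As a reference point for this model, the fast Walsh-Hadamard transform can be written explicitly as such a sequence of two-coordinate gates; in the sketch below (Python), each butterfly $(u,v)\mapsto((u+v)/\sqrt 2,(u-v)/\sqrt 2)$ is a rotation by $45^\circ$ followed by a reflection (a constant $-1$ gate) on the second coordinate, for $(n/2)\log_2 n$ gates in total.
\begin{verbatim}
import math

def fwht_inplace(x):
    """Normalized Walsh-Hadamard transform of x (length a power of 2),
    realized in place by 2x2 orthogonal gates on pairs of coordinates."""
    n, h, c = len(x), 1, 1.0 / math.sqrt(2.0)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                u, v = x[j], x[j + h]
                x[j], x[j + h] = c * (u + v), c * (u - v)  # one gate
        h *= 2
    return x

print(fwht_inplace([1.0, 0.0, 0.0, 0.0]))  # a column of F: all 0.5
\end{verbatim}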
The DFT is instrumental as a subroutine in fast polynomial multiplication \cite{CLRS} (chapter 30),
fast integer multiplication \cite{DeKSS13,Furer:2007:FIM:1250790.1250800}, cross-correlation and auto-correlation detection in
images and time-series (via convolution) and, as a more recent example, convolution networks for deep learning \cite{MathieuHLC14}. Both DFT and Walsh-Hadamard are useful for
fast Johnson-Lindenstrauss transform for dimensionality reduction \cite{DBLP:journals/siamcomp/AilonC09,DBLP:journals/dcg/AilonL09,DBLP:journals/talg/AilonL13,DBLP:journals/siamma/KrahmerW11} and the
related restricted
isometry property (RIP) matrix construction \cite{DBLP:journals/jacm/RudelsonV07,DBLP:journals/corr/abs-1301-0878,DBLP:journals/siamma/KrahmerW11}).
It is beyond the scope of this work to survey all uses of Fourier transforms in both theory of algorithms and in complexity.
For the sake of simplicity the reader is encouraged to assume that $F$ is the Walsh-Hadamard transform, and that by the acronym ``FFT'' we refer to the fast Walsh-Hadamard transform. The modifications required for the DFT (rather, the real embedding thereof) require a slight modification to the potential function which
we mention but do not elaborate on for simplicity. Our results nevertheless apply also to DFT.
It is not known whether $\Omega(n\log n)$ operations are necessary, and this problem is one of the most
important open problems in theoretical computer science \cite{wiki}.
It is trivial that a linear number of steps is necessary, because every input coordinate must be probed.
Papadimitriou derives in \cite{Papadimitriou:1979:OFF:322108.322118} an $\Omega(n\log n)$ lower bound for DFT over finite fields using a notion of an information flow network. It is not clear how to extend
that result to the Complex field. There have also been attempts \cite{Winograd76} to reduce the constants hiding in the upper bound
of $O(n\log n)$, while also separately counting the number of additions versus the number of multiplications (by constants).
In 1973, Morgenstern proved that if the moduli of the constants used in the computation are bounded by $1$
then the number of steps required for computing the \emph{unnormalized} Fourier transform, defined by $n^{1/2}F$ in the linear algorithm model is at least
$\frac 1 2 n\log_2 n$. He used a potential function related to matrix determinant, which makes the technique inapplicable for
deriving lower bounds for the (normalized) $F$.
Morgenstern's result also happens to imply that the transformation $\sqrt n\Id$ ($\sqrt n$ times the identity) has the same
complexity as the Fourier transform, which is not a satisfying conclusion.
Also note that stretching the input norm by a factor of $\sqrt n$ requires representing numbers of $\omega(\log n)$
bits, and it cannot be simply assumed that a multiplication or an addition over such numbers can be done in $O(1)$ time.
Ailon \cite{Ailon13} studied the complexity of the (normalized) Fourier transform in a computational model
allowing only orthogonal transformations acting on (and replacing in memory) two intermediates at each step.
He showed that at least $\Omega(n\log n)$ steps were required.
The proof was done by defining a potential function on the matrices $M^{(t)}$ defined by composing the first $t$ gates. The potential
function is simply the sum of Shannon entropy of the probability distributions defined by the squared modulus of elements in the
matrix rows. (Due to orthogonality, each row, in fact, thus defines a probability distribution).
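Concretely, if $\Phi(M)$ denotes this row-entropy potential, then $\Phi(\Id)=0$ while $\Phi(F)=n\log_2 n$ (every squared entry of the Walsh-Hadamard matrix is $1/n$), and the lower bound in \cite{Ailon13} rests on showing that each gate can change $\Phi$ by at most a constant. A small numerical illustration (Python):
\begin{verbatim}
import numpy as np

def entropy_potential(M):
    """Sum over rows of the Shannon entropy of the squared entries
    (rows of an orthogonal matrix are unit vectors, hence
    probability distributions)."""
    P = M ** 2
    P = P / P.sum(axis=1, keepdims=True)
    safe = np.where(P > 0, P, 1.0)             # avoid log(0)
    return float(np.sum(-P * np.log2(safe)))

n, H = 8, np.array([[1.0]])
while H.shape[0] < n:                  # Sylvester construction of H
    H = np.block([[H, H], [H, -H]]) / np.sqrt(2.0)
print(entropy_potential(np.eye(n)))    # 0.0
print(entropy_potential(H))            # 24.0 = n log2(n)
\end{verbatim}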
That result had two shortcomings: (i) The algorithm was assumed not to be allowed to use extra memory in addition to
the space used to hold the input. In other words, the computation was done \emph{in place}. (ii) The result was sensitive to the normalization of $F$, and was not useful in deriving
any lower bound for $\gamma F$ for $\gamma \not \in \{\pm 1\}$.
In \cite{Ailon14}, Ailon took another step forward by showing a lower bound for computing
\emph{any scaling} of the Fourier transform in a stronger model of computation which we call \emph{uniformly well conditioned}.
At each step, the algorithm can perform a nonsingular linear transformation on at most two intermediates, as long as
the matrix $M^{(t)}$ defining the composition of the first $t$ steps has condition number at most $\kappa$, for all $t$.
We remind the reader that the condition number of a matrix is defined as
the ratio between its largest and smallest (nonzero) singular values.
Otherwise stated, the result implies that if an algorithm computes the Fourier transform in time $(n\log n)/b$ for some $b>1$, then some
$M^{(t)}$ must have condition number at least $\Omega(b)$. This means that the computation output relies on
an ill conditioned intermediate step.
The result in \cite{Ailon14} made a qualitative claim about compromise of numerical stability due to ill conditioning.
\subsection{Our Contribution}
Here we establish (Theorem~\ref{thm:subspaces}) that a $b$-factor speedup of FFT
for $b=b(n)=\omega(1)$ either \emph{overflows} at $\Omega(n)$ different time steps due to $\Omega(n)$ pairwise orthogonal input directions,
or \emph{underflows} at $\Omega(n)$ different time steps, losing accuracy of order $\Omega(b)$ at $n$ orthogonal
input directions.
Note that achieving this could not be simply done by a more careful
analysis of \cite{Ailon14}, but rather requires an intricate analysis of the entropy of Fourier transform under
transformations of small trace. This analysis (Lemma~\ref{lem:Fafterproj}) is interesting in its own right.
\section{Computational Model and Notation}\label{sec:modelnotation}
We remind the reader of the computational model discussed in \cite{Ailon13,Ailon14}, which is a special case of
the linear computational model. The machine state represents a vector in $\R^\ell$ for some $\ell\geq n$,
where it initially equals the input $x\in \R^n$ (with possible padding by zeroes, in case $\ell>n$). Each step (gate)
is either a \emph{rotation} or a \emph{constant}. A rotation applies a $2$-by-$2$ rotation mapping on a pair of
machine state coordinates (rewriting the result of the mapping to the two coordinates). We remind the reader
that a $2$-by-$2$ rotation mapping is written in matrix form as $\left (\begin{matrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos\theta\end{matrix}\right )$
for some real (angle) $\theta$. A constant gate multiplies a single machine state coordinate (rewriting the result) by
a nonzero constant. In case the constant equals $-1$, we call it a reflection gate.
In case $\ell=n$ we say that we are in the in-place model. Any nonsingular linear mapping over $\R^n$ can be decomposed into a sequence of rotation and constant gates in the in-place model, and hence our model is, in a sense, universal. FFT works in the in-place model, using rotations (and possibly reflections) only. A restricted method for dealing
with $\ell>n$ was developed in \cite{Ailon14}, and can be applied here too in a certain sense (see Section~\ref{sec:future} for a discussion). We focus in this work on the in-place model only.
Since both rotations and constants apply a linear transformation on the machine state, their composition is a linear transformation. If $\A_n$ is an in-place algorithm for computing a linear mapping over $\R^n$, it
is convenient to write it as $\A_n = (M^{(0)}=\Id, M^{(1)}, \dots, M^{(m)})$ where $m$ is the number of steps (gates),
$M^{(t)}\in \R^{n\times n}$ is the mapping that satisfies that for input $x\in \R^n$ (the initial machine state),
$M^{(t)}x$ is the machine state after $t$ steps. ($\Id$ is the identity matrix). The matrix $M^{(m)}$ is the target
transformation, which will typically be $F$ in our setting. In fact, due to the scale invariance of the potential function we use, we could take $M^{(m)}$ to be any nonzero scaling
of $F$, but to reduce notation we simply assume a scaling of $1$.
For any $t\in[m]$, if the $t$'th gate is a rotation, then $M^{(t)}$ differs
from $M^{(t-1)}$ in at most two rows, and if the $t$'th gate is a constant, then $M^{(t)}$ differs from $M^{(t-1)}$
in at most one row.
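To make the model concrete, here is a short sketch (our own illustration, not taken from \cite{Ailon13,Ailon14}) of rotation and constant gates acting on the composition matrix $M^{(t)}$; as just noted, a rotation rewrites at most two rows and a constant gate rewrites one:
\begin{verbatim}
import numpy as np

def rotation_gate(M, i, j, theta):
    # Rotate machine-state coordinates i and j by angle theta;
    # equivalently, rewrite rows i and j of the composition matrix M.
    c, s = np.cos(theta), np.sin(theta)
    Mi, Mj = M[i].copy(), M[j].copy()
    M[i], M[j] = c * Mi + s * Mj, -s * Mi + c * Mj
    return M

def constant_gate(M, i, a):
    # Multiply machine-state coordinate i by a nonzero constant a.
    assert a != 0
    M[i] = a * M[i]
    return M

n = 4
M = np.eye(n)                         # M^(0) = Id
M = rotation_gate(M, 0, 1, np.pi/4)   # M^(1)
M = constant_gate(M, 2, -1.0)         # M^(2), a reflection gate
\end{verbatim}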
\subsection{Numerical Architecture}
The in-place model
implicitly assumes representation of a vector in $\R^n$ in memory using $n$ words.
A typical computer word represents
a coordinate (with respect to some fixed orthogonal basis) in the range $[-1,1]$ to within some accuracy $\eps=\Theta(1)$.\footnote{The range $[-1,1]$ is immaterial and can be replaced with any range of the form $[-a,a]$ for $a>0$.}
For sake of simplicity, $\eps$ should be thought of as $2^{-31}$ or $2^{-63}$ in modern
computers of $32$ or $64$ bit words, respectively.
To explain the difficulties in speeding up FFT on computers of fixed precision in the in-place model, we need to understand
whether (and in what sense) standard FFT is at all suitable on such machines. First, we must restrict the domain of inputs. Clearly this domain cannot be $\R^n$, because computer words can only represent
coordinates in the range $[-1,1]$, by our convention.
We consider input from an $n$-ball of radius $ \Theta(\sqrt{n})$, which we denote $\B(\Theta(\sqrt{n}))$.
An $n$-ball is invariant under orthogonal
transformations, and is hence a suitable domain.
Encoding a single coordinate of such an input might require $\omega(1)$ bits (an overflow). However, using
well known tools from high dimensional geometry,
encoding a single coordinate of a \emph{typical} input chosen randomly from $\B(\Theta(\sqrt{n}))$ requires $O(1)$ bits, fitting inside a machine word.\footnote{By ``encoding'' here we simply
mean the base-$2$ representation of the integer $\lfloor x(i)/\eps \rfloor$.}
We hence take a statistical approach and define a state of overflow as trying to encode, in some fixed memory word (coordinate), a random number of $\omega(1)$ bits
in expectation, at a fixed
time step in the algorithm. This definition allows us to avoid dealing with accommodation of integers requiring
super-constant bits and, in turn, with logical bit-operation complexity. Although the definition might seem impractical at first, it allows
us to derive very interesting information vs computational speed tradeoffs.
(In the future work Section~\ref{sec:future} we shall discuss allowing varying word sizes and its implications on complexity.)
By our definition, standard FFT for input drawn uniformly from $\B(\Theta(\sqrt n))$ does not overflow at
all, because any coordinate of the machine state at any step is tightly concentrated (in absolute value) around $\Theta(1)$.
It will be easier, however, to replace the uniform distribution on the ball with the multivariate Gaussian $\N(0,\Theta(1)\cdot\Id)$,
which is a good approximation of the former for large $n$. With this assumption, any coordinate of the standard FFT machine
state at any step follows the law $\N(0,\Theta(1))$.
By simple integration against the Gaussian measure, one can verify that the expected number of bits required to encode
such a random variable (to within fixed accuracy $\eps$) is $\Theta(1)$, hence no overflow occurs.
This input assumption together with the no-overflow
guarantee {\bf will serve as our benchmark}.
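To get a feeling for this benchmark, the following numerical sketch (ours; the variances are arbitrary) estimates the expected number of bits needed to encode a Gaussian coordinate to accuracy $\eps$, and shows the $\Theta(\log b)$ excess in the overflow regime:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps = 2.0 ** -31           # word accuracy, as in the text

def expected_bits(var, samples=10**6):
    # Average number of bits in the base-2 encoding of
    # floor(|x| / eps) for x ~ N(0, var).
    x = rng.normal(0.0, np.sqrt(var), samples)
    return np.mean(np.log2(np.abs(x) / eps + 2.0))

print(expected_bits(1.0))        # benchmark: log2(1/eps) + Theta(1) bits
print(expected_bits(2.0 ** 20))  # N(0, Omega(b)): ~10 extra bits, Theta(log b)
\end{verbatim}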
For further discussion of the numerical architecture and the definition of \emph{overflow} we refer the reader, due to lack of space, to Appendix~\ref{sec:discussion}.
\section{The Matrix Quasi-Entropy Function}\label{sec:notationentropy}
The set $\{1,\dots, q\}$ is denoted by $[q]$.
By $\R^{a\times b}$ we formally denote matrices of $a$ rows and $b$ columns.
Matrix transpose is denoted by $(\cdot)^T$.
We use $(\cdot)^{-T}$ as shorthand for $((\cdot)^{-1})^T=((\cdot)^{T})^{-1}$.
If $A\in \R^{a\times b}$ is a matrix and $I$ is a subset of $[b]$, then (borrowing from Matlab syntax) $\col{A}{I}$ is the submatrix obtained
by stacking the columns corresponding to the indices in $I$ side by side and $\row{A}{I}$ is the submatrix obtained
by stacking the rows corresponding to the indices in $I$ one on top of the other.
We shall also write, for $i\in [b]$, $\col{A}{i}$ and $\row{A}{i}$
as shorthands for $\col{A}{\{i\}}$ and $\row{A}{\{i\}}$, respectively. All logarithms are base $2$.
We slightly abuse notation and extend the definition of the quasi-entropy function $\Phi(M)$ defined on nonsingular matrices $M$ from \cite{Ailon14}, as follows. Given two matrix arguments $A,B\in \R^{a\times b}$ for some $a,b\geq 1$, $\Phi(A,B)$ is defined as
$$\sum_{i=1}^a\sum_{j=1}^b -A(i,j)B(i,j)\log |A(i,j)B(i,j)|\ .$$
This extends naturally to vectors, namely for $u,v\in \R^a$, $\Phi(u,v)$ is as above by viewing $\R^a$ as $\R^{a\times 1}$.
If $A,B\in \R^{a\times b}$ and $a,b$ are even, then we define the \emph{complex quasi-entropy} function $\Phi^\C(A,B)$ to be:
$$\sum_{i=1}^a\sum_{j=1}^{b/2} -(A(i,2j-1)B(i,2j-1) + A(i,2j)B(i,2j))\log |A(i,2j-1)B(i,2j-1) + A(i,2j)B(i,2j)|\ .$$
The function $\Phi^\C$ can be used for proving our results for the real representation of the complex DFT, which
we omit from this manuscript for simplicity.
The reason we need this modification to $\Phi$ for DFT is explained in the proof of Lemma~\ref{lem:Fafterproj}, needed by Theorem~\ref{thm:subspaces} below. Elsewhere,
we will work (for convenience and brevity) only with $\Phi$.
Abusing notation, and following \cite{Ailon14}, we define for any nonsingular matrix $M$:
$\Phi(M) := \Phi\left (M, M^{- T}\right )\ , \Phi^\C(M) := \Phi^\C\left(M, M^{-T}\right)$.
It is easy to see that $\Phi(F) = n\log n$ for the Walsh-Hadamard transform, because all matrix elements
are $\pm 1/\sqrt n$. If $F$ is a real representation of the $(n/2)$-DFT, then clearly
$\Phi^\C(F) = n\log(n/2)$, because all matrix elements of the (complex representation of the) $(n/2)$-DFT are complex unit roots times
$(n/2)^{-1/2}$.
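These values are easy to verify numerically; the following sketch (ours, assuming SciPy's \texttt{hadamard} helper and $n$ a power of two) checks the Walsh-Hadamard case:
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

def phi(A, B):
    # Phi(A,B) = sum of -A_ij B_ij log2|A_ij B_ij|, with 0 log 0 := 0.
    P = A * B
    nz = P != 0
    return -np.sum(P[nz] * np.log2(np.abs(P[nz])))

n = 64
F = hadamard(n) / np.sqrt(n)      # orthogonal, so F^{-T} = F
print(phi(F, F), n * np.log2(n))  # both print 384.0
\end{verbatim}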
It will be also useful to consider a generalization of the potential of a nonsingular matrix $M$,
by allowing linear operators acting on the rows of $M$ and $M^{-T}$, respectively. More precisely,
we will let $\Phi_{P,Q}(M)$ be shorthand for
$\Phi(MP, M^{-T}Q)$,
where $P,Q\in \R^{n\times a}$ are some mappings. (We will only be working with projection matrices $P,Q$ here).
Similarly, $\Phi_{P,Q}^\C(M) := \Phi^\C(MP, M^{-T}Q)$.
Finally, for any matrix $A \in \R^{n\times n}$, let $\sigma_1(A),\dots, \sigma_n(A)$ denote its singular values, where we use the
convention $\sigma_1(A) \geq \cdots \geq \sigma_n(A)$. If $A$ is nonsingular, then the condition number $\kappa(A)$ is defined by $\sigma_1(A)/\sigma_n(A)$.
For any matrix $A$, we let $\|A\|$ denote its spectral norm and $\|A\|_F$ its Frobenius norm. If $x$ is
a vector, then $\|x\|=\|x\|_2=\|x\|_F$. Let $\B$ denote the Euclidean unit ball in $\R^n$.
\section{Generalized Ill Conditioned Bottleneck from Speedup}
We show that if an in-place algorithm $\A_n=(M^{(0)}=\Id,\dots, M^{(m)}=F)$ speeds up FFT by a factor of $b\geq 1$, then for some $t$, $M^{(t)}$ is ill conditioned (in a generalized sense, to be explained).
This is a generalization of the main result in \cite{Ailon14}, with a simpler proof that we provide in Appendix~\ref{sec:proof:thm:main}
for the sake of completeness.
\begin{thm}\label{thm:main}
Fix $n$, and let $\A_n = \{\Id=M^{(0)}, \dots, M^{(m)}\}$ be an in-place algorithm computing some linear function in $\R^n$
and let $P,Q\in \R^{n\times n}$ be two matrices.
For any $t\in[m]$, let $\{i_t,j_t\}$ denote the set of at most two
indices that are affected by the $t$'th gate (if the $t$'th gate is a constant gate, then $i_t=j_t$, otherwise it's a rotation
acting on indices $i_t,j_t$).
Then for any $R \in [\lfloor n/2\rfloor ]$ there exists $t\in [m]$ such that
\begin{equation}\label{eq:thm:main1}
\sqrt{\left \|\row{(M^{(t)}P)}{I_t}\right \|_F^2 \left \|\row{((M^{(t)})^{-T}Q)}{I_t}\right \|_F^2} \geq \frac {R(\Phi_{P,Q}(M^{(m)})-\Phi_{P,Q}(\Id))} {m\log {2R}}\ , \\
\end{equation}
where $I_t = \bigcup_{t'=t}^{t+R-1}\{i_{t'},j_{t'}\}$.
Additionally, if $R=1$ then the $t$'th gate can be assumed to be a rotation.
In particular, if $M^{(m)}=F$ and $m=(n\log n)/b$ for some $b\geq 1$ (``$\A_n$ speeds up FFT by a factor of $b$'') and $P=Q=\Id$, then
\begin{equation}\label{eq:thm:main}
\sqrt{\left \|\row{(M^{(t)})}{I_t}\right \|_F^2 \left \|\row{((M^{(t)})^{-T})}{I_t}\right \|_F^2} \geq \frac {Rb} {\log {2R}}\ . \\
\end{equation}
\end{thm}
For the main result in this paper in the next section, we will only need the case $R=1$ of the theorem. It is worthwhile, however,
to state the case of general $R>1$ because it gives rise to a stronger notion of ill conditioning than is typically used.
Since this is not the main focus of this work, we omit the details of this discussion. Henceforth, we will only use the
theorem with $R=1$.
We discuss the implications of the theorem in the case $R=1, P=Q=\Id$. The theorem implies that
an algorithm with $m=(n\log n)/b$
must exhibit an intermediate matrix $M^{(t)}$ and a pair of indices $i_t, j_t$ such that the $t$'th gate is a rotation
acting on $i_t, j_t$ and additionally:
$$ \sqrt{\left( \|\row{M^{(t)}}{i_t}\|^2+\|\row{M^{(t)}}{j_t}\|^2\right )\left( \|\row{(M^{(t)})^{-T}}{i_t}\|^2+\|\row{(M^{(t)})^{-T}}{j_t}\|^2\right )} \geq b\ .$$
Hence, either
\begin{eqnarray*}
&(i)&\ \ \sqrt{ \|\row{M^{(t)}}{i_t}\|^2+\|\row{M^{(t)}}{j_t}\|^2}\geq \sqrt b
\mbox{\ \ \ \ \ {\bf -or-} \ \ \ \ \ } \\
&(ii)&\ \ \sqrt{ \|\row{(M^{(t)})^{-T}}{i_t}\|^2+\|\row{(M^{(t)})^{-T}}{j_t}\|^2} \geq \sqrt b\ .
\end{eqnarray*}
\paragraph{Case (i).} We can assume wlog that \begin{equation}\label{eq:overdef}\|\row{M^{(t)}}{i_t}\|^2 \geq b/2\ .\end{equation}
Let $\xover^T:=\row{M^{(t)}}{i_t}/\|\row{M^{(t)}}{i_t}\|\in \R^n$ ($\xover$ is the normalized $i_t$'th row of $M^{(t)}$, transposed).
Recall that the input $x$ is distributed according to the law $\N(0,\Theta(1)\cdot \Id)$.
The $i_t$'th coordinate just before the $t$'th gate equals $\|\row{M^{(t)}}{i_t}\| x^T \xover$, and is hence distributed
$\N(0,\Theta(\|\row{M^{(t)}}{i_t}\|^2))$.
Using (\ref{eq:overdef}), this is $\N(0,\Omega(b))$.
If $b=b(n) = \omega(1)$, then by our definition we reach overflow.
Note that it is possible
as a preprocessing step to replace $x$ with $x-(x^T \xover)\xover$ (eliminating the overflow component), and then to
reintroduce the offending component by adding $(x^T \xover)F\xover$ as a postprocessing step.
In the next section, however, we shall show that, in fact, there must be $\Omega(n)$ pairwise orthonormal directions (in input space)
that overflow at $\Omega(n)$ different time steps, so such a simple ``hack'' cannot work.
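For concreteness, the single-direction ``hack'' just described can be sketched as follows (our own illustration; \texttt{fast\_transform} is a stand-in for the hypothetical $b$-speedup algorithm):
\begin{verbatim}
import numpy as np

def fft_with_deflation(x, F, v, fast_transform):
    # Remove the overflowing component of x along the unit vector v,
    # run the fast algorithm on the deflated input, then reintroduce
    # the offending component as F v times the removed coefficient.
    c = x @ v
    return fast_transform(x - c * v) + c * (F @ v)
\end{verbatim}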
\paragraph{Case (ii).} This scenario, as the reader guesses, should be called \emph{underflow}.
In case (ii), wlog \begin{equation}\label{tytyty}\|\row{(M^{(t)})^{-T}}{i_t}\|^2 \geq b/2\ .\end{equation} Now define $\xunder^T=\row{(M^{(t)})^{-T}}{i_t}/\|\row{(M^{(t)})^{-T}}{i_t}\|\in \R^n$, and consider the orthonormal basis $u_1,\dots u_n\in \R^n$ so that $u_1=\xunder$.
For any $t'\in [m]$ (and in particular for $t'=t$):
$$ g_1 := \xunder^T x = (\xunder^T (M^{(t')})^{-1})\cdot (M^{(t')}x)\ .$$
Now notice that the $i_t$'th coordinate of $ (\xunder^T (M^{(t)})^{-1})$ has magnitude at least $\sqrt{b/2}$ by
(\ref{tytyty}) and the construction of $\xunder$. Also notice that for all $i\neq i_t$, the row $\row{M^{(t)}}{i}$ is orthogonal to $\xunder$,
by the definition of the matrix inverse. This means that coordinate $i\neq i_t$ of $M^{(t)} x$ contains no information about $g_1$.
All the information in $g_1$ is hence contained in $(M^{(t)} x)(i_t)$. More precisely, $g_1$ is given by
$ g_1 = ((M^{(t)})^{-T}\xunder)(i_t) \times (M^{(t)} x)(i_t) - e$,
where $e$ is a random variable independent of $g_1$. But $|((M^{(t)})^{-T}\xunder)(i_t)| \geq \sqrt{b/2}$,
and $(M^{(t)} x)(i_t)$ is known only up to an additive error of $\eps$, due to our assumptions on quantization in the
numerical architecture. This means that $g_1$ can only be known
up to an additive error of at least $\eps\sqrt{b/2}$, for \emph{any} value of $e$.
It is important to note that this uncertainty cannot be ``recovered'' later by the algorithm, because at any step the machine state contains all the information about the input
(aside from the input distribution prior). In other words, any information forgotten at
any step cannot be later recalled (see Figure~\ref{fig:fig} in the Appendix).
Notice that at step $0$, the input vector coordinates $x(1),\dots, x(n)$ are
represented in individual words, each of which gives rise to an uncertainty interval of width $\eps$.
So merely storing the input in memory in the standard coordinate system implies knowing its location up to
an uncertainty $n$-cube with side $\eps$, and of diameter $\eps\sqrt n$.\footnote{To be precise, we must acknowledge the prior distribution on $x$ which also provides information about its whereabouts.} An uncertainty interval of size $\eps\sqrt{b/2} = O(\eps\sqrt{\log n})$ in a single direction is therefore relatively benign.
The next section tells us, however, that the problem is amplified $\Omega(n)$-fold.
\section{Many Independent Ill Conditioned Bottlenecks}
\begin{thm}\label{thm:subspaces}
Fix $n$, and let $\A_n = \{\Id=M^{(0)}, \dots, M^{(m)}=F\}$ be an in-place algorithm computing $F$ in time $m=(n\log n)/b$ for some $b\geq 1$.
Then one of the following (i)-(ii) must hold:
\begin{itemize}
\item[(i)] (Severe Overflow) There exists an orthonormal system $v_1,\dots, v_{n'}\in \R^n$ , integers $t_1,\dots, t_{n'}\in [m]$ and $i_1,\dots, i_{n'}\in[n]$ with $n'=\Omega(n)$ such that for all $j\in[n']$,
\begin{equation}\label{pakapu}\row{M^{(t_j)}}{i_j}P_j = \alpha_j v_j\ \ \mbox{ with}\ \alpha_j=\Omega(\sqrt b) \ ,\end{equation}
where $P_j$ is projection onto the space orthogonal to $v_1,\dots, v_{j-1}$.
\item[(ii)] (Severe Underflow) There exists an orthonormal system $u_1,\dots, u_{n'}\in \R^n$, integers $t_1,\dots, t_{n'}\in [m]$ and $i_1,\dots, i_{n'}\in[n]$ with $n'=\Omega(n)$ such that for all $j\in[n']$, \begin{equation}\label{eq:severeunder}\row{(M^{(t_j)})^{-T}}{i_j}Q_j = \gamma_j u_j\ \ \mbox{ with}\ \gamma_j=\Omega(\sqrt b) \ ,\end{equation}
where $Q_j$ is projection onto the space orthogonal to $u_1,\dots, u_{j-1}$.
\end{itemize}
In both cases (i) and (ii), the gates at time $t_1,\dots t_{n'}$ are rotations, and for all $j\in [n']$ the index $i_j$ is one of the two indices affected by the corresponding rotation.
Additionally, the set $\{t_1,\dots, t_{n'}\}$ is of cardinality at least $n'/2$.
\end{thm}
The proof heavily relies on Lemma~\ref{lem:Fafterproj} (Section~\ref{sec:lemmas}) and is deferred to Appendix~\ref{sec:proof:subspaces} due to lack of space. We discuss its numerical implications, continuing the discussion
following Theorem~\ref{thm:main}.
In the severe overflow case, Theorem~\ref{thm:subspaces}
tells us that there exists an orthonormal collection $v_1,\dots, v_{n'}$ (with $n'=\Omega(n)$) in input space, such that each
$v_i$ behaves like $\xover$ from the previous section.
This means that, if the speedup factor $b$ is $\omega(1)$, we have overflow caused by a linear number of independent input components, occurring at
$\Omega(n)$ different time steps (by the last sentence in the theorem).
In the extreme
case of speedup $b=\Theta(\log n)$ (linear number of gates), this means that in a constant fraction of time steps overflow
occurs.
For the severe underflow case
we offer a geometric interpretation. The theorem tells us that there exists an orthonormal collection
$u_1,\dots, u_{n'}$ in the input space that is bad in the following sense. For each $j\in [n']$, redefine $g_j=u_j^T x$
to be the input component in direction $u_j$. Again, the
variables $g_1,\dots, g_{n'}$ are iid $\N(0,\Theta(1))$.
The first element in the series, $u_1$, can be analyzed as $\xunder$ (from the previous section)
whereby it was argued that before the $t_1$'th step, the component
$g_1 = u_1^T x$ can only be known to within an interval of width $\Omega(\gamma_1\eps)$, independently of
information from components orthogonal to $u_1$. We remind the reader that by this we mean that the \emph{width} of the interval is independent, but the
location of the interval depends smoothly (in fact, linearly) on information from orthogonal components of $x$ (see Figure~\ref{fig:fig} in the appendix).
As for $u_2,\dots,u_{n'}$: For each $j\in [n']$, let $z_j := (M^{(t_j)})^{-T}(i_j,:)$. Therefore $u_1=z_1/\|z_1\|$ and by (\ref{eq:severeunder}), for $j>1$ we
can write
$ z_j = \gamma_j u_j + h_j$,
where $h_j \in \span\{u_1,\dots, u_{j-1}\}$.
Treating $z_j/\|z_j\|$ again as $\xunder$, we
conclude that the component $(z_j/\|z_j\|)^T x $ can only be known to within an interval of size $\Omega(\eps\|z_j\|)$,
given any value of the projection of the input $x$ onto the space orthogonal to $z_j$.
We extend the list of vectors $z_1,\dots, z_{n'}$, orthonormal vectors $u_1,\dots, u_{n'}$, numbers $\gamma_1,\dots, \gamma_{n'}$ and projections $Q_1,\dots, Q_{n'}$ to size $n$ as follows.
Having defined $z_j, u_j, Q_j,\gamma_j$ for some $j\geq n'$, we inductively define $Q_{j+1}$
as projection onto the space orthogonal to $\span\{z_1,\dots, z_{j}\}=\span\{u_1,\dots, u_j\}$ and $z_{j+1}$ to be a standard basis vector such
that $\|Q_{j+1}z_{j+1}\|^2 \geq 1-j/n$. (Such a vector exists because there must exist an index $i_0\in [n]$ such that $\sum_{j'=1}^j u_{j'}(i_0)^2 \leq j/n$, by orthonormality of the collection $u_1,\dots, u_j$;
now set $z_{j+1}$ to have a unique $1$ at coordinate $i_0$ and $0$ at all other coordinates.)
We let $u_{j+1}$ be $Q_{j+1}z_{j+1}/\|Q_{j+1}z_{j+1}\|$, that is, a normalized vector pointing
to the component of $z_{j+1}$ that is orthogonal to $\span\{z_1,\dots, z_j\}=\span\{u_1,\dots, u_j\}$.
The number $\gamma_{j+1}$ is defined as $\|Q_{j+1}z_{j+1}\|$. By construction, $\gamma_{j+1} \geq \sqrt{1-j/n}$.
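The averaging step in this extension is constructive; the sketch below (ours) picks the standard basis vector $z_{j+1}$ and computes $\gamma_{j+1}$:
\begin{verbatim}
import numpy as np

def extend_basis(U):
    # U: j-by-n matrix whose rows u_1,...,u_j are orthonormal (j < n).
    # Returns a standard basis vector z with ||Qz||^2 >= 1 - j/n,
    # the next unit vector u = Qz/||Qz||, and gamma = ||Qz||, where Q
    # projects onto the orthogonal complement of span{u_1,...,u_j}.
    j, n = U.shape
    col_mass = np.sum(U ** 2, axis=0)  # sum_j' u_j'(i)^2; totals j
    i0 = int(np.argmin(col_mass))      # some coordinate has mass <= j/n
    z = np.zeros(n); z[i0] = 1.0
    Qz = z - U.T @ (U @ z)
    gamma = np.linalg.norm(Qz)
    return z, Qz / gamma, gamma

U = np.linalg.qr(np.random.default_rng(1).normal(size=(8, 8)))[0][:3]
z, u_next, gamma = extend_basis(U)
print(gamma >= np.sqrt(1 - 3/8))       # True
\end{verbatim}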
The above extends the partial construction arising from the severe underflow to a full basis,
with the following property:
\begin{prop}\label{prop:ppppp}
For any $j\in [n]$, even given exact knowledge of the projection $\tilde x$ of $x$ onto the space orthogonal to $z_j$,
the quantity $x^T(z_j/\|z_j\|)$ upon termination of the algorithm can only be known to within an interval
of the form $[s,s+\eps\|z_j\|]$ where $s$ depends smoothly (in fact, linearly) on $\tilde x$.
\end{prop}
The proposition is simply a repetition of the analysis done for $\xunder$ in the previous section. For $j>n'$ it is a simple consequence of the
fact that upon initialization of the algorithm with input $x$, each coordinate of $x$ (and in particular $x^Tz_{j}$) is stored
in a single machine word, while all other machine words store information independent of $x^T z_j$. Hence the uncertainty of width $\eps\|z_j\|=\eps$.
What do we know about $x$ upon termination of the algorithm? As stated earlier, any information that
was lost during execution, cannot be later recovered.
Let $\I$ denote the set of possible inputs, given the information that we are left with upon
termination.
Consider the projection $Q_2$ onto the space orthogonal to $u_1=z_1/\|z_1\|$, as
a function defined over $\I$. Let $\I_2 = Q_2 \I$ denote its image. The preimage of any point $w\in \I_2$ must contain a line segment
of length at least $\eps \gamma_1$ parallel to $u_1$, due to the uncertainty in $x^Tu_1$. Hence the volume of $\I$
is at least $\eps\gamma_1$ times the $(n-1)$-volume of $\I_2$.\footnote{We need to be precise about measurability,
but this is a simple technical point from the fact that the interval endpoint depends smoothly on the projection, as claimed in Proposition~\ref{prop:ppppp}.}
Continuing inductively, we lower bound the $(n-j+1)$-volume of $\I_j := Q_j \I = Q_j\I_{j-1}$ for $j>2$. Consider
the projection $Q_{j}$ as a function operating on $\I_{j-1}$, and any point $w$ in the image
$\I_j$. By definition of $Q_j$, there exists $\hat w \in \I$ such that $Q_j\hat w = w$. By proposition~\ref{prop:ppppp}, the intersection of the line ${\cal L} = \{\hat w + \eta z_j: \eta\in \R\}$ with $\I$ must contain a segment $\Delta$ of size $\eps \|z_j\|$. The projection $Q_j\Delta$
of this segment is contained in the line $Q_j{\cal L} = \{w + \eta u_j:\eta \in \R\}$. The
size of the segment is $\eps\|Q_jz_j\|=\eps \gamma_j$. This means that the $(n-j+1)$-volume
of $\I_{j}$ is at least $\eps\gamma_j$ times the $(n-j)$-volume of $\I_{j+1} = Q_{j+1}\I_j$.
Concluding, we get that the volume of $\I$ is at least $\prod_{j=1}^n \eps\gamma_j = \eps^n\prod_{j=1}^n \gamma_j$.
From the construction immediately preceding Proposition~\ref{prop:ppppp}, we get (using the fact that $n'=\Omega(n)$):
$\log \frac{\vol(\I)}{\eps^n} \geq n' \log\sqrt{b/2} + \sum_{j=n'+1}^n \log\sqrt{1-\frac{j-1}{n}} =\Omega(n\log b)$.
This tells us that the volume of uncertainty in the input (and hence, the output) of a $b$-speedup
of FFT in the in-place model is at least $b^{\Omega(n)}$ times the volume of uncertainty incurred simply by
storing the input in memory.
\section{Main Technical Lemma}\label{sec:lemmas}
The following is the most important technical lemma in this work. Roughly speaking, it tells us that
application of operators that are
close to $\Id$ to the rows of $F$ and $F^{-T}$ does not reduce the corresponding potential by much.
Similarly, assuming that $P,Q$ are PSD with spectral norm at most $1$, applying these transformations to the rows of $\Id$ does not increase the corresponding
potential by much.
\begin{lem}\label{lem:Fafterproj}
Let $P,Q \in \R^{n\times n}$ be two matrices.
Let $\hat P = \Id-P$, $\hat Q = \Id-Q$.
Then
\begin{eqnarray}
\Phi(FP, F^{-T}Q) & \geq&
n\log n - (\tr\hat P + \tr\hat Q)\log n - O\left ((\|\hat P\|_F^2+\|\hat Q\|_F^2)\log n\right ) \ . \label{eq:lem:Fafterproj1}
\end{eqnarray}
If, additionally, $P$ and $Q$ are positive semi-definite contractions, then
\begin{eqnarray}
\Phi_{P,Q}(\Id) &=& \Phi(P,Q)\leq \tr \hat P + \tr \hat Q + O\left ((\|\hat P\|_F^2+\|\hat Q\|_F^2)\log n\right ) \label{eq:lem:Fafterproj2}\ .
\end{eqnarray}
\end{lem}
The proof, deferred to Appendix~\ref{sec:proof:Fafterproj} for lack of space,
takes advantage of the smoothness of the matrices $F$ and $\Id$ (that is, almost all matrix elements have exactly the same magnitude).
This is the reason we needed to modify $\Phi$ and work with $\Phi^\C$ in the complex case: if $F$ is the real
representation of the $n/2$-DFT matrix, then it is not smooth in this sense. It does hold, though, that for any $i\in [n]$ and $j\in [n/2]$: $F(i,2j-1)^2+F(i,2j)^2=2/n$,
so the matrix is smooth only in the sense that all pairs of adjacent elements have the same norm (viewed as $\R^2$ vectors).
\section{Future Work}\label{sec:future}
Taking into account bit operation complexity, and using state-of-the-art integer multiplication algorithms
\cite{DeKSS13,Furer:2007:FIM:1250790.1250800} it can be quite easily shown that both severe overflow and severe
underflow could be resolved by allowing flexible word size, accommodating either large numbers (in the overflow case)
or increased accuracy (in the underflow case). In fact, allowing $O(\log b)$-bit words at the time steps at which overflow (or underflow)
occurs, of which there are $\Omega(n)$ many by Theorem~\ref{thm:subspaces}, suffices. Hence, this work
does not rule out the possibility of (in the extreme case of $b=\Theta(\log n)$) a Fourier transform algorithm in the in-place model using a linear number of gates, with bit operation complexity $\tilde O(n\log \log n)$, where $\tilde O()$ here
hides $\log\log \log n$ factors arising from fast integer multiplication algorithms. We conjecture that such
an algorithm does not actually exist, and leave this as the main open problem.
Another problem that was left out in this work is going beyond the in-place model. In the more general model,
the algorithm works in space $\R^{\ell}$ for $\ell>n$, where the $(\ell-n)$ extra coordinates can be assumed to be
initialized with $0$, and the first $n$ are initialized with the input $x\in \R^n$. The final matrix $M^{(m)}$ of Fourier transform algorithm $\A_n=\{\Id=M^{(0)},\dots, M^{(m)}\}$ contains $F$ as a sub matrix, so that the output $Fx$ can simply be extracted
from a subset of $n$ coordinates of $M^{(m)} x$, which can be assumed to be the first. The matrix $M^{(m)}$ (and its inverse-transpose) therefore
contains $(\ell-n)$ extra rows. The submatrix defined by the extra rows (namely, the last $\ell-n$) and the first $n$ columns were referred to in \cite{Ailon14} as the ``garbage'' part of the computation. To obtain an $\Omega(n\log n)$ computational
lower bound in the model assumed there,\footnote{In \cite{Ailon14}, the model simply assumed that all matrices $M^{(t)}$ for $t=1,\dots, m$ have bounded
condition number. Quantifying the effect of ill condition on numerical stability, overflow and underflow, was not done there. } it was necessary to show that $\Phi_{P,P}(M^{(m)})=\Omega(n\log n)$, where
$P\in \R^{\ell\times \ell}$ is projection onto the space spanned by the first $n$ standard basis vectors.\footnote{The function $\Phi_{P,Q}(M)$ was not defined in \cite{Ailon14}, and was only implicitly used.} To that end,
it was shown that such a potential lower bound held as long as spectral norm of the ``garbage'' submatrices was
properly upper bounded. That result, in fact, can be deduced as a simple outcome of Lemma~\ref{lem:Fafterproj} that was developed here.
What's more interesting is how to generalize Theorem~\ref{thm:subspaces} to the non in-place model, and more importantly
how to analyze the numerical accuracy implications of overflow and underflow to the non in-place model. Such a generalization
is not trivial and is another immediate open problem following this work.
Another interesting possible avenue is to study the complexity of Fourier transform on input $x$ for which some prior
knowledge is known. The best example is when $Fx$ is assumed sparse, for which much interesting work on the upper bound side
has been recently done by Indyk et al. (see \cite{DBLP:conf/soda/IndykKP14} and references therein).
Many algorithms use the Fourier transform as a subroutine. In certain cases (fast polynomial multiplication,
fast integer multiplication \cite{DeKSS13,Furer:2007:FIM:1250790.1250800},
fast Johnson-Lindenstrauss transform for dimensionality reduction \cite{DBLP:journals/siamcomp/AilonC09,DBLP:journals/dcg/AilonL09,DBLP:journals/talg/AilonL13,DBLP:journals/siamma/KrahmerW11} and the
related restricted
isometry property (RIP) matrix construction \cite{DBLP:journals/jacm/RudelsonV07,DBLP:journals/corr/abs-1301-0878,DBLP:journals/siamma/KrahmerW11}) the Fourier
transform subroutine is the algorithm's bottleneck. Can we use the techniques developed here to derive
lower bounds (or rather, time-accuracy tradeoffs) for those algorithms as well? Moreover, we
can ask how the implications of speeding up the Fourier transform subroutine (as derived in this work)
affect the numerical outcome of these algorithms, assuming they insist on using Fourier transform as a black box.
\bibliographystyle{plain}
In the Euclidean quantum theory physical functions in the Lorentzian
section of a complexified spacetime are assumed to be analytically
continued from some functions in the Euclidean section [1, 2]. The
non-trivial topological structure of the Euclidean section may cause
some interesting effects in the Lorentzian section. For example,
people have found that in a Lorentzian section whose corresponding
Euclidean section is periodic in the Euclidean time $\tau=it$ there
is the Hawking-Unruh effect (such as in the black hole spacetime and
the de Sitter spacetime). In such a spacetime (Here we call it the
Lorentzian Hawking-Unruh type spacetime (L-HU-spacetime), and call
the corresponding Euclidean section the Euclidean Hawking-Unruh type
spacetime (E-HU-spacetime)), observers whose worldlines are the
integral curves of $(\partial/\partial t)^a$, feel that they are in a
thermal bath with the temperature $T_0=1/\beta_0=\kappa/2\pi$, where
$\beta_0$ is the period of the Euclidean time and $\kappa$ is the
surface gravity of the event horizon [3 - 5]. In this letter I show
that in a L-HU-spacetime the temperature may be quantized with the
quanta $T_0$, which is the lowest possible temperature for thermal
equilibrium.
It is well known that the Euclidean thermal Green function
$G_T(\tau,\vec{x};\tau^\prime,\vec{x}^\prime)$ is a periodic (for bosons) or
anti-periodic (for fermions) function of $\tau$ (and $\tau^\prime$)
with the period $\beta=1/T$, where $T$ is the temperature [6, 7]. This
holds for systems with vanishing chemical potential in a stationary
spacetime or a conformally stationary spacetime [7]. It is easy to
show that $\beta$ is the {\it fundamental period} of such functions.
For example, for the scalar field,
$G_T(\tau,\vec{x};\tau^\prime,\vec{x}^\prime)
=i\,{\rm tr}[e^{-\beta H}{\cal T}(\phi(\tau,\vec{x})\phi(\tau^\prime,\vec{x}^\prime))]/{\rm tr}(e^{-\beta H})$,
where ${\cal T}$ denotes the Wick time-ordering. The requirement
$G_T(\tau,\vec{x};\tau^\prime,\vec{x}^\prime)
=G_T(\tau+\alpha,\vec{x};\tau^\prime,\vec{x}^\prime)$ ($0<\alpha\leq\beta$) leads to
$\sum_{mn}(e^{(\alpha-\beta)E_m-\alpha E_n}-e^{-\beta E_n})
\vert\langle m\vert\phi(\tau,\vec{x})\vert n\rangle\vert^2=0$
(with $H\vert m\rangle=E_m\vert m\rangle$ and
$\langle m\vert n\rangle=\delta_{mn}$)
in the limit $\tau^\prime\rightarrow\tau$ and
$\vec{x}^\prime\rightarrow\vec{x}$,
which immediately results in $\alpha=\beta$.
In an E-HU-section with the period $\beta_0$ in the Euclidean time
$\tau$, every thermal Green function for bosons should be periodic
with respect to the translation $\tau\rightarrow\tau+\beta_0$ [4, 7].
While it is also a periodic function of $\tau$ with the {\it fundamental
period} $\beta$, we must have $\beta_0=n\beta$ ($n = 1, 2, ...$)
because every doubly-periodic function reduces to a singly-periodic
function when the ratio of the two periods is real [8]. Such a
conclusion also holds for fermions since an anti-periodic function
with the period $\beta$ is also a periodic function with the period
$2\beta$, and every spinor thermal Green function for fermions should
be anti-periodic with respect to the translation
$\tau\rightarrow\tau+\beta_0$ [7]. Therefore the allowed temperature
in the Lorentzian section is
\begin{eqnarray}
T_n=nT_0,~~~n=1,2,...\nonumber
\end{eqnarray}
with $T_0=1/\beta_0=\kappa/2\pi$, which means that in the
L-HU-spacetime, the temperature should be quantized with quanta
$T_0$, and $T_0$ is the lowest possible temperature for the thermal
equilibrium.
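As an illustration, for a Schwarzschild black hole of mass $M$ the
surface gravity is $\kappa=1/4M$ (in units $G=c=\hbar=k_B=1$), so the
allowed equilibrium temperatures would be
\begin{eqnarray}
T_n=\frac{n}{8\pi M},~~~n=1,2,...\nonumber
\end{eqnarray}
i.e. the Hawking temperature and its integer multiples.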
\section{Introduction}
\label{sec:intro}
While it is not unusual in physics to contemplate mechanical systems whose configuration space is described by a curved manifold, systems, in particular point particles, with a curved {\it momentum space} have only recently received attention. Interest in momentum spaces with a non-trivial geometry was sparked within the community working on quantum aspects of gravity in the past years and can be traced back to two main lines of research: the study of classical and quantum kinematics of point particles coupled to gravity in three space-time dimensions and Planck-scale {\it deformations} of the Poincar\'e group which can accommodate a fundamental, observer independent, energy scale.
A non-trivial momentum space geometry was first explicitly suggested by 't Hooft in \cite{'tHooft:1996uc}, based on his ``polygon approach" to the kinematics of point-particles coupled to three-dimensional gravity \cite{thooft1984}. Such particles are described by conical defects of space-time with their mass proportional to the deficit angle of the cone, the amount a test vector gets rotated when transported around the tip of the cone i.e. the location of the particle. Matschull and Welling \cite{Matschull:1997du} provided the first systematic description of the phase space of such particles showing that their momenta belong to the Lie group $SL(2, \mathbb{R})$, the (double cover of the) three-dimensional Lorentz group. In a parallel work, Bais and Muller \cite{Bais:1998yn} (whose treatment was expanded and carried out in detail in \cite{Bais:2002ye}), starting from the formulation of gravity as a Chern-Simons theory of the Poincar\'e group, argued that a description of the phase space of particles coupled to the theory requires the original symmetry group to be {\it deformed} to a quantum group, known as the ``quantum double" of the Lorentz group. Here, as in \cite{Matschull:1997du}, the key ingredient in the formulation of phase space is the use of holonomies of the flat connection of the theory, i.e. elements of the Lorentz group, to describe momenta of the particles. In particular these papers provide the first evidence of the connection between quantum deformations of relativistic symmetries and group-valued momenta (here the adjective ``quantum" refers to the fact that ordinary Lie algebra and group structures are replaced by Hopf algebras also known as ``quantum groups").
Quantum deformations of the Poincar\'e algebra were first proposed in \cite{Lukierski:1991pn, Lukierski:1992dt}. The appeal of such models, in particular of the so-called $\kappa$-Poincar\'e algebra, relies on the energy scale set by the deformation parameter $\kappa$, naturally associated with a Planckian scale. These mathematical models served as a basis for the formulation of the so-called ``doubly special relativity" theories \cite{AmelinoCamelia:2000ge,AmelinoCamelia:2000mn}, in which the deformation parameter is regarded as an observer independent scale. It was soon realized \cite{KowalskiGlikman:2002ft} that the new features of these deformed symmetries can be understood in terms of a non-trivial geometry of momentum space and, in particular, it was shown in \cite{KowalskiGlikman:2004tz} that the $\kappa$-deformed momentum space in four space-time dimensions is given by a Lie group obtained from the Iwasawa decomposition of the de Sitter group $SO(4,1)$. In the last few years the non-trivial geometric properties of momentum space and the associated deformed phase spaces provided the arena for the new paradigm of ``relative locality" \cite{AmelinoCamelia:2011bm}.
In all these works the deformed phase spaces are constructed following different techniques tailored for the particular model under consideration. This is especially the case for the fundamental structures defined on such phase spaces: the Poisson brackets. In this paper we describe an approach to the construction of deformed phase spaces with group valued momenta and their Poisson structures which relies only on minimal, model independent, ingredients. Using elements of the theory of Poisson Lie groups we show how it is possible to construct such deformed phase spaces using as the only input the algebraic structure of the generators of the momentum Lie group.
\footnote{The structures that appear in the theory of Poisson-Lie groups, mainly developed in the seminal papers by Drinfeld \cite{Drinfeld:1983ky, Drinfeld:1986in} Semenov-Tian-Shansky \cite{SemenovTianShansky:1983ik, SemenovTianShansky:1985my} and the classical $r$-matrix introduced previously by Sklyanin \cite{Sklyanin:1980ij, Kulish:1980ii}, came as a classical limit of the structures that appeared in the theory of quantum integrable systems and in turn they appear in the theory of classical integrable systems. See \cite{kosmann1997} for a good review on the subject and the references within.}
Our approach has two advantages: on one side it only relies on the specification of the momentum group manifold and thus it does not depend on the details of the fundamental theories from which the model is derived; on the other side it can be easily applied to the construction of deformed phase spaces with {\it any} momentum Lie group. Moreover it provides a solid starting point for the quantization of models with curved momentum space and their associated deformed symmetries.
The paper is organized as follows: in the next Section we illustrate in detail the basics of the theory of Poisson Lie groups and Lie bi-algebras needed for our discussion. We do so in a self-contained way in order to make the tools borrowed from the mathematical literature accessible to a physics audience. In Section III we re-formulate the conventional phase space description of a relativistic (spinless) point particle using the language of Poisson Lie groups. This will set the stage for the construction, described in Section IV, of the deformed phase space associated with momenta living on the $AN(n)$ Lie group, the $n$-dimensional generalization of the momentum space associated to the $\kappa$-Poincar\'e algebra. In Section V we describe the analogous construction for a phase space with a $SL(2,\mathbbm{R})$ momentum space, the momentum space of point particles coupled to gravity in three space-time dimensions. In the following Section VI we elaborate on the description of phase space for multi-particle systems, a subject which has been object of much controversy in the literature. We conclude in Section VII with a summary of our results and an outlook for future developments.
\section{From symplectic manifolds to Poisson-Lie groups}
We begin our discussion with a review of equivalent formulations of a classical particle's phase space. We start from the conventional picture of phase space as the cotangent bundle of a configuration space equipped with a symplectic form which determines, together with a Hamiltonian and via the Poisson bracket, the dynamics of the system. We show how this familiar picture can be recast in the more abstract language of Poisson-Lie groups and associated $r$-matrices which allow a rather straightforward generalization to phase spaces which {\it do not} possess the structure of cotangent bundles and, in particular, to phase spaces in which momenta belong to a non-abelian Lie group.
\subsection{From symplectic manifolds to Poisson manifolds}
In the usual textbook formulation the states of a classical system belong to a phase space which is given by a {\it symplectic manifold}: an even dimensional differential manifold $\Gamma$ equipped with a non-degenerate closed two-form $\omega$. In most cases $\Gamma$ is the cotangent bundle of the configuration space $M$: $\Gamma = T^*M$. Classical observables are differentiable functions on phase space, i.e. they belong to $C^\infty (\Gamma)$. The dynamics is determined by a function on $\Gamma$, the Hamiltonian, and the evolution of the system is described by an integral curve of the Hamiltonian vector field $X_H$ on $\Gamma$ determined by Hamilton's equations \cite{AbrMars}. One way to describe the action of the Hamiltonian vector field $X_H$ on functions in $C^\infty(\Gamma)$ is in terms of the {\it Poisson bracket}, namely a map $\{\ \cdot\ ,\ \cdot\ \} : C^\infty(\Gamma) \times C^\infty(\Gamma) \to C^\infty(\Gamma)$ with the properties of a Lie bracket (antisymmetry and the Jacobi identity) together with a Leibniz-like property. If the Poisson bracket is non-degenerate (there is no point in $\Gamma$ at which $\{f,h\} = 0$ for all $f,h \in C^\infty(\Gamma)$) the Poisson structure is {\it symplectic}. The Hamiltonian vector field $X_f$ can be defined, for any function $f \in C^\infty(\Gamma)$, via the relation
\begin{equation}
\omega(\ \cdot\ ,X_f) = df\,,
\end{equation}
from which, the Poisson bracket can be written in terms of the symplectic form as
\begin{equation}
\{ f, g \} = - \omega(X_f, X_g) = -X_f(g)\,.
\end{equation}
The properties which characterize the Poisson bracket are determined by the symplectic form $\omega$: in particular, the antisymmetry follows from the two-form nature of $\omega$, the Jacobi identity corresponds to $d\omega = 0$, and the exterior differentiation accounts for the Leibniz-like property.
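As a simple illustration of these conventions (a standard worked example, with signs fixed by the relations above), take $\Gamma = T^*M$ with local coordinates $(q^i, p_i)$ and $\omega = dp_i \wedge dq^i$; then
\begin{equation}
X_f = \frac{\partial f}{\partial p_i}\frac{\partial}{\partial q^i} - \frac{\partial f}{\partial q^i}\frac{\partial}{\partial p_i}\,, \qquad \{f,g\} = \frac{\partial f}{\partial q^i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q^i}\,,
\end{equation}
so that, in particular, $\{q^i, p_j\} = \delta^i_j$.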
Formally we can generalize the mathematical description of the phase space from that of a symplectic manifold to a {\it Poisson manifold}. This is nothing but a pair $(\Gamma, \{\ ,\ \})$ where $\Gamma$ is a differential manifold and $\{\ ,\ \}$ a Poisson bracket with the properties specified above. The Poisson bracket can be expressed in terms of a skew-symmetric rank-two tensor $w \in T\Gamma \otimes T\Gamma$ by means of the {\it dual pairing} between a vector space and its dual $\langle \ \cdot\ ,\ \cdot\ \rangle : T\Gamma \times T^*\Gamma \to C^\infty(\Gamma)$ and its generalization to tensor fields. Thus, we define the Poisson bracket for $f,g \in C^\infty(\Gamma)$ as
\begin{equation} \label{PBbivector}
\{ f,g \} = \langle w, df \otimes dg \rangle.
\end{equation}
The skew-symmetric tensor $w$ is called {\it Poisson bi-vector} and induces a mapping from $T^*\Gamma$ to $T\Gamma$ via $\langle w,\ \cdot\ \otimes df \rangle = X_f$, which can also be expressed as $X_f = \{\ \cdot\ , f\}$ \cite{chari}. The important point to notice is that this map is not necessarily invertible, and in those cases we do not have a symplectic structure; conversely, every symplectic structure is a Poisson structure.
Our goal is to provide a consistent description of the phase space and Poisson structure of particles whose momenta live on a {\it non-abelian Lie group}. In general, for phase spaces which are (direct products of) group manifolds, i.e. where both configuration space and momentum space can be {\it curved spaces}, there is a suitable mathematical formalism to define Poisson brackets which are compatible with the group structure: the so-called Poisson-Lie groups. The Poisson structure of a Poisson-Lie group, however, is never symplectic. If one renounces the requirement of compatibility with the group structure there is still a concise mathematical way to define a symplectic Poisson structure for the Lie group, called the Heisenberg double. Below we provide the minimal concepts needed to consistently determine such a symplectic Poisson structure starting from the infinitesimal algebraic structure of the group.
\subsection{Lie groups as phase spaces}
\label{sec:poisson}
Let us start by considering a phase space $\Gamma = T \times G$, given by the Cartesian product of a $n$-dimensional Lie group {\it configuration space} $T$ and a $n$-dimensional Lie group {\it momentum space} $G$. Of course, in this case the phase space no longer bears the structure of a cotangent bundle and it is not obvious how the structures reviewed in the previous section, in particular the Poisson brackets, can be generalized. In order to see how it is possible to extend such tools we start from the Lie algebras associated to these Lie groups, $\mathfrak{t}$ for $T$ and $\mathfrak{g}$ for $G$. Denoting the generators as $\{P_\mu\}$ for $\mathfrak{t}$ and $\{X^\mu\}$ for $\mathfrak{g}$, $\mu = 0,\ldots,n-1$, the Lie brackets are
\begin{equation} \label{liebrackets}
[P_\mu, P_\nu] = d_{\mu\nu}^\sigma P_\sigma \qquad \text{and} \qquad [X^\mu, X^\nu] = c^{\mu\nu}_{\sigma} X^\sigma,
\end{equation}
where $d_{\mu\nu}^\sigma$ and $c^{\mu\nu}_\sigma$ are the structure constants of the Lie algebras.
Since $T$ and $G$ will describe, respectively, the positions and momenta of the classical system, it is useful to regard $\mathfrak{t}$ and $\mathfrak{g}$ as {\it dual vector spaces} with a dual pairing defined in terms of the basis elements as
\begin{equation} \label{dualpairing}
\langle P_\mu, X^\nu \rangle = \delta_\mu^\nu.
\end{equation}
Let us notice that such duality between $\mathfrak{t}$ and $\mathfrak{g}$ allows one to define Poisson brackets on both spaces. To see this let us consider an element $Y\in \mathfrak{t}$, since $\mathfrak{t}$ is a vector space the tangent space $T_Y\mathfrak{t} \simeq \mathfrak{t}$ is isomorphic to $\mathfrak{t}$ itself. If we take a smooth function $f\in C^{\infty}(\mathfrak{t})$ then the differential $(df)_Y: T_Y\mathfrak{t}\rightarrow \mathbb{R}$ can be seen as an element of the space $T^*_Y\mathfrak{t} \simeq \mathfrak{g}$. The Poisson bracket on $C^{\infty}(\mathfrak{t})$ is then given in terms of the commutators of $\mathfrak{g}$ by
\begin{equation}\label{LiePoiss}
\{f,g\}(Y)\equiv \langle Y, [(df)_Y, (dg)_Y]\rangle \,.
\end{equation}
In the same way the Lie brackets on $\mathfrak{t}$ determine Poisson brackets on $C^{\infty}(\mathfrak{g})$. In particular let us consider coordinate functions $f = x^{\mu}$ and $g = x^\nu$ such that $dx^\mu, dx^\nu \in \mathfrak{g}$, it is easy to see that the Lie algebra structure of $\mathfrak{g}$ induces the following Poisson bracket on these functions
\begin{equation}
\{x^{\mu}, x^{\nu}\} = c^{\mu\nu}_\sigma x^{\sigma}\,,
\end{equation}
and, analogously, the Lie algebra structure on $\mathfrak{t}$ defines a Poisson structure \mbox{$\{p_{\mu}, p_{\nu}\} = d_{\mu\nu}^{\sigma} p_{\sigma}$} on $C^{\infty}(\mathfrak{g})$
with $p_{\mu}$ coordinate functions on $\mathfrak{g}$ such that the associated differentials coincide with the generators $P_{\mu}$. Notice that, being isomorphic to Lie brackets, such brackets automatically satisfy all the required properties of a Poisson bracket, i.e. skew-symmetry and the Jacobi identity. These brackets are known in the literature as Kirillov-Kostant brackets and are the key object in the description of phase spaces in terms of {\it co-adjoint orbits} \cite{kirillov1976}.
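For instance, if $\mathfrak{g} = \mathfrak{su}(2)$ with $c^{\mu\nu}_{\sigma} = \epsilon^{\mu\nu}{}_{\sigma}$, the coordinate functions obey the familiar angular momentum brackets $\{x^{\mu}, x^{\nu}\} = \epsilon^{\mu\nu}{}_{\sigma}\, x^{\sigma}$, whose symplectic leaves, the co-adjoint orbits, are two-spheres.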
The Poisson brackets we just discussed reflect a new structure which can be defined on the Lie algebras $\mathfrak{t}$ and $\mathfrak{g}$. Indeed we can write down maps $\delta_{\mathfrak{t}}:\mathfrak{g} \rightarrow \mathfrak{g} \otimes \mathfrak{g}$ and $\delta_{\mathfrak{g}}:\mathfrak{t} \rightarrow \mathfrak{t} \otimes \mathfrak{t}$ given by
\begin{equation} \label{cocommutators}
\delta_{\mathfrak{t}}(X^\mu) = d_{\alpha\beta}^\mu X^\alpha \otimes X^\beta \qquad \text{and} \qquad \delta_{\mathfrak{g}}(P_\mu) = c^{\alpha \beta}_\mu \ P_\alpha \otimes P_\beta\,.
\end{equation}
It is easy to see that through the dual pairing \eqref{dualpairing} the functions $\delta_{\mathfrak{t}}$ and $\delta_{\mathfrak{g}}$ determine the Lie brackets of $\mathfrak{t}$ and $\mathfrak{g}$, respectively, via the relations
\begin{equation}
\delta_{\mathfrak{t}} (X^\mu) (P_\alpha, P_\beta) = \langle X^\mu , [P_\alpha , P_\beta] \rangle\,,\,\,\,\,\,\, \delta_{\mathfrak{g}} (P_\mu) (X^\alpha, X^\beta) = \langle P_\mu , [X^\alpha, X^\beta] \rangle\,,
\end{equation}
which explains why in the literature such functions are called {\it co-commutators}.
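As a simple example, anticipating the $\kappa$-deformed setting recalled in the Introduction, take for $\mathfrak{g}$ the two-dimensional Lie algebra with the single non-vanishing bracket $[X^0, X^1] = \frac{1}{\kappa}\, X^1$; then
\begin{equation}
\delta_{\mathfrak{g}}(P_0) = 0\,, \qquad \delta_{\mathfrak{g}}(P_1) = \frac{1}{\kappa}\left(P_0 \otimes P_1 - P_1 \otimes P_0\right)\,,
\end{equation}
while the associated bracket of coordinate functions is the $\kappa$-Minkowski bracket $\{x^0, x^1\} = \frac{1}{\kappa}\, x^1$.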
Besides its direct relationship with a Poisson structure on the dual space, the co-commutator defined on a Lie algebra is also linked to a Poisson structure on the associated Lie group. This is of immediate relevance for the analysis we present in this work. In order to illustrate in some detail this link, we review the basics of Poisson structures on a Lie group. The main goal will be to introduce the necessary tools to derive a formula connecting the Poisson bi-vector on the group manifold with the co-commutator on its Lie-algebra.
\subsubsection{Poisson structures on Lie groups}
The framework of Poisson-Lie groups is based on the requirement of having a Poisson structure on the group $G$ such that the group multiplication map $m:G \times G \to G$ denoted as $m(g_1,g_2) = g_1 g_2$ is a {\it Poisson map}.
This means that, in terms of the Poisson bracket decomposition for functions on the Cartesian product of spaces, the following property must hold:
\begin{multline}
\{f_1 \circ m,f_2 \circ m\}_{G \times G} (g_1,g_2) = \{f_1 \circ m(\ \cdot\ , g_2), f_2 \circ m(\ \cdot\ ,g_2)\}_G (g_1) \\
+ \{f_1 \circ m(g_1,\ \cdot\ ), f_2 \circ m(g_1,\ \cdot\ )\}_G (g_2).
\end{multline}
Such expression can be translated into a condition for the Poisson bracket on the group $\{\cdot, \cdot\}_G$ using the left and right translation maps
\begin{equation}
L_{g_1}(g_2) = g_1g_2\,,\,\,\,\, R_{g_1}(g_2)= g_2g_1\,,
\end{equation}
to become
\begin{equation} \label{PoissonProd}
\{f_1, f_2\}_G (g_1g_2) = \{f_1 \circ R_{g_2}, f_2 \circ R_{g_2} \}_G (g_1) + \{f_1 \circ L_{g_1}, f_2 \circ L_{g_1} \}_G (g_2).
\end{equation}
A Lie group equipped with a Poisson bracket satisfying such property is known as a {\it Poisson-Lie group}.
We would now like to rewrite the Poisson bracket in terms of a Poisson bi-vector. In doing so, we will be able to describe the infinitesimal version of the Poisson structure on the Lie group and to connect it with the co-commutator.
Let us focus on the right translation map $R_{g}$. We have the following maps induced by $R_{g_2}$: the pullback between cotangent spaces $R_{g_2}^*: T^*_{g_1g_2}G \to T^*_{g_1}G$ and the pushforward between tangent spaces $R_{g_2}{}_* : T_{g_1}G \to T_{g_1g_2}G$. Indeed, for a tangent vector $X \in T_{g_1}G$ and a differential $df \in T^*_{g_1g_2}G$, we have the dual pairing between elements of the dual spaces
\begin{equation} \label{dualpairingright}
\langle X, R_{g_2}^* df \rangle = \langle R_{g_2}{}_*X, df \rangle.
\end{equation}
The analogous relation for the left translation and its induced maps is
\begin{equation} \label{dualpairingleft}
\langle Y, L_{g_1}^* dh \rangle = \langle L_{g_1}{}_*Y, dh \rangle ,
\end{equation}
where $Y \in T_{g_2}G$ and $dh \in T^*_{g_1g_2}G$. The pullback and pushforward of $R_{g}$ and $L_{g}$ can be generalized to the tensor product of spaces of any rank using the above relations.
The Poisson bi-vector is then defined in terms of the dual pairing between tangent and cotangent spaces as follows
\begin{equation} \label{pbwg}
\{f_1,f_2\} (g) = \langle w_{g}, df_1 \otimes df_2|_{g} \rangle,
\end{equation}
where for notational simplicity we dropped the subscript $G$ on the Poisson bracket. It is rather straightforward to write down the analogue of condition (\ref{PoissonProd}) for the Poisson bi-vector $w_{g}$
\begin{equation} \label{bivecg1g2}
w_{g_1g_2} = R_{g_2}{}_*{}^{\otimes 2} |_{g_1} w_{g_1} + L_{g_1}{}_*{}^{\otimes 2} |_{g_2} w_{g_2},
\end{equation}
which states that a Poisson structure for a Lie group $G$ is Poisson-Lie if and only if the value of its Poisson bivector at $g_1g_2 \in G$ is the sum of the right translate by $g_2$ of its value at $g_1$ plus the left translate by $g_1$ of its value at $g_2$. Notice that for $g_1 = g_2 = e$, where $e \in G$ is the identity element of the group, we have that $w_e = 0$, therefore the rank of the Poisson structure is zero at the identity element of the Lie group, hence {\it the Poisson structure of a Poisson-Lie group is not symplectic}. It is possible, however, to define other Poisson structures for a Lie group which turn the latter into a symplectic manifold and for which the requirement of being Poisson-Lie is dropped. Since these structures are appealing from a physical standpoint, in what follows we show how the co-commutator at the Lie algebra level and the Poisson bi-vector are related. This will allow us to introduce a symplectic Poisson structure on the group and to exhibit the differences between Poisson-Lie and Poisson structures.
In order to make contact with the notion of co-commutator we focus on the right translate of the Poisson bivector $w$ to the identity element of $G$, denoted by $w^R : G \to \mathfrak{g} \otimes \mathfrak{g}$, where $\mathfrak{g}= T_e G$ is the Lie algebra of $G$. First, we note that
\begin{align} \label{wciclo1}
\langle w_g, (df_1 \otimes df_2) |_g \rangle &= \langle w^R(g), R_g^*{}^{\otimes 2} |_e (df_1 \otimes df_2) |_g \rangle, \nonumber \\
&= \langle R_g{}_*{}^{\otimes 2} |_e w^R(g), (df_1 \otimes df_2) |_g \rangle,
\end{align}
hence we have for the Poisson bivector
\begin{equation} \label{wciclo2}
w_g = R_g{}_*{}^{\otimes 2} w^R(g)\,.
\end{equation}
To obtain an expression involving Lie algebra elements we express the group element as $g = e^{tX}$ and consider the derivative of the second term in (\ref{wciclo1}), using $w^R(e) = 0$ we obtain
\begin{equation} \label{diffPB2}
\frac{d}{dt} \langle w^R(e^{tX}), R_{e^{tX}}^*{}^{\otimes 2} (df_1 \otimes df_2) |_{e^{tX}} \rangle \Big|_{t=0} = \langle \frac{d}{dt} w^R(e^{tX}) \big|_{t=0} , R_{e^{tX}}^*{}^{\otimes 2} (df_1 \otimes df_2) |_{e^{tX}} \big|_{t=0} \rangle.
\end{equation}
We now {\it define} the co-commutator in terms of $w^R$ as
\begin{equation} \label{diffPBcoco}
\frac{d}{dt} w^R(e^{tX}) \big|_{t=0} \equiv \delta(X)\,.
\end{equation}
Denoting $\xi_i = df_i |_{e}$ and equating \eqref{diffPB2} to the derivative of the l.h.s. of \eqref{wciclo1}, we can write down the following relation between co-commutator and Poisson bracket on the group
\begin{equation} \label{dualrelcoco}
\langle X, d\{f_1,f_2\} |_e \rangle = \langle \delta(X), \xi_1 \otimes \xi_2 \rangle\,.
\end{equation}
For Poisson-Lie groups $w^R$ must comply with a condition analogous to (\ref{bivecg1g2}) which ensures that the group multiplication on $G$ is a Poisson map. Such a condition will translate into a ``local" condition on the co-commutator through (\ref{diffPBcoco}), which we will derive explicitly. Writing
\begin{equation} \label{wciclo3}
w^R(g) = R_{g^{-1}}{}_*{}^{\otimes 2} w_g\,,
\end{equation}
we can act on \eqref{bivecg1g2} with the tangent linear map associated to the right action $R_{g_1g_2}^{-1} = R_{(g_1g_2)^{-1}} = R_{g_2^{-1}g_1^{-1}}$ and using the identity $R_{g_1}L_{g_2} = L_{g_2}R_{g_1}$ we obtain
\begin{align} \label{wciclo5}
w^R(g_1g_2) &= R_{g_1^{-1}}{}_*{}^{\otimes 2} L_{g_1}{}_*{}^{\otimes 2} R_{g_2^{-1}}{}_*{}^{\otimes 2} w_{g_2} + R_{g_1^{-1}}{}_*{}^{\otimes 2} w_{g_1} \\
&= R_{g_1^{-1}}{}_*{}^{\otimes 2} L_{g_1}{}_*{}^{\otimes 2} w^R(g_2) + w^R(g_1).
\end{align}
The translation $L_g R_{g^{-1}} g' = g g' g^{-1} = \mathrm{Ad}_g g'$ is the adjoint action of $g$ on $g'$ in $G$. Then, denoting the action of the tangent linear maps as $L_{g_1}{}_* R_{g_1^{-1}}{}_* = \mathrm{Ad}_{g_1}$, we can write the above relation as
\begin{equation} \label{cociclowR}
w^R(g_1g_2) = \mathrm{Ad}_{g_1}{}^{\otimes 2} w^R(g_2) + w^R(g_1),
\end{equation}
which is the required condition on $w^R$ which ensures that the Poisson structure is compatible with group multiplication. Let us differentiate \eqref{cociclowR} in order to derive the analogous condition for the co-commutator $\delta(X)$. We start by noticing that from the definition (\ref{diffPBcoco}) it follows\footnote{The property is easily verified by taking the derivative in the direction of $X$ of $w^R(e) = 0$.} that $\delta(-X) = -\delta(X)$. Next we look at the co-commutator of $[X,Y]$
\begin{align} \label{dcomm1}
\delta([X,Y]) &= \delta\bigg(\frac{d}{ds}\ \frac{d}{dt} (e^{sX}e^{tY}e^{-sX}) |_{s,t=0}\bigg), \nonumber \\
&= \frac{d}{ds}\ \frac{d}{dt} w^R(e^{sX}e^{tY}e^{-sX}) |_{s,t=0},
\end{align}
taking into account the identity
\begin{align} \label{commutator}
[X,Y]= \frac{d}{dt}\ \frac{d}{ds} \bigg(\mathrm{Ad}_{e^{sX}} e^{tY} \bigg) \bigg|_{t,s=0} &= \frac{d}{ds} \bigg(\mathrm{Ad}_{e^{sX}} \bigg) \bigg|_{s=0}\ \frac{d}{dt} e^{tY} \bigg|_{t=0}, \nonumber \\
&= \frac{d}{ds} \bigg( \mathrm{Ad}_{e^{sX}} \bigg) \bigg|_{s=0} Y\,,
\end{align}
and applying the condition (\ref{cociclowR}) for $w^R$ twice we get
\begin{align} \label{dcomm2}
\delta([X,Y]) &= \frac{d}{ds}\ \frac{d}{dt} \bigg[ w^R(e^{sX}) + \mathrm{Ad}_{e^{sX}}{}^{\otimes 2}\ w^R(e^{tY}e^{-sX}) \bigg]_{s,t=0}, \nonumber \\
&= \frac{d}{ds}\ \frac{d}{dt} \bigg[ w^R(e^{sX}) + \mathrm{Ad}_{e^{sX}}{}^{\otimes 2}\ \big( w^R(e^{tY}) + \mathrm{Ad}_{e^{tY}}{}^{\otimes 2} w^R(e^{-sX}) \big) \bigg]_{s,t=0}, \nonumber \\
&= \frac{d}{ds} \big(\mathrm{Ad}_{e^{sX}}{}^{\otimes 2}\big)\ \frac{d}{dt} w^R(e^{tY}) \big|_{s,t=0} + \frac{d}{ds} \big( \mathrm{Ad}_{e^{sX}}{}^{\otimes 2} \big)\ \frac{d}{dt} \big(\mathrm{Ad}_{e^{tY}}{}^{\otimes 2} \big) w^R(e^{-sX}) \big|_{s,t=0} \nonumber \\
&\hspace{4cm} + \mathrm{Ad}_{e^{sX}}{}^{\otimes 2} \ \frac{d}{dt} \big(\mathrm{Ad}_{e^{tY}}{}^{\otimes 2} \big)\ \frac{d}{ds} w^R(e^{-sX}) \big|_{s,t=0},
\end{align}
where we used that $w^R(e) = 0$, $\mathrm{Ad}_{e}{}^{\otimes 2} = \mathbbm{1} \otimes \mathbbm{1}$. Introducing the notation
\begin{equation} \label{XpuntoY}
X.\delta(Y) = (\mathrm{ad}_X \otimes \mathbbm{1} + \mathbbm{1} \otimes \mathrm{ad}_X) \delta(Y)\,,
\end{equation}
we can rewrite (\ref{dcomm2}) as the following equation for the co-commutator
\begin{equation} \label{cocylecoco}
\delta([X,Y]) = X.\delta(Y) - Y.\delta(X)\,,
\end{equation}
which is known in the mathematical literature as the {\it co-cycle condition} \cite{chari,tjin1992}. A Lie algebra equipped with a co-commutator satisfying the co-cycle condition is called a {\it Lie bi-algebra}. We will see that the Lie-bialgebra structure can give rise to Poisson structures which are not Poisson-Lie but are symplectic, and that the requirement \eqref{cocylecoco} on the Lie-bialgebra structure remains crucial in order to have a proper Poisson structure. Thus, from our perspective, the fundamental task is to look for co-commutators satisfying the co-cycle condition, which we can then use to construct a Poisson structure on a phase space with group-valued momenta.
\subsubsection{Poisson structures and the $r$-matrix}
One way to construct a co-commutator which automatically satisfies the co-cycle condition is to consider one of the form
\begin{equation} \label{deltar}
\delta(X) \equiv X.r = \left( \mathrm{ad}_X \otimes \mathbbm{1} + \mathbbm{1} \otimes \mathrm{ad}_X \right) r\,,
\end{equation}
where $r$ is a generic element of $\mathfrak{g} \otimes \mathfrak{g}$ called the {\it $r$-matrix}. In order for $\delta$ to be a genuine co-commutator the $r$-matrix must satisfy the following two conditions:
\begin{enumerate}
\item The symmetric part of $r$, $r_+ = \frac{1}{2} \left( r^{ij} + r^{ji} \right) X_i \otimes X_j$, is an ad-invariant element of $\otimes^2 \mathfrak{g}$, where $r = r^{ij} X_i \otimes X_j$ and $\{X_i\}$ is a basis of $\mathfrak{g}$.
\item $\big[[r,r]\big] = [r_{12}, r_{13}] + [r_{12}, r_{23}] + [r_{13}, r_{23}]$ is an ad-invariant element of $\otimes^3 \mathfrak{g}$, where
\begin{align} \label{rij}
r_{12} &= r^{ij}\ X_i \otimes X_j \otimes \mathbbm{1}, \nonumber \\
r_{13} &= r^{ij}\ X_i \otimes \mathbbm{1} \otimes X_j, \nonumber \\
r_{23} &= r^{ij}\ \mathbbm{1} \otimes X_i \otimes X_j,
\end{align}
and
\begin{align}
[r_{12},r_{13}] = r^{ij} r^{kl} [X_i,X_k] \otimes X_j \otimes X_l, \nonumber \\
[r_{12},r_{23}] = r^{ij} r^{kl} X_i \otimes [X_j,X_k] \otimes X_l, \nonumber \\
[r_{13},r_{23}] = r^{ij} r^{kl} X_i \otimes X_k \otimes [X_j,X_l].
\end{align}
\end{enumerate}
The first condition is directly related to the skew-symmetry of the Poisson and Lie brackets defined by $\delta$. Analogously, the second condition ensures that such brackets satisfy the Jacobi identity. Notice that the first condition, $(\mathrm{ad}_X \otimes \mathbbm{1} + \mathbbm{1} \otimes \mathrm{ad}_X)\, r_+ = 0$ for all $X \in \mathfrak{g}$, is trivially satisfied if $r_+=0$, i.e. for a skew-symmetric $r$-matrix. It is also clear that the simplest way to satisfy the second condition is to require $\big[[r,r]\big] = 0$. This equation is known as the {\it classical Yang-Baxter equation} (CYBE) and its solutions are called {\it classical $r$-matrices}. The condition of $\big[[r,r]\big]$ being ad-invariant,
\begin{equation} \label{adschouten}
X^i . \big[[r,r]\big] = 0 \quad \forall X^i \in \mathfrak{g}
\end{equation}
where
\begin{equation} \label{adschoutenop}
X^i . \big[[r,r]\big] \equiv \big(\mathrm{ad}_{X^i} \otimes \mathbbm{1} \otimes \mathbbm{1}\ +\ \mathbbm{1} \otimes \mathrm{ad}_{X^i} \otimes \mathbbm{1}\ +\ \mathbbm{1} \otimes \mathbbm{1} \otimes \mathrm{ad}_{X^i} \big) \big[[r,r]\big],
\end{equation}
is known as the {\it modified classical Yang-Baxter equation} (mCYBE). The most important point for us is that the co-commutator, as defined by \eqref{deltar}, can be \textit{integrated} to a Poisson structure on $G$. One possibility is to hold on to the requirement of compatibility with the group multiplication, obtaining a Poisson-Lie structure on $G$. Nevertheless, other Poisson structures can be associated to the same co-commutator which are not Poisson-Lie but which, e.g., are symplectic and thus are good candidates for describing the Poisson bracket of a deformed phase space. These last structures are the ones we are interested in.
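As a simple illustration of these conditions, consider the two-dimensional Lie algebra spanned by $H, E$ with $[H,E] = E$ (essentially the algebra $\mathfrak{an}(1)$ which we will encounter below) and the skew-symmetric $r$-matrix $r = H \otimes E - E \otimes H$, for which $r_+ = 0$ holds trivially. A direct computation of the three terms of the Schouten bracket gives
\begin{align}
[r_{12},r_{13}] &= -E \otimes E \otimes H + E \otimes H \otimes E, \nonumber \\
[r_{12},r_{23}] &= -H \otimes E \otimes E + E \otimes E \otimes H, \nonumber \\
[r_{13},r_{23}] &= \phantom{-}H \otimes E \otimes E - E \otimes H \otimes E,
\end{align}
so that $\big[[r,r]\big] = 0$ and $r$ is a classical $r$-matrix solving the CYBE.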
Before moving on let us describe in some detail the Poisson bi-vectors associated to an $r$-matrix. A possible choice of Poisson bi-vector associated to a given $r$-matrix is such that its right translate to the identity element of $G$ is given by
\begin{equation} \label{wRr}
w^R(g) = \mathrm{Ad}_g{}^{\otimes 2} r - r.
\end{equation}
Writing an element of $G$ as $g = e^{tX}$, it is a straightforward calculation to check that \eqref{wRr} has the ``correct'' derivative
\begin{align} \label{derivwR}
\delta(X) &= \frac{d}{dt} w^R(e^{tX}) \Big|_{t=0}, \nonumber \\
&= \bigg[ \frac{d}{dt} \big(\mathrm{Ad}_{e^{tX}} \big) \otimes \mathrm{Ad}_{e^{tX}} + \mathrm{Ad}_{e^{tX}} \otimes \frac{d}{dt} \big(\mathrm{Ad}_{e^{tX}} \big) \bigg]_{t=0} \ r, \nonumber \\
& = ( \mathrm{ad}_X \otimes \mathbbm{1} + \mathbbm{1} \otimes \mathrm{ad}_X )\ r = X.r\,.
\end{align}
It is also easily checked that (\ref{wRr}) satisfies the co-cycle property \eqref{cociclowR}.
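Indeed, using $\mathrm{Ad}_{g_1 g_2} = \mathrm{Ad}_{g_1} \mathrm{Ad}_{g_2}$ one has
\begin{equation}
w^R(g_1 g_2) = \mathrm{Ad}_{g_1}{}^{\otimes 2} \big( \mathrm{Ad}_{g_2}{}^{\otimes 2}\, r - r \big) + \mathrm{Ad}_{g_1}{}^{\otimes 2}\, r - r = \mathrm{Ad}_{g_1}{}^{\otimes 2}\, w^R(g_2) + w^R(g_1)\,.
\end{equation}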
It is possible to define a different Poisson structure on $G$, starting from the same $r$-matrix, which also satisfies the above properties. This structure is determined by the Poisson bi-vector
\begin{equation} \label{wRrHeisenberg}
w^R(g) = \mathrm{Ad}_g{}^{\otimes 2} r + r^*,
\end{equation}
where $r^*$ is minus the transpose of $r$, that is, if $r = r^{ij} X_i \otimes X_j$ then $r^* = -r^{ji} X_i \otimes X_j$ ($r^* = - r^t$). As we will see, this Poisson bivector {\it does not} give rise to a Poisson-Lie structure, but it allows one to define a symplectic Poisson structure on the group manifold since it is not necessarily degenerate at the group identity.
It is useful to write down the explicit form of the Poisson brackets on the group in terms of the $r$-matrix. The Poisson bracket associated to \eqref{wRr} can be obtained from \eqref{pbwg}
written in terms of the right translate of $w_g$ to the identity element, cf. equation \eqref{wciclo1}
\begin{align} \label{wgPBr}
\{f_1,f_2\}(g) &= \langle w^R(g), R_g^*{}^{\otimes 2}\ (df_1 \otimes df_2)|_g \rangle, \nonumber \\
&= \langle \mathrm{Ad}_g^{\otimes 2}\ r - r, R_g^*{}^{\otimes 2}\ (df_1 \otimes df_2)|_g \rangle, \nonumber \\
&= \langle (L_g{}_*\, R_{g^{-1}}{}_*)^{\otimes 2} \ r, R_g^*{}^{\otimes 2}\ (df_1 \otimes df_2)|_g \rangle \nonumber \\
&\hspace{4cm} - \langle r, R_g^*{}^{\otimes 2}\ (df_1 \otimes df_2)|_g \rangle, \nonumber \\
&= \langle r, L_g^*{}^{\otimes 2} \ (df_1 \otimes df_2)|_g \rangle - \langle r, R_g^*{}^{\otimes 2}\ (df_1 \otimes df_2)|_g \rangle\,.
\end{align}
where in the third line we used the equality $L_{g_1}{}_* R_{g_1^{-1}}{}_* = \mathrm{Ad}_{g_1}$. Thus we have
\begin{equation}
\{f_1,f_2\}(g) = \langle r, L_g^*{}^{\otimes 2} \ (df_1 \otimes df_2)|_g \rangle - \langle r, R_g^*{}^{\otimes 2}\ (df_1 \otimes df_2)|_g \rangle\,.
\end{equation}
The corresponding expression for the bivector \eqref{wRrHeisenberg} is given by
\begin{equation} \label{wgPBHeisenberg}
\{f_1, f_2\}(g) = \langle r, L_g^*{}^{\otimes 2} (df_1 \otimes df_2) |_g \rangle + \langle r^*, R_g^*{}^{\otimes 2} (df_1 \otimes df_2 ) |_g \rangle.
\end{equation}
In order to write the brackets in a form more convenient for actual calculations we express the $r$-matrix as $r = r^{ij}\ X_i \otimes X_j$ in terms of a basis $\{X_i\}$ of $\mathfrak{g} = T_e G$. The bracket \eqref{wgPBr} can then be written as
\begin{align} \label{PBrij}
\{f_1,f_2\} (g) &= \langle r^{ij}\ X_i \otimes X_j, \big( L_g^*{}^{\otimes 2} - R_g^*{}^{\otimes 2} \big)\ (df_1 \otimes df_2)_g \rangle, \nonumber \\
&= \langle r^{ij}\ \big( L_g{}_*{}^{\otimes 2}\ (X_i \otimes X_j) - R_g{}_*{}^{\otimes 2}\ (X_i \otimes X_j) \big), (df_1 \otimes df_2)_g \rangle, \nonumber \\
&= \langle r^{ij}\ \big( X^L_i \otimes X^L_j - X^R_i \otimes X^R_j \big), (df_1 \otimes df_2)_g \rangle
\end{align}
where $X^L_i = L_g{}_*\ X_i$ and $X^R_i = R_g{}_*\ X_i$ are the left and right translate of $X_i \in T_e G = \mathfrak{g}$, and thus in compact form
\begin{equation} \label{PBleftright}
\{f_1,f_2\} = r^{ij}\ \big( X^L_i f_1\ X^L_j f_2 - X^R_i f_1\ X^R_j f_2 \big)\,.
\end{equation}
Analogously for the bracket \eqref{wRrHeisenberg} we can write
\begin{align} \label{PBleftrightHei}
\{f_1,f_2\} &= r^{ij}\ X^L_i f_1\ X^L_j f_2 + (r^*)^{ij} X^R_i f_1\ X^R_j f_2 \nonumber \\
&= r^{ij}\ X^L_i f_1\ X^L_j f_2 - r^{ji} X^R_i f_1\ X^R_j f_2.
\end{align}
Let us finally notice that if $G$ is a matrix group, the matrix elements $t_{lm}$ in $GL_n(\mathbb{R})$ can be seen as coordinate functions on the group, $t_{lm}(g)$. The left- and right-invariant vector fields act on these coordinate functions as
\begin{equation}
X^L(t_{lm}) = (TX)_{lm}, \qquad X^R(t_{lm}) = (XT)_{lm},
\end{equation}
where $T$ is the matrix whose elements are $t_{lm}$.
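These relations follow directly from the definitions $X^L = L_g{}_*\, X$ and $X^R = R_g{}_*\, X$; for instance
\begin{equation}
X^L(t_{lm})(g) = \frac{d}{ds}\, t_{lm}\big(g\, e^{sX}\big)\Big|_{s=0} = (gX)_{lm} = (TX)_{lm}\,,
\end{equation}
and analogously $X^R(t_{lm})(g) = (Xg)_{lm} = (XT)_{lm}$. Plugging these into \eqref{PBleftright} then gives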
\begin{equation}
\{t_{mn}, t_{kl}\} = \sum_{a,b} \left( r_{an\, bl}\ t_{ma} t_{kb} - r_{ma\, kb}\ t_{an} t_{bl}\right),
\end{equation}
where $r_{an\, bl} \equiv (r^{ij} X_i \otimes X_j)_{an\,bl} = r^{ij}\ (X_i)_{an} (X_j)_{bl}$.
Usually, this Poisson bracket is denoted as $\{t_{mn}, t_{kl} \} = \{ T\ \overset{\otimes}{,}\ T \}_{mnkl}$ and it is written as
\begin{equation}
\{T\ \overset{\otimes}{,}\ T\} = [ T \otimes T, r ].
\end{equation}
In the next section we will apply the tools illustrated so far to cartesian product Lie groups of the type $T \times G$. Imposing a duality relation between the Lie algebras $\mathfrak{t}$ and $\mathfrak{g}$ the group manifold $\Gamma = T \times G$ can be seen as a deformation of ordinary phase spaces with group manifold configuration space {\it and} momentum space.
\subsection{Deforming phase spaces: the classical doubles}
In order to define a Poisson structure on the group phase space $\Gamma = T \times G$ we will look for an ``exponentiated" version of co-commutators defined on the Lie (bi)-algebra $\mathfrak{t} \oplus \mathfrak{g}$. The starting point, of course, will be to define a Lie algebra structure on the vector space $\mathfrak{t} \oplus \mathfrak{g}$ compatible with the duality relation (\ref{dualpairing}) between $\mathfrak{t}$ and $\mathfrak{g}$. Such relation is encoded in the natural inner product on $\mathfrak{t} \oplus \mathfrak{g}$ given by
\begin{equation} \label{extdualpairing}
(P_\mu, P_\nu) = 0, \qquad (X^\mu, X^\nu) = 0 \qquad \text{and} \qquad (P_\mu, X^\nu) = \langle P_\mu, X^\nu \rangle\,.
\end{equation}
We want to define on $\mathfrak{t} \oplus \mathfrak{g}$ a Lie bracket such that the inner product above is {\it invariant} under the adjoint action of the elements of $\mathfrak{t} \oplus \mathfrak{g}$, that is
\begin{equation} \label{adinvariantinner}
([Z_A,Z_B],Z_C) = (Z_A,[Z_B,Z_C]),
\end{equation}
where $Z_A = \{P_\mu, X^\mu\}$, $A = 1,\ldots, 2n$. The following Lie brackets
\begin{equation} \label{liealgdouble}
[P_\mu, P_\nu] = d_{\mu\nu}^\sigma P_\sigma, \qquad [X^\mu, X^\nu] = c^{\mu\nu}_\sigma X^\sigma \qquad \text{and} \qquad [P_\mu,X^\nu] = c^{\nu\sigma}_\mu P_\sigma - d_{\mu\sigma}^\nu X^\sigma,
\end{equation}
comply with such a requirement, as can be easily verified. However, in order to show that these brackets turn $\mathfrak{t} \oplus \mathfrak{g}$ into a Lie algebra we must ensure that they satisfy the Jacobi identity. It turns out that the brackets (\ref{liealgdouble}) on $\mathfrak{t} \oplus \mathfrak{g}$ satisfy the Jacobi identity {\it if and only if} the co-commutator $\delta_{\mathfrak{t}}$ on the Lie algebra $\mathfrak{t}$ satisfies the co-cycle condition (\ref{cocylecoco}) \cite{chari}. As we discussed at the beginning of Section II.B, the Lie algebra structure on the ``momentum'' Lie algebra $\mathfrak{g}$ defines a co-commutator $\delta_{\mathfrak{t}}$ on the dual Lie algebra $\mathfrak{t}$ via
\begin{equation} \label{LBdualcoco}
\langle P, [X_i, X_j]_{\mathfrak{g}} \rangle = \langle \delta_{\mathfrak{t}}(P), X_i \otimes X_j \rangle\,.
\end{equation}
Thus we see that given the Lie algebra structure \eqref{liealgdouble} on $\mathfrak{t} \oplus \mathfrak{g}$ and the pairing through the inner product \eqref{extdualpairing}, a Lie bi-algebra structure on $\mathfrak{t}$ (and by duality on $\mathfrak{g}$) is naturally induced and it can be shown to be {\it unique} \cite{chari}.
We now want to define a Lie-bialgebra structure on the whole direct sum Lie algebra $\mathcal{D} = \mathfrak{t} \oplus \mathfrak{g}$, i.e. a co-commutator $\delta_{\mathcal{D}}$ which reproduces $\delta_{\mathfrak{t}}$ and $\delta_{\mathfrak{g}}$ as given in equations \eqref{cocommutators} when restricted, respectively, to $\mathfrak{t}$ and $\mathfrak{g}$. It turns out that there is a canonical way\footnote{Sometimes the $r$-matrix is written directly as the skew-symmetric part $r_- = \frac{1}{2}\left( P_\mu \otimes X^\mu - X^\mu \otimes P_\mu \right) \equiv P_\mu \wedge X^\mu$, which yields the same Lie-bialgebra structure. One just has to notice that in this case $r_-$ trivially satisfies the ad-invariance condition, since $r_+ = 0$, and that it is a solution of the mCYBE instead of the CYBE. See \cite{Zakrzewski1994} for the use of an analogous classical $r$-matrix in the context of deformations of the Poincar\'e group.} of defining such a co-commutator in terms of the $r$-matrix belonging to $\mathfrak{t} \otimes \mathfrak{g}$
\begin{equation} \label{rmadouble}
r = P_\mu \otimes X^{\mu}\,,
\end{equation}
as
\begin{equation}
\label{cocodouble}
\delta_{\mathcal{D}}(Z_A) = Z_A.r = \left( \mathrm{ad}_{Z_A} \otimes \mathbbm{1} + \mathbbm{1} \otimes \mathrm{ad}_{Z_A} \right) r\,.
\end{equation}
It is easily verified that such a co-commutator reduces to \eqref{cocommutators} when $Z_A= X^{\mu}$ or $Z_A= P_{\mu}$. It can also be proved \cite{chari} that \eqref{cocodouble} defines a genuine Lie bi-algebra structure on $\mathcal{D}$, i.e. that $\delta_{\mathcal{D}}(Z_A)$ complies with the properties listed in Section II.B.
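As an explicit check, acting with \eqref{cocodouble} on a translation generator and using the brackets \eqref{liealgdouble} one finds
\begin{equation}
\delta_{\mathcal{D}}(P_\mu) = [P_\mu, P_\nu] \otimes X^\nu + P_\nu \otimes [P_\mu, X^\nu] = c^{\nu\sigma}_\mu\, P_\nu \otimes P_\sigma\,,
\end{equation}
where the two terms proportional to the structure constants $d_{\mu\nu}^\sigma$ cancel upon relabelling the summed indices; this is precisely the co-commutator on $\mathfrak{t}$ dual to the Lie bracket of $\mathfrak{g}$, cf. \eqref{LBdualcoco}.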
We can now use the $r$-matrix \eqref{rmadouble} to define a Poisson structure along the lines illustrated in the previous section. There we saw that, given an appropriate $r$-matrix, there are two possible choices of Poisson bi-vector which can be used to define a Poisson structure on $\Gamma = T \times G$. Let us consider two functions on $\Gamma$, $f_1,f_2 \in C^\infty(T \times G)$, then the Poisson bracket is given by $\{ f_1, f_2 \}(h) = \langle w_h, (df_1 \otimes df_2)_h \rangle$, for $h \in T \times G$. In terms of the $r$-matrix we have a Poisson bracket given by
\begin{equation} \label{rmatrix1}
\{f_1, f_2\} = - r^{AB} \big( Z_A^R f_1\, Z_B^R f_2\ \pm\ Z_A^L f_1\, Z_B^L f_2 \big),
\end{equation}
where $Z_A^L = L_h{}_* Z_A$ and $Z_B^R = R_h{}_* Z_B$ are the left and right translates of $Z_A \in \mathfrak{t} \oplus \mathfrak{g}$. If $\Gamma$ is a matrix group then \eqref{rmatrix1} is given as
\begin{equation} \label{rmatrix2}
\{\gamma_{ij}, \gamma_{kl}\} = - \sum_{a,b} ( r_{iakb} \gamma_{aj} \gamma_{bl} \pm r_{ajbl} \gamma_{ia} \gamma_{kb} ),
\end{equation}
where $\gamma \in \Gamma$ with $\gamma_{ij}$ its components which can be understood as coordinate functions for the group, \mbox{$r_{iakb} \equiv (r^{AB} Z_A \otimes Z_B)_{iakb}$}, and $Z_A^L(\gamma_{ij}) = (\gamma Z_A)_{ij}$, $Z_A^R(\gamma_{ij}) = (Z_A \gamma)_{ij}$. This can be expressed compactly as
\begin{equation} \label{rmatrix3}
\{ \gamma\ \overset{\otimes}{,}\ \gamma \} = - [r, \gamma \otimes \gamma]_\pm,
\end{equation}
with the plus subscript denoting the anti-commutator and the minus the commutator. The bracket given by the commutator equips $\mathcal{D}$ with the structure of a {\it Drinfeld double} and the one with the anticommutator the structure of a {\it Heisenberg double}. We would like to stress the fundamental difference between these two structures. On the one hand, the Drinfeld structure \eqref{wRr} satisfies the compatibility condition \eqref{bivecg1g2} for having a Poisson-Lie group, but the Poisson structure is always degenerate at the identity element $e \in G$, hence the structure is not symplectic\footnote{It is possible, however, to foliate the group manifold in a set of {\it symplectic leaves} with the Poisson structure of the manifold restricted to each leaf \cite{alekseev1994}.}. On the other hand, in the case of the Heisenberg double, one gives up the requirement of compatibility between the group multiplication and the Poisson structure, favouring the possibility of a global symplectic structure \cite{kosmann1997}.
In what follows we will focus on explicit examples of phase spaces and their Poisson structures making use of the tools developed so far. We will start from an ordinary massive relativistic particle and will proceed to consider {\it deformed phase spaces} in three and more space-time dimensions which are characterized by momenta living on a non-abelian Lie group.
\section{Relativistic spinless particle: flat momentum space}
\label{subsec:flat}
In this section we start by describing the (undeformed) phase space of a relativistic spinless particle using the tools developed in the previous Section.
Even though the following treatment might appear as a byzantine academic exercise, it will serve as a starting point for introducing the deformations of momentum space which we will develop in the next Sections.
The configuration space of a spinless relativistic particle in $n+1$-dimensions can be identified with the Abelian group of translations, $\mathcal{T} \simeq \mathbb{R}^{n,1}$. At any point in the configuration space $x \in \mathcal{T}$ the cotangent (momentum) space is $T_x^* \mathbb{R}^{n,1} = \mathbb{R}^{n,1}{}^*$, where $\mathbb{R}^{n,1}{}^*$ stands for the dual, as a vector space, to $\mathbb{R}^{n,1}$. Following a ``geometric" approach, at this point one would introduce a symplectic structure and with it a Poisson bracket for the space of functions on the cotangent bundle $T^*\mathcal{T}$. Here we will follow an algebraic approach along the lines discussed in the previous section. Our phase space manifold $\Gamma$ is given by
\begin{equation}
\Gamma = \mathcal{T} \times \mathcal{T}^* \simeq \mathbb{R}^{n,1} \times \mathbb{R}^{n,1}{}^*.
\end{equation}
We denote the Lie algebra associated to each component of $\Gamma$ as $\mathfrak{t}$ and $\mathfrak{t}^*$ for $\mathcal{T}$ and $\mathcal{T}^*$, respectively, and the coordinates for $\mathcal{T}$ as $x^\mu$ and $p_\mu$ for $\mathcal{T}^*$, with $\mu=0,\ldots, n$. For the Lie algebra $\mathfrak{t}$ we denote the basis elements as $\{P_\mu \}$, whereas for $\mathfrak{t}^*$ we use $\{X^\mu \}$. The (trivial) Lie brackets are
\begin{equation} \label{trivialcomm}
[P_\mu,P_\nu] = 0 \qquad \text{and} \qquad [X^\mu, X^\nu] = 0,
\end{equation}
for all $\mu,\nu$. We will see that, in terms of the group coordinates, the bases of the Lie algebras are realized by the coordinate vector fields
\begin{equation} \label{coordbases}
P_\mu = \frac{\partial}{\partial x^\mu} \qquad \text{and} \qquad X^\mu = -\frac{\partial}{\partial p_\mu},
\end{equation}
where $X^\mu = \eta^{\mu\nu} X_\nu$ and $\eta^{\mu\nu} = \mathrm{diag}(+,-,\ldots,-)$.
Taking into account that $\mathfrak{t}$ and $\mathfrak{t}^*$ are dual spaces $\langle P_\mu, X^\nu \rangle = \delta_\mu^{\; \nu}$, we can define an inner product for $\mathfrak{t} \oplus \mathfrak{t}^*$ extending the dual pairing as follows
\begin{equation} \label{innerprodD}
( P_\mu, P_\nu ) = ( X^\mu, X^\nu ) = 0 \qquad \text{and} \qquad ( P_\mu, X^\nu ) = \langle P_\mu, X^\nu \rangle = \delta_\mu^{\; \nu}.
\end{equation}
As we saw in the previous section, we can define a Lie bracket for $\mathfrak{t} \oplus \mathfrak{t}^*$ by requiring that the inner product of the direct sum be $\mathrm{ad}$-invariant. The ``mixed'' commutator vanishes for any two members of $\mathfrak{t} \oplus \mathfrak{t}^*$, thus we obtain an Abelian Lie algebra
\begin{equation} \label{commD}
[P_\mu, P_\nu] = 0, \qquad [X^\mu, X^\nu] = 0, \qquad \text{and} \qquad [P_\mu, X^\nu] = 0.
\end{equation}
Therefore, for $\mathfrak{t}$ and $\mathfrak{t}^*$ we have trivial co-commutators
\begin{equation} \label{coco-t}
\delta_{\mathfrak{t}}(P_\mu) = 0 \qquad \text{and} \qquad \delta_{\mathfrak{t}^*}(X^\mu) = 0,
\end{equation}
in accordance with the first two expressions in \eqref{commD}.
At this point we only have a Lie algebra structure for $\mathfrak{t} \oplus \mathfrak{t}^*$. We are interested in defining a Lie-bialgebra structure on such Lie algebra in order to obtain the ``double'' of the Lie algebra of translations, $\mathcal{D}(\mathfrak{t})$. Since we are dealing with an abelian Lie algebra, the Lie bialgebra structure will have a trivial co-commutator. Nevertheless, we carry on with this construction since it allows us to present the subsequent cases as a \emph{deformation} of momentum space to a non-abelian Lie group.
The co-commutator for the double $\mathcal{D}(\mathfrak{t})$ can be obtained from \eqref{cocodouble} by choosing an $r$-matrix which fulfills the conditions of ad-invariance of its symmetric part and the (modified) classical Yang-Baxter equation. In this case, where the Lie algebra of $\mathfrak{t} \oplus \mathfrak{t}^*$ is trivial, any $r$-matrix satisfies the Lie-bialgebra conditions and has a trivial co-commutator. The canonical Poisson bracket can be obtained from the following anti-symmetric $r$-matrix
\begin{equation} \label{rcanonical}
r = \frac{1}{2} \left( P_\mu \otimes X^\mu - X^\mu \otimes P_\mu \right) \equiv \, P_\mu \wedge X^\mu .
\end{equation}
Grouping the set of generators for $\mathfrak{t}$ and $\mathfrak{t}^*$ as $Z^A = (P_\mu, X^\mu)$ where $A=1,\ldots, 2(n+1)$ we see that
\begin{equation} \label{cocoDtrivial}
\delta_{\mathcal{D}(\mathfrak{t})} (Z^A) = (\mathrm{ad}_{Z^A} \otimes \mathbbm{1} + \mathbbm{1} \otimes \mathrm{ad}_{Z^A}) r = 0
\end{equation}
for all $Z^A \in \mathfrak{t} \oplus \mathfrak{t}^*$, i.e. we have a Lie-bialgebra structure turning $\mathfrak{t} \oplus \mathfrak{t}^*$ into a classical double.
We can now derive the Poisson brackets on the group whose Lie algebra is given by $\mathfrak{t} \oplus \mathfrak{t}^*$ with the trivial commutators \eqref{commD}. From the canonical $r$-matrix \eqref{rcanonical}, choosing the plus sign in \eqref{rmatrix1}, we see that
\begin{align}
\{f_1, f_2\} &= -2\, r^{AB}\, Z_A f_1\, Z_B f_2, \nonumber \\
&= -\delta^\mu_\nu \left( P_\mu f_1 X^\nu f_2 - X^\nu f_1 P_\mu f_2 \right).
\end{align}
Using the coordinate basis \eqref{coordbases} we obtain
\begin{equation}
\{f_1, f_2\} = \frac{\partial f_1}{\partial x^\mu} \frac{\partial f_2}{\partial p_\mu} - \frac{\partial f_1}{\partial p_\mu} \frac{\partial f_2}{\partial x^\mu} ,
\end{equation}
and the Poisson brackets for the coordinate functions on the phase space are
\begin{align} \label{pbflat}
\{x^0, x^a\} = 0 \qquad \{x^a, x^b\} &= 0 \qquad \{p_0,p_a\} = 0 \qquad \{p_a, p_b\} = 0 \nonumber \\
\{x^0, p_0\} = 1 \qquad \{x^a, p_0 \} &= 0 \qquad \{x^0, p_a\} = 0 \qquad \{x^a, p_b\} = \delta^a_b,
\end{align}
where $a,b = 1,\ldots,n$. This Poisson structure is also symplectic since the Poisson bivector is non-degenerate. Notice that if we had chosen the minus sign in \eqref{rmatrix1}, i.e. the Drinfeld double structure, we would have obtained a trivial Poisson-Lie structure! Thus the Heisenberg double structure, in the flat momentum space case, reproduces the canonical Poisson brackets of textbook classical mechanics of a relativistic point particle.
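The triviality of the Drinfeld choice can be seen directly: on the abelian group $\Gamma = \mathcal{T} \times \mathcal{T}^*$ left and right translates coincide, $Z_A^L = Z_A^R = Z_A$, so that the minus sign in \eqref{rmatrix1} yields
\begin{equation}
\{f_1, f_2\} = - r^{AB} \big( Z_A f_1\, Z_B f_2 - Z_A f_1\, Z_B f_2 \big) = 0\,.
\end{equation}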
\section{Deforming momentum space to the $AN(n)$ group}
\label{sec:deform}
As a first example of non-abelian momentum space we will consider the $(n+1)$-dimensional Lie sub-group of the $(n+2)$-dimensional Lorentz group $SO(n+1,1)$ denoted as $AN(n)$. The three and four-dimensional versions of this group have been the subject of extensive study over the past years in the context of $\kappa$-deformations of relativistic symmetries \cite{KowalskiGlikman:2003we, Freidel:2007hk, meusburger2009, Arzano:2010kz, arzano2011, AmelinoCamelia:2011nt}. These models are characterized by {\it deformed} translation generators which act according to a ``generalized Leibniz rule'' on tensor product representations. Such generators are associated to an $AN(n)$ momentum space which, as a manifold, is described by ``half'' of the $(n+1)$-dimensional de Sitter space. The deformation parameter $\kappa$, with dimension of inverse length, is related to the curvature of de Sitter space. The Lie algebra $\mathfrak{an}(n)$, when its generators are identified with space-time coordinates, is known in the literature as the $(n+1)$-dimensional $\kappa$-Minkowski space. For further technical details on $\kappa$-deformations the interested reader can consult \cite{Majid:1994cy, Kosinski:1999ix, AmelinoCamelia:2001fd, Agostini:2006zza, Arzano:2007ef, Arzano:2009ci, Kim:2009jk, Meljanac:2010ps, Borowiec:2010yw, Arzano:2014jfa}. The main goal of this Section will be to show that, starting from {\it minimal} ingredients, namely the structure constants of the Lie algebra $\mathfrak{an}(n)$, we can construct a suitable Poisson structure on a deformed phase space in which momenta belong to the $AN(n)$ Lie group. Such a phase space provides a description of the kinematics of a classical $\kappa$-deformed relativistic particle.
Our starting point will be the manifold $\Gamma = T \times AN(n)$, where $T= \mathbbm{R}^{n,1}$ is the ordinary $(n+1)$-dimensional Minkowski configuration space. The usual flat momentum space, however, is now replaced by the group manifold $AN(n)$. In all the examples that follow we will restrict to models with flat configuration space $T= \mathbbm{R}^{n,1}$.
Let us look at the Lie algebras of both components of the cartesian product group $\Gamma$. Denoting again with $\{P_\mu\}$ the basis of $\mathfrak{t}$ and with $\{\tilde{X}^\mu\}$, $\mu = 0, \ldots, n$ the basis of $\mathfrak{an}(n)$ we have
\begin{equation} \label{liealgdeform}
[P_\mu, P_\nu] = 0 \qquad \text{and} \qquad [\tilde{X}^\mu, \tilde{X}^\nu] = - \frac{1}{\kappa} \left( \tilde{X}^\mu \delta^\nu_0 - \tilde{X}^\nu \delta^\mu_0 \right).
\end{equation}
We immediately see that in the limit $\kappa \to \infty$ we recover the undeformed case of an ordinary relativistic particle reviewed in the previous Section. The algebra $\mathfrak{an}(n)$ is usually expressed as
\begin{equation} \label{liealgan}
[\tilde{X}^0, \tilde{X}^a] = \frac{1}{\kappa} \tilde{X}^a \qquad [\tilde{X}^a, \tilde{X}^b] = 0,
\end{equation}
where $a,b = 1, \ldots, n$. The two vector spaces $\mathfrak{t}$ and $\mathfrak{an}(n)$ are dual with respect to the inner product $\langle P_\mu, \tilde{X}^\nu \rangle = \delta_\mu^\nu$. The Lie algebra structure of $\mathfrak{an}(n)$ is reflected in a non-trivial co-commutator for $\mathfrak{t}$
\begin{equation} \label{cocottilde}
\delta_{\mathfrak{t}} (P_\mu) = -\frac{1}{\kappa} \left( P_\mu \otimes P_0 - P_0 \otimes P_\mu \right) = \frac{2}{\kappa} \left( P_0 \wedge P_\mu \right),
\end{equation}
while for $\mathfrak{an}(n)$ we have
\begin{equation} \label{cocottildestar}
\delta_{\mathfrak{an}} (\tilde{X}^\mu) = 0.
\end{equation}
The direct sum of Lie algebras $\mathfrak{t} \oplus \mathfrak{an}(n)$ can be equipped with an inner product, cf. \eqref{adinvariantinner}, invariant under the action of $\mathfrak{t}$ and $\mathfrak{an}(n)$
\begin{equation} \label{innerprod}
(P_\mu, P_\nu) = 0, \qquad (\tilde{X}^\mu, \tilde{X}^\nu) = 0 \qquad \text{and} \qquad (P_\mu, \tilde{X}^\nu) = \delta_\mu^\nu\,.
\end{equation}
Such product can be used to derive Lie brackets defining a Lie algebra structure on $\mathfrak{t} \oplus \mathfrak{an}(n)$. The Lie brackets are given by \eqref{liealgdeform} together with
\begin{equation} \label{lietan}
[P_\mu, \tilde{X}^\nu] = -\frac{1}{\kappa} \left(\delta^\nu_\mu P_0 - \delta^\nu_0 P_\mu \right),
\end{equation}
which written explicitly read
\begin{align}
[P_0, \tilde{X}^{\mu}] &= 0 \qquad [P_a, \tilde{X}^0] = \frac{1}{\kappa} P_a \qquad [P_a, \tilde{X}^b] = -\frac{1}{\kappa} \delta_a^b P_0.
\end{align}
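As a quick consistency check, one can verify the Jacobi identity on mixed triples of generators; for instance
\begin{equation}
\big[[P_a, \tilde{X}^0], \tilde{X}^b\big] + \big[[\tilde{X}^0, \tilde{X}^b], P_a\big] + \big[[\tilde{X}^b, P_a], \tilde{X}^0\big] = -\frac{1}{\kappa^2}\, \delta_a^b P_0 + \frac{1}{\kappa^2}\, \delta_a^b P_0 + 0 = 0\,.
\end{equation}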
As we showed in Section II, in order to define a Poisson structure on $\Gamma$ it suffices to introduce an $r$-matrix which turns $\mathfrak{t} \oplus \mathfrak{an}(n)$ into a Lie bi-algebra. A candidate $r$-matrix can be obtained from \eqref{rcanonical} by simply replacing $X^\mu$ with the new generators $\tilde{X}^\mu$
\begin{equation} \label{rmatrixan}
r = \frac{1}{2} \left(P_\mu \otimes \tilde{X}^\mu - \tilde{X}^\mu \otimes P_\mu \right) \equiv P_\mu \wedge \tilde{X}^\mu.
\end{equation}
It is easily checked that the skew-symmetric $r$-matrix \eqref{rmatrixan} satisfies the two conditions needed to render $\mathfrak{t} \oplus \mathfrak{an}(n)$ a Lie-bialgebra: $r_+ = 0$ is trivially ad-invariant, and a direct calculation shows that the Schouten bracket $\big[[r,r]\big]$ satisfies the modified classical Yang-Baxter equation
\begin{equation} \label{mcybetilde}
Z_A. \big[ [r,r]\big] = 0 \qquad \forall \ Z_A \in \mathfrak{t} \oplus \mathfrak{an}(n).
\end{equation}
The co-commutator on $\mathfrak{t} \oplus \mathfrak{an}(n)$ is defined by \eqref{cocodouble} and on the generators of $\mathfrak{t}$ and $\mathfrak{an}(n)$ reduces to
\begin{align} \label{coco-Dtilde}
\delta_{\mathfrak{t} \oplus \mathfrak{an}} (P_\mu) &= - \frac{1}{\kappa} \left( P_\mu \otimes P_0 - P_0 \otimes P_\mu \right) = \frac{2}{\kappa}\, P_0 \wedge P_\mu, \\
\delta_{\mathfrak{t} \oplus \mathfrak{an}} (\tilde{X}^\mu) &= 0\,.
\end{align}
It can be verified that the co-commutator satisfies the cocycle condition
\begin{equation} \label{cocycle-Dtilde}
\delta_{\tilde{\mathcal{D}}(\mathfrak{t})} \big([Z_A,Z_B]\big) = Z_A .\ \delta_{\tilde{\mathcal{D}}(\mathfrak{t})}(Z_B) - Z_B .\ \delta_{\tilde{\mathcal{D}}(\mathfrak{t})}(Z_A)\,,
\end{equation}
so the Lie-bialgebra $\mathfrak{t} \oplus \mathfrak{an}(n)$ can be seen as a {\it classical double} $\tilde{\mathcal{D}}(\mathfrak{t}) \equiv \mathfrak{t} \oplus \mathfrak{an}(n)$.
We can use the structures just described to construct a Poisson structure on the group $\tilde{D}(T) = T \times AN(n)$. We will use a matrix representation for the Lie algebra $\mathfrak{an}(n)$ and we will extend it in order to include the Lie algebra $\mathfrak{t}$ in the representation. It is common practice to describe the group $AN(n)$ in embedding coordinates which make clear the identification of the group manifold with half of the $(n+1)$-dimensional de Sitter hyperboloid embedded in $(n+2)$-dimensional Minkowski space. In this case the generators of the corresponding Lie algebra are given by combinations of the generators of the Lie algebra $\mathfrak{so}(n+1,1)$ and are represented by $(n+2) \times (n+2)$ matrices \cite{Arzano:2014jfa}. However, in order to include translations, i.e. to obtain a matrix representation for the Lie algebra $\mathfrak{t} \oplus \mathfrak{an}(n)$, it will be necessary to work in a different representation. We thus introduce the {\it adjoint representation} $\mathcal{R}$ for $\mathfrak{an}(n)$ defined by $\mathcal{R}: \mathfrak{an}(n) \to \mathfrak{gl}(\mathfrak{an}(n))$, $\tilde{X} \mapsto \mathrm{ad}_{\tilde{X}}$, where $\mathrm{ad}_{\tilde{X}}(\tilde{Y}) := [\tilde{X},\tilde{Y}]$ for $\tilde{X},\tilde{Y} \in \mathfrak{an}(n)$. The generators of the adjoint representation are determined by the structure constants of the Lie algebra, $[\tilde{X}^\mu, \tilde{X}^\nu] = c^{\mu\nu}_{\ \ \alpha} \tilde{X}^\alpha$, so the matrices associated to the representation are given by $[\mathcal{R}(\tilde{X}^\mu)]_\alpha^{\ \,\beta} = c^{\mu\beta}_{\ \ \ \alpha}$. It is possible to construct another representation from $\mathcal{R}$ via the matrices $\mathcal{R}^*(\tilde{X}^\mu) = - (\mathcal{R}(\tilde{X}^\mu))^{\mathrm{T}}$, where the superscript $\mathrm{T}$ stands for the transpose of the matrix \cite{fuchsbook}. We will call this matrix representation the {\it co-adjoint} representation $\mathcal{R}^*$ of $\mathfrak{an}(n)$.
In what follows we will work with the co-adjoint representation for $\mathfrak{an}(n)$ since it allows one to extend the $(n+1)\times (n+1)$-matrix representation of $\mathfrak{an}(n)$ to include the basis of $\mathfrak{t}$, arranged in an extra column, resulting in a $(n+2) \times (n+2)$-matrix representation for the Lie algebra $\mathfrak{t} \oplus \mathfrak{an}(n)$. Let us write such a matrix representation explicitly. The basis for $\mathfrak{an}(n)$ is given by $n+1$ matrices of size $(n+1) \times (n+1)$. The basis of the Lie algebra $\mathfrak{t} \oplus \mathfrak{an}(n)$ can be represented in terms of $(n+2)\times(n+2)$-matrices. The matrices corresponding to $\mathfrak{t}$ read
\begin{equation} \label{tan-matbasis}
P_\mu =
\begin{pmatrix}
0_{(n+1) \times (n+1)} && \mathbf{u}_\mu \\
&& && \\
\mathbf{0}_{(n+1)}^{\, \mathrm{T}} && 0
\end{pmatrix},
\end{equation}
where $\mathbf{0}_n$, $\mathbf{0}_{(n+1)}$ and $\mathbf{0}_{(n+2)}$ are $n$-, $(n+1)$- and $(n+2)$-component zero vectors, respectively, and $\mathbf{u}_\mu = (0,\ldots, 1,\ldots, 0)$ is an $(n+1)$-component vector with $1$ in the $\mu$th entry, for $\mu = 0,\ldots, n$. The matrices representing the $\mathfrak{an}(n)$ sector are given by
\begin{equation} \label{tan-matbasis1}
\tilde{X}^0 = -\frac{1}{\kappa}
\begin{pmatrix}
&& \mathbf{0}_n^{\, \mathrm{T}} && && \\
\mathbf{0}_{(n+2)} && \mathbbm{1}_{n \times n} && && \mathbf{0}_{(n+2)} \\
&& \mathbf{0}_n^{\, \mathrm{T}} && &&
\end{pmatrix}
\quad \text{and} \quad
\tilde{X}^a = \frac{1}{\kappa}
\begin{pmatrix}
&& \mathbf{e}_a^{\, \mathrm{T}} && \\
\mathbf{0}_{(n+2)} && 0_{n \times n} && \mathbf{0}_{(n+2)} \\
&& \mathbf{0}_n^{\, \mathrm{T}} &&
\end{pmatrix},
\end{equation}
where $\mathbf{e}_a = (0,\ldots, 1,\ldots, 0)$ is an $n$-component vector with $1$ in the $a$th entry, for $a= 1,\ldots, n$. A general group element $d \in \tilde{D}(T) = T \times AN(n)$ can be expressed as $d = t\ g$, where $t$ is a pure translation and $g$ is a pure $AN(n)$ element. The explicit matrix form of the group element is given by
\begin{equation} \label{decomp}
d =
\begin{pmatrix}
\tilde{g} && \mathbf{x}_{n+1} \\
&& \\
\mathbf{0}^{\, \mathrm{T}}_{n+1} && 1
\end{pmatrix}, \quad \text{with} \quad
g =
\begin{pmatrix}
\tilde{g} && \mathbf{0}_{n+1} \\
\mathbf{0}^{\, \mathrm{T}}_{n+1} && 1
\end{pmatrix}, \quad
t =
\begin{pmatrix}
\mathbbm{1}_{(n+1) \times (n+1)} && \mathbf{x}_{n+1} \\
&& \\
\mathbf{0}_{n+1}^{\, \mathrm{T}} && 1
\end{pmatrix},
\end{equation}
where $\tilde{g} \in AN(n)$ is a $(n+1) \times (n+1)$ matrix and $\mathbf{x}_{n+1} = (x^0, x^a)$ is a $(n+1)$-component vector with real entries $x^\mu \in \mathbb{R}$ that parametrize the group elements of $T \sim \mathbb{R}^{n,1}$, $t = e^{x^\mu P_\mu}$. The $AN(n)$ group can be parametrized in different ways using a set of real coordinates $\{p_0, \ldots, p_n\}$, so that the matrix entries are functions $\tilde{g}_{ij} (p)$.
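The ordering convention $d = t\, g$ is consistent with the matrix form above, as can be verified by block multiplication,
\begin{equation}
t\, g =
\begin{pmatrix}
\mathbbm{1}_{(n+1) \times (n+1)} && \mathbf{x}_{n+1} \\
\mathbf{0}_{n+1}^{\, \mathrm{T}} && 1
\end{pmatrix}
\begin{pmatrix}
\tilde{g} && \mathbf{0}_{n+1} \\
\mathbf{0}^{\, \mathrm{T}}_{n+1} && 1
\end{pmatrix}
=
\begin{pmatrix}
\tilde{g} && \mathbf{x}_{n+1} \\
\mathbf{0}^{\, \mathrm{T}}_{n+1} && 1
\end{pmatrix} = d\,,
\end{equation}
while the opposite ordering $g\, t$ would place $\tilde{g}\, \mathbf{x}_{n+1}$ in the last column, i.e. a different parametrization of the translation sector.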
Before presenting the Poisson brackets for particular parametrizations we will first write down the results in a general, coordinate-independent form. Using the matrix representation for $\mathcal{D}(\mathfrak{t})$ the Poisson brackets for the Heisenberg double are determined by
\begin{equation} \label{rmatrix4}
\{d\, \overset{\otimes}{,}\, d\} = - [r, d \otimes d]_+,
\end{equation}
with the $r$-matrix given by \eqref{rmatrixan}. In order to see how the Poisson brackets can be read off the expression above, one should recall that, in a simplified case in which $d$ is a $2\times 2$-matrix, the left hand side of \eqref{rmatrix4} would be given, for example, by
\begin{equation} \label{lhsexpl}
\{d\, \overset{\otimes}{,}\, d\} =
\begin{pmatrix}
\{d_{11}, d_{11}\} && \{d_{11}, d_{12}\} && \{d_{12}, d_{11}\} && \{d_{12}, d_{12}\} \\
\{d_{11}, d_{21}\} && \{d_{11}, d_{22}\} && \{d_{12}, d_{21}\} && \{d_{12}, d_{22}\} \\
\{d_{21}, d_{11}\} && \{d_{21}, d_{12}\} && \{d_{22}, d_{11}\} && \{d_{22}, d_{12}\} \\
\{d_{21}, d_{21}\} && \{d_{21}, d_{22}\} && \{d_{22}, d_{21}\} && \{d_{22}, d_{22}\}
\end{pmatrix},
\end{equation}
whereas the components on the right hand side of \eqref{rmatrix4} are simply those of the matrix $- \left( r (d \otimes d) + (d \otimes d) r \right)$. An explicit calculation of \eqref{rmatrix4} thus leads to the following Poisson brackets
\begin{equation} \label{genpbanxx}
\{x^0, x^a\} = \frac{1}{\kappa}\, x^a \qquad \text{and} \qquad \{x^a, x^b\} = 0,
\end{equation}
\begin{equation} \label{genpbanpp}
\{g_{ij}(p), g_{kl}(p)\} = 0,
\end{equation}
and
\begin{equation} \label{genpbanxp}
\{x^\mu, g\} = - \tilde{X}^\mu\ g,
\end{equation}
where $g_{ij}(p)$ are the entries of the matrix representing a pure $AN(n)$ element in $T \times AN(n)$ and the last bracket can be written explicitly in terms of the coordinates $x^\mu$ and the entries of $g$ as $\{x^\mu, g_{ij}(p)\} = [\tilde{X}^\mu\ g]_{ij}(p)$, i.e. using the explicit parametrization of the matrix representation of $g$.\footnote{It is worth mentioning that using the adjoint representation $\mathcal{R}$ for $\mathfrak{an}(n)$, we can describe the right decomposition of $d = g\ t \in T \times AN(n)$. The Poisson brackets are again obtained from \eqref{rmatrix4} just changing the sign of the $r$-matrix, $r \to -r$, and are given by \eqref{genpbanxx}, \eqref{genpbanpp} and
\begin{equation} \label{genpbanrightxp}
\{x^\mu, g\} = g\ \tilde{X}^\mu.
\end{equation}}
From \eqref{genpbanpp} we can see that for any coordinates for the momentum group manifold the Poisson brackets are
\begin{equation} \label{genpbanppbis}
\{p_\mu, p_\nu\} = 0.
\end{equation}
For illustrative purposes we write down the explicit form of the Poisson brackets above for some specific parametrizations widely used in the literature. A pure $AN(n)$ group element $g$ can be written as
\begin{equation} \label{gbeta}
g = e^{-\beta p_0 \tilde{X}^0}\, e^{-p_a \tilde{X}^a}\, e^{-(1-\beta) p_0 \tilde{X}^0},
\end{equation}
where a sum over $a = 1, \ldots, n$ is understood and the values $0 \leq \beta \leq 1$ label different coordinate systems. Among them, $\beta = 0$ corresponds to the ``time-to-the-right'' parametrization and in this case $\{p_\mu\}$ are known as {\it bicrossproduct coordinates} (since they are associated with the so-called bicrossproduct basis of the $\kappa$-Poincar\'e algebra \cite{Majid:1994cy}), $\beta = 1$ corresponds to the time-to-the-left parametrization and $\beta = \tfrac{1}{2}$ to the time-symmetric parametrization \cite{Agostini:2003vg}. The general group element \eqref{gbeta} gives rise to the following general element $d = t\ g \in T \times AN(n)$
\begin{equation} \label{dmatrix}
d =
\begin{pmatrix}
1 && -\frac{1}{\kappa} e^{\frac{(1-\beta)p_0}{\kappa}} \mathbf{p}_n^{\mathrm{T}} && \\
\mathbf{0}_{n} && \mathrm{e}^{\frac{p_0}{\kappa}} \mathbbm{1}_{n \times n} && \mathbf{x}_{(n+1)} \\
&& \mathbf{0}_{n}^{\mathrm{T}} && 1
\end{pmatrix},
\end{equation}
where $\mathbf{p}_n = ( p_1,\ldots,p_n)$ and $\mathbf{x}_{(n+1)} = (x^0, x^1,\ldots,x^n)$. Notice that the $AN(n)$ part is an upper-triangular matrix; this is a consequence of choosing the co-adjoint representation for the basis of its Lie algebra. From \eqref{genpbanxp} and using that $\{x^\mu,f(p)\} = \{x^\mu,p_\nu\} \tfrac{\partial f(p)}{\partial p_\nu}$ we find that the Poisson brackets for the different coordinate systems labelled by $\beta$ are
\begin{align} \label{Poissonbraxp}
\{x^0,p_0\} &= 1, \nonumber \\
\{x^0, p_a\} &= -\frac{p_a}{\kappa}(1-\beta), \nonumber \\
\{x^a,p_0\} &= 0, \nonumber \\
\{x^a, p_b\} &= \delta^a_b\, \mathrm{e}^{\frac{p_0}{\kappa}\beta}.
\end{align}
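To see how these brackets arise, consider for instance the entries $g_{0a} = -\frac{1}{\kappa}\, \mathrm{e}^{\frac{(1-\beta)p_0}{\kappa}} p_a$ of \eqref{dmatrix}. Since the first row of $\tilde{X}^0$ vanishes, \eqref{genpbanxp} gives $\{x^0, g_{0a}\} = -[\tilde{X}^0 g]_{0a} = 0$, while the chain rule yields
\begin{equation}
0 = \{x^0, g_{0a}\} = \frac{\partial g_{0a}}{\partial p_0}\, \{x^0, p_0\} + \frac{\partial g_{0a}}{\partial p_b}\, \{x^0, p_b\} = -\frac{1}{\kappa}\, \mathrm{e}^{\frac{(1-\beta)p_0}{\kappa}} \left( \frac{1-\beta}{\kappa}\, p_a\, \{x^0, p_0\} + \{x^0, p_a\} \right),
\end{equation}
so that, using $\{x^0, p_0\} = 1$ (which follows analogously from the diagonal entries $g_{ab} = \mathrm{e}^{p_0/\kappa}\, \delta_{ab}$), one reproduces $\{x^0, p_a\} = -\frac{p_a}{\kappa}(1-\beta)$; the remaining brackets in \eqref{Poissonbraxp} are obtained in the same way.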
It is worth writing down the relations for the cases $\beta = 0, \tfrac{1}{2}, 1$ together with their expansions in $\tfrac{1}{\kappa}$. For the time-to-the-right case $\beta = 0$ the deformed brackets at {\it all orders} in $\tfrac{1}{\kappa}$ read
\begin{equation} \label{Poissonrightan}
\{x^0,p_0\} = 1\,,\qquad
\{x^0, p_a\} = -\frac{p_a}{\kappa}\,,\qquad \{x^a,p_0\} = 0\,,\qquad
\{x^a, p_b\} = \delta^a_b\ .
\end{equation}
The case $\beta = 1$, or time-to-the-left, has the following Poisson brackets
\begin{equation} \label{Poissonleftan}
\{x^0,p_0\} = 1\,,\qquad
\{x^0, p_a\} = 0\,,\qquad
\{x^a,p_0\} = 0\,,\qquad
\{x^a, p_b\} = \delta^a_b\, \mathrm{e}^{\frac{p_0}{\kappa}},
\end{equation}
and up to second order in $\tfrac{1}{\kappa}$ the only deformed bracket reads
\begin{equation} \label{Poissonleftan1order}
\{x^a, p_b\} = \delta^a_b\, \left( 1 + \frac{p_0}{\kappa} + \frac{p_0^2}{2 \kappa^2} + \mathcal{O}\left(\frac{1}{\kappa^3}\right) \right).
\end{equation}
Finally, the time symmetric case $\beta = \tfrac{1}{2}$ gives
\begin{equation} \label{Poissontsymm}
\{x^0,p_0\} = 1\,,\qquad
\{x^0, p_a\} = -\frac{p_a}{2 \kappa}\,,\qquad
\{x^a,p_0\} = 0\,,\qquad
\{x^a, p_b\} = \delta^a_b\, \mathrm{e}^{\frac{p_0}{2 \kappa}},
\end{equation}
for which the expansion up to second order is
\begin{equation} \label{Poissontsymm1order}
\{x^a, p_b\} = \delta^a_b\, \left( 1 + \frac{p_0}{2 \kappa} + \frac{p_0^2}{8 \kappa^2} + \mathcal{O}\left(\frac{1}{\kappa^3}\right) \right).
\end{equation}
The brackets for the time-to-the-right parametrization \eqref{Poissonrightan}, $\beta = 0$, coincide with those found in \cite{Lukierski:1993wx, Arzano:2010kz, AmelinoCamelia:1997jx, AmelinoCamelia:2011nt}.\footnote{In \cite{AmelinoCamelia:2011nt} the Poisson brackets are $\{x^0, x^a\} = -\frac{1}{\kappa} x^a$ and $\{x^0,p_a\} = \frac{p_a}{\kappa}$. This difference in sign can be traced back to a matter of convention on the signature of the embedding Minkowski space for the $AN(n)$ manifold.} It is worth mentioning that in \cite{Arzano:2010kz, AmelinoCamelia:2011nt} the Poisson brackets were obtained using a different procedure, starting from the Kirillov symplectic form \cite{kirillov1976} to write down the kinetic term for the reduced action of the relativistic particle.\footnote{For a different approach to deformed phase spaces which makes use of the theory of Hopf algebroids see \cite{Meljanac:2014jsa, Lukierski:2015zqa, Lukierski:2016utz}.} In our approach the Poisson structure is derived purely in terms of the algebraic structure of the generators of the momentum group manifold.
Before closing the section a remark is in order. We determined a symplectic Poisson structure on the phase space group manifold $\Gamma=T \times AN(n)$ using the Heisenberg double construction. The alternative Drinfeld double construction has the property of being compatible with the Lie group multiplication and indeed is related to the symmetries of the phase space \cite{Bonzom:2014wva}. Using \eqref{rmatrix3} with the commutator, i.e. $\{d\, \overset{\otimes}{,}\, d\} = - [r, d \otimes d]_-$, we can find the Poisson brackets associated to the Drinfeld double structure. These are given again by \eqref{genpbanxx}, \eqref{genpbanpp} but now $\{x^\mu, g\} = 0$ for all $\mu$, so the ``cross'' brackets between positions and momenta vanish identically.
\section{Deforming momentum space to the group $SL(2,\mathbb{R})$}
\label{sec:sl2}
We now discuss another important example of group valued momenta which emerge in the context of three-dimensional gravity and, like the $AN(n)$ case discussed above, are associated to a deformation of the Poincar\'e group.
As is well known, in three space-time dimensions gravity does not possess local degrees of freedom. Point-like particles can be included in the theory as {\it topological defects}. For instance, coupling a spinless particle at rest to gravity results in a conical metric with the particle sitting at the tip of the cone \cite{thooft1984}. This conical metric can be pictured as a wedge cut out from the spatial plane, characterized by a deficit angle proportional to the mass of the particle, $\alpha = 8\pi G m$, where $G$ is Newton's gravitational constant, which in three space-time dimensions has units of inverse mass. The deficit angle is described through a rotation by $8\pi Gm$ which is captured by calculating the holonomy of the flat connection around the location of the particle. Thus the momentum of the particle at rest is determined by a rotation proportional to $m G$, i.e. a group element belonging to $SL(2,\mathbb{R})$, the double cover of the Lorentz group of three-dimensional Minkowski space. A description of a moving defect can be obtained by boosting the conical metric; in this case the three-momentum of the particle will be a general element of $SL(2,\mathbb{R})$ \cite{Matschull:1997du, lotito2014}. Various treatments exist for the description of the phase space of point particles coupled to gravity in three dimensions \cite{Matschull:1997du, Meusburger:2003ta, Osei:2011ig} and its symmetries \cite{Ballesteros:2010zq, Ballesteros:2013dca}.
Here we will show how our general prescription for defining Poisson structures on group manifold phase spaces can be applied to this case, where the phase space is given by the cartesian product of the group of translations times the double cover of the three-dimensional Lorentz group, $\mathbb{R}^{2,1} \times SL(2,\mathbb{R})$. In particular we take as the only input of our construction the algebraic properties determined by the Lie group structure of the (extended) momentum space $SL(2,\mathbb{R})$.
As in the previous Section, we start from the Lie algebra $\mathfrak{t} \oplus \mathfrak{sl}(2,\mathbb{R})$ and the infinitesimal counterpart of the Poisson bivector, the co-commutator $\delta_{\mathfrak{t} \oplus \mathfrak{sl}(2,\mathbb{R})}$. Let us denote with $\{P_\mu\}$ and $\{\tilde{X}^\mu\}$ the generators of $\mathfrak{t}$ and $\mathfrak{sl}(2,\mathbb{R})$, respectively, whose Lie algebras are determined by the following Lie brackets
\begin{equation} \label{lietsl2r}
[P_\mu, P_\nu] = 0,
\end{equation}
for all $\mu = 0,1,2$ and
\begin{equation} \label{lietsl2r2}
[\tilde{X}^0, \tilde{X}^1] = - \frac{1}{\ell} \tilde{X}^2, \quad [\tilde{X}^0, \tilde{X}^2] = \frac{1}{\ell} \tilde{X}^1, \quad [\tilde{X}^1, \tilde{X}^2] = \frac{1}{\ell} \tilde{X}^0.
\end{equation}
The Lie brackets of $\mathfrak{sl}(2,\mathbb{R})$ are obtained from the usual relation $[\tilde{X}_{\mu}, \tilde{X}_{\nu}] = \frac{1}{\ell} \epsilon_{\mu\nu\sigma} \tilde{X}^{\sigma}$, where indices are raised and lowered using a ``mostly minus'' Minkowski metric and the totally skew-symmetric Levi-Civita symbol is normalized as $\epsilon_{012} = 1$ \cite{schroerswilhelm2014}.
The direct sum $\mathfrak{t} \oplus \mathfrak{sl}(2,\mathbb{R})$ can be made into a Lie algebra by extending the inner product as in \eqref{innerprodD} and defining Lie brackets such that the product is ad-invariant. The resulting Lie algebra structure is given by the brackets (\ref{lietsl2r}, \ref{lietsl2r2}) together with
\begin{equation} \label{tplussl2rlie}
\begin{split}
[P_0, \tilde{X}^1] = \frac{1}{\ell} P_2, \quad [P_0, \tilde{X}^2] = -\frac{1}{\ell} P_1, \quad [P_1, \tilde{X}^0] = \frac{1}{\ell} P_2, \\
[P_1, \tilde{X}^2] = - \frac{1}{\ell} P_0, \quad [P_2, \tilde{X}^0] = - \frac{1}{\ell} P_1, \quad [P_2, \tilde{X}^1]= \frac{1}{\ell} P_0.
\end{split}
\end{equation}
We can now introduce co-commutators on $\mathfrak{t} \oplus \mathfrak{sl}(2,\mathbb{R})$ using an $r$-matrix analogous to \eqref{rmatrixan}. Denoting the complete set of generators $Z_A = \{P_\mu, \tilde{X}^\mu\}$ we have for the co-commutator the explicit relations
\begin{equation} \label{inclusionsl2r}
\delta_{\mathfrak{t} \oplus \mathfrak{sl}(2, \mathbb{R})} (\tilde{X}^\mu) = 0,
\end{equation}
\begin{equation} \label{inclusionsl2r2}
\delta_{\mathfrak{t} \oplus \mathfrak{sl}(2, \mathbb{R})} (P_0) = \frac{2}{\ell}\, P_1 \wedge P_2, \quad \delta_{\mathfrak{t} \oplus \mathfrak{sl}(2, \mathbb{R})} (P_1) = \frac{2}{\ell}\, P_0 \wedge P_2, \quad \delta_{\mathfrak{t} \oplus \mathfrak{sl}(2, \mathbb{R})} (P_2) = -\frac{2}{\ell}\, P_0 \wedge P_1\,.
\end{equation}
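One can check these expressions against the duality relation \eqref{LBdualcoco}; for instance
\begin{equation}
\langle \delta_{\mathfrak{t} \oplus \mathfrak{sl}(2, \mathbb{R})}(P_0), \tilde{X}^1 \otimes \tilde{X}^2 \rangle = \frac{1}{\ell} = \langle P_0, [\tilde{X}^1, \tilde{X}^2] \rangle\,.
\end{equation}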
These co-commutators turn $\mathfrak{t} \oplus \mathfrak{sl}(2,\mathbb{R})$ into a Lie bi-algebra i.e. a {\it classical double} which we denote as $\tilde{\mathcal{D}}(\mathfrak{t})$.
We can represent the generators as matrices using the co-adjoint representation, as in the previous Section. In such a representation the generators $Z_A$ can be written as $4\times 4$-matrices:
\begin{equation} \label{slP0}
P_\mu =
\begin{pmatrix}
0_{3 \times 3} && \mathbf{u}_\mu \\
\mathbf{0}^{\mathrm{T}}_3 && 0
\end{pmatrix},
\end{equation}
where $\mathbf{u}_\mu$ is a $3$-component vector with $1$ in the $\mu$th entry, $\mu = 0, 1, 2$, while the matrices representing the $\mathfrak{sl}(2,\mathbb{R})$ sector read
\begin{equation} \label{slXs}
\tilde{X}^0 = \frac{1}{\ell}
\begin{pmatrix}
0 && 0 && 0 && 0 \\
0 && 0 && 1 && 0 \\
0 && -1 && 0 && 0 \\
0 && 0 && 0 && 0
\end{pmatrix},
\quad
\tilde{X}^1 = \frac{1}{\ell}
\begin{pmatrix}
0 && 0 && -1 && 0 \\
0 && 0 && 0 && 0 \\
-1 && 0 && 0 && 0 \\
0 && 0 && 0 && 0
\end{pmatrix},
\quad
\tilde{X}^2 = \frac{1}{\ell}
\begin{pmatrix}
0 && 1 && 0 && 0 \\
1 && 0 && 0 && 0 \\
0 && 0 && 0 && 0 \\
0 && 0 && 0 && 0
\end{pmatrix}.
\end{equation}
A general element of the $T \times SL(2,\mathbb{R})$ group manifold can be decomposed in terms of a pure translation and a pure $SL(2,\mathbb{R})$ transformation as $d = t g$ with the following general matrix representation
\begin{equation} \label{decompsl2r}
d =
\begin{pmatrix}
\tilde{g} && \mathbf{x}_{3} \\
\mathbf{0}^{\, \mathrm{T}}_{3} && 1
\end{pmatrix},
\end{equation}
where $\tilde{g}$ is the $3 \times 3$ matrix representing an element of $SL(2,\mathbb{R})$ in the co-adjoint representation, and
\begin{equation} \label{gandtsl2r}
g =
\begin{pmatrix}
\tilde{g} && \mathbf{0}_{3} \\
\mathbf{0}^{\, \mathrm{T}}_{3} && 1
\end{pmatrix}, \quad \text{and} \quad
t =
\begin{pmatrix}
\mathbbm{1}_{3 \times 3} && \mathbf{x}_{3} \\
\mathbf{0}_{3}^{\, \mathrm{T}} && 1
\end{pmatrix},
\end{equation}
are the matrices representing a pure Lorentz transformation and a pure translation, respectively.
As in the previous section we can now use the Heisenberg double relation $\{d\, \overset{\otimes}{,}\, d\} = - [r, d \otimes d]_+$ to derive the Poisson brackets for the $T \times SL(2,\mathbb{R})$ phase space. The general expressions for the Poisson brackets in terms of the coordinates $x^{\mu}$ appearing in \eqref{decompsl2r} and of the momenta are given by
\begin{equation} \label{genpbsl2rxx}
\{x^0, x^1\} = -\frac{1}{\ell}\, x^2, \qquad \{x^0, x^2\} = \frac{1}{\ell}\, x^1 \qquad \text{and} \qquad \{x^1, x^2\} = \frac{1}{\ell}\, x^0,
\end{equation}
\begin{equation} \label{genpbsl2rpp}
\{g_{ij}(p), g_{kl}(p)\} = 0 \implies \{p_\mu, p_\nu\} = 0,
\end{equation}
and
\begin{equation} \label{genpbsl2rxp}
\{x^\mu, g\} = - \tilde{X}^\mu\, g.
\end{equation}
Using the adjoint representation, instead of the co-adjoint one, the Poisson brackets are given again by \eqref{genpbsl2rxx} and \eqref{genpbsl2rpp} but now the mixed brackets read $\{x^\mu, g\} = - g\, \tilde{X}^\mu$. These brackets coincide with those found in \cite{Matschull:1997du}, one of the earliest descriptions of the phase space of a gravitating particle in three dimensions based on the Hamiltonian treatment of the reduced action of the gravity-plus-particle system.
It is instructive to focus on a given parametrization of the momentum group manifold. We consider the ``exponential coordinates'' \cite{lotito2014} for which $g \in SL(2, \mathbb{R})$ is obtained as $g = e^{-p_\mu \tilde{X}^\mu}$, $\mu= 0,1,2$. The mass parameter $\ell = 1/(4\pi G)$ is determined by the three-dimensional Newton's constant \cite{thooft1984} and in the limit $G\rightarrow 0$ one recovers the usual flat momentum space $\mathbb{R}^{2,1}$. The matrix that describes the general group element $d \in T \times SL(2,\mathbb{R})$ is given by
\begin{equation} \label{gsl2r}
d =
\begin{pmatrix}
\frac{p_0^2 - (p_1^2 + p_2^2) \cos \frac{p}{\ell}}{p^2} && \frac{p_0 p_1 - p_0 p_1 \cos \frac{p}{\ell} - p_2 p \sin \frac{p}{\ell}}{p^2} && \frac{p_0 p_2 - p_0 p_2 \cos \frac{p}{\ell} + p_1 p \sin \frac{p}{\ell}}{p^2} && x^0 \\
-\frac{p_0 p_1 - p_0 p_1 \cos \frac{p}{\ell} + p_2 p \sin \frac{p}{\ell}}{p^2} && \frac{-p_1^2 + (p_0^2 - p_2^2) \cos \frac{p}{\ell}}{p^2} && -\frac{p_1 p_2 - p_1 p_2 \cos \frac{p}{\ell} - p_0 p \sin \frac{p}{\ell}}{p^2} && x^1 \\
-\frac{p_0 p_2 - p_0 p_2 \cos \frac{p}{\ell} - p_1 p \sin \frac{p}{\ell}}{p^2} && -\frac{p_1 p_2 - p_1 p_2 \cos \frac{p}{\ell} - p_0 p \sin \frac{p}{\ell}}{p^2} && \frac{-p_2^2 + (p_0^2 - p_1^2) \cos \frac{p}{\ell}}{p^2} && x^2 \\
0 && 0 && 0 && 1
\end{pmatrix},
\end{equation}
where $p^2 = p_0^2 - p_1^2 - p_2^2$. The explicit, all-order, relations for \eqref{genpbsl2rxp} in terms of the coordinates for the group $(x^\mu, p_\mu)$ are rather involved; here we present these Poisson brackets at first order in the deformation parameter $\frac{1}{\ell}$
\begin{align} \label{pbsl2xp}
\{x^0, p_0\} &= 1, \qquad \;\;\,\ \{x^0, p_1\} = -\frac{1}{\ell}\frac{p_2}{2}, \quad \{x^0, p_2\} = \frac{1}{\ell} \frac{p_1}{2}, \nonumber \\
\{x^1, p_0\} &= -\frac{1}{\ell}\frac{p_2}{2}, \quad \{x^1, p_1\} = 1, \qquad \quad \{x^1, p_2\} =- \frac{1}{\ell} \frac{p_0}{2}, \nonumber \\
\{x^2, p_0\} &= \frac{1}{\ell}\frac{p_1}{2}, \quad \;\;\; \{x^2, p_1\} = \frac{1}{\ell} \frac{p_0}{2}, \quad \;\ \{x^2, p_2\} = 1 .
\end{align}
These relations can be written in a compact way as
\begin{equation} \label{pbsl2compact}
\{x_\mu, p_\nu\} = \eta_{\mu\nu} + \frac{1}{\ell}\, \epsilon_{\mu\nu\alpha}\, \frac{p^\alpha}{2}.
\end{equation}
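For example, for $\mu = 1$, $\nu = 2$ the compact formula gives $\{x_1, p_2\} = \frac{1}{\ell}\, \epsilon_{120}\, \frac{p^0}{2} = \frac{1}{\ell} \frac{p_0}{2}$ and, since $x_1 = -x^1$, one recovers $\{x^1, p_2\} = -\frac{1}{\ell} \frac{p_0}{2}$ of \eqref{pbsl2xp}.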
We can consider cartesian coordinates on the group manifold by transforming the general group element in exponential coordinates \eqref{gsl2r} through the relations \cite{lotito2014}
\begin{equation} \label{expovscart}
\tilde{p}_\mu = \frac{\sin \frac{p}{\ell}}{p}\, p_\mu\,.
\end{equation}
We find that the group element parametrized by cartesian coordinates is represented by the following matrix
\begin{equation} \label{dcart}
d =
\begin{pmatrix}
\frac{\tilde{p}_0^2 - (\tilde{p}_1^2 + \tilde{p}_2^2) \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}}{\tilde{p}^2} && \frac{\tilde{p}_0 \tilde{p}_1}{\left(1 + \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}\right)\ell^2} - \frac{\tilde{p}_2}{\ell} && \frac{\tilde{p}_0 \tilde{p}_2}{\left(1 + \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}\right)\ell^2} + \frac{\tilde{p}_1}{\ell} && x^0 \\
-\frac{\tilde{p}_0 \tilde{p}_1}{\left(1 + \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}\right)\ell^2} - \frac{\tilde{p}_2}{\ell} && -\frac{\tilde{p}_1^2 - (\tilde{p}_0^2 - \tilde{p}_2^2) \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}}{\tilde{p}^2} && -\frac{\tilde{p}_1 \tilde{p}_2}{\left(1 + \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}\right)\ell^2} - \frac{\tilde{p}_0}{\ell} && x^1 \\
-\frac{\tilde{p}_0 \tilde{p}_2}{\left(1 + \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}\right)\ell^2} + \frac{\tilde{p}_1}{\ell} && -\frac{\tilde{p}_1 \tilde{p}_2}{\left(1 + \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}\right)\ell^2} + \frac{\tilde{p}_0}{\ell} && -\frac{\tilde{p}_2^2 - (\tilde{p}_0^2 - \tilde{p}_1^2) \sqrt{1 - \frac{\tilde{p}^2}{\ell^2}}}{\tilde{p}^2} && x^2 \\
0 && 0 && 0 && 1
\end{pmatrix}
\end{equation}
where $\tilde{p}^2 = \tilde{p}_0^2 - \tilde{p}_1^2 - \tilde{p}_2^2$. The Poisson brackets up to first order have the same form as in equations \eqref{genpbsl2rpp}, \eqref{genpbsl2rxx} and \eqref{pbsl2xp}, that is, in compact form we have $\{x_\mu, \tilde{p}_\nu\} = \eta_{\mu\nu} + \frac{1}{\ell}\, \epsilon_{\mu\nu\alpha}\, \frac{\tilde{p}^\alpha}{2}$. These Poisson brackets coincide at first order in the deformation parameter with those found in \cite{bianco2013}. The Poisson brackets in \cite{bianco2013} are given by $\{x^\mu, x^\nu\} = \ell \epsilon^{\mu\nu}_{\;\;\;\, \rho}\, x^\rho$, $\{p_\mu, p_\nu\} = 0$ and $\{p_\mu, x^\nu\} \simeq -\delta^\nu_\mu + \frac{\ell}{2}\ \epsilon_{\mu}^{\;\ \nu\rho}\ p_\rho$, where indices are raised and lowered with a mostly plus Minkowski metric and $\epsilon_{\mu\nu\alpha}$ is such that $\epsilon_{012} = -1$. For comparison with our results we must take into account that in \cite{bianco2013} the Lie algebra $\mathfrak{sl}(2,\mathbb{R})$ is defined by the commutators $[X^\mu, X^\nu] = \epsilon^{\mu\nu}_{\;\;\;\, \rho} X^\rho$, where $\{X^\mu\}$ is the basis of $\mathfrak{sl}(2, \mathbb{R})$, while the convention we follow is that of \cite{schroerswilhelm2014}, where $[X_\mu, X_\nu] = \epsilon_{\mu\nu\sigma}X^\sigma$ with a mostly minus Minkowski metric for lowering and raising indices and $\epsilon_{012} = 1$. Therefore, in order to compare we must identify $X^1 \rightarrow -X^2$, $X^2 \rightarrow -X^1$, which translates into the following identifications for the phase space coordinates: $x^1 \rightarrow -x^2$, $x^2 \rightarrow -x^1$, $p^1 \rightarrow -p^2$ and $p^2 \rightarrow -p^1$. Taking these identifications into account we indeed recover, up to first order in the deformation parameter, the results of \cite{bianco2013}.
\section{Composite systems of classical particles}
\label{sec:composite}
The treatment of multi-particle systems with group valued momenta has notoriously been controversial. Indeed, when considering the deformations of translations associated to such models, it is often assumed that {\it any} elementary system, quantum or classical, exhibits a non-trivial composition of momenta associated with the non-abelian group multiplication of momentum space. This leads to blatant contradictions with the observed kinematics of macroscopic bodies, an issue known in the literature as ``the soccer ball problem'' \cite{amelinosoccer2011, hossenfelder2014}.
In this Section we discuss the composition of momenta in classical systems both in undeformed relativistic kinematics and in the deformed case discussed so far.
We will show that, under the assumption that phase spaces of composite systems with group valued momenta are given by the {\it cartesian product} of their components, the total momentum of a multi-particle system is given by the abelian sum of the individual momenta of the components. We then briefly discuss how, upon quantization, the non-abelian structure of momentum space comes into play, so that momenta associated to multiparticle states must compose according to a non-abelian composition rule. Thus, as long as we do not observe ``quantum soccer balls'' in experiments, there is no obvious problem with the composition of momenta in systems with a Lie group momentum space.
Let us start by recalling that, as seen in the previous Sections, the phase space of a classical relativistic (spinless) point particle is just a direct sum vector space given by $\Gamma = \mathbb{R}^{3,1} \oplus \mathbb{R}^{3,1}{}^*$. Let us restrict to momentum space $\Gamma_p = \mathbb{R}^{3,1}{}^*$. The dual space to Minkowski space is isomorphic, as a vector space, to Minkowski space itself, $\mathbb{R}^{3,1}{}^*\simeq \mathbb{R}^{3,1}$, equipped with the usual (abelian) vector addition $p+q$ for $p,q \in \Gamma_p$. Let us recall that an {\it observable} $\mathcal{O}$ for a classical system is a map $\mathcal{O}:\Gamma \rightarrow \mathbb{R}$. A particular set of observables is given by the components of momentum $\mathcal{P}^{\mu}$, so that to $p\in \Gamma_p$ one associates a four-vector $\mathcal{P}^{\mu}(p)=p^{\mu}\in \mathbb{R}^{3,1}$.
Now let us consider a composite system made of two particles. The phase space of such a system will be given\footnote{See \cite{Geroch:1985ci}, p.~184; for a more ``philosophical'' discussion see \cite{Aerts:1978}.} by the cartesian product of the respective phase spaces $\Gamma \equiv \Gamma_1 \times \Gamma_2 \simeq \Gamma_1 \oplus \Gamma_2$, where the last isomorphism holds if $\Gamma_1$ and $\Gamma_2$ are \emph{vector spaces}. Again we can focus on the momentum sector of phase space, which will be given by $\Gamma_p=\Gamma_{p1} \oplus \Gamma_{p2}$. Given a point $(p_1,p_2)\in \Gamma_{p1} \oplus \Gamma_{p2} \simeq \mathbb{R}^{3,1} \oplus \mathbb{R}^{3,1}$ we want to see how the single particle momenta combine to give the {\it total} momentum associated to such point in phase space. In order to do so let us first recall how the vector composition is defined for direct sums of vector spaces. Given $(p_1,p_2), (p'_1,p'_2) \in \Gamma_{p1} \oplus \Gamma_{p2}$ we can extend the vector addition $+$ defined in $\Gamma_p$ to $\Gamma_{p1} \oplus \Gamma_{p2}$ as follows
\begin{equation}\label{cartadd1}
(p_1,p_2) + (p'_1,p'_2) \equiv (p_1+p'_1, p_2+ p'_2)\,.
\end{equation}
An observation which will be crucial for what follows is that, given the composition above, any element $(p_1,p_2)\in \Gamma_{p1} \oplus \Gamma_{p2}$ can be written as
\begin{equation}\label{cartsum1}
(p_1,p_2) = (p_1, 0) + (0 ,p_2)\,.
\end{equation}
Now let us look at the observable $\mathcal{P}^{\mu}(p_1,p_2)$ associated to the momentum space point $(p_1,p_2) \in \Gamma_{p1} \oplus \Gamma_{p2}$. Starting from the definition of (coordinate) functions on the cartesian product of spaces we have
\begin{equation}
\mathcal{P}^{\mu}(p_1,p_2) \equiv (\mathcal{P}^{\mu}(p_1),\mathcal{P}^{\mu}(p_2)) = (p^{\mu}_1,p^{\mu}_2) = (p^{\mu}_1, 0) + (0 ,p^{\mu}_2) = \mathcal{P}^{\mu}(p_1,0) + \mathcal{P}^{\mu}(0,p_2)
\end{equation}
where in the fourth term we used the analogue of \eqref{cartsum1} for $\mathbb{R}^{3,1} \oplus \mathbb{R}^{3,1}$.
Now let us consider for example $\mathcal{P}^{\mu}(p_1,0)$: this is the momentum observable associated to the two-particle system when particle 2 has vanishing momentum, and thus we can make the identification $ \mathcal{P}^{\mu}(p_1,0) = (p^{\mu}_1, 0) \rightarrow p^{\mu}_{1}$. Mathematically this is reflected in the fact that for a generic group $G$ one has the isomorphism $G \times \{ \mathbbm{1}\}\simeq G$, where $\mathbbm{1}$ is the identity element; in our case, $\mathbb{R}^{3,1} \times \{0\}\simeq \mathbb{R}^{3,1}$. This isomorphism maps the sum $+$ \emph{restricted} to $ (\mathbb{R}^{3,1} \times \{0\}) \times ( \{0\} \times \mathbb{R}^{3,1})$ to the ordinary sum defined on $\mathbb{R}^{3,1} \times \mathbb{R}^{3,1}$. Thus the {\it total momentum} of the system is the map that associates, via the isomorphisms above, to the observable $\mathcal{P}^{\mu}(p_1,p_2) = \mathcal{P}^{\mu}(p_1,0) + \mathcal{P}^{\mu}(0,p_2)$ the four-vector $p^{\mu}_{12}$ given by
\begin{equation}
\mathcal{P}^{\mu}(p_1,p_2) \rightarrow p^{\mu}_{12} = p^{\mu}_1 + p^{\mu}_2\,,
\end{equation}
obtained from the vector sum of the single particle four-momenta $p^{\mu}_1, p^{\mu}_2$. To summarize: starting from basic first principles we reproduced the familiar composition of momenta. Even though the discussion above might seem redundant, it helps clarify the subtleties one faces in treating group valued momenta, as we are now going to see.
Let us now consider the case of classical deformed kinematics in which the momentum (vector) space is replaced by a group manifold. As we have seen in the previous sections, the deformed phase space will be given by $\Gamma = \mathbb{R}^{3,1} \times G$ where $G$ is a four-dimensional non-abelian Lie group.
In analogy with the undeformed case, observables will be given by maps from $\Gamma $ to the real numbers $\mathbb{R}$. In particular {\it a} four-momentum observable $ \mathcal{P}^{\mu}$ associates a four-vector to any momentum space element $\pi \in \Gamma_{\pi} = G$, namely: $\mathcal{P}^{\mu}(\pi) = \pi^{\mu}$.
In complete analogy with the undeformed case, the phase space of a composite system of two particles is given by $\Gamma = \Gamma_1 \times \Gamma_2 = \left(\mathbb{R}^{3,1}_1 \oplus \mathbb{R}^{3,1}_2\right) \times G_{1} \times G_2$ and the total momentum space is thus $\Gamma^{(2)}_{\pi} = \Gamma_{\pi 1}\times \Gamma_{\pi 2} \equiv G_{1} \times G_2$.
Now let us focus on the single-particle four-momentum observable $\mathcal{P}^{\mu}: \Gamma_{\pi} \rightarrow \mathbb{R}^{3,1}$.
This observable corresponds to a choice of coordinate function on the momentum group manifold $G$ \cite{arzano2011}. The first thing to notice is that the Lie group multiplication induces a {\it non-abelian} addition law $\triangleright$ for four-momenta defined by
\begin{equation}
\mathcal{P}^{\mu}(\pi \cdot \tilde{\pi}) \equiv \mathcal{P}^{\mu}(\pi) \triangleright \mathcal{P}^{\mu}(\tilde{\pi}) \,,
\end{equation}
for $\pi, \tilde{\pi} \in \Gamma_{\pi}$.
The main point we want to stress is that this addition law {\it does not} represent the composition law for classical four-momentum observables. To see that this is the case let us consider the four-momentum observable for the two-particle state $(\pi_1,\pi_2)$. From the definition of coordinates on cartesian products of manifolds we have
\begin{equation}
\mathcal{P}^{\mu}(\pi_1, \pi_2) \equiv (\mathcal{P}^{\mu}(\pi_1), \mathcal{P}^{\mu}(\pi_2)) = (\pi^{\mu}_1, \pi^{\mu}_2)
\end{equation}
and from the property of addition on the cartesian product $\mathbb{R}^{3,1}\times\mathbb{R}^{3,1}$
\begin{equation}
(\pi^{\mu}_1, \pi^{\mu}_2) = (\pi^{\mu}_1, 0) + (0 , \pi^{\mu}_2) = \mathcal{P}^{\mu}(\pi_1, \mathbbm{1}) + \mathcal{P}^{\mu}(\mathbbm{1}, \pi_2)
\end{equation}
where in the last equality we used the fact that $\mathbbm{1}$, the identity element in the group $G$, corresponds to ``vanishing'' four-momentum. Using the isomorphism $G \times \{ \mathbbm{1}\} \simeq G$ we can make the obvious identification $\mathcal{P}^{\mu}(\pi_1, \mathbbm{1}) \rightarrow \mathcal{P}^{\mu}(\pi_1) = \pi^{\mu}_1$ and $\mathcal{P}^{\mu}(\mathbbm{1}, \pi_2) \rightarrow \mathcal{P}^{\mu}(\pi_2) = \pi^{\mu}_2$.
Now let us recall that the group law $\cdot $ can be extended to the cartesian product $G_1\times G_2$ as follows:
\begin{equation}
(\pi_1, \pi_2) \cdot (\pi'_1, \pi'_2) \equiv (\pi_1\cdot \pi'_1, \pi_2 \cdot \pi'_2)\,.
\end{equation}
In particular {\it any} element $(\pi_1, \pi_2) \in \Gamma_{\pi 1}\times \Gamma_{\pi 2} \equiv G_{1} \times G_2$ can be written as
\begin{equation}
(\pi_1, \pi_2) \equiv (\pi_1, \mathbbm{1}) \cdot (\mathbbm{1}, \pi_2) \,.
\end{equation}
Notice how the \emph{restriction} of the group law $\cdot$ to the cartesian product $(G_1 \times \{ \mathbbm{1}\})\times (\{ \mathbbm{1}\} \times G_2)$ is {\it abelian}. Indeed for the four-momentum observable associated to a classical two-particle state $(\pi_1, \pi_2)$ we have
\begin{equation}
\mathcal{P}^{\mu}(\pi_1, \pi_2) = \mathcal{P}^{\mu}(\pi_1, \mathbbm{1}) + \mathcal{P}^{\mu}(\mathbbm{1}, \pi_2)\,.
\end{equation}
Once this identity is established one can proceed as in the undeformed case and associate, via the identifications $\mathcal{P}^{\mu}(\pi_1, \mathbbm{1}) \rightarrow \mathcal{P}^{\mu}(\pi_1) = \pi^{\mu}_1$ and $\mathcal{P}^{\mu}(\mathbbm{1}, \pi_2) \rightarrow \mathcal{P}^{\mu}(\pi_2) = \pi^{\mu}_2$, a {\it total momentum} four-vector to the two-particle momentum observable $\mathcal{P}^{\mu}(\pi_1, \pi_2)$
\begin{equation}
\mathcal{P}^{\mu}(\pi_1, \pi_2) \rightarrow \pi_{12}^{\mu} = \pi^{\mu}_1 + \pi^{\mu}_2\,.
\end{equation}
This shows\footnote{See also \cite{Amelino-Camelia:2014gga}, where similar conclusions for a classical system were reached starting from a different perspective.} that for classical systems, which are described by cartesian products, to the observable $\mathcal{P}^{\mu}(\pi_1, \pi_2)$ we can associate a total momentum four-vector $\pi_{12}^{\mu} \in \mathbb{R}^{3,1}$ obtained from the {\it ordinary} vector sum of single-particle four-momenta $\pi^{\mu}_1$ and $\pi^{\mu}_2$.\\
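The abelian nature of the restricted composition can be made concrete with a small numerical illustration (ours, purely illustrative): take $G$ to be any non-abelian matrix group and compose pairs in $G \times G$ componentwise.
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])  # two non-commuting
B = np.array([[1.0, 0.0], [3.0, 1.0]])  # 2x2 group elements
I = np.eye(2)

def compose(g, h):
    # componentwise group law on G x G
    return (g[0] @ h[0], g[1] @ h[1])

left = compose((A, I), (I, B))    # = (A, B)
right = compose((I, B), (A, I))   # = (A, B) as well: abelian
assert all(np.allclose(l, r) for l, r in zip(left, right))
assert not np.allclose(A @ B, B @ A)  # G itself is non-abelian
\end{verbatim}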
The non-abelian Lie group multiplication of momentum space plays, however, a non-trivial role when we look at the quantum counterparts of our systems. The state of a quantum relativistic particle is described by a vector (to be precise, a ``ray'') in a Hilbert space, $|p\rangle \in \mathcal{H}$. A ``one-particle'' observable in this case is a (linear, self-adjoint) operator on $\mathcal{H}$. We focus on the four-momentum $P^{\mu}$, i.e.\ the observable associated to the generators of space-time translations. A basis of $\mathcal{H}$ is given by eigenstates of $P^{\mu}$
\begin{equation}
P^{\mu} |p\rangle = \mathcal{P}^{\mu}(p) |p\rangle\,.
\end{equation}
The abstract kets $|p\rangle $ admit a representation in terms of (wave)functions on Minkowski space, in particular $\langle x| p\rangle = e^{ipx}= e_p(x)$. A two-particle system (we set aside the question of (in)distinguishability) is described by the tensor product $\mathcal{H}^2 = \mathcal{H}_1 \otimes \mathcal{H}_2$. How do we define the action of the observable $P^{\mu}$ on two-particle states? We can derive it using the point-wise multiplication naturally associated to functions on Minkowski space and the linearity of observables and, in particular, of four-momenta, which act as derivatives on such functions.
Since for plane waves we have
\begin{equation}
e_{p_1} (x) \cdot e_{p_2}(x) \equiv e_{p_1+p_2}(x)
\end{equation}
by definition
\begin{equation}
P^{\mu}\, (e_{p_1+p_2}(x)) = \mathcal{P}^{\mu} (p_1+p_2)\, e_{p_1+p_2}(x)
\end{equation}
and by the homomorphism property of coordinate functions on $\mathbb{R}^{3,1}$ we have
\begin{equation}
\mathcal{P}^{\mu} (p_1+p_2) = \mathcal{P}^{\mu} (p_1) + \mathcal{P}^{\mu} (p_2)\,.
\end{equation}
The main point which should be clearly stressed is that {\it while in the classical case the four-momentum of a two-particle system is given by $\mathcal{P}^{\mu}(p_1, p_2)$ in the quantum domain we have to consider instead $\mathcal{P}^{\mu}(p_1+ p_2)$.}
From the definition of inner product for tensor product states
\begin{equation}
\langle p_1 p_2 | p'_1 p'_2 \rangle = \langle p_1| \otimes \langle p_2|\,\, |p'_1\rangle \otimes |p'_2\rangle \equiv \langle p_1| p'_1\rangle \langle p_2|p'_2\rangle
\end{equation}
we have
\begin{equation}
\mathcal{P}^{\mu} (p_1+p_2) = \langle p_1 p_2 | P^{\mu} | p_1 p_2 \rangle = \mathcal{P}^{\mu} (p_1) + \mathcal{P}^{\mu} (p_2) = \langle p_1 | P^{\mu} | p_1 \rangle + \langle p_2 | P^{\mu} | p_2 \rangle
\end{equation}
from which we can easily derive the action of $P^{\mu}$ on two-particle states
\begin{equation}
P^{\mu} (|p_1\rangle \otimes |p_2\rangle) = P^{\mu} |p_1\rangle \otimes |p_2\rangle + |p_1\rangle \otimes P^{\mu} |p_2\rangle
\end{equation}
i.e.\ such action is dictated by the familiar {\it Leibniz rule}. In the deformed case the story is well known (see e.g.\ \cite{Arzano:2012bj, Arzano:2013sta}) and we will briefly review the basic concepts, referring the reader to \cite{Arzano:2013sta} for further details. When momentum space is represented by a group manifold $G$, basis vectors of the one-particle Hilbert space $\mathcal{H}$ will be given by kets $|\pi \rangle$ labelled by group elements $\pi \in G$. These kets admit a plane wave representation in terms of {\it non-commutative plane waves} $e_{\pi}(x) = \langle x | \pi\rangle$. Indeed the usual point-wise product for functions over Minkowski space is replaced by a {\it non-commutative} $\star$-product
\begin{equation}
e_{\pi_1}(x) \star e_{\pi_2}(x) \equiv e_{\pi_1\cdot \pi_2}(x)
\end{equation}
reflecting the non-abelian nature of the momentum group manifold $G$. The usual algebra of functions on $\mathbb{R}^{3,1}$ representing ``wave-functions'' is now replaced by a non-commutative algebra, and thus deformed quantum kinematics may be seen as ``non-commutative geometry'' of the Hilbert space of quantum states. In analogy with the undeformed case, the four-momentum observable $P^{\mu}$ associates four-vectors to the eigen-kets $|\pi\rangle$
\begin{equation}
P^{\mu} |\pi\rangle = \mathcal{P}^{\mu} (\pi) |\pi\rangle
\end{equation}
and thus $P^{\mu} e_{\pi}(x) = \mathcal{P}^{\mu} (\pi)\, e_{\pi}(x)$. From this we see that
\begin{equation}
P^{\mu}\, e_{\pi_1\cdot \pi_2}(x) = \mathcal{P}^{\mu} (\pi_1\cdot \pi_2) e_{\pi_1\cdot \pi_2}(x)
\end{equation}
and thus the total momentum of a quantum two-particle state is given by
\begin{equation}
\mathcal{P}^{\mu} (\pi_1\cdot \pi_2) = \mathcal{P}^{\mu} (\pi_1) \triangleright \mathcal{P}^{\mu} (\pi_2)
\end{equation}
i.e.\ the usual four-momentum addition $+$ is replaced by a non-abelian composition law $\triangleright$. Notoriously, this non-abelian composition of momenta is reflected in a non-Leibniz action of the quantum observable $P^{\mu}$. Indeed from
\begin{equation}
\langle \pi_1 \pi_2 | P^{\mu} |\pi_1 \pi_2 \rangle = \mathcal{P}^{\mu} (\pi_1) \triangleright \mathcal{P}^{\mu} (\pi_2)
\end{equation}
one sees that
\begin{equation}
P^{\mu} |\pi_1 \pi_2 \rangle \neq P^{\mu} |\pi_1\rangle \otimes |\pi_2\rangle + |\pi_1\rangle \otimes P^{\mu} |\pi_2\rangle
\end{equation}
but rather $P^{\mu}$ acts on tensor product states according to a deformed Leibniz rule which can be read off the non-abelian composition law and which we formally write
\begin{equation}
P^{\mu} (|\pi_1\rangle \otimes |\pi_2\rangle) = P^{(1)\mu} |\pi_1\rangle \otimes |\pi_2\rangle + |\pi_1\rangle \otimes P^{(2)\mu} |\pi_2\rangle\, .
\end{equation}
This type of non-symmetric action on tensor product representations is typical of non-trivial Hopf algebras and in fact is the characterizing feature of deformations of the algebra of translation generators appearing both in the ``quantum double" and $\kappa$-Poincar\'e models. What we just showed is that such a non-trivial structure plays a role for multi-component quantum states but it {\it does not} affect the behaviour of classical observables.
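The contrast between the two composition laws can be illustrated with a simple toy model (ours; the embedding below is an arbitrary choice of a two-dimensional non-abelian momentum group, and the deformation scale $\kappa$ is our notation):
\begin{verbatim}
import numpy as np

kappa = 1.0

def group_element(p):
    # embed (p0, p1) in the 2x2 "ax+b" group
    return np.array([[np.exp(-p[0] / kappa), p[1]],
                     [0.0, 1.0]])

def coordinates(M):
    # read the momentum coordinates back off a group element
    return np.array([-kappa * np.log(M[0, 0]), M[0, 1]])

p, q = np.array([0.3, 0.5]), np.array([0.2, 0.4])
pq = coordinates(group_element(p) @ group_element(q))
qp = coordinates(group_element(q) @ group_element(p))

assert not np.allclose(pq, p + q)  # deformed composition law
assert not np.allclose(pq, qp)     # ...and it is non-abelian
\end{verbatim}
The classical statement proved earlier is unaffected: the componentwise product on $G \times G$ restricted to $(G\times\{\mathbbm{1}\}) \times (\{\mathbbm{1}\}\times G)$ still yields the abelian sum of the single-particle coordinates.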
\section{Summary}
In this work we formulated a general framework for consistently defining Poisson structures on phase spaces with group valued momenta. Borrowing tools from the theory of Poisson-Lie groups and Lie bi-algebras, we showed that such structures can be constructed from appropriate $r$-matrices, taking as input just the algebraic structure of the generators of the momentum space Lie group. We applied our results to well studied examples of group momentum spaces, reproducing and generalizing known results. Moving from single to multi-particle systems, we discussed the behaviour of four-momentum observables and showed that single-particle momenta compose according to an ordinary sum to give the total four-momentum of a composite system.
The most pressing question, which is the subject of ongoing work, is how to generalize the picture we presented to include spin. Since the natural tool for including a relativistic particle's spin in the classical phase space is the co-adjoint orbit method, we will seek to adapt its existing Poisson-Lie extensions to the specific framework of deformed phase spaces. The other important question is how to bridge the picture we presented in this work with the associated quantum deformations of the Poincar\'e group and algebra. The key to this connection will be a judicious use of the Drinfeld double structure at the classical level. This will provide the classical $r$-matrices compatible with the momentum group structure, which will yield the ``infinitesimal'' structure to be connected with the deformed Leibniz rule associated to the deformation. Upon quantization we expect all the structure of the deformed relativistic symmetries associated to the various momentum Lie groups to emerge naturally from this classical structure. This will allow us to complete a picture in which deformed relativistic phase spaces and their associated quantum group symmetries emerge solely from the specification of the non-trivial Lie group structure of momentum space.
\section*{Acknowledgements}
We are grateful to A. Yu. Alekseev and F. Girelli for useful correspondence on the theory of Poisson-Lie groups and Lie bi-algebras.
The work of MA was supported by a Marie Curie Career Integration Grant within the 7th European Community Framework Programme and by the John Templeton Foundation. FN acknowledges support from CONACYT grant No. 250298.
\section{Introduction} \label{sec:intro}
The demographics of AGN in clusters of galaxies have important implications
for the growth of the supermassive black holes at the centers of
cluster galaxies, the nature of AGN fueling, and the impact of AGN on the
intracluster medium (ICM) over cosmic time. The luminous, massive elliptical
galaxies that dominate the galaxy population in the richest clusters are
also expected \citep[and in some cases are measured:][]{houghton06,gebhardt07}
to have the most massive black holes in the local universe. As the
stars in these galaxies appear to have an earlier mean formation epoch
than those in field galaxies \citep[e.g.][]{vandokkum96,kelson97}, the apparent
coevolution of black holes and galaxies \citep[e.g.][and references
therein]{hopkins06a} implies that the bulk of their present black hole
mass was also accreted at earlier times.
This scenario is also motivated by observations of local clusters
that clearly show their galaxy populations are more quiescent than
local field galaxies. An early demonstration by \citet{osterbrock60} showed
that cluster ellipticals were far less likely to have {\rm [\ion{O}{2}]}\ $\lambda3727$
emission than field ellipticals, a result that has since been confirmed by many
studies \citep[e.g.][]{gisler78,dressler85,dressler99}. A central question
that has motivated this work is: why are galaxy populations
different in clusters? Numerous physical mechanisms have been invoked to
explain the relative lack of star formation in cluster galaxies, as well as
their higher fraction of elliptical and S0 galaxies \citep{dressler80} and
relative lack of cold gas \citep[e.g.][]{giovanelli85}. These include
ram-pressure stripping by the ICM \citep{gunn72},
evaporation of a galaxy's
interstellar medium (ISM) by the hot ICM \citep{cowie77}, tidal effects
with the cluster potential \citep{farouki81,merritt83a,byrd90}, the absence
of newly-accreted cold gas \citep{larson80}, and galaxy harassment
and mergers \citep{richstone76,moore96}.
All of these physical effects may also be important for fueling accretion
onto the central black holes in galaxies because they impact either the
available gas supply in a galaxy, angular momentum transport, or both.
The best and perhaps only candidate process for fueling the most luminous AGN
is the merger of two gas-rich galaxies \citep[e.g.][]{barnes92} and the
relative lack of both cold gas and major mergers is a reasonable explanation
for the nearly complete absence of QSOs hosted by cluster galaxies. For
less luminous AGN the case is less clear because an increasing number of
physical processes such as minor mergers, galaxy harassment, various
types of bars, stellar mass loss, etc.\ could also play a role
\citep[see][for a review]{martini04c}. If mechanisms such as galaxy harassment
and stellar mass loss are important for fueling low-luminosity AGN, then
comparable numbers of low-luminosity AGN may be present in clusters and
the field.
Recent studies of the AGN fraction as a function of environment with
emission-line galaxies from the Sloan Digital Sky Survey (SDSS)
find that the most luminous AGN are rarer in denser environments
\citep[SDSS;][]{kauffmann04,popesso06}, although these studies do
not sample the densest regions of clusters well. This decrease is in contrast
to both lower-luminosity AGN in SDSS \citep{miller03a} and radio observations
\citep{best04,best05b}, which show that the radio AGN fraction does not
decrease significantly in denser environments.
X-ray observations with {\it Chandra}\ show that the X-ray AGN fraction is
larger than expected from AGN selection via visible-wavelength emission-lines.
In previous work we showed that X-ray observations identified approximately
five times as many AGN as selection at visible-wavelengths \citep{martini02,
martini06},
although the precise value of the X-ray excess depends significantly on the
relative sensitivity and luminosity threshold of the observations.
This spectroscopic study of X-ray counterparts confirmed the
many previous studies that suggested a higher X-ray AGN population in clusters
from surface density arguments alone \citep[e.g.][]{cappi01,sun02,ruderman05},
yet it is still not clear if the X-ray AGN fraction is higher than the field
value. To date there is only weak evidence that the X-ray AGN fraction in
clusters is comparable to the fraction in field early-type galaxies
\citep{lehmer07,sivakoff08,arnold09}.
One of the virtues of the emission-line galaxy studies as a function of
environment is that they can directly calculate the fraction of a given
galaxy population that hosts AGN as a function of environment, even though
this technique appears to systematically miss AGN in the densest regions
relative to X-ray and radio selection.
In addition to a local comparison between AGN in different environments,
measurement of the evolution of the AGN population in clusters can
constrain the formation epoch for their supermassive black holes and the
extent of their coevolution with the cluster galaxy population. The
key early work on the evolution of galaxies in clusters was by
\citet{butcher78,butcher84}, who observed a substantial increase in the
fraction of blue galaxies in higher-redshift clusters. The Butcher-Oemler
effect is interpreted as an increase in the amount of star formation
and has been confirmed by many other indicators, in particular {\rm [\ion{O}{2}]}\
emission-line galaxy fractions \citep{poggianti06} and an increase in the
number of $24\mu{\rm m}$ sources in {\it Spitzer}\ observations
of distant clusters \citep{bai07,saintonge08}. The observed increase brings the
star formation rate (SFR) in cluster galaxies closer to those in the field.
At a redshift of $z\sim1$ and higher, observations with {\it Spitzer}\ even find
that galaxies in denser environments have higher star formation rates than
lower-density regions \citep{elbaz07}, which is opposite the trend observed
in the local universe. Similar results have also been
found with deep UV data \citep{heinis07}. The situation is less clear when
star formation is measured with the {\rm [\ion{O}{2}]}\ emission line because while
\citet{poggianti08} find that star formation does not strongly depend on
environment, \citet{cooper08} find the specific star formation rate has a
similar dependence on environment at $z=0$ and $z=1$, although the total star
formation rate is higher in clusters at $z=1$ than in the field.
The existence of the Butcher-Oemler effect and the many indirect arguments
outlined above for a connection between star formation and black hole
accretion suggest that there should be an increased AGN population in
high-redshift clusters. An early study of the high-redshift cluster 3C295 at
$z=0.46$ by \citet{dressler83} found evidence for three AGN and was an
indication that this may be the case; however, their relative scarcity
precluded a detailed statistical study or targeted studies to deliberately
identify cluster AGN. This situation changed dramatically
with the launch of {\it Chandra}, whose superb sensitivity and angular resolution
produced a dramatic increase in efficiency for searches for AGN, particularly
lower-luminosity sources. Just as the case for local clusters, {\it Chandra}\
observations of distant clusters have revealed substantial populations of
point sources \citep{cappelluti05,gilmour09}. Spectroscopic confirmation
that these point sources are associated with cluster members has been
more challenging \citep{johnson03,demarco05}, but in \citet{eastman07} we
combined new observations of MS2053.7-0449 ($z=0.58$) with archival data
on three additional, $z>0.5$ clusters and found an
approximately order of magnitude increase in the fraction of $M_R < -20$ mag
galaxies that hosted AGN more luminous than $L_{X,H} \geq 10^{43}$ erg s$^{-1}$\ in
the hard X-ray band (2--10 keV) relative to the sample of ten low-redshift
$z < 0.32$ clusters in \citet{martini07}. These results have since been
strengthened with detailed studies of clusters at $z\sim1$ with XMM
\citep{vanbreukelen09} and measurements of surface density excesses in
clusters to $z\sim 1.5$ \citep{galametz09}.
In addition to their application to the coevolution of black holes and
galaxies, an increase in the AGN fraction in clusters may also impact the
ICM. At low redshifts many studies have shown that
AGN feedback is a viable explanation for the absence of substantial reservoirs
of cold gas at the centers of clusters \citep[for a recent review
see][]{mcnamara07}. This feedback is ascribed to AGN associated with
the central cluster galaxy, which is almost invariably a luminous radio
source. In our studies of X-ray AGN this is almost the only cluster
galaxy in which we are {\it insensitive} to the presence of an AGN because
it is challenging to measure even a bright nuclear point source when
juxtaposed with the extended emission from the ICM that often peaks near the
central cluster galaxy. Nevertheless, the
evolution of AGN in other cluster galaxies is likely to be connected to the
evolution of the central AGN as the stars in the most luminous cluster galaxies
have comparable ages. An increase in the net energy production by AGN in
higher-redshift clusters is of interest because energy input during
cluster formation has been invoked as an explanation for the minimum entropy
level in the ICM \citep{kaiser91,evrard91}. AGN remain perhaps the most
viable mechanism, if only because most others can be ruled out
\citep{kravtsov04,conroy08}, although the details of how AGN feedback couples
to the ICM remain uncertain. Outside of the central galaxy, an increase
in the number of other AGN associated with clusters of galaxies may also
affect measurement of other cluster properties
\citep{branchesi07b,bignamini08}. Finally, an analogous increase in the
radio-loud AGN population in high-redshift clusters may contaminate searches
for clusters via the Sunyaev-Zel'dovich effect \citep{sunyaev70} at mm and
cm wavelengths. As many searches for clusters that exploit this effect are in
progress, it is important to characterize the potential impact of
evolution of the cluster AGN population on these experiments \citep[e.g.][]{lin07}.
In the next section we describe our expanded high-redshift data, as well as
the selection criteria for X-ray AGN we employ at all redshifts. We then
describe our new observations of low-redshift clusters in \S\ref{sec:lowzdata}.
These two datasets are combined to calculate the cluster AGN fraction and
its evolution in \S\ref{sec:fa}, followed by an examination of the properties
of the cluster AGN in \S\ref{sec:agn}. We discuss the implications of
these results, particularly on the coevolution of black holes and galaxies,
in \S\ref{sec:dis} and conclude with a summary of our results.
Throughout this paper we assume that the cosmological parameters are:
($\Omega_M, \Omega_\Lambda, h$) = (0.3, 0.7, 0.7) where $H_0 = 100h$ km s$^{-1}$
Mpc$^{-1}$. All absolute magnitudes quoted in this paper assume $h = 0.7$.
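For reference, this choice corresponds to the following cosmology object in astropy (a convenience snippet of ours, not from the original analysis), which can be used to reproduce the distance-dependent quantities quoted below:
\begin{verbatim}
from astropy.cosmology import LambdaCDM
import astropy.units as u

cosmo = LambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3, Ode0=0.7)

# e.g. the luminosity distance at the redshift of MS 2053.7-0449
print(cosmo.luminosity_distance(0.583))   # ~3500 Mpc
\end{verbatim}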
\section{High-Redshift Data} \label{sec:highzdata}
\begin{deluxetable*}{lrrlrccccl}
\tablecolumns{10}
\tablewidth{7.0truein}
\tabletypesize{\scriptsize}
\tablecaption{High-Redshift Cluster Sample\label{tbl:highz}}
\tablehead{
\colhead{Cluster} &
\colhead{$\alpha_c$} &
\colhead{$\delta_c$} &
\colhead{$z$} &
\colhead{$\sigma$ [km/s]} &
\colhead{$\sigma$ Ref} &
\colhead{$T_X$ [keV]} &
\colhead{$T_X$ Ref} &
\colhead{$R_{200}$ [Mpc]} &
\colhead{Spectra} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)} &
\colhead{(10)}
}
\startdata
MS 1621.5+2640 & 16:23:34.9 & +26:34:21 & 0.43 & 735 & 1 & 7.6 & 1 & 1.42 & SESXI \\
3C 295 & 14:11:20.5 & +52:12:09 & 0.46 & 1642 & 1 & 5.3 & 1 & 3.12 & SESXI \\
MS 0451.6-0305 & 4:54:11.1 & -03:00:55 & 0.538 & 1371 & 2 & 8.1 & 1 & 2.49 & ChaMP \\
MS 0015.9+1609 & 0:18:33.5 & +16:26:06 & 0.541 & 1234 & 2 & 9.4 & 2 & 2.24 & ChaMP \\
RX J0848.7+4456 & 8:48:47.6 & +44:56:16 & 0.574 & 670 & 3 & 3.2 & 3 & 1.19 & SESXI \\
MS 2053.7-0449 & 20:56:21.3 & -04:37:49 & 0.583 & 865 & 4 & 5.2 & 1 & 1.53 & SESXI,ChaMP \\
RX J0542.8-4100 & 5:42:49.8 & -41:00:07 & 0.634 & 1101 & 3 & 7.9 & 3 & 1.89 & ChaMP \\
RX J2302.8+0844 & 23:02:48.3 & +08:43:48 & 0.722 & 993 & 3 & 6.6 & 3 & 1.61 & ChaMP \\
MS 1137.5+6625 & 11:40:22.1 & +66:08:14 & 0.782 & 967 & 3 & 6.3 & 1 & 1.52 & ChaMP \\
RX J1317.4+2911 & 13:17:22.0 & +29:11:24 & 0.805 & 531 & 3 & 2.2 & 1 & 0.82 & SESXI \\
RX J1716.4+6708 & 17:16:49.3 & +67:08:25 & 0.813 & 1445 & 1 & 6.6 & 1 & 2.22 & SESXI,ChaMP \\
MS 1054-03 & 10:56:55.7 & -03:37:39 & 0.831 & 1156 & 5 & 7.8 & 1 & 1.76 & ChaMP \\
RDCS J0910+5422 & 9:10:44.7 & +54:22:04 & 1.11 & 675 & 6 & 3.5 & 1 & 0.87 & SESXI \\
Lynx E & 8:48:58.3 & +44:51:51 & 1.261 & 740 & 7 & 3.8 & 4 & 0.88 & SESXI \\
Lynx W & 8:48:34.2 & +44:53:35 & 1.27 & 650 & 8 & 1.7 & 4 & 0.77 & SESXI
\enddata
\tablecomments{
Cluster sample and properties derived from the present study. Columns are: (1)
Cluster name; (2 and 3) RA and DEC for the centroid of the extended
X-ray emission; (4) redshift; (5) velocity dispersion; (6) reference for the velocity dispersion; (7) X-ray temperature in keV; (8) reference for the X-ray
temperature; (9) estimate of the virial radius in Mpc \citep[e.g.,][]{treu03}; (10)
origin of most of the spectra.
References for velocity dispersion are: 1: \citet{girardi01}; 2: \citet{carlberg96}; 3: derived from the X-ray temperature following \citet{xue00}; 4: \citet{tran05}; 5: \citet{tran07}; 6: \citet{mei06}; 7: from the weak lensing
estimate of \citet{jee06}; 8: \citet{stanford01}.
References for X-ray temperatures are: 1: \citet{vikhlinin02}; 2: \citet{ebeling07}; 3: \citet{ettori04}; 4: \citet{jee06}.
}
\end{deluxetable*}
Two large surveys have obtained redshifts for substantial numbers of galaxies
with X-ray counterparts in many deep, archival {\it Chandra}\ observations,
including the fields of many high-redshift clusters of galaxies. These are
the Serendipitous Extragalactic X-ray Source Identification Program
\citep[SEXSI;][]{harrison03,eckart05,eckart06} and the Chandra Multiwavelength
Project \citep[ChaMP;][]{kim04a,kim04b,green04,silverman05a}.
We have investigated the fields surveyed by both SEXSI and ChaMP to identify
datasets that contain clusters of galaxies with $z>0.4$ and have sufficient
depth to identify $L_{X,H} \geq 10^{43}$ erg s$^{-1}$\ (rest frame 2--10 keV) AGN
at the cluster redshift.
The SEXSI survey published spectroscopic redshifts for 27 archival
{\it Chandra}\ observations in \citet{eckart06} that were selected to
identify hard X-ray sources over the flux range of $10^{-13} - 10^{-15}$
erg s$^{-1}$ cm$^{-2}$ and isolate those responsible for the hard X-ray background.
The specific selection criteria for the fields were that they must be
high Galactic latitude ($|b|>20^{\circ}$) and be obtained with either the
I or S modes of the Advanced CCD Imaging Spectrometer
\citep[ACIS;][]{bautz98} when no grating was used. The X-ray luminosities
quoted by SEXSI are based on
spectral fits that assume a $\Gamma = 1.5$ power law and intrinsic absorption
$N_H$ at the source redshift, although they quote the observed luminosities
(not corrected for obscuration) and provide the best-fit $N_H$ value.
The average spectroscopic completeness is 67\% (see \S\ref{sec:completeness}
below) for sources with $R < 24.4$ mag on the Vega system.
Nine of the 27 SEXSI fields include clusters of galaxies
with $z > 0.4$ and we include seven\footnote{RX J1350.0+6007 was not targeted
for spectroscopy and the X-ray data for CL0442+0202 ($z=1.11$) were
sufficiently shallow ($t = 44$ks) that
they may not be complete to $L_{X,H} = 10^{43}$ erg s$^{-1}$. In addition,
\citet{stern03} classify CL0442+0202 as an overdensity that has not yet
collapsed, rather than as a cluster.}
in our sample. As one field contains 3 clusters, we list nine clusters from
SEXSI in Table~\ref{tbl:highz}.
The ChaMP survey published spectroscopic redshifts for 20 archival
{\it Chandra}\ observations in \citet{silverman05a} that were similarly selected for
depth, high Galactic latitude ($|b|>20^{\circ}$), and no special observing
modes. The spectroscopic completeness of ChaMP is 77\% at $r'<22.5$ mag,
where $r'$ is on the SDSS photometric system \citep[][and $r'_{AB} =
R_{Vega} + 0.17$]{fukugita96}.
Their X-ray luminosities are based on spectral fits
that assume a $\Gamma = 1.9$ power law and intrinsic absorption $N_H$ at the
source redshift, as well as the appropriate Galactic absorption, although
they also quote the observed luminosities (only corrected for Galactic
absorption).
The final sample presented in \citet{silverman05a} was restricted to X-ray
sources with $L_X > 10^{42}$ erg s$^{-1}$\ in the 2--8 keV band in order to ensure all
are AGN. Most (69\%) are spectroscopically classified as broad-line
AGN (BLAGN). Nine of these 20 ChaMP fields include clusters of galaxies with
$z > 0.4$ and we include
eight\footnote{We exclude CL J0152.7-1357 ($z=0.831$) because the exposure time
is shorter than the others at $t = 34.6$ ks and therefore the X-ray data may
not be complete to $L_{X,H} = 10^{43}$ erg s$^{-1}$.} of these in our study (see
Table~\ref{tbl:highz}). Two of these clusters are common to both ChaMP and
SEXSI (MS2053.7-0449 and RXJ1716.4+6708) and therefore the final sample
has fifteen clusters with $z > 0.4$. While spectroscopic data for X-ray
sources in other high-redshift clusters exist \citep[e.g.][]{johnson06}, we
limit our high-redshift sample to these fifteen to maximize the uniformity of
the dataset.
\begin{deluxetable*}{lllllllll}
\tablecolumns{9}
\tablewidth{7.0truein}
\tabletypesize{\scriptsize}
\tablecaption{High-Redshift Cluster AGN Sample\label{tbl:highzagn}}
\tablehead{
\colhead{AGN} &
\colhead{Cluster} &
\colhead{$z$} &
\colhead{$R$ [mag]} &
\colhead{log $L_{X,H}$ [erg s$^{-1}$]} &
\colhead{$\delta v/\sigma$} &
\colhead{$\Delta R$ [arcmin]} &
\colhead{$R/R_{200}$} &
\colhead{Class} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)}
}
\startdata
CXOSEXSI J141127.4+521131& 3C295 &0.451 &19.78 &43.4 &1.13 &1.23 &0.14 &ALG \\
CXOSEXSI J141123.4+521331& 3C295 &0.472 &19.05 &43.8 &1.5 &1.45 &0.16 &BLAGN \\
E0015+162 & MS0015.9+1609 &0.553 &18.41 &45.48 &1.89 &3.35 &0.58 &BLAGN \\
CXOSEXSI J084858.0+445434& RX J0848.7+4456 &0.573 &19.58 &43.8 &0.28 &2.5 &0.83 &BLAGN \\
CXOMP J054248.2-410140 & RDCSJ0542-4100 &0.634 &20.64 &43.24 &0 &1.58 &0.32 &NELG \\
CXOMP J054251.4-410205 & RDCSJ0542-4100 &0.637 &19.63 &43.35 &0.5 &1.99 &0.33 &ALG \\
CXOMP J054259.5-410241 & RDCSJ0542-4100 &0.638 &20.50 &43.37 &0.67 &3.16 &0.63 &NELG \\
CXOMP J054240.8-405626 & RDCSJ0542-4100 &0.639 &20.89 &43.67 &0.83 &4.05 &0.81 &NELG \\
CXOMP J054255.0-405922 & RDCSJ0542-4100 &0.644 &22.08 &43.08 &1.67 &1.24 &0.25 &NELG \\
CXOMP J114022.0+660816 & MS1137+6625 &0.786 &20.37 &43.24 &0.7 &0.04 &0.01 &BLAGN \\
CXOSEXSI J171636.9+670829& RXJ1716.4+6708 &0.795 &22 &44 &2.06 &1.19 &0.24 &ELG \\
CXOSEXSI J131718.8+291111& RX J1317.4+2911 &0.803 &21.98 &43.3 &0.63 &0.68 &0.38 &BLAGN \\
CXOSEXSI J171703.8+670900& RXJ1716.4+6708 &0.812 &21.79 &43 &0.11 &1.53 &0.31 &ELG \\
CXOSEXSI J171714.5+671136& RXJ1716.4+6708 &0.815 &22.68 &43.2 &0.23 &4.02 &0.82 &ELG \\
CXOMP J105650.6-033508 & MS 1054-03 &0.818 &21.76 &43.22 &1.84 &2.82 &0.73 &BLAGN \\
CXOU J091043.3+542152 & RDCSJ0910+5422 &1.104 &24 &43.06 &1.26 &0.29 &0.16 &AGN2 \\
CXOSEXSI J084905.3+445203& LynxE &1.266 &24.61 &43.8 &1.11 &1.27 &0.74 &ELG \\
CXOSEXSI J084831.6+445442& LynxW &1.267 &25.42 &43.2 &0.61 &1.23 &0.8 &ELG
\enddata
\tablecomments{
AGN in high-redshift clusters of galaxies. Columns are: (1) AGN name; (2)
Cluster; (3) AGN redshift; (4) $R-$band magnitude; (5) Rest-frame, hard-X-ray
luminosity (2--10 keV); (6) Velocity offset from the cluster systemic velocity
normalized by the cluster velocity dispersion; (7) Projected radial offset
relative to the centroid of the X-ray gas in arcminutes; (8) Projected radial
offset normalized by the cluster virial radius; (9) Spectroscopic
classification. The $R-$band magnitude of E0015+162 is from \citet{orndahl03}.
The remaining values are from either \citet{eckart06} for the SEXSI sample or
from \citet{silverman05a} for the ChaMP sample (although corrected from $r'$
to $R$ as noted in Section~\ref{sec:highzdata}). The 2--8 keV X-ray
luminosities from \citet{silverman05a} have been corrected to the 2--10 keV
band as described in Section~\ref{sec:highzdata}.
}
\end{deluxetable*}
We have also compiled additional data for each cluster listed in
Table~\ref{tbl:highz} that will be important for our subsequent
analysis. One quantity is the center of the cluster, which is
needed to determine if a given AGN falls within the projected
virial radius of the cluster. We associate the center of each cluster
with the centroid of the extended X-ray emission. While these coordinates
do not always agree with the standard coordinates quoted in the literature,
this assumption makes our analysis more uniform. The redshift and velocity
dispersion are also needed to determine if an AGN is within the cluster.
In most cases velocity dispersions for these clusters are available
in the literature and we quote the origin of the measurement we adopt in the
table. When the velocity dispersion has not been measured, we estimate
this quantity from the X-ray temperature and the $\sigma - T_X$ relationship
from \citet{xue00}. Specifically, we used the relation
$\sigma = 10^{2.51\pm0.01} T^{0.61\pm0.01}$ km s$^{-1}$\ derived from their combined
group and cluster sample with orthogonal distance regression
\citep{feigelson92}. Based on their data, we estimate that there is
a 30\% uncertainty in $\sigma$ at fixed $T$.
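In code form, the adopted scaling is simply (our transcription; the coefficients are the central values of the fit):
\begin{verbatim}
def sigma_from_tx(t_kev):
    """Velocity dispersion in km/s from the X-ray temperature in keV."""
    return 10**2.51 * t_kev**0.61

# T_X = 6.6 keV gives ~1020 km/s, within a few per cent of the
# 993 km/s listed in Table 1 for RX J2302.8+0844 (differences at this
# level plausibly reflect rounding of T_X and of the fit coefficients)
print(sigma_from_tx(6.6))
\end{verbatim}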
One potential concern for our subsequent analysis is that the
\citet{xue00} $\sigma-T$ relation may not hold at higher redshift.
\citet{lubin04} investigated this point for several optically-selected
clusters and found that they were 2-9 times cooler than expected from
the local relation; however, the difference was much less stark for
X-ray selected, high-redshift clusters similar (and in several cases identical
to) those presented here. \citet{fang07} showed that high-redshift,
X-ray selected clusters are consistent with the low-redshift $L_X - \sigma$
relation, although spectroscopically-selected groups and clusters do not
agree as well \citep[see also][]{andreon08}.
Finally, we have calculated the projected size of the virial radius for
each cluster following \citet{treu03} and throughout this paper we associate
the virial radius with $R_{200}$, the radius within which the cluster is a
factor of 200 overdensity. Of the three clusters we have in common with
\citet{poggianti06}, for 3C~295 and MS1054-03 we adopt nearly the same
$\sigma$ and our $R_{200}$ estimate is nearly identical to theirs,
while for MS0015.9+1609 we adopt a slightly larger velocity
dispersion (1234 km s$^{-1}$\ from \citet{carlberg96} rather than their 984 km s$^{-1}$) and
consequently infer a larger radius.
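A sketch of this estimate (our reading of the \citet{treu03} scaling, $R_{200} \simeq \sqrt{3}\,\sigma / [10\,H(z)]$, which reproduces the Table~\ref{tbl:highz} values to within a few per cent):
\begin{verbatim}
import numpy as np

H0, OM, OL = 70.0, 0.3, 0.7   # km/s/Mpc; the cosmology adopted above

def r200_mpc(sigma_kms, z):
    hz = H0 * np.sqrt(OM * (1 + z)**3 + OL)   # H(z) in km/s/Mpc
    return np.sqrt(3.0) * sigma_kms / (10.0 * hz)

print(r200_mpc(1642.0, 0.46))   # ~3.2 Mpc for 3C 295 (Table 1: 3.12)
\end{verbatim}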
Because ChaMP quotes the 2--8 keV luminosity rather than the 2--10 keV
luminosity, we multiply the ChaMP 2--8 keV luminosities
by a factor of 1.2. This correction factor was calculated for a $\Gamma=1.7$
power law with PIMMS. There is some uncertainty in this correction factor
because not all AGN have this power-law form, particularly as we assume this
correction for their observed rather than intrinsic (unobscured) spectra, but
this is not a significant effect compared to other sources of systematic errors
that we discuss below. There are no additional AGN from ChaMP that enter the
sample after this step because there are none just below the $10^{43}$ erg s$^{-1}$\
threshold in the 2--8 keV band. We also estimated the difference in luminosity
for an AGN calculated with the $\Gamma=1.5$ power law employed by SEXSI, the
$\Gamma=1.9$ employed by ChaMP, and a $\Gamma=1.7$ power law to determine
if these differences would cause any sources to fall into or out of the
sample, and found that none would do so.
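For an unabsorbed power law the band conversion can be checked analytically (a sketch of ours; PIMMS additionally folds in the detector response): the energy flux of a photon-index-$\Gamma$ power law integrates to $(E_2^{2-\Gamma} - E_1^{2-\Gamma})/(2-\Gamma)$ over a band $[E_1, E_2]$.
\begin{verbatim}
def band_flux(e1_kev, e2_kev, gamma=1.7):
    # unnormalized energy flux of a power law over [e1, e2]
    return (e2_kev**(2.0 - gamma) - e1_kev**(2.0 - gamma)) / (2.0 - gamma)

print(band_flux(2.0, 10.0) / band_flux(2.0, 8.0))  # ~1.20 for Gamma = 1.7
\end{verbatim}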
In the two clusters observed by both ChaMP and SEXSI, there is one
cluster AGN common to both surveys: CXOSEXSI J171636.9+670829. The
redshifts from the two surveys agree exactly ($z=0.795$) and the luminosities
agree well: $L_{X,2-10} = 10^{44}$ erg s$^{-1}$ and $L_{X,2-8} = 10^{43.88}$ erg s$^{-1}$.
We also correct the ChaMP $r'$ measurements to the Vega $R$ band as discussed
above. Based on the magnitudes of these sources and a simple $k-$correction,
we estimate that none of these sources falls below our galaxy luminosity
threshold. As these are fairly luminous AGN, in some cases the AGN may dominate
the total flux and we may have overestimated the host galaxy luminosity.
E0015+162 \citep{margon83} is the most X-ray luminous AGN in our sample by
over an order of magnitude and is a useful case study to test the importance
of this concern.
This AGN has a total $R =18.41$ mag and a fainter host galaxy magnitude of
$R = 19.8$ mag \citep{orndahl03}, which corresponds to a factor of 3.6 in
flux. If the other AGN have similar or smaller $L_R/L_X$ ratios (such as
due to obscuration), then we expect their AGN contribution to the measured
$R-$band flux to be negligible because they are all much less luminous than
E0015+162.
We identify AGN in these clusters with the following four criteria: 1) The hard
X-ray luminosity must be $L_{X,H} \geq 10^{43}$ erg s$^{-1}$; 2) The AGN redshift
must fall within $3\sigma$ of the cluster mean redshift, where $\sigma$ is the
cluster velocity dispersion; 3) The AGN must fall within the projected virial
radius $R_{200}$ of the cluster; 4) The absolute magnitude of the host galaxy
must be greater than $M_R = M_R^*(z)+1$ mag.
Most of these criteria were adopted from \citet{eastman07}, although the
absolute magnitude criterion is different and we discuss our motivation
for this choice in \S\ref{sec:richness} below. With these criteria we identify
18 AGN in the 15 clusters with $z>0.4$, or an average of more than one per
cluster. The properties of the $z>0.4$ AGN are presented in
Table~\ref{tbl:highzagn}.
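Written out as code, the selection is as follows (a schematic of the four criteria; the argument names are ours):
\begin{verbatim}
C_KMS = 2.99792458e5

def is_cluster_agn(log_lxh, z_agn, z_cl, sigma_kms,
                   r_proj_mpc, r200_mpc, m_r, m_r_star_plus1):
    dv = C_KMS * (z_agn - z_cl) / (1.0 + z_cl)  # rest-frame offset
    return (log_lxh >= 43.0 and              # 1) L_X,H >= 1e43 erg/s
            abs(dv) <= 3.0 * sigma_kms and   # 2) within 3 sigma
            r_proj_mpc <= r200_mpc and       # 3) inside projected R_200
            m_r <= m_r_star_plus1)           # 4) brighter than M_R^*(z)+1
\end{verbatim}
The velocity offsets computed this way reproduce the $\delta v/\sigma$ column of Table~\ref{tbl:highzagn} (e.g.\ 1.89 for E0015+162).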
\section{New Low-Redshift Observations} \label{sec:lowzdata}
\begin{deluxetable*}{lcccrcccc}
\tablecolumns{9}
\tablewidth{7.0truein}
\tabletypesize{\scriptsize}
\tablecaption{New Low-Redshift Clusters\label{tbl:lowz}}
\tablehead{
\colhead{Cluster} &
\colhead{$\alpha_c$} &
\colhead{$\delta_c$} &
\colhead{$z$} &
\colhead{$\sigma$ [km/s]} &
\colhead{$\sigma$ Ref} &
\colhead{$T_X$ [keV]} &
\colhead{$T_X$ Ref} &
\colhead{$R_{200}$ [Mpc]} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)}
}
\startdata
Abell 1240 & 11:23:37.3& +43:06:54& 0.1590& 698 & 1 & ... & ... & 1.64 \\
Abell 1942 & 14:38:22.0& +03:40:07& 0.2240& 903 & 2 & 5.6 & 1 & 1.96 \\
Abell 2125 & 15:41:13.2& +66:16:01& 0.2465& 1113 & 3 & 3.2 & 2 & 2.39 \\
MS1455.0+2232 & 14:57:15.1& +22:20:29& 0.2578& 1032 & 4 & 5.5 & 3 & 2.20 \\
ZwCl 1358.1+6245& 13:59:50.6& +62:31:04& 0.3280& 1003 & 4 & 6.5 & 3 & 2.06 \\
MS1512.4+3647 & 15:14:22.4& +36:36:21& 0.3720& 575 & 4 & 3.6 & 3 & 1.15
\enddata
\tablecomments{
New low-redshift clusters and their properties derived from the present study.
Columns are: (1)
Cluster name; (2 and 3) RA and DEC for the centroid of the extended
X-ray emission; (4) redshift; (5) velocity dispersion; (6) reference for the velocity dispersion; (7) X-ray temperature in keV; (8) reference for the X-ray
temperature; (9) estimate of the virial radius in Mpc \citep{treu03}.
References for velocity dispersion are: 1: derived from the X-ray luminosity
following \citet{xue00}; 2: derived from the X-ray temperature following
\citet{xue00}; 3: \citet{miller04}; 4: \citet{borgani99}.
References for X-ray temperatures are: 1: \citet{ota04}; 2: \citet{wang04}; 3: \citet{mushotzky97}.
}
\end{deluxetable*}
AGN more luminous than $L_{X,H} = 10^{43}$ erg s$^{-1}$\ are sufficiently rare in
low-redshift clusters that Poisson uncertainties (as opposed to sources of
systematic errors) from the low-redshift sample may dominate the statistical
significance of any evidence of evolution. Our previous study of ten
clusters with $z<0.32$ only identified one AGN above this luminosity threshold
\citep{martini06}, while our more recent observations of three additional
clusters (all at $z<0.08$) have identified only one additional AGN above this
luminosity \citep{sivakoff08}. We have therefore studied six additional
clusters with $0.15 < z < 0.4$ to find other X-ray AGN more luminous than
$L_{X,H} = 10^{43}$ erg s$^{-1}$\ with a combination of {\it Chandra}\ archival data and
follow-up spectroscopy of candidate cluster X-ray AGN at the MDM Observatory.
These clusters were selected to be the nearest massive clusters in the
{\it Chandra}\ archive whose estimated virial radii fit within the {\it Chandra}\ ACIS
field of view (FOV) and were accessible during our observing runs. The new
clusters and their physical properties are listed in Table~\ref{tbl:lowz}.
\subsection{{\it Chandra}\ X-ray Analysis}
\begin{deluxetable}{lrrcc}
\tabletypesize{\footnotesize}
\tablewidth{0pt}
\tablecaption{{\it Chandra} Observation Logs\label{tab:xobs}}
\tablehead{
\colhead{Cluster} &
\colhead{OBSID} &
\colhead{Detector} &
\colhead{T} &
\colhead{$L_{X,H,{\rm Lim}}$} \\
& & & (ks) & ($10^{41} {\rm \, erg \, s}^{-1}$)\\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)}
}
\startdata
Abell~1240 & \dataset[ADS/Sa.CXO#obs/04961]{4961} & ACIS-I & 51.3 & $1.2$\\
Abell~1942 & \dataset[ADS/Sa.CXO#obs/03290]{3290} & ACIS-I & 57.5 & $2.2$\\
Abell~2125 & \dataset[ADS/Sa.CXO#obs/02207]{2207} & ACIS-I & 81.5 & $1.9$\\
MS~1455.0+2232 & \dataset[ADS/Sa.CXO#obs/04192]{4192} & ACIS-I & 91.9 & $1.8$\\
ZwCl~1358.1+6245 & \dataset[ADS/Sa.CXO#obs/00516]{516} & ACIS-S3 & 53.0 & $2.6$\\
MS~1512.4+3647 & \dataset[ADS/Sa.CXO#obs/00800]{800} & ACIS-S3 & 36.4 & $4.6$
\enddata
\tablecomments{{\it Chandra} Observation Log. Columns are:
(1) Cluster targeted;
(2) Observation ID of {\it Chandra} data;
(3) Detector used;
(4) Usable exposure;
(5) Estimate of the $2.0$--$8.0 {\rm \, keV}$ luminosity limit of the observation for a cluster galaxy.}
\end{deluxetable}
The X-ray observations were processed following the same techniques employed by
\citet{sivakoff08}. We reduced all data using {\sc ciao 3.4}%
\footnote{\url{http://asc.harvard.edu/ciao/}.}
with {\sc caldb 3.3.0.1} and NASA's {\sc ftools 6.0}%
\footnote{\url{http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/}%
\label{ftn:heasoft}.}. The observations are summarized in Table~\ref{tab:xobs}.
Only minor differences in reduction were required for these archival
observations. The majority of the clusters had data with an aimpoint centered on
the four ACIS-I chips ($\sim 17\arcmin$ FOV) and frame times of $3.1 {\rm \,
s}$. These data were telemetered and cleaned in Very Faint mode. The more
distant clusters, ZwCl~1358.1+6245 and MS~1512.4+3647, were observed with the
aimpoint placed on the ACIS-S3 detector ($8.4\arcmin$ FOV) and had frame times
of $3.3 {\rm \, s}$. Their data were telemetered and cleaned in Faint mode, and
thus have a slightly higher background. As all observations were operated at
$-120 ^{\circ} \,{\rm C}$ the X-ray data were corrected for the time dependence
of the gain and the charge-transfer inefficiency with their photon energies
determined using the gain file acisD2000-01-29gain$\_$ctiN0006.fits. The
archival data of all observations already had applied the newest tools to
detect hot pixels and cosmic ray afterglows. We only consider events with ASCA
grades of 0, 2, 3, 4, and 6. Known aspect offsets were applied
for each observation. All observations were corrected for quantum efficiency
degradation and had exposure maps determined at $1.5 {\rm \, keV}$. We excluded
bad pixels, bad columns, and columns adjacent to bad columns or chip node
boundaries. We also filtered out times when the blank-sky rate was more than
three times
the expected blank-sky rate derived from calibrated blank-sky backgrounds to
avoid the most extreme periods of high background (``background flares'') that
{\it Chandra}\ may encounter. MS~1512.4+3647 had two separate pointings and
this introduced difficulties into our standard processing. We therefore
excluded the shorter second pointing, which accounted for less than
25\% of the total integration time.
To detect X-ray sources that are potential X-ray AGN in these clusters, we
applied the wavelet detection algorithm ({\sc ciao wavdetect}) with
scales ranging from 1 to 64 pixels in steps of $\sqrt{2}$ factors and required
a source detection threshold of $10^{-6}$. Source detection was only performed
in regions with an exposure of greater than 10\% of the total for the
observation. Our source detection threshold corresponds to $\la 4$ falsely
detected X-ray sources (due to a statistical fluctuation) for each observation.
Using \citet{kim07}, we have estimated the statistical X-ray positional
uncertainty (1$\sigma$) due to {\sc wavdetect}. In Table~\ref{tab:xobs}, we
list an estimated limiting X-ray luminosity for each observation that
corresponds to five counts on axis \citep[for consistency with][]{martini06}.
For our analysis we concentrated on sources with at least 20 broad
(0.3--8.0 keV) X-ray counts. These sources are unlikely to be due to
statistical fluctuations except where they are coincident with ICM emission.
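The bookkeeping behind these numbers is simple (a sketch): thirteen wavelet scales span 1--64 pixels in $\sqrt{2}$ steps, and the $10^{-6}$ threshold implies of order one spurious source per $10^{6}$ search pixels.
\begin{verbatim}
import numpy as np

scales = 2.0 ** (0.5 * np.arange(13))   # 1, 1.41, 2, ..., 64 pixels
print(" ".join(f"{s:.4g}" for s in scales))

# a full ACIS-I field is four 1024x1024 chips:
print(1e-6 * 4 * 1024**2)               # ~4 spurious detections
\end{verbatim}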
We used ACIS Extract 3.131\footnote{\url{http://www.astro.psu.edu/xray/docs/TARA/ae_users_guide.html}}
to create source extraction regions enclosing 90\% of the flux in the X-ray PSF
and to determine a masking radius that encircled 97\% of the flux. For most of
the sources, whose photons had median energies of $\sim 0.6$--$2.6 {\rm \,
keV}$, we determined the regions assuming the PSF at $1.497 {\rm \, keV}$. A few
sources had harder emission and their PSF was calculated assuming an energy of
$4.51 {\rm \, keV}$. In a relatively small number of crowded regions, the
PSF fraction was reduced to prevent overlapping source extraction regions. We
also used ACIS Extract to correct the {\sc wavdetect} position to the
mean position of detected events for sources within 5\arcmin of the observation
aimpoint or to the position that best correlated with the PSF for sources
beyond 5\arcmin of the observation aimpoint. These new positions were registered
with an optical catalog from $R-$band images (see below) to correct the absolute
astrometry and determine the absolute astrometric precision of each {\it Chandra}\
observation (0.3--0.5\arcsec). The statistical positional uncertainty of each
detection was added in quadrature with the absolute astrometric precision to
estimate the total X-ray positional uncertainty.
We measured the counts in three energy ranges: the broad (0.3--8 keV), soft
(0.3--2 keV), and hard (2.0--8.0 keV) bands. The observed
fluxes in these bands were derived assuming a $\Gamma=1.7$ power-law spectrum
with Galactic absorption. We then calculated the rest-frame luminosity in
the broad band (0.3--8 keV) and the classic hard band (2--10 keV) for all
sources with redshifts (see \S\ref{sec:spectra}).
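A sketch of the conversion from observed flux to rest-frame luminosity (our bookkeeping; for a power law of photon index $\Gamma$ the $k$-correction to the same rest-frame band is $(1+z)^{\Gamma-2}$):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import LambdaCDM

cosmo = LambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3, Ode0=0.7)

def luminosity(flux_cgs, z, gamma=1.7):
    # rest-frame band luminosity (erg/s) from an observed band flux
    dl_cm = cosmo.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * dl_cm**2 * flux_cgs * (1.0 + z)**(gamma - 2.0)
\end{verbatim}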
\subsection{MDM Photometry} \label{sec:phot}
$R-$band images of these clusters were obtained at the MDM Observatory 2.4m
Hiltner telescope with the Echelle CCD camera during a run from the night of 28
May 2007 to 3 June 2007. Because the FOV of the CCD camera
($\sim 9.5' \times 9.5'$) is smaller than the ACIS-I FOV ($\sim 17' \times
17'$), we imaged a $2 \times 2$ mosaic to cover the {\it Chandra}\ area, with each
panel consisting of $3\times300 {\rm \, s}$ exposures. All images were trimmed,
bias-subtracted and flat-fielded with the {\sc ccdproc} package within
IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory,
which is operated by the Association of Universities for Research in Astronomy
(AURA) under cooperative agreement with the National Science Foundation.}.
Sources were cataloged with the SExtractor package \citep{bertin96}.
Aperture magnitudes from these catalogs were calibrated with multiple
observations of standard star fields from the data compiled by P. B.
Stetson\footnote{\url{http://cadcwww.hia.nrc.ca/standards}}
onto the Vega magnitude system. Only data from the last night, which includes
each quadrant of Abell~1240 and ZwCl~1358.1+6245, the north-east quadrant of
MS~1512.4+3647, and $1\times300 {\rm \, s}$ exposures of each quadrant of
Abell~2125, were taken under photometric conditions. Our derived photometric
solution for this night was precise to 0.03 mag. As all of these clusters
except for Abell~2125 were imaged with SDSS, we cross-correlated aperture
magnitudes from all images on this run with stars in the SDSS DR5 catalog.
After correcting to R
(Vega)\footnote{\url{http://www.sdss.org/dr7/algorithms/sdssUBVRITransform.html\#Lupton2005}},
our derived photometric solution for 3 June, which includes a color correction
term, is accurate to 0.01 mag and precise to 0.06 mag. The poorer precision
compared to our photometric solution appears to be only partially due to the
dispersion in the Vega correction (overlapping sources between quadrants of
our own observations indicate typical
photometric precisions of $0.05-0.08 {\rm \, mag}$). We therefore adopted the
SDSS cross-calibration technique to photometrically correct all observations on
non-photometric nights, except for observations of Abell~2125. For Abell~2125,
non-photometric observations were cross calibrated with the single photometric
exposures for Abell~2125. As we do not have complete multi-band data, we report
only the magnitudes assuming no color correction. The exclusion of the color
correction term does not significantly decrease the precision of our photometric
solutions.
We calculated astrometric solutions for the images with the WCSTools package
\citep{mink02} and then produced the final, calibrated mosaics with the
SWARP\footnote{\url{http://terapix.iap.fr/rubrique.php?id_rubrique=49}} package. A
final source catalog was extracted with SExtractor and used to register the
astrometry of the X-ray observations. We consider only the SExtractor AUTO
magnitudes, which are automatic aperture magnitudes designed to give precise
estimates of total magnitudes for galaxies. As nearby, detected neighbors are
removed and replaced by mirroring the opposite side of the aperture where
available, these magnitudes are suitable for our relatively crowded fields. All
X-ray sources that would be more luminous than $L_{X,H} = 10^{43}$ erg s$^{-1}$\ at the
cluster redshift, and that were also associated with galaxies brighter than
$M_R^*(z)+1$ at that redshift, were then targeted for
the highest-priority spectroscopic observations, with the exception of sources
heavily contaminated by ICM emission. We also identified other
candidate cluster X-ray AGN, specifically those that would have $L_{X,H} \geq 10^{42}$
erg s$^{-1}$, as lower-priority spectroscopic targets.
\subsection{MDM Spectroscopy} \label{sec:spectra}
We obtained low-resolution spectroscopy of these candidates with the 2.4m
Hiltner telescope with the CCDS, a Boller \& Chivens spectrograph, during a run
from the night of 28 April 2008 to 3 May 2008. The slit widths were
determined by the nightly seeing conditions and were either $1.0''$ or
$1.5''$. At least two exposures of every candidate were obtained and total
exposure times varied from $120 {\rm \, s}$ to $9000 {\rm \, s}$. Five sets of
internal and twilight flats were taken over the entire run, while comparison
lamps were observed before and/or after every candidate.
The files were trimmed and bias-subtracted with the {\sc ccdproc} package
within IRAF and bad pixels were determined from a ratio of flat-field
images and were fixed in every image. The individual flat-field images from
internal lamps revealed a complex wavelength- and slit-dependent flat-field, most
likely due to some reflection. To model this complex response, we first median
smoothed the internal flat-fields (over $11\times11$ pixels) and then
Gaussian-smoothed ($\sigma=11$ pixels) over the dispersion axis. The ratio of
the internal flat-field to the modeled internal flat-field was adopted as the
true internal flat-field. An illumination correction was then created from the
twilight flat-fields and applied to make the final set of flat-field corrections
to remove fringing in the spectra. After each spectrum was properly
flat-fielded, we rejected cosmic rays using L.A. Cosmic\footnote{\url{http://www.astro.yale.edu/dokkum/lacosmic/}} \citep{vandokkum01}. A
fourth-order wavelength solution
was calculated for each set of HgNe comparison spectra, resulting in a typical
RMS of $\sim 0.1$\AA\ pixel$^{-1}$. Thereafter, standard aperture
extraction of the spectra was used to remove the night sky emission and produce
one-dimensional, logarithmically interpolated spectra with a dispersion of
$\sim$ 3\AA\ pixel$^{-1}$. The spectra extend from approximately 3650\AA\ to
7250\AA. We extracted both the signal and noise for each final spectrum of a
source.
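For illustration, the flat-field model described above can be written in a
few lines. The sketch below is a hypothetical re-implementation with
{\sc scipy}, not the IRAF procedure we actually ran:
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter1d

def pixel_flat(flat, box=11, sigma=11, dispersion_axis=1):
    # Median-smooth over box x box pixels, then Gaussian-smooth
    # along the dispersion axis to model the lamp response; the
    # ratio of the raw flat to this model isolates the
    # pixel-to-pixel sensitivity variations.
    model = median_filter(flat, size=(box, box))
    model = gaussian_filter1d(model, sigma=sigma, axis=dispersion_axis)
    return flat / model

# Toy input standing in for an internal lamp exposure.
lamp = np.random.default_rng(0).normal(1.0, 0.01, (512, 512))
true_flat = pixel_flat(lamp)
\end{verbatim}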
\begin{deluxetable*}{llcccrrrrrc}
\tablecolumns{11}
\tabletypesize{\footnotesize}
\tablewidth{7.0truein}
\tablecaption{New, Lower-Luminosity Cluster X-ray AGN \label{tbl:xcagn}}
\tablehead{
\colhead{CXOU ID} &
\colhead{$z$} &
\colhead{$z$ ref} &
\colhead{$R$} &
\colhead{$R$ flag} &
\colhead{$f_{X,S}$} &
\colhead{$f_{X,H}$} &
\colhead{$f_{X,B}$} &
\colhead{$L_{X,B}$} &
\colhead{$L_{X,H}$} &
\colhead{X flag} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)} &
\colhead{(10)} &
\colhead{(11)}
}
\startdata
J135950.5+623106.3 & $0.32717\pm0.00038$ & 1 & $17.80\pm0.05$ & 3 & $9.5^{+1.1}_{-1.0}$ & $8.7^{+2.8}_{-2.4}$ & $20.6^{+2.2}_{-2.1}$ & $70.6^{+7.6}_{-7.2}$ & $46.2^{+5.0}_{-4.7}$ & 1 \\
J143821.8+034013.3 & 0.22479 & 2 & $16.44\pm0.06$ & 3 & $2.20^{+0.91}_{-0.76}$ & $2.3^{+1.9}_{-1.4}$ & $4.8^{+1.7}_{-1.5}$ & $7.2^{+2.5}_{-2.2}$ & $4.7^{+1.7}_{-1.4}$ & 1 \\
J145714.7+221933.6 & $0.24852\pm0.00025$ & 1 & $20.04\pm0.07$ & 0 & $2.40^{+0.63}_{-0.53}$ & $3.7^{+1.3}_{-1.1}$ & $5.9^{+1.2}_{-1.0}$ & $11.2^{+2.2}_{-2.0}$ & $7.3^{+1.5}_{-1.3}$ & 0 \\
J145715.0+222034.5 & $0.25772\pm0.00015$ & 1 & $16.82\pm0.07$ & 0 & $20.2^{+4.9}_{-4.8}$ & $21.6^{+8.5}_{-8.3}$ & $44.4^{+9.1}_{-9.0}$ & $93^{+19}_{-19}$ & $61^{+12}_{-12}$ & 1 \\
J151422.5+363620.7 & 0.3718 & 3 & $18.05\pm0.06$ & 2 & $3.98^{+0.98}_{-0.89}$ & $3.2^{+2.5}_{-1.9}$ & $8.4^{+1.9}_{-1.8}$ & $38.1^{+8.8}_{-8.0}$ & $24.9^{+5.7}_{-5.2}$ & 1 \\
J154101.9+661627.1 & $0.24564\pm0.00045$ & 1 & $17.19\pm0.08$ & 2 & $2.78^{+0.62}_{-0.52}$ & $0.21^{+0.76}_{-0.41}$ & $4.63^{+1.0}_{-0.87}$ & $8.5^{+1.9}_{-1.6}$ & $5.5^{+1.2}_{-1.0}$ & 0 \\
J154101.9+661721.4 & 0.2567 & 4 & $19.36\pm0.08$ & 0 & $8.11^{+0.97}_{-0.88}$ & $7.1^{+1.7}_{-1.4}$ & $17.0^{+1.8}_{-1.6}$ & $34.3^{+3.6}_{-3.3}$ & $22.4^{+2.3}_{-2.1}$ & 0 \\
J154117.3+661923.6 & 0.2465 & 4 & $18.81\pm0.08$ & 0 & $2.08^{+0.58}_{-0.47}$ & $1.46^{+1.2}_{-0.82}$ & $4.15^{+1.1}_{-0.88}$ & $7.6^{+1.9}_{-1.6}$ & $5.0^{+1.3}_{-1.1}$ & 0
\enddata
\tablecomments{New, lower-luminosity cluster X-ray AGN. Columns are:
(1) Name of X-ray source;
(2) Redshift;
(3) References for redshift are: 1: this work; 2: SDSS \citep{adelman-mccarthy08}; 3: \citet{abraham98}; 4: \citet{miller04};
(4) $R-$band magnitude;
(5) Flags for photometry are: (0) no flag; (1) may be contaminated by nearby neighbors or bad pixels; (2) blended with nearby neighbors; (3) both;
(6--8) Soft [0.3--2 keV], Hard [2--8 keV], and Broad [0.3--8 keV] band flux
in the observed frame in units of $10^{-15} {\rm \, erg \, s^{-1} cm^{-2}}$.
(9--10) Broad [0.3--8 keV] and Hard [2--10 keV] band luminosity in the
rest-frame in units of $10^{41} {\rm \, erg \, s}^{-1}$ corrected for
Galactic absorption.
(11) X-ray flags are: (0) no flag; (1) contaminated by ICM peak.
Note that CXOU J145715.0+222034.5 is the BCG and we subtracted a
multi-component beta model for the ICM to compute the quoted fluxes and
luminosities. }
\end{deluxetable*}
We adapted the Princeton/MIT SDSS Spectroscopy
routines\footnote{\url{http://spectro.princeton.edu/idlspec2d\_doc.html}}
to calculate redshifts. This technique cross-correlates
the spectra in pixel space with template spectra, with each pixel weighted by
the inverse of its variance, and is similar to the technique used in
\citet{martini06}. The template spectra include a set of four eigenspectra for
galaxies, four eigenspectra for quasars, and forty eigenspectra for stars. The
five best galaxy redshifts for $-0.01<z<1.00$, five best quasar redshifts for
$0.0033<z<7.00$, and forty different stellar redshifts for $-0.004<z<0.004$ are
found and ordered by the reduced $\chi^2$ of their fit. We adopted the best-fit
redshift and classification for each source. To ascertain the quality of the fit
and the redshift uncertainty, we resampled each spectrum 100 times randomly
according to its noise characteristics and reran the cross-correlation routine.
Both the dispersion in best-fit redshifts and the best-fit spectral type were
used to qualify the spectral classification quality.
If the dispersion in redshift was relatively low ($\sigma_z \lesssim 0.01$),
$>68\%$ of the best-fit redshifts were within $3\sigma_z$ of our adopted
redshift, and the resamplings returned the same spectral type (i.e., galaxy,
quasar, or a similar stellar type), we consider the redshift secure.
Typically the maximum SNR of these spectra was $>5 {\rm \, pixel^{-1}}$.
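Schematically, the resampling procedure is a short Monte Carlo loop. In
the sketch below, {\tt fit\_redshift} is a placeholder for the
cross-correlation fit (we do not reproduce the idlspec2d code here); it
takes a flux array and returns a redshift and spectral type:
\begin{verbatim}
import numpy as np

def redshift_dispersion(flux, noise, fit_redshift,
                        n_trials=100, seed=0):
    # Perturb each pixel by its own noise, refit, and return the
    # dispersion of the best-fit redshifts as the uncertainty.
    rng = np.random.default_rng(seed)
    zs = []
    for _ in range(n_trials):
        trial = flux + rng.normal(0.0, noise)
        z, spectral_type = fit_redshift(trial)
        zs.append(z)
    return np.std(zs)
\end{verbatim}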
We did not identify any AGN in these clusters with $L_{X,H} \geq 10^{43}$
erg s$^{-1}$, although we did identify several lower-luminosity AGN in these
clusters. Data for the lower-luminosity X-ray sources are provided in
Table~\ref{tbl:xcagn} and include several sources with spectroscopic
measurements from the literature. The spectroscopic observations of Abell~1240
and MS1512.4+3647 are complete for all candidates that would have
$L_{X,H} \geq 10^{42}$ erg s$^{-1}$\ if at the cluster redshift, while the other four
clusters are not complete to this luminosity limit. We have also measured
redshifts, $R-$band magnitudes, and X-ray fluxes and luminosities for numerous
additional sources not associated with these clusters and their properties are
listed in Table~\ref{tbl:xsources}.
As with the high-redshift clusters, several of the low-redshift clusters do
not have direct velocity dispersion measurements. For Abell~1942 we estimated
this quantity from the X-ray temperature. For Abell~1240 \citet{xue00} quote
$kT = 3.83$ keV from \citet{mushotzky97}, but in fact the value in
\citet{mushotzky97} appears instead to be for Abell 1242. As we could not
identify another $T_X$ value in the literature, we used the measurement of
$L_{bol} = 2.71 \times 10^{44}$ erg s$^{-1}$\ from \citet{david99} and the relation
$\sigma = 10^{2.76} L_X^{0.19}$ derived by \citet{xue00} to estimate
the velocity dispersion.
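For reference, this estimate is a one-line calculation. The sketch below
assumes that $L_X$ enters the \citet{xue00} relation in units of $10^{44}$
erg s$^{-1}$, which reproduces the value adopted for Abell~1240 in
Table~\ref{tbl:fa}:
\begin{verbatim}
L_bol = 2.71                    # David et al. value, 1e44 erg/s
sigma = 10**2.76 * L_bol**0.19  # Xue & Wu (2000) scaling
print(round(sigma))             # ~695 km/s (cf. 698 km/s adopted)
\end{verbatim}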
\begin{deluxetable*}{llcccrrrrr}
\tablecolumns{10}
\tabletypesize{\footnotesize}
\tablewidth{7.0truein}
\tablecaption{Nonmember X-ray Sources \label{tbl:xsources}}
\tablehead{
\colhead{CXOU ID} &
\colhead{$z$} &
\colhead{$z$ ref} &
\colhead{$R$} &
\colhead{$R$ flag} &
\colhead{$f_{X,S}$} &
\colhead{$f_{X,H}$} &
\colhead{$f_{X,B}$} &
\colhead{log $L_{X,B}$} &
\colhead{log $L_{X,H}$} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)} &
\colhead{(10)}
}
\startdata
J112314.9+431208.3 & $0.08017\pm0.00010$ & 1 & $17.66\pm0.08$ & 0 & $8.4^{+1.5}_{-1.3}$ & $29.9^{+4.7}_{-4.1}$ & $30.1^{+3.4}_{-3.1}$ & $41.69^{+0.05}_{-0.04}$ & $41.50^{+0.05}_{-0.04}$ \\
J112357.4+431314.1 & 0.08007 & 2 & $19.46\pm0.08$ & 0 & $23.8^{+2.4}_{-2.2}$ & $32.8^{+4.9}_{-4.3}$ & $55.9^{+4.5}_{-4.2}$ & $44.51^{+0.03}_{-0.03}$ & $44.32^{+0.03}_{-0.03}$ \\
J112403.0+431330.6 & 1.1049 & 2 & $18.39\pm0.08$ & 0 & $22.2^{+2.5}_{-2.2}$ & $17.3^{+4.2}_{-3.6}$ & $44.5^{+4.4}_{-4.0}$ & $43.16^{+0.04}_{-0.04}$ & $42.98^{+0.04}_{-0.04}$ \\
J112413.1+430639.3 & $2.3666\pm0.0015$ & 1 & $19.80\pm0.08$ & 0 & $7.53^{+1.3}_{-1.1}$ & $7.56^{+2.5}_{-2.0}$ & $16.1^{+2.4}_{-2.1}$ & $44.73^{+0.07}_{-0.06}$ & $44.54^{+0.07}_{-0.06}$ \\
J143804.9+033752.6 & $0.29192\pm0.00030$ & 1 & $18.50\pm0.06$ & 0 & $5.22^{+1.1}_{-0.95}$ & $<4.0$ & $7.8^{+1.8}_{-1.5}$ & $42.32^{+0.10}_{-0.09}$ & $42.13^{+0.10}_{-0.09}$ \\
J143832.2+033506.0 & $1.0083\pm0.0051$ & 1 & $19.98\pm0.06$ & 0 & $65.7^{+3.2}_{-3.0}$ & $79.8^{+6.2}_{-5.8}$ & $149.3^{+6.0}_{-5.8}$ & $44.85^{+0.02}_{-0.02}$ & $44.66^{+0.02}_{-0.02}$ \\
J143833.0+033606.8 & $0.38252\pm0.00017$ & 1 & $19.40\pm0.06$ & 0 & $11.5^{+1.3}_{-1.2}$ & $18.3^{+3.1}_{-2.7}$ & $28.5^{+2.6}_{-2.4}$ & $43.15^{+0.04}_{-0.04}$ & $42.96^{+0.04}_{-0.04}$ \\
J143839.7+033631.3 & $2.1493\pm0.0019$ & 1 & $19.00\pm0.06$ & 0 & $16.1^{+1.7}_{-1.5}$ & $16.1^{+3.1}_{-2.6}$ & $34.8^{+3.1}_{-2.8}$ & $44.97^{+0.04}_{-0.04}$ & $44.79^{+0.04}_{-0.04}$ \\
J143841.9+034110.2 & 1.7372 & 2 & $17.82\pm0.06$ & 0 & $28.2^{+2.3}_{-2.2}$ & $29.5^{+4.3}_{-3.8}$ & $61.6^{+4.3}_{-4.1}$ & $45.01^{+0.03}_{-0.03}$ & $44.83^{+0.03}_{-0.03}$ \\
J143847.3+032950.8 & $0.00034\pm0.00012$ & 1 & $16.89\pm0.06$ & 0 & $16.3^{+2.0}_{-1.8}$ & $2.0^{+2.2}_{-1.6}$ & $26.9^{+3.3}_{-3.0}$ & & \\
J143859.0+033547.8 & 0.7339 & 2 & $18.51\pm0.06$ & 0 & $46.5^{+2.8}_{-2.7}$ & $7.1^{+6.1}_{-5.7}$ & $113.8^{+5.6}_{-5.3}$ & $44.41^{+0.02}_{-0.02}$ & $44.22^{+0.02}_{-0.02}$ \\
J145623.0+221833.5 & $0.00027\pm0.00010$ & 1 & $15.51\pm0.07$ & 0 & $9.0^{+1.5}_{-1.3}$ & $5.1^{+2.5}_{-2.0}$ & $17.1^{+2.6}_{-2.3}$ & & \\
J145624.5+222057.1 & $0.00019\pm0.00010$ & 1 & $15.45\pm0.07$ & 0 & $14.8^{+1.4}_{-1.3}$ & $11.3^{+2.5}_{-2.2}$ & $29.8^{+2.5}_{-2.4}$ & & \\
J145634.6+221514.2 & $0.40918\pm0.00010$ & 1 & $20.16\pm0.07$ & 0 & $25.8^{+1.7}_{-1.6}$ & $56.8^{+4.4}_{-4.1}$ & $73.3^{+3.7}_{-3.5}$ & $43.63^{+0.02}_{-0.02}$ & $43.45^{+0.02}_{-0.02}$ \\
J145657.7+221315.6 & $0.00016\pm0.00010$ & 1 & $14.87\pm0.07$ & 0 & $8.63^{+1.0}_{-0.93}$ & $3.6^{+1.4}_{-1.1}$ & $16.0^{+1.8}_{-1.6}$ & & \\
J145708.7+222352.4 & 0.1238 & 2 & $17.44\pm0.07$ & 0 & $2.27^{+0.60}_{-0.50}$ & $<3.4$ & $3.32^{+1.0}_{-0.86}$ & $41.14^{+0.13}_{-0.11}$ & $40.95^{+0.13}_{-0.11}$ \\
J145710.7+221844.9 & $1.885\pm0.0014$ & 1 & $18.73\pm0.07$ & 0 & $3.99^{+0.66}_{-0.57}$ & $5.3^{+1.4}_{-1.1}$ & $9.4^{+1.2}_{-1.1}$ & $44.28^{+0.06}_{-0.05}$ & $44.09^{+0.06}_{-0.05}$ \\
J145712.3+221446.7 & $-0.00069\pm0.00010$ & 1 & $15.15\pm0.07$ & 1 & $50.4^{+2.1}_{-2.1}$ & $15.3^{+2.2}_{-2.0}$ & $90.2^{+3.6}_{-3.5}$ & & \\
J145721.0+222334.5 & $1.7362\pm0.0010$ & 1 & $19.33\pm0.07$ & 1 & $9.4^{+1.1}_{-1.0}$ & $7.0^{+1.9}_{-1.6}$ & $18.9^{+2.0}_{-1.9}$ & $44.50^{+0.05}_{-0.04}$ & $44.32^{+0.05}_{-0.04}$ \\
J145726.9+221755.1 & $1.4664\pm0.0011$ & 1 & $19.55\pm0.07$ & 0 & $23.6^{+1.7}_{-1.6}$ & $33.0^{+3.4}_{-3.1}$ & $56.3^{+3.2}_{-3.1}$ & $44.81^{+0.03}_{-0.02}$ & $44.62^{+0.03}_{-0.02}$ \\
J151427.0+363803.1 & 0.1616 & 2 & $16.90\pm0.06$ & 0 & $2.28^{+0.61}_{-0.49}$ & $1.8^{+1.9}_{-1.1}$ & $4.82^{+1.2}_{-0.99}$ & $41.53^{+0.11}_{-0.09}$ & $41.35^{+0.11}_{-0.09}$ \\
J151428.4+363743.5 & 0.4026 & 3 & $20.13\pm0.06$ & 0 & $7.70^{+1.0}_{-0.92}$ & $14.1^{+3.7}_{-3.0}$ & $18.9^{+2.2}_{-2.0}$ & $43.01^{+0.05}_{-0.05}$ & $42.83^{+0.05}_{-0.05}$ \\
J151437.5+364041.3 & 0.1468 & 3 & $19.86\pm0.06$ & 0 & $12.1^{+1.2}_{-1.1}$ & $13.5^{+3.5}_{-2.9}$ & $26.9^{+2.4}_{-2.3}$ & $42.19^{+0.04}_{-0.04}$ & $42.01^{+0.04}_{-0.04}$ \\
J153938.1+662102.4 & 0.4375 & 4 & $19.71\pm0.08$ & 0 & $5.02^{+1.2}_{-0.98}$ & $6.5^{+2.9}_{-2.4}$ & $11.7^{+2.3}_{-2.1}$ & $42.90^{+0.09}_{-0.08}$ & $42.71^{+0.09}_{-0.08}$ \\
J154012.3+661439.2 & $1.0577\pm0.0029$ & 1 & $19.75\pm0.08$ & 0 & $37.2^{+1.9}_{-1.8}$ & $41.9^{+3.7}_{-3.4}$ & $83.1^{+3.7}_{-3.5}$ & $44.64^{+0.02}_{-0.02}$ & $44.46^{+0.02}_{-0.02}$
\enddata
\tablecomments{Nonmember X-ray sources. Columns are:
Col (1) Name of X-ray source;
Col (2) Redshift;
Col (3) References for redshift are: 1: this work; 2: SDSS \citep{adelman-mccarthy08}; 3: \citet{abraham98}; 4: \citet{miller04};
Col (4) $R-$band magnitude;
Col (5) Flags for photometry are: (0) no flag; (1) may be contaminated by
nearby neighbors or bad pixels;
Cols (6--8) Soft [0.3--2 keV], Hard [2--8 keV], and Broad [0.3--8 keV] band
flux in the observed frame in units of
$10^{-15} {\rm \, erg \, s^{-1} cm^{-2}}$. Upper limits are $3\sigma$ limits.
Cols (9--10) Log of the Broad [0.3--8 keV] and Hard [2--10 keV] band luminosity
in the rest-frame in units of erg s$^{-1}$\ corrected for Galactic absorption. We do
not quote luminosities for X-ray sources identified with Galactic stars
($z\sim0$).
}
\end{deluxetable*}
\section{Cluster X-ray AGN Fraction} \label{sec:fa}
We require two quantities to estimate the AGN fraction in these clusters:
the number of AGN above our hard X-ray luminosity threshold hosted by galaxies
with $M_R < M_R^*(z)+1$ and the total number of cluster galaxies above this
magnitude threshold. For our low-redshift
cluster sample, we have complete data to our X-ray threshold and
reasonably complete data for the other cluster galaxies for about half of the
clusters. For the high-redshift sample we have incomplete knowledge of both
quantities. The AGN sample is likely incomplete because of spectroscopic
incompleteness in the ChaMP and SEXSI surveys. The census of other cluster
galaxies is very incomplete because few very high redshift clusters have the
same quality membership data as our low-redshift sample. In the first three
subsections below we describe the choice of the fiducial absolute magnitude
threshold, our estimate of the completeness of the spectroscopic observations
of X-ray sources, and the total number of cluster galaxies in the clusters
with incomplete membership data. The fourth subsection describes our main
result, the measurement of the AGN fraction and its evolution. The final
two subsections describe potential contamination by AGN associated with
large-scale structure around these clusters and other sources of
uncertainty, respectively.
\subsection{Host galaxy magnitude threshold} \label{galaxy}
In previous work we defined the AGN fraction in clusters relative to
galaxies more luminous than an $R-$band absolute magnitude of $M_R =
-20$ mag \citep[e.g.,][]{martini06}. This choice of magnitude threshold was
largely driven by expedience, namely it corresponded to the completeness
limit for the most distant clusters in that sample. To properly extend this
work to high redshift it is important to account for the evolution of the
galaxy population in clusters, both in luminosity and number. These were
not significant effects in our low-redshift study as the highest-redshift
cluster was at only $z=0.31$, but in the \citet{eastman07} study at
$z\sim0.6$ the $M_R = -20$ mag cutoff corresponded to a fainter
absolute magnitude relative to $M_R^*$. Because a fainter threshold admits
more of the cluster galaxy population, this would have led to a lower
estimate of the AGN fraction if the
cluster AGN are predominantly associated with the most luminous galaxies, as
is the case at low redshifts \citep{sivakoff08}.
Here we adopt an absolute magnitude threshold of $M_R^*(z) + 1$,
and thus allow for evolution of $M_R^*$. At low redshifts ($0.01 < z < 0.07$)
\citet{christlein03} measured the $R-$band luminosity function (LF) for six
nearby clusters\footnote{Two of these clusters (Abell~85 and Abell~754) are
in our low-redshift sample \citep{sivakoff08}.} and found that the composite
cluster LF is consistent with a Schechter
function with $M_R^* = -21.92 \pm 0.17$ mag ($h=0.7$, $\alpha = -1.21$).
They also find an essentially identical value of $M_R^* = -21.93$
mag for the field. The low-redshift value of $M_R^* + 1$ is therefore
about one magnitude brighter than the value of $M_R = -20$ mag we adopted in
our previous, low-redshift studies \citep{martini06,sivakoff08}. For
comparison, \citet{blanton03} measured $M^* = -21.22$ ($\alpha = -1.05$)
at $z=0.1$ for the $r^{0.1}$ band on the AB system. This corresponds to
$M_R^* = -21.72$ mag on the Vega system for the $R-$band at $z=0$ based on
the conversions presented in \citet{blanton07} and is therefore consistent with
\citet{christlein03}.
Many recent studies have measured the evolution of $M_R^*$ and generally
these measurements include both a value for all galaxies and separate
measurements for particular spectroscopic types. This has relevance for
our study as the cluster galaxy population is on average more quiescent than
field galaxies and consequently their evolutionary history is different.
We are most interested in measurements of the evolution of $M_R^*$ as a
function of spectral type to isolate the evolution of galaxies dominated
by older stellar
populations that are most likely representative of the evolution of cluster
galaxies. A useful, low-redshift benchmark for a type-dependent LF for
clusters comes again from \citet{christlein03}. They find $M_R^* = -21.78$ mag
for quiescent galaxies in clusters, which is nearly identical to the
value for all cluster members. For field galaxies \citet{chen03} use
photometric redshifts in the Las Campanas Infrared Survey
and measure values of $-21.70$ to $-22.22$ mag ($\alpha = -1$) for all galaxies
over the range $0.5 < z < 1.5$ and values of $-21.21$ to $-21.82$ mag
($\alpha = -0.2$) for galaxies consistent with an E/S0 + Sab spectral
template. \citet{wolf03} use photometric redshifts from COMBO-17 and measure
more pronounced evolution for their early-type spectral template with
$M_R^*$ fading by $\sim 1$ mag from $z \sim 1.1$ to $z \sim 0.3$.
More recently, \citet{ilbert05} measure a fading of $1.1 - 1.8$ mag between
$z\sim 2$ and $z \sim 0.1$ in the $R-$band based on spectroscopic redshifts,
although they do not present the evolution as a function of spectral type.
These measurements of evolution in $M_R^*$ are broadly comparable to the
1.2 mag of fading from $z=1$ to the present expected from pure luminosity
evolution of a single stellar population with $z_f = 2$ and solar metallicity
\citep{bruzual03}.
Direct measurements of evolution of the cluster LF have mostly been conducted
in the rest-frame $B-$band. \citet{goto05} find $M_B^* = -21.13$ mag for
MS1054-03 ($z=0.83$), which is in our sample, and similar to the
$M_B^* = -21.15$ mag measured for three clusters at an average $z = 0.859$
by \citet{postman01}. In comparison to local $B-$band measurements of the
cluster LF \citep[e.g.][]{colless89,rauzy98}, \citet{goto05} conclude that
$M_B^*$ fades by $0.46$ to $0.71$ mag between $z=0.83$ and $z=0$. For the
same simple stellar population model considered above \citep{bruzual03},
1.2 mag of fading in $B-$band is expected from $z=0.83$ to the present.
While there is not a direct measurement in the rest-frame $R-$band for the
cluster LF, at yet longer wavelengths \citet{ellis04} find that the fading
in the $K-$band is 1.2 mag from $z=0.9$ to the present and consistent with
passive evolution and a formation epoch at $z_f=2$. From these investigations
of the LF evolution in the field and clusters, we adopt the assumption that
$M_R^*(z) = M_R^*(0) - z$ and the normalization for $M_R^*$ from
\citet{christlein03} for all cluster galaxies to estimate the completeness of
the spectroscopy
of X-ray counterparts and the size of the galaxy population in low-redshift
clusters. This result is broadly consistent with all of the results described
here, although it is most consistent with the studies that predict more
fading. If there is less fading of galaxies at the bright end of the LF, such
as may be due to some low-level star formation in these galaxies, then
the completeness limits we describe next are too bright and we will have
systematically underestimated the population of luminous AGN in the
higher-redshift clusters.
\subsection{Completeness} \label{sec:completeness}
We calculate a completeness limit in the observed $R-$band for each cluster
based on the value of $M_R^*(z) + 1$ and a $k-$correction derived from
the elliptical template of the four-component spectral template presented by
\citet{assef08}.
These templates are derived from 16,033 galaxies with spectroscopic redshifts
and multiband photometry from the AGN and Galaxy Evolution Survey. Most
of the galaxies are in the range $0 < z < 1$ and the median redshift is 0.31.
The parent sample is therefore broadly representative of our redshift range.
For the higher-redshift clusters the $k-$correction requires a substantial
extrapolation from the observed $R-$band, which for example samples rest-frame
$B-$band at $z=0.5$. Our assumption that the typical cluster galaxies are
best approximated by an elliptical template is certainly reasonable for the
low-redshift clusters. This may not be as good an approximation at higher
redshifts, although in a study of the color-magnitude relation in our two
highest-redshift clusters (Lynx E and W) \citet{mei09} found there is no
evidence for significant evolution. If a later-type template were a better
choice for the $k-$correction at higher redshift, the $k-$correction
would be smaller and the necessary $R-$band spectroscopic limit would be
brighter. The net effect would be a smaller completeness correction.
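The completeness limit is then just the apparent magnitude corresponding
to $M_R^*(z)+1$. A minimal sketch, assuming a flat $\Lambda$CDM cosmology
with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and treating the template
$k$-correction as an externally supplied number (the \citet{assef08}
templates are not reproduced here), is:
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # assumed cosmology

def r_limit(z, MR_star0=-21.92, kcorr=0.0):
    # M_R^*(z) = M_R^*(0) - z as adopted in the text; kcorr is the
    # elliptical-template k-correction at redshift z.
    MR = MR_star0 - z + 1.0
    return MR + cosmo.distmod(z).value + kcorr

print(r_limit(0.5, kcorr=1.0))  # illustrative value only
\end{verbatim}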
The spectroscopic completeness of the high-redshift AGN sample largely depends on
the completeness of the ChaMP and SEXSI surveys, although we also use
additional spectra for MS 2053.7-0449, MS 1054-03, and RDCS J0910+5422.
The ChaMP survey quotes a spectroscopic completeness of 77\% for $R<22.37$ mag
\citep{silverman05a} and the SEXSI survey quotes a spectroscopic
completeness of 61\% for sources with $22 < R < 23$ mag, 67\% for
sources with $23 < R < 24$, and 74\% for sources with $R>24$ mag (typically
to 24.4 mag) \citep{eckart06}. For the ChaMP data we adopt 77\% as the
completeness correction for $R<22.37$ mag, while for the SEXSI survey we adopt
an average completeness correction of 67\% for $R<24.4$ mag. For nearly all
of the clusters above $z>0.6$ the spectroscopic data do not extend to the
equivalent of $M_R^*(z) + 1$ and the size of the magnitude range without
spectra ranges from a few tenths to over a magnitude. To estimate the
number that may have been missed we inspected the host galaxy absolute
magnitude distribution of the $L_{X,H} \geq 10^{43}$ erg s$^{-1}$\ AGN in the
clusters with complete data and find only one AGN fainter than $M_R^*$.
The distribution in $M_R$ of the X-ray AGN is shown in Figure~\ref{fig:mr}.
We therefore assume that we have not missed any AGN because the spectroscopic
observations of X-ray sources did not have the requisite depth, although this
assumption may have led us to underestimate the AGN fraction at high
redshift. In contrast, if our assumption of an early-type template for the
$k-$correction was too red, then the spectroscopic data do achieve the
requisite depth and this concern does not apply. At brighter apparent magnitudes
we do apply a completeness correction to account for the quoted 77\% and
67\% completeness of the surveys. We discuss this further in
\S\ref{sec:evolve} below.
\begin{figure}
\plotone{martini.fig1.eps}
\caption{Distribution in absolute magnitude $M_R$ of the cluster AGN
relative to $M_R^*(z)+1$ at their redshift. All of the cluster AGN are
substantially brighter than $M_R^*(z)+1$, although in most cases the
spectroscopy is complete to this limit. The subset that are classified as
BLAGN are represented by the hatched histogram. The dotted line
corresponds to our galaxy luminosity threshold at $M_R^*(z)+1$.
\label{fig:mr}
}
\end{figure}
The X-ray AGN populations of several of these clusters have been studied
in previous work. The first substantial study of spectroscopically-confirmed
X-ray AGN in a high-redshift cluster was by \citet{johnson03} in MS1054-03. They
identified 2 AGN associated with this cluster: CXOU J105702.7-033943 and
CXOU J105710.6-033500; however, neither of these are included in the present
sample. The first was not included because the X-ray luminosity is below our
threshold of $10^{43}$ erg s$^{-1}$\ and the second because it falls slightly outside
the projected virial radius ($R/R_{200} = 1.2$). \citet{martel07} have
also studied X-ray sources in clusters, including three clusters that overlap
this sample. They are discussed further in \S\ref{sec:hosts} below.
\subsection{Inactive Cluster Galaxy Population} \label{sec:richness}
To estimate the AGN fraction in these clusters we need to know the
number of cluster galaxies more luminous than $M_R^*(z) + 1$. We
estimate this quantity in two ways, depending on the available data
for the clusters. For the low-redshift clusters in our previous studies
\citep{martini06,martini07,sivakoff08} we have a large number of
spectroscopically-confirmed cluster members and can estimate the number
of cluster galaxies either directly or with a completeness correction.
We have calculated new estimates for these clusters for the present paper
because we no longer use the $M_R=-20$ mag threshold of the previous
work. These values are listed in Table~\ref{tbl:fa}.
For essentially all of the new clusters in the present study we employ
the same technique as \citet{eastman07} to estimate the number of
cluster members above $M_R^*(z) + 1$ from the cluster velocity dispersion.
This employs the richness--velocity dispersion relationship
defined by \citet{koester07} for the MaxBCG cluster sample. The cluster
richness $N_{gal}^{R200}$ is the number of red (E/S0) cluster members more
luminous than $0.4L^*$ within the projected $R_{200}$ radius. This relationship
was originally derived from a sample of 13,823 clusters with $0.1 < z < 0.3$
in the SDSS with velocity dispersions greater than $\sim 400$ km s$^{-1}$.
\citet{becker07} provide the most recent estimate of this relation based on
a larger sample that extends over both a broader redshift range and to lower
velocity dispersion groups. They find $\ln \sigma = (6.17 \pm 0.04) +
(0.436 \pm 0.015) \ln N_{gal}^{R200}/25$. For reference a 520 km s$^{-1}$\ cluster
has $N_{gal}^{R200} = 30$.
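Inverting the \citet{becker07} relation for the expected richness at a
given velocity dispersion is straightforward; a minimal sketch:
\begin{verbatim}
import numpy as np

def richness(sigma, a=6.17, b=0.436):
    # Invert ln(sigma) = a + b ln(N/25) for N.
    return 25.0 * np.exp((np.log(sigma) - a) / b)

print(richness(520.0))  # ~30, matching the reference value quoted
\end{verbatim}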
There are several caveats that need to be considered with the use of this
estimator.
First, the richness--velocity dispersion relationship is based on photometric
and not spectroscopic redshifts. This is not a significant concern because
for red cluster galaxies the photometric redshift estimates are robust within
the quoted uncertainties. The second concern is that this relationship is
based on the red cluster galaxies alone. At low redshifts this estimate is a
reasonable approximation as the vast majority of cluster galaxies more
luminous than $M_R^* + 1$ fall in this category. For example, the fraction
of quiescent galaxies above this luminosity in the composite LF of
\citet{christlein03} is $\sim 85$\%. While their definition of quiescence is
based on spectral lines rather than color, these two definitions of
quiescence typically agree when averaged over a cluster. At higher redshifts
a larger fraction
of the cluster galaxies may be blue due to ongoing star formation, but this
cannot be a substantial contribution because the luminosity-weighted mean
star formation epoch is $z = 2$ for early-type cluster galaxies up to $z=0.5$
\citep{vandokkum07}. \citet{becker07} do find evidence of evolution
in this relationship in the sense of lower richness at fixed velocity
dispersion in higher redshift clusters, but they note that this may be due
to their strict color selection. In addition, for our accounting of the
inactive galaxy population the color of the galaxies does not matter so
long as they are in the cluster and above the luminosity threshold.
Observations of individual
clusters with extensive spectroscopic data support the assumption that
there is no substantial evolution in the relation between halo occupation
number and cluster mass \citep{muzzin07b}. This is also supported by
several theoretical studies that find minimal evolution in the number of
bright galaxies in massive halos \citep{kravtsov04,zentner05}.
\begin{figure}
\plotone{martini.fig2.eps}
\caption{Difference between predicted and measured cluster richness as a
function of the richness predicted from the MaxBCG relation. The quantity
$N_{gal}^{R200}$ is the number of red cluster galaxies more luminous than
$0.4L^*$ and estimated from the cluster velocity dispersion \citep{becker07},
while $N_{spec}$ is a spectroscopic estimate of this quantity (see
\S\ref{sec:richness}). Symbols are coded according to the spectroscopic
completeness relative to $R_{200}$. Large circles have complete coverage
to $R_{200}$, medium circles have more than 50\% coverage, and the
small circles
have less than 50\% coverage. Most clusters are at $z<0.5$ ({\it open
symbols}), although substantial data exist for three at $z>0.5$ ({\it filled
symbols}). See \S\ref{sec:richness} for further details.
\label{fig:rich}
}
\end{figure}
We performed an independent validation of the MaxBCG relation with an analysis
of the individual clusters in our sample with substantial membership data.
While most
of the low-redshift clusters have substantial membership data, these data
generally do not extend to our estimate of $R_{200}$ \citep{martini07},
nor is the X-ray coverage complete to this radius. Our spectroscopic coverage
was often limited to the size of the {\it Chandra}\ field of view. However, two
useful exceptions are Abell 89B and MS1008.1-1224, and in both cases the
estimates agree to within a factor of two. Our wide-field X-ray coverage of
Abell 85 and Abell 754 \citep{sivakoff08} were designed to sample a
substantial fraction of the projected $R_{200}$ and these values also agree
well. Figure~\ref{fig:rich} illustrates the difference between the MaxBCG
membership estimates and our spectroscopic estimates. The larger points
have nearly complete spectroscopic coverage to $R_{200}$, while smaller
points are substantially more incomplete. These points indicate that the
error introduced by adopting the MaxBCG relation is approximately a
factor of two. This error estimate is also consistent with an examination
of figure 4 of \citet{becker07}.
At higher redshifts three of our clusters have extensive membership
information. We estimate that MS0015.9+1609 has $\sim 200$ members based on
several studies \citep{dressler92,ellingson98} and that MS2053.7-0449
has $\sim 100$ members \citep{tran05}. Note that these
estimates are different from those presented in \citet{eastman07} due to
updated completeness corrections and the change in the absolute magnitude
threshold. For MS 1054-03 we estimate that there are $\sim 300$ members from
the extensive spectroscopic work of \citet{tran07}. These three clusters
are also shown in Figure~\ref{fig:rich} ({\it filled circles}). They
are consistent with the low-redshift results and a factor of two uncertainty
in the richness -- velocity dispersion relation. While our estimates of
the cluster galaxy population for these three clusters, as for the low-redshift
clusters, are based on all galaxies rather than just red galaxies, the
consistency supports the assumption that the integral of the bright end of
the galaxy luminosity function in clusters above an evolving $M_R$ threshold
scales reasonably well with the cluster velocity dispersion independent of
redshift, even if there is evolution in the colors of the cluster galaxies.
The number of AGN, estimate of the inactive population, AGN fraction,
and spectroscopic completeness for each cluster are listed in
Table~\ref{tbl:fa}.
\subsection{Cluster AGN Fraction and Evolution} \label{sec:evolve}
\begin{figure}
\plotone{martini.fig3.eps}
\caption{Evolution of the AGN population in clusters from $z=0$ to $z=1.3$
({\it filled symbols}). The fraction of cluster members more luminous than
$M_R^* + 1$ with AGN that have $L_{X,H} > 10^{43}$ erg s$^{-1}$ is shown in two
redshift bins ($z<0.4$, $z>0.4$; {\it filled circles}) and three redshift bins
($z<0.3$, $0.3<z<0.6$, $z>0.6$; {\it filled triangles}).
We also show our estimate of the field AGN fraction based on the galaxy LF
estimates by \citet[][{\it open triangles}]{ilbert05}, \citet[][{\it open
circles}]{dahlen05}, and \citet[][{\it open squares}]{chen03}. See
\S\ref{sec:evolve} for further details.
\label{fig:faz}
}
\end{figure}
\begin{deluxetable}{llrccclc}
\tablecolumns{8}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{AGN Fraction Estimates and Cluster Membership\label{tbl:fa}}
\tablehead{
\colhead{Cluster} &
\colhead{$z$} &
\colhead{$\sigma$} &
\colhead{$N_{AGN}$} &
\colhead{$N_{gal}$} &
\colhead{Flag} &
\colhead{$f_{A,raw}$ [\%]} &
\colhead{$f_{spec}$} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)}
}
\startdata
Abell754& 0.0546 & 953 & 1 & 82 & 1 & $1.2^{+2.8}_{-1.0}$ & 1.00 \\
Abell85& 0.0554 & 993 & 0 & 53 & 1 & $<2.2$ & 1.00 \\
Abell3128& 0.0595 & 906 & 0 & 28 & 1 & $<4.1$ & 1.00 \\
Abell3125& 0.0616 & 475 & 0 & 15 & 1 & $<7.7$ & 1.00 \\
Abell644& 0.0701 & 952 & 0 & 40 & 1 & $<2.9$ & 1.00 \\
Abell89B& 0.0770 & 474 & 0 & 12 & 1 & $<9.6$ & 1.00 \\
Abell2104& 0.1544 & 1242 & 1 & 54 & 1 & $1.9^{+4.3}_{-1.5}$ & 1.00 \\
Abell1240& 0.1590 & 698 & 0 & 28 & 2 & $<4.1$ & 1.00 \\
Abell1689& 0.1867 & 1400 & 0 & 184 & 1 & $<0.62$ & 1.00 \\
Abell2163& 0.2007 & 1381 & 0 & 262 & 1 & $<0.44$ & 1.00 \\
Abell1942& 0.2240 & 905 & 0 & 65 & 2 & $<1.8$ & 1.00 \\
Abell2125& 0.2465 & 1113 & 0 & 127 & 2 & $<0.90$ & 1.00 \\
MS1455.0+2232& 0.2578 & 1032 & 0 & 99 & 2 & $<1.2$ & 1.00 \\
MS1008.1-1224& 0.3068 & 1127 & 0 & 216 & 1 & $<0.53$ & 1.00 \\
AC114& 0.3148 & 1388 & 0 & 121 & 1 & $<0.95$ & 1.00 \\
ZwCl1358.1+6245&0.328 & 1003 & 0 & 91 & 2 & $<1.3$ & 1.00 \\
MS1512.4+3647& 0.372 & 575 & 0 & 15 & 2 & $<7.7$ & 1.00 \\
MS1621.5+2640& 0.430 & 735 & 0 & 65 & 2 & $<1.8$ & 0.67 \\
3C295& 0.460 & 1642 & 2 & 412 & 2 & $0.49^{+0.64}_{-0.31}$ & 0.67 \\
MS0451.6-0305& 0.538 & 1371 & 0 & 273 & 2 & $<0.42$ & 0.77 \\
MS0015.9+1609& 0.541 & 1234 & 1 & 214 & 2 & $0.47^{+1.1}_{-0.39}$ & 0.77 \\
RXJ0848.7+4456& 0.574 & 895 & 1 & 102 & 2 & $0.98^{+2.3}_{-0.81}$ & 0.67 \\
MS2053.7-0449& 0.583 & 865 & 0 & 95 & 2 & $<1.2$ & 1.00 \\
RXJ0542.8-4100& 0.634 & 1269 & 5 & 229 & 2 & $2.18^{+1.5}_{-0.94}$ & 0.77 \\
RXJ2302.8+0844& 0.722 & 658 & 0 & 50 & 2 & $<2.3$ & 0.77 \\
MS1137.5+6625& 0.782 & 885 & 1 & 100 & 2 & $1.00^{+2.3}_{-0.83}$ & 0.77 \\
RX J1317.4+2911&0.805 & 1142 & 1 & 179 & 2 & $0.56^{+1.3}_{-0.46}$ & 0.67 \\
RXJ1716.4+6708& 0.813 & 1445 & 3 & 308 & 2 & $0.97^{+0.95}_{-0.53}$ & 0.92 \\
MS 1054-03& 0.823 & 1156 & 1 & 184 & 2 & $0.54^{+1.2}_{-0.45}$ & 0.77 \\
RDCS J0910+5422&1.110 & 675 & 1 & 53 & 2 & $1.9^{+4.3}_{-1.6}$ & 0.67 \\
Lynx E& 1.261 & 740 & 1 & 66 & 2 & $1.5^{+3.5}_{-1.3}$ & 0.67 \\
Lynx W& 1.270 & 650 & 1 & 49 & 2 & $2.0^{+4.7}_{-1.7}$ & 0.67
\enddata
\tablecomments{
AGN fraction estimates for individual clusters. Columns are:
Col. (1): Cluster name;
Col. (2): Redshift;
Col. (3): Velocity dispersion (references for these values are in
Table~\ref{tbl:highz}, Table~\ref{tbl:lowz}, \citet{sivakoff08} for
Abell 754, Abell 85, Abell 89B, \citet{martini06} for Abell 3128, Abell 3125,
Abell 644, Abell 2104, Abell 2163, and MS1008.1-1224, or adopted from
\citet{czoske04} for Abell 1689 and \citet{girardi01} for AC 114);
Col. (4): Number of AGN with $L_{X,H} \geq 10^{43}$ erg s$^{-1}$ in galaxies
more luminous than $M_R^*(z) + 1$;
Col. (5): Estimate of the number of cluster galaxies more luminous than
$M_R^*(z) + 1$ within either the {\it Chandra}\ FOV or $R_{200}$, whichever is
smaller;
Col. (6): Flag for the origin of the estimate where 1: from
our spectroscopy and completeness correction; 2: from
the MaxBCG as described in \S\ref{sec:richness};
Col. (7): Estimate of the cluster AGN fraction in percent;
Col. (8): Estimate of the spectroscopic completeness for X-ray sources.
}
\end{deluxetable}
\begin{deluxetable*}{llrlrrrlll}
\tablecolumns{10}
\tablewidth{7.0truein}
\tabletypesize{\scriptsize}
\tablecaption{AGN Fraction for Subsamples of the Clusters\label{tbl:fabin}}
\tablehead{
\colhead{Sample} &
\colhead{$z$ range} &
\colhead{$N_{CL}$} &
\colhead{median $z$} &
\colhead{median $\sigma$} &
\colhead{$N_{A,raw}$} &
\colhead{$N_{gal}$} &
\colhead{$f_{A,raw}$ [\%]} &
\colhead{$f_{spec}$} &
\colhead{$f_{A,corr}$ [\%]} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)} &
\colhead{(10)}
}
\startdata
Two Bins & & & & & & & & & \\
& $z<0.4$ & 17 & 0.19 & 993 & 2 & 1492 & $0.134^{+0.18}_{-0.087}$ & 1.00 & $0.134^{+0.18}_{-0.087}$ \\
& $z>0.4$ & 15 & 0.72 & 895 & 18 & 2379 & $0.76^{+0.22}_{-0.18}$ & 0.76 & $1.00^{+0.29}_{-0.23}$ \\
& & & & & & & & & \\
Three Bins & & & & & & & & & \\
& $z<0.3$ & 13 & 0.15 & 953 & 2 & 1049 & $0.19^{+0.25}_{-0.12}$ & 1.00 & $0.19^{+0.25}_{-0.12}$ \\
& $0.3<z<0.6$ & 10 & 0.45 &1065 & 4 & 1604 & $0.25^{+0.20}_{-0.12}$ & 0.81 & $0.31^{+0.24}_{-0.15}$ \\
& $z>0.6$ & 9 & 0.81 & 885 & 14 & 1218 & $1.15^{+0.39}_{-0.30}$ & 0.78 & $1.47^{+0.50}_{-0.39}$
\enddata
\tablecomments{
Cluster AGN fractions with the data split into two bins and three bins.
The two bins are split at $z=0.4$, while the three bins split the data
at $z=0.3$ and $z=0.6$. Columns are:
Col. (1): sample;
Col. (2): redshift range;
Col. (3): number of clusters;
Col. (4): median redshift;
Col. (5): median velocity dispersion of clusters;
Col. (6): sum of the luminous AGN in the bin;
Col. (7): estimate of the number of cluster galaxies in the bin;
Col. (8): raw AGN fraction with double-sided, $1\sigma$ confidence limits;
Col. (9): estimate of the mean spectroscopic completeness weighted by the
number of galaxies per cluster;
Col. (10): AGN fraction corrected for spectroscopic completeness.
}
\end{deluxetable*}
The AGN fraction for any single cluster is very small and uncertain due
to small-number statistics. In addition, the AGN fraction may vary from
cluster to cluster due to correlations with other cluster properties such as
velocity dispersion \citep[][]{sivakoff08}. The AGN fraction may also depend on
variations in the properties of the galaxy population within each cluster
(e.g., mass, SFR, morphology). We therefore have binned the
cluster sample in two ways to characterize variations with redshift. First, we
simply split the sample at $z=0.4$. This choice is primarily motivated by
the transition between where we rely on our own measurements and where we
largely rely on other work. It also approximately divides the sample in two
(17 clusters at $z<0.4$, 15 at $z>0.4$).
This yields completeness-corrected AGN fractions of $f_A(z=0.19) =
0.00134^{+0.0018}_{-0.00087}$ and $f_A(z=0.72) = 0.0100^{+0.0029}_{-0.0023}$,
or approximately a factor of eight increase in the AGN fraction (see
Table~\ref{tbl:fabin}) from a median redshift of 0.19 to a median redshift of
0.72. AGN fractions without the completeness correction are also listed in
Table~\ref{tbl:fabin}. The uncertainties on these quantities are double-sided,
$1\sigma$ confidence limits \citep{gehrels86}. The increase in the AGN fraction is
formally significant at the $3.8\sigma$ level. We also split the
sample into three bins with $z<0.3$, $0.3 < z < 0.6$, and $z>0.6$ to
better resolve the continued increase at high redshift that is apparent
in the raw data for individual clusters. This binning yields AGN fractions
of $f_A(z=0.15) = 0.0019^{+0.0025}_{-0.0012}$, $f_A(z=0.45) =
0.0031^{+0.0024}_{-0.0015}$, and $f_A(z=0.81) = 0.0147^{+0.0050}_{-0.0039}$.
The measured evolution between the lowest and highest bins is also a factor of
eight and in good agreement with the other binning scheme. We note that the
observed evolution is also well fit by a simple power law scaling as
$f_A \propto (1+z)^{\alpha}$ where $\alpha = 5.3^{+1.8}_{-1.7}$, although the
power-law index is strongly correlated with the $z=0$ value of the AGN
fraction.
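For reproducibility, the quoted limits follow from the approximate
analytic expressions of \citet{gehrels86}. The sketch below reflects our
reading of those approximations, applied to the $z<0.4$ bin of
Table~\ref{tbl:fabin}:
\begin{verbatim}
import numpy as np

def gehrels_limits(n, s=1.0):
    # Approximate double-sided Poisson limits for n counts at
    # significance s (in sigma); Gehrels (1986) approximations.
    upper = n + s * np.sqrt(n + 1.0) + (s**2 + 2.0) / 3.0
    if n > 0:
        lower = n * (1.0 - 1.0/(9.0*n) - s/(3.0*np.sqrt(n)))**3
    else:
        lower = 0.0
    return lower, upper

n_agn, n_gal = 2, 1492  # z < 0.4 bin
lo, hi = gehrels_limits(n_agn)
print(n_agn / n_gal,            # 0.00134
      (hi - n_agn) / n_gal,     # +0.0018
      (n_agn - lo) / n_gal)     # -0.00087
\end{verbatim}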
The factor of eight evolution of the AGN fraction is smaller than, but
consistent with, the order-of-magnitude evolution observed by \citet{eastman07}. They
measured $f_A(z=0.2) = 0.0007^{+0.0021}_{-0.0007}$ and $f_A(z=0.6) =
0.020^{+0.012}_{-0.008}$ for $L_{X,H} > 10^{43}$ erg s$^{-1}$, although for a lower
and fixed galaxy absolute magnitude of $M_R = -20$. At $z=0$ our galaxy
absolute magnitude threshold is approximately a magnitude brighter than that
used by \citet{eastman07} and the offset increases linearly with redshift.
This difference in absolute magnitude threshold can readily account for the
change in the low-redshift fraction because most of the AGN are associated
with luminous cluster galaxies; that is, increasing the galaxy luminosity
threshold decreases the denominator and does not affect the numerator of the
AGN fraction. In addition, we have since identified a second luminous AGN at
low redshift \citep{sivakoff08}. At high redshift the change in galaxy
luminosity threshold is also important, but in addition the cluster sample is
more than three times larger than the \citet{eastman07} sample. The
low-redshift cluster sample has increased by less than a factor of two.
One way to characterize the evolution of the cluster AGN fraction relative
to the field is to calculate the integral of the field space density
$\Phi(L_{X,H}>10^{43})$ as a function of redshift. Integration of the
luminosity-dependent density evolution model in \citet{ueda03} yields a
factor of five increase between $z=0.2$ and $z=0.8$, which is somewhat less
than, but consistent with, the observed evolution of cluster AGN.
However, this is not a fair comparison because the evolution of field
AGN with $\Phi(L_{X,H}>10^{43})$ is not normalized by the evolution of all
field galaxies brighter than $M_R^* + 1$ and the cluster AGN fraction is.
While there is not a direct measurement of the field AGN fraction similar to
our calculation for clusters \citep[although see][]{lehmer07}, we can estimate
this quantity by dividing the integral of the field hard X-ray LF from
\citet{ueda03} by the integral of the galaxy LF. We have identified three
surveys that report LF measurements for the $R-$band and approximately span
the same redshift range of this work. The first of these is the VIMOS-VLT Deep
Survey \citep{ilbert05}, which is based on $UBVRI$ photometry, $\sim 11,000$
spectra to $I_{AB} = 24$ mag and extends from $z=0.05$ to $z=2$ \citep[although
their lowest-redshift point is taken from SDSS;][]{blanton03}. We also show
results from two measurements based on photometric redshift data: the Las
Campanas Infrared Survey \citep[LCIRS;][]{chen03}, which is mostly based on
$UBVRIH$ measurements and presents the LF for $z=0.5-1.5$, and the Great
Observatories Origins Deep Survey \citep[GOODS;][]{dahlen05}, which is based on
$U$ through $K$ observations and presents the galaxy LF to $z=2$. While these
photometric redshift surveys may have more systematic uncertainties than the LF
based on spectroscopic measurements, they have the virtue that they have
measured the luminosity function in the rest-frame $R-$band, rather than relied
on assumptions about galaxy spectral energy distributions (SEDs) to calculate
$k-$corrections. We have calculated the field AGN fraction for each of these
surveys and show the results in Figure~\ref{fig:faz} ({\it open symbols}). At
low redshift the AGN fraction calculated with the \citet{ilbert05} LF is
approximately a factor of five above the cluster fraction, which is consistent
with the difference between the field and clusters seen by \citet{dressler99}
for spectroscopically-identified AGN. At higher redshifts ($z>0.5$), the field
estimates range between a factor of three and a factor of ten above the cluster
fraction. These estimates of the field AGN fraction vary substantially because
of the dispersion in estimates of the galaxy luminosity function. In addition,
this calculation presupposes that all of the X-ray AGN are in galaxies more
luminous than $M_R^*(z) + 1$. While there is good evidence that most of these
luminous X-ray AGN are in relatively luminous galaxies
\citep[e.g.][]{silverman09a}, there is nevertheless a bias against
spectroscopic identification of lower-luminosity X-ray AGN host galaxies.
Finally, we note that the relative evolution of galaxies in clusters and the
field further complicates this comparison. In future work we hope to compile
sufficient data to calculate the AGN fraction in the field and clusters as a
function of galaxy mass. At present the data are insufficient to conclude
if the cluster AGN fraction or field AGN fraction evolves more rapidly.
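The denominator of such a field AGN fraction is the integral of the
galaxy LF above the luminosity threshold. The sketch below evaluates a
Schechter function in magnitudes with the \citet{christlein03} parameters
and an arbitrary normalization; the corresponding X-ray LF integral
\citep[e.g.,][]{ueda03} is not implemented here:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def schechter_M(M, Mstar, alpha, phistar=1.0):
    # Schechter LF in magnitudes (normalization arbitrary here).
    x = 10.0**(0.4 * (Mstar - M))
    return 0.4 * np.log(10.0) * phistar * x**(alpha + 1) * np.exp(-x)

def n_brighter(Mlim, Mstar=-21.92, alpha=-1.21):
    # Number density of galaxies brighter than Mlim.
    val, _ = quad(schechter_M, Mstar - 10.0, Mlim, args=(Mstar, alpha))
    return val

print(n_brighter(-21.92 + 1.0))  # denominator at the M_R^* + 1 cut
\end{verbatim}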
\subsection{Contamination by AGN Associated with Large-Scale Structure} \label{sec:lss}
One concern raised about the physical origin of the Butcher-Oemler effect
is the contribution of projection effects. \citet{diaferio01} studied this
issue in detail with N-body simulations and semianalytic models
to distinguish true cluster members from field interlopers that were
at the cluster redshift and within the projected $R_{200}$, yet
physically outside the cluster $R_{200}$. \citet{diaferio01} concluded
that up to 50\% of the apparent Butcher-Oemler galaxies at the
redshifts of high-redshift clusters may be interlopers. A similar
effect may be relevant for the AGN population and such a large contamination
would decrease the observed evolution, but not erase it.
While there is no comparable study that directly investigates the
projection of AGN onto high-redshift clusters, there is good evidence
that AGN are associated with the large-scale environment of clusters.
\citet{gilmour07} identified 11 X-ray AGN (to a lower luminosity limit
of $\sim 10^{41}$ erg s$^{-1}$) in the A901/2 supercluster at $z \sim 0.17$ and
only one was in the densest region of the supercluster. The remainder were
mainly in regions of intermediate density. In the vicinity of 3C295
($z=0.46$) \citet{delia08} find evidence for AGN associated with a filamentary
structure. At yet higher redshifts this trend is also
apparent. \citet{kocevski09a} find X-ray AGN associated with the CL1604
supercluster at $z\sim0.9$, which contains 8 confirmed groups and clusters.
These AGN mostly avoid the densest regions of the clusters and are located
on the outskirts of the most massive clusters; that is, they are associated with
poorer clusters and groups.
We examined our data to determine whether there is a population of
AGN outside the projected $R_{200}$ for these clusters similar to
those seen in the two superclusters. This is only possible with the
subset of the sample with substantial coverage beyond $R_{200}$.
Eight of the clusters have {\it Chandra}\ coverage that extends to
$2 R_{200}$. There are six AGN between $R_{200}$ and $2 R_{200}$ that
meet our velocity cuts for cluster membership compared to
eight AGN within $R_{200}$ for these same clusters.
The larger number within the clusters suggests the opposite trend from
the two supercluster studies described above, although these
results are not truly in conflict because the supercluster studies
encompassed a much larger area outside of dense clusters than this
study. The different large-scale environments associated with these clusters
and the superclusters suggest a more quantitative comparison would not
be meaningful. These large-scale structure data also provide a crude means
to estimate the likelihood of chance juxtapositions of AGN associated with
large-scale structure onto the clusters. If interloper AGN have the same
surface density within $R_{200}$ as between $R_{200}$ and $2 R_{200}$, then
the six we identified in an area of $3 \pi R_{200}$ suggest we should expect at
most 2 interlopers compared to the 8 AGN we see within $R_{200}$.
This line of argument suggests that the interloper fraction is at most 25\%, which is
small compared to the observed evolution signature.
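The arithmetic of this surface-density argument is simply:
\begin{verbatim}
# 6 AGN in the annulus R200 < R < 2 R200 (area 3 pi R200^2) imply,
# for a uniform interloper surface density, 6/3 = 2 projected onto
# the cluster (area pi R200^2), i.e. 2 of the 8 AGN within R200.
n_annulus, area_ratio, n_inside = 6, 3.0, 8
expected_interlopers = n_annulus / area_ratio
print(expected_interlopers / n_inside)  # 0.25
\end{verbatim}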
\begin{deluxetable*}{llccccccl}
\tablecolumns{9}
\tablewidth{7.0truein}
\tabletypesize{\scriptsize}
\tablecaption{High-Redshift AGN Associated with Large-Scale Structure around Clusters\label{tbl:lssagn}}
\tablehead{
\colhead{AGN} &
\colhead{Cluster} &
\colhead{$z$} &
\colhead{$R$ [mag]} &
\colhead{log $L_{X,H}$ [erg s$^{-1}$]} &
\colhead{$\delta v/\sigma$} &
\colhead{$\Delta R$ [arcmin]} &
\colhead{$R/R_{200}$} &
\colhead{Class} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)}
}
\startdata
CXOSEXSI J084846.0+445945& RX J0848.7+4456 &0.567 &21.45 &43.1 &1.99 &3.51 &1.16 &ELG \\
CXOMP J230300.9+084659& RXJ2302.8+0844 &0.738 &21.71 &44.23 &2.81 &4.46 &1.2 &BLAGN \\
CXOSEXSI J171807.6+670647& RXJ1716.4+6708 &0.797 &21.75 &44 &1.83 &7.8 &1.59 &BLAGN \\
CXOU J105710.6-033500& MS 1054-03 &0.832 &21.93 &43.14 &1.27 &4.57 &1.18 &ALG \\
CXOSEXSI J091040.8+542006& RDCS J0910+5422 &1.097 &22.38 &43.1 &2.74 &2 &1.13 &ELG \\
CXOSEXSI J084903.9+445023& LynxE &1.276 &23.92 &43.2 &2.95 &1.76 &1.03 &ELG
\enddata
\tablecomments{
AGN associated with large-scale structure around the subset of high-redshift
clusters with complete X-ray coverage to twice the projected virial radius.
This is the subset of AGN that satisfy the redshift selection criterion,
but have a projected distance of $1 < R/R_{200} \leq 2$.
Columns are identical to Table~\ref{tbl:highzagn}. The data for
CXOU J105710.6-033500 are from \citet{vandokkum00} for the redshift,
magnitude, and classification and the X-ray data are from \citet{johnson03}.
This sample is described in further detail in \S\ref{sec:lss}.
}
\end{deluxetable*}
\subsection{Uncertainties} \label{systematics}
One major potential source of systematic error is the use of the MaxBCG
richness estimator to estimate the fraction of cluster galaxies more luminous
than $M_R^*+1$. In \S\ref{sec:richness} we estimated that there is a factor of
two uncertainty in the use of this relation. This uncertainty is mainly
important for the high-redshift subsamples as the low-redshift subsamples
have more complete spectroscopic membership data. If we randomly introduce a
factor of two uncertainty in each cluster, the effect is negligible when
averaged over the 15 clusters with $z>0.4$ compared to the factor of eight
evolution in the AGN fraction.
As mentioned previously, another valid concern with the MaxBCG estimator is
that it is calibrated to the number of red galaxies in the cluster and this
population may not all be in place at $z=0.4$ and higher. For our application
it does not matter if the galaxies are red or not, just that they are in the
cluster. Furthermore, if we have overestimated the number of galaxies brighter
than $M_R^*+1$ then we have underestimated the evolution of the AGN fraction
and our result is yet more statistically significant. The assumption that
all of the galaxies are red does impact the $k-$correction we use to estimate
the spectroscopic limit corresponding to $M_R^*(z)+1$ and thus the size of
our completeness correction. If the galaxies are redder, then the
$k-$correction would be smaller, the apparent magnitude limit would be
brighter, and the completeness correction would be smaller. The implication
would be that we have preferentially overestimated the AGN fraction at high
redshifts because completeness corrections are only applied to the
high-redshift clusters. While the average completeness correction approaches
25\% (see Table~\ref{tbl:fabin}), in practice the spectroscopic completeness
is not a strong function of apparent magnitude \citep[e.g. see
\S\ref{sec:completeness},][]{silverman05a,eckart06} and
we consequently expect much less than a 25\% reduction in the evolution.
The evolution of the host galaxy population is also important because
if there were less fading of $M^*(z)$ than we assume, then the completeness
limit would be too bright and we would have underestimated the AGN fraction
at high redshift.
The value of the cluster velocity dispersion introduces additional uncertainty
to this calculation in two ways. First, many of the direct measurements of the
cluster velocity dispersion, particularly for high-redshift clusters, are
based on small samples of galaxies and thus the velocity dispersion itself
may be uncertain, particularly if the galaxy velocity distribution is
not Gaussian. Second, as noted above the cluster velocity dispersion has not
been directly measured for several clusters and we instead used the X-ray
temperature and the results of \citet{xue00} to estimate the velocity
dispersion and this has a 30\% scatter. We checked both of these concerns
with a measurement of the scatter between $\sigma$ and $T_X$ for
the ten high-redshift clusters with measurements of both quantities
and the mean deviation is $\sim 220$ km s$^{-1}$\ if we exclude 3C295, which
has a substantially higher velocity dispersion \citep[1642 km s$^{-1}$;][]{girardi01}
than expected from its X-ray temperature \citep[5.3 keV;][]{vikhlinin02}.
This mean deviation corresponds to approximately a factor of two uncertainty
in the richness, which is comparable to the uncertainty we derived for the
richness estimator. From this analysis we similarly conclude that this
source of uncertainty does not substantially affect our results.
A related evolutionary effect is that the velocity distributions of the
high-redshift clusters may be systematically more non-Gaussian than
low-redshift clusters because the high-redshift clusters are less likely to be
relaxed. If the cluster velocity dispersion were overestimated, then the
richness and $R_{200}$ would be overestimated as well. This in turn would lead
to an underestimate of the AGN fraction in high-redshift clusters.
\citet{jeltema05} measured power ratios from {\it Chandra}\ observations of the IGM
for a large sample of clusters out to $z\sim1$ and found good evidence that
high-redshift clusters are less relaxed than low-redshift clusters, so this
potential source of systematic error would lead us to underestimate the AGN
fraction. Nine of our clusters were analyzed in the \citet{jeltema05} study,
including eight in our high-redshift sample. We compared the AGN fractions and
the power ratios for these clusters, but did not find a significant trend.
Unfortunately we do not have sufficient redshift data for most high-redshift
clusters to look for non-Gaussianity in the galaxy velocity distribution,
although note there is no evidence for a trend between dynamically-disturbed
clusters and AGN fraction at low redshift \citep{martini07}.
\begin{figure}
\plotone{martini.fig4.eps}
\caption{
Histograms of the number of clusters with a given velocity dispersion
({\it dotted line}) and the number of AGN in clusters of a given velocity
dispersion ({\it dashed line}) for the low-redshift ($z<0.4$; {\it top panel})
and the high-redshift ($z>0.4$; {\it bottom panel}) subsamples. The cluster
samples are reasonably well matched within these two redshift bins.
\label{fig:sigma}
}
\end{figure}
Finally, we consider the evolution of the cluster population to determine if
the higher-redshift clusters represent the progenitor population of the
lower redshift clusters. As noted previously, observations at low redshift
indicate that the AGN fraction depends on environment and specifically that
the AGN fraction is higher in lower velocity dispersion environments
\citep{sivakoff08,arnold09}. Therefore if our high-redshift clusters are the
progenitors of lower velocity dispersion clusters or massive groups, then
the observed evolution may not be as significant. As many of the high-redshift
clusters are X-ray selected, they are generally high-mass clusters and are
reasonably well matched to the lower-redshift sample
(see Figure~\ref{fig:sigma} and Table~\ref{tbl:fa}).
Following \citet{finn05} and \citet{poggianti06}, we have estimated the
velocity dispersions of the progenitors of the high-redshift cluster sample
and find they are in good agreement with the low-redshift sample. For example, the progenitor of a 1000
km s$^{-1}$\ cluster at $z=0$ has 800 km s$^{-1}$\ at $z=0.6$ \citep{poggianti06}, or only
about 100 km s$^{-1}$\ less than the difference between our low-redshift and
high-redshift subsamples. The sense of this trend is that the high-redshift
sample is actually somewhat more massive than the typical progenitor of the
low-redshift sample and therefore the minor mismatch in cluster masses
is more likely to have dampened rather than enhanced the measured evolution
of the AGN fraction.
\section{Properties of the Cluster AGN} \label{sec:agn}
\subsection{Distribution} \label{sec:dist}
\begin{figure*}
\plotone{martini.fig5.eps}
\caption{
Histograms of the AGN clustercentric distances in terms of Mpc ({\it left})
and normalized to $R_{200}$ ({\it right}) for cluster AGN with $z>0.4$. The
distribution of the confirmed cluster members ({\it solid line}) is much more
centrally peaked when expressed in terms of Mpc than in terms of $r/R_{200}$.
Other AGN associated with large-scale structure (with $R>R_{200}$) are also
shown ({\it dotted line}).
\label{fig:rad}
}
\end{figure*}
The projected radial and velocity distributions of the AGN provide valuable
additional information about the origin of the AGN. For example, if the
AGN are preferentially located in the cluster outskirts, or preferentially
have a higher velocity dispersion than the cluster mean, this may indicate
that their host galaxies have relatively recently entered the cluster
potential. This is known to be the case for emission-line galaxies
\citep{biviano97,dressler99}. At low redshifts and for lower-luminosity
X-ray AGN, \citet{martini07} found that $L_X > 10^{42}$ erg s$^{-1}$\ [0.5--8 keV]
AGN were more centrally concentrated than typical cluster galaxies, while
AGN an order of magnitude less luminous had the same distribution as the
inactive galaxy population. For both luminosity thresholds the velocity
distributions of the AGN were consistent with the galaxy population.
It is more challenging to compare these higher-luminosity X-ray AGN to the host
galaxy population because we lack membership data for nearly all of the
high-redshift clusters. Nevertheless, we can compare the distribution of
sources to the typical distribution of cluster galaxies and to the excess
surface density distribution found by surveys of X-ray point sources toward
distant clusters. In Figure~\ref{fig:rad} we present a histogram of the number
of X-ray AGN as a function of distance from the cluster center, in both physical
units (Mpc) and normalized to $R_{200}$. While the sample is small, two results
are apparent from the figure. First, there are approximately equal numbers of
AGN outside $0.5 R_{200}$ as inside it, whereas if the AGN traced the cluster
galaxy distribution we would expect them to be more centrally concentrated.
Second, the radial distribution is more strongly peaked when plotted in
physical units than when normalized to $R_{200}$.
While we do not have detailed information on the radial distribution of the
cluster galaxy populations in these clusters, we do have extensive data
on nearby clusters from \citet{christlein03}. For these clusters we have
investigated the cluster galaxy distribution with the same selection
criteria ($R<R_{200}$, $M_R<M_R^*+1$, $\Delta v<3\sigma$) and find that
70\% of the galaxies fall within $0.5 R_{200}$, whereas 10 of 18 luminous
AGN at $z>0.4$ are
within $0.5 R_{200}$. The binomial probability is only 14\% that we would
find 10 or fewer AGN within $0.5 R_{200}$ if we expected 70\%; explicitly,
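under a binomial model with $n = 18$ trials and success probability $p = 0.7$,
\[
P(N \le 10) \;=\; \sum_{k=0}^{10} {18 \choose k}\,(0.7)^{k}\,(0.3)^{18-k} \;\approx\; 0.14,
\]
where $N$ is the number of AGN that fall within $0.5 R_{200}$.
There is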
thus a mild tendency for luminous AGN to be distributed toward the outskirts of
the clusters, although this does make the substantial assumption that
the radial distribution of galaxies within clusters is similar at $z\sim0.8$
and the present. This broad distribution in radius is in contrast to our
earlier results on lower-luminosity AGN in lower-redshift clusters. At low
redshift we found that 50\% of the luminous AGN were within $0.1 R_{200}$
\citep{martini07}. Better statistics could determine if the AGN are
preferentially located in the outskirts of clusters compared to all cluster
galaxies. That would be consistent with the hypothesis that AGN are triggered
by mergers during infall. From simulations \citet{ghigna98} find that mergers
between galaxies do not occur within the virial radius. We note that
\citet{berrier09} simulated the formation of 53 galaxy clusters and find
most cluster galaxies do not experience `preprocessing' in group environments
and therefore processes specific to clusters must largely be responsible for
the differences between cluster and field galaxies.
The second result has interesting implications for studies that use the
surface density distribution of excess sources to characterize the
distribution of AGN in clusters \citep{ruderman05,gilmour09,galametz09}.
These studies generally plot the excess surface density as a function of
physical distance from the cluster center and find a central peak in surface
density. Our results indicate that the true distribution may be flatter
than implied by use of the physical (proper) distance from the cluster
core. This is because those surveys, like the present study, include
clusters with a wide range of masses and consequently a wide range of
$R_{200}$. Simply adding the distributions for all clusters without
renormalizing each observation for the size of the cluster will produce an
artificial central peak due to the mass range of the cluster sample.
\begin{figure}
\plotone{martini.fig6.eps}
\caption{
Histogram of the cumulative velocity distribution of cluster AGN normalized
to the cluster velocity dispersion for the 18 cluster AGN with $z>0.4$
({\it solid histogram}). The AGN velocity distribution is consistent with a
Gaussian distribution ({\it solid curve}) and the $L_{X,B} \geq 10^{42}$
erg s$^{-1}$\ AGN from \citet{martini07} ({\it dotted histogram}).
\label{fig:vel}
}
\end{figure}
If the cluster AGN are associated with a population that recently entered
the cluster potential, the host galaxies may also be preferentially on more
radial orbits and have a larger velocity dispersion than that
of all cluster galaxies. As noted previously, this is true of
the emission-line galaxy population in clusters. In Figure~\ref{fig:vel} we
plot the cumulative velocity distribution for all 18 AGN with $z>0.4$
normalized by the cluster velocity dispersion. The distribution is
in excellent agreement with a Gaussian distribution and we therefore
find no evidence that the cluster AGN have a broader velocity distribution, as
would be expected for more radial orbits. This was also found for the
14 relatively luminous ($L_{X,B} \geq 10^{42}$ erg s$^{-1}$) AGN studied by
\citet{martini07}. A better test would be to
compare the AGN host population to the absorption-line galaxies in the
clusters since the velocity dispersion estimates for many of these clusters
may be biased toward the emission-line galaxy population because it is
easier to measure redshifts for them. While this is not the case for
those whose velocity dispersions are estimated from X-ray data, it may also be
true of the calibration sample for the relations between X-ray properties
and galaxy velocity dispersion.
\subsection{Luminosity Function} \label{sec:xlf}
\begin{figure}
\plotone{martini.fig7.eps}
\caption{
\label{fig:hxlf}
Hard X-ray luminosity function of cluster AGN at $z>0.4$ compared to the field
XLF from \citet{ueda03} at the median cluster redshift ($z=0.8$, {\it solid
curve}) and at low redshift ($z=0.1$, {\it dotted curve}). The field XLFs have
been renormalized to be consistent with the cluster measurements in the first
two luminosity bins. The arrows are upper limits calculated with Poisson
statistics.
}
\end{figure}
We have begun to acquire sufficient numbers of cluster AGN that it is
possible to compare the X-ray luminosity function (XLF) between clusters and
the field, as well as the cluster XLF at different redshifts. A comparison
between the cluster and field XLF is interesting because differences
between the two would be a signature of environment-dependent downsizing.
There is evidence that this is true of star formation in different
environments. For example \citet{kauffmann04} find that substantial star
formation is only present in higher mass galaxies in lower density
environments in the local universe.
If the cluster black holes primarily grew at higher redshifts
than field black holes, similar to the earlier formation epoch expected for
the stellar populations in luminous cluster galaxies, then the cluster
luminosity function at high-redshift may have a similar shape to the
present-day XLF in the field. One test of this hypothesis is to compare the
characteristic luminosity $L_X^*$ between clusters and the field. If the
cluster AGN primarily grew at an earlier epoch, $L_X^*$ would be smaller in
clusters relative to the field at a given redshift.
It is reasonable to compare the shape of the XLF between clusters and the
field because the XLF is a measurement of the X-ray sources alone within
well-defined volumes, although the caveats associated with large scale
structure discussed in \S\ref{sec:lss} do apply. This is different from the
case in Section~\ref{sec:evolve}, where we noted that the comparison of
the evolution of the AGN fraction and the integrated space density was not
comparing identical
quantities because the AGN fraction includes information about the galaxy
population. The one assumption that we do make is that all of the X-ray
sources are hosted by galaxies above our threshold, but this is reasonable
given Figure~\ref{fig:mr}. In addition, the normalization remains arbitrary
because it is challenging to define a total volume for the cluster AGN sample,
although this is not necessary because the shape of the XLF already provides
useful information. In Figure~\ref{fig:hxlf} we plot the cluster XLF for
our $z>0.4$ sample compared to the field XLF at the median cluster redshift
of $z=0.8$ from \citet{ueda03}. The cluster XLF is in reasonable agreement
with the field XLF at the same redshift, although the statistics are quite
limited. As motivation for future work, we also plot the field XLF at lower
redshift ($z=0.1$, {\it dotted line}). For the lower-redshift XLF $L_X^*$ is
smaller and consequently all $L_{X,H} \geq 10^{43}$ erg s$^{-1}$\ AGN are above the
characteristic luminosity, while these data straddle $L_X^*$ in the field
XLF at $z=0.8$. Improved statistics for cluster X-ray AGN at $z>0.4$ could
determine if there is also a break in the cluster XLF, or if it is more similar
to the field XLF at lower redshift.
The evolution of the cluster XLF with redshift is also relevant for the
origin of X-ray AGN in lower-redshift clusters. If cluster AGN at the present
day are simply the descendants of AGN at higher redshift that have been fading
for several Gyr, then the difference between the low-redshift and
high-redshift cluster XLF should be consistent with pure luminosity evolution.
In contrast, if there is substantial retriggering of low-luminosity AGN in
low-redshift clusters, or if other mechanisms are capable of fueling AGN in
clusters, then the cluster XLF evolution may not be consistent with just
luminosity evolution. A signature of other fueling or triggering mechanisms
would be a substantially larger population of lower-luminosity AGN in present
day clusters compared to expectations from the high-redshift population.
While pure luminosity evolution would be surprising because this is not
observed in field AGN, the most luminous cluster galaxies are consistent with
passive evolution. Better measurements of the cluster XLF over a broader range
in luminosity could investigate this hypothesis.
\subsection{Host Galaxy Properties} \label{sec:hosts}
Both the colors and morphologies of low-luminosity ($\sim10^{41}$ erg s$^{-1}$) AGN
in low-redshift clusters suggest they are primarily hosted by galaxies
dominated by light from their old stellar populations \citep{martini06}.
This becomes progressively less true for higher-luminosity AGN, and
ground-based observations of the most luminous sources ($\geq 10^{43}$ erg s$^{-1}$)
in Abell 2104 \citep{martini02} and Abell 754 \citep{arnold09} indicate
that they have late-type morphologies, although their hosts are luminous.
In addition, these more luminous AGN are more likely to exhibit
visible-wavelength AGN
spectral signatures than their lower-luminosity counterparts.
While the spectroscopic classification of the high-redshift sample is fairly
subjective because of variations in wavelength coverage and signal to noise
ratio, the spectroscopic classifications reported by \citet{silverman05a}
and \citet{eckart06} support the low-redshift results. They classified six
of 17 X-ray AGN as BLAGN, nine as other emission-line galaxies, and the
remaining two as absorption-line galaxies. The vast majority of the higher
luminosity AGN have substantial line emission, even with the bias against
redshift measurements for sources without strong emission lines. We note
for comparison that two of the six AGN in the large-scale structure sample
are classified as BLAGN and the other four are evenly split between
emission-line and absorption-line galaxies. These other sources are thus
similar to the cluster AGN.
Several of the high-redshift clusters also have HST observations suitable
to study the morphologies of the cluster galaxies. The largest survey of
X-ray source morphology in high-redshift clusters is that by \citet{martel07},
who investigate the fields of five high-redshift clusters: RX J0152-1357,
RX J0849+4452, RDCS J0910+5422, MS 1054-0321, and RDCS J1252-2927, and the
middle three clusters overlap this sample. For the entire field sample they
classify half of the X-ray counterparts as early-type, 35\% as late-type,
and 15\% as irregular galaxies. For the six cluster members in their sample,
they find half are in early-type hosts, two in late-type hosts, and one in
an irregular galaxy. In addition, three of these cluster AGN hosts are
in interacting systems. The sources that overlap with our sample are
CXOU J091043.3+542152 and CXOMP J105650.6-033508, and both have early-type
morphologies (their other member in RDCS J0910+5422 falls slightly below
our luminosity threshold).
\subsection{Implications for the Sunyaev-Zel'dovich Effect} \label{sec:sz}
Many cluster surveys are currently planned or in progress that use the
Sunyaev-Zel'dovich effect to identify large numbers of clusters
\citep[e.g.][]{kosowsky03,ruhl04}. This effect is caused by inverse Compton
scattering of Cosmic Microwave Background (CMB) photons off hot electrons in
the ICM that changes the spectrum of the CMB in the direction of a cluster
\citep[e.g.][]{carlstrom02}. The main virtue of this effect is that it is
redshift independent, and consequently can be used to detect (the hot
electrons associated with) clusters out to high redshifts. However, mechanical
heating by AGN in the cluster may contribute to the thermal energy of
the ICM \citep[e.g.][]{birzan04} and thus make it more difficult to
identify some clusters. Any increase in the AGN population with redshift will
also introduce a systematic effect with redshift.
The potential impact of AGN on SZE cluster surveys was recently examined in
detail by \citet{lin07}. They measured the radio luminosity function in
nearby clusters at 1.4 GHz and used measurements of AGN at higher frequencies
\citep{cooray98,coble07} to estimate that on the order of 10\% of clusters will
have an AGN flux comparable to the SZE flux. As a worst-case scenario
they adopted an evolution model where the fraction of radio AGN increases
as $(1+z)^{2.5}$. This model was largely motivated by observations of the
radio galaxy luminosity function, which suggested evidence for an increase
\citep{best02,branchesi06}. If this population instead evolves at a rate
comparable to the $(1+z)^{5.3}$ rate we observe for luminous X-ray AGN,
then the fraction of substantially contaminated clusters will be
higher than predicted by \citet{lin07}.
\section{Discussion} \label{sec:dis}
The extent of the correlation between the evolution of star formation and AGN
in clusters could provide valuable new insights into how closely related these
two processes are. The original work by \citet{butcher78,butcher84} on the
evolution of the fraction of blue galaxies in clusters provides a useful first
point of comparison to the AGN fraction evolution, in part because we
adopted many elements of their methodology. Specifically, \citet{butcher84}
characterized
cluster galaxy evolution with: 1) a fixed criterion to define the sample of
interest (a galaxy was classified as blue if the rest-frame $B-V$ color was
at least 0.2 mag bluer than the relation exhibited by the red galaxies); 2)
measurement of this population relative only to cluster galaxies above some
luminosity threshold ($M_V = - 20$); 3) use of an aperture scaled to the
physical properties of individual clusters (a circle that contained the inner
30\% of the cluster galaxy population). With these definitions,
\citet{butcher84} found that the blue galaxy fraction increased from
$f_B \sim 0.03$ at $z\leq0.1$ to $f_B \sim 0.25$ at $z=0.5$ for relatively
compact, concentrated clusters, or approximately an order of magnitude.
One of the most recent and comprehensive studies of the evolution of
star formation in clusters is the work of \citet{poggianti06}. These
authors used the {\rm [\ion{O}{2}]} $\lambda 3727$ line as a tracer of star formation, rather
than color, and measured the fraction of galaxies with {\rm [\ion{O}{2}]}\ emission
(equivalent width $> 3$\AA) as a function of both cluster redshift
and cluster velocity dispersion. Their sample includes 25 clusters with
$z = 0.4 - 0.8$ and another 10 groups in the same redshift range, while they
have a large local comparison sample at $z=0.04 - 0.08$ from the Sloan Digital
Sky Survey. They measure the {\rm [\ion{O}{2}]}\ fraction $f_{{\rm [OII]}}$ relative to an
evolving absolute magnitude limit $M_{V,lim}$ that varies from -20.5 at $z =
0.8$ to -20.1 at $z = 0.4$, while the local limit was $M_V < -19.8$. Their
main results are that there is substantial evolution in $f_{{\rm [OII]}}$ and
that there is substantial variation in $f_{{\rm [OII]}}$ with velocity
dispersion at a given redshift.
Given the velocity dispersion dependence, a
direct comparison of the evolution of $f_{{\rm [OII]}}$ with $f_A$ is not
meaningful for different cluster samples. Instead, we have used their upper
envelope for
$f_{{\rm [OII]}}$($\sigma$) at high redshift and their envelope prescription
at low redshift to estimate $f_{{\rm [OII]}}$ for each of our clusters and
then computed the average $f_{{\rm [OII]}}$ for each of the subsamples shown
in Table~\ref{tbl:fabin}. These relations predict an increase in
$f_{{\rm [OII]}}$ of less than a factor of two from the low-redshift
to the high-redshift subsamples, or substantially less than the factor of
eight we observe for the AGN fraction.
These results are interesting, although numerous caveats forestall
too much interpretation of the relative rates of evolution. One major
concern is that there is likely downsizing in clusters similar to what is
observed in the field \citep[e.g.][]{cowie96,hasinger05,silverman08,yencho09},
that is, the relative number of galaxies with star formation or AGN activity
above a certain threshold varies with redshift. The direct implication of
this for the AGN fraction is that the evolution of the AGN fraction
over a given redshift range is expected to depend on luminosity, just
as the rate of evolution of the AGN space density is observed to vary in
the field as a function of minimum luminosity \citep[e.g.][]{ueda03}.
This is similarly a complication for interpretation of the evolution of
star formation, and consequently limits direct comparison of the rates of
evolution of star formation and AGN above some threshold. For example, while
\citet{poggianti06} have similarly used an evolving galaxy luminosity threshold
to characterize the evolution of the star-forming galaxy fraction, their
galaxy luminosity threshold is over a magnitude fainter and therefore they
have measured the evolution of a population that includes many fainter
cluster members.
However, these concerns are not an obvious limitation to comparisons that use
the same luminosity threshold to separately compare either AGN or star
formation across different environments, particularly when the evolution of the
star formation rate and AGN luminosity are tied to the same galaxy population.
For example, if the relative rates of evolution of AGN and star formation in
$< M_R^*+1$ galaxies were different in the field and clusters, this would
suggest a limit to the extent of the apparent coevolution of black holes and
galaxies in at least one of these environments.
Another concern about a direct comparison to these measurements of the
evolution of the star forming galaxy population is that {\rm [\ion{O}{2}]}\ emission
is more susceptible to reddening and metallicity effects relative to other
star formation indicators, such as H$\alpha$ \citep{kewley04}. Many
ISO studies \citep[summarized by][]{metcalfe05} found evidence for an
increase in star formation in clusters at higher redshifts, and that
the increase appeared to be greater than that predicted by UV continuum or
visible-wavelength spectroscopic diagnostics. {\it Spitzer}\ observations of
clusters have also found substantial, often obscured, star formation
in high-redshift clusters \citep{geach06,marcillac07,bai07}. \citet{geach06}
used new {\it Spitzer}\ data for two clusters and data for five others
from the literature to estimate the star formation rate
normalized by the cluster mass. They find evidence for an increase in
higher redshift clusters, but also substantial variation between clusters
at the same redshift. \citet{saintonge08} used a larger sample of eight
clusters with 24$\mu$m {\it Spitzer}\ data to study the evolution of the fraction of
obscured star-forming galaxies from $z=0.02 - 0.83$.
They find that the fraction of cluster galaxies with star formation rates
above 5 M$_\odot$ yr$^{-1}$ increases from 3\% at $z=0.02$ to 13\% at $z=0.83$
and that this is stronger evolution than exhibited by color-selection, such
as the criteria of \citet{butcher78,butcher84}. The star-forming galaxies they
identify in
these clusters are also mostly disjoint from the Butcher-Oemler galaxies and
consequently when they sum the blue and mid-infrared galaxies the fraction of
star-forming galaxies increases to $\sim 23$\% at high redshift.
Several of these {\it Spitzer}\ studies overlap clusters that are also in our
sample and it is interesting to see if there is a direct correspondence
between the AGN and mid-infrared sources detected by {\it Spitzer}.
The massive cluster MS1054-03 was studied by \citet{bai07} and their 24$\mu$m
sources include the two X-ray AGN identified by \citet{johnson03}.
\citet{saintonge08} have three clusters in common with our sample:
MS0451.6-0305, MS2053.7-0449, and MS 1054-03, although they do not provide
information on individual sources. While not in our sample, the study of
RX J0152.7-1357 ($z=0.831$) by \citet{marcillac07} found that the two most
luminous $24\mu{\rm m}$ sources (of 22 confirmed members) were also X-ray AGN.
Similarly, \citet{geach09} found that one (of 12) of the luminous
infrared galaxies ($L_{IR} > 10^{11} L_\odot$) in CL0024+16 ($z=0.4$)
was obviously an AGN based on their infrared data alone. At lower
redshifts, \citet{gallagher08} have also used {\it Spitzer}\ data to
identify AGN and star forming galaxies in Hickson Compact Groups.
\citet{saintonge08} explore whether or not the increase in the fraction of
obscured star formation in high-redshift clusters is related to infall.
They speculate that the increase in star formation reflects the infall of
new members and note that most of the MIPS-detected cluster galaxies are
not projected onto the cluster core (inner 500 kpc). Over larger scales the
work of \citet{gallazzi09} explored the obscured star formation fraction
as a function of environment in the Abell 901/902 supercluster at $z=0.165$.
They find more obscured star formation at intermediate densities than in the
cluster cores, similar to the distribution of the AGN population studied
by \citet{gilmour07} in the same supercluster.
If there is a substantial increase in the obscured star
formation fraction in the intermediate densities around clusters, and
the star formation in this environment increases with redshift, then
projection of some of these structures onto the cluster core may
contaminate the cluster estimates.
As discussed in Section~\ref{sec:lss}, AGN in the large-scale environments
around massive clusters may also project onto cluster cores. To better
evaluate this possibility, it is useful to both directly measure the
AGN population immediately outside clusters and measure the AGN population
in intermediate densities more generally. Just as \citet{poggianti06} found
that the fraction of {\rm [\ion{O}{2}]}-emitting galaxies increases in lower velocity
dispersion environments, the AGN fraction as a function of environment is
important because the environmental dependence may provide new information
on the processes that drive AGN evolution. Both the XMM observations of the
COSMOS fields \citep{silverman09a,silverman09b} and {\it Chandra}\ observations of
the Extended Groth Strip from DEEP2 \citep[the All-wavelength Extended Groth
strip International Survey, AEGIS;][]{georgakakis08a,georgakakis08b} have
estimated the AGN fraction in groups of galaxies or as a function of
local overdensity at high redshifts. \citet{georgakakis08b} found that X-ray
AGN are more frequently found in groups than in the field, which they
connect to their observation that the X-ray AGN host galaxies are often
red, luminous galaxies that tend to reside in denser environments, although
they also find that this trend may reverse for the most powerful AGN.
In the narrower redshift range $0.7 < z < 0.9$ and for $M_B < -20$ mag
they find that the AGN fraction is comparable in groups and the field, at
about 5\%. This is approximately a factor of five higher than we find in
clusters at similar redshifts, although these values are not exactly
comparable as the \citet{georgakakis08b} AGN include somewhat lower-luminosity
sources than our sample and the host galaxy magnitude limits are somewhat
different. \citet{silverman09a} also investigate the environment dependence of
X-ray AGN hosted by galaxies above a fixed stellar mass and find no strong
preference between the field and groups except for the most massive galaxies,
while \citet{jeltema07} find that the fraction of {\rm [\ion{O}{2}]}-emitting galaxies in
intermediate-redshift, X-ray-selected groups ($0.2 < z < 0.6$) is similar to
clusters at the same redshift.
The clustering analysis by \citet{coil09} on the AEGIS data also helps to
elucidate the distribution of AGN at high redshift as a function of
environment, AGN luminosity, and host galaxy mass. They find that the X-ray
AGN have similar clustering to luminous red galaxies and are more likely to
reside in groups, while UV-bright QSOs are less strongly clustered and more
similar to the field blue galaxy population. This is also similar to the
results from \citet{kauffmann04} at low redshifts from SDSS, who find that
galaxies at a fixed stellar mass that host luminous [\ion{O}{3}]\ emission are twice
as common in low-density regions as in high-density regions. Taken together, the AEGIS and COSMOS
results illustrate that the measured AGN fraction depends on both the stellar
mass (or luminosity) of the galaxy population and the star formation rate of
the host, in addition to the AGN luminosity. This makes a direct comparison
between these two surveys, as well as to our work on high-redshift clusters,
somewhat problematic. The X-ray range considered by \citet{silverman09a}
extends over $42 < {\rm log} L_{0.5-10 {\rm \, keV}} < 43.7$, or approximately
half an order of magnitude below our X-ray threshold for a typical AGN SED.
The X-ray AGN studied by \citet{coil09} extend an order of magnitude fainter
than our work to a hard band limit of $L_{X,H} > 10^{42}$ erg s$^{-1}$. Both of these
surveys are therefore dominated by intrinsically less luminous objects. The
galaxy mass and luminosity ranges are similarly not identical. In future work
we hope to put all of these high-redshift measurements on an equal basis for a
more direct comparison.
While none of these results suggest that there are more luminous AGN in
clusters than groups or the field out to $z\sim1$, such a trend may be
seen at yet higher redshifts. Observations of cluster galaxies, particularly
massive cluster ellipticals, suggest that most of their stars formed
earlier than field galaxies \citep[by 0.4 Gyr;][]{vandokkum07}. If the central
black holes of these galaxies grew contemporaneously, then perhaps by
$z\sim2$ the AGN fraction will be higher in denser environments. Some
interesting support for this picture comes from {\it Chandra}\ observations of the
SSA22 protocluster at $z=3.09$ \citep{lehmer09}. They find a slightly higher
AGN fraction in Lyman Break and Ly$\alpha$-emitting galaxies in the
protocluster compared to the field. While this is just one region,
observations of the AGN fraction in clusters relative to the field at $z\sim2$
and above could provide interesting new insights into the coevolution of black
holes and galaxies.
\section{Summary} \label{sec:conclude}
We have conducted an expanded survey to identify luminous $L_{X,H} \geq
10^{43}$ erg s$^{-1}$\ AGN in clusters of galaxies from $z \sim 0.05$ to $z \sim 1.3$.
At low redshifts we have presented a new X-ray analysis of archival {\it Chandra}\
observations and spectroscopic follow-up of AGN candidates in six new clusters.
There are no new, luminous AGN in these clusters and there are a total of just
two luminous AGN in our sample of 17 clusters with $z<0.4$. These measurements
further strengthen the evidence for a very small luminous AGN fraction in
low-redshift clusters. An important virtue of the new clusters is that the
X-ray and spectroscopic coverage extends to the projected $R_{200}$ radius and
therefore they are better matched to observations of high-redshift clusters.
At higher redshifts we have combined our previous work with literature data on
X-ray sources, primarily from the ChaMP and SEXSI surveys, to compile a
total sample of 15 clusters at $z>0.4$. In spite of somewhat incomplete
spectroscopic coverage of the X-ray sources in these fields, there are 18
luminous AGN in these clusters.
We parameterize the evolution of the AGN population in clusters in terms of
the fraction of luminous galaxies that host AGN above our luminosity threshold.
We have used a variety of techniques to estimate the number of luminous
galaxies, defined to have $M_R < M_R^*+1$, in these clusters and calculated
the average cluster AGN fraction in several redshift bins. As the low and
high-redshift clusters are reasonably well matched in terms of cluster
velocity dispersion and X-ray temperature, the increase in the number of
AGN closely tracks the increase in the fraction of galaxies more luminous
than $M_R^*+1$ that host AGN. Specifically, we find that the AGN fraction
increases by approximately a factor of eight from $z\sim 0.2$ to $z \sim 1$.
This corresponds to an increase in the AGN population that scales as
$(1+z)^{5.3}$. If the radio AGN population in clusters increases by a
comparable amount, radio AGN may impact the identification of clusters as a
function of redshift in current and planned SZ surveys. The substantial
evolution in the cluster AGN population is also correlated with the evolution
of the fraction of star-forming galaxies in clusters known as the
Butcher-Oemler effect. Detailed studies of star formation and AGN in
individual clusters could better quantify the extent that these two phenomena
are coupled in clusters or perhaps even individual galaxies. We have also
estimated the evolution of the field AGN fraction to compare it to the cluster
AGN fraction. While the field AGN fraction is higher at all redshifts, the
present data do not suffice to conclude whether the rate of evolution is faster
or slower in clusters. Future measurements of the relative evolution of star
formation and black hole growth in clusters and the field could be an
important probe of the coevolution of black holes and their host galaxies.
Measurements of the radial distribution of the cluster AGN provide new
information on the origin of AGN within clusters. In contrast to our previous
work at low redshifts, the AGN in these high-redshift clusters are not strongly
centrally concentrated when their distribution is plotted normalized to the
$R_{200}$ radius. This demonstrates that there are substantial numbers of AGN in the
outskirts of clusters and supports the hypothesis that some cluster AGN are
hosted by relatively gas-rich galaxies that have recently entered the cluster
potential. While this excess is not apparent in the velocity distribution,
this may be due to biases in the measurement of the cluster velocity dispersion
or simply small number statistics. We have also presented the first measurement
of the XLF of cluster AGN at high-redshift and found that it is consistent with
the field XLF at the same redshift. This comparison illustrates the future
potential of XLF measurements in clusters to measure environment-dependent
downsizing in clusters, as well as how the evolution of the cluster XLF can
be used to constrain the evolution of black hole growth in clusters.
\acknowledgements
We are grateful to John Silverman and the referee for many suggestions that
have improved this paper. We also acknowledge helpful discussions with Dan
Stern and Tommaso Treu. Support for this work was provided by the National
Aeronautics and Space Administration through Chandra Award Number AR8-9014X
issued by the Chandra X-ray Observatory Center, which is operated by the
Smithsonian Astrophysical Observatory for and on behalf of the National
Aeronautics Space Administration under contract NAS8-03060.
PM is grateful for support from the NSF via award AST-0705170 and from the
Department of Astronomy at The Ohio State University.
This research has made use of the NASA/IPAC Extragalactic Database (NED)
which is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration.
{\it Facilities:} \facility{Hiltner (), CXO ()}
\section{Introduction}
Graph is a natural format to represent relationships that are prevalent in a wide range of real-world applications, such as material/drug discovery \cite{you2018graph}, web-structure~\cite{kumar2002web}, social network~\cite{garton1997studying}, protein-protein interaction~\cite{von2002comparative}, knowledge graphs~\cite{popping2003knowledge}, among many others.
Learning, mining, analyzing and visualizing graphs is hence of paramount value to \cready{our society}.
However, as the size of the graph continues to grow, the complexity of handling those graphs also soars. In fact, large-scale graph analytics is deemed a grand challenge that draws a great deal of attention, as evidenced by Graph 500~\cite{murphy2010introducing} and GraphChallenge~\cite{graphchallenge}.
Fortunately, recent research efforts find that {\em graph sampling} and {\em random walk}, which significantly reduce the size of \cready{the} original graphs, can benefit learning, mining and \cready{analyzing} large graphs, by capturing the desirable graph properties~\cite{huang2018adaptive,gao2018large,chen2017stochastic}. For instance, {Zeng et al.~\cite{zeng2019accurate}}, GraphSAINT~\cite{zeng2019graphsaint}, GraphZoom~\cite{deng2019graphzoom}, Pytorch-biggraph~\cite{lerer2019pytorch} and \cready{Deep Graph Library (DGL)}~\cite{wang2019deep} manage to learn from the sampled graphs and arrive at vertex embeddings that are similar to or better than those obtained by directly learning on the original gigantic graphs~\cite{deng2019graphzoom}. Weisfeiler-Lehman Algorithm~\cite{shervashidze2011weisfeiler} exploits graph sampling to find isomorphic graphs. {Furthermore, various random walk methods are used to generate vertex ranking and embedding in a graph~\cite{perozzi2014deepwalk,page1999pagerank,grover2016node2vec,kyrola2013drunkardmob}.} Sampling and random walk can also help classical graph computing algorithms, such as BFS~\cite{korf2005large} and PageRank~\cite{page1999pagerank}.
Despite their great importance,
limited efforts have been made to deploy graph sampling and random walk algorithms on GPUs, which come with tempting computing and data access capabilities and an ever-thriving community~\cite{gao2018large}.
This paper identifies three major challenges that prevent this effort.
\vspace{.1in}
First, although there is a variety of platforms to accelerate traditional graph processing algorithms on GPUs~\cite{liu2015enterprise,wang2016gunrock,liu2019simd,gaihre2019xbfs}, graph sampling and random walk pose unique challenges. Unlike traditional graph algorithms, which often treat various vertices and edges similarly and focus on optimizing the operations on the vertex or edge, sampling and random walk algorithms center around how to select a subset of vertices or edges \cready{based upon a bias} (Section \ref{sect:background:ns}). Once selected, a vertex is rarely visited again.
Consequently, {\em how to efficiently select the vertices of interest, which is rarely studied by traditional algorithms, becomes the core of sampling and random walk.}
This process needs to construct and potentially update the selection \cready{bias} repeatedly, which is very expensive and hence significantly hampers the performance.
\vspace{.1in}
Second, it is difficult to arrive at a GPU-based framework for various graph sampling and random walk algorithms that addresses the needs of vastly different applications. Particularly, there exists a rich body of graph sampling and random walk algorithms (detailed in Section~\ref{sect:background:samplerw}); deriving the common functionalities for a framework and exposing the different needs as a user programming interface is a daunting task. \cready{Offloading} this framework onto GPUs to enjoy their unprecedented computing and bandwidth capabilities, yet hiding the GPU programming complexity, further worsens the challenge.
\begin{table*}[t]
\new{{\fontsize{8}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Bias\\criterion\end{tabular}}} & \multicolumn{4}{c|}{\# of neighbors (NeighborSize)} \\ \cline{3-6}
\multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Per layer} & \multicolumn{2}{c|}{Per vertex} \\ \cline{3-6}
\multicolumn{2}{|c|}{} & 1 & $>1$ &Constant & Variable \\ \hline
\multicolumn{2}{|c|}{Unbiased} & \begin{tabular}[c]{@{}l@{}}Simple random walk, metropolis hasting random walk,\\ random walk with Jump, random walk with restart\end{tabular} & & Unbiased neighbor sampling & \begin{tabular}[c]{@{}l@{}}\new{Forest fire sampling},\\ Snowball sampling\end{tabular} \\ \hline
\multirow{2}{*}{Biased}
& Static &Biased random walk & Layer sampling &Biased neighbor sampling & \\ \cline{2-6}
& Dynamic & Multi-dimensional random walk, Node2vec & & & \\ \hline
\end{tabular}
}
}
\vspace{-.1in}
\caption{\cready{The design space of traversal based sampling and random walk algorithms.}
\vspace{-.2in}
}
\label{tbl:samplespace}
\end{table*}
\vspace{.1in}Third, an extremely large graph, which drives the need for graph sampling and random walk, usually goes beyond the size of GPU memory. While there exists an array of solutions for GPU-based large graph processing, namely, unified memory~\cite{gera2020traversing}, topology-aware partition~\cite{karypis1998fast} and vertex-range based partitions~\cite{guattery1995performance}, graph sampling and random walk algorithms, which require all the neighbors of a vertex to be present in order to compute the selection probability, exhibit stringent requirements on the partitioning methods.
In the meantime, the asynchronous and out-of-order nature of graph sampling and random walk provides some unique optimization opportunities for {\em out-of-memory sampling}, which are neither shared nor explored by traditional out-of-memory systems.
This work advocates {\textsc{c-saw}}, to the best of our knowledge, the first GPU-based framework that addresses all three aforementioned challenges and supports a wide range of sampling and random walk algorithms. Taken together, {\textsc{c-saw}} significantly outperforms the state-of-the-art systems that support only a subset of sampling or random walk algorithms.
The contributions of this paper are as follows:
\vspace{.1in}
\begin{itemize}
\item We propose a generic framework which allows end users to express a large family of sampling and random walk algorithms with ease (Section \ref{sect:arch}).
\vspace{.05in}
\item We implement efficient GPU sampling with novel techniques.
Our techniques parallelize the vertex selection on GPUs, with efficient algorithm and system optimizations for vertex collision mitigation (Section~\ref{sect:single}).
\vspace{.05in}
\item We propose {asynchronous} designs for sampling and random walk, which optimizes the data transfer efficiency for graphs that exceed the GPU memory capacity.
\new{We further scale {\textsc{c-saw}} to multiple GPUs} (Section \ref{sect:outmem}).
\end{itemize}
\vspace{.1in}
The remainder of this paper goes as follows: Section~\ref{sect:background} presents the background. Section~\ref{sect:arch} outlines the Application Programming Interface (API) and Sections~\ref{sect:single} and~\ref{sect:outmem} optimize {\textsc{c-saw}}. Section~\ref{sect:experiment} presents the evaluation results.
Section~\ref{sect:related} discusses the related works and Section \ref{sect:conclusion} concludes.
\section{Background}
\label{sect:background}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=\linewidth]{Figures/sample_example.pdf}
\caption{Example of graph sampling and vertex selection techniques. (a) A toy graph example to select a neighbor of $v_8$ ($v_5, v_7, v_9, v_{10}, v_{11}$), assuming the bias of a neighbor is defined as its degree. (b) Inverse Transform Sampling which does a binary search on a 1-D space to select $v_7$. (c) Dartboard method that rejects \protect\circled{1} and accepts \protect\circled{2} ($v_7$). (d) Alias method that selects $v_7$.
}
\label{fig:example}
\end{figure*}
\subsection{Graph Sampling \& Random Walk Variations}
\label{sect:background:samplerw}
\new{
This section presents the required background for various graph sampling and random walk algorithms~\cite{leskovec2006sampling}.}
Graph sampling refers to the random exploration of a graph, which results in a subgraph
of the original graph.
\new{\textbf{One Pass Sampling}
only goes through the original graph once to extract a sample.
Random node and random edge sampling belong to this category \cite{leskovec2006sampling}.
They select a subset of vertices/edges in the original graph uniformly at random.
}
\textbf{Traversal based Sampling}
often traverses the graph in a Breadth-First Search manner to better preserve the properties of original graphs \cite{hu2013survey}.
Traversal based sampling follows the \textit{sampling without replacement} methodology, i.e., it avoids sampling the same vertex more than once.
As shown in Table \ref{tbl:samplespace},
traversal based sampling algorithms are categorized based upon the number of sampled neighbors, called {\textit{NeighborSize}}, and the criterion to select neighbors, which is referred to as {\em bias}.
Snowball sampling~\cite{stivala2016snowball} initiates the sample using a set of uniformly selected seed vertices.
Iteratively, it adds all neighbors of every sampled vertex into the sample, until a required depth is reached.
Neighbor sampling~\cite{neighborsampling}
samples a constant number of neighbors per vertex. The sampling could be \cready{either} biased or unbiased.
Forest fire sampling~\cite{leskovec2006sampling} can be regarded as a probabilistic version of neighbor sampling, which selects a variable number of neighbors for each vertex based on a burning probability.
Unlike neighbor and forest fire sampling, which select neighbors for each vertex independently, layer sampling \cite{gao2018large} samples a constant number of neighbors for all vertices present in the frontier in each round.
It repeats this process until a certain depth is reached.
\new{
\textbf{Random Walk} simulates a stochastic process of traversing the graph to form a path of connected vertices. The length of the path is constrained by a user-given sampling budget.
Random walk can be viewed as a special case of sampling where only one neighbor is sampled at a step, with the salient difference that random walk allows repeated appearances of a vertex while sampling does not.
Table \ref{tbl:samplespace} summarizes the design space of random walk algorithms.
}
Similar to traversal based sampling, random walk algorithms use {\em bias} to decide the probability of selecting a certain neighbor.
For unbiased simple random walk, the bias is uniform for all neighbors, i.e., every neighbor has the same chance to be selected.
Deepwalk~\cite{perozzi2014deepwalk} and metropolis hasting random walk \cite{li2015random} are two examples of unbiased random walk. While Deepwalk samples neighbors uniformly, metropolis hasting random walk decides to either explore the sampled neighbor or choose to stay at the same vertex based upon the degrees of the source and neighbor vertices.
For a biased random walk, the bias varies across neighbors.
Furthermore, depending on how to decide the bias, biased random walks are classified \cready{into} static random walks \cready{and} dynamic random walks.
For static random walk, the bias is determined by the original graph structure and does not change at runtime. Biased Deepwalk~\cite{cochez2017biased} is an example of static random walk which extends the original Deepwalk algorithm. The degree of each neighbor is used as its bias.
\new{Since a simple random walk may get stuck
locally,
random walk with jump~\cite{tzevelekas2010random}, random walk with restart~\cite{tong2006fast} and multi-independent random walk~\cite{hu2013survey} are introduced.
Particularly, random walk with jump
jumps to a random vertex under a certain probability.
Random walk with restart
jumps to a predetermined vertex.
Multi-independent random walk performs multiple instances of random walk independently.
}
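\new{To make the jump/restart mechanics concrete, a minimal sequential C++ sketch of one step of random walk with restart could read as follows; the function name, the restart probability \texttt{alpha}, and the adjacency-list representation are our illustrative assumptions, not part of any existing API:
\begin{verbatim}
#include <vector>
#include <random>

// One step of random walk with restart: with probability alpha (or at a
// dead end) the walk teleports back to the seed vertex; otherwise it
// moves to a uniformly chosen neighbor of the current vertex.
int rwr_step(int cur, int seed, double alpha,
             const std::vector<std::vector<int>>& adj, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    if (coin(rng) < alpha || adj[cur].empty()) return seed;
    std::uniform_int_distribution<int> pick(0, (int)adj[cur].size() - 1);
    return adj[cur][pick(rng)];
}
\end{verbatim}
Random walk with jump is identical except that the teleport target is a random vertex rather than the fixed seed.}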
For dynamic random walks, the bias depends upon the runtime states.
Node2vec~\cite{grover2016node2vec} and multi-dimensional random walk (a.k.a. frontier sampling)~\cite{ribeiro2010estimating} belong to this genre.
Node2vec is an advanced version of Deepwalk which provides more control to the random walk.
The bias of a neighbor depends upon the edge weight and its distance from the vertex explored at the preceding step.
In multi-dimensional random walk, a pool of seed vertices are selected at the beginning.
At each step, multi-dimensional random walk selects one vertex $v$ from the pool based on the vertex degrees.
One random neighbor of $v$ is added to the pool to replace $v$.
This process repeats until a desired number of vertices are sampled.
\new{
\textbf{Summary.}
Traversal based sampling and random walk are widely used and share two core similarities: 1) they are based on graph traversal, and 2) they selectively sample vertices based on biases (detailed in Section \ref{sect:background:ns}).
Their difference is the number of sampled neighbors, as shown in Table \ref{tbl:samplespace}.
In the rest of this paper, we use {\bf graph sampling to refer to both traversal based sampling and random walk}, unless explicitly specified.
}
\subsection{Bias based Vertex Selection}
\label{sect:background:ns}
\new{This section discusses the key challenge of graph sampling:} to select vertices based on user defined biases, i.e., {\em bias based vertex selection}.
As discussed in Section \ref{sect:background:samplerw}, all sampling algorithms involve the process of picking up a subset of vertices from a candidate pool of vertices.
For \cready{unbiased graph sampling}, the selection is straightforward:
one can generate a random integer in the range of 1 to the candidate count and use it to select a vertex.
Vertex selection is more challenging for \cready{biased graph sampling}.
Given certain biases, we need to calculate the probability of selecting a certain vertex, which is called {\em transition probability}.
Theorem \ref{thm:tp} gives the formula to calculate transition probabilities from biases.
\begin{theorem}
\label{thm:tp}
Let \new{vertices $v_1, v_2, ..., v_{n}$} be the $n$ candidates, and the transition probability of $v_k$, i.e., $t_k$, be proportional to the {\em bias} $b_{k}$.
Then, one can formally obtain
\new{$t_k=\frac{b_{k}}{\sum_{i=1}^{n} b_{i}}$}.
\end{theorem}
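For instance, with the biases of Fig.~\ref{fig:example}(a), $\{b_1, \ldots, b_5\} = \{3, 6, 2, 2, 2\}$, the transition probability of the second candidate $v_7$ is $t_2 = 6/15 = 0.4$.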
Theorem \ref{thm:tp} underscores that \textit{bias} is the key to calculate transition probability.
All popular vertex selection algorithms -- inverse transform sampling~\cite{olver2013fast}, dartboard \cite{yang2019knightking}, and alias method~\cite{walker1977efficient,li2014reducing} -- obey this rule.
The key idea of inverse transform sampling (ITS) is to generate the cumulative distribution function of the transition probability.
Fig.~\ref{fig:example}(b) shows an example.
First, inverse transform sampling computes the prefix sum of biases of candidate vertices, to get an array $S$, where \cready{$S_m = \sum_{i=1}^{m} b_{i-1}$ ($1 \le m \le n+1$) and $n$ = total \# of candidate vertices}.
In Fig.~\ref{fig:example}(b), $S = \{0, 3, 9, 11, 13, 15\}$.
Then $S$ is normalized using \cready{$S_{n+1}$}, to get array $F$, where \cready{$F_m = S_m/S_{n+1}\ (1 \le m \le n+1)$}.
$F = \{0, 0.2, 0.6, 0.73, 0.87, 1\}$ in Fig.~\ref{fig:example}(b).
In this way, the transition probability of $v_k$ can be derived with $F$, because
\setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt}
\setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt}
\new{{\footnotesize
\begin{equation}
\label{equ:tp}
\begin{aligned}
t_k&=\frac{b_{k}}{\sum_{i=1}^{n} b_{i}}=\frac{\sum_{i=1}^{k} b_{i}-\sum_{i=1}^{k-1} b_{i}}{\sum_{i=1}^{n} b_{i}}\\
&=\frac{S_{k+1}-S_{k}}{S_{n+1}}
=\frac{S_{k+1}}{S_{n+1}}-\frac{S_{k}}{S_{n+1}}=F_{k+1}-F_{k}.
\end{aligned}
\end{equation}
}
}
\noindent We call the array of $F$ \textbf{Cumulative Transition Probability Space (CTPS)}.
To select a neighbor, inverse transform sampling generates a random number $r$ in the range of (0,1), and employs a binary search of $r$ over the CTPS.
Assuming $r=0.5$ in Fig. \ref{fig:example}(b), it falls between \cready{$F_2 = 0.2$ and $F_3 = 0.6$.}
As a result, the \cready{second} candidate $v_7$ is selected on the CTPS.
When implemented sequentially, ITS has the computational complexity of $O(n)$, determined by the prefix sum calculation.
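For concreteness, a minimal sequential C++ sketch of ITS (our simplification, not {\textsc{c-saw}}'s implementation; it assumes strictly positive biases) could read:
\begin{verbatim}
#include <vector>
#include <random>
#include <algorithm>

// Select one index from bias[0..n-1] with probability bias[k] / sum(bias).
int its_select(const std::vector<double>& bias, std::mt19937& rng) {
    int n = (int)bias.size();
    // Prefix sum: S has n+1 entries, S[0] = 0 and S[m] = b_1 + ... + b_m.
    std::vector<double> S(n + 1, 0.0);
    for (int i = 0; i < n; ++i) S[i + 1] = S[i] + bias[i];
    // Drawing r in (0, S[n]) folds the normalization into the random
    // draw, so the normalized CTPS need not be materialized explicitly.
    std::uniform_real_distribution<double> dist(0.0, S[n]);
    double r = dist(rng);
    // Binary search over the cumulative space: O(log n) after the
    // O(n) prefix sum, which dominates the sequential cost.
    auto it = std::upper_bound(S.begin(), S.end(), r);
    return (int)(it - S.begin()) - 1;   // index of the selected candidate
}
\end{verbatim}
With the biases of Fig.~\ref{fig:example}, $S = \{0, 3, 9, 11, 13, 15\}$, and $r = 0.5 \times 15 = 7.5$ lands in the second interval, selecting $v_7$ as in the text.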
Dartboard \cite{yang2019knightking} uses 2D random numbers to select/reject vertices.
As shown in Fig.~\ref{fig:example}(c), we build a 2D board using the \cready{bias of each vertex as a bar}, and then throw a dart \cready{formed by two random numbers} at the board.
If it does not hit any bar (e.g., \circled{1}), we reject the selection and throw another dart, until a bar is hit (e.g., \circled{2}).
This method may require many trials before \cready{picking a vertex} successfully, especially for scale-free graphs where a few candidates have much larger biases than others.
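A minimal sequential C++ sketch of this rejection scheme (again our illustration; \texttt{maxBias}, the largest bias among the candidates, is assumed to be known) could read:
\begin{verbatim}
#include <vector>
#include <random>

// Dartboard selection: pick a column and a height uniformly at random;
// accept if the dart lands under the bias bar, otherwise retry.
int dartboard_select(const std::vector<double>& bias, double maxBias,
                     std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, (int)bias.size() - 1);
    std::uniform_real_distribution<double> height(0.0, maxBias);
    while (true) {
        int k = pick(rng);
        if (height(rng) < bias[k]) return k;   // hit: accept candidate k
        // miss: reject and throw another dart
    }
}
\end{verbatim}
The expected number of throws is $n \cdot b_{\max} / \sum_{i} b_i$, which grows quickly when a few candidates dominate the bias mass.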
Similar to dartboard, the alias method~\cite{li2014reducing} also uses a 2D \cready{board}.
To avoid rejection, the alias method converts the sparse 2D board into a dense one as shown in Fig. \ref{fig:example}(d).
It breaks down and distributes large biases across bins on the x axis,
with the guarantee that a single bin contains at most two vertices.
The drawback of the alias method is its high preprocessing cost to break down and distribute the biases, which is not suitable for GPUs.
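For reference, a standard sequential construction (Vose's method; our sketch, not part of {\textsc{c-saw}}) makes this preprocessing cost explicit:
\begin{verbatim}
#include <vector>
#include <random>
#include <numeric>

struct AliasTable {
    std::vector<double> prob;   // probability of keeping the bin's owner
    std::vector<int> alias;     // who fills the rest of the bin
};

// O(n) build: scale biases to mean 1, then pair underfull bins with
// overfull ones so that every bin holds at most two candidates.
AliasTable build_alias(const std::vector<double>& bias) {
    int n = (int)bias.size();
    double sum = std::accumulate(bias.begin(), bias.end(), 0.0);
    std::vector<double> p(n);
    for (int i = 0; i < n; ++i) p[i] = bias[i] * n / sum;
    std::vector<int> small, large;
    for (int i = 0; i < n; ++i) (p[i] < 1.0 ? small : large).push_back(i);
    AliasTable t{std::vector<double>(n, 1.0), std::vector<int>(n, -1)};
    while (!small.empty() && !large.empty()) {
        int s = small.back(); small.pop_back();
        int l = large.back(); large.pop_back();
        t.prob[s] = p[s];
        t.alias[s] = l;              // l donates mass to fill bin s
        p[l] -= 1.0 - p[s];
        (p[l] < 1.0 ? small : large).push_back(l);
    }
    return t;
}

// O(1) draw: pick a bin, then keep its owner or take the alias.
int alias_select(const AliasTable& t, std::mt19937& rng) {
    std::uniform_int_distribution<int> bin(0, (int)t.prob.size() - 1);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int k = bin(rng);
    return (u(rng) < t.prob[k]) ? k : t.alias[k];
}
\end{verbatim}
Each draw is $O(1)$, but the build serializes data-dependent pairing decisions over all candidates, which is why the text above deems it unsuitable for GPUs.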
\section{$\textsc{c-saw}$ Architecture}
\label{sect:arch}
\vspace{0.1in}
\subsection{Motivation}
\label{sect:background:moti}
\new{
\textbf{Need for Generic Sampling Framework.}
After sifting through numerous graph analytical frameworks (detailed in Section~\ref{sect:related}),
we find the need for a new framework for graph sampling, because
\textit{sampling algorithms pose distinct needs on both the framework design and APIs}.
For framework design, several sampling algorithms, e.g., layer sampling, require information beyond a vertex and its neighbors for computing, which poses hardship for traditional vertex-centric frameworks that limit the view of a user to a vertex and its 1-hop neighbors.
When it comes to API design, \textit{\textbf{bias} is the essence of sampling and random walk}.
In comparison, traditional graph frameworks focus upon the operators that alter the information on an edge or a vertex, e.g., the minimum operator in single source shortest path. We also notice recent attempts, e.g., KnightKing~\cite{yang2019knightking} and GraphSAINT~\cite{zeng2019graphsaint}, but they cannot support both sampling and random walk algorithms.}
\textbf{Need for Sampling and Random Walk on GPUs.}
For sampling, short turnaround time is the key.
It is also the root cause of the invention of sampling~\cite{zeng2019accurate,lofgren2014fast}.
The good news is that GPU is a proven vehicle to drive an array of graph algorithms beyond their performance ceiling~\cite{liu2015enterprise,liu2016ibfs,wang2016gunrock,liu2019simd,gaihre2019xbfs,pandey2019h,bisson2017high}, thanks to the unprecedented computing capability and memory bandwidth~\cite{keckler2011gpus}. When it comes to sampling, which is much more random than traditional graph algorithms, GPUs will best CPUs at even larger margins because the extreme randomness renders the large CPU caches futile.
\subsection{{\textsc{c-saw}}: A Bias-Centric Sampling Framework}
\label{sect:arch:overview}
{\textsc{c-saw}} offloads sampling and random walk on GPUs with the goal of a \textit{simple} and \textit{expressive} API and a \textit{high performance} framework.
Particularly, \textit{simple} means the end users can program {\textsc{c-saw}} without knowing the GPU programming syntax.
\textit{Expressiveness} requires {\textsc{c-saw}} to not only support the known sampling algorithms discussed in Section~\ref{sect:background:samplerw}, but also prepare to support emerging ones.
\textit{High performance} targets the framework design.
That is, the programming simplicity does not prevent {\textsc{c-saw}} from exploring major GPU and sampling related optimizations.
\begin{figure}[t]
\centering
\includegraphics[width=0.89\textwidth]{Figures/API_hang.pdf}
\caption{\new{\textsc{c-saw}~framework and API functions.}
}
\label{fig:api}
\end{figure}
{\textsc{c-saw}} encompasses two types of user involvements, i.e., parameter and API based options. The parameter-based option only needs a choice from the end users and is thus simple, e.g., deciding the number of selected frontier vertices ($FrontierSize$ in line 4 of Fig. \ref{fig:api}(b)) and neighbors ($NeighborSize$ in line 6).
API based involvement, in contrast, provides more expressiveness to users. Particularly, {\textsc{c-saw}} offers three user defined API functions \cready{as shown in Fig.~\ref{fig:api}(a)}, \cready{most} of which surround bias, that is, \textsc{VertexBias}, \textsc{EdgeBias}, and \textsc{Update}.
We will discuss the design details of these API functions in Section \ref{sect:arch:api}.
Fig. \ref{fig:api}(b) gives an overview of the {\textsc{c-saw}} algorithm.
Particularly, bias based vertex selection occurs in two places: to select frontier vertices from a pool (line 4), and to select the neighbors of frontier vertices (line 6).
While the latter case is required by all graph sampling algorithms, the former becomes essential when users want to introduce more randomness, such as \new{multi-dimensional random walk}.
\begin{figure}[t]
\centering
\includegraphics[width=0.89\textwidth]{Figures/rw_ns.pdf}
\vspace{-.1in}
\caption{\new{Implementing two sampling algorithms with \textsc{c-saw}~API.}
}
\label{fig:api_example}
\end{figure}
In the beginning, the frontier \cready{$FrontierPool$} is initialized with a set of seed vertices (line 2).
Sampling starts from these seeds until reaching the desired depth (line 3).
In each iteration of the while loop, first, \textbf{\textsc{VertexBias}} is called on the \cready{$FrontierPool$} to retrieve the bias for each candidate vertex.
\textsc{Select} method uses the biases provided by \textbf{\textsc{VertexBias}} to choose $FrontierSize$ vertices as the current frontier (line 4).
Next, all neighbors of the frontier vertices are collected in the $NeighborPool$ using the \textsc{GatherNeighbors} method (line 5).
For these neighbors, we first define their biases using the \textbf{\textsc{EdgeBias}} method.
Similarly, \textsc{Select} method uses the biases to choose $NeighborSize$ neighbors from the $NeighborPool$ (line 6).
From the selected neighbors, \textbf{\textsc{Update}} is used to \cready{pick} new vertices for the \cready{$FrontierPool$} (line 7).
The selected neighbors are also added to the final sample \cready{list $Sampled$} (line 8) before we move forward to the next iteration.
\subsection{{\textsc{c-saw}} API}
\label{sect:arch:api}
\textbf{\textsc{VertexBias}} defines the bias associated with a candidate vertex of the \cready{$FrontierPool$}.
We often use the pertinent property of vertex to derive the bias. Equation~(\ref{equation:vertexbias}) formally defines the bias for each vertex $v$ in the \cready{$FrontierPool$}.
\new{We apply function $f_{vBias}$ over the property of $v$ to define the associated bias.}
\new{\begin{equation}
\label{equation:vertexbias}
\textbf{\textsc{VertexBias}}\underset{v~{\in}~FrontierPool}{\longleftarrow} f_{vBias} (v).
\end{equation}}
Multi-dimensional random walk, for example, uses the vertex degree as the bias for the vertex of interest.
\textbf{\textsc{EdgeBias}} defines the bias of each neighbor in the \cready{\textit{NeighborPool}}.
It is named {\textsc{EdgeBias}} because every neighboring vertex is associated with an edge.
While, again, any static or dynamic bias is applicable, a typical bias is induced from the properties of \cready{the associated edge}.
Equation~(\ref{equation:edgebias}) defines \textsc{EdgeBias} formally.
Let $v$ be the source vertex of $u$.
\new{Assuming edge $e=(v, u)$ carries the essential properties of $v$, $u$, and $e$, we arrive at the following edge bias:
\begin{equation}
\label{equation:edgebias}
\textbf{\textsc{EdgeBias}} \underset{e~{\in}~NeighborPool}{\longleftarrow} f_{eBias} (e).
\end{equation}
}
\textbf{\textsc{Update}} decides the vertex that should be added to the \cready{$FrontierPool$} based on the sampled neighbors.
It can return any vertex to provide maximum flexibility.
For instance, this method can be used to filter out vertices that have been visited before for most traversal based sampling algorithms.
Whereas for random walk, this method can be used to implement the jump or restart action in the random walk with jump and with restart, respectively. Equation~(\ref{equation:update}) quantifies this method, \new{where we decide whether to add the sampled vertex $u$, a neighbor of frontier $v$ reached via edge $e$, into the \cready{\textit{FrontierPool}} based upon the properties of $e$ and its endpoints:
\begin{equation}
\label{equation:update}
FrontierPool \longleftarrow \textbf{\textsc{Update}}(e).
\end{equation}
}
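To make the contract concrete, below is a minimal sketch of how these three callbacks could look as CUDA device functions; the signatures, the $Edge$ type, and the sentinel convention are our illustration rather than {\textsc{c-saw}}'s verbatim interface.
{\footnotesize
\begin{verbatim}
// Hypothetical shapes of the three user-defined callbacks.
struct Edge { int src; int dst; float weight; };

// Bias of a candidate frontier vertex (the VertexBias equation).
__device__ float VertexBias(int v);

// Bias of the neighbor reached through edge e (the EdgeBias equation).
__device__ float EdgeBias(Edge e);

// Vertex to insert into FrontierPool, or -1 to insert nothing
// (the Update equation).
__device__ int Update(Edge e);
\end{verbatim}
}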
\vspace{-.1in}
\subsection{Case Study}
\label{sect:arch:case}
{\textsc{c-saw}} can support all graph sampling and random walk algorithms introduced in Section \ref{sect:background:samplerw}.
Fig.~\ref{fig:api_example} exhibits how to use {\textsc{c-saw}} to implement two popular algorithms: Node2vec and \new{multi-dimensional random walk.}
Without loss of generality, we use the simplest example, i.e., \new{multi-dimensional random walk} to illustrate how {\textsc{c-saw}} works, as shown in Fig.~\ref{fig:layer_example}.
$FrontierSize$ and $NeighborSize$ are set as \new{3 and 1 respectively}.
{\textsc{VertexBias}} is based on the degree of vertices in the frontier pool in \new{multi-dimensional random walk}.
{\textsc{EdgeBias}} returns 1, resulting in the same transition probability for every neighbor.
{\textsc{Update}} always adds the currently sampled neighbor to the FrontierPool.
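Under the hypothetical signatures sketched in Section~\ref{sect:arch:api}, this behavior reduces to three one-liners; \texttt{degree()} is an assumed accessor over the graph representation.
{\footnotesize
\begin{verbatim}
// Multi-dimensional random walk with the sketched callbacks.
__device__ float VertexBias(int v) { return (float)degree(v); } // degree bias
__device__ float EdgeBias(Edge e)  { return 1.0f; }  // uniform neighbors
__device__ int   Update(Edge e)    { return e.dst; } // re-enqueue the pick
\end{verbatim}
}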
\begin{figure}[t]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=3.8cm}}]{figure}[\FBwidth]
{
\caption{A \new{multi-dimensional random walk} example. Assuming \{$v_8$, $v_0$, $v_3$\} in FrontierPool$_t$, we use \textsc{VertexBias} to select $v_8$ as the sampling frontier at iteration $t$. Based on \textsc{EdgeBias} in Fig.~\ref{fig:api_example}(d), we select $v_7$, and put it in sampled edges array. According to \textsc{Update}, {\textsc{c-saw}} further puts $v_7$ in FrontierPool$_{t+1}$ as \{$v_0$, $v_3$, $v_7$\}. Similar process continues until {\textsc{c-saw}} gathers adequate samples.\vspace{-.01in}
}
\label{fig:layer_example}
}
{
\includegraphics[width=.95\linewidth]{Figures/MDRW.pdf}
}
\end{figure}
\section{Optimizing GPU Sampling}
\label{sect:single}
Fig. \ref{fig:api}(b) has shown the overall algorithm of \textsc{c-saw}.
In this section, we discuss how to implement this algorithm efficiently on GPUs.
We will discuss our general strategies to parallelize the \textsc{Select} function on GPUs (Section \ref{sect:single:ves}) and how to address the conflict when multiple GPU threads select the same vertex (Section \ref{sect:single:bitmap}).
\subsection{{Warp-Centric Parallel Selection}}
\label{sect:single:ves}
The core part of the \textsc{c-saw}~algorithm is to {\em select} a subset of vertices from a pool (lines 4 and 6 in Fig. \ref{fig:api}(b)).
As discussed in Section~\ref{sect:background:ns}, several algorithms have been proposed in this regard.
In this paper, we adopt inverse transform sampling \cite{olver2013fast} for GPU vertex selection, because 1) it allows calculating transition probabilities with flexible and dynamic biases, and 2) it shows a more regular control flow which is friendly to GPU execution.
Fig. \ref{fig:select} illustrates the \textsc{Select} algorithm using inverse transform sampling.
We aim to have an efficient GPU implementation of it.
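As a point of reference, below is a minimal device-side sketch of one inverse-transform draw over a CTPS array; the layout follows Fig.~\ref{fig:select}, while the function names are our own.
{\footnotesize
\begin{verbatim}
#include <curand_kernel.h>

// ctps[0..n] holds the normalized prefix sums (CTPS), with
// ctps[0] = 0 and ctps[n] = 1. searchCTPS returns the index k
// such that ctps[k] <= r < ctps[k+1].
__device__ int searchCTPS(const float* ctps, int n, float r) {
    int lo = 0, hi = n;              // binary search over boundaries
    while (lo + 1 < hi) {
        int mid = (lo + hi) >> 1;
        if (ctps[mid] <= r) lo = mid; else hi = mid;
    }
    return lo;
}

// One draw from the CTPS with a uniform random number.
__device__ int inverseTransformPick(const float* ctps, int n,
                                    curandState* st) {
    return searchCTPS(ctps, n, curand_uniform(st));
}
\end{verbatim}
}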
\textbf{Inter-warp Parallelism.}
{Each thread warp, whether from the same or different thread blocks, is assigned to sample a vertex in \cready{$FrontierPool$}}.
To fully saturate GPU resources, thousands of \cready{candidate vertices need to be sampled concurrently.}
There are two sources of them.
First of all, many sampling algorithms naturally \cready{sample all vertices in $FrontierPool$ concurrently.}
For instance, \cready{neighbor sampling allows all vertices in $FrontierPool$ to be sampled concurrently and} requires a separate \cready{$NeighborPool$} for each vertex in the \cready{$FrontierPool$.}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{Figures/select.pdf}
\vspace{-.05in}
\caption{\new{The unoptimized implementation of \textsc{Select} function}.
\vspace{-.02in}}
\label{fig:select}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{Figures/brs_hang.pdf}
\caption{\new{Assuming $v_7$ is already selected (dotted line in CTPS): (a) naive repeated sampling on the original CTPS, (b) updated sampling on the recalculated CTPS, and (c) our bipartite region search approach.}
\vspace{-.02in}
}
\label{fig:brs}
\end{figure*}
\cready{Second}, most sampling applications including {Graph Convolutional Network} (GCN)~\cite{kipf2016semi}, Deepwalk, Node2vec, and {Personalized PageRank} (PPR)~\cite{lofgren2014fast}, need to launch many instances of sampling either from the same seeds or different seeds. \new{Here, an \textit{instance} generates one sampled graph from the original graph. Particularly, for all algorithms except multi-dimensional random walk, an instance starts with one source vertex. For multi-dimensional random walk, an instance has multiple source vertices, which collectively generate one sampled graph.}
Applications like GCN require multiple sample instances for training the model~\cite{gao2018large,zeng2019graphsaint,chen2018fastgcn}, while Deepwalk, Node2vec, and PPR require multi-source random walk to either generate vertex embeddings or estimate PPR~\cite{alamgir2010multi, perozzi2014deepwalk, grover2016node2vec}.
With thousands of concurrent instances, {\textsc{c-saw}} is able to leverage the full computing power of GPU.
Since the inter-warp parallelism is straightforward to implement, we focus on exploiting the intra-warp parallelism for {\textsc{c-saw}}.
\textbf{Intra-warp Parallelism.}
A thread warp is used to execute one instance of \textsc{Select} on a pool of vertices.
An obvious alternative is to use a thread block. However, most real-world graphs follow a power-law degree distribution, i.e., the majority of the vertices in the graph have very few edges. Using a thread block for a neighbor pool would thus fail to saturate its resources.
Our evaluation shows that using thread warps achieves $\sim 2 \times$ speedup compared with using thread blocks.
Thus we choose to use thread warps to exploit the parallelism within \textsc{Select}.
As shown in Fig. \ref{fig:select}, first, \textsc{Select} calculates the prefix sum of the biases of all vertices (line {6}).
Fortunately, parallel prefix sum is a well-studied area on GPUs.
In this paper, we adopt the Kogge-Stone algorithm\cready{~\cite{merrill2009parallel}}, which performs well for warp-level prefix sums where all threads execute in lockstep.
The normalization of prefix sums (line {7}) can be naturally parallelized by distributing the division of different array elements across threads.
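For reference, a warp-level Kogge-Stone inclusive scan takes only a handful of shuffle instructions; the sketch below is the standard pattern rather than {\textsc{c-saw}}'s exact code.
{\footnotesize
\begin{verbatim}
// Warp-level Kogge-Stone inclusive prefix sum, one bias per lane.
__device__ float warpInclusiveScan(float bias) {
    const unsigned mask = 0xffffffffu;    // all 32 lanes participate
    int lane = threadIdx.x & 31;
    for (int offset = 1; offset < 32; offset <<= 1) {
        float up = __shfl_up_sync(mask, bias, offset);
        if (lane >= offset) bias += up;   // lock-step accumulation
    }
    return bias;  // lane i now holds bias[0] + ... + bias[i]
}
\end{verbatim}
}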
To parallelize the vertex selection loop (line {10-14}),
{\textsc{c-saw}} dedicates one thread for each vertex selection to maximize the parallelism.
For each loop iteration, a random number is generated to select one vertex, as introduced in Section \ref{sect:background:ns}.
However, this creates a crucial challenge that different threads may select the same vertex, i.e., {\em selection collision}.
\subsection{Mitigating Selection Collision}
\label{sect:single:bitmap}
To mitigate the aforementioned {\em selection collision},
we propose two solutions: bipartite region search and bitmap based collision detection. Before introducing our new designs, we first discuss naive solutions.
\vspace{0.05in}
\textbf{Naive Solutions.}
A naive solution is to have a do-while loop (line {10-14} in Fig. \ref{fig:select}) to re-select another one until success, i.e., {\em repeated sampling}.
However, many iterations may be needed to make a successful selection.
As shown in Fig.~\ref{fig:brs}(a), if the region of $v_7$ (i.e., 0.2 - 0.6 in CTPS) is already selected, our newly generated random number 0.58 will not lead to a successful selection.
In fact, our evaluation observes that this method suffers on scale-free graphs, whose transition probability can be highly skewed, or when a large fraction of the candidates need to be selected, i.e., a larger $NeighborSize$.
Another solution is to recalculate the CTPS by excluding the already selected vertices, i.e., {\em updated sampling}, such as Fig.~\ref{fig:brs}(b).
Then we can always pick unselected vertices by searching through the updated CTPS.
Particularly in Fig.~\ref{fig:brs}(b), we will perform another Kogge-Stone prefix-sum for the new bias array \{3, 2, 2, 2\} towards \{0, 3, 5, 7, 9\}. Consequently, the CTPS becomes \{0, 0.33, 0.56, 0.78, 1\}. Then, the random number $r=0.58$ selects $v_{10}$.
Recalculating prefix sum is, however, time consuming.
\vspace{0.05in}
\textbf{Bipartite Region Search} inherits the advantages of both repeated and updated sampling, while avoiding their drawbacks.
That is, it {\em does not need the expensive CTPS update} of updated sampling, while it greatly {\em improves the chance of successful selection} compared with repeated sampling.
Particularly, while updated sampling updates the CTPS without changing the random number as shown in Fig. \ref{fig:brs}(b), the key idea of bipartite region search is to adjust the random number $r$ so that the CTPS remains intact and can be reused.
Most importantly, bipartite region search guarantees that its random number adjustment leads to the same selections as updated sampling.
Note, this method is called bipartite region search because when the random number hits an already selected vertex, it searches either the right or the left side of the already selected region in the CTPS.
Below, we discuss this adjustment.
\vspace{0.05in}
{\footnotesize
\centering
\noindent\fbox{%
\parbox{0.95\linewidth}{%
\noindent \circled{1} Generate a random number $r'$ $(0 \leq r' < 1)$.
\noindent \circled{2} Use $r'$ to select a vertex in CTPS. If the vertex has not been selected, done. Otherwise, the region that $r'$ falls into corresponds to a pre-selected vertex. Assume the boundary of this region in CTPS is $(l, h)$. Go to \circled{3}.
\noindent {\circled{3} Let $\lambda = 1/(1 - (h - l))$, $\delta = h - l$ and update $r$ to $r'/\lambda$. If $r < l$, select $(0, l)$ and go to \circled{4}.
Otherwise select $(h, 1)$ and go to \circled{5}.}
\noindent {\circled{4} Use the updated $r$ to search in $(0, l)$. If updated $r$ falls in another selected region, go to \circled{1}. Otherwise done.}
\noindent {\circled{5} Further update $r$ to $r + \delta$ and search in $(h, 1)$. If updated $r$ falls in another selected region, go to \circled{1}. Otherwise done.}
}
}
\par}
\vspace{0.05in}
{Fig.~\ref{fig:brs}(c) explains how bipartite region search works for the same example in Fig.~\ref{fig:brs}(b).
Assuming we get a random number $r'=0.58$, it corresponds to $v_7$ in the original CTPS.
Since $v_7$ is already selected, bipartite region search will adjust this random number to 0.348 in \circled{3}.
Since the updated $r = 0.348 > l = 0.2$, bipartite region search selects $(0.6, 1)$ to explore. Consequently in \circled{5}, we further add $\delta = 0.4$ to $r$ which leads to $r = 0.748$.
0.748 corresponds to $v_{10}$, and thus results in a successful selection.
\cready{\textit{It is important to note that this selection is identical as updated sampling in Fig.~\ref{fig:brs}(b).}} }
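To make the steps concrete, a hedged device-side sketch of one selection with bipartite region search is given below; it reuses \texttt{searchCTPS} from the sketch in Section~\ref{sect:single:ves}, and \texttt{isSelected} queries the collision bitmap sketched later in this section. All names are illustrative.
{\footnotesize
\begin{verbatim}
__device__ int selectWithBRS(const float* ctps, int n,
                             const unsigned int* bitmap, int numWords,
                             curandState* st) {
    for (;;) {
        float r = curand_uniform(st);          // step 1
        int k = searchCTPS(ctps, n, r);        // step 2
        if (!isSelected(bitmap, numWords, k)) return k;
        float l = ctps[k], h = ctps[k + 1];    // pre-selected region
        float delta = h - l;
        r *= (1.0f - delta);                   // step 3: r <- r' / lambda
        if (r >= l) r += delta;                // step 5: jump over (l, h)
        k = searchCTPS(ctps, n, r);            // steps 4/5: re-search
        if (!isSelected(bitmap, numWords, k)) return k;
        // Hit another pre-selected region: retry from step 1.
    }
}
\end{verbatim}
}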
\vspace{0.05in}
\textbf{Proof of Bipartite Region Search.}
We now prove the soundness of bipartite region search for the scenario where one and only one vertex has been pre-selected.
\begin{theorem}\label{thm:brs}
Assuming $v_k$'s probability region is $(F_k, F_{k+1})$ in the original CTPS (recall the definition of $F$ in Section \ref{sect:background:ns}), let $v_s$ be the pre-selected vertex, and $F'_k$ be the corresponding boundary in the updated CTPS. With $l = F_s$, $h = F_{s+1}$, $\lambda = \frac{1}{1 - (h - l)}$ and $\delta = h - l$, we prove that:
{\footnotesize
\begin{equation}\label{eq:brs}
F'_{k} =
\begin{cases}
\lambda\cdot F_k; & \text{$k<s,$}\\
\lambda\cdot (F_k - \delta); & \text{otherwise.}
\end{cases}
\end{equation}
}
\end{theorem}
\begin{proof}
Adopting Equation~\ref{equ:tp}, we get \new{$F_k = \frac{\sum_{i=1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}$}. \new{Denoting $\mathbb{F}=\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{n}b_{i}$}, Theorem~\ref{thm:tp} leads to:
\new{\footnotesize
\begin{equation}
F'_{k} =
\begin{cases}
\frac{\sum_{i=1}^{k-1}b_{i}}{\mathbb{F}}; & \text{$k<s,$}\\
\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\mathbb{F}}; & \text{otherwise.}
\end{cases}
\end{equation}
}
\noindent When $k<s$,
\new{\footnotesize
\begin{align}
F'_k&=\frac{\sum_{i=1}^{k-1}b_{i}}{\mathbb{F}}
=\frac{\sum_{i=1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}\cdot \frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}}
=F_k\cdot\frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}}.
\end{align}
}
\noindent Since \new{$\frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}} = \frac{1}{1 - (h - l)} = \lambda$}, we prove $F'_k = \lambda\cdot F_k$.
When $k > s$,
\new{\footnotesize
\begin{equation}
\begin{aligned}
F'_k&=\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\mathbb{F}}
=\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}\cdot \frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}}\\
&=\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}\cdot\lambda
=\frac{\sum_{i=1}^{k-1}b_{i} - b_s}{\sum_{i=1}^{n}b_{i}}\cdot\lambda\\
&=(\frac{\sum_{i=1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}} - \frac{b_s}{\sum_{i=1}^{n}b_{i}})\cdot\lambda
=(F_k - \frac{b_s}{\sum_{i=1}^{n}b_{i}})\cdot\lambda.
\label{equ:factor}
\end{aligned}
\end{equation}
}
\noindent Since \new{$\frac{b_s}{\sum_{i=1}^{n}b_{i}} = {h-l} = \delta$}, we obtain $F'_k = \lambda\cdot (F_k -\delta)$.
\end{proof}
Theorem~\ref{thm:brs} states that one can adjust the probabilities from the original CTPS to derive the updated CTPS.
Reversing the transformation direction, we further obtain:
{\footnotesize
\begin{equation}\label{eq:brs_random}
F_{k} =
\begin{cases}
\frac{F'_k}{\lambda}; & \text{$k<s,$}\\
\frac{F'_k}{\lambda} + \delta; & \text{otherwise.}
\end{cases}
\end{equation}
}
Since $r'$ is the random number for the updated CTPS, we can substitute $F'_k$ with $r'$ in Equation~\ref{eq:brs_random} to derive the corresponding $r$ in the original CTPS. In particular, the region boundaries of the updated CTPS map exactly to those of the original CTPS, e.g., \{0, 0.33, 0.56, 0.78, 1\} in Fig.~\ref{fig:brs}(b) map to \{0, 0.2, 0.73, 0.87, 1\} in Fig.~\ref{fig:brs}(c). Further, since $F_k$ is a strictly monotonic function of $F'_k$, if $r'$ falls between two region boundaries of the updated CTPS, the derived $r$ falls between the corresponding boundaries of the original CTPS.
This ensures bipartite region search will make identical selection as if the CTPS is updated.
It is also provable that statistically, the selection probability of our algorithm is the same as the desired transition probability in more complicated scenarios where multiple vertices have been pre-selected.
\begin{comment}
\begin{theorem}
Let bipartite region search select $v_s$ in the 1st iteration.
Assume $v_s$ has been pre-selected so it advances to the next iteration.
Then, its probability of selecting $v_k$ $(k \neq s)$ in the 2nd iteration, $P_{brs}(v_k)$, equals to the desired transition probability $P(v_k|v_s \in S)$ in Theorem \ref{thm:stp}.
\end{theorem}
\begin{proof}
When $k<s$, $v_k$ falls into the region $(0, l)$ in TPS.
As a result, $v_k$ is selected if and only if $(0, l)$ is selected at \circled{3} in the 1st iteration (the probability is denoted as $P_{brs}((0,l))$), and $v_k$ is chosen within $(0, l)$ in the 2nd iteration (the probability is denoted as $P_{brs}(v_k|(0,l))$).
From \circled{3}, it is obvious that $P_{brs}((0,l)) = \frac{l}{l+1-h}$.
Recall no vertices in $(0, l)$ are selected yet because $v_s$ is the only selected vertex, so based on Theorem \ref{thm:tp}, we know that $P_{brs}(v_k|(0,l)) = \frac{t_k}{l-0}$.
Apparently for bipartite region search, $P_{brs}((0,l))$) and $P_{brs}(v_k|(0,l))$ are independent, therefore,
{\footnotesize
\begin{equation}
\label{equ:5}
\begin{aligned}
P_{brs}(v_k) &= P_{brs}(v_k|(0, l)) \times P_{brs}((0, l))\\
&= \frac{t_k}{l-0} \times \frac{l}{l+1-h} = \frac{t_k}{l+1-h}.
\end{aligned}
\end{equation}
}
\noindent By definition of TPS (Equation \ref{equ:tp}), $t_s = h-l$, therefore,
{\footnotesize
\begin{equation}
\label{equ:6}
\begin{aligned}
l+1-h &= 1-(h-l) = \sum_{i=0}^{n-1} t_i - (h-l) \\
&= \sum_{i=0}^{n-1} t_i - t_s = \sum_{i=0}^{s-1} t_i + \sum_{i=s+1}^{n-1} t_i.
\end{aligned}
\end{equation}
}
\noindent Use Equation \ref{equ:6} to substitute $l+1-h$ in Equation \ref{equ:5}, thus
{\footnotesize
\begin{equation}
P_{brs}(v_k) = \frac{t_k}{\sum_{i=0}^{s-1} t_i + \sum_{i=s+1}^{n-1} t_i} = P(v_k|v_s \in S).
\end{equation}
}
\noindent Similarly, we can also prove $P_{brs}(v_k)=P(v_k|v_s \in S)$ when $k>s$.
\end{proof}
\textbf{Over-selection to Migrate Collision.}
Naturally, to select $N$ vertices, $N$ threads should be deployed so each of them selects one vertex.
In the meantime, GPU is capable to execute hundreds of thousands of threads in parallel.
To reduce the chance of selection collision, we propose {\em over-selection} by leveraging the massive parallelism capability of GPU.
Instead of having $N$ threads, over-selection always deploys more threads than the number of selected vertices.
The number of threads is a multiply of 32, as each thread warp has 32 threads for NVIDIA GPUs.
As a result, over-selection makes use of all intra-warp threads.
A shared counter is used to record how many unique vertices have been selected.
When a thread selects one unique vertex, it will check the counter.
If the counter indicates enough vertices have been selected, the thread drops the vertex and finishes its work.
Otherwise, the vertex is selected successfully and the counter increases 1.
\end{comment}
\vspace{0.05in}
\textbf{Strided Bitmap for Collision Detection.}
Bipartite region search requires a collision detection mechanism.
We introduce a per vertex bitmap to detect selection collision (line \new{13} in Fig. \ref{fig:select}).
For every candidate vertex, there is a unique bit in the bitmap to indicate whether it has been selected.
The bitmap is shared by all threads of a warp.
After each thread selects a vertex, we perform an atomic compare-and-swap operation to the corresponding bit in the bitmap.
If the bit is 0, which means no other threads have picked this vertex, we set it to 1.
Since current GPUs do not support bit-wise atomic operations, we may use either 8-bit or 32-bit integer variables for the bitmap representation, where each bit corresponds to one vertex.
As using 32-bit variables results in more conflicts when updating multiple bits within the same variable, we choose 8-bit variables instead.
To resolve the atomic contentions, we propose to use {\em strided} bitmaps, inspired by the set-associative cache organization \cite{jouppi1990improving}.
A strided bitmap scatters the bits of adjacent vertices across different 8-bit variables, as shown in Fig. \ref{fig:bitmap}. Instead of using the first five bits of the same 8-bit variable to indicate the status of all vertices in the contiguous bitmap, the strided bitmap spreads them into two variables to reduce conflicts.
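A minimal sketch of this scheme is shown below. Since CUDA offers no native 8-bit atomics, the sketch uses 32-bit words with \texttt{atomicOr} for clarity; {\textsc{c-saw}}'s 8-bit variant would instead emulate the byte update through a word-wide compare-and-swap. The strided mapping itself is the point of the example.
{\footnotesize
\begin{verbatim}
// Strided mapping: vertices v and v+1 land in different words, so
// concurrent atomics rarely touch the same word.
__device__ bool isSelected(const unsigned int* bitmap,
                           int numWords, int v) {
    return (bitmap[v % numWords] >> ((v / numWords) & 31)) & 1u;
}

// Atomically marks v; returns true iff this thread set the bit first.
__device__ bool tryMarkSelected(unsigned int* bitmap,
                                int numWords, int v) {
    int word = v % numWords;
    int bit  = (v / numWords) & 31;
    unsigned int old = atomicOr(&bitmap[word], 1u << bit);
    return (old & (1u << bit)) == 0;
}
\end{verbatim}
}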
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figures/bitmap.pdf}
\caption{Sampling the neighbors of $v_8$ in Fig. \ref{fig:example}(a), under: (a) contiguous bitmap and (b) strided bitmaps.
\vspace{-.1in}
}
\label{fig:bitmap}
\end{figure}
\new{
\vspace{0.05in}
\textbf{Data Structures.}
{\textsc{c-saw}} employs three major data structures: frontier queues, per-warp bitmap, and per-warp CTPS.
All these data structures are allocated in the GPU global memory before sampling starts.
A frontier queue is a structure of three arrays, $VertexID$, $InstanceID$, and $CurrDepth$, to keep track of the sampling process.
So far, all threads share one frontier queue, with a few exceptions that will be introduced in Section \ref{sect:outmem}.
Per-warp bitmaps and CTPSs are stored as arrays and get reused across the entire sampling process.
They are also located in global memory.
}
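A minimal sketch of the frontier queue layout is given below; the \texttt{size} counter and the exact types are our assumptions.
{\footnotesize
\begin{verbatim}
// One per-partition frontier queue as a structure of arrays in
// global memory.
struct FrontierQueue {
    int* VertexID;    // active frontier vertex
    int* InstanceID;  // sampling instance the vertex belongs to
    int* CurrDepth;   // current depth of that instance
    int  size;        // number of queued entries (assumed counter)
};
\end{verbatim}
}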
\begin{comment}
\subsection{Application Specific Optimizations}
\noindent \textbf{Avoiding Redundant Sampling.}
Most sampling algorithms and some random walk variations require not to explore the same vertex more than once i.e we avoid adding explored vertex to the candidate list.
In the example of Fig.~\ref{fig:example}(a), while exploring the neighbors of $v_7$ in the second iteration, it may select $v_8$, which has been added to the sample in the first iteration.
Especially, vertices with higher weights have higher chances of getting selected multiple times.
To avoid this redundant selection problem, we use a hash table to record vertices that have been selected.
Before adding any vertex to the candidate list, we first check if the vertex is present in the hash table or not
If the vertex is not present in the hash table, we add that vertex both into the candidate list and into the hash table. Particularly, we do this in two steps:
First, it locates the bin where the candidate vertex belongs to.
Second, we perform a linear search over the bin to check if the vertex was already explored for the same sample.
In this work, {\textsc{c-saw}} finds the bin size of 32 is adequate for all graphs.
\textbf{Caching Transition Probability}.
\hang{Applications that need this optimization, probability of one vertex is not changed}
We profile the vertex repetition for ten datasets listed in Table~\ref{Table-datasets}.
Through profiling , for a substantial number of vertices, the prefix sum is calculated multiple times.
Inspired by the observation, {\textsc{c-saw}} introduces caching transition probability to avoid repeated calculation.
Considering vertices with higher degrees are more susceptible to get sampled multiple times among various instances, {\textsc{c-saw}} caches the probability for higher-degree vertices.
To avoid repeated computations, we propose to cache the prefix sum of transition probability. For each vertex, we only compute the prefix sum once. When a vertex is visited for the first time, \textsc{c-saw} stores its prefix sum in the global memory for reuse. When it is visited again in any step of other sample instance, we read the pre-calculated result from the GPU memory instead of computing it again.
Essentially, we trade prefix sum computation with a memory access, which is much cheaper. We show the performance improvement from this optimization in Section\ref{sect:experiment:in-mem}.
For caching prefix sum of transition probability, we use an array to store the computed values for each edge. We also use a boolean value for each vertex as a flag to determine if the transition probability for that vertex is computed or not. After computing the prefix sum of transition probability, we update the transition probability to the array and set the flag for that vertex.
Fig.~\ref{fig:hash_compare} shows the ratio for number of comparisons made for linear search and hash based search for various datasets. Hashing reduces the overall comparison made by more the 40\% of than linear search. The space that we need to perform linear search would be greatly reduced by hashing.
\end{comment}
\section{Out-of-memory \& \new{Multi-GPU} {\textsc{c-saw}}}
\label{sect:outmem}
Sampling and random walk lift two important obstacles for out-of-memory computation: they need neither the entire graph nor synchronization during computation.
This section takes advantage of this opportunity to enable fast out-of-memory \new{and multi-GPU} {\textsc{c-saw}}.
\subsection{Graph Partition}
{\textsc{c-saw}} \new{partitions the graph by simply assigning a contiguous and equal range of vertices and all their neighbor lists to one partition.} We adopt this method instead of advanced topology-aware partition (e.g., \textsc{METIS}~\cite{karypis1995metis, karypis1998fast,guattery1995performance}) and 2-D partition~\cite{boman2013scalable}, for \new{three} reasons. First and foremost, sampling and random walk require all the edges of a vertex be present in order to compute the transition probability.
Splitting the neighbor list of any vertex, which is the case in 2-D partition, would introduce fine-grained communication between partitions, which largely hampers the performance.
Second, topology-aware partition would require extremely long preprocessing time, as well as yield discontinued vertex ranges which often lead to more overhead than benefit.
\new{Third, this simple partitioning method allows {\textsc{c-saw}} to decide \cready{which} partition a vertex belongs to in constant time, which is important for fast bulk asynchronous sampling (Fig.~\ref{fig:streaming}).}
\subsection{Workload-Aware Partition Scheduling}
\label{sect:outmem:design}
Since multiple sampling instances are independent of each other, this dimension of flexibility grants {\textsc{c-saw}} the freedom of dynamically scheduling various partitions based upon the workload from both graph partitions and workers (such as GPU kernels and devices).
\vspace{0.05in}
\textbf{Workload-Aware Partition Scheduling.}
{\textsc{c-saw}} tracks the number of frontier vertices that fall into each partition to determine which partition will offer more workload (\circled{1} in Fig. \ref{fig:streaming}). \new{We refer to them as active vertices. Based upon the count, we also allocate thread blocks to each GPU kernel following the thread block based workload balancing described in the next paragraph.}
Subsequently, the partitions that contain more workload are transferred to the GPU earlier and sampled first (\new{\circled{2}} in Fig. \ref{fig:streaming}).
Non-blocking {cudaMemcpyAsync} is used to copy partitions to the GPU memory asynchronously.
{\textsc{c-saw}} samples this partition until it has no active vertices. \new{
Note that {\textsc{c-saw}} stores frontier queues from all partitions in the GPU memory. This allows a partition to insert new vertices into its own frontier queue, as well as into the frontier queues of other partitions, to enable communication.}
The actively sampled partition is only released from the GPU memory when its frontier queue is empty.
The reason is that partitions with more active vertices often insert more neighbors into their own frontier queues, which further leads to more workload.
As a result, this design can reduce the number of partitions transferred from CPU to GPU.
When it comes to computation, we dedicate one GPU kernel to one active partition along with a CUDA stream, in order to overlap the data transfer and sampling of different active partitions.
After parallel partition sampling finishes, we count the vertex number in each frontier queue to decide which partitions should be transferred to GPU for sampling next (\circled{3} in Fig. \ref{fig:streaming}).
The entire sampling is complete when there are no active vertices in all partitions.
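A hedged host-side sketch of this loop is given below; \texttt{pickBusiest}, \texttt{blocksFor}, \texttt{sampleKernel}, and the buffer names are illustrative placeholders rather than {\textsc{c-saw}}'s actual API.
{\footnotesize
\begin{verbatim}
// One CUDA stream per resident partition overlaps transfer and
// sampling; nSlots partitions fit in GPU memory at a time.
while (anyPartitionActive(queues, nParts)) {
    for (int s = 0; s < nSlots; ++s) {
        // (1) unscheduled partition with most active vertices, or -1
        int p = pickBusiest(queues, nParts);
        if (p < 0) break;
        // (2) stage it asynchronously and sample on its own stream
        cudaMemcpyAsync(dPart[s], hPart[p], bytes[p],
                        cudaMemcpyHostToDevice, stream[s]);
        int blocks = blocksFor(queues[p].size); // workload-aware split
        sampleKernel<<<blocks, 256, 0, stream[s]>>>(dPart[s],
                                                    queues, p);
    }
    cudaDeviceSynchronize();  // (3) re-rank partitions and repeat
}
\end{verbatim}
}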
\vspace{0.1in}
\textbf{Thread Block based Workload Balancing.}
Depending upon the properties of graphs and sample seeds, frontiers are likely not equally distributed across partitions.
As a result, the sampling and data transfer time are not the same as well.
Since the straggler kernel determines the overall execution time, it is ideal to balance the workload across kernels.
Consequently, we implicitly partition the GPU resources by controlling the thread block number of different kernels.
\textbf{Example.}
Fig.~\ref{fig:streaming} shows an example of out-of-memory sampling.
Here, we assume three graph partitions (i.e., P$_1$, P$_2$, P$_3$) for \new{the same graph in Fig.~\ref{fig:example}(a)}, two GPU kernels (i.e., Kernel$_1$ and Kernel$_2$), and the GPU memory can contain two active partitions.
\new{If we start sampling from vertices \{0, 2, 8\}}, P$_1$, P$_2$, and P$_3$\ will have \new{2, 0, and 1} active vertices initially.
Hence, kernel K$_1$ is assigned to work on \new{P$_1$} and kernel K$_2$ for P$_3$. To balance the workload, the ratio of thread block numbers assigned to K$_1$ and K$_2$ is set to \new{2:1}.
\new{Assuming vertices 0, 2, and 8 pick 7, 3, and 5, respectively, the frontier queues for P$_1$, P$_2$ and P$_3$ become \{3\}, \{7, 5\} and \{$\phi$\} as shown in bottom right of Fig.~\ref{fig:streaming}. Subsequently, K$_2$ exits because P$_3$'s frontier queue is empty, while K$_1$ continues sampling 3 and puts 4 into the frontier queue of P$_2$.
Then, K$_1$ also exits and leaves \{7, 5, 4\} in the frontier queue of P$_2$ to be scheduled next.}
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{Figures/Streaming_example.pdf}
\caption{Workload-aware scheduling of graph partition. \new{The upper part shows the toy graph and its partition. We start sampling within partitions 1 and 3. The lower part shows an example for out-of-memory sampling. For simplicity, we hide InstanceID and CurrDepth from the frontier queue. }
\vspace{-.1in}
}
\label{fig:streaming}
\end{figure}
\vspace{0.05in}
\new{\textbf{Correctness.}
The out-of-order nature of the workload-aware partition scheduling does not impact the correctness of {\textsc{c-saw}}.
With out-of-order scheduling, the sampling of one instance is not in the breadth-first order as in the in-order case.
The actual sampling order can be considered as a combination of breadth-first and depth-first orders.
However, since we keep track of the depth of sampled vertices to prevent an instance from reaching beyond the desired depth, the sampling result will be the same as if it were done in the breadth-first order.
}
\subsection{Batched Multi-Instance Sampling}
\label{sect:multi:balance}
In the out-of-memory setting, {\textsc{c-saw}} introduces {\em batched multi-instance sampling}, which \textit{concurrently} samples multiple instances, to combat the expensive data transferring cost.
Batched sampling is implemented by combining the active vertices of various concurrently sampling instances into a single frontier queue for each partition.
Along with the queue, we need to keep two extra metadata fields for each vertex, i.e., $InstanceID$ \new{and $CurrDepth$: the former tracks the instance that a vertex belongs to, and the latter stores the current depth of that instance.}
During sampling, a thread warp in the kernel can work on any vertex in the queue, no matter whether they are from the same or different instances.
After it finishes selecting vertices (line 6 in Fig. \ref{fig:api}(b)), $InstanceID$ is used to find the corresponding frontier pool and sampled graph to update (line 7-8).
Note that there may exist multiple copies of the same vertex in the queue, because a common vertex can be sampled by multiple instances.
\begin{figure*}[t]
\centering
\subfloat[{\textsc{c-saw}} vs. KnightKing on biased random walk.]{
\includegraphics[width=.495\linewidth]{Figures/knightking.pdf}
}%
\subfloat[{\textsc{c-saw}} vs. GraphSAINT on multi-dimensional random walk.]{
\includegraphics[width=.475\linewidth]{Figures/graphsaint.pdf}
}
\caption{\new{{\textsc{c-saw}} vs. the state-of-the-art in million sampled edges per second with 1 GPU and 6 GPUs (higher is better).}
}
\label{fig:stateofart}
\end{figure*}
Batched sampling can also balance the workload across sampling instances. If we instead sampled various instances separately,
some instances would encounter high-degree vertices more often and thus carry more workload, since many real-world graphs hold highly skewed degree distributions. This would end up with skewed workload distributions.
Batched sampling solves this problem using a vertex-grained workload distribution, instead of instance-grained distribution.
\new{
\subsection{Multi-GPU {\textsc{c-saw}}}
As the number of sources continues to grow, the workload will saturate one GPU and eventually exceed it. In this context, scaling {\textsc{c-saw}} to multiple GPUs helps accelerate the sampling performance.
Since various sampling instances are independent from each other, {\textsc{c-saw}} simply divides all the sampling instances into several disjoint groups, each of which contains an equal number of instances. Here, the number of disjoint groups is the same as the number of GPUs. Afterwards, each GPU is responsible for one sampling group. During sampling, each GPU performs the same tasks as shown in Fig.~\ref{fig:streaming} and no inter-GPU communication is required.
}
\vspace{0.1in}
\section{Evaluations}
\label{sect:experiment}
{\textsc{c-saw}} is {implemented in $\sim$4,000} lines of CUDA code and compiled by CUDA Toolkit 10.1.243 and g++ 7.4.0
with the -O3 optimization flag.
We evaluate {\textsc{c-saw}} on the Summit supercomputer of Oak Ridge National Laboratory~\cite{ornl_summit}.
Each Summit node is equipped with 6 NVIDIA Tesla V100 GPUs, dual-socket 22-core POWER9 CPUs and 512 GB main memory. Particularly, each V100 GPU is equipped with 16GB device memory.
For the random number generation, we use the cuRAND library~\cite{tian2009mersenne}.
\begin{table}[!h]
{\scriptsize
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Dataset & Abbr. & \begin{tabular}[c]{@{}l@{}}Vertex \\ Count\end{tabular} & \begin{tabular}[c]{@{}l@{}}Edge \\ Count\end{tabular} & \begin{tabular}[c]{@{}l@{}}Avg.\\ degree\end{tabular} & \new{\begin{tabular}[c]{@{}l@{}}Size\\ (of CSR)\end{tabular}} \\ \hline
Amazon0601~\cite{snapnets} & AM & 0.4M & 3.4M & 8.39 &\new{59 MB} \\ \hline
As-skitter \cite{snapnets} & AS & 1.7M & 11.1M & 6.54 &\new{325 MB}\\ \hline
cit-Patents \cite{snapnets} & CP & 3.8M & 16.5M & 4.38 &\new{293 MB}\\ \hline
LiveJournal \cite{snapnets} & LJ & 4.8M & 68.9M & 14.23 & \new{1.1 GB}\\ \hline
Orkut \cite{snapnets} & OR & 3.1M & 117.2M & 38.14 & \new{1.8 GB}\\ \hline
Reddit \cite{zeng2019graphsaint,yelpreddit} &RE &0.2M &11.6M & 49.82 & \new{179 MB}\\ \hline
web-Google \cite{snapnets} & WG & 0.8M & 5.1M & 5.83 & \new{85 MB}\\ \hline
Yelp \cite{zeng2019graphsaint,yelpreddit} &YE &0.7M &6.9M & 9.73& \new{111 MB}\\ \hline \hline
Friendster \cite{snapnets} & FR & 65.6M & 1.8B & 27.53 &\new{29 GB} \\ \hline
Twitter \cite{konect:2017:twitter} & TW & 41.6M & 1.5B & 35.25 &\new{22 GB}\\ \hline
\end{tabular}
}
\caption{Details of evaluated graphs.}\label{Table-datasets}
\end{table}
\textbf{Dataset.}
We use the graph datasets in Table~\ref{Table-datasets} to study {\textsc{c-saw}}. This dataset collection contains a wide range of applications, such as social networks (LJ, OR, FR and TW), forum discussion (RE and YE), online shopping (AM), citation networks (CP), computer routing (AS) and web page (WG).
\textbf{Metrics.}
Instead of Traversed Edges Per Second (TEPS) in classical graph analytics~\cite{liu2015enterprise,wang2016gunrock}, we introduce a new metric - Sampled Edges Per Second (SEPS) - to evaluate the performance of sampling and random walk. Formally, SEPS = $\frac{\#~\text{SampledEdges}}{\text{Time}}$. This metric is more suitable than TEPS to evaluate sampling and random walk because these algorithms might use different methods thus traverse a different number of edges but end up with the same number of sampled edges.
\new{Similar to previous work \cite{wang2016gunrock,liu2015enterprise}, the kernel execution time is used to compute SEPS, i.e., the time spent on generating the samples}, except for the out-of-memory case that also includes the time for transferring the partitions. Note, each reported result is an average of three runs with different sets of seeds.
\textbf{Test Setup.}
Analogous to GraphSAINT~\cite{zeng2019graphsaint}, we generate \new{4,000} \new{instances} for random walk algorithms and 2,000 instances for sampling algorithms.
For sampling, both the \new{\textit{NeighborSize}} (i.e., the number of neighbors sampled from one frontier) and $Depth$ are 2 when analyzing the performance of {\textsc{c-saw}}, except for forest fire, which uses $P_{f} = 0.7$ to derive the \new{\textit{NeighborSize}} as in~\cite{leskovec2006sampling}. For the \new{biased random walk algorithm}, the length of the walk is \new{2,000}. \new{For multi-dimensional random walk, similar to GraphSAINT, we use \new{2,000} as the \textit{FrontierSize} for each instance.}
\subsection{{\textsc{c-saw}} vs. State-of-the-art}
\label{sect:experiment:in-mem}
\vspace{-0.03in}
First, we compare {\textsc{c-saw}} against the state-of-the-art frameworks, KnightKing and GraphSAINT. \new{Our profiling result shows that both GraphSAINT and KnightKing use multiple threads to perform the computation, where the \# threads = \# cores.}
Since KnightKing only supports random walk variations, we compare {\textsc{c-saw}} with KnightKing for \new{biased random walk}.
\new{GraphSAINT provides both Python and C++ implementations. We choose the C++ implementation~\cite{zeng2019accurate}
which exhibits better performance. \cready{The} C++ version only supports multi-dimensional random walk which is studied in Fig.~\ref{fig:stateofart}(b)}.
As shown in Fig.~\ref{fig:stateofart}, {\textsc{c-saw}} presents superior performance over both projects. On average, {\textsc{c-saw}} is \new{{10}$\times$ and {14.7}$\times$} faster than KnightKing with 1 GPU and 6 GPUs, respectively. Compared to GraphSAINT, {\textsc{c-saw}} is \new{{8.1}$\times$ and {11.5}$\times$ faster with 1 GPU and 6 GPUs, respectively. Each instance of sampled graphs has 1,703 edges on average.}
While {\textsc{c-saw}} outperforms both projects across all graphs, we generally observe better speedups on graphs with a lower average degree, such as AM, CP and WG on KnightKing and AM on GraphSAINT. This is rooted in \new{1) the superior computing capability of GPU over CPU}, 2) the fact that {\textsc{c-saw}} is free of \cready{bulk synchronous parallelism (BSP)}~\cite{malewicz2010pregel}, which allows it to always have adequate computing tasks for sparse graphs, and 3) the unprecedented bandwidth of the V100 GPU over the {POWER9} CPU, i.e., 900 GB/s vs. {170 GB/s} \cite{ornl_summit}.
This underscores the need of GPU-based sampling and random walk.
\subsection{In-memory Optimization}
\begin{figure}[ht]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_opt_NSb.pdf}
}
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_opt_FF.pdf}
}\\
\subfloat[\new{Layer sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_opt_LS.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_opt_NS.pdf}
}
\caption{\new{Performance impacts of in-memory optimizations for various sampling algorithms. \vspace{-0.1in}
}
}
\label{fig:in_memory}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.01in}\includegraphics[width=.49\linewidth]{Figures/in_prof_brs_NSb.pdf}
}
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.01in}\includegraphics[width=.46\linewidth]{Figures/in_prof_brs_FF.pdf}
}
\\
\subfloat[\new{Layer sampling.}]{
\hspace{-.01in}\includegraphics[width=.49\linewidth]{Figures/in_prof_brs_LS.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.01in}\includegraphics[width=.46\linewidth]{Figures/in_prof_brs_NS.pdf}
}
\caption{Average \# iteration w/ and w/o \new{bipartite region search} for various algorithms.
\vspace{-0.1in}
}
\label{fig:profile_bipartite}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_prof_bit_NSb.pdf}
}
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_prof_bit_FF.pdf}
}
\\
\subfloat[\new{Layer sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_prof_bit_LS.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_prof_bit_NS.pdf}
}
\caption{\new{Total search reduction by bitmap for various algorithms.}
}
\label{fig:profile_bitmap}
\end{figure}
Fig.~\ref{fig:in_memory} studies the performance impacts of \new{bipartite region search} and bitmap optimizations over repeated sampling (Fig.~\ref{fig:brs}(a)) and updated sampling (Fig.~\ref{fig:brs}(b)) across four applications, which include both biased and unbiased algorithms.
Repeated sampling is used as the performance baseline for comparison.
\new{FR and TW are not studied in this subsection because they exceed the GPU memory capacity.}
Particularly, \new{bipartite region search} introduces, on average, {1.7}$\times$, {1.4}$\times$, {1.7}$\times$ and {1.17}$\times$ speedups on \new{biased neighbor sampling, forest fire sampling, layer sampling, and unbiased neighbor sampling}, respectively. \new{Bipartite region search} presents better performance than both repeated sampling and updated sampling.
Bitmap further improves the speedups to {1.8}$\times$, {1.5}$\times$, {1.8}$\times$, and {1.28}$\times$ on these four applications, respectively. The performance for AM, CP, and WG highlights the effectiveness of {\textsc{c-saw}}: with a lower average vertex degree, they suffer from more selection collisions, and \new{bipartite region search} achieves better speedups by mitigating those collisions.
Fig.~\ref{fig:profile_bipartite} and~\ref{fig:profile_bitmap} further profile the effectiveness of our two optimizations. On average, \new{bipartite region search} reduces the average number of iterations to pick a neighbor by {5.0}$\times$, {1.5}$\times$, {1.8}$\times$, and {1.7}$\times$ for these four applications, respectively.
\new{
Here, \#~iterations refers to the trip count of do-while loop in Fig.~\ref{fig:select} (line 10-14), which represents the amount of computation used to select a vertex.
For analysis, we compare the average number of iterations for all sampled vertices, i.e., $\frac{Total~\#~\text{iterations of sampled vertices}}{\#~\text{sampled vertices}}$.}
We observe a larger \new{reduction in \#~iterations} for \new{biased neighbor sampling} than for the other algorithms, as it has a higher selection collision chance and thus requires more iterations without \new{bipartite region search}.
With relatively larger neighbor pools, collision is less likely to happen in \new{layer sampling} which explains its lower benefits from bipartite region search.
Similarly, \new{unbiased neighbor sampling} and \new{forest fire sampling} incur less collision due to unbiased sampling.
Fig.~\ref{fig:profile_bitmap} shows the effectiveness of bitmap over the baseline which stores the sampled vertices in the GPU shared memory and performs a linear search to detect collision. \new{The ratio metric in Fig.~\ref{fig:profile_bitmap} compares the total number of searches performed by bitmap with that of baseline, i.e., $\text{Ratio} = \frac{\sum\#~\text{searches in bitmap}}{\sum\#~\text{searches in baseline}}$.} Compared to baseline, bitmap reduces the total searches by
{{63}\%, {83}\%, {71}\%, and {81}\%} for these four applications, respectively.
Despite the significant search count reduction from bitmap, \cready{the} overhead of atomic operations prevents us from achieving speedups proportional to the search count reduction.
\subsection{Out-of-memory Optimization}
\begin{figure}[t]
\centering
\subfloat[\new{Biased neighbor sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_opt_NSb.pdf}
}
\subfloat[\new{Biased random walk.} ]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_opt_RW.pdf}
}\\
\subfloat[\new{Forest fire sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_opt_FF.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.} ]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_opt_NS.pdf}
}
\caption{Performance impacts of out-of-memory optimizations. Here,\new{ baseline implementation refers to partition transfer based on active partition without any optimization}.
\vspace{-0.15in}
}
\label{fig:out_memory}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_bal_NSb.pdf}
}
\subfloat[\new{Biased random walk.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_bal_RW.pdf}
} \\
\subfloat[\new{Forest fire sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_bal_FF.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_bal_NS.pdf}
}
\vspace{-.05in}
\caption{Standard deviation of kernel time for multi-instance batching and workload-aware balancing in out-of-memory {\textsc{c-saw}} \new{(lower is better). Here, baseline represents even distribution of resources. \vspace{-0.15in}}
}
\label{fig:profile_batched}
\end{figure}
Fig.~\ref{fig:out_memory} presents the performance impacts of \new{multi-instance} batched sampling (BA), workload-aware scheduling (WS), and thread block based workload balancing (BAL) \new{on both large graphs and small graphs. For the sake of analysis, we pretend small graphs do not fit in GPU memory.} For the experimental analysis, we use 4 partitions for each graph and two CUDA streams.
We assume the GPU memory can keep at most two partitions at the same time for all graphs.
Particularly, batched sampling introduces, on average, {2.0}$\times$, {1.9}$\times$, {2.1}$\times$, and {2.7}$\times$ speedup, respectively on \new{biased neighbor sampling, biased random walk, forest fire sampling, and unbiased neighbor sampling}. Workload-aware scheduling further introduces {3.2}$\times$, {2.8}$\times$, {3.9}$\times$, and {3.3}$\times$ speedups on these four applications, respectively. Workload balancing gives, on average, {3.5}$\times$ speedup over all applications.
\begin{figure}[ht]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_act_NSb.pdf}
}
\subfloat[\new{Biased random walk.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_act_RW.pdf}
}
\\
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_act_FF.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_act_NS.pdf}
}
\caption{Partition transfer counts for workload-aware scheduling \new{(lower is better)}.
\vspace{-0.1in}
}
\label{fig:profile_degree}
\end{figure}
Fig.~\ref{fig:profile_batched} and~\ref{fig:profile_degree} explain the effectiveness of the two optimizations. \new{We use standard deviation to measure the workload imbalance in the runtime of the two kernels over the entire sampling.} On average, multi-instance batched sampling (BA) and thread block based workload balancing (BAL) reduce the average kernel time by {27}\%, {12}\%, {23}\%, and {26}\%, respectively, on the four applications.
As active vertices increase exponentially with depth during sampling, \new{biased neighbor sampling, forest fire sampling, and unbiased neighbor sampling} observe a larger reduction in kernel time than \new{biased random walk}. Workload-aware scheduling reduces the overall partition transfers by {1.2}$\times$, {1.3}$\times$, {1.2}$\times$, and {1.1}$\times$ on these four applications, respectively. Even with this moderate decrease in partition transfers, we still achieve noticeable speedups.
\begin{figure}[ht]
\vspace{-0.25cm}
\centering
\subfloat[NeighborSize: 1 - 8.]{
\hspace{-.05in}\includegraphics[width=0.95\linewidth]{Figures/sampling_rate.pdf}
}
\\
\subfloat[\# instances: 2k - 16k.]{
\hspace{-.05in}\includegraphics[width=0.95\linewidth]{Figures/sampling_size.pdf}
}
\vspace{-0.1in}
\caption{\new{Biased neighbor sampling with (a) \textit{NeighborSize} as 1, 2, 4 and 8 and (b) \# instances as 2k, 4k, 8k and 16k.\vspace{-0.1in}}
}
\vspace{-0.05in}
\label{fig:time}
\end{figure}
\vspace{0.05in}
\subsection{Studying NeighborSize and \#~Instances in {\textsc{c-saw}}}
Fig.~\ref{fig:time} reports the impacts of various \textit{NeighborSize} values and \#~instances on time consumption. Here, we use $Depth = 3$ and 16k instances in Fig.~\ref{fig:time}(a) for extensive analysis. For Fig.~\ref{fig:time}(b), we use \textit{NeighborSize} = 8.
As shown in Fig.~\ref{fig:time}(a), a larger \textit{NeighborSize} leads to longer sampling time. The average sampling times for \textit{NeighborSize} of 1, 2, 4, and 8 are 3, 4, 7, and 14 ms, respectively.
Similarly, the increase of sampling instances, as shown in Fig.~\ref{fig:time}(b), also results in longer sampling time. The average sampling times for 2k, 4k, 8k, and 16k instances are 2, 5, 9, and 15 ms, respectively.
It is important to note that graphs with higher average degrees, i.e., TW, RE, and OR, have longer sampling time, while the impact of graph sizes on sampling time is secondary.
\begin{figure}[t]
\centering
\subfloat[2,000 instances.]{
\hspace{-.05in}\includegraphics[width=0.9\linewidth]{Figures/scaling_2k.pdf}
}
\\
\subfloat[8,000 instances.]{
\hspace{-.05in}\includegraphics[width=0.9\linewidth]{Figures/scaling_8k.pdf}
}
\caption{\new{Scaling {\textsc{c-saw}} from 1 to 6 GPUs with (a) 2,000 and (b) 8,000 instances for biased neighbor sampling.\vspace{-0.15in}}
}
\label{fig:scaling}
\end{figure}
\vspace{0.02in}
\subsection{{\textsc{c-saw}} Scalability}
Fig.~\ref{fig:scaling} scales {\textsc{c-saw}} from 1 to 6 GPUs for different number of sampling instances.
For 2,000 and 8,000 instances, we achieve {1.8}$\times$ and {5.2}$\times$ speedup with 6 GPUs, respectively. The reason is that 2,000 instances fail to saturate 6 GPUs. With 8,000 instances, we observe more workloads that lead to better scalability. We also observe that lower average degree graphs present better scalability because their workloads are better distributed across sampling instances.
\section{Related Works}
\label{sect:related}
Despite a surge of frameworks for classical graph algorithms, including think like a vertex~\cite{low2014graphlab,malewicz2010pregel}, an edge~\cite{roy2013x}, a graph~\cite{tian2013think}, an IO partition~\cite{liu2017graphene}, and Domain Specific Languages~\cite{hong2012green,zhang2018graphit}, among many others~\cite{liu2019simd,bader2006gtgraph,sundaram2015graphmat}, very few projects target graph sampling and random walk, which are the focus of {\textsc{c-saw}}.
This section discusses the closely related work from the following three aspects.\vspace{3px}
\textbf{Programming Interface.}
KnightKing~\cite{yang2019knightking} proposes a walker-centric model to support random walk~\cite{ribeiro2010estimating,li2015random}, e.g., Node2vec~\cite{grover2016node2vec,10.1145/3159652.3159706}, Deepwalk~\cite{perozzi2014deepwalk}, and PPR~\cite{ilprints750,lofgren2014fast,lin2019distributed}, and hence fails to accommodate sampling algorithms that are important for graph learning and sparsification~\cite{ribeiro2010estimating,gao2018large,chen2018fastgcn,ying2018graph,hamilton2017inductive,leskovec2006sampling,gaihre2019deanonymizing}. The same holds for~\cite{lakhotia2019parallel,chen2016general}, which also support only limited sampling/random walk algorithms.
GraphSAINT~\cite{zeng2019graphsaint,zeng2019accurate} explores three graph sampling methods, i.e., random vertex and edge sampling, and random walk based sampler, but fails to arrive at a universal framework.
\cite{tariq2017power} supports deletion based sampling algorithms~\cite{krishnamurthy2007sampling}, but this design is inefficient for large graphs where most edges need to be removed.
In this work, {\textsc{c-saw}} offers a bias-centric framework that can support both sampling and random walk algorithms, and hide the GPU programming complexity from end users.
\vspace{0.05in}
\textbf{Transition Probability Optimizations.} Existing projects often explore the following optimizations, i.e., probability pre-computation and data structure optimization. Particularly, KnightKing~\cite{yang2019knightking} pre-computes the alias table for static transition probability, and resorts to dartboard for the dynamic counterpart which is similar to~\cite{zeng2019accurate}. Interestingly, kPAR~\cite{shi2019realtime} even proposes to pre-compute random walks to expedite the process.
Since large graphs cannot afford to index the probabilities of all vertices,~\cite{lin2019distributed} only pre-computes for hub vertices and further uses hierarchical alias method, i.e., alias tree for \cready{distributed} sampling.
However, not all sampling and random walk algorithms have deterministic probabilities that support pre-computation.
{\textsc{c-saw}} finds \cready{inverse transform sampling} to be ideal for GPUs, and achieves superior performance over the state-of-the-art even when computing the probability during runtime.
\vspace{0.05in}
\textbf{Out-of-memory Processing.} GPU unified memory and partition-centric processing are viable methods for out-of-memory graph processing.
Since graph sampling is irregular, unified memory is not a suitable option~\cite{mishra2017um,li2019um}. Besides, partition-centric options~\cite{graphchi,zheng2015flashgraph,liu2017graphene,han2017graphie,chiang2019cluster} load each graph partition from either secondary storage to memory, or from CPU memory to GPU, for processing. Since prior work deals with classical graph algorithms, it needs BSP.
In contrast, {\textsc{c-saw}} takes advantage of the asynchronous nature of sampling to introduce workload-aware scheduling and batched sampling to reduce the data transfer between GPU and CPU.
\section{Conclusion}
\label{sect:conclusion}
This paper introduces {\textsc{c-saw}}, a novel, generic, and optimized GPU graph sampling framework that supports a wide range of sampling and random walk algorithms. Particularly, we introduce a novel bias-centric framework, bipartite region search, and workload-aware out-of-GPU and multi-GPU scheduling for {\textsc{c-saw}}.
Taken together, our evaluation shows that {\textsc{c-saw}} bests the state-of-the-art.
\section*{Acknowledgement}
We thank the anonymous reviewers for their helpful suggestions and feedback. This research is supported in part by the National Science Foundation CRII award No. 2000722, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Contract No. DE-AC02-05CH11231 at Lawrence Berkeley National Laboratory, and Brookhaven National Laboratory, which is operated and managed for the U.S. Department of Energy Office of Science by Brookhaven Science Associates under contract No. DE-SC0012704.
\bibliographystyle{unsrt}
\section{Introduction}
Graph is a natural format to represent relationships that are prevalent in a wide range of real-world applications, such as material/drug discovery \cite{you2018graph}, web-structure~\cite{kumar2002web}, social network~\cite{garton1997studying}, protein-protein interaction~\cite{von2002comparative}, and knowledge graphs~\cite{popping2003knowledge}, among many others.
Learning, mining, analyzing, and visualizing graphs are hence of paramount value to \cready{our society}.
However, as the size of the graph continues to grow, the complexity of handling those graphs also soars. In fact, large-scale graph analytics is deemed a grand challenge that draws a great deal of attention, as evidenced by Graph 500~\cite{murphy2010introducing} and GraphChallenge~\cite{graphchallenge}.
Fortunately, recent research efforts find that {\em graph sampling} and {\em random walk}, which significantly reduce the size of \cready{the} original graphs, can benefit learning, mining and \cready{analyzing} large graphs, by capturing the desirable graph properties~\cite{huang2018adaptive,gao2018large,chen2017stochastic}. For instance, {Zeng et al.~\cite{zeng2019accurate}}, GraphSAINT~\cite{zeng2019graphsaint}, GraphZoom~\cite{deng2019graphzoom}, Pytorch-biggraph~\cite{lerer2019pytorch} and \cready{Deep Graph Library (DGL)}~\cite{wang2019deep} manage to learn from the sampled graphs and arrive at vertex embeddings that are either similar or better than directly learning on the original gigantic graphs~\cite{deng2019graphzoom}. The Weisfeiler-Lehman Algorithm~\cite{shervashidze2011weisfeiler} exploits graph sampling to find isomorphic graphs. {Furthermore, various random walk methods are used to generate vertex ranking and embedding in a graph~\cite{perozzi2014deepwalk,page1999pagerank,grover2016node2vec,kyrola2013drunkardmob}.} Sampling and random walk can also help classical graph computing algorithms, such as BFS~\cite{korf2005large} and PageRank~\cite{page1999pagerank}.
Despite great importance,
limited efforts have been made to deploy graph sampling and random walk algorithms on GPUs, which offer tempting computing and data access capabilities and an ever-thriving community~\cite{gao2018large}.
This paper finds three major challenges that prevent this effort.
\vspace{.1in}
First, although there is a variety of platforms to accelerate traditional graph processing algorithms on GPUs~\cite{liu2015enterprise,wang2016gunrock,liu2019simd,gaihre2019xbfs}, graph sampling and random walk pose unique challenges. Unlike traditional graph algorithms, which often treat various vertices and edges similarly and focus on optimizing the operations on the vertex or edge, sampling and random walk algorithms center around how to select a subset of vertices or edges \cready{based upon a bias} (Section \ref{sect:background:ns}). Once selected, a vertex is rarely visited again.
Consequently, {\em how to efficiently select the vertices of interest, which is rarely studied by traditional algorithms, becomes the core of sampling and random walk.}
This process needs to construct and potentially update the selection \cready{bias} repeatedly, which is very expensive and hence significantly hampers the performance.
\vspace{.1in}
Second, it is difficult to arrive at a GPU-based framework for various graph sampling and random walk algorithms that address the needs of vastly different applications. Particularly, there exists a rich body of graph sampling and random walk algorithms (detailed in Section~\ref{sect:background:samplerw}); deriving the common functionalities for a framework while exposing the different needs as a user programming interface is a daunting task. \cready{Offloading} this framework onto the GPU to enjoy its unprecedented computing and bandwidth capability, while hiding the GPU programming complexity, further worsens the challenge.
\begin{table*}[t]
\new{{\fontsize{8}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Bias\\criterion\end{tabular}}} & \multicolumn{4}{c|}{\# of neighbors (NeighborSize)} \\ \cline{3-6}
\multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Per layer} & \multicolumn{2}{c|}{Per vertex} \\ \cline{3-6}
\multicolumn{2}{|c|}{} & 1 & $>1$ &Constant & Variable \\ \hline
\multicolumn{2}{|c|}{Unbiased} & \begin{tabular}[c]{@{}l@{}}Simple random walk, Metropolis-Hastings random walk,\\ random walk with jump, random walk with restart\end{tabular} & & Unbiased neighbor sampling & \begin{tabular}[c]{@{}l@{}}\new{Forest fire sampling},\\ Snowball sampling\end{tabular} \\ \hline
\multirow{2}{*}{Biased}
& Static &Biased random walk & Layer sampling &Biased neighbor sampling & \\ \cline{2-6}
& Dynamic & Multi-dimensional random walk, Node2vec & & & \\ \hline
\end{tabular}
}
}
\vspace{-.1in}
\caption{\cready{The design space of traversal based sampling and random walk algorithms.}
\vspace{-.2in}
}
\label{tbl:samplespace}
\end{table*}
\vspace{.1in}Third, an extremely large graph, which drives the need for graph sampling and random walk, usually goes beyond the size of GPU memory. While there exists an array of solutions for GPU-based large graph processing, namely, unified memory~\cite{gera2020traversing}, topology-aware partition~\cite{karypis1998fast} and vertex-range based partitions~\cite{guattery1995performance}, graph sampling and random walk algorithms, which require all the neighbors of a vertex to be present in order to compute the selection probability, exhibit stringent requirements on the partitioning methods.
In the meantime, the asynchronous and out-of-order nature of graph sampling and random walk provides some unique optimization opportunities for {\em out-of-memory sampling}, which are neither shared nor explored by traditional out-of-memory systems.
This work advocates {\textsc{c-saw}}, to the best of our knowledge, the first GPU-based framework that addresses all three aforementioned challenges and supports a wide range of sampling and random walk algorithms. Taken together, {\textsc{c-saw}} significantly outperforms state-of-the-art systems that support only a subset of sampling or random walk algorithms.
The contributions of this paper are as follows:
\vspace{.1in}
\begin{itemize}
\item We propose a generic framework which allows end users to express a large family of sampling and random walk algorithms with ease (Section \ref{sect:arch}).
\vspace{.05in}
\item We implement efficient GPU sampling with novel techniques.
Our techniques parallelize the vertex selection on GPUs, with efficient algorithm and system optimizations for vertex collision mitigation (Section~\ref{sect:single}).
\vspace{.05in}
\item We propose {asynchronous} designs for sampling and random walk, which optimize the data transfer efficiency for graphs that exceed the GPU memory capacity.
\new{We further scale {\textsc{c-saw}} to multiple GPUs} (Section \ref{sect:outmem}).
\end{itemize}
\vspace{.1in}
The remainder of this paper goes as follows: Section~\ref{sect:background} presents the background. Section~\ref{sect:arch} outlines the Application Programming Interface (API) and Sections~\ref{sect:single} and~\ref{sect:outmem} optimize {\textsc{c-saw}}. Section~\ref{sect:experiment} presents the evaluation results.
Section~\ref{sect:related} discusses the related works and Section \ref{sect:conclusion} concludes.
\section{Background}
\label{sect:background}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=\linewidth]{Figures/sample_example.pdf}
\caption{Example of graph sampling and vertex selection techniques. (a) A toy graph example to select a neighbor of $v_8$ ($v_5, v_7, v_9, v_{10}, v_{11}$), assuming the bias of a neighbor is defined as its degree. (b) Inverse Transform Sampling which does a binary search on a 1-D space to select $v_7$. (c) Dartboard method that rejects \protect\circled{1} and accepts \protect\circled{2} ($v_7$). (d) Alias method that selects $v_7$.
}
\label{fig:example}
\end{figure*}
\subsection{Graph Sampling \& Random Walk Variations}
\label{sect:background:samplerw}
\new{
This section presents the required background for various graph sampling and random walk algorithms~\cite{leskovec2006sampling}.}
Graph sampling refers to the random exploration of a graph, which results in a subgraph
of the original graph.
\new{\textbf{One Pass Sampling}
only goes through the original graph once to extract a sample.
Random node and random edge sampling belong to this category \cite{leskovec2006sampling}.
They select a subset of vertices/edges in original graph uniformly and randomly.
}
\textbf{Traversal based Sampling}
often traverses the graph in a breadth-first search manner to better preserve the properties of the original graph \cite{hu2013survey}.
Traversal based sampling follows the \textit{sampling without replacement} methodology, i.e., it avoids sampling the same vertex more than once.
As shown in Table \ref{tbl:samplespace},
traversal based sampling algorithms are categorized based upon the number of sampled neighbors, called {\textit{NeighborSize}}, and the criterion to select neighbors, which is referred to as {\em bias}.
Snowball sampling~\cite{stivala2016snowball} initiates the sample using a set of uniformly selected seed vertices.
Iteratively, it adds all neighbors of every sampled vertex into the sample, until a required depth is reached.
Neighbor sampling~\cite{neighborsampling}
samples a constant number of neighbors per vertex. The sampling could be \cready{either} biased or unbiased.
Forest fire sampling~\cite{leskovec2006sampling} can be regarded as a probabilistic version of neighbor sampling, which selects a variable number of neighbors for each vertex based on a burning probability.
Unlike neighbor and forest fire sampling, which select neighbors for each vertex independently, layer sampling \cite{gao2018large} samples a constant number of neighbors for all vertices present in the frontier in each round.
It repeats this process until a certain depth is reached.
\new{
\textbf{Random Walk} simulates a stochastic process of traversing the graph to form a path of connected vertices. The length of the path is constrained by a user-given sampling budget.
Random walk can be viewed as a special case of sampling where only one neighbor is sampled at each step, with the salient difference that random walk allows repeated appearances of a vertex while sampling does not.
Table \ref{tbl:samplespace} summarizes the design space of random walk algorithms.
}
Similar to traversal based sampling, random walk algorithms use {\em bias} to decide the probability of selecting a certain neighbor.
For unbiased simple random walk, the bias is uniform for all neighbors, i.e., every neighbor has the same chance to be selected.
Deepwalk~\cite{perozzi2014deepwalk} and Metropolis-Hastings random walk \cite{li2015random} are two examples of unbiased random walk. While Deepwalk samples neighbors uniformly, Metropolis-Hastings random walk decides either to move to the sampled neighbor or to stay at the same vertex based upon the degrees of the source and neighbor vertices.
For a biased random walk, the bias varies across neighbors.
Furthermore, depending on how to decide the bias, biased random walks are classified \cready{into} static random walks \cready{and} dynamic random walks.
For static random walk, the bias is determined by the original graph structure and does not change at runtime. Biased Deepwalk~\cite{cochez2017biased} is an example of static random walk which extends the original Deepwalk algorithm. The degree of each neighbor is used as its bias.
\new{Since a simple random walk may get stuck
locally,
random walk with jump~\cite{tzevelekas2010random}, random walk with restart~\cite{tong2006fast} and multi-independent random walk~\cite{hu2013survey} are introduced.
Particularly, random walk with jump
jumps to a random vertex under a certain probability.
Random walk with restart
jumps to a predetermined vertex.
Multi-independent random walk performs multiple instances of random walk independently.
}
For dynamic random walks, the bias depends upon the runtime states.
Node2vec~\cite{grover2016node2vec} and multi-dimensional random walk (a.k.a. frontier sampling)~\cite{ribeiro2010estimating} belong to this genre.
Node2vec is an advanced version of Deepwalk which provides more control to the random walk.
The bias of a neighbor depends upon the edge weight and its distance from the vertex explored at the preceding step.
In multi-dimensional random walk, a pool of seed vertices are selected at the beginning.
At each step, multi-dimensional random walk explores one vertex $v$ from the pool based on their degrees.
One random neighbor of $v$ is added to the pool to replace $v$.
This process repeats until a desired number of vertices are sampled.
\new{
\textbf{Summary.}
Traversal based sampling and random walk are widely used and share two core similarities: 1) they are based on graph traversal, and 2) they selectively sample vertices based on biases (detailed in Section \ref{sect:background:ns}).
Their difference is the number of sampled neighbors, as shown in Table \ref{tbl:samplespace}.
In the rest of this paper, we use {\bf graph sampling to refer to both traversal based sampling and random walk}, unless explicitly specified.
}
\subsection{Bias based Vertex Selection}
\label{sect:background:ns}
\new{This section discusses the key challenge of graph sampling:} to select vertices based on user defined biases, i.e., {\em bias based vertex selection}.
As discussed in Section \ref{sect:background:samplerw}, all sampling algorithms involve the process of picking up a subset of vertices from a candidate pool of vertices.
For \cready{unbiased graph sampling}, the selection is straightforward:
one can generate a random integer in the range of 1 to the candidate count and use it to select a vertex.
Vertex selection is more challenging for \cready{biased graph sampling}.
Given certain biases, we need to calculate the probability of selecting a certain vertex, which is called {\em transition probability}.
Theorem \ref{thm:tp} gives the formula to calculate transition probabilities from biases.
\begin{theorem}
\label{thm:tp}
Let \new{vertices $v_1, v_2, ..., v_{n}$} be the $n$ candidates, and the transition probability of $v_k$, i.e., $t_k$, be proportional to the {\em bias} $b_{k}$.
Then, one can formally obtain
\new{$t_k=\frac{b_{k}}{\sum_{i=1}^{n} b_{i}}$}.
\end{theorem}
Theorem \ref{thm:tp} underscores that \textit{bias} is the key to calculate transition probability.
All popular vertex selection algorithms -- inverse transform sampling~\cite{olver2013fast}, dartboard \cite{yang2019knightking}, and alias method~\cite{walker1977efficient,li2014reducing} -- obey this rule.
The key idea of inverse transform sampling is to generate the cumulative distribution function of the transition probability.
Fig.~\ref{fig:example}(b) shows an example.
First, inverse transform sampling computes the prefix sum of biases of candidate vertices, to get an array $S$, where \cready{$S_m = \sum_{i=1}^{m-1} b_{i}$ ($1 \le m \le n+1$), $S_1 = 0$, and $n$ = total \# of candidate vertices}.
In Fig.~\ref{fig:example}(b), $S = \{0, 3, 9, 11, 13, 15\}$.
Then $S$ is normalized using \cready{$S_{n+1}$}, to get array $F$, where \cready{$F_m = S_m/S_{n+1}\ (1 \le m \le n+1)$}.
$F = \{0, 0.2, 0.6, 0.73, 0.87, 1\}$ in Fig.~\ref{fig:example}(b).
In this way, the transition probability of $v_k$ can be derived with $F$, because
\setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt}
\setlength{\abovedisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt}
\new{{\footnotesize
\begin{equation}
\label{equ:tp}
\begin{aligned}
t_k&=\frac{b_{k}}{\sum_{i=1}^{n} b_{i}}=\frac{\sum_{i=1}^{k} b_{i}-\sum_{i=1}^{k-1} b_{i}}{\sum_{i=1}^{n} b_{i}}\\
&=\frac{S_{k+1}-S_{k}}{S_{n+1}}
=\frac{S_{k+1}}{S_{n+1}}-\frac{S_{k}}{S_{n+1}}=F_{k+1}-F_{k}.
\end{aligned}
\end{equation}
}
}
\noindent We call the array of $F$ \textbf{Cumulative Transition Probability Space (CTPS)}.
To select a neighbor, inverse transform sampling generates a random number $r$ in the range of (0,1), and employs a binary search of $r$ over the CTPS.
Assuming $r=0.5$ in Fig. \ref{fig:example}(b), it falls between \cready{$F_2 = 0.2$ and $F_3 = 0.6$.}
As a result, the \cready{second} candidate $v_7$ is selected on the CTPS.
When implemented sequentially, inverse transform sampling (ITS) has a computational complexity of $O(n)$, determined by the prefix sum calculation.
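For concreteness, the following sketch shows a minimal sequential ITS selection; it is our own illustration rather than the exact {\textsc{c-saw}} kernel, and \texttt{MAX\_CANDIDATES} is a hypothetical capacity bound.
{\footnotesize
\begin{verbatim}
#define MAX_CANDIDATES 32
// Minimal sequential sketch of inverse transform sampling.
// bias[0..n-1] holds the candidate biases; r is a uniform
// random number in (0, 1). Returns the selected candidate.
__host__ __device__ int its_select(const float *bias, int n,
                                   float r) {
  // Prefix sum: S[m] = b_1 + ... + b_m, with S[0] = 0.
  float S[MAX_CANDIDATES + 1];
  S[0] = 0.0f;
  for (int i = 0; i < n; ++i) S[i + 1] = S[i] + bias[i];
  // Binary search for the CTPS region containing r. Comparing
  // r * S[n] against S[mid] avoids normalizing S into F.
  int lo = 0, hi = n;
  while (hi - lo > 1) {
    int mid = (lo + hi) / 2;
    if (r * S[n] < S[mid]) hi = mid; else lo = mid;
  }
  return lo;  // 0-based index of the selected candidate
}
\end{verbatim}
}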
Dartboard \cite{yang2019knightking} uses 2D random numbers to select/reject vertices.
As shown in Fig.~\ref{fig:example}(c), we build a 2D board using the \cready{bias of each vertex as a bar}, and then throw a dart to the 2D board \cready{formed by two random numbers}.
If it does not hit any bar (e.g., \circled{1}), we reject the selection and throw another dart, until a bar is hit (e.g., \circled{2}).
This method may require many trials before \cready{picking up a vertex} successfully, especially for scale-free graphs where a few candidates have much larger biases than others.
Similar to dartboard, the alias method~\cite{li2014reducing} also uses a 2D \cready{board}.
To avoid rejection, the alias method converts the sparse 2D board into a dense one as shown in Fig. \ref{fig:example}(d).
It breaks down and distributes large biases across bins on the x axis,
with the guarantee that a single bin contains at most two vertices.
The drawback of alias method is its high preprocessing cost to break down and distribute biases, which is not suitable for GPUs.
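To make the rejection behavior concrete, the following is a minimal sketch of dartboard selection; it is illustrative only, and \texttt{rand01()} is a placeholder for a uniform random number generator in $(0,1)$, e.g., built on cuRAND.
{\footnotesize
\begin{verbatim}
// Sketch of dartboard selection by rejection. maxBias is the
// largest bias among the n candidates.
__device__ int dartboard_select(const float *bias, int n,
                                float maxBias) {
  while (true) {
    int   x = (int)(rand01() * n);  // pick a random column
    float y = rand01() * maxBias;   // pick a random height
    if (y < bias[x]) return x;      // dart hits the bar: accept
    // otherwise reject and throw another dart
  }
}
\end{verbatim}
}
A single candidate with a dominant bias makes most darts miss, which is exactly the skew penalty noted above; the alias method trades this retry cost for the preprocessing cost of densifying the board.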
\section{$\textsc{c-saw}$ Architecture}
\label{sect:arch}
\vspace{0.1in}
\subsection{Motivation}
\label{sect:background:moti}
\new{
\textbf{Need for Generic Sampling Framework.}
After sifting across numerous graph analytical frameworks (detailed in Section~\ref{sect:related}),
we find the need for a new framework for graph sampling, because
\textit{sampling algorithms pose distinct needs on both the framework design and APIs}.
For framework design, several sampling algorithms, e.g., layer sampling, require information beyond a vertex and its neighbors for computing, which poses hardship for traditional vertex-centric frameworks that limit the view of a user to a vertex and its 1-hop neighbors.
When it comes to API design, \textit{\textbf{bias} is the essence of sampling and random walk}.
In comparison, traditional graph frameworks focus upon the operators that alter the information on an edge or a vertex, e.g., minimum operator in single source shortest path. We also notice recent attempts, e.g., KnightKing~\cite{yang2019knightking} and GraphSAINT~\cite{zeng2019graphsaint}, but they cannot support both sampling and random walk algorithms.}
\textbf{Need for Sampling and Random Walk on GPUs.}
For sampling, short turnaround time is the key.
It is also the root cause of the invention of sampling~\cite{zeng2019accurate,lofgren2014fast}.
The good news is that GPU is a proven vehicle to drive an array of graph algorithms beyond their performance ceiling~\cite{liu2015enterprise,liu2016ibfs,wang2016gunrock,liu2019simd,gaihre2019xbfs,pandey2019h,bisson2017high}, thanks to the unprecedented computing capability and memory bandwidth~\cite{keckler2011gpus}. When it comes to sampling, which is much more random than traditional graph algorithms, GPUs will best CPUs by even larger margins because the extreme randomness renders the large caches of CPUs useless.
\subsection{{\textsc{c-saw}}: A Bias-Centric Sampling Framework}
\label{sect:arch:overview}
{\textsc{c-saw}} offloads sampling and random walk on GPUs with the goal of a \textit{simple} and \textit{expressive} API and a \textit{high performance} framework.
Particularly, \textit{simple} means the end users can program {\textsc{c-saw}} without knowing the GPU programming syntax.
\textit{Expressiveness} requires {\textsc{c-saw}} to not only support the known sampling algorithms discussed in Section~\ref{sect:background:samplerw}, but also prepare to support emerging ones.
\textit{High performance} targets the framework design.
That is, the programming simplicity does not prevent {\textsc{c-saw}} from exploring major GPU and sampling related optimizations.
\begin{figure}[t]
\centering
\includegraphics[width=0.89\textwidth]{Figures/API_hang.pdf}
\caption{\new{\textsc{c-saw}~framework and API functions.}
}
\label{fig:api}
\end{figure}
{\textsc{c-saw}} encompasses two types of user involvements, i.e., parameter and API based options. The parameter-based option only needs a choice from the end users thus is simple, e.g., deciding the number of selected frontier vertices ($FrontierSize$ in line 4 of Fig. \ref{fig:api}(b)) and neighbors ($NeighborSize$ in line 6).
API based involvement, in contrast, provides more expressiveness to users. Particularly, {\textsc{c-saw}} offers three user defined API functions \cready{as shown in Fig.~\ref{fig:api}(a)}, \cready{most} of which surround bias, that is, \textsc{VertexBias}, \textsc{EdgeBias}, and \textsc{Update}.
We will discuss the design details of these API functions in Section \ref{sect:arch:api}.
Fig. \ref{fig:api}(b) gives an overview of the {\textsc{c-saw}} algorithm.
Particularly, bias based vertex selection occurs in two places: to select frontier vertices from a pool (line 4), and to select the neighbors of frontier vertices (line 6).
While the latter case is required by all graph sampling algorithms, the former becomes essential when users want to introduce more randomness, such as in \new{multi-dimensional random walk}.
\begin{figure}[t]
\centering
\includegraphics[width=0.89\textwidth]{Figures/rw_ns.pdf}
\vspace{-.1in}
\caption{\new{Implementing two sampling algorithms with \textsc{c-saw}~API.}
}
\label{fig:api_example}
\end{figure}
In the beginning, the frontier \cready{$FrontierPool$} is initialized with a set of seed vertices (line 2).
Sampling starts from these seeds until reaching the desired depth (line 3).
In each iteration of the while loop, first, \textbf{\textsc{VertexBias}} is called on the \cready{$FrontierPool$} to retrieve the bias for each candidate vertex.
\textsc{Select} method uses the biases provided by \textbf{\textsc{VertexBias}} to choose $FrontierSize$ vertices as the current frontier (line 4).
Next, all neighbors of the frontier vertices are collected in the $NeighborPool$ using the \textsc{GatherNeighbors} method (line 5).
For these neighbors, we first define their biases using the \textbf{\textsc{EdgeBias}} method.
Similarly, \textsc{Select} method uses the biases to choose $NeighborSize$ neighbors from the $NeighborPool$ (line 6).
From the selected neighbors, \textbf{\textsc{Update}} is used to \cready{pick} new vertices for the \cready{$FrontierPool$} (line 7).
The selected neighbors are also added to the final sample \cready{list $Sampled$} (line 8) before we move forward to the next iteration.
\subsection{{\textsc{c-saw}} API}
\label{sect:arch:api}
\textbf{\textsc{VertexBias}} defines the bias associated with a candidate vertex of the \cready{$FrontierPool$}.
We often use the pertinent property of vertex to derive the bias. Equation~(\ref{equation:vertexbias}) formally defines the bias for each vertex $v$ in the \cready{$FrontierPool$}.
\new{We apply function $f_{vBias}$ over the property of $v$ to define the associated bias.}
\new{\begin{equation}
\label{equation:vertexbias}
\textbf{\textsc{VertexBias}}\underset{v~{\in}~FrontierPool}{\longleftarrow} f_{vBias} (v).
\end{equation}}
Using multi-dimensional random walk as an example, it uses the vertex degree as a bias for the vertex of interest.
\textbf{\textsc{EdgeBias}} defines the bias of each neighbor in the \cready{\textit{NeighborPool}}.
It is named as {\textsc{EdgeBias}} because every neighboring vertex is associated with an edge.
While, again, any static or dynamic bias is applicable, a typical bias is induced from the properties of \cready{the associated edge}.
Equation~(\ref{equation:edgebias}) defines \textsc{EdgeBias} formally.
Let $v$ be the source vertex of $u$.
\new{Assuming edge $e=(v, u)$ carries the essential properties of $v$, $u$ and $e$, we arrive at the following edge bias:
\begin{equation}
\label{equation:edgebias}
\textbf{\textsc{EdgeBias}} \underset{e~{\in}~NeighborPool}{\longleftarrow} f_{eBias} (e).
\end{equation}}
\textbf{\textsc{Update}} decides the vertex that should be added to the \cready{$FrontierPool$} based on the sampled neighbors.
It can return any vertex to provide maximum flexibility.
For instance, this method can be used to filter out vertices that have been visited before for most traversal based sampling algorithms.
Whereas for random walk, this method can be used to implement the jump or restart action in random walk with jump and random walk with restart, respectively. Equation~(\ref{equation:update}) quantifies this method, \new{where we decide whether to add the sampled vertex $u$, a neighbor of frontier $v$ from edge $e$, into the \cready{\textit{FrontierPool}} based upon the properties of $e$ and its endpoints:
\begin{equation}
\label{equation:update}
FrontierPool \underset{}{\longleftarrow} \textbf{\textsc{Update}}(e).
\end{equation}}
\vspace{-.1in}
\subsection{Case Study}
\label{sect:arch:case}
{\textsc{c-saw}} can support all graph sampling and random walk algorithms introduced in Section \ref{sect:background:samplerw}.
Fig.~\ref{fig:api_example} exhibits how to use {\textsc{c-saw}} to implement two popular algorithms: Node2vec and \new{multi-dimensional random walk.}
Without loss of generality, we use the simplest example, i.e., \new{multi-dimensional random walk} to illustrate how {\textsc{c-saw}} works, as shown in Fig.~\ref{fig:layer_example}.
$FrontierSize$ and $NeighborSize$ are set as \new{3 and 1 respectively}.
{\textsc{VertexBias}} is based on the degree of vertices in the frontier pool in \new{multi-dimensional random walk}.
{\textsc{EdgeBias}} returns 1, resulting in the same transition probability for every neighbor.
{\textsc{Update}} always adds the currently sampled neighbor to the FrontierPool.
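To give a feel for the programming effort, the snippet below sketches how these three callbacks might look in user code; the types \texttt{Graph}, \texttt{Edge}, and \texttt{VertexID} and the exact signatures are illustrative assumptions rather than the literal {\textsc{c-saw}} API.
{\footnotesize
\begin{verbatim}
// Illustrative callbacks for multi-dimensional random walk.
// Types and signatures are assumed for this sketch.
__device__ float VertexBias(VertexID v, const Graph &g) {
  return (float)g.degree(v);  // frontier bias = vertex degree
}
__device__ float EdgeBias(Edge e, const Graph &g) {
  return 1.0f;                // uniform bias over all neighbors
}
__device__ VertexID Update(Edge e, const Graph &g) {
  return e.dst;               // sampled neighbor enters the pool
}
\end{verbatim}
}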
\begin{figure}[t]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=3.8cm}}]{figure}[\FBwidth]
{
\caption{A \new{multi-dimensional random walk} example. Assuming \{$v_8$, $v_0$, $v_3$\} in FrontierPool$_t$, we use \textsc{VertexBias} to select $v_8$ as the sampling frontier at iteration $t$. Based on \textsc{EdgeBias} in Fig.~\ref{fig:api_example}(d), we select $v_7$, and put it in sampled edges array. According to \textsc{Update}, {\textsc{c-saw}} further puts $v_7$ in FrontierPool$_{t+1}$ as \{$v_0$, $v_3$, $v_7$\}. Similar process continues until {\textsc{c-saw}} gathers adequate samples.\vspace{-.01in}
}
\label{fig:layer_example}
}
{
\includegraphics[width=.95\linewidth]{Figures/MDRW.pdf}
}
\end{figure}
\section{Optimizing GPU Sampling}
\label{sect:single}
Fig. \ref{fig:api}(b) has shown the overall algorithm of \textsc{c-saw}.
In this section, we discuss how to implement this algorithm efficiently on GPUs.
We will discuss our general strategies to parallelize the \textsc{Select} function on GPUs (Section \ref{sect:single:ves}) and how to address the conflict when multiple GPU threads select the same vertex (Section \ref{sect:single:bitmap}).
\subsection{{Warp-Centric Parallel Selection}}
\label{sect:single:ves}
The core part of the \textsc{c-saw}~algorithm is to {\em select} a subset of vertices from a pool (lines 4 and 6 in Fig. \ref{fig:api}(b)).
As discussed in Section~\ref{sect:background:ns}, several algorithms have been proposed in this regard.
In this paper, we adopt inverse transform sampling \cite{olver2013fast} for GPU vertex selection, because 1) it allows to calculate transition probabilities with flexible and dynamic biases, and 2) it shows more regular control flow which is friendly to GPU execution.
Fig. \ref{fig:select} illustrates the \textsc{Select} algorithm using inverse transform sampling.
We aim to have an efficient GPU implementation of it.
\textbf{Inter-warp Parallelism.}
{Each thread warp, whether within the same or different thread blocks, is assigned to sample a vertex in the \cready{$FrontierPool$}}.
To fully saturate GPU resources, thousands of \cready{candidate vertices need to be sampled concurrently.}
There are two sources of them.
First of all, many sampling algorithms naturally \cready{sample all vertices in $FrontierPool$ concurrently.}
For instance, \cready{neighbor sampling allows all vertices in $FrontierPool$ to be sampled concurrently and} requires a separate \cready{$NeighborPool$} for each vertex in the \cready{$FrontierPool$.}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{Figures/select.pdf}
\vspace{-.05in}
\caption{\new{The unoptimized implementation of \textsc{Select} function}.
\vspace{-.02in}}
\label{fig:select}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{Figures/brs_hang.pdf}
\caption{\new{Assuming $v_7$ is already selected (dotted line in CTPS): (a) naive repeated sampling on the original CTPS, (b) updated sampling on the recalculated CTPS, and (c) our bipartite region search approach.}
\vspace{-.02in}
}
\label{fig:brs}
\end{figure*}
\cready{Second}, most sampling applications including {Graph Convolutional Network} (GCN)~\cite{kipf2016semi}, Deepwalk, Node2vec, and {Personalized PageRank} (PPR)~\cite{lofgren2014fast}, need to launch many instances of sampling either from the same seeds or different seeds. \new{Here, an \textit{instance} generates one sampled graph from the original graph. Particularly, for all algorithms except multi-dimensional random walk, an instance starts with one source vertex. For multi-dimensional random walk, an instance has multiple source vertices, which collectively generate one sampled graph.}
Applications like GCN require multiple sample instances for training the model~\cite{gao2018large,zeng2019graphsaint,chen2018fastgcn}, while Deepwalk, Node2vec, and PPR require multi-source random walk to either generate vertex embeddings or estimate PPR~\cite{alamgir2010multi, perozzi2014deepwalk, grover2016node2vec}.
With thousands of concurrent instances, {\textsc{c-saw}} is able to leverage the full computing power of GPU.
Since the inter-warp parallelism is straightforward to implement, we focus on exploiting the intra-warp parallelism for {\textsc{c-saw}}.
\textbf{Intra-warp Parallelism.}
A thread warp is used to execute one instance of \textsc{Select} on a pool of vertices.
An obvious alternative is to use a thread block. Most real world graphs follow power-law degree distribution, i.e., the majority of the vertices in the graph have very few edges. Using a thread block for a neighbor pool will fail to saturate the resource.
Our evaluation shows that using thread warps achieves $\sim 2 \times$ speedup compared with using thread blocks.
Thus we choose to use thread warps to exploit the parallelism within \textsc{Select}.
As shown in Fig. \ref{fig:select}, first, \textsc{Select} calculates the prefix sum of the biases of all vertices (line {6}).
Fortunately, parallel prefix sum is a well-studied area on GPUs.
In this paper, we adopt the Kogge-Stone algorithm\cready{~\cite{merrill2009parallel}}, which presents superior performance for warp-level prefix sums, where all threads execute in lockstep.
The normalization of prefix sums (line {7}) can be naturally parallelized by distributing the division of different array elements across threads.
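As an illustration of the warp-level scan, the following is a textbook Kogge-Stone formulation written with CUDA warp shuffles; it conveys the idea but is not necessarily the exact {\textsc{c-saw}} kernel.
{\footnotesize
\begin{verbatim}
// Warp-level Kogge-Stone inclusive prefix sum. Each of the 32
// lanes contributes one bias; lanes execute in lockstep, so no
// explicit synchronization is needed.
__device__ float warp_inclusive_scan(float x) {
  const unsigned mask = 0xffffffffu;
  int lane = threadIdx.x & 31;
  for (int d = 1; d < 32; d <<= 1) {
    float y = __shfl_up_sync(mask, x, d);
    if (lane >= d) x += y;  // lanes below d keep their value
  }
  return x;  // lane i now holds b_0 + ... + b_i
}
\end{verbatim}
}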
To parallelize the vertex selection loop (line {10-14}),
{\textsc{c-saw}} dedicates one thread for each vertex selection to maximize the parallelism.
For each loop iteration, a random number is generated to select one vertex, as introduced in Section \ref{sect:background:ns}.
However, this creates a crucial challenge that different threads may select the same vertex, i.e., {\em selection collision}.
\subsection{Mitigating Selection Collision}
\label{sect:single:bitmap}
To mitigate the aforementioned {\em selection collision},
we propose two interesting solutions: bipartite region search, and bitmap based collision detection. Before introducing our new design, we first discuss naive solutions.
\vspace{0.05in}
\textbf{Naive Solutions.}
A naive solution is to have a do-while loop (line {10-14} in Fig. \ref{fig:select}) to re-select another one until success, i.e., {\em repeated sampling}.
However, many iterations may be needed to make a successful selection.
As shown in Fig.~\ref{fig:brs}(a), if the region of $v_7$ (i.e., 0.2 - 0.6 in CTPS) is already selected, our newly generated random number 0.58 will not lead to a successful selection.
In fact, our evaluation observes that this method suffers for scale-free graphs, whose transition probability can be highly skewed, or when a large fraction of the candidates needs to be selected, i.e., a larger $NeighborSize$.
Another solution is to recalculate the CTPS by excluding the already selected vertices, i.e., {\em updated sampling}, such as Fig.~\ref{fig:brs}(b).
Then we can always pick unselected vertices by searching through the updated CTPS.
Particularly in Fig.~\ref{fig:brs}(b), we will perform another Kogge-Stone prefix-sum for the new bias array \{3, 2, 2, 2\} towards \{0, 3, 5, 7, 9\}. Consequently, the CTPS becomes \{0, 0.33, 0.56, 0.78, 1\}. Then, the random number $r=0.58$ selects $v_{10}$.
Recalculating prefix sum is, however, time consuming.
\vspace{0.05in}
\textbf{Bipartite Region Search} inherits the advantages of both repeated and updated sampling, while avoiding their drawbacks.
That is, it {\em does not need the expensive CTPS update} compared with updated sampling, while greatly {\em improving the chance of successful selection} compared with repeated sampling.
Particularly, while updated sampling updates the CTPS without changing the random number as shown in Fig. \ref{fig:brs}(b), the key idea of bipartite region search is to adjust the random number $r$ so that the CTPS remains intact and can be reused.
Most importantly, bipartite region search warrants that its random number adjustment leads to the same selections as updated sampling.
Note, this method is called bipartite region search because when the random number selects an already selected vertex, bipartite region search searches either the right or the left side of the already selected region in CTPS.
Below, we discuss this adjustment.
\vspace{0.05in}
{\footnotesize
\centering
\noindent\fbox{%
\parbox{0.95\linewidth}{%
\noindent \circled{1} Generate a random number $r'$ $(0 \leq r' < 1)$.
\noindent \circled{2} Use $r'$ to select a vertex in CTPS. If the vertex has not been selected, done. Otherwise, the region that $r'$ falls into corresponds to a pre-selected vertex. Assume the boundary of this region in CTPS is $(l, h)$. Go to \circled{3}.
\noindent {\circled{3} Let $\lambda = 1/(1 - (h - l))$, $\delta = h - l$ and update $r$ to $r'/\lambda$. If $r < l$, select $(0, l)$ and go to \circled{4}.
Otherwise select $(h, 1)$ and go to \circled{5}.}
\noindent {\circled{4} Use the updated $r$ to search in $(0, l)$. If updated $r$ falls in another selected region, go to \circled{1}. Otherwise done.}
\noindent {\circled{5} Further update $r$ to $r + \delta$ and search in $(h, 1)$. If updated $r$ falls in another selected region, go to \circled{1}. Otherwise done.}
}
}
\par}
\vspace{0.05in}
{Fig.~\ref{fig:brs}(c) explains how bipartite region search works for the same example in Fig.~\ref{fig:brs}(b).
Assuming we get a random number $r'=0.58$, it corresponds to $v_7$ in the original CTPS.
Since $v_7$ is already selected, bipartite region search will adjust this random number to 0.348 in \circled{3}.
Since the updated $r = 0.348 > l = 0.2$, bipartite region search selects $(0.6, 1)$ to explore. Consequently in \circled{5}, we further add $\delta = 0.4$ to $r$ which leads to $r = 0.748$.
0.748 corresponds to $v_{10}$, and thus results in a successful selection.
\cready{\textit{It is important to note that this selection is identical to updated sampling in Fig.~\ref{fig:brs}(b).}}} }
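The steps \circled{1}--\circled{5} above translate into a handful of arithmetic operations. Below is a sketch of one adjustment for the single pre-selected-region case; \texttt{search(lo, hi, r)} is a placeholder for the binary search over the unchanged CTPS restricted to $(lo, hi)$.
{\footnotesize
\begin{verbatim}
// One bipartite region search adjustment, assuming r' landed
// in the pre-selected region (l, h) of the CTPS.
__device__ int brs_adjust(float rPrime, float l, float h) {
  float delta = h - l;
  float r = rPrime * (1.0f - delta);  // r'/lambda, step 3
  if (r < l)
    return search(0.0f, l, r);        // step 4: left of (l, h)
  return search(h, 1.0f, r + delta);  // step 5: right of (l, h)
}
\end{verbatim}
}
Plugging in the example above ($r'=0.58$, $l=0.2$, $h=0.6$) gives $r = 0.58 \times 0.6 = 0.348 > l$, hence $r + \delta = 0.748$, reproducing the selection of $v_{10}$.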
\vspace{0.05in}
\textbf{Proof of Bipartite Region Search.}
We will prove the soundness of bipartite region search mathematically, in the scenario when one and only one vertex has been pre-selected.
\begin{theorem}\label{thm:brs}
Assuming $v_k$'s probability region is $(F_k, F_{k+1})$ in the original CTPS. Recall the definition of $F$ in Section \ref{sect:background:ns}. Let $v_s$ be the pre-selected vertex whose region is $(l, h)$, i.e., $l = F_s$ and $h = F_{s+1}$, and let $F'_k$ be the probability in the updated CTPS. With $\lambda = \frac{1}{1 - (h - l)}$ and $\delta = h - l$, we prove that:
{\footnotesize
\begin{equation}\label{eq:brs}
F'_{k} =
\begin{cases}
\lambda\cdot F_k; & \text{$k<s,$}\\
\lambda\cdot (F_k - \delta); & \text{otherwise.}
\end{cases}
\end{equation}
}
\end{theorem}
\begin{proof}
Adopting Equation~\ref{equ:tp}, we get \new{$F_k = \frac{\sum_{i=1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}$}. \new{Denoting $\mathbb{F}=\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{n}b_{i}$}, Theorem~\ref{thm:tp} leads to:
\new{\footnotesize
\begin{equation}
F'_{k} =
\begin{cases}
\frac{\sum_{i=1}^{k-1}b_{i}}{\mathbb{F}}; & \text{$k<s,$}\\
\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\mathbb{F}}; & \text{otherwise.}
\end{cases}
\end{equation}
}
\noindent When $k<s$,
\new{\footnotesize
\begin{align}
F'_k&=\frac{\sum_{i=1}^{k-1}b_{i}}{\mathbb{F}}
=\frac{\sum_{i=1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}\cdot \frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}}
=F_k\cdot\frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}}.
\end{align}
}
\noindent Since \new{$\frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}} = \frac{1}{1 - (h - l)} = \lambda$}, we prove $F'_k = \lambda\cdot F_k$.
When $k > s$,
\new{\footnotesize
\begin{equation}
\begin{aligned}
F'_k&=\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\mathbb{F}}
=\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}\cdot \frac{\sum_{i=1}^{n}b_{i}}{\mathbb{F}}\\
&=\frac{\sum_{i=1}^{s-1}b_{i} + \sum_{i=s+1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}}\cdot\lambda
=\frac{\sum_{i=1}^{k-1}b_{i} - b_s}{\sum_{i=1}^{n}b_{i}}\cdot\lambda\\
&=(\frac{\sum_{i=1}^{k-1}b_{i}}{\sum_{i=1}^{n}b_{i}} - \frac{b_s}{\sum_{i=1}^{n}b_{i}})\cdot\lambda
=(F_k - \frac{b_s}{\sum_{i=1}^{n}b_{i}})\cdot\lambda.
\label{equ:factor}
\end{aligned}
\end{equation}
}
\noindent Since \new{$\frac{b_s}{\sum_{i=1}^{n}b_{i}} = {h-l} = \delta$}, we obtain $F'_k = \lambda\cdot (F_k -\delta)$.
\end{proof}
Theorem~\ref{thm:brs} states that one can adjust the probabilities from the original CTPS to derive the updated CTPS.
Reversing the transformation direction, we further obtain:
{\footnotesize
\begin{equation}\label{eq:brs_random}
F_{k} =
\begin{cases}
\frac{F'_k}{\lambda}; & \text{$k<s,$}\\
\frac{F'_k}{\lambda} + \delta; & \text{otherwise.}
\end{cases}
\end{equation}
}
Since $r'$ is the random number for the updated CTPS, we can substitute $F'_k$ with $r'$ in Equation~\ref{eq:brs_random} to derive the corresponding $r$ in the original CTPS that falls right at the region boundaries of original CTPS, e.g., \{0, 0.33, 0.56, 0.78, 1\} in Fig.~\ref{fig:brs}(b) fall right at \{0, 0.2, 0.73, 0.87, 1\} in Fig.~\ref{fig:brs}(c). Further, since $F_k$ is a strictly monotonic function of $F'_k$, it is clear that if $r'$ falls between the region boundaries of the updated CTPS, the derived $r$ will also do so in the original CTPS.
This ensures bipartite region search will make identical selection as if the CTPS is updated.
It is also provable that statistically, the selection probability of our algorithm is the same as the desired transition probability in more complicated scenarios where multiple vertices have been pre-selected.
\begin{comment}
\begin{theorem}
Let bipartite region search select $v_s$ in the 1st iteration.
Assume $v_s$ has been pre-selected so it advances to the next iteration.
Then, its probability of selecting $v_k$ $(k \neq s)$ in the 2nd iteration, $P_{brs}(v_k)$, equals to the desired transition probability $P(v_k|v_s \in S)$ in Theorem \ref{thm:stp}.
\end{theorem}
\begin{proof}
When $k<s$, $v_k$ falls into the region $(0, l)$ in TPS.
As a result, $v_k$ is selected if and only if $(0, l)$ is selected at \circled{3} in the 1st iteration (the probability is denoted as $P_{brs}((0,l))$), and $v_k$ is chosen within $(0, l)$ in the 2nd iteration (the probability is denoted as $P_{brs}(v_k|(0,l))$).
From \circled{3}, it is obvious that $P_{brs}((0,l)) = \frac{l}{l+1-h}$.
Recall no vertices in $(0, l)$ are selected yet because $v_s$ is the only selected vertex, so based on Theorem \ref{thm:tp}, we know that $P_{brs}(v_k|(0,l)) = \frac{t_k}{l-0}$.
Apparently for bipartite region search, $P_{brs}((0,l))$) and $P_{brs}(v_k|(0,l))$ are independent, therefore,
{\footnotesize
\begin{equation}
\label{equ:5}
\begin{aligned}
P_{brs}(v_k) &= P_{brs}(v_k|(0, l)) \times P_{brs}((0, l))\\
&= \frac{t_k}{l-0} \times \frac{l}{l+1-h} = \frac{t_k}{l+1-h}.
\end{aligned}
\end{equation}
}
\noindent By definition of TPS (Equation \ref{equ:tp}), $t_s = h-l$, therefore,
{\footnotesize
\begin{equation}
\label{equ:6}
\begin{aligned}
l+1-h &= 1-(h-l) = \sum_{i=0}^{n-1} t_i - (h-l) \\
&= \sum_{i=0}^{n-1} t_i - t_s = \sum_{i=0}^{s-1} t_i + \sum_{i=s+1}^{n-1} t_i.
\end{aligned}
\end{equation}
}
\noindent Use Equation \ref{equ:6} to substitute $l+1-h$ in Equation \ref{equ:5}, thus
{\footnotesize
\begin{equation}
P_{brs}(v_k) = \frac{t_k}{\sum_{i=0}^{s-1} t_i + \sum_{i=s+1}^{n-1} t_i} = P(v_k|v_s \in S).
\end{equation}
}
\noindent Similarly, we can also prove $P_{brs}(v_k)=P(v_k|v_s \in S)$ when $k>s$.
\end{proof}
\textbf{Over-selection to Migrate Collision.}
Naturally, to select $N$ vertices, $N$ threads should be deployed so each of them selects one vertex.
In the meantime, GPU is capable to execute hundreds of thousands of threads in parallel.
To reduce the chance of selection collision, we propose {\em over-selection} by leveraging the massive parallelism capability of GPU.
Instead of having $N$ threads, over-selection always deploys more threads than the number of selected vertices.
The number of threads is a multiply of 32, as each thread warp has 32 threads for NVIDIA GPUs.
As a result, over-selection makes use of all intra-warp threads.
A shared counter is used to record how many unique vertices have been selected.
When a thread selects one unique vertex, it will check the counter.
If the counter indicates enough vertices have been selected, the thread drops the vertex and finishes its work.
Otherwise, the vertex is selected successfully and the counter increases 1.
\end{comment}
\vspace{0.05in}
\textbf{Strided Bitmap for Collision Detection.}
Bipartite region search requires a collision detection mechanism.
We introduce a per vertex bitmap to detect selection collision (line \new{13} in Fig. \ref{fig:select}).
For every candidate vertex, there is a unique bit in the bitmap to indicate whether it has been selected.
The bitmap is shared by all threads of a warp.
After each thread selects a vertex, we perform an atomic compare-and-swap operation to the corresponding bit in the bitmap.
If the bit is 0, which means no other threads have picked this vertex, we set it to 1.
Since GPUs do not have variables that support bit-wise atomic operations currently, we may use either 8-bit or 32-bit integer variables for bitmap representation, where each bit corresponds to one vertex.
As using 32-bit variables results in more conflicts when updating multiple bits within the same variable, we choose 8-bit variables instead.
To resolve the atomic contentions, we propose to use {\em strided} bitmaps, inspired by the set-associative cache organization \cite{jouppi1990improving}.
A strided bitmap scatters the bits of adjacent vertices across different 8-bit variables, as shown in Fig. \ref{fig:bitmap}. Instead of using the first five bits of the same 8-bit variable to indicate the status of all vertices in the contiguous bitmap, the strided bitmap spreads them into two variables to reduce conflicts.
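A sketch of the claim-and-detect step is given below. For portability the sketch packs bits into 32-bit words and uses \texttt{atomicOr}, whereas {\textsc{c-saw}} itself uses 8-bit variables; the strided index mapping is the point being illustrated.
{\footnotesize
\begin{verbatim}
// Try to claim candidate idx on a strided bitmap of nWords
// 32-bit words (idx < 32 * nWords). Adjacent indices land in
// different words, reducing atomic contention.
__device__ bool try_claim(unsigned *bitmap, int idx,
                          int nWords) {
  int word = idx % nWords;   // strided mapping across words
  int bit  = idx / nWords;
  unsigned m   = 1u << bit;
  unsigned old = atomicOr(&bitmap[word], m);
  return (old & m) == 0;     // true iff we set the bit first
}
\end{verbatim}
}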
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figures/bitmap.pdf}
\caption{Sampling the neighbors of $v_8$ in Fig. \ref{fig:example}(a), under: (a) contiguous bitmap and (b) strided bitmaps.
\vspace{-.1in}
}
\label{fig:bitmap}
\end{figure}
\new{
\vspace{0.05in}
\textbf{Data Structures.}
{\textsc{c-saw}} employs three major data structures: frontier queues, per-warp bitmap, and per-warp CTPS.
All these data structures are allocated in the GPU global memory before sampling starts.
A frontier queue is a structure of three arrays, $VertexID$, $InstanceID$, and $CurrDepth$ to keep track of the sampling process.
So far, all threads share one frontier queue, with a few exceptions that will be introduced in Section \ref{sect:outmem}.
Per-warp bitmaps and CTPSs are stored as arrays and get reused across the entire sampling process.
They are also located in global memory.
}
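As a sketch, the frontier queue described above can be pictured as a structure of three parallel arrays in global memory; field names mirror the text, while the layout itself is illustrative.
{\footnotesize
\begin{verbatim}
// Structure-of-arrays frontier queue, resident in GPU global
// memory. One entry per active vertex awaiting sampling.
struct FrontierQueue {
  int *VertexID;    // vertex to be sampled next
  int *InstanceID;  // sampling instance the vertex belongs to
  int *CurrDepth;   // current depth of that instance
  int  size;        // number of active entries
};
\end{verbatim}
}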
\begin{comment}
\subsection{Application Specific Optimizations}
\noindent \textbf{Avoiding Redundant Sampling.}
Most sampling algorithms and some random walk variations require not to explore the same vertex more than once i.e we avoid adding explored vertex to the candidate list.
In the example of Fig.~\ref{fig:example}(a), while exploring the neighbors of $v_7$ in the second iteration, it may select $v_8$, which has been added to the sample in the first iteration.
Especially, vertices with higher weights have higher chances of getting selected multiple times.
To avoid this redundant selection problem, we use a hash table to record vertices that have been selected.
Before adding any vertex to the candidate list, we first check if the vertex is present in the hash table or not
If the vertex is not present in the hash table, we add that vertex both into the candidate list and into the hash table. Particularly, we do this in two steps:
First, it locates the bin where the candidate vertex belongs to.
Second, we perform a linear search over the bin to check if the vertex was already explored for the same sample.
In this work, {\textsc{c-saw}} finds the bin size of 32 is adequate for all graphs.
\textbf{Caching Transition Probability}.
\hang{Applications that need this optimization, probability of one vertex is not changed}
We profile the vertex repetition for ten datasets listed in Table~\ref{Table-datasets}.
Through profiling , for a substantial number of vertices, the prefix sum is calculated multiple times.
Inspired by the observation, {\textsc{c-saw}} introduces caching transition probability to avoid repeated calculation.
Considering vertices with higher degrees are more susceptible to get sampled multiple times among various instances, {\textsc{c-saw}} caches the probability for higher-degree vertices.
To avoid repeated computations, we propose to cache the prefix sum of transition probability. For each vertex, we only compute the prefix sum once. When a vertex is visited for the first time, \textsc{c-saw} stores its prefix sum in the global memory for reuse. When it is visited again in any step of other sample instance, we read the pre-calculated result from the GPU memory instead of computing it again.
Essentially, we trade prefix sum computation with a memory access, which is much cheaper. We show the performance improvement from this optimization in Section\ref{sect:experiment:in-mem}.
For caching prefix sum of transition probability, we use an array to store the computed values for each edge. We also use a boolean value for each vertex as a flag to determine if the transition probability for that vertex is computed or not. After computing the prefix sum of transition probability, we update the transition probability to the array and set the flag for that vertex.
Fig.~\ref{fig:hash_compare} shows the ratio for number of comparisons made for linear search and hash based search for various datasets. Hashing reduces the overall comparison made by more the 40\% of than linear search. The space that we need to perform linear search would be greatly reduced by hashing.
\end{comment}
\section{Out-of-memory \& \new{Multi-GPU} {\textsc{c-saw}}}
\label{sect:outmem}
Sampling and random walk lift two important obstacles for out-of-memory computation:
they need neither the entire graph nor synchronization during computation.
This section takes advantage of this opportunity to enable fast out-of-memory \new{and multi-GPU} {\textsc{c-saw}}.
\subsection{Graph Partition}
{\textsc{c-saw}} \new{partitions the graph by simply assigning a contiguous and equal range of vertices and all their neighbor lists to one partition.} We adopt this method instead of advanced topology-aware partition (e.g., \textsc{METIS}~\cite{karypis1995metis, karypis1998fast,guattery1995performance}) and 2-D partition~\cite{boman2013scalable}, for \new{three} reasons. First and foremost, sampling and random walk require all the edges of a vertex be present in order to compute the transition probability.
Splitting the neighbor list of any vertex, which is the case in 2-D partition, would introduce fine-grained communication between partitions, that largely hampers the performance.
Second, topology-aware partition would require extremely long preprocessing time, as well as yield discontinued vertex ranges which often lead to more overhead than benefit.
\new{Third, this simple partitioning method allows {\textsc{c-saw}} to decide \cready{which} partition a vertex belongs to in constant time that is important for fast bulk asynchronous sampling (Fig.~\ref{fig:streaming}).}
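For illustration, with contiguous equal ranges the owning partition of a vertex reduces to one integer division (a sketch; names are assumed):
{\footnotesize
\begin{verbatim}
// Constant-time partition lookup under contiguous, equal
// vertex ranges.
__host__ __device__ int partition_of(int v, int perPartition) {
  return v / perPartition;
}
\end{verbatim}
}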
\subsection{Workload-Aware Partition Scheduling}
\label{sect:outmem:design}
Since multiple sampling instances are independent of each other, this dimension of flexibility grants {\textsc{c-saw}} the freedom of dynamically scheduling various partitions based upon the workload from both graph partitions and workers (such as GPU kernels and devices).
\vspace{0.05in}
\textbf{Workload-Aware Partition Scheduling.}
{\textsc{c-saw}} tracks the number of frontier vertices that fall into each partition to determine which partition will offer more workload (\circled{1} in Fig. \ref{fig:streaming}). \new{We refer to them as active vertices. Based upon the count, we also allocate thread blocks to each GPU kernel with the thread block based workload balancing described in the next paragraph.}
Subsequently, the partitions that contain more workload are transferred to the GPU earlier and sampled first (\new{\circled{2}} in Fig. \ref{fig:streaming}).
Non-blocking {cudaMemcpyAsync} is used to copy partitions to the GPU memory asynchronously.
{\textsc{c-saw}} samples this partition until it has no active vertices. \new{
Note that, {\textsc{c-saw}} stores frontier queues from all partitions in the GPU memory. It allows a partition to insert new vertices to its frontier queue, as well as the frontier queues of other partitions to enable communications.}
The actively sampled partition is only released from the GPU memory when its frontier queue is empty.
The reason is that partitions with more active vertices often insert more neighbors into their own frontier queues, which further leads to more workload.
As a result, this design can reduce the number of partitions transferred from CPU to GPU.
When it comes to computation, we dedicate one GPU kernel to one active partition along with a CUDA stream, in order to overlap the data transfer and sampling of different active partitions.
After parallel partition sampling finishes, we count the vertex number in each frontier queue to decide which partitions should be transferred to GPU for sampling next (\circled{3} in Fig. \ref{fig:streaming}).
The entire sampling is complete when there are no active vertices in all partitions.
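The host-side control loop can be pictured as follows. This is an illustrative sketch, not the literal {\textsc{c-saw}} code: \texttt{total\_active}, \texttt{sort\_by\_active\_desc}, \texttt{blocks}, \texttt{recount\_active}, and \texttt{sampleKernel} are placeholder helpers, and error handling is elided.
{\footnotesize
\begin{verbatim}
// Each scheduling wave: rank partitions by active-vertex count,
// copy the top nResident partitions to the GPU on separate
// streams, and sample them with proportional thread blocks.
while (total_active(counts, nParts) > 0) {
  sort_by_active_desc(order, counts, nParts);
  for (int k = 0; k < nResident; ++k) {
    int p = order[k];
    cudaMemcpyAsync(dPart[k], hPart[p], bytes[p],
                    cudaMemcpyHostToDevice, stream[k]);
    sampleKernel<<<blocks(counts[p]), 256, 0, stream[k]>>>
                (dPart[k], p);
  }
  cudaDeviceSynchronize();        // wait for this wave
  recount_active(counts, nParts); // re-rank for the next wave
}
\end{verbatim}
}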
\vspace{0.1in}
\textbf{Thread Block based Workload Balancing.}
Depending upon the properties of graphs and sample seeds, frontiers are likely not equally distributed across partitions.
As a result, the sampling and data transfer times differ across kernels as well.
Since the straggler kernel determines the overall execution time, it is ideal to balance the workload across kernels.
Consequently, we implicitly partition the GPU resources by controlling the thread block number of different kernels.
\textbf{Example.}
Fig.~\ref{fig:streaming} shows an example of out-of-memory sampling.
Here, we assume three graph partitions (i.e., P$_1$, P$_2$, P$_3$) for \new{the same graph in Fig.~\ref{fig:example}(a)}, two GPU kernels (i.e., Kernel$_1$ and Kernel$_2$), and the GPU memory can contain two active partitions.
\new{If we start sampling from vertices \{0, 2, 8\}}, P$_1$, P$_2$, and P$_3$\ will have \new{2, 0, and 1} active vertices initially.
Hence, kernel K$_1$ is assigned to work on \new{P$_1$} and kernel K$_2$ for P$_3$. To balance the workload, the ratio of thread block numbers assigned to K$_1$ and K$_2$ is set to \new{2:1}.
\new{Assuming vertices 0, 2, and 8 pick 7, 3, and 5, respectively, the frontier queues for P$_1$, P$_2$ and P$_3$ become \{3\}, \{7, 5\} and \{$\phi$\} as shown in bottom right of Fig.~\ref{fig:streaming}. Subsequently, K$_2$ exits because P$_3$'s frontier queue is empty, while K$_1$ continues sampling 3 and puts 4 into the frontier queue of P$_2$.
Then, K$_1$ also exits and leaves \{7, 5, 4\} in the frontier queue of P$_2$ to be scheduled next.}
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{Figures/Streaming_example.pdf}
\caption{Workload-aware scheduling of graph partition. \new{The upper part shows the toy graph and its partition. We start sampling within partitions 1 and 3. The lower part shows an example for out-of-memory sampling. For simplicity, we hide InstanceID and CurrDepth from the frontier queue. }
\vspace{-.1in}
}
\label{fig:streaming}
\end{figure}
\vspace{0.05in}
\new{\textbf{Correctness.}
The out-of-order nature of the workload-aware partition scheduling does not impact the correctness of {\textsc{c-saw}}.
With out-of-order scheduling, the sampling of one instance is not in the breadth-first order as in the in-order case.
The actual sampling order can be considered as a combination of breadth-first and depth-first orders.
However, since we keep track of the depth of sampled vertices to prevent an instance from reaching beyond the desired depth, the sampling result will be the same as if it were done in the breadth-first order.
}
\subsection{Batched Multi-Instance Sampling}
\label{sect:multi:balance}
In the out-of-memory setting, {\textsc{c-saw}} introduces {\em batched multi-instance sampling}, which \textit{concurrently} samples multiple instances, to combat the expensive data transferring cost.
Batched sampling is implemented by combining the active vertices of various concurrently sampling instances into a single frontier queue for each partition.
Along with the queue, we need to keep two extra pieces of metadata for each vertex, i.e., $InstanceID$ \new{and $CurrDepth$, which track the instance that a vertex belongs to and the current depth of that instance, respectively.}
During sampling, a thread warp in the kernel can work on any vertex in the queue, no matter whether they are from the same or different instances.
After it finishes selecting vertices (line 6 in Fig. \ref{fig:api}(b)), $InstanceID$ is used to find the corresponding frontier pool and sampled graph to update (line 7-8).
Note that there may exist multiple copies of the same vertex in the queue, because a common vertex can be sampled by multiple instances.
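A warp's view of batched sampling can be sketched as follows; \texttt{next\_entry}, \texttt{select\_neighbor}, \texttt{push}, and \texttt{append} are placeholder helpers for the operations described above, and \texttt{Depth} stands for the user-given depth bound.
{\footnotesize
\begin{verbatim}
// Consume one batched queue entry, regardless of its instance,
// and route the result with the per-vertex metadata.
int e    = next_entry(q);        // any entry, any instance
int v    = q.VertexID[e];
int inst = q.InstanceID[e];
int d    = q.CurrDepth[e];
int u    = select_neighbor(v);   // bias based vertex selection
if (d + 1 < Depth)
  push(q, u, inst, d + 1);       // stays in the batched queue
append(sampled[inst], v, u);     // per-instance sample list
\end{verbatim}
}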
\begin{figure*}[t]
\centering
\subfloat[{\textsc{c-saw}} vs. KnightKing on biased random walk.]{
\includegraphics[width=.495\linewidth]{Figures/knightking.pdf}
}%
\subfloat[{\textsc{c-saw}} vs. GraphSAINT on multi-dimensional random walk.]{
\includegraphics[width=.475\linewidth]{Figures/graphsaint.pdf}
}
\caption{\new{{\textsc{c-saw}} vs. the state-of-the-art in million sampled edges per second with 1 GPU and 6 GPUs (higher is better).}
}
\label{fig:stateofart}
\end{figure*}
Batched sampling can also balance the workload across sampling instances. If we were to sample various instances separately,
some instances would encounter high-degree vertices more often and thus more workload, since many real-world graphs hold highly skewed degree distributions. This would end up with skewed workload distributions.
Batched sampling solves this problem using a vertex-grained workload distribution, instead of instance-grained distribution.
\new{
\subsection{Multi-GPU {\textsc{c-saw}}}
As the number of sources continues to grow, the workload will saturate one GPU and go beyond. In this context, scaling {\textsc{c-saw}} to multiple GPUs would help accelerate the sampling performance.
Since various sampling instances are independent from each other, {\textsc{c-saw}} simply divides all the sampling instances into several disjoint groups, each of which contains equal number of instances. Here, the number of disjoint groups is the same as the number of GPUs. Afterwards, each GPU will be responsible for one sampling group. During sampling, each GPU will perform the same tasks as shown in Fig.~\ref{fig:streaming} and no inter-GPU communication is required.
}
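A minimal C++ sketch of the assumed instance-to-GPU assignment follows; the round-robin split and the function name are illustrative, the only property required by the text being that the groups are disjoint and (nearly) equally sized.
\begin{verbatim}
#include <vector>

// Split instance IDs 0..numInstances-1 into numGPUs disjoint groups
// of (nearly) equal size; group g is then sampled entirely on GPU g.
std::vector<std::vector<int>> splitInstances(int numInstances,
                                             int numGPUs) {
  std::vector<std::vector<int>> groups(numGPUs);
  for (int i = 0; i < numInstances; ++i)
    groups[i % numGPUs].push_back(i);  // round-robin assignment
  return groups;
}
\end{verbatim}
Since each GPU runs the full out-of-memory pipeline on its own group, no inter-GPU communication or synchronization is needed.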
\vspace{0.1in}
\section{Evaluations}
\label{sect:experiment}
{\textsc{c-saw}} is {implemented with $\sim$4,000} lines of CUDA code and compiled with CUDA Toolkit 10.1.243 and g++ 7.4.0
with the -O3 optimization flag.
We evaluate {\textsc{c-saw}} on the Summit supercomputer of Oak Ridge National Laboratory~\cite{ornl_summit}.
Each Summit node is equipped with 6 NVIDIA Tesla V100 GPUs, dual-socket 22-core POWER9 CPUs and 512 GB main memory. Particularly, each V100 GPU is equipped with 16GB device memory.
For the random number generation, we use the cuRAND library~\cite{tian2009mersenne}.
\begin{table}[!h]
{\scriptsize
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Dataset & Abbr. & \begin{tabular}[c]{@{}l@{}}Vertex \\ Count\end{tabular} & \begin{tabular}[c]{@{}l@{}}Edge \\ Count\end{tabular} & \begin{tabular}[c]{@{}l@{}}Avg.\\ degree\end{tabular} & \new{\begin{tabular}[c]{@{}l@{}}Size\\ (of CSR)\end{tabular}} \\ \hline
Amazon0601~\cite{snapnets} & AM & 0.4M & 3.4M & 8.39 &\new{59 MB} \\ \hline
As-skitter \cite{snapnets} & AS & 1.7M & 11.1M & 6.54 &\new{325 MB}\\ \hline
cit-Patents \cite{snapnets} & CP & 3.8M & 16.5M & 4.38 &\new{293 MB}\\ \hline
LiveJournal \cite{snapnets} & LJ & 4.8M & 68.9M & 14.23 & \new{1.1 GB}\\ \hline
Orkut \cite{snapnets} & OR & 3.1M & 117.2M & 38.14 & \new{1.8 GB}\\ \hline
Reddit \cite{zeng2019graphsaint,yelpreddit} &RE &0.2M &11.6M & 49.82 & \new{179 MB}\\ \hline
web-Google \cite{snapnets} & WG & 0.8M & 5.1M & 5.83 & \new{85 MB}\\ \hline
Yelp \cite{zeng2019graphsaint,yelpreddit} &YE &0.7M &6.9M & 9.73& \new{111 MB}\\ \hline \hline
Friendster \cite{snapnets} & FR & 65.6M & 1.8B & 27.53 &\new{29 GB} \\ \hline
Twitter \cite{konect:2017:twitter} & TW & 41.6M & 1.5B & 35.25 &\new{22 GB}\\ \hline
\end{tabular}
}
\caption{Details of evaluated graphs.}\label{Table-datasets}
\end{table}
\textbf{Dataset.}
We use the graph datasets in Table~\ref{Table-datasets} to study {\textsc{c-saw}}. This dataset collection contains a wide range of applications, such as social networks (LJ, OR, FR and TW), forum discussion (RE and YE), online shopping (AM), citation networks (CP), computer routing (AS) and web page (WG).
\textbf{Metrics.}
Instead of Traversed Edges Per Second (TEPS) in classical graph analytics~\cite{liu2015enterprise,wang2016gunrock}, we introduce a new metric - Sampled Edges Per Second (SEPS) - to evaluate the performance of sampling and random walk. Formally, SEPS = $\frac{\#~\text{SampledEdges}}{\text{Time}}$. This metric is more suitable than TEPS for evaluating sampling and random walk because these algorithms might use different methods, and thus traverse different numbers of edges, while ending up with the same number of sampled edges.
\new{Similar to previous work \cite{wang2016gunrock,liu2015enterprise}, the kernel execution time is used to compute SEPS, i.e., the time spent on generating the samples}, except for the out-of-memory case that also includes the time for transferring the partitions. Note, each reported result is an average of three runs with different sets of seeds.
\textbf{Test Setup.}
Analogous to GraphSAINT~\cite{zeng2019graphsaint}, we generate \new{4,000} \new{instances} for random walk algorithms and 2,000 instances for sampling algorithms.
For sampling, both the \new{\textit{NeighborSize}} (i.e., the number of neighbors sampled from one frontier) and $Depth$ are 2 for analyzing the performance of {\textsc{c-saw}}, except forest fire, which uses $P_{f} = 0.7$ to derive \new{\textit{NeighborSize}} as in~\cite{leskovec2006sampling}. For \new{the biased random walk algorithm}, the length of the walk is \new{2,000}. \new{For multi-dimensional random walk, similar to GraphSAINT, we use \new{2,000} as the \textit{FrontierSize} for each instance.}
\subsection{{\textsc{c-saw}} vs. State-of-the-art}
\label{sect:experiment:in-mem}
\vspace{-0.03in}
First, we compare {\textsc{c-saw}} against the state-of-the-art frameworks, KnightKing and GraphSAINT. \new{Our profiling result shows that both GraphSAINT and KnightKing use multiple threads to perform the computation, where the \# threads = \# cores.}
Since KnightKing only supports random walk variations, we compare {\textsc{c-saw}} with KnightKing for \new{biased random walk}.
\new{GraphSAINT provides both Python and C++ implementations. We choose the C++ implementation~\cite{zeng2019accurate},
which exhibits better performance. \cready{The} C++ version only supports multi-dimensional random walk, which is studied in Fig.~\ref{fig:stateofart}(b)}.
As shown in Fig.~\ref{fig:stateofart}, {\textsc{c-saw}} presents superior performance over both projects. On average, {\textsc{c-saw}} is\new{ {10}$\times$ and {14.7}$\times$} faster than KnightKing with 1 GPU and 6 GPUs, respectively. Compared to GraphSAINT, {\textsc{c-saw}} is \new{{8.1}$\times$ and {11.5}$\times$ faster with 1 GPU and 6 GPUs respectively. Each instance of sampled graphs has 1,703 edges on average.}
While {\textsc{c-saw}} outperforms both projects across all graphs, we generally observe better speedup on graphs with a lower average degree, such as AM, CP, and WG against KnightKing and AM against GraphSAINT. This stems from \new{1) the superior computing capability of the GPU over the CPU}, 2) the fact that {\textsc{c-saw}} is free of \cready{bulk synchronous parallelism (BSP)}~\cite{malewicz2010pregel}, which allows it to always have adequate computing tasks for sparse graphs, and 3) the unprecedented bandwidth of the V100 GPU over the {POWER9} CPU, i.e., 900 GB/s vs. {170 GB/s} \cite{ornl_summit}.
This underscores the need for GPU-based sampling and random walk.
\subsection{In-memory Optimization}
\begin{figure}[ht]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_opt_NSb.pdf}
}
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_opt_FF.pdf}
}\\
\subfloat[\new{Layer sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_opt_LS.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_opt_NS.pdf}
}
\caption{\new{Performance impacts of in-memory optimizations for various sampling algorithms. \vspace{-0.1in}
}
}
\label{fig:in_memory}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.01in}\includegraphics[width=.49\linewidth]{Figures/in_prof_brs_NSb.pdf}
}
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.01in}\includegraphics[width=.46\linewidth]{Figures/in_prof_brs_FF.pdf}
}
\\
\subfloat[\new{Layer sampling.}]{
\hspace{-.01in}\includegraphics[width=.49\linewidth]{Figures/in_prof_brs_LS.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.01in}\includegraphics[width=.46\linewidth]{Figures/in_prof_brs_NS.pdf}
}
\caption{Average \# iteration w/ and w/o \new{bipartite region search} for various algorithms.
\vspace{-0.1in}
}
\label{fig:profile_bipartite}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_prof_bit_NSb.pdf}
}
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_prof_bit_FF.pdf}
}
\\
\subfloat[\new{Layer sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/in_prof_bit_LS.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/in_prof_bit_NS.pdf}
}
\caption{\new{Total search reduction by bitmap for various algorithms.}
}
\label{fig:profile_bitmap}
\end{figure}
Fig.~\ref{fig:in_memory} studies the performance impacts of \new{bipartite region search} and bitmap optimizations over repeated sampling (Fig.~\ref{fig:brs}(a)) and updated sampling (Fig.~\ref{fig:brs}(b)) across four applications, which include both biased and unbiased algorithms.
Repeated sampling is used as the performance baseline for comparison.
\new{FR and TW are not studied in this subsection because they exceed the GPU memory capacity.}
Particularly, \new{bipartite region search} introduces, on average, {1.7}$\times$, {1.4}$\times$, {1.7}$\times$ and {1.17}$\times$ speedup, on \new{biased neighbor sampling, forest fire sampling, layer sampling, and unbiased neighbor sampling} respectively. \new{Bipartite region search} presents better performance compared with both repeated sampling and updated sampling.
Bitmap further improves speedup to {1.8}$\times$, {1.5}$\times$, {1.8}$\times$, and {1.28}$\times$ on these four applications, respectively. The performance for AM, CP, and WG highlights the effectiveness of {\textsc{c-saw}}: with lower average vertex degrees, these graphs suffer from more selection collisions, and \new{bipartite region search} achieves better speedup by mitigating those collisions.
Fig.~\ref{fig:profile_bipartite} and~\ref{fig:profile_bitmap} further profile the effectiveness of our two optimizations. On average, \new{bipartite region search} reduces the average number of iterations to pick a neighbor by {5.0}$\times$, {1.5}$\times$, {1.8}$\times$, and {1.7}$\times$ for these four applications, respectively.
\new{
Here, \#~iterations refers to the trip count of the do-while loop in Fig.~\ref{fig:select} (line 10-14), which represents the amount of computation used to select a vertex.
For analysis, we compare the average number of iterations for all sampled vertices, i.e., $\frac{Total~\#~\text{iterations of sampled vertices}}{\#~\text{sampled vertices}}$.}
We observe more \new{reduction in \#~iterations} for \new{biased neighbor sampling} than for the other algorithms, as it has a higher selection collision chance and thus requires more iterations without \new{bipartite region search}.
With relatively larger neighbor pools, collision is less likely to happen in \new{layer sampling}, which explains its lower benefit from bipartite region search.
Similarly, \new{unbiased neighbor sampling} and \new{forest fire sampling} incur fewer collisions due to unbiased sampling.
Fig.~\ref{fig:profile_bitmap} shows the effectiveness of bitmap over the baseline which stores the sampled vertices in the GPU shared memory and performs a linear search to detect collision. \new{The ratio metric in Fig.~\ref{fig:profile_bitmap} compares the total number of searches performed by bitmap with that of baseline, i.e., $\text{Ratio} = \frac{\sum\#~\text{searches in bitmap}}{\sum\#~\text{searches in baseline}}$.} Compared to baseline, bitmap reduces the total searches by
{{63}\%, {83}\%, {71}\%, and {81}\%} for these four applications, respectively.
Despite the significant search count reduction from bitmap, \cready{the} overhead of atomic operations prevents us from achieving speedups proportional to the search count reduction. The sketch below contrasts the two collision-detection schemes.
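The following host-side C++ sketch contrasts the baseline linear search with bitmap-based collision detection; the bit layout is our assumption, and on the GPU the bitmap lives in shared memory and is updated with an atomic OR rather than the plain write shown here.
\begin{verbatim}
#include <cstdint>
#include <vector>

// Baseline: linear scan over the already-selected vertices,
// O(k) work per probe for k selected vertices.
bool collidesLinear(const std::vector<uint32_t>& picked, uint32_t v) {
  for (uint32_t u : picked)
    if (u == v) return true;
  return false;
}

// Bitmap: one bit per candidate neighbor index, O(1) per probe.
// Returns true if the bit was already set, i.e., a collision.
bool testAndSet(std::vector<uint32_t>& bitmap, uint32_t idx) {
  uint32_t word = idx >> 5, mask = 1u << (idx & 31);
  bool hit = (bitmap[word] & mask) != 0;
  bitmap[word] |= mask;  // atomicOr(&bitmap[word], mask) on the GPU
  return hit;
}
\end{verbatim}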
\subsection{Out-of-memory Optimization}
\begin{figure}[t]
\centering
\subfloat[\new{Biased neighbor sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_opt_NSb.pdf}
}
\subfloat[\new{Biased random walk.} ]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_opt_RW.pdf}
}\\
\subfloat[\new{Forest fire sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_opt_FF.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.} ]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_opt_NS.pdf}
}
\caption{Performance impacts of out-of-memory optimizations. Here, \new{the baseline implementation refers to partition transfer based on active partitions without any optimization}.
\vspace{-0.15in}
}
\label{fig:out_memory}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_bal_NSb.pdf}
}
\subfloat[\new{Biased random walk.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_bal_RW.pdf}
} \\
\subfloat[\new{Forest fire sampling.} ]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_bal_FF.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_bal_NS.pdf}
}
\vspace{-.05in}
\caption{Standard deviation of kernel time for multi-instance batching and workload-aware balancing in out-of-memory {\textsc{c-saw}} \new{(lower is better). Here, baseline represents even distribution of resources. \vspace{-0.15in}}
}
\label{fig:profile_batched}
\end{figure}
Fig.~\ref{fig:out_memory} presents the performance impacts of \new{multi-instance} batched sampling (BA), workload-aware scheduling (WS), and thread block based workload balancing (BAL) \new{on both large graphs and small graphs. For the sake of analysis, we pretend small graphs do not fit in GPU memory.} For the experimental analysis, we use 4 partitions for each graph and two CUDA streams.
We assume the GPU memory can hold at most two partitions at a time, for all graphs.
Particularly, batched sampling introduces, on average, {2.0}$\times$, {1.9}$\times$, {2.1}$\times$, and {2.7}$\times$ speedup, respectively on \new{biased neighbor sampling, biased random walk, forest fire sampling, and unbiased neighbor sampling}. Workload-aware scheduling further introduces {3.2}$\times$, {2.8}$\times$, {3.9}$\times$, and {3.3}$\times$ speedups on these four applications, respectively. Workload balancing gives, on average, {3.5}$\times$ speedup over all applications.
\begin{figure}[ht]
\centering
\subfloat[\new{Biased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_act_NSb.pdf}
}
\subfloat[\new{Biased random walk.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_act_RW.pdf}
}
\\
\subfloat[\new{Forest fire sampling.}]{
\hspace{-.05in}\includegraphics[width=.50\linewidth]{Figures/out_prof_act_FF.pdf}
}
\subfloat[\new{Unbiased neighbor sampling.}]{
\hspace{-.05in}\includegraphics[width=.47\linewidth]{Figures/out_prof_act_NS.pdf}
}
\caption{Partition transfer counts for workload-aware scheduling \new{(lower is better)}.
\vspace{-0.1in}
}
\label{fig:profile_degree}
\end{figure}
Figs.~\ref{fig:profile_batched} and~\ref{fig:profile_degree} explain the effectiveness of the two optimizations. \new{We use the standard deviation to measure the workload imbalance in the runtime of the two kernels for overall sampling.} On average, multi-instance batched sampling (BA) and thread block based workload balancing (BAL) reduce the average kernel time by {27}\%, {12}\%, {23}\%, and {26}\%, respectively on the four applications.
As active vertices increase exponentially with depth during sampling, \new{biased neighbor sampling, forest fire sampling, and unbiased neighbor sampling} observe more reduction in kernel time than \new{biased random walk}. Workload-aware scheduling reduces the overall partition transfers by {1.2}$\times$, {1.3}$\times$, {1.2}$\times$, and {1.1}$\times$ on these four applications, respectively. Even with a moderate decrease in partition transfers, we still achieve noticeable speedups.
\begin{figure}[ht]
\vspace{-0.25cm}
\centering
\subfloat[NeighborSize: 1 - 8.]{
\hspace{-.05in}\includegraphics[width=0.95\linewidth]{Figures/sampling_rate.pdf}
}
\\
\subfloat[\# instances: 2k - 16k.]{
\hspace{-.05in}\includegraphics[width=0.95\linewidth]{Figures/sampling_size.pdf}
}
\vspace{-0.1in}
\caption{\new{Biased neighbor sampling with (a) \textit{NeighborSize} as 1, 2, 4, and 8 and (b) \# instances as 2k, 4k, 8k and 16k.\vspace{-0.1in}}
}
\vspace{-0.05in}
\label{fig:time}
\end{figure}
\vspace{0.05in}
\subsection{Studying NeighborSize and \#~Instances in {\textsc{c-saw}}}
Fig.~\ref{fig:time} reports the impact of various \textit{NeighborSize} values and \#~instances on time consumption. Here, we use $Depth = 3$ and 16k instances in Fig.~\ref{fig:time}(a) for extensive analysis. For Fig.~\ref{fig:time}(b), we use \textit{NeighborSize} = 8.
As shown in Fig.~\ref{fig:time}(a), larger \textit{NeighborSize} leads to longer sampling time. The average sampling times for \textit{NeighborSize} of 1, 2, 4, and 8 are 3, 4, 7, and 14 ms, respectively.
Similarly, the increase of sampling instances, as shown in Fig.~\ref{fig:time}(b), also results in longer sampling time. The average sampling times for 2k, 4k, 8k, and 16k instances are 2, 5, 9, and 15 ms, respectively.
It is important to note that graphs with higher average degrees, i.e., TW, RE, and OR, have longer sampling time, while the impact of graph sizes on sampling time is secondary.
\begin{figure}[t]
\centering
\subfloat[2,000 instances.]{
\hspace{-.05in}\includegraphics[width=0.9\linewidth]{Figures/scaling_2k.pdf}
}
\\
\subfloat[8,000 instances.]{
\hspace{-.05in}\includegraphics[width=0.9\linewidth]{Figures/scaling_8k.pdf}
}
\caption{\new{Scaling {\textsc{c-saw}} from 1 to 6 GPUs with (a) 2,000 and (b) 8,000 instances for biased neighbor sampling.\vspace{-0.15in}}
}
\label{fig:scaling}
\end{figure}
\vspace{0.02in}
\subsection{{\textsc{c-saw}} Scalability}
Fig.~\ref{fig:scaling} scales {\textsc{c-saw}} from 1 to 6 GPUs for different numbers of sampling instances.
For 2,000 and 8,000 instances, we achieve {1.8}$\times$ and {5.2}$\times$ speedup with 6 GPUs, respectively. The reason is that 2,000 instances fail to saturate 6 GPUs. With 8,000 instances, there is more workload, which leads to better scalability. We also observe that graphs with lower average degrees present better scalability because their workloads are better distributed across sampling instances.
\section{Related Works}
\label{sect:related}
Although there is a surge of frameworks for classical graph algorithms, including think like a vertex~\cite{low2014graphlab,malewicz2010pregel}, an edge~\cite{roy2013x}, a graph~\cite{tian2013think}, an IO partition~\cite{liu2017graphene}, and Domain Specific Languages~\cite{hong2012green,zhang2018graphit}, among many others~\cite{liu2019simd,bader2006gtgraph,sundaram2015graphmat}, very few projects target graph sampling and random walk, which are the focus of {\textsc{c-saw}}.
This section discusses the closely related work from the following three aspects.\vspace{3px}
\textbf{Programming Interface.}
KnightKing~\cite{yang2019knightking} proposes a walker-centric model to support random walk~\cite{ribeiro2010estimating,li2015random}, e.g., Node2vec~\cite{grover2016node2vec,10.1145/3159652.3159706}, Deepwalk~\cite{perozzi2014deepwalk}, and PPR~\cite{ilprints750,lofgren2014fast,lin2019distributed}, and hence fails to accommodate sampling algorithms that are important for graph learning and sparsification~\cite{ribeiro2010estimating,gao2018large,chen2018fastgcn,ying2018graph,hamilton2017inductive,leskovec2006sampling,gaihre2019deanonymizing}. The same holds for~\cite{lakhotia2019parallel,chen2016general}, which also support only limited sampling/random walk algorithms.
GraphSAINT~\cite{zeng2019graphsaint,zeng2019accurate} explores three graph sampling methods, i.e., random vertex and edge sampling, and random walk based sampler, but fails to arrive at a universal framework.
\cite{tariq2017power} supports deletion-based sampling algorithms~\cite{krishnamurthy2007sampling},
but this design is inefficient for large graphs that need to remove most edges.
In this work, {\textsc{c-saw}} offers a bias-centric framework that can support both sampling and random walk algorithms, and hide the GPU programming complexity from end users.
\vspace{0.05in}
\textbf{Transition Probability Optimizations.} Existing projects often explore the following optimizations, i.e., probability pre-computation and data structure optimization. Particularly, KnightKing~\cite{yang2019knightking} pre-computes the alias table for static transition probability, and resorts to dartboard for the dynamic counterpart which is similar to~\cite{zeng2019accurate}. Interestingly, kPAR~\cite{shi2019realtime} even proposes to pre-compute random walks to expedite the process.
Since large graphs cannot afford to index the probabilities of all vertices,~\cite{lin2019distributed} only pre-computes for hub vertices and further uses hierarchical alias method, i.e., alias tree for \cready{distributed} sampling.
However, not all sampling and random walk algorithms could have deterministic probabilities that support pre-computation.
{\textsc{c-saw}} finds \cready{inverse transform sampling} to be ideal for GPUs, and achieves superior performance over the state-of-the-art even when computing the probability during runtime.
\vspace{0.05in}
\textbf{Out-of-memory Processing.} GPU unified memory and partition-centric processing are viable methods for out-of-memory graph processing.
Since graph sampling is irregular, unified memory is not a suitable option~\cite{mishra2017um,li2019um}. Besides, partition-centric options~\cite{graphchi,zheng2015flashgraph,liu2017graphene,han2017graphie,chiang2019cluster} load each graph partition from either secondary storage to memory or CPU memory to GPU for processing. Since prior works deal with classical graph algorithms, they need BSP.
In contrast, {\textsc{c-saw}} takes advantage of the asynchronous nature of sampling to introduce workload-aware scheduling and batched sampling to reduce the data transfer between GPU and CPU.
\section{Conclusion}
\label{sect:conclusion}
This paper introduces {\textsc{c-saw}}, a novel, generic, and optimized GPU graph sampling framework that supports a wide range of sampling and random walk algorithms. Particularly, we introduce a novel bias-centric framework, bipartite region search, and workload-aware out-of-GPU and multi-GPU scheduling for {\textsc{c-saw}}.
Taken together, our evaluation shows that {\textsc{c-saw}} bests the state-of-the-art.
\section*{Acknowledgement}
We thank the anonymous reviewers for their helpful suggestions and feedback. This research is supported in part by the National Science Foundation CRII award No. 2000722, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Contract No. DE-AC02-05CH11231 at Lawrence Berkeley National Laboratory, and Brookhaven National Laboratory, which is operated and managed for the U.S. Department of Energy Office of Science by Brookhaven Science Associates under contract No. DE-SC0012704.
\bibliographystyle{unsrt}
\label{intro}
\begin{table}[t]
\caption{A typical two-way contingency-table/cross-tabulation
($n_{j,k}$ is a nonnegative integer
with $j=1$,~$2$, \dots, $r$,\ \ \ $k=1$,~$2$, \dots, $s$;\ \ \
$n_{j,\centerdot} = \sum_{k=1}^s n_{j,k}$ is a row total
with $j=1$,~$2$, \dots,~$r$;
$n_{\centerdot,k} = \sum_{j=1}^r n_{j,k}$ is a column total
with $k=1$,~$2$, \dots, $s$;
and $n_{\centerdot,\centerdot} = \sum_{j=1}^r \sum_{k=1}^s n_{j,k} = n$ is the grand total)}
\label{tab}
\vspace{-2.5em}
\begin{center}
$$
\begin{array}{c|cccc|c}
& 1 & 2 & \cdots & s & \\\hline
1 & n_{1,1} & n_{1,2} & \cdots & n_{1,s} & n_{1,\centerdot} \\
2 & n_{2,1} & n_{2,2} & \cdots & n_{2,s} & n_{2,\centerdot} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
r & n_{r,1} & n_{r,2} & \cdots & n_{r,s} & n_{r,\centerdot} \\\hline
& n_{\centerdot,1} & n_{\centerdot,2} & \cdots & n_{\centerdot,s} & n_{\centerdot,\centerdot}
\end{array}
$$
\end{center}
\vspace{1em}
\end{table}
\begin{table}
\caption{The model for homogeneity of proportions
($n_{1,\centerdot}$, $n_{2,\centerdot}$, \dots, $n_{r,\centerdot}$ are the row totals;
$n_{\centerdot,1}$, $n_{\centerdot,2}$, \dots, $n_{\centerdot,s}$ are the column totals;
and $n_{\centerdot,\centerdot} = n$ is the grand total)}
\label{homo}
\vspace{-2em}
\begin{center}
$$
\begin{array}{c | c c c c | c}
& \phantom{\Big|} 1 & 2
& \cdots & s & \\\hline
1 & \phantom{\Big|}n_{1,\centerdot} \cdot n_{\centerdot,1}/n & n_{1,\centerdot} \cdot n_{\centerdot,2}/n
& \cdots & n_{1,\centerdot} \cdot n_{\centerdot,s} / n & n_{1,\centerdot} \\
2 & \phantom{\Big|}n_{2,\centerdot} \cdot n_{\centerdot,1}/n & n_{2,\centerdot} \cdot n_{\centerdot,2}/n
& \cdots & n_{2,\centerdot} \cdot n_{\centerdot,s} / n & n_{2,\centerdot} \\
\vdots & \phantom{\Big|} \vdots & \vdots
& \ddots & \vdots & \vdots \\
r & \phantom{\Big|}n_{r,\centerdot} \cdot n_{\centerdot,1}/n & n_{r,\centerdot} \cdot n_{\centerdot,2}/n
& \cdots & n_{r,\centerdot} \cdot n_{\centerdot,s} / n & n_{r,\centerdot} \\\hline
& \phantom{\Big|} n_{\centerdot,1} & n_{\centerdot,2}
& \cdots & n_{\centerdot,s} & n_{\centerdot,\centerdot}
\end{array}
$$
\end{center}
\end{table}
The statistical analysis of categorical data is commonly formulated
in the framework of contingency-tables/cross-tabulations;
Table~\ref{tab} provides a typical two-way example
\citep[see, for instance, Chapter~4 of]
[for a comprehensive treatment]{andersen}.
A common task is to ascertain whether the given data
(displayed in Table~\ref{tab}) is consistent
up to expected statistical fluctuations
with the model for homogeneity of proportions (displayed in Table~\ref{homo}).
When considering homogeneity, we assume that the probabilistic process
generating the given data fixes the column totals (but not the row totals)
by construction.
Therefore, to gauge whether the given data displayed in Table~\ref{tab}
is consistent with the assumed homogeneity displayed
in Table~\ref{homo}, we do the following:
\begin{enumerate}
\item We generate $s$ sets of draws,
with the $k$th set consisting of $n_{\centerdot,k}$ independent and identically
distributed draws from the probability distribution $(p_1, p_2, \dots, p_r)$,
where $p_1 = n_{1,\centerdot}/n$, $p_2 = n_{2,\centerdot}/n$, \dots, $p_r = n_{r,\centerdot}/n$.
Note that $p_j = (n_{j,\centerdot} \cdot n_{\centerdot,k} / n) / n_{\centerdot,k}$
for $j = 1$,~$2$, \dots, $r$;
these are homogeneous proportions (since $p_1$,~$p_2$, \dots, $p_r$
are the same for every column index $k$).
\item For each of the $s$ sets of draws --- say the $k$th set ---
we define $N_{j,k}$ to be the number of draws falling in the $j$th row,
for $j = 1$,~$2$, \dots, $r$.
\item We calculate the probability $P$ that the discrepancy
between the simulated counts $N_{j,k}$ and the model
$N_{j,\centerdot} \cdot N_{\centerdot,k} / n$ is greater than or equal to the discrepancy
between the observed counts $n_{j,k}$ and the assumed
$n_{j,\centerdot} \cdot n_{\centerdot,k} / n$. When calculating this probability,
we view $N_{j,k}$ and $N_{j,\centerdot}$ as random,
while viewing all other numbers as fixed.
Please note that, by construction,
$N_{\centerdot,k} = n_{\centerdot,k}$ for $k = 1$,~$2$, \dots, $s$.
\end{enumerate}
The number $P$ defined in Step~3 is known as the (exact) P-value.
Given the P-value $P$, we can have $100(1-P)\%$ confidence
that the observed draws are not consistent
with assuming the homogeneity displayed in Table~\ref{homo}.
See Section~3 of~\cite{perkins-tygert-ward3} for further discussion
of P-values and their interpretation;
Section~3 of~\cite{perkins-tygert-ward3} details subtleties involved
in the definition and interpretation of these P-values.
The definition above of the P-value $P$ requires a metric
for measuring the discrepancies. The canonical choices are $\chi^2$
and the log--likelihood-ratio $G^2$:
\begin{equation}
\chi^2 = \sum_{j=1}^r \sum_{k=1}^s
\frac{(n_{j,k} - (n_{j,\centerdot} \cdot n_{\centerdot,k}/n))^2}{n_{j,\centerdot} \cdot n_{\centerdot,k}/n}
\end{equation}
\begin{equation}
X^2 = \sum_{j=1}^r \sum_{k=1}^s
\frac{(N_{j,k} - (N_{j,\centerdot} \cdot N_{\centerdot,k}/n))^2}{N_{j,\centerdot} \cdot N_{\centerdot,k}/n}
\end{equation}
\begin{equation}
\label{pchi2}
P_{\chi^2} = {\rm Prob}\{X^2 \ge \chi^2\}
\end{equation}
\nopagebreak
\begin{equation}
g^2 = 2 \sum_{j=1}^r \sum_{k=1}^s n_{j,k}
\cdot \ln\left(\frac{n_{j,k}}{n_{j,\centerdot} \cdot n_{\centerdot,k}/n}\right)
\end{equation}
\begin{equation}
G^2 = 2 \sum_{j=1}^r \sum_{k=1}^s N_{j,k}
\cdot \ln\left(\frac{N_{j,k}}{N_{j,\centerdot} \cdot N_{\centerdot,k}/n}\right)
\end{equation}
\begin{equation}
\label{pg2}
P_{g^2} = {\rm Prob}\{G^2 \ge g^2\}
\end{equation}
\noindent Other possibilities include the Hellinger (or Freeman-Tukey) distance
and the Frobenius (or Hilbert-Schmidt or Euclidean) distance:
\begin{equation}
h^2 = 4 \sum_{j=1}^r \sum_{k=1}^s
(\sqrt{n_{j,k}} - \sqrt{n_{j,\centerdot} \cdot n_{\centerdot,k}/n})^2
\end{equation}
\begin{equation}
H^2 = 4 \sum_{j=1}^r \sum_{k=1}^s
(\sqrt{N_{j,k}} - \sqrt{N_{j,\centerdot} \cdot N_{\centerdot,k}/n})^2
\end{equation}
\begin{equation}
\label{ph2}
P_{h^2} = {\rm Prob}\{H^2 \ge h^2\}
\end{equation}
\begin{equation}
\label{deff2}
f^2 = \sum_{j=1}^r \sum_{k=1}^s (n_{j,k} - (n_{j,\centerdot} \cdot n_{\centerdot,k}/n))^2
\end{equation}
\begin{equation}
F^2 = \sum_{j=1}^r \sum_{k=1}^s (N_{j,k} - (N_{j,\centerdot} \cdot N_{\centerdot,k}/n))^2
\end{equation}
\begin{equation}
\label{pf2}
P_{f^2} = {\rm Prob}\{F^2 \ge f^2\}
\end{equation}
\noindent When taking probabilities in~(\ref{pchi2}), (\ref{pg2}),
(\ref{ph2}), and~(\ref{pf2}), we view the uppercase $X^2$, $G^2$, $H^2$,
and $F^2$ as random variables, while viewing the lowercase $\chi^2$, $g^2$,
$h^2$, and $f^2$ as fixed numbers.
As discussed, for example, by~\cite{rao}, $X^2$, $G^2$, and $H^2$ all converge
to the same distribution in the limit of large numbers of draws ---
$X^2$, $G^2$, and $H^2$ are the best-known members
of the Cressie-Read power-divergence family.
$F^2$ is not a member of the Cressie-Read power-divergence family
and does not necessarily converge to the same distribution
as $X^2$, $G^2$, and $H^2$.
\cite{perkins-tygert-ward3} illustrated the many advantages of $F^2$
when neither the row totals nor the column totals are fixed;
the present paper illustrates the advantages when the column totals are fixed.
However, $F^2$ is not uniformly more powerful than the classical statistics.
We recommend using both $F^2$ and a classical statistic such as $G^2$.
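For concreteness, the following C++ sketch computes the four observed discrepancies defined above from an $r \times s$ table; it is a minimal illustration that assumes all row and column totals are positive, with entries $n_{j,k} = 0$ contributing nothing to $g^2$ (the usual $0 \cdot \ln 0 = 0$ convention).
\begin{verbatim}
#include <cmath>
#include <vector>

struct Discrepancies { double chi2, g2, h2, f2; };

// Observed discrepancies chi^2, g^2, h^2, f^2 between an r x s table
// and the model of homogeneous proportions; assumes every row and
// column total is positive.
Discrepancies statistics(const std::vector<std::vector<double>>& tab) {
  int r = (int)tab.size(), s = (int)tab[0].size();
  std::vector<double> row(r, 0.0), col(s, 0.0);
  double n = 0.0;
  for (int j = 0; j < r; ++j)
    for (int k = 0; k < s; ++k) {
      row[j] += tab[j][k]; col[k] += tab[j][k]; n += tab[j][k];
    }
  Discrepancies d = {0.0, 0.0, 0.0, 0.0};
  for (int j = 0; j < r; ++j)
    for (int k = 0; k < s; ++k) {
      double e = row[j] * col[k] / n;           // model entry
      double diff = tab[j][k] - e;
      d.chi2 += diff * diff / e;
      if (tab[j][k] > 0.0)                      // 0 * log 0 = 0
        d.g2 += 2.0 * tab[j][k] * std::log(tab[j][k] / e);
      double sq = std::sqrt(tab[j][k]) - std::sqrt(e);
      d.h2 += 4.0 * sq * sq;
      d.f2 += diff * diff;
    }
  return d;
}
\end{verbatim}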
In the sequel, Section~\ref{computation} summarizes an algorithm
for computing the P-values defined above.
Section~\ref{dataa} analyzes several data sets.
Section~\ref{conclusion} draws some conclusions.
\section{Computation of P-values}
\label{computation}
The definitions of the P-values in~(\ref{pchi2}), (\ref{pg2}), (\ref{ph2}),
and~(\ref{pf2}) involve the probabilities of certain events.
In the present paper, we compute these probabilities
via Monte-Carlo simulations with guaranteed error bounds.
Specifically, we conduct a large number $m$ of simulations;
in each simulation --- say the $\ell$th ---
we perform the following steps (using the data of Table~\ref{tab}):
\begin{enumerate}
\item We generate $s$ sets of draws,
with the $k$th set consisting of $n_{\centerdot,k}$ independent and identically
distributed draws from the probability distribution $(p_1, p_2, \dots, p_r)$,
where $p_1 = n_{1,\centerdot}/n$, $p_2 = n_{2,\centerdot}/n$, \dots, $p_r = n_{r,\centerdot}/n$.
Note that $p_j = (n_{j,\centerdot} \cdot n_{\centerdot,k} / n) / n_{\centerdot,k}$
for $j = 1$,~$2$, \dots, $r$;
these are homogeneous proportions (since $p_1$,~$p_2$, \dots, $p_r$
are the same for every column index $k$).
Furthermore, the underlying distribution of the draws does not depend
on $\ell$.
\item For each of the $s$ sets of draws --- say the $k$th set ---
we define $n^{(\ell)}_{j,k}$ to be the number of draws falling
in the $j$th row, for $j = 1$,~$2$, \dots, $r$.
\item We calculate the discrepancy $f^2_{(\ell)}$
between the simulated counts $n^{(\ell)}_{j,k}$ and the model
$n^{(\ell)}_{j,\centerdot} \cdot n^{(\ell)}_{\centerdot,k} / n$, that is,
\begin{equation}
f_{(\ell)}^2 = \sum_{j=1}^r \sum_{k=1}^s
(n^{(\ell)}_{j,k} - (n^{(\ell)}_{j,\centerdot} \cdot n^{(\ell)}_{\centerdot,k}/n))^2.
\end{equation}
\end{enumerate}
An estimate of the P-value $P_{f^2}$ is the fraction
of $f_{(1)}^2$,~$f_{(2)}^2$, \dots, $f_{(m)}^2$ which are greater than
or equal to $f^2$ defined in~(\ref{deff2}).
As discussed in Section~3 of~\cite{perkins-tygert-ward3},
the standard error of the estimate is $\sqrt{P_{f^2}(1-P_{f^2})/m}$,
where $m$ is the number of simulations.
Needless to say, we can compute the P-values for $\chi^2$, $g^2$, and $h^2$
via similar procedures, with the same error bounds.
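A self-contained C++ sketch of this Monte-Carlo estimate for $P_{f^2}$ follows; it is an illustration of the three steps above, with a standard library generator standing in for the generator of the remark below, and with the column totals held fixed as the procedure prescribes.
\begin{verbatim}
#include <cmath>
#include <random>
#include <vector>

// Monte-Carlo estimate of P_{f^2} for an r x s table 'tab' with m
// simulations, per steps 1-3 above: column totals stay fixed, and the
// rows of each column are drawn i.i.d. with p_j = (row total j)/n.
double estimatePf2(const std::vector<std::vector<double>>& tab,
                   long m, unsigned seed = 1) {
  int r = (int)tab.size(), s = (int)tab[0].size();
  std::vector<double> row(r, 0.0), col(s, 0.0);
  double n = 0.0;
  for (int j = 0; j < r; ++j)
    for (int k = 0; k < s; ++k) {
      row[j] += tab[j][k]; col[k] += tab[j][k]; n += tab[j][k];
    }
  // f^2 of a table, with column totals fixed at col[] and the
  // row totals recomputed from the table itself.
  auto f2of = [&](const std::vector<std::vector<double>>& t) {
    std::vector<double> tr(r, 0.0);
    for (int j = 0; j < r; ++j)
      for (int k = 0; k < s; ++k) tr[j] += t[j][k];
    double f2 = 0.0;
    for (int j = 0; j < r; ++j)
      for (int k = 0; k < s; ++k) {
        double diff = t[j][k] - tr[j] * col[k] / n;
        f2 += diff * diff;
      }
    return f2;
  };
  const double f2obs = f2of(tab);
  std::mt19937_64 gen(seed);
  std::discrete_distribution<int> pick(row.begin(), row.end());
  long hits = 0;
  for (long l = 0; l < m; ++l) {
    std::vector<std::vector<double>> sim(r, std::vector<double>(s, 0.0));
    for (int k = 0; k < s; ++k)                 // fixed column totals
      for (long d = 0; d < (long)col[k]; ++d) sim[pick(gen)][k] += 1.0;
    if (f2of(sim) >= f2obs) ++hits;             // step 3: exceedances
  }
  return (double)hits / (double)m;  // standard error sqrt(P(1-P)/m)
}
\end{verbatim}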
\begin{remark}
For all computations reported in the present paper,
we generated random numbers via the C programming language procedure
given on page~9 of~\cite{marsaglia},
implementing the recommended complementary multiply with carry; a sketch of such a generator follows below.
\end{remark}
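The following C++ sketch shows a lag-4096 complementary multiply-with-carry generator in the style popularized by Marsaglia; the parameters here (lag, multiplier, initial carry) follow his widely circulated CMWC4096 variant and are illustrative, so they need not match the cited procedure exactly.
\begin{verbatim}
#include <cstdint>

static uint32_t Q[4096];         // seed with random 32-bit values
static uint32_t carry = 362436;  // initial carry, below the multiplier

uint32_t cmwc4096() {
  static uint32_t i = 4095;
  const uint64_t a = 18782ULL;   // multiplier
  i = (i + 1) & 4095;            // cyclic lag-4096 state index
  uint64_t t = a * Q[i] + carry;
  carry = (uint32_t)(t >> 32);
  uint32_t x = (uint32_t)t + carry;
  if (x < carry) { ++x; ++carry; }
  return Q[i] = 0xfffffffeu - x; // "complementary" step
}
\end{verbatim}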
\section{Data analysis}
\label{dataa}
To compare the performance of the various metrics for measuring
the discrepancies between observed and simulated data,
we analyze several data sets.
Using the procedure of Section~\ref{computation},
we conduct $m =$ 4,000,000 Monte-Carlo simulations per P-value,
for each of the examples presented below.
The standard error of the resulting estimate for the P-value $P$
is then $\sqrt{P(1-P)}/2000$; see Section~3 of~\cite{perkins-tygert-ward3}.
Before reporting the P-values associated with the data sets,
we make two remarks concerning their interpretation:
\begin{remark}
A significance test can only indicate that observed data
{\it cannot} be reasonably assumed to have arisen from the model
of homogeneous proportions; a significance test cannot prove
that the observed data {\it can} be reasonably assumed to have arisen
from the model of homogeneity.
Thus, aside from considerations of multiple testing,
if any statistic strongly signals that the data cannot be reasonably assumed
to have arisen from the model of homogeneity, then we must reject
(or at least question) the model --- irrespective of any large P-values
for other statistics. For instance, if the P-value for the Frobenius distance
$f^2$ is very small, then we should not accept the model of homogeneity,
not even if the P-values for $\chi^2$, the log--likelihood-ratio $g^2$,
and the Freeman-Tukey/Hellinger-distance $h^2$ are large.
\end{remark}
\begin{remark}
\label{nll}
The term ``negative log-likelihood'' used in the present section refers
to the statistic that is simply the negative of the logarithm
of the likelihood.
The negative log-likelihood is the same statistic used
in the generalization of Fisher's exact test
discussed by~\cite{guo-thompson}; unlike the log--likelihood-ratio $G^2$,
this statistic involves only one likelihood, not the ratio of two.
We mention the negative log-likelihood just to facilitate comparisons;
we are not asserting that the likelihood
on its own (rather than in a ratio) is a good gauge
of the relative sizes of deviations from a model.
\end{remark}
The $11 \times 2$ Table~\ref{Danish} displays the data for our first example,
which has 22 entries in all.
Table~\ref{Danishh} displays the model of homogeneous proportions
for Table~\ref{Danish}.
The P-values for Table~\ref{Danish} for the assumption that Table~\ref{Danishh}
gives the correct underlying distribution are
\medskip
\centerline{
\begin{tabular}{rl}
$\chi^2$ ($X^2$): & .0868 \\
log--likelihood-ratio ($G^2$): & .0906 \\
Freeman-Tukey/Hellinger ($H^2$): & .0959 \\
negative log-likelihood: & .0905 \\
Frobenius ($F^2$): & .00838
\end{tabular}
}\medskip
\noindent Please note that the P-value for the Frobenius distance
is over an order of magnitude smaller than the P-values
for the classical statistics.
The $7 \times 3$ Table~\ref{mania} displays the data for our second example,
which has 21 entries in all.
Table~\ref{maniah} displays the model of homogeneous proportions
for Table~\ref{mania}.
The P-values for Table~\ref{mania} for the assumption that Table~\ref{maniah}
gives the correct underlying distribution are
\medskip
\centerline{
\begin{tabular}{rl}
$\chi^2$ ($X^2$): & .145 \\
log--likelihood-ratio ($G^2$): & .292 \\
Freeman-Tukey/Hellinger ($H^2$): & .493 \\
negative log-likelihood: & .132 \\
Frobenius ($F^2$): & .0286
\end{tabular}
}\medskip
\noindent Please note that the P-value for the Frobenius distance is
over four times smaller than the P-values for the classical statistics.
The $9 \times 2$ Table~\ref{Republican} displays the data
for our third example, which has 18 entries in all.
Table~\ref{Republicanh} displays the model of homogeneous proportions
for Table~\ref{Republican}.
The P-values for Table~\ref{Republican} for the assumption
that Table~\ref{Republicanh} gives the correct underlying distribution are
\medskip
\centerline{
\begin{tabular}{rl}
$\chi^2$ ($X^2$): & .123 \\
log--likelihood-ratio ($G^2$): & .138 \\
Freeman-Tukey/Hellinger ($H^2$): & .157 \\
negative log-likelihood: & .114 \\
Frobenius ($F^2$): & .0344
\end{tabular}
}\medskip
\noindent Please note that the P-value for the Frobenius distance is
over three times smaller than the P-values for the classical statistics.
The $5 \times 3$ Table~\ref{mania2} displays the data for our final example,
which has 15 entries in all.
Table~\ref{mania2h} displays the model of homogeneous proportions
for Table~\ref{mania2}.
The P-values for Table~\ref{mania2} for the assumption that Table~\ref{mania2h}
gives the correct underlying distribution are
\medskip
\centerline{
\begin{tabular}{rl}
$\chi^2$ ($X^2$): & .276 \\
log--likelihood-ratio ($G^2$): & .171 \\
Freeman-Tukey/Hellinger ($H^2$): & .0794 \\
negative log-likelihood: & .235 \\
Frobenius ($F^2$): & .199
\end{tabular}
}\medskip
\noindent In this example, none of the statistics produces
a very small P-value; the smallest arises
from the Freeman-Tukey/Hellinger distance in this case.
\begin{remark}
Appropriate binning (or rebinning) to uniformize the frequencies associated
with the entries in the contingency-tables/cross-tabulations
can mitigate the problem with the classical statistics.
Yet rebinning is a black art that is liable to improperly influence the result
of a significance test, and the usual data-dependent rebinning calls
for Monte-Carlo simulations to calculate P-values accurately anyway.
Rebinning always requires careful extra work.
A principal advantage of the Frobenius distance
is that it does not require any rebinning; indeed, the Frobenius distance
is most powerful without any rebinning.
Note also that optimally rebinning data such as that displayed
in Table~\ref{Danish} can be very challenging.
\end{remark}
\begin{table}[p]
\caption{Results of polls in June 1983 for Danish parliamentary elections,
from Chapter~4 of~\cite{andersen}}
\label{Danish}
\begin{center}
\begin{tabular}{lrrrrrrrr}
Party &&& Poll 1 & &&& Poll 2 & \\\hline
A &&& 416 & (33.1\%) &&& 268 & (38.9\%) \\
B &&& 45 & (3.6\%) &&& 22 & (3.2\%) \\
C &&& 338 & (26.9\%) &&& 160 & (23.2\%) \\
E &&& 13 & (1.0\%) &&& 6 & (0.9\%) \\
F &&& 131 & (10.4\%) &&& 66 & (9.6\%) \\
K &&& 18 & (1.4\%) &&& 10 & (1.5\%) \\
M &&& 47 & (3.7\%) &&& 16 & (2.3\%) \\
Q &&& 20 & (1.6\%) &&& 8 & (1.2\%) \\
V &&& 129 & (10.3\%) &&& 92 & (13.4\%) \\
Y &&& 22 & (1.8\%) &&& 9 & (1.3\%) \\
Z &&& 76 & (6.1\%) &&& 32 & (4.6\%) \\\hline
All &&& 1255 & (100.0\%) &&& 689 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The model of homogeneous proportions for Table~\ref{Danish}}
\label{Danishh}
\begin{center}
\begin{tabular}{lrrrrrrrr}
Party &&& Poll 1 & &&& Poll 2 & \\\hline
A &&& 441.6 & (35.2\%) &&& 242.4 & (35.2\%) \\
B &&& 43.3 & (3.4\%) &&& 23.7 & (3.4\%) \\
C &&& 321.5 & (25.6\%) &&& 176.5 & (25.6\%) \\
E &&& 12.3 & (1.0\%) &&& 6.7 & (1.0\%) \\
F &&& 127.2 & (10.1\%) &&& 69.8 & (10.1\%) \\
K &&& 18.1 & (1.4\%) &&& 9.9 & (1.4\%) \\
M &&& 40.7 & (3.2\%) &&& 22.3 & (3.2\%) \\
Q &&& 18.1 & (1.4\%) &&& 9.9 & (1.4\%) \\
V &&& 142.7 & (11.4\%) &&& 78.3 & (11.4\%) \\
Y &&& 20.0 & (1.6\%) &&& 11.0 & (1.6\%) \\
Z &&& 69.7 & (5.6\%) &&& 38.3 & (5.6\%) \\\hline
All &&& 1255.0 & (100.0\%) &&& 689.0 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Differences between the entries of Table~\ref{Danish}
and the corresponding entries of Table~\ref{Danishh}}
\label{Danishhd}
\begin{center}
\begin{tabular}{lrr}
Party & Poll 1 & Poll 2 \\\hline
A & $-25.6$ & $ 25.6$ \\
B & $ 1.7$ & $ -1.7$ \\
C & $ 16.5$ & $-16.5$ \\
E & $ 0.7$ & $ -0.7$ \\
F & $ 3.8$ & $ -3.8$ \\
K & $ -0.1$ & $ 0.1$ \\
M & $ 6.3$ & $ -6.3$ \\
Q & $ 1.9$ & $ -1.9$ \\
V & $-13.7$ & $ 13.7$ \\
Y & $ 2.0$ & $ -2.0$ \\
Z & $ 6.3$ & $ -6.3$ \\\hline
All & $ 0.0$ & $ 0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The entries of Table~\ref{Danishhd} divided by the square roots
of the corresponding entries of Table~\ref{Danishh}}
\label{Danishhdn}
\begin{center}
\begin{tabular}{lrr}
Party & Poll 1 & Poll 2 \\\hline
A & $-1.2$ & $ 1.6$ \\
B & $ 0.3$ & $-0.4$ \\
C & $ 0.9$ & $-1.2$ \\
E & $ 0.2$ & $-0.3$ \\
F & $ 0.3$ & $-0.5$ \\
K & $-0.0$ & $ 0.0$ \\
M & $ 1.0$ & $-1.3$ \\
Q & $ 0.5$ & $-0.6$ \\
V & $-1.1$ & $ 1.5$ \\
Y & $ 0.4$ & $-0.6$ \\
Z & $ 0.8$ & $-1.0$ \\\hline
All & $ 0.0$ & $ 0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Reasons for (or absence of) premature termination of the treatment
of maniacal patients in three groups from~\cite{bowden-et_al}
(the three groups are those treated with divalproex,
those treated with lithium, and those ``treated'' with a placebo)}
\label{mania}
\begin{center}
\begin{tabular}{lrrrrrrrr}
Reason & Divalproex & && Lithium & && Placebo & \\\hline
Lack of efficacy & 21 & (30.4\%) && 12 & (33.3\%) && 38 & (51.4\%) \\
Intolerance & 4 & (5.8\%) && 4 & (11.1\%) && 2 & (2.7\%) \\
Recovered & 3 & (4.3\%) && 2 & (5.6\%) && 2 & (2.7\%) \\
Noncompliance & 1 & (1.4\%) && 1 & (2.8\%) && 3 & (4.1\%) \\
Another illness & 0 & (0.0\%) && 1 & (2.8\%) && 0 & (0.0\%) \\
Administration & 4 & (5.8\%) && 2 & (5.6\%) && 2 & (2.7\%) \\
Not terminated & 36 & (52.2\%) && 14 & (38.9\%) && 27 & (36.5\%) \\\hline
All & 69 & (100.0\%) && 36 & (100.0\%) && 74 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The model of homogeneous proportions for Table~\ref{mania}}
\label{maniah}
\begin{center}
\begin{tabular}{lrrrrrrrr}
Reason & Divalproex & && Lithium & && Placebo & \\\hline
Lack of efficacy & 27.4 & (39.7\%) && 14.3 & (39.7\%) && 29.4 & (39.7\%) \\
Intolerance & 3.9 & (5.6\%) && 2.0 & (5.6\%) && 4.1 & (5.6\%) \\
Recovered & 2.7 & (3.9\%) && 1.4 & (3.9\%) && 2.9 & (3.9\%) \\
Noncompliance & 1.9 & (2.8\%) && 1.0 & (2.8\%) && 2.1 & (2.8\%) \\
Another illness & 0.4 & (0.6\%) && 0.2 & (0.6\%) && 0.4 & (0.6\%) \\
Administration & 3.1 & (4.5\%) && 1.6 & (4.5\%) && 3.3 & (4.5\%) \\
Not terminated & 29.7 & (43.0\%) && 15.5 & (43.0\%) && 31.8 & (43.0\%) \\
\hline
All & 69.0 & (100.0\%) && 36.0 & (100.0\%) && 74.0 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Differences between the entries of Table~\ref{mania}
and the corresponding entries of Table~\ref{maniah}}
\label{maniahd}
\begin{center}
\begin{tabular}{lrrr}
Reason & Divalproex & Lithium & Placebo \\\hline
Lack of efficacy & $-6.4$ & $-2.3$ & $8.6$ \\
Intolerance & $0.1$ & $2.0$ & $-2.1$ \\
Recovered & $0.3$ & $0.6$ & $-0.9$ \\
Noncompliance & $-0.9$ & $0.0$ & $0.9$ \\
Another illness & $-0.4$ & $0.8$ & $-0.4$ \\
Administration & $0.9$ & $0.4$ & $-1.3$ \\
Not terminated & $6.3$ & $-1.5$ & $-4.8$ \\\hline
All & $0.0$ & $0.0$ & $0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The entries of Table~\ref{maniahd} divided by the square roots
of the corresponding entries of Table~\ref{maniah}}
\label{maniahdn}
\begin{center}
\begin{tabular}{lrrr}
Reason & Divalproex & Lithium & Placebo \\\hline
Lack of efficacy & $-1.2$ & $-0.6$ & $ 1.6$ \\
Intolerance & $ 0.1$ & $ 1.4$ & $-1.0$ \\
Recovered & $ 0.2$ & $ 0.5$ & $-0.5$ \\
Noncompliance & $-0.7$ & $ 0.0$ & $ 0.6$ \\
Another illness & $-0.6$ & $ 1.8$ & $-0.6$ \\
Administration & $ 0.5$ & $ 0.3$ & $-0.7$ \\
Not terminated & $ 1.2$ & $-0.4$ & $-0.9$ \\\hline
All & $ 0.0$ & $ 0.0$ & $ 0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Results for the 2012 Republican U.S. presidential nomination,
from a CBS News poll of November 6--10, 2011 (released November 11, 2011)
and from a Pew Research Center poll of November 9--11, 2011
(released November 17, 2011), as reconstructed from percentages
rounded to the nearest whole numbers (the original counts were not reported)
for Republican primary voters}
\label{Republican}
\begin{center}
\begin{tabular}{lrrrrrrrrrr}
Candidate &&&& CBS & &&&& Pew & \\\hline
Michele Bachmann &&&& 15 & (4.6\%) &&&& 21 & (5.1\%) \\
Herman Cain &&&& 69 & (21.2\%) &&&& 103 & (25.0\%) \\
Newt Gingrich &&&& 57 & (17.5\%) &&&& 66 & (16.0\%) \\
Jon Huntsman &&&& 4 & (1.2\%) &&&& 4 & (1.0\%) \\
Ron Paul &&&& 19 & (5.8\%) &&&& 33 & (8.0\%) \\
Rick Perry &&&& 31 & (9.5\%) &&&& 37 & (9.0\%) \\
Mitt Romney &&&& 57 & (17.5\%) &&&& 91 & (22.1\%) \\
Rick Santorum &&&& 8 & (2.5\%) &&&& 8 & (1.9\%) \\
Do not know &&&& 65 & (20.0\%) &&&& 49 & (11.9\%) \\\hline
All &&&& 325 & (100.0\%) &&&& 412 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The model of homogeneous proportions for Table~\ref{Republican}}
\label{Republicanh}
\begin{center}
\begin{tabular}{lrrrrrrrrrr}
Candidate &&&& CBS & &&&& Pew & \\\hline
Michele Bachmann &&&& 15.9 & (4.9\%) &&&& 20.1 & (4.9\%) \\
Herman Cain &&&& 75.8 & (23.3\%) &&&& 96.2 & (23.3\%) \\
Newt Gingrich &&&& 54.2 & (16.7\%) &&&& 68.8 & (16.7\%) \\
Jon Huntsman &&&& 3.5 & (1.1\%) &&&& 4.5 & (1.1\%) \\
Ron Paul &&&& 22.9 & (7.1\%) &&&& 29.1 & (7.1\%) \\
Rick Perry &&&& 30.0 & (9.2\%) &&&& 38.0 & (9.2\%) \\
Mitt Romney &&&& 65.3 & (20.1\%) &&&& 82.7 & (20.1\%) \\
Rick Santorum &&&& 7.1 & (2.2\%) &&&& 8.9 & (2.2\%) \\
Do not know &&&& 50.3 & (15.5\%) &&&& 63.7 & (15.5\%) \\\hline
All &&&& 325.0 & (100.0\%) &&&& 412.0 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Differences between the entries of Table~\ref{Republican}
and the corresponding entries of Table~\ref{Republicanh}}
\label{Republicanhd}
\begin{center}
\begin{tabular}{lrr}
Candidate & CBS & Pew \\\hline
Michele Bachmann & $-0.9$ & $0.9$ \\
Herman Cain & $-6.8$ & $6.8$ \\
Newt Gingrich & $2.8$ & $-2.8$ \\
Jon Huntsman & $0.5$ & $-0.5$ \\
Ron Paul & $-3.9$ & $3.9$ \\
Rick Perry & $1.0$ & $-1.0$ \\
Mitt Romney & $-8.3$ & $8.3$ \\
Rick Santorum & $0.9$ & $-0.9$ \\
Do not know & $14.7$ & $-14.7$ \\\hline
All & $0.0$ & $0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The entries of Table~\ref{Republicanhd} divided by the square roots
of the corresponding entries of Table~\ref{Republicanh}}
\label{Republicanhdn}
\begin{center}
\begin{tabular}{lrr}
Candidate & CBS & Pew \\\hline
Michele Bachmann & $-0.2$ & $ 0.2$ \\
Herman Cain & $-0.8$ & $ 0.7$ \\
Newt Gingrich & $ 0.4$ & $-0.3$ \\
Jon Huntsman & $ 0.3$ & $-0.2$ \\
Ron Paul & $-0.8$ & $ 0.7$ \\
Rick Perry & $ 0.2$ & $-0.2$ \\
Mitt Romney & $-1.0$ & $ 0.9$ \\
Rick Santorum & $ 0.4$ & $-0.3$ \\
Do not know & $ 2.1$ & $-1.8$ \\\hline
All & $ 0.0$ & $ 0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Reactions to prior treatment with lithium
(when treated before with lithium)
of maniacal patients in three groups from~\cite{bowden-et_al}
(the three groups are those treated with divalproex,
those treated with lithium, and those ``treated'' with a placebo)}
\label{mania2}
\vspace{.5em}
\begin{center}
\begin{tabular}{lrrrrrrrr}
Reaction
& Divalproex && & Lithium && & Placebo & \\\\\hline\\
\parbox[c]{1.2in}{Effective and\\tolerated}
& 22 & (31.9\%) && 16 & (44.4\%) && 19 & (25.7\%) \\\\
\parbox[c]{1.2in}{Effective but\\not tolerated}
& 7 & (10.1\%) && 0 & (0.0\%) && 6 & (8.1\%) \\\\
\parbox[c]{1.2in}{Ineffective but\\tolerated}
& 19 & (27.5\%) && 11 & (30.6\%) && 31 & (41.9\%) \\\\
\parbox[c]{1.2in}{Ineffective and\\not tolerated}
& 6 & (8.7\%) && 4 & (11.1\%) && 5 & (6.8\%) \\\\
\parbox[c]{1.2in}{No prior lithium\\treatment}
& 15 & (21.7\%) && 5 & (13.9\%) && 13 & (17.6\%) \\\\\hline\\
All
& 69 & (100.0\%) && 36 & (100.0\%) && 74 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The model of homogeneous proportions for Table~\ref{mania2}}
\label{mania2h}
\vspace{.5em}
\begin{center}
\begin{tabular}{lrrrrrrrr}
Reaction
& Divalproex && & Lithium && & Placebo & \\\\\hline\\
\parbox[c]{1.2in}{Effective and\\tolerated}
& 22.0 & (31.8\%) && 11.5 & (31.8\%) && 23.6 & (31.8\%) \\\\
\parbox[c]{1.2in}{Effective but\\not tolerated}
& 5.0 & (7.3\%) && 2.6 & (7.3\%) && 5.4 & (7.3\%) \\\\
\parbox[c]{1.2in}{Ineffective but\\tolerated}
& 23.5 & (34.1\%) && 12.3 & (34.1\%) && 25.2 & (34.1\%) \\\\
\parbox[c]{1.2in}{Ineffective and\\not tolerated}
& 5.8 & (8.4\%) && 3.0 & (8.4\%) && 6.2 & (8.4\%) \\\\
\parbox[c]{1.2in}{No prior lithium\\treatment}
& 12.7 & (18.4\%) && 6.6 & (18.4\%) && 13.6 & (18.4\%) \\\\\hline\\
All
& 69.0 & (100.0\%) && 36.0 & (100.0\%) && 74.0 & (100.0\%)
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Differences between the entries of Table~\ref{mania2}
and the corresponding entries of Table~\ref{mania2h}}
\label{mania2hd}
\begin{center}
\begin{tabular}{lrrr}
Reaction & Divalproex & Lithium & Placebo \\\hline
Effective and tolerated & $0.0$ & $4.5$ & $-4.6$ \\
Effective but not tolerated & $2.0$ & $-2.6$ & $0.6$ \\
Ineffective but tolerated & $-4.5$ & $-1.3$ & $5.8$ \\
Ineffective and not tolerated & $0.2$ & $1.0$ & $-1.2$ \\
No prior lithium treatment & $2.3$ & $-1.6$ & $-0.6$ \\\hline
All & $0.0$ & $0.0$ & $0.0$
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The entries of Table~\ref{mania2hd} divided by the square roots
of the corresponding entries of Table~\ref{mania2h}}
\label{mania2hdn}
\begin{center}
\begin{tabular}{lrrr}
Reaction & Divalproex & Lithium & Placebo \\\hline
Effective and tolerated & $ 0.0$ & $ 1.3$ & $-0.9$ \\
Effective but not tolerated & $ 0.9$ & $-1.6$ & $ 0.3$ \\
Ineffective but tolerated & $-0.9$ & $-0.4$ & $ 1.2$ \\
Ineffective and not tolerated & $ 0.1$ & $ 0.6$ & $-0.5$ \\
No prior lithium treatment & $ 0.6$ & $-0.6$ & $-0.2$ \\\hline
All & $ 0.0$ & $ 0.0$ & $ 0.0$
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
\label{conclusion}
The Frobenius distance is significantly more powerful
than the classical statistics for gauging the consistency
of observed data with the assumption of homogeneity
in many of the examples of the present paper.
This may or may not be typical of most applications;
actually, we suspect that our last example ---
in which all the statistics perform similarly --- is the most representative.
Even so, both the present paper and the applications
of~\cite{perkins-tygert-ward3} illustrate that
there are many important circumstances in which the Frobenius distance
is much more powerful than the classical statistics.
\section*{Acknowledgements}
We would like to thank Alex Barnett, G\'erard Ben Arous, James Berger,
Tony Cai, Sourav Chatterjee, Ronald Raphael Coifman, Ingrid Daubechies,
Jianqing Fan, Jiayang Gao, Andrew Gelman, Leslie Greengard, Peter W. Jones,
Deborah Mayo, Peter McCullagh, Michael O'Neil, Ron Peled, William Perkins,
William H. Press, Vladimir Rokhlin, Joseph Romano, Gary Simon, Amit Singer,
Michael Stein, Stephen Stigler, Joel Tropp, Rachel Ward, Larry Wasserman,
and Douglas A. Wolfe.
This work was supported in part by a research fellowship
from the Alfred P. Sloan Foundation.
\newpage
\addcontentsline{toc}{section}{\protect\numberline{}References}
\bibliographystyle{asamod.bst}
This paper is devoted to the study of a certain non-stationary Schr\"odinger equation with elliptic potential (see \eqref{Heun} below) which has appeared in dif\/ferent works on quantum integrable models and conformal f\/ield theory \cite{FLNO,Kolb,Ro}. This so-called {\em non-stationary Heun equation}~\cite{LT}, also known as {\em quantum Painlev\'e~VI}~\cite{Nag}, is a generalization of a second-order Fuchsian dif\/ferential equation with four regular singular points known as {\em Heun equation}~\cite{Ron,SL}. Another important special case is the {\em non-stationary Lam\'e equation} \cite{BM,EK}, which is also known as {\em KZB heat equation}\footnote{To be more precise: the special case of the KZB equations corresponding to $\mathfrak{g}=\mathfrak{sl}_2$ and $n=1$.}~\cite{FV}. Our main result is a method to construct a solution of the non-stationary Heun equation which is complete (in a sense explained below) and which is complementary to results obtained by other methods in that it can be used for generic parameter values. We also discuss various interesting special cases of our results. As will be explained, this is part of a research program using kernel functions to solve quantum models of Calogero--Moser--Sutherland (CMS) type; see \cite{HL,ELeCS0,ELJack,ELsigma,ELeCS2}.
\subsection{Background}
There is a long tradition in mathematical physics that regards integrable models and the mathematical theory of special functions as two sides of the same coin. While most textbook examples of such models can be solved using results about special functions known since a long time (they can be found in \cite{WW}, for example), modern developments in conformal f\/ield theory and quantum statistical physics have led to special functions which belong to function classes which are the subject of ongoing research. As an outstanding example we mention the representation theoretic approach to the solution of certain hyperbolic dif\/ferential equations due to Etingof, Frenkel and Kirillov \cite{EFK,EK}, which is a beautiful generalization of classical works on the Gauss hypergeometric equation pioneered by Gelfand and motivated by conformal f\/ield theory. A~complementary approach to this inspired by string theory was developed in works by Felder, Varchenko, and others \cite{FG,FV95,FV}. The non-stationary Lam\'e equation is the simplest non-trivial example to which these results apply. More recently this equation appeared in the construction of exact 4-point correlation functions of the quantum Liouville model~\cite{FLNO}, and in the exact solution of the eight-vertex model \cite{BM}. It was conjectured in~\cite{FLNO} that results about the non-stationary Lam\'e equation should have natural generalizations to the non-stationary Heun equation. This was recently conf\/irmed in important examples by Rosengren~\cite{Ro1,Ro}, who proved and generalized conjectures in~\cite{BM}, and Kolb \cite{Kolb}, who generalized the representation theoretic approach to the non-stationary Lam\'e equation in \cite{EFK,EK} to the Heun case. We also mention another important approach due to Nekrasov and Shatiashvili allowing to construct functions in the Heun class and which is based on supersymmetric gauge theories \cite{NS}; see also \cite{KS} for recent related work. Another related subject is the AGT conjecture which has led to explicit combinatorial expressions for conformal blocks related to the non-stationary Heun equation; see, e.g.,~\cite{Nawata} and references therein.
The Heun equation has received considerable interest as a natural equation def\/ining a function class generalizing the Gauss hypergeometric functions; see, e.g., \cite{Maier,Ron,SL,Smirnov,TakemuraHeun4} and references therein. The examples mentioned in the previous paragraph motivate us to extend this scope: our work is intended as a contribution towards a general theory of special functions def\/ined by the {\em non-stationary} Heun equation. The approach we use is based on a kernel function method developed in a series of papers in order to construct exact eigenfunctions of quantum models of CMS type; see \cite{HL,ELeCS0,ELJack,ELsigma,ELeCS2}. The relation of this to other approaches to the Heun equation based on kernel functions \cite{Erd,LaSl,LW,Novikov,RHeun} is discussed in Section~\ref{secFinal}.
\looseness=-1 To explain our method we recall that a kernel function for a pair of Schr\"odinger opera\-tors~$H(x)$ and $\tilde H(y)$ is an explicitly known function $K(x,y) $ satisfying the functional identity $(H(x)-\tilde H(y) -c)K(x,y)=0$ for some constant $c$; in the examples of interest to us the Schr\"odinger operators are Hamiltonians def\/ining a CMS-type model which can have equal ($H=\tilde H$) or dif\/ferent ($\tilde H\neq H$) parameters, and we write $H(x)$ to indicate that this dif\/ferential operator acts on functions depending on the variable $x$. A basic observation underlying the kernel function method we use is that CMS-type Hamiltonians have eigenfunctions which are easy to construct but uninteresting for applications, and that kernel functions provide a tool to transform such uninteresting eigenfunctions to interesting ones \cite{ELsigma}. It was shown in~\cite{HL} that all classical CMS-type models possess kernel functions which allow the construction of particular series representations of standard eigenfunctions in this way (by {\em classical CMS-type models} we mean those whose eigenfunctions provide natural many-variable generalizations of classical orthogonal polynomials, including the corresponding deformed CMS models in the sense of Chalykh, Feigin, Veselov and Sergeev~\cite{CFV,Sergeev}). We also mention recent related work of Halln\"as and Ruijsenaars~\cite{HR1} constructing eigenfunctions of the CMS model with $1/\sinh^2$-interactions using such a kernel function and which goes beyond the paradigm of polynomial eigenfunctions.
It was found that the elliptic generalizations of the CMS-models, which can be regarded as many-variable generalizations of the Lam\'e and Heun equations, possess kernel functions~\cite{ELrem,ELeCS1,LT} generalizing those in~\cite{HL}, and in~\cite{ELeCS0} one such kernel function was used to construct series representations of eigenfunctions of the elliptic quantum Calogero--Sutherland (eCS) model~\cite{ELeCS2}.
We mention in passing a series solution of the eCS model by Komori and Takemura~\cite{KT} which, while using the same expansion parameter, is dif\/ferent in important details from the result in~\cite{ELeCS0,ELeCS2} (this dif\/ference is analogous to the dif\/ference between perturbative results of Takemura on the Heun equation in~\cite{THeun} and results in the present paper, as discussed in the last paragraph of Section~\ref{secSummary1} below).
It is known that the kernel functions allowing for elliptic generalizations are restricted by a so-called {\em balancing condition} $\kappa=0$ with $\kappa$ a constant depending on the model parameters \cite{KNS,ELrem}.
If this condition is not fulf\/illed one can often f\/ind a {\em generalized kernel function} $K(x,y,\tau)$ satisfying
\begin{gather}\label{Kgen}
\left(\frac{{\rm i}}{\pi} \kappa \frac{\partial}{\partial\tau}+H(x,\tau)-\tilde H(y,\tau) -c(\tau)\right)K(x,y,\tau)=0
\end{gather}
with $\tau$ the half-period ratio of the elliptic functions appearing in the CMS Hamiltonians~$H$ and~$\tilde H$ \cite{ELrem,ELeCS1,LT} (see, e.g., Lemma~\ref{Lemma:kernel} where the $\tau$-dependence of~$H$, $\tilde H$, $c$ and $K$ is suppressed). In the present paper we use a generalized kernel function found in~\cite{LT} to solve the non-stationary Heun equation, following the approach in~\cite{ELeCS2}. The basic observation underlying our approach is that one can use the generalized kernel function in \eqref{Kgen} to transform an eigenfunction of the dif\/ferential operator $\frac{{\rm i}}{\pi} \kappa \frac{\partial}{\partial\tau}+\tilde H(y,\tau)$ to an eigenfunction of $\frac{{\rm i}}{\pi} \kappa \frac{\partial}{\partial\tau}+H(x,\tau)$ \cite{LT}.
\subsection{Summary of results}\label{secSummary1}
The non-stationary Heun equation can be written as\footnote{Here and in the following we often suppress the dependence of functions on the variable $\tau$.}
\begin{gather}\label{Heun}
\left( \frac{{\rm i}}{\pi}\kappa \frac{\partial}{\partial\tau} -\frac{\partial^2}{\partial x^2} +\sum_{\nu=0}^3 g_\nu(g_\nu-1) \wp(x+\omega_\nu)\right)\psi(x)=E\psi(x)
\end{gather}
with $\wp(x)$ the Weierstrass elliptic function with periods $(2\pi,2\pi\tau$) and
\begin{gather}\label{omeganu}
\omega_0= 0,\qquad \omega_1=\pi,\qquad \omega_2= -\pi-\pi\tau,\qquad \omega_3= \pi\tau
\end{gather}
(for the convenience of the reader we collect the def\/initions of $\wp$ and other well-known special functions we need in Appendix~\ref{appSpecialFunctions}). To simplify notation we set $\omega_1=\pi$ here and in most parts of this paper; see Appendix~\ref{appScaling} for how to transform our results to other values of $\omega_1$. The parameters $g_0$, $g_1$, $g_2$, $g_3$ and $\kappa$ can be arbitrary complex numbers for our general results.\footnote{We have to make some restrictions on parameters due to a technical problem referred to as {\em resonances} but, as discussed in Section~\ref{secResonances}, many of these restrictions are irrelevant in practice.} Our aim is to construct functions $\psi(x)\equiv \psi(x,\tau;\{g_\nu\}_{\nu=0}^3,\kappa)$ of two complex variables $x$ and $\tau$, $\Im(\tau)>0$, that satisfy this dif\/ferential equation for some $E\equiv E(\tau;\{g_\nu\}_{\nu=0}^3,\kappa)$; a more precise characterization of our solutions is given in \eqref{solution}--\eqref{P} below. It is important to note that, for $\kappa\neq 0$, $E$ can be transformed to 0, or any other convenient value, by changing the normalization of $\psi(x)$ (this follows from the obvious invariance of \eqref{Heun} under the transformation
\begin{gather} \label{symmetry}
\psi(x)\to C\psi(x),\qquad E \to E + \frac{{\rm i}}{\pi}\kappa \frac1{C}\frac{\partial C}{\partial\tau}
\end{gather}
for arbitrary analytic functions $C$ of $\tau$). However, we sometimes f\/ind it convenient to impose a normalization condition on $\psi(x)$ so that $E$ remains signif\/icant even for $\kappa\neq 0$. Important special cases of \eqref{Heun} include the Heun equation ($\kappa=0$), the non-stationary Lam\'e equation ($g_\nu=g$ independent of $\nu$, or $g_\nu(g_\nu-1)= 0$ for three of the $\nu$'s), and the Lam\'e equation (both specializations). Many of our results are new even for these special cases (to our knowledge).
To explain the nature of our solutions we recall that, in the trigonometric limit $\Im(\tau)\to +\infty$, \eqref{Heun} reduces to the stationary Schr\"odinger equation with the P\"oschl--Teller potential $\propto g_0(g_0-1)/\sin^2(x/2) + g_1(g_1-1)/\cos^2(x/2)$ which has explicitly known solutions equal to Jacobi polynomials up to a common factor $\sin(x/2)^{g_0}\cos(x/2)^{g_1}$ (see \eqref{PT}--\eqref{PTsolution} for details).
The solutions of \eqref{Heun} that we construct are a generalization of this: they are of the form
\begin{gather}
\psi_n(x)= (2q^{1/4})^{-g_0-g_1}\left( \prod_{\nu=0}^3 \theta_{\nu+1}\big(\tfrac{1}{2} x\big)^{g_\nu} \right)\mathcal{P}_n\left(\cos(x) \right),\nonumber\\
E_n = \kappa^2\left(\frac{1}{12} -\frac{\eta_1}{\pi}\right) -\sum_{\nu=0}^3 g_\nu(g_\nu-1)\frac{\eta_1}{\pi} + \mathcal{E}_n\label{solution}
\end{gather}
with $\theta_{\nu+1}(z)$ the Jacobi theta functions, $q=\exp({\rm i}\pi\tau)$ the nome, $\eta_1/\pi$ def\/ined in \eqref{eta1pi}, and
\begin{gather}\label{P}
\mathcal{P}_n(z) = \sum_{\ell=0}^\infty \mathcal{P}^{(\ell)}_n(z) q^\ell ,\qquad \mathcal{E}_n = \sum_{\ell=0}^\infty \mathcal{E}^{(\ell)}_nq^\ell,
\end{gather}
with
\begin{gather}\label{eJacobi}
\mathcal{P}^{(0)}_n(z) = P_n^{\big(g_0-\frac{1}{2},g_1-\frac{1}{2}\big)}(z),\qquad \mathcal{E}^{(0)}_n = \left(n+\frac{g_0+g_1}{2}\right)^2,\qquad n\in\mathbb{N}_0,
\end{gather}
and $P^{\big(g_0-\frac12,g_1-\frac12\big)}_n(z)$ the Jacobi polynomials (see~\eqref{Series}). Our main result provides ef\/f\/icient recursive procedures to compute the functions $\mathcal{P}^{(\ell)}_n(z)$ which, as we show, are polynomials of degree $n+\ell$ in $z$; see Propositions~\ref{prop1} and~\ref{prop2} for two complementary variants of this result. Thus one can regard~$\mathcal{P}_n(z)$ in~\eqref{P} as an elliptic generalization of Jacobi polynomials. It is interesting to note that these elliptic generalizations exist even for negative integers~$n$ if~$q$ is non-zero, but they vanish like~$O(q^{-n})$ for $n<0$ as $q\to 0$. We conjecture that the series in~\eqref{P} is absolutely convergent and converges to an $L^2$-function on $[0,\pi]$ for $|q|\leq q_0$ and some $q_0>0$ depending on parameters (this is known to be true in the Heun case $\kappa=0$ from work by Takemura \cite{THeun}; see also \cite{ELeCS2} for a convergence proof for the Lam\'e case which, as we believe, can be generalized). However, this question is left for future work: for simplicity we treat series like the one in~\eqref{P} as formal power series.
Our solution \eqref{solution}--\eqref{P} of \eqref{Heun} is complete in the sense that the $\psi_n(x)$ provide a complete orthonormal basis in the Hilbert space of $L^2$-functions on $[0,\pi]$ for $g_0+g_1>0$ in the trigonometric case $q=0$ (we believe that this is true even for $q > 0$).
Moreover, we give an ef\/f\/icient recursive procedure to compute the coef\/f\/icients $ \mathcal{P}^{(\ell)}_n(z)$ and $ \mathcal{E}^{(\ell)}_n$ of the power series in~\eqref{P}. We note that, in the Heun case $\kappa=0$, the $E_n$ correspond to the eigenvalues of the $BC_1$ elliptic CMS Hamiltonian discovered by Inozemtsev~\cite{I}. We thus refer to the $E_n$ as {\em generalized eigenvalues} in the following.
It is important to note that, by exploiting the invariance of \eqref{Heun} under the transformation in \eqref{symmetry}, we obtain two complementary variants of results: in the f\/irst variant we impose a~normalization condition on $\psi_n(x)$ such that the generalized eigenvalues $E_n$ are signif\/icant (see Proposition~\ref{prop1} and Theorem~\ref{Thm1}), and in the second we f\/ix $E_n$ by a convenient condition (see Proposition~\ref{prop2} and Theorem~\ref{Thm2}). These two variants of results are complementary to each other in that the second is somewhat simpler but restricted to $\kappa\neq 0$, whereas the f\/irst applies to the case $\kappa=0$ as well. However, as will be discussed after Proposition~\ref{prop2}, the second variant of results implies an interesting representation of the eigenvalues $E_n$ in the limit $\kappa\to 0$.
One important feature of our method is that it provides particular $q$-dependent basis functions $\{f_m(z)\}_{m\in\mathbb{Z}}$ to expand the functions $\mathcal{P}_n(z)$ in; see~\eqref{fn} and~\eqref{Pnseries}. This is useful since these functions $f_m(z)$ take into account much of the complexity of the problem; for example, in special cases the expansion coef\/f\/icients are trivial, and in these cases our method gives explicit integral representations of the solutions $\mathcal{P}_n(z)$ (see Section~\ref{secExplicit}). For general parameter values, we obtain a system of equations for these expansion coef\/f\/icients which, in the trigonometric case $q=0$, can be solved by diagonalizing a triangular matrix and which, for non-zero $q$, can be solved by ef\/f\/icient perturbative algorithms; see Propositions~\ref{prop1} and~\ref{prop2}. An explicit solution of these equations to all orders in $q$ is obtained in Section~\ref{secAllOrders}; see Theorems~\ref{Thm1} and~\ref{Thm2}. We note that results for the elliptic Calogero--Sutherland model corresponding to the ones in Sections~\ref{secTrigonometric}, \ref{secPerturbative} and \ref{secAllOrders} were obtained by one of us in \cite{ELeCS0,ELJack} and~\cite{ELeCS2}, respectively. We also mention that, in the special case $\kappa=0$, we obtain results for the Heun equation in Section~\ref{secAllOrders0} which dif\/fer from the ones by Takemura, who used Jacobi polynomials as the basis to expand the functions $\mathcal{P}_n(z)$~\cite{THeun}. As already mentioned, this dif\/ference is analogous to the dif\/ference between the perturbative results for the eCS model in~\cite{ELeCS0} and the one by Komori and Takemura in \cite{KT}: in the latter work the eigenfunctions are expanded in Jack polynomials, whereas in the former an unconventional basis is used which allows for an explicit solution to all orders~\cite{ELeCS2}.
\subsection{Plan}
Section~\ref{secPreliminaries} contains preliminary material: a summary of notation (Section~\ref{secNotation}), a~review of a~well-known solution of \eqref{Heun} for $\Im(\tau)\to\infty$ in terms of Jacobi polynomials (Section~\ref{secPT}), the def\/inition and properties of our basis functions $f_m(z)$ (Section~\ref{secfm}), and a discussion of a technicality referred to as {\em resonances} (Section~\ref{secResonances}). In Section~\ref{secResults} we present our key result, which is a transformation of the problem to solve \eqref{Heun} into a~dif\/ferential-dif\/ference equation (Proposition~\ref{prop0}), together with a discussion of special cases (Section~\ref{secExplicit}); the proof of this key result is given in Section~\ref{Proofprop0}.
In Section~\ref{secPerturbative} we present two complementary recursive algorithms to solve this dif\/ferential-dif\/ference equation, and Section~\ref{secAllOrders} contains the corresponding explicit solutions to all orders.
We conclude with f\/inal remarks in Section~\ref{secFinal}.
Five appendices contain def\/initions and properties of special functions we use (Appendix~\ref{appSpecialFunctions}), details on how to translate our results for $\omega_1=\pi$ to other values of $\omega_1$ (Appendix~\ref{appScaling}), explicit results from one of our recursive algorithms at low order (Appendix~\ref{appExplicitResults}), derivations of results needed in proofs (Appendix~\ref{appComputations}), and a short discussion of the combinatorial structure of our solution (Appendix~\ref{appCombinatorics}).
\section{Preliminaries}\label{secPreliminaries}
We collect def\/initions and preliminary results that we use.
\subsection{Notation}\label{secNotation}
We use the special functions ($\xi$ and $q$ are complex variables; $|q|<1$)
\begin{gather}
\Theta_1(\xi) \equiv (1-\xi)\prod_{n=1}^\infty\big(1-q^{2n}\xi\big)\big(1-q^{2n}\xi^{-1}\big),\qquad \Theta_2(\xi)\equiv \Theta_1(-\xi),\nonumber\\
\Theta_3(\xi) \equiv \prod_{n=1}^\infty\big(1+q^{2n-1}\xi\big)\big(1+q^{2n-1}\xi^{-1}\big),\qquad \Theta_4(\xi)\equiv \Theta_3(-\xi)\label{Thetanu}
\end{gather}
and
\begin{gather}\label{Theta}
\Theta(z,\xi)\equiv \big(1-2z\xi+\xi^2\big)\prod_{n=1}^\infty\big(1-2q^{2n}\xi z+q^{4n}\xi^2\big)\big(1-2q^{2n}\xi^{-1}z+q^{4n}\xi^{-2}\big),
\end{gather}
which all are closely related to the Jacobi theta functions (see~\eqref{tettet} and~\eqref{tettet1}).
We denote as $\mathbb{N}_0$ and $\mathbb{Z}'$ the sets of non-negative and non-zero integers, respectively. The symbol $\delta(m,n)$ for integers $m$, $n$ denotes the Kronecker delta.
\subsection{Trigonometric limit}\label{secPT}
The non-stationary Heun equation simplif\/ies in the trigonometric case $q=0$ to
\begin{gather}\label{PT}
\left( -\frac{\partial^2}{\partial x^2} + \frac{g_0(g_0-1)}{4\sin^2 \frac{1}{2} x} + \frac{g_1(g_1-1)}{4\cos^2 \frac{1}{2} x} \right)\psi^{(0)}(x)=E^{(0)}\psi^{(0)}(x) ,
\end{gather}
which is known to have solutions
\begin{gather}\label{PTsolution}
\psi_n^{(0)}(x)=\big(\sin \tfrac{1}{2} x\big)^{g_0}\big(\cos \tfrac{1}{2} x\big)^{g_1}P^{\big(g_0-\frac12,g_1-\frac12\big)}_n(\cos x), \qquad
E^{(0)}_n = \left(n+\frac{g_0+g_1}2\right)^2
\end{gather}
with the Jacobi polynomials $P_n^{(\alpha,\beta)}(z)$ in \eqref{Series} (see, e.g., \cite[Table~18.8.1, 2nd line]{Dig10}). We will use a well-known uniqueness result about this solution in the following form.
\begin{Lemma}\label{lemmaUniqueness}
Let $\psi_n^{(0)}(x)$ be a solution of \eqref{PT} of the form
\begin{gather*}
\psi_n^{(0)}(x) = \big(\sin \tfrac{1}{2} x\big)^{g_0}\big(\cos \tfrac{1}{2} x\big)^{g_1}P(\cos x)
\end{gather*}
for some constant $E^{(0)}$, with $P(z)$ a polynomial of degree $n$ such that
\begin{gather}\label{Pnormalization}
P(z) = \frac{(n+g_0+g_1)_n}{2^nn!}z^n+O\big(z^{n-1}\big)
\end{gather}
for some $n\in\mathbb{N}_0$. Then $P(z)=P^{\big(g_0-\frac12,g_1-\frac12\big)}_n(z)$ and $E^{(0)} =E^{(0)}_n$.
\end{Lemma}
The proof is standard and therefore omitted. (Note that the normalization in \eqref{Pnormalization} follows from~\eqref{JacobiNormalization}.)
We will use this result to f\/ix the normalization of our solutions so as to get Jacobi polynomials in the trigonometric case $q=0$.
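As a quick consistency check, the following sketch verif\/ies the eigenvalue equation \eqref{PT} for the solution \eqref{PTsolution} numerically at a sample point (the values of $n$, $g_0$, $g_1$ and the test point are our own choices):
\begin{verbatim}
# Numerical spot-check (ours) of eq. (PTsolution).
import sympy as sp

x = sp.Symbol('x')
g0, g1, n = sp.Rational(1, 3), sp.Rational(3, 4), 2
psi = (sp.sin(x/2)**g0 * sp.cos(x/2)**g1
       * sp.jacobi(n, g0 - sp.Rational(1, 2), g1 - sp.Rational(1, 2), sp.cos(x)))
Hpsi = (-sp.diff(psi, x, 2)
        + (g0*(g0 - 1)/(4*sp.sin(x/2)**2) + g1*(g1 - 1)/(4*sp.cos(x/2)**2))*psi)
E = (n + (g0 + g1)/2)**2
residual = (Hpsi - E*psi).subs(x, sp.Rational(7, 10)).evalf(30)
assert abs(residual) < 1e-25
\end{verbatim}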
\subsection{Basis functions}\label{secfm}
As mentioned in the introduction, one important feature of our method is that we use non-trivial basis functions $f_m(z)$. These functions are def\/ined by the following generating function,
\begin{gather}\label{fn}
\frac{\prod\limits_{\nu=0}^3\Theta_{\nu+1}(\xi)^{\tilde{g}_\nu}}{\Theta(z,\xi)^\lambda}\equiv \sum_{m\in \mathbb{Z}}f_m(z)\xi^{m},\qquad |q|<|\xi|<1,
\\ \label{lambda}
\tilde g_\nu\equiv \lambda-g_\nu,\qquad \lambda \equiv \tfrac12(g_0+g_1+g_2+g_3-\kappa)
\end{gather}
with the special functions $\Theta_{\nu+1}(\xi)$, $\Theta(z,\xi)$ def\/ined in \eqref{Thetanu}--\eqref{Theta}. It is easy to check that the series on the r.h.s.\ in~\eqref{fn} is absolutely convergent in the region indicated, and thus the functions~$f_m(z)$ are well-def\/ined.\footnote{One can show that, for f\/ixed complex parameters $\{g_\nu\}_{\nu=0}^3$ and~$\kappa$, and all $m\in\mathbb{Z}$, $f_m(z)$ is analytic for $|q|<1$ and $z$ in some $q$-dependent open domain which includes the interval $[-1,1]$.} Since we restrict ourselves to results in the sense of formal power series in $q$, we only need the following characterization of these functions (the proof of this is technical and thus deferred to an appendix).
\begin{Lemma}\label{Lemma:fn2}
The functions $f_m(z)$ defined in \eqref{fn}--\eqref{lambda} have the following power series expansion
\begin{gather}\label{fnseries}
f_m(z) = \sum_{\ell=0}^\infty f_m^{(\ell)}(z)q^\ell
\end{gather}
with $f_m^{(\ell)}(z)=0$ for $m+\ell<0$ and $f_m^{(\ell)}(z)$ a polynomial of degree $m+\ell$ in $z$ for $m+\ell\geq 0$. In particular,
\begin{gather}\label{f0n}
f_m^{(0)}(z) = \binom{-\lambda}{m}(-2z)^m + O\big(z^{m-1}\big)
\end{gather}
with the binomial coefficient $\binom{-\lambda}{m}$ as usual $($see \eqref{binomial}$)$.
\end{Lemma}
\begin{proof} See Appendix~\ref{secProofLemfn2}. \end{proof}
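In the trigonometric limit the def\/initions in \eqref{Thetanu}--\eqref{Theta} reduce to $\Theta_1(\xi)\to 1-\xi$, $\Theta_2(\xi)\to 1+\xi$, $\Theta_3(\xi),\Theta_4(\xi)\to 1$ and $\Theta(z,\xi)\to 1-2z\xi+\xi^2$, and \eqref{f0n} can then be checked directly from \eqref{fn}; the following sketch does this symbolically (our code, with $\tilde g_0$, $\tilde g_1$ and $\lambda$ treated as independent symbols):
\begin{verbatim}
# Check (ours) of eq. (f0n): f_m^{(0)}(z) has degree m in z with
# leading coefficient binomial(-lambda, m)*(-2)^m.
import sympy as sp

z, xi, lam, gt0, gt1 = sp.symbols('z xi lam gt0 gt1')
gen0 = (1 - xi)**gt0 * (1 + xi)**gt1 * (1 - 2*z*xi + xi**2)**(-lam)
M = 4
ser = sp.series(gen0, xi, 0, M + 1).removeO()
for m in range(M + 1):
    fm0 = sp.expand(ser.coeff(xi, m))      # f_m^{(0)}(z)
    assert sp.degree(fm0, z) == m
    lead = fm0.coeff(z, m)
    assert sp.simplify(lead - sp.binomial(-lam, m)*(-2)**m) == 0
\end{verbatim}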
We will also use the following integral representation of these functions:
\begin{gather}\label{fnint}
f_m(z) = \oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\xi^{-m} \frac{\prod\limits_{\nu=0}^3\Theta_{\nu+1}(\xi)^{\tilde g_\nu}}{\Theta(z,\xi)^\lambda}
\end{gather}
with $\mathcal{C}$ the contour once around the circle with radius $|\xi|=R$, $|q|<R<1$, taken counterclockwise (this is equivalent to \eqref{fn} by Cauchy's theorem).
\begin{Remark}\label{remlambdacondition}
It is clear from \eqref{f0n} that the functions $f^{(0)}_m(z)$ are non-trivial polynomials in~$z$ of degree $m$ for all $m\in\mathbb{N}_0$ only if $-\lambda\notin\mathbb{N}_0$ (i.e., only in this case the binomial coef\/f\/icient on the r.h.s.\ in~\eqref{f0n} is always non-zero). If $-\lambda=k\in\mathbb{N}_0$, one can see from \eqref{fnint} that $f^{(0)}_m(z)$ is a~polynomial of degree $\leq k$ for all $m\in\mathbb{N}_0$. Thus, to get complete results, we sometimes impose the condition $-\lambda\notin\mathbb{N}_0$.
\end{Remark}
\subsection{Resonances}\label{secResonances}
We discuss a technical issue encountered in Sections~\ref{secPerturbative} and \ref{secAllOrders}: to prove Propositions~\ref{prop1} and~\ref{prop2} and Theorems~\ref{Thm1} and \ref{Thm2} we f\/ind it convenient to impose the following {\it no-resonance conditions:} {\em either $\Im(\kappa)\neq 0$ and $g_0+g_1\in\mathbb{R}$, or $\kappa=0$ and $g_0+g_1\notin\mathbb{Z}$}. At f\/irst sight this seems to exclude many cases of interest in applications but, on closer inspection, one f\/inds that this is not the case: as explained in this section, our results can be used even in cases where these conditions fail.
We f\/irst explain the reason for these conditions: our solutions are obtained by an unconventional variant of perturbation theory leading to series solutions which are linear combinations of products of the following generalized energy dif\/ference fractions,
\begin{gather}\label{resonance1}
\frac1{E^{(0)}_{m}-E^{(0)}_n -\kappa\ell} = \frac1{(m-n)(m+n+g_0+g_1)-\kappa\ell}
\end{gather}
with $E^{(0)}_n$ in \eqref{PTsolution} the energy eigenvalues of the unperturbed problem $q=0$ and $\ell=0,1,2,\ldots$; we refer to a case $(m,\ell)$ where, for f\/ixed $n$, the denominator in \eqref{resonance1} is zero as a {\em resonance}. The reason for the conditions on parameters above is that they are a simple means to rule out resonances, and this guarantees that our series are well-def\/ined.
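To make this concrete, the following minimal sketch (ours) enumerates the resonant pairs $(m,\ell)$ with $m\neq n$ in a f\/inite search window; the case $m=n$ is excluded since it is handled by normalization conditions rather than by division in our algorithms:
\begin{verbatim}
# Sketch (ours): list pairs (m, ell), m != n, where the denominator
# in (resonance1) vanishes, for given real parameters.
def resonances(n, g0_plus_g1, kappa, ell_max=8, window=20, tol=1e-12):
    hits = []
    for ell in range(ell_max + 1):
        for m in range(n - window, n + window + 1):
            if m == n:
                continue  # fixed by normalization, not by division
            if abs((m - n)*(m + n + g0_plus_g1) - kappa*ell) < tol:
                hits.append((m, ell))
    return hits

print(resonances(n=1, g0_plus_g1=2.0, kappa=4.0))  # e.g. (-3, 0), (3, 3), ...
print(resonances(n=1, g0_plus_g1=2.5, kappa=0.0))  # no-resonance case: []
\end{verbatim}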
However, while these conditions are suf\/f\/icient, they are not necessary: one peculiar feature of our perturbation theory is that the series we obtain have singularities coming from energy dif\/ference denominators as in \eqref{resonance1}, but many of these singularities are removable. Thus our results can be extended to cases where our no-resonance conditions fail. We now give a precise formulation for one important such case.
\begin{Lemma}\label{lemResonance}
In the Heun case $\kappa=0$ and for fixed $n\in\mathbb{N}_0$, the results for the expansion coefficients $\mathcal{P}_n^{(\ell)}(z)$ and $\mathcal{E}_n^{(\ell)}$ in Proposition~{\rm \ref{prop1}} and Theorem~{\rm \ref{Thm1}} are valid even in the limit $g_0+g_1\to k\in\mathbb{Z}$ with $k>-(2n+1)$.
\end{Lemma}
(The proof is given at the end of this section.)
Thus, in the Heun case $\kappa=0$ and for all $n\in\mathbb{N}_0$, our results can be extended to the case $g_0+g_1\in\mathbb{N}_0$ of interest in applications. We believe that, in a similar manner, our results can be extended to interesting cases with non-zero {\em real} $\kappa$. As will become clear in our proof of Lemma~\ref{lemResonance} below, the challenge in making such a result precise and proving it is that the generalization of standard perturbation theory~\cite{THeun} to $\kappa\neq 0$ is not known (to our knowledge).
\begin{proof}[Proof of Lemma~\ref{lemResonance}]
For $\kappa=0$ and f\/ixed $n\in\mathbb{N}_0$, the energy denominators in \eqref{resonance1} appearing in standard perturbation theory \cite{THeun} occur only for $m\in\mathbb{N}_0$ dif\/ferent from $n$, and thus all pertinent fractions in \eqref{resonance1} are f\/inite if $g_0+g_1>-(2n+1)$.
In our perturbative expansions we encounter fractions in \eqref{resonance1} for arbitrary integers $m\neq n$.
However, it is clear that our results for the coef\/f\/icients $\mathcal{P}_n^{(\ell)}(z)$ and $\mathcal{E}_n^{(\ell)}$ of the perturbative solution def\/ined in \eqref{P} must be identical with the corresponding results obtained in standard perturbation theory. Thus the singularities coming from resonance fractions in our perturbation expansion must cancel: our perturbative results remain f\/inite even in the limit when $g_0+g_1$ becomes an integer.
\end{proof}
\begin{Remark}The resonance problem that we encounter in this paper is very similar to the one which appeared in the treatment of the eCS model in \cite{ELeCS1,ELeCS2}.
This is no coincidence: results in the special case $N=2$ in {\em op.\ cit.}\ correspond to ours in the Lam\'e case $g_0=g_1=g_2=g_3$, $\kappa=0$.
The interested reader is referred to \cite{ELeCS2} for a more extensive discussion of resonances.
\end{Remark}
\section{Key result and special cases}\label{secResults}
In Section~\ref{secKey} we present our key result, which is a transformation of the problem to solve the non-stationary Heun equation in \eqref{Heun} with the ansatz in \eqref{solution} to a problem to solve a~dif\/ferential-dif\/ference equation; as we show in subsequent sections, the latter problem allows for ef\/f\/icient solutions. In Section~\ref{secExplicit} we point out special non-trivial cases where our key result leads to explicit integral representations of solutions of \eqref{Heun}.
\subsection{Dif\/ferential-dif\/ference equation}\label{secKey}
We construct solutions $\psi_n(x)$, $E_n$ of the non-stationary Heun equation in \eqref{Heun} of the form \eqref{solution}--\eqref{eJacobi} in the sense of formal power series in $q$.
One important feature of our method is that we expand
\begin{gather}\label{Pnseries}
\mathcal{P}_n(z) = {\mathcal N}_n \sum_{m \in\mathbb{Z}} \alpha_n(m)f_{m}(z)
\end{gather}
with non-trivial basis functions $f_m(z)$ given in \eqref{fn}--\eqref{lambda} and characterized in Lemma~\ref{Lemma:fn2}.
As will be shown, the following constant ensures the normalization in \eqref{eJacobi},
\begin{gather}\label{cNn}
{\mathcal N}_n = \frac{(n+g_0+g_1)_n}{4^n(\lambda)_n}
\end{gather}
with the raising Pochhammer symbol $(x)_n$ in \eqref{Pochhammer} and $\lambda$ in \eqref{lambda}; note that ${\mathcal N}_n$ is f\/inite and non-zero for all integers $n$ if $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$ (see the discussion after \eqref{Pochhammer}).
Our key result is equations determining $\alpha_n(m)$ and $\mathcal{E}_n$ and which, as we will show, can be solved ef\/f\/iciently.
To state this result we introduce the convenient shorthand notation
\begin{gather}\label{gammanu}
\gamma_k^\mu\equiv \begin{cases} \tilde g_0(\tilde g_0-1) + (-1)^\mu \tilde g_1(\tilde g_1-1) & \text{if } \frac{1}{2} k\in\mathbb{N}_0, \\
(-1)^\mu \tilde g_2(\tilde g_2-1) + \tilde g_3(\tilde g_3-1) & \text{if } \frac{1}{2} (k-1)\in\mathbb{N}_0 \end{cases}
\end{gather}
for $(\mu,k)\in\mathbb{Z}\times \mathbb{N}_0$ (recall that $\tilde{g}_0=\frac12(-g_0+g_1+g_2+g_3-\kappa)$, $\tilde{g}_1=\frac12(g_0-g_1+g_2+g_3-\kappa)$ etc.; note that $\gamma_k^\mu=\gamma_{k+2r}^{\mu+2s}$ for all integers~$r$,~$s$).
\begin{Proposition}\label{prop0}
Let $n\in\mathbb{Z}$, $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$, and assume that $\mathcal{E}_n$ and $\alpha_n(m)$ for $m\in\mathbb{Z}$ satisfy the following system of equations,
\begin{gather}
\left[ -\kappa q\frac{\partial}{\partial q} +E^{(0)}_{m}-\mathcal{E}_n\right]\alpha_n(m)\nonumber\\
\qquad {} =\sum_{\mu=1}^\infty\mu \gamma_0^\mu \alpha_n(m+\mu)
+ \sum_{\mu=1}^\infty \frac{\mu q^\mu}{1-q^{2\mu}}\big( \gamma_0^\mu q^{\mu} + \gamma_1^\mu \big) [\alpha_n(m+\mu)+\alpha_n(m-\mu)]\label{aneqs}
\end{gather}
and the condition
\begin{gather}\label{ic}
\alpha_n(m)|_{q=0} = \begin{cases} 0, & m>n, \\ 1, & m=n. \end{cases}
\end{gather}
Then $\psi_n(x)$, $E_n$ in \eqref{solution}--\eqref{P} and \eqref{Pnseries}--\eqref{cNn} satisfy the non-stationary Heun equation in \eqref{Heun}, and the conditions in \eqref{eJacobi} hold true provided $-(g_0+g_1)\notin\mathbb{N}$.
\end{Proposition}
(The proof is given in Section~\ref{Proofprop0}.)
It is important to note that the conditions above do not determine $\mathcal{P}_n(z)$ and $\mathcal{E}_n$ uniquely: \eqref{aneqs}--\eqref{ic} are invariant under
\begin{gather}\label{invariance}
\alpha_n(m)\to C \alpha_n(m),\qquad \mathcal{E}_n\to \mathcal{E}_n - \kappa \frac1{C} q\frac{\partial}{\partial q}C
\end{gather}
for any change of normalization $C=1+O(q)$ analytic in $q$ (this is a consequence of the invariance of~\eqref{Heun} under~\eqref{symmetry}).
This ambiguity can be f\/ixed for generic parameter values by replacing~\eqref{ic} by a stronger condition; see Propositions~\ref{prop1} and~\ref{prop2} for two dif\/ferent ways to do this.
This result also provides simple explicit solutions of \eqref{Heun} for particular parameter values, as elaborated in Section~\ref{secExplicit}.
\begin{Remark}\label{remP}
It is interesting to note that the solutions $\alpha_n(m)$ and $\mathcal{E}_n$ of the equations in Proposition~\ref{prop0} are essentially independent of $n$ in the following sense: they are of the form
\begin{gather*}
\alpha_n(m)=a(m-n) ,\qquad \mathcal{E}_n = (P/2)^2 +\tilde{\mathcal{E}}
\end{gather*}
with functions $a(k)$ and $\tilde{\mathcal{E}}$ depending on $n$ only in the combination
\begin{gather}\label{Pdef}
P \equiv 2n+g_0+g_1
\end{gather}
(this is easy to check).
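Indeed, writing $m=n+k$ one f\/inds
\begin{gather*}
E^{(0)}_{n+k}-\mathcal{E}_n = \Big(\frac{P}{2}+k\Big)^2 - \Big(\frac{P}{2}\Big)^2 - \tilde{\mathcal{E}} = k(k+P)-\tilde{\mathcal{E}} ,
\end{gather*}
and since the coef\/f\/icients $\gamma_k^\mu$ do not depend on $n$, the equations \eqref{aneqs}--\eqref{ic} for $a(k)=\alpha_n(n+k)$ involve $n$ only through $P$.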
This and the notation introduced here are useful in computations and in the presentation of results; we use this in \eqref{cEn1}--\eqref{cEn2} and in Appendix~\ref{appExplicitResults}.
\end{Remark}
\subsection{Explicit solutions by integrals}\label{secExplicit}
The solutions $\alpha_n(m)$, $\mathcal{E}_n$ of \eqref{aneqs}--\eqref{ic} are complicated in general.
However, there exist non-trivial cases where Proposition~\ref{prop0} gives simple explicit solutions of the non-stationary Heun equation:
\begin{Corollary}\label{corSimple}
Let $n\in\mathbb{Z}$, $\lambda$ a complex parameter such that $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$,
\begin{gather}\label{parametersgnu}
\tilde g_\nu\in\{0,1\},\qquad g_\nu=\lambda-\tilde g_\nu,\qquad \nu=0,1,2,3,\qquad \kappa=2\lambda-\sum_{\nu=0}^3\tilde g_\nu,
\end{gather}
${\mathcal N}_n$ in \eqref{cNn}, and $\mathcal{C}$ the integration contour defined after \eqref{fnint}. Then the non-stationary Heun equation in \eqref{Heun} has solutions $\psi_n(x)$, $E_n$ as in \eqref{solution} with
\begin{gather}\label{ExplicitSolution}
\mathcal{P}_n(z) = {\mathcal N}_n\oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\xi^{-n} \frac{\prod\limits_{\nu=0}^3\Theta_{\nu+1}(\xi)^{\tilde g_\nu}}{\Theta(z,\xi)^\lambda}, \qquad
\mathcal{E}_n = \left(n+\frac{g_0+g_1}2\right)^2
\end{gather}
and such that the conditions in \eqref{eJacobi} hold true provided $-(g_0+g_1)\notin\mathbb{N}$.
\end{Corollary}
\begin{proof}
The assumptions imply $\gamma_k^\mu=0$ for all $k$, $\mu$, and the equations in \eqref{aneqs}--\eqref{ic} in this case have the solution $\alpha_n(m)=\delta(m,n)$, $\mathcal{E}_n=E^{(0)}_n$. Proposition~\ref{prop0} implies the result.
\end{proof}
Note that \eqref{parametersgnu} gives several dif\/ferent one-parameter families $(\{g_\nu\}_{\nu=0}^3,\kappa)$, depending on $\lambda$, where this result provides simple integral representations of elliptic Jacobi polynomials $\mathcal{P}_n(z)$. Moreover, these formulas are non-trivial even in the trigonometric case $q=0$: the results above imply the following integral representations of Jacobi polynomials,
\begin{gather}
P_n^{\big(g-\frac{1}{2},g-\frac{1}{2}\big)}(z) = \frac{(n+2g)_n}{4^n(g)_n} \oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\xi^{-n} \frac1{(1-2z\xi+\xi^2)^g}, \nonumber\\
P_n^{\big(g-\frac{1}{2},g+\frac{1}{2}\big)}(z) = \frac{(n+2g+1)_n}{4^n(g+1)_n} \oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\xi^{-n} \frac{(1-\xi)}{(1-2z\xi+\xi^2)^{g+1}},\nonumber\\
P_n^{\big(g+\frac{1}{2},g-\frac{1}{2}\big)}(z) = \frac{(n+2g+1)_n}{4^n(g+1)_n} \oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\xi^{-n} \frac{(1+\xi)}{(1-2z\xi+\xi^2)^{g+1}},\nonumber\\
P_n^{\big(g-\frac{1}{2},g-\frac{1}{2}\big)}(z) = \frac{(n+2g)_n}{4^n(g+1)_n} \oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\xi^{-n} \frac{(1-\xi^2)}{(1-2z\xi+\xi^2)^{g+1}}\label{Simple1}
\end{gather}
for $n\in\mathbb{N}_0$ (the latter identities are obtained from Corollary~\ref{corSimple} for the cases where $(\tilde{g}_0,\tilde{g}_1,\lambda)$ is $(0,0,g)$, $(1,0,g+1)$, $(0,1,g+1)$, and $(1,1,g+1)$, respectively). The f\/irst identity in \eqref{Simple1} is equivalent to a well-known generating function for the Gegenbauer polynomials (see \eqref{Gegenbauer}), and also the others can be found in \cite{Dig10}.
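The f\/irst identity in \eqref{Simple1} is also easy to conf\/irm numerically by discretizing the contour integral; since the integrand is analytic in an annulus around $|\xi|=R$, the trapezoidal rule converges rapidly (a minimal sketch, with all sample values our own):
\begin{verbatim}
# Numerical check (ours) of the first identity in (Simple1) at q = 0.
import numpy as np
from scipy.special import eval_jacobi, poch

g, n, z, R, M = 0.3, 4, 0.25, 0.5, 1024
xi = R*np.exp(2j*np.pi*np.arange(M)/M)
# (1/2 pi i) \oint dxi/xi ... equals the mean over the circle
integral = np.mean(xi**(-n)/(1 - 2*z*xi + xi**2)**g)
lhs = poch(n + 2*g, n)/(4**n*poch(g, n))*integral
assert abs(lhs - eval_jacobi(n, g - 0.5, g - 0.5, z)) < 1e-12
\end{verbatim}
Note that the principal branch of the power is the correct one here since $\operatorname{Re}(1-2z\xi+\xi^2)>0$ on the chosen contour.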
\begin{Remark}\label{RelationToFLNO}
Fateev et al.\ gave integral representations of solutions of the non-stationary Lam\'e equation \cite{FLNO} which, in a special case, are similar to the one above for $g_0=g_1=g_2=g_3$. More specif\/ically, the solution given in equation~(3.11) of~\cite{FLNO} can be proved by a simple variant of the argument that we used in order to obtain our solution in~\eqref{ExplicitSolution} (note that our parameter $\kappa$ corresponds to $-2/b^2$ in~\cite{FLNO}). We also mention similar integral representations of solutions of the non-stationary Lam\'e equation appearing in works by Etingof and Kirillov (see \cite[Theorem~5.1]{EK}) and Felder and Varchenko (see, e.g., \cite[Example~1.2]{FV}).
\end{Remark}
We emphasize that the result in Corollary~\ref{corSimple} is more general in that it includes some non-stationary Heun cases that cannot be reduced to a non-stationary Lam\'e case.
\begin{Remark}Corollary~\ref{corSimple} can be obtained as special case $(N,\tilde{N})=(1,0)$ from Proposition~4.1 in~\cite{LT}. However, this is not easy to see, and it is therefore worthwhile to emphasize this result here.
\end{Remark}
\section{Proof of key result}\label{Proofprop0}
We turn to the proof of Proposition~\ref{prop0}. In Section~\ref{secKernelMethod} we derive the key identity using the kernel function method. In Section~\ref{secTrigonometric} we consider the trigonometric case $q=0$ to prove that the conditions in \eqref{ic} and the normalization condition in \eqref{cNn} yield a solution satisfying \eqref{eJacobi}.
\subsection{Kernel function method}\label{secKernelMethod}
We introduce the notation
\begin{gather}\label{H}
H\big(x;\{ g_\nu\}_{\nu=0}^3\big) \equiv -\frac{\partial^2}{\partial x^2} +\sum_{\nu=0}^3 g_\nu(g_\nu-1)\wp(x+\omega_\nu)
\end{gather}
with $\omega_\nu$ in \eqref{omeganu}. This allows us to write the non-stationary Heun equation in \eqref{Heun} as
\begin{gather}\label{Heun1}
\left( \frac{{\rm i}}{\pi}\kappa \frac{\partial}{\partial\tau} + H\big(x;\{g_\nu\}_{\nu=0}^3\big) - E\right)\psi(x)=0.
\end{gather}
We also recall the def\/initions of $\tilde g_\nu$ and $\lambda$ in \eqref{lambda}. Note that $H$ in \eqref{H} is the Hamiltonian def\/ining the $BC_1$ Inozemtsev model \cite{THeun}.
We obtain our result from the following generalized kernel function identity:
\begin{Lemma}\label{Lemma:kernel}
The function
\begin{gather}\label{K}
\mathcal{K}(x,y)\equiv
\frac{\prod\limits_{\nu=0}^3\theta_{\nu+1}\big(\frac{1}{2} x\big)^{g_\nu}\theta_{\nu+1}\big(\frac{1}{2} y\big)^{\tilde g_\nu}}{\theta_1\big(\frac{1}{2} (x+y)\big)^{\lambda}\theta_1\big(\frac{1}{2} (x-y)\big)^{\lambda}}
\end{gather}
obeys the identity
\begin{gather}\label{kernel}
\left(\frac{{\rm i}}{\pi} \kappa\frac{\partial}{\partial\tau} + H\big(x;\{g_\nu\}_{\nu=0}^3\big) - H\big(y;\{\tilde g_\nu\}_{\nu=0}^3\big)-C_{1,1} \right) \mathcal{K}(x,y)=0
\end{gather}
with
\begin{gather}\label{C11}
C_{1,1} = 2\kappa (1-\lambda)\frac{\eta_1}{\pi}.
\end{gather}
\end{Lemma}
\begin{proof}This is the special case $N=M=1$ of Corollary~3.2 in \cite{LT} (note that the symbols $\beta$ and $A_{1,1}$ in \cite{LT} correspond to $-2\pi{\rm i} \tau$ and $2\kappa$ here, respectively).
\end{proof}
We use this to compute the action of $\frac{{\rm i}}{\pi} \kappa \frac{\partial}{\partial\tau} + H(x;\{g_\nu\}_{\nu=0}^3)$ on the functions
\begin{gather}\label{Fn}
F_m(x)\equiv \big(2q^{1/4}\big)^{-(g_0+g_1)}\left( \prod_{\nu=0}^3 \theta_{\nu+1}\big(\tfrac{1}{2} x\big)^{g_\nu} \right) f_m(\cos x),\qquad m\in\mathbb{Z}
\end{gather}
with $f_m(z)$ def\/ined in \eqref{fn}. For this we note that
\begin{gather*}
\mathcal{K}(x,y) = 2^{g_0+g_1}{\rm e}^{{\rm i} \pi\tilde g_0/2}G^{-\kappa }\big(2q^{1/4}\big)^{-(g_0+g_1)}\left( \prod_{\nu=0}^3 \theta_{\nu+1}\big(\tfrac{1}{2} x\big)^{g_\nu} \right)\frac{\prod\limits_{\nu=0}^3\Theta_{\nu+1}({\rm e}^{{\rm i} y})^{\tilde g_\nu}}{\Theta(\cos x,{\rm e}^{{\rm i} y})^\lambda} {\rm e}^{{\rm i} y(g_0+g_1)/2}
\end{gather*}
with $G$ def\/ined in \eqref{G} (we used \eqref{tettet} and \eqref{tettet1}). This and \eqref{fn} show that $\mathcal{K}(x,y)$ is a~generating function for the functions in \eqref{Fn}:
\begin{gather}\label{genfun}
\mathcal{K}(x,y) = 2^{g_0+g_1}{\rm e}^{{\rm i} \pi\tilde g_0/2}G^{-\kappa} \sum_{m\in\mathbb{Z}} F_m(x) {\rm e}^{{\rm i} \big(m+\frac{1}{2}(g_0+g_1)\big)y}, \qquad 0<\Im(y)<\pi\Im(\tau).
\end{gather}
To evaluate $H(y;\{\tilde g_\nu\}_{\nu=0}^3) \mathcal{K}(x,y)$ we use the following expansions
\begin{gather}\label{wpseries}
\wp(y+\omega_\nu) = - \frac{\eta_1}{\pi} - \sum_{\mu\in{\mathbb{Z}'}} (S_\nu)_\mu{\rm e}^{{\rm i}\mu y},\qquad 0<\Im(y)<\pi\Im(\tau)
\end{gather}
with
\begin{gather}
(S_0)_\mu = \mu\frac{1}{1-q^{2\mu}}= |\mu|\frac{q^{|\mu|-\mu}}{1-q^{2|\mu|}},\nonumber\\
(S_1)_\mu = (-1)^\mu\mu\frac{1}{1-q^{2\mu}}= (-1)^\mu|\mu|\frac{q^{|\mu|-\mu}}{1-q^{2|\mu|}},\nonumber\\
(S_2)_\mu = (-1)^\mu\mu\frac{q^\mu}{1-q^{2\mu}}= (-1)^\mu|\mu|\frac{q^{|\mu|}}{1-q^{2|\mu|}},\nonumber\\
(S_3)_\mu = \mu\frac{q^\mu}{1-q^{2\mu}}= |\mu|\frac{q^{|\mu|}}{1-q^{2|\mu|}}\label{Snumu}
\end{gather}
(see Appendix~\ref{appwp} for derivations of these formulas). From this the following result is obtained by straightforward computations.
\begin{Lemma}\label{lemma2}
The functions in \eqref{Fn} satisfy
\begin{gather}\label{HFn}
\left( \frac{{\rm i}}{\pi}\kappa \frac{\partial}{\partial\tau} + H\big(x;\{g_\nu\}_{\nu=0}^3\big) \right)F_n(x) = \big( C_0 + E^{(0)}_n \big)F_n(x) -\sum_{\mu\in{\mathbb{Z}'}} S_\mu F_{n-\mu}(x)
\end{gather}
with $E^{(0)}_n$ in \eqref{PTsolution},
\begin{gather}\label{gammanu1}
S_\mu\equiv \sum_{\nu=0}^3\gamma_\nu(S_\nu)_\mu,\qquad \gamma_\nu \equiv \tilde g_\nu(\tilde g_\nu-1),
\end{gather}
$\tilde g_\nu\equiv \lambda-g_\nu$, and
\begin{gather}\label{C0}
C_0 = \kappa ^2\left(\frac1{12}-\frac{\eta_1}{\pi}\right) -\sum_{\nu=0}^3 g_\nu(g_\nu-1)\frac{\eta_1}{\pi}.
\end{gather}
\end{Lemma}
(The proof can be found at the end of this section.)
To proceed we make the ansatz
\begin{gather}\label{ansatz11}
\psi(x) = \mathcal{N} \sum_{m\in\mathbb{Z}}\alpha(m) F_m(x)
\end{gather}
with $\mathcal{N}$ a $q$-independent normalization constant to be determined. Inserting this in \eqref{Heun} and using Lemma~\ref{lemma2} we obtain
\begin{gather*}
\sum_{m\in\mathbb{Z}}\left(\left(\frac{{\rm i}}{\pi}\kappa \frac{\partial}{\partial\tau} + E^{(0)}_m -\mathcal{E} \right)\alpha(m)
-\sum_{\mu\in{\mathbb{Z}'}}S_\mu\alpha(m+\mu) \right) F_m(x)=0
\end{gather*}
with $\mathcal{E}\equiv E-C_0$. It follows that the function in \eqref{ansatz11} is a solution of \eqref{Heun} if the coef\/f\/icients~$\alpha(m)$ and $\mathcal{E}$ satisfy
\begin{gather}\label{aneqs1}
\left(\frac{{\rm i}}{\pi}\kappa \frac{\partial}{\partial\tau} + E^{(0)}_m -\mathcal{E} \right)\alpha(m)
= \sum_{\mu\in{\mathbb{Z}'}}S_\mu\alpha(m+\mu) .
\end{gather}
Inserting \eqref{Snumu} and \eqref{gammanu1} and changing variables from $\tau$ to $q={\rm e}^{{\rm i}\pi\tau}$ allow us to write this as
\begin{gather}
\left( - \kappa q\frac{\partial}{\partial q} + E_{m}^{(0)} -\mathcal{E} \right) \alpha(m) = \sum_{\mu=1}^\infty \mu(\gamma_0 + (-1)^\mu\gamma_1) \alpha(m+\mu) \nonumber\\
\qquad{} + \sum_{\mu=1}^\infty \mu\left( \frac{[\gamma_0 + (-1)^\mu\gamma_1]q^{2\mu}}{1-q^{2\mu}} +\frac{[(-1)^\mu\gamma_2+\gamma_3]q^{\mu}}{1-q^{2\mu}}\right) [ \alpha(m+\mu) + \alpha(m-\mu)]\label{aneqs2}
\end{gather}
equivalent to \eqref{aneqs}. This proves that $\psi(x)$ in \eqref{ansatz11} and $E=\mathcal{E}+C_0$ solve \eqref{Heun} provided \eqref{aneqs2} is fulf\/illed.
We are left to show that a solution $\alpha(m)=\alpha_n(m)$, $\mathcal{E}=\mathcal{E}_n$ satisfying the condition in \eqref{ic} is such that \eqref{eJacobi} holds true. This is done in Section~\ref{secTrigonometric}.
\begin{proof}[Proof of Lemma~\ref{lemma2}] It follows from \eqref{kernel} that
\begin{gather}\label{key1}
\left(\frac{{\rm i}}{\pi} \kappa \frac{\partial}{\partial\tau} + H\big(x;\{g_\nu\}_{\nu=0}^3\big) \right)G^{\kappa} \mathcal{K}(x,y) = \big( H\big(y;\{\tilde g_\nu\}_{\nu=0}^3\big) + C_{1,1}' \big) G^{\kappa} \mathcal{K}(x,y)
\end{gather}
with
\begin{gather}\label{C11p}
C_{1,1}' \equiv C_{1,1} + \kappa^2\left(\frac1{12}-\frac{\eta_1}{\pi}\right)
\end{gather}
(we computed
\begin{gather*}
\frac{{\rm i}}{\pi}\frac1{G}\frac{\partial}{\partial\tau} G = \sum_{n=1}^\infty \frac{2nq^{2n}}{1-q^{2n}} = \sum_{n=1}^\infty \sum_{\ell=1}^\infty 2nq^{2n\ell} =\sum_{\ell=1}^\infty \frac{2q^{2\ell}}{(1-q^{2\ell})^2}= \frac1{12}-\frac{\eta_1}{\pi}
\end{gather*}
using \eqref{G} and \eqref{eta1pi}). To compute the r.h.s.\ in \eqref{key1} we use
\begin{gather*}
H\big(y;\{\tilde g_\nu\}_{\nu=0}^3\big) = -\frac{\partial^2}{\partial y^2} - \sum_{\mu\in{\mathbb{Z}'}}S_\mu{\rm e}^{{\rm i} \mu y}-\sum_{\nu=0}^3\gamma_\nu\frac{\eta_1}{\pi},\qquad 0<\Im(y)<\pi\Im(\tau)
\end{gather*}
obtained by inserting \eqref{wpseries} in \eqref{H} and using \eqref{gammanu1}.
Using \eqref{genfun} and equating terms with the same factor ${\rm e}^{{\rm i} (n+\frac{1}{2}(g_0+g_1))y}$ give \eqref{HFn} and
\begin{gather*}
C_0 = C_{1,1}'-\sum_{\nu=0}^3\gamma_\nu\frac{\eta_1}{\pi}.
\end{gather*}
Using \eqref{C11} and \eqref{C11p}, inserting \eqref{gammanu1}, and recalling \eqref{lambda}, one obtains the formula in \eqref{C0}.
\end{proof}
\subsection{Trigonometric limit}\label{secTrigonometric}
For $q=0$ the equations in \eqref{aneqs}--\eqref{ic} simplify to
\begin{gather}\label{aneqs0}
\big[E^{(0)}_{m}-\mathcal{E}^{(0)}_n\big]\alpha^{(0)}_n(m) =\sum_{\mu=1}^{n-m}\mu \gamma_0^\mu \alpha^{(0)}_n(m+\mu)
\end{gather}
and $\alpha^{(0)}_n(n)=1$; we use the superscript ``$(0)$'' to indicate that a quantity is for $q=0$. This system of equations is an eigenvalue problem for a non-degenerate triangular matrix which can be solved by recursion: $\mathcal{E}_n^{(0)}=E^{(0)}_n$, $\alpha^{(0)}_n(n)=1$, and
\begin{gather}\label{Eq:an0iteration}
\alpha_n^{(0)}(m<n) = \frac{1}{b^{(0)}_n(m-n)}\sum_{\mu=1}^{n-m} \mu\gamma_0^\mu \alpha^{(0)}_n(m+\mu)
\end{gather}
with
\begin{gather}\label{bnm}
b_n^{(0)}(m) \equiv E^{(0)}_{m+n}-E^{(0)}_{n} = m(m+2n+g_0+g_1),
\end{gather}
provided that $g_0+g_1$ is such that
\begin{gather}\label{nores0}
b_n^{(0)}(-m)\neq 0 \qquad \forall\, m=1,2,\ldots,n.
\end{gather}
The latter condition is implied by our assumption that $-(g_0+g_1)\notin\mathbb{N}$.
We recall that $f^{(0)}_m(z)\equiv \left. f_m(z)\right|_{q=0}$ is zero for $m<0$ and a polynomial satisfying \eqref{f0n} for $m\geq 0$ (see Lemma~\ref{Lemma:fn2}), and thus
\begin{gather}\label{Pn0}
\mathcal{P}^{(0)}_n(z) \equiv \left. \mathcal{P}_n(z)\right|_{q=0} = {\mathcal N}_n \sum_{m=0}^n \alpha_n^{(0)}(m)f_{m}^{(0)}(z) = {\mathcal N}_n \binom{-\lambda}{n}(-2z)^n+O\big(z^{n-1}\big)
\end{gather}
is a polynomial of degree $n$. Moreover, the special case $q=0$ of results proved above implies that $\psi_n^{(0)}(x)\equiv (\sin \frac{1}{2} x)^{g_0}(\cos \frac{1}{2} x)^{g_1}\mathcal{P}_n^{(0)}(\cos x)$ is a solution of \eqref{PT} with $E^{(0)}=E^{(0)}_n$. Thus, by Lemma~\ref{lemmaUniqueness}, \eqref{eJacobi} holds true provided that the coef\/f\/icient of the leading term in~\eqref{Pn0} agrees with the one in \eqref{Pnormalization}. This f\/ixes the normalization constant $\mathcal{N}_n$ as in \eqref{cNn} and completes the proof of Proposition~\ref{prop0}.
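The triangular structure of \eqref{aneqs0}--\eqref{Eq:an0iteration} is straightforward to implement; as an independent check of the normalization argument just given, the following sketch (ours) reproduces a Jacobi polynomial via \eqref{Pn0} for sample parameter values, with $\lambda$ treated as a free parameter since $g_2$, $g_3$ and $\kappa$ enter the $q=0$ problem only through $\lambda$:
\begin{verbatim}
# Sketch (ours): the q = 0 recursion (Eq:an0iteration) plus (Pn0)
# reproduces the Jacobi polynomial in (eJacobi).
import sympy as sp

z, xi = sp.symbols('z xi')
g0, g1, lam, n = sp.Rational(1, 3), sp.Rational(1, 5), sp.Rational(7, 4), 3
gt0, gt1 = lam - g0, lam - g1                # tilde g_0, tilde g_1

gen0 = (1 - xi)**gt0 * (1 + xi)**gt1 * (1 - 2*z*xi + xi**2)**(-lam)
ser = sp.series(gen0, xi, 0, n + 1).removeO()
f0 = [sp.expand(ser.coeff(xi, m)) for m in range(n + 1)]   # f_m^{(0)}(z)

gam0 = lambda mu: gt0*(gt0 - 1) + (-1)**mu*gt1*(gt1 - 1)   # gamma_0^mu
alpha = {n: sp.Integer(1)}
for m in range(n - 1, -1, -1):               # eq. (Eq:an0iteration)
    alpha[m] = sum(mu*gam0(mu)*alpha[m + mu]
                   for mu in range(1, n - m + 1))/((m - n)*(m + n + g0 + g1))

Nn = sp.rf(n + g0 + g1, n)/(4**n*sp.rf(lam, n))            # eq. (cNn)
Pn0 = sp.expand(Nn*sum(alpha[m]*f0[m] for m in range(n + 1)))
target = sp.jacobi(n, g0 - sp.Rational(1, 2), g1 - sp.Rational(1, 2), z)
assert sp.expand(Pn0 - target) == 0
\end{verbatim}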
\section{Recursive algorithms}\label{secPerturbative}
We now present algorithms to compute the expansion coef\/f\/icients $\mathcal{P}_n^{(\ell)}(z)$ and $\mathcal{E}_n^{(\ell)}$ of our solution def\/ined in \eqref{P}, which are based on Proposition~\ref{prop0}.
The f\/irst algorithm given in Section~\ref{secPerturbative1} is such that it can be used for all values of $\kappa$, including $\kappa=0$.
We then give a second algorithm, which is a variant of the f\/irst, which is simpler but requires $\kappa\neq 0$; see Section~\ref{secPerturbative2}.
\subsection{Algorithm I}\label{secPerturbative1}
We compute the functions $\mathcal{P}_n^{(\ell)}(z)$ and $\mathcal{E}_n^{(\ell)}$ def\/ined in \eqref{P} by solving the equations in \eqref{aneqs}--\eqref{ic} with the ansatz
\begin{gather}\label{anEnseries}
\alpha_n(m) = \sum_{\ell=0}^\infty \alpha_n^{(\ell)}(m)q^\ell,\qquad \mathcal{E}_n = \sum_{\ell=0}^\infty \mathcal{E}_n^{(\ell)}q^\ell,
\end{gather}
which leads to recursive relations allowing a straightforward solution. Inserting this solution in \eqref{Pnseries} and using Lemma~\ref{Lemma:fn2} one obtains representations of the functions $\mathcal{P}_n^{(\ell)}(z)$ which make manifest that they are polynomials of degree $n+\ell$ in $z$ for $n+\ell\geq 0$ and zero otherwise.
To formulate this result we f\/ind it useful to introduce the shorthand notation
\begin{gather}\label{bnml}
b_n^{(\ell)}(k) \equiv E^{(0)}_{n+k}-E^{(0)}_n-\kappa\ell=k(k+2n+g_0+g_1) -\kappa\ell
\end{gather}
($E^{(0)}_n$ in \eqref{PTsolution}) and recall the def\/inition of $\lambda$ in \eqref{lambda}. Note that the denominators in the fractions in \eqref{resonance1} are equal to $b_n^{(\ell)}(m-n)$.
\begin{Proposition}\label{prop1}
Let $n\in\mathbb{Z}$, $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$, $\{f^{(\ell)}_m(z)\}_{m\in\mathbb{Z}}$ the functions defined and characterized in Lemma~{\rm \ref{Lemma:fn2}}, and assume that either $\Im(\kappa)\neq 0$ and $g_0+g_1\in\mathbb{R}$ or $\kappa=0$ and $g_0+g_1\notin \mathbb{Z}$.\footnote{Note that the latter are the no-resonance conditions discussed in Section~\ref{secResonances}.} Then the non-stationary Heun equation in \eqref{Heun} has a~unique solution as in \eqref{solution}--\eqref{P} given by
\begin{gather}\label{Pnellz}
\mathcal{P}_n^{(\ell)}(z) = \mathcal{N}_n\sum_{\ell'=0}^{\ell} \sum_{m=-\ell'}^{n+\ell-\ell'} \alpha_n^{(\ell-\ell')}(m)f_m^{(\ell')}(z)
\end{gather}
with $\mathcal{N}_n$ in \eqref{cNn}, and $\alpha_n^{(\ell)}(m)$, $\mathcal{E}^{(\ell)}_n$ are determined by the following recursion relations,
\begin{gather}
\alpha_n^{(\ell)}(m) = \frac1{b_n^{(\ell)}(m-n)}\Biggl( \sum_{\ell'=1}^\ell \mathcal{E}_n^{(\ell')}\alpha_n^{(\ell-\ell')}(m) + \sum_{\mu=1}^{n-m+\ell}\mu \gamma_0^\mu \alpha_n^{(\ell)}(m+\mu) \nonumber\\
\hphantom{\alpha_n^{(\ell)}(m) =}{} + \sum_{\ell'=0}^{\ell-1} \sum_{\mu=1}^{\ell-\ell'}\sum_{k=1}^{\left \lfloor{\frac{\ell-\ell'}{\mu}}\right \rfloor } \mu \gamma_k^\mu \delta_{\ell,\ell'+k\mu} \big[\alpha_n^{(\ell')}(m+\mu)+\alpha_n^{(\ell')}(m-\mu)\big] \Biggr)\label{recur22}
\end{gather}
for $m\neq n$, $\ell\geq 0$, and
\begin{gather}\label{Eellneqs}
\mathcal{E}^{(\ell)}_n =
- \sum_{\mu=1}^{\ell}\mu \gamma_0^\mu \alpha_n^{(\ell)}(n+\mu) -
\sum_{\ell'=0}^{\ell-1} \sum_{\mu=1}^{\ell-\ell'}\sum_{k=1}^{\left \lfloor{\frac{\ell-\ell'}{\mu}} \right\rfloor } \mu\gamma_k^\mu \delta_{\ell,\ell'+k\mu} \big[\alpha_n^{(\ell')}(n+\mu)+\alpha_n^{(\ell')}(n-\mu)\big]
\end{gather}
for $\ell\geq 1$, together with the following conditions
\begin{gather}\label{ic3}
\alpha_n^{(0)}(n)=1,\qquad \alpha_n^{(\ell)}(n)=0\qquad \forall\, \ell\geq 1,\qquad \alpha_n^{(\ell)}(m) = 0\qquad \forall\, m>n+\ell,\quad \ell\geq 0.
\end{gather}
The functions $\mathcal{P}_n^{(\ell)}(z)$ thus obtained are polynomials of degree $n+\ell$ in $z$ for $n+\ell\geq 0$ and zero otherwise, and \eqref{eJacobi} holds true provided $-(g_0+g_1)\notin\mathbb{N}$.
\end{Proposition}
(The proof can be found at the end of this section.)
The equations in \eqref{recur22}--\eqref{ic3} comprise the f\/irst of our perturbative solution algorithms.
It is important to note that it has a triangular structure and is f\/inite, which implies that it determines each coef\/f\/icient $\alpha_n^{(\ell)}(m)$ and $\mathcal{E}_n^{(\ell)}$ as a sum of f\/initely many terms.
To be specif\/ic: The set of pairs $(\ell,m)$ is partially ordered as follows,
\begin{gather}
\label{order}
(\ell',m')\prec (\ell,m)\Leftrightarrow (\ell'<\ell) \qquad \text{or} \qquad ( \ell'=\ell\mbox{ and } m'>m ) ,
\end{gather}
and \eqref{recur22} and \eqref{Eellneqs} have the form
\begin{gather*}
\alpha_n^{(\ell)}(m) = \sum_{(\ell',m')\prec (\ell,m)} B_{(\ell,m),(\ell',m')}\alpha_n^{(\ell')}(m'),\\
\mathcal{E}^{(\ell)}_n=\sum_{(\ell',m')\prec(\ell,n)} C_{(\ell,n),(\ell',m')}\alpha_n^{(\ell')}(m')
\end{gather*}
with explicitly known coef\/f\/icients $B_{(\ell,m),(\ell',m')}$ and $C_{(\ell,n),(\ell',m')}$ which, for f\/ixed f\/irst argument, are non-zero only for {\em finitely} many values of the second argument $(\ell',m')$. For the convenience of the reader we compute the solution $\alpha_n^{(\ell)}(m)$, $\mathcal{E}_n^{(\ell)}$ of this system of equations analytically for $\ell=0,1,2$ in Appendix~\ref{appExplicitResults}, and we obtain the following result for the generalized eigenvalues, $\mathcal{E}_n=(P/2)^2 +\mathcal{E}_n^{(1)}q + \mathcal{E}_n^{(2)}q^2+O(q^3)$ with $P=2n+g_0+g_1$ and
\begin{gather}\label{cEn1}
\mathcal{E}_n^{(1)}= \gamma_0^1\gamma_1^1\left(\frac1{P-1}-\frac1{P+1-\kappa}\right),\\
\mathcal{E}_n^{(2)} = \left(\gamma_0^1\right)^2\left(\frac{1}{P-1} - \frac{1}{P+1-2\kappa} \right) + \left(\gamma_1^1\right)^2 \left(\frac{1}{P-1+\kappa} - \frac{1}{P+1-\kappa} \right)\nonumber\\
\hphantom{\mathcal{E}_n^{(2)}=}{} + 4\gamma_0^0\gamma_1^0 \left( \frac{1}{2(P-2)} - \frac{1}{2(P+2)-2\kappa}\right)
- 2 (\gamma_0^1)^{2} \gamma_1^0 \left( \frac{1}{[2(P-2)][P-1]} \right. \nonumber\\
\left.\hphantom{\mathcal{E}_n^{(2)}=}{}
- \frac{1}{[P-1][P+1-2\kappa]} +\frac{1}{[P+1-2\kappa][2(P+2)-2\kappa]}\right)\nonumber\\
\hphantom{\mathcal{E}_n^{(2)}=}{}
- 2\gamma_0^0 \left(\gamma_1^1\right)^2 \left( \frac{1}{[2(P-2)][P-1+\kappa]} - \frac{1}{[P-1+\kappa][P+1-\kappa]} \right.\nonumber\\
\left.\hphantom{\mathcal{E}_n^{(2)}=}{} + \frac{1}{[P+1-\kappa][2(P+2)-2\kappa]} \right)
- (\gamma_0^1)^2(\gamma_1^1)^2 \left( \frac{1}{[P-1+\kappa][P-1]^2} \right. \nonumber\\
\hphantom{\mathcal{E}_n^{(2)}=}{} - \frac{1}{[P+1-2\kappa][P+1-\kappa]^2}- \frac{1}{[P-1+\kappa][P-1][P+1-\kappa]}
\nonumber\\
\hphantom{\mathcal{E}_n^{(2)}=}{} + \frac{1}{[P-1][P+1-\kappa][P+1-2\kappa]} -\frac{1}{[2(P-2)][P-1][P-1+\kappa]}\nonumber\\
\left.\hphantom{\mathcal{E}_n^{(2)}=}{}
+ \frac{1}{[P+1-\kappa][P+1-2\kappa][2(P+2)-2\kappa]} \right).\label{cEn2}
\end{gather}
This computation can be straightforwardly implemented in a symbolic programming language like MAPLE or MATHEMATICA and, in this way, extended to higher values of $\ell$.\footnote{We computed $\mathcal{E}_n^{(\ell)}$ up to $\ell=6$ using ordinary laptops and with computation times of the order of minutes.}
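For def\/initeness, the following {\tt sympy} transcription of this algorithm (our code; the truncation ranges are ours and suf\/f\/ice since all omitted coef\/f\/icients either vanish or do not enter $\mathcal{E}_n^{(\ell)}$ for $\ell\leq L$) reproduces \eqref{cEn1}:
\begin{verbatim}
# Sketch (ours) of Algorithm I, eqs. (recur22)-(ic3), to order L in q.
import sympy as sp

g0, g1, g2, g3, kappa, n = sp.symbols('g0 g1 g2 g3 kappa n')
lam = (g0 + g1 + g2 + g3 - kappa)/2
gt = [lam - g for g in (g0, g1, g2, g3)]     # tilde g_nu, eq. (lambda)
P = 2*n + g0 + g1                            # eq. (Pdef)

def gamma(k, mu):                            # eq. (gammanu)
    if k % 2 == 0:
        return gt[0]*(gt[0] - 1) + (-1)**mu*gt[1]*(gt[1] - 1)
    return (-1)**mu*gt[2]*(gt[2] - 1) + gt[3]*(gt[3] - 1)

def b(ell, s):                               # eq. (bnml) with s = m - n
    return s*(s + P) - kappa*ell

L, alpha, E = 2, {}, {}                      # alpha[(ell, m - n)], E[ell]

def a(ell, s):                               # zero whenever never assigned
    return alpha.get((ell, s), sp.Integer(0))

def rhs(ell, s):                             # r.h.s. of eq. (recur22)
    r = sum(E[lp]*a(ell - lp, s) for lp in range(1, ell + 1) if lp in E)
    r += sum(mu*gamma(0, mu)*a(ell, s + mu) for mu in range(1, ell - s + 1))
    for lp in range(ell):                    # Kronecker delta: ell = lp + k*mu
        for mu in range(1, ell - lp + 1):
            if (ell - lp) % mu == 0:
                r += mu*gamma((ell - lp)//mu, mu)*(a(lp, s + mu) + a(lp, s - mu))
    return r

alpha[(0, 0)] = sp.Integer(1)                # alpha_n^{(0)}(n) = 1
for ell in range(L + 1):
    for s in range(ell, 0, -1):              # m > n first (E^{(ell)} not needed)
        alpha[(ell, s)] = sp.cancel(rhs(ell, s)/b(ell, s))
    if ell >= 1:
        E[ell] = sp.expand(-rhs(ell, 0))     # eq. (Eellneqs)
        alpha[(ell, 0)] = sp.Integer(0)      # alpha_n^{(ell)}(n) = 0
    for s in range(-1, ell - L - 1, -1):     # m < n, down to n - (L - ell)
        alpha[(ell, s)] = sp.cancel(rhs(ell, s)/b(ell, s))

E1 = gamma(0, 1)*gamma(1, 1)*(1/(P - 1) - 1/(P + 1 - kappa))
assert sp.simplify(E[1] - E1) == 0           # matches eq. (cEn1)
\end{verbatim}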
As proved by Ruijsenaars in \cite{RHeun}, the Heun equation in \eqref{Heun} for $\kappa=0$ has solutions $\psi_n(x)$ and corresponding eigenvalues $E_n$ such that the latter are invariant under all permutations of the following af\/f\/ine combinations of coupling parameters,
\begin{gather}\label{S4}
c_0\equiv g_0+g_2-1,\qquad c_1\equiv g_1+g_3-1,\qquad c_2\equiv g_1-g_3,\qquad c_3\equiv g_0-g_2.
\end{gather}
We used MAPLE to check that the coef\/f\/icients $\mathcal{E}_n^{(\ell)}$, $\ell=1,2,3,4,5$, determined by the algorithm in Proposition~\ref{prop1} have this permutation symmetry for $\kappa=0$ (but not for $\kappa\neq 0$).
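This check is easy to reproduce; for example, the following sketch (ours) conf\/irms the permutation invariance of $\mathcal{E}_n^{(1)}$ in \eqref{cEn1} at $\kappa=0$, using that \eqref{S4} inverts to $g_0=\frac12(c_0+c_3+1)$, $g_2=\frac12(c_0-c_3+1)$, $g_1=\frac12(c_1+c_2+1)$ and $g_3=\frac12(c_1-c_2+1)$:
\begin{verbatim}
# Sketch (ours): S4 invariance of cE_n^{(1)} at kappa = 0.
import itertools
import sympy as sp

c = sp.symbols('c0:4')
n = sp.Symbol('n')

def E1(cs):
    g0 = (cs[0] + cs[3] + 1)/2; g2 = (cs[0] - cs[3] + 1)/2
    g1 = (cs[1] + cs[2] + 1)/2; g3 = (cs[1] - cs[2] + 1)/2
    lam = (g0 + g1 + g2 + g3)/2              # kappa = 0
    gt = [lam - g for g in (g0, g1, g2, g3)]
    gam01 = gt[0]*(gt[0] - 1) - gt[1]*(gt[1] - 1)     # gamma_0^1
    gam11 = -gt[2]*(gt[2] - 1) + gt[3]*(gt[3] - 1)    # gamma_1^1
    P = 2*n + g0 + g1                        # g0 + g1 is S4 invariant
    return gam01*gam11*(1/(P - 1) - 1/(P + 1))

E1ref = E1(c)
assert all(sp.simplify(E1(p) - E1ref) == 0
           for p in itertools.permutations(c))
\end{verbatim}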
\begin{proof}[Proof of Proposition~\ref{prop1}]
Insert the ansatz in \eqref{anEnseries} and the geometric series for $q^{\mu}/(1-q^{2\mu})$ into \eqref{aneqs}, compare terms which have the same power in $q$, and obtain
\begin{gather}
b_n^{(\ell)}(m-n) \alpha_n^{(\ell)}(m) = \sum_{\ell'=1}^\ell \mathcal{E}_n^{(\ell')}\alpha_n^{(\ell-\ell')}(m) + \sum_{\mu=1}^\infty\mu \gamma_0^\mu \alpha_n^{(\ell)}(m+\mu) \nonumber\\
\hphantom{b_n^{(\ell)}(m-n) \alpha_n^{(\ell)}(m) =}{}
+ \sum_{\ell'=0}^{\ell-1} \sum_{\mu=1}^\infty\sum_{k=1}^\infty \mu\gamma_k^\mu \delta_{\ell,\ell'+k\mu} \big[\alpha_n^{(\ell')}(m+\mu)+\alpha_n^{(\ell')}(m-\mu)\big],\label{recur}
\end{gather}
where $b_n^{(\ell)}(k)$ is short for $-\kappa\ell + E_{n+k}^{(0)} - \mathcal{E}_n^{(0)}$ for $\ell=0,1,2,\ldots$.
The f\/irst condition in \eqref{ic3} implies $\mathcal{E}_n^{(0)}=E^{(0)}_n$, and one obtains the formula for $b_n^{(\ell)}(k)$ in \eqref{bnml}.
One can check that the f\/irst two conditions in \eqref{ic3} are consistent with \eqref{recur} (details are given in Section~\ref{secTrigonometric}). Moreover, they imply that it is consistent to set
\begin{gather}\label{hwconditions1}
\alpha^{(\ell)}_n(m)=0\qquad \forall\, m>n+\ell,\quad \ell\geq 0
\end{gather}
(this can be proved by induction: \eqref{hwconditions1} is true by assumption for $\ell=0$; for $\ell\geq 1$ it follows from~\eqref{recur} that $\alpha_n^{(\ell)}(m)$ for $m>n+\ell$ is a linear combination of $\alpha_n^{(\ell')}(m')$ with $\ell'<\ell$, $m'\geq m-\mu$ and $\mu\geq 1$ constrained by the Kronecker deltas in \eqref{recur}, i.e., $m'\geq m-(\ell-\ell')>n+\ell'$, which proves the claim). Thus one can restrict the second sum in~\eqref{recur} to $1\leq\mu\leq n-m+\ell$. One can also check that the inf\/inite sums in the terms in the second line in \eqref{recur} can be replaced by f\/inite ones: the Kronecker deltas there are non-zero for $\ell-\ell'=2k\mu$ and $\ell-\ell'=(2k-1)\mu$, and since $\mu,k\geq 1$ this is possible only if $1\leq\mu\leq\ell-\ell'$ and $1\leq k \leq (\ell-\ell')/(2\mu)+\frac{1}{2} $. Since $b_n^{(\ell)}(m-n)$ is non-zero for $m\neq n$ by assumption, we obtain~\eqref{recur22} from~\eqref{recur}. For $m=n$ and $\ell\geq 1$, the f\/irst two conditions in \eqref{ic3} make~\eqref{recur} into an equation determining $\mathcal{E}^{(\ell)}_n$ as in~\eqref{Eellneqs}.
\end{proof}
\subsection{Algorithm II}\label{secPerturbative2}
For $\kappa\neq 0$, the condition $\alpha_n^{(\ell\geq 1)}(n)=0$ in \eqref{ic3} can be replaced by the condition $\mathcal{E}_n^{(\ell\geq 1)}=0$ and, in this way, a somewhat simpler solution algorithm is obtained.
Recall the def\/initions of $\lambda$ in \eqref{lambda}, $\gamma_k^\mu$ in \eqref{gammanu} and $b_n^{(\ell)}(k)$ in \eqref{bnml}.
\begin{Proposition}\label{prop2}
Let $n\in\mathbb{Z}$, $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$, $\big\{f^{(\ell)}_m(z)\big\}_{m\in\mathbb{Z}}$ the functions defined and characterized in Lemma~{\rm \ref{Lemma:fn2}}, and assume that $\Im(\kappa)\neq 0$ and $g_0+g_1\in\mathbb{R}$.\footnote{The latter are no-resonance conditions discussed in Section~\ref{secResonances}.}
Then the non-stationary Heun equation in \eqref{Heun} has a unique solution $\psi_n(x)$, $E_n$ as in \mbox{\eqref{solution}--\eqref{P}} given by~\eqref{cNn}, \eqref{Pnellz} and
\begin{gather}\label{Ensimple}
\mathcal{E}_n = \left(n+\frac{g_0+g_1}{2} \right)^2,
\end{gather}
with coefficients $\alpha_n^{(\ell)}(m)$ determined by the following recursion relations,
\begin{gather}
\alpha_n^{(\ell)}(m) = \frac1{b_n^{(\ell)}(m-n)}\Biggl( \sum_{\mu=1}^{n-m+\ell}\mu \gamma_0^\mu \alpha_n^{(\ell)}(m+\mu) \nonumber\\
\hphantom{\alpha_n^{(\ell)}(m) =}{} +\sum_{\ell'=0}^{\ell-1} \sum_{\mu=1}^{\ell-\ell'}\sum_{k=1}^{\left \lfloor{\frac{\ell-\ell'}{\mu}}\right \rfloor } \mu \gamma_k^\mu \delta_{\ell,\ell'+k\mu} \big[\alpha_n^{(\ell')}(m+\mu)+\alpha_n^{(\ell')}(m-\mu)\big] \Biggr)\label{recur33}
\end{gather}
for all $m\neq n$ if $\ell=0$ and all $m\in\mathbb{Z}$ if $\ell\geq 1$, together with the following conditions
\begin{gather}\label{ic33}
\alpha_n^{(0)}(n)=1,\qquad \alpha_n^{(\ell)}(m)=0\qquad \forall\, m>n+\ell,\quad \ell\geq 0.
\end{gather}
The functions $\mathcal{P}_n^{(\ell)}(z)$ thus obtained are polynomials of degree $n+\ell$ in $z$ for $n+\ell\geq 0$ and zero otherwise.
\end{Proposition}
It follows from \eqref{anEnseries} that the f\/irst two conditions in \eqref{ic3} are equivalent to $\alpha_n^{I}(n)=1$, which gives rise to $\mathcal{E}_n^I=E^{(0)}_n+O(q)$, whereas the conditions in \eqref{ic33} together with $\mathcal{E}_n^{(\ell\geq 1)}=0$ correspond to relaxing this to $\alpha_n^{II}(n) = 1+O(q)$ and instead requiring $\mathcal{E}_n^{II}=E^{(0)}_n$ (the superscripts $I$ and $II$ here distinguish the results of the algorithms in Propositions~\ref{prop1} and~\ref{prop2}, respectively). From this and \eqref{invariance} we conclude that the results of the two algorithms are related as follows,
\begin{gather}\label{relprops}
\alpha_n^{I}(m) = \frac{\alpha_n^{II}(m) }{\alpha_n^{II}(n)},\qquad \mathcal{E}_n^I = E^{(0)}_n + \frac{1}{\alpha_n^{II}(n)}\kappa q\frac{\partial}{\partial q}\alpha_n^{II}(n).
\end{gather}
This shows that the algorithm in Proposition~\ref{prop2} is a re-summation of the one in Proposition~\ref{prop1}: it follows from \eqref{relprops} that, by computing $\alpha^{II}_n(n)$ up to some order $O(q^{N+1})$, one obtains a Pad\'e approximation of the generalized eigenvalues $\mathcal{E}_n^I$ as follows,
\begin{gather}\label{Pade}
\mathcal{E}_n^I = E^{(0)}_n +\frac{\kappa\sum\limits_{\ell=1}^N \ell\alpha_n^{(II,\ell)}(n)q^\ell}{1+ \sum\limits_{\ell=1}^{N-1}\alpha_n^{(II,\ell)}(n)q^\ell} + O\big(q^{N+1}\big) .
\end{gather}
As an example we give
\begin{gather*}
\alpha_n^{(II,1)}(n) = \frac{\mathcal{E}_n^{(1)}}{\kappa}, \qquad \alpha_n^{(II,2)}(n) = \frac{\mathcal{E}_n^{(2)}}{2\kappa} + \frac{\big(\mathcal{E}_n^{(1)}\big)^2}{2\kappa^2},
\end{gather*}
with $\mathcal{E}_n^{(1)}$ and $\mathcal{E}_n^{(2)}$ in \eqref{cEn1} and \eqref{cEn2}, which is obtained by straightforward computations using the algorithm in Proposition~\ref{prop2}; note that, when this is inserted in \eqref{Pade} for $N=2$, the singularities at $\kappa=0$ in the numerator and denominator cancel. From our results above it is clear that this cancellation of singularities at $\kappa=0$ happens for arbitrary $N$ (this can be proved by the same argument we used to prove Lemma~\ref{lemResonance} in Section~\ref{secResonances}). This implies that the limit $\kappa\to 0$ is well-def\/ined (in the sense made precise in Lemma~\ref{lemResonance}), and we thus obtain from \eqref{Pade} an interesting representation of the eigenvalues of the $BC_1$ Inozemtsev model.
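Indeed, inserting these two coef\/f\/icients into \eqref{Pade} with $N=2$ and expanding the fraction in $q$ gives
\begin{gather*}
E^{(0)}_n + \kappa\big(\alpha_n^{(II,1)}(n)q+2\alpha_n^{(II,2)}(n)q^2\big)\big(1-\alpha_n^{(II,1)}(n)q\big) + O\big(q^3\big)\\
{}= E^{(0)}_n + \mathcal{E}_n^{(1)}q+\mathcal{E}_n^{(2)}q^2 + O\big(q^3\big)
\end{gather*}
since $\kappa\alpha_n^{(II,1)}(n)=\mathcal{E}_n^{(1)}$ and $2\kappa\alpha_n^{(II,2)}(n)-\kappa\big(\alpha_n^{(II,1)}(n)\big)^2=\mathcal{E}_n^{(2)}$, in agreement with the result of Algorithm~I.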
\begin{Remark}\label{remComplexity}
It is interesting to note that \eqref{Pade} provides a simpler representation of the Taylor coef\/f\/icients $\mathcal{E}_n^{(\ell)}$ of the eigenvalues of the Heun equation for larger values of $\ell$: we computed $\mathcal{E}_n^{(I,\ell)}$ and $\alpha_n^{(II,\ell)}(n)$ using MAPLE for $\ell=1,2,3,4,5$ and found that the former have $2$, $18$, $162$, $1776$, $21002$ distinct terms,\footnote{The numbers $2$ and $18$ in comparison with \eqref{cEn1} and \eqref{cEn2} explain what we mean by ``number of distinct terms''.} whereas the latter have $2$, $18$, $148$, $1298$, $11632$ distinct terms, respectively.
Moreover, as discussed in Appendix~\ref{appCombinatorics}, there exist explicit formulas for the coef\/f\/icients $\mathcal{E}_n^{(I,\ell)}$ and $\alpha_n^{(II,\ell)}(n)$, and the ones for the latter are much simpler than the ones for the former.
\end{Remark}
\section{Solution to all orders}\label{secAllOrders}
In this section we use results in \cite{ELeCS2} to obtain perturbative solutions of the equation in \eqref{Heun} to all orders, in generalization of the results in Section~\ref{secTrigonometric} for $q=0$.
As shown in Section~\ref{Proofprop0}, for all $n\in\mathbb{Z}$, \eqref{Heun} has solutions $\psi_n(x)$, $E_n$ as in \eqref{solution} and \eqref{Pnseries}--\eqref{cNn} provided that the coef\/f\/icients $\alpha_n(m)$ and the (shifted) eigenvalue $\mathcal{E}_n$ are (suitable) solutions of the dif\/ferential-dif\/ference equations in \eqref{aneqs}.
We observe that, by introducing the notation\footnote{As explained further below, the following def\/ines operators $\mathbb{A}$ and $\mathbb{B}$ on the vector space of sequences $\alpha=\{\alpha(m)\}_{m\in\mathbb{Z}}$.}
\begin{gather}\label{AbbBbb}
(\mathbb{A}\alpha)(m) \equiv \left(-\kappa q\frac{\partial}{\partial q} + E^{(0)}_m\right) \alpha(m),\qquad
(\mathbb{B}\alpha)(m) \equiv \sum_{\mu\in{\mathbb{Z}'} } S_\mu \alpha(m+\mu)
\end{gather}
with $E^{(0)}_m$ in \eqref{PTsolution} and $S_\mu$ in \eqref{gammanu1}, one can write \eqref{aneqs} in a simple way as follows,
\begin{gather}\label{eCSeq}
\big([\mathbb{A}-\mathcal{E} ]\alpha\big)(m) = (\mathbb{B}\alpha)(m)
\end{gather}
(we f\/ind it convenient to suppress the dependence on $n$ here). We note that the notation introduced here is such that we can use results in \cite[Section~4.1]{ELeCS2} as they stand. We f\/irst consider the special case $\kappa=0$ (Heun equation) in Section~\ref{secAllOrders0}, and then the general case in Section~\ref{secAllOrders1}. We note that Theorem~\ref{Thm1} is a generalization of Proposition~\ref{prop1} restricted to $\kappa=0$; Theorem~\ref{Thm2} generalizes Proposition~\ref{prop2}.
\subsection{Heun equation}\label{secAllOrders0}
The Heun equation corresponds to the special case $\kappa=0$ of \eqref{Heun}. This is an eigenvalue equation for a 1D Schr\"odinger operator, and $\psi(x)$ and $E$ have a quantum mechanical interpretation as energy eigenfunction and eigenvalue, respectively. In this section we show how to use results in~\cite{ELeCS2} to obtain a perturbative solution for this case to all orders.
Using the Feshbach method and expanding a Neumann series yields implicit solutions of \eqref{AbbBbb}--\eqref{eCSeq} for $\kappa=0$ to all orders \cite{ELeCS2}. To formulate this result we introduce the following two functions of a complex variable $z$ ($n\in\mathbb{N}_0$ and $m\in\mathbb{Z}$ are f\/ixed),
\begin{gather}
\Phi_n(z) \equiv -\sum_{s=2}^\infty \sum_{\mu_1\in{\mathbb{Z}'}} S_{\mu_1} \cdots \sum_{\mu_s\in{\mathbb{Z}'}} S_{\mu_s} \nonumber\\
\hphantom{\Phi_n(z) \equiv}{} \times
\frac{\delta(0,\mu_1+\cdots+\mu_s)}{\big[b^{(0)}_{n}(\mu_1)-z\big]_n\big[b^{(0)}_{n}(\mu_1+\mu_2)-z\big]_n\cdots \big[b^{(0)}_{n}(\mu_1+\cdots+\mu_{s-1})-z\big]_n}
\nonumber\\
\hphantom{\Phi_n(z)}{}
= -\sum S_{\mu_1} S_{\mu_2} \frac{\delta(0,\mu_1+\mu_2)}{\big[b^{(0)}_{n}(\mu_1)-z\big]_n}\nonumber\\
\hphantom{\Phi_n(z) \equiv}{}
-\sum S_{\mu_1} S_{\mu_2} S_{\mu_3} \frac{\delta(0,\mu_1+\mu_2+\mu_3)}{\big[b^{(0)}_{n}(\mu_1)-z\big]_n\big[b^{(0)}_{n}(\mu_1+\mu_2)-z\big]_n} + \cdots\label{Phin}
\end{gather}
and
\begin{gather}
G_n(z;m) \equiv \delta(m,n)+\sum_{s=1}^\infty
\sum_{\mu_1\in{\mathbb{Z}'}}S_{\mu_1} \cdots \sum_{\mu_s\in{\mathbb{Z}'}} S_{\mu_s}\nonumber \\
\hphantom{G_n(z;m) \equiv}{}
\times
\frac{\delta(n,m+\mu_1+\cdots+\mu_s)}{\big[b^{(0)}_{n}(m\!-\!n)\!-\!z\big]_n\big[b^{(0)}_{n}(m\!-\!n\!+\!\mu_1)\!-\!z\big]_n\cdots \big[b^{(0)}_{n}(m\!-\!n\!+\!\mu_1\!+\!\cdots\!+\!\mu_{s-1})\!-\!z\big]_n}
\nonumber\\
\hphantom{G_n(z;m)}{}
= \delta(n,m) + \sum S_{\mu}\frac{\delta(n,m+\mu)}{\big[b^{(0)}_{n}(m-n)-z\big]_n} \nonumber\\
\hphantom{G_n(z;m) \equiv}{}
+ \sum S_{\mu_1} S_{\mu_2} \frac{\delta(n,m+\mu_1+\mu_2)}{\big[b^{(0)}_{n}(m-n)-z\big]_n\big[b^{(0)}_{n}(m-n+\mu_1)-z\big]_n} +\cdots\label{Gnm}
\end{gather}
with the Kronecker delta $\delta(n,m)$, $S_\mu$ in \eqref{gammanu1}, and the convenient shorthand notations
\begin{gather}\label{DEmn}
\frac1{[b^{(0)}_{n}(k)-z]_n} \equiv \begin{cases} 1/\big[E^{(0)}_{n+k}-E^{(0)}_{n}-z\big], & k\neq 0, \\ 0, & k=0. \end{cases}
\end{gather}
It is important to note that these formulas should be interpreted in a perturbative sense as follows: for f\/ixed $N\in\mathbb{N}$, there are only f\/initely many terms in~\eqref{Phin} and~\eqref{Gnm} that are $O(q^\ell)$ with $\ell<N$ and thus, to obtain results up to $O(q^N)$-corrections, all inf\/inite series in our formulas can be truncated to {\em finite series}. The reason why it is convenient to write inf\/inite series is that it is dif\/f\/icult to give a simple general recipe for f\/inite-$N$ truncations; for example, in \eqref{Phin} one can restrict to $2\leq s\leq N$ and $-N\leq \mu_j\leq N$ for $j=1,2,\ldots,s$, but this is somewhat arbitrary since the resulting f\/inite sum still contains many terms which do not contribute to $O(q^N)$.
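To make this concrete, the following small sketch (in Python; the cut chosen is the one just mentioned and is, as stated above, somewhat arbitrary) enumerates the index tuples $(\mu_1,\ldots,\mu_s)$ retained in \eqref{Phin} by the truncation $2\leq s\leq N$, $-N\leq \mu_j\leq N$ together with the constraint $\delta(0,\mu_1+\cdots+\mu_s)$:
\begin{verbatim}
# Enumerate the finitely many index tuples kept by the truncation
# 2 <= s <= N, mu_j in {-N,...,N} \ {0}, mu_1 + ... + mu_s = 0.
from itertools import product

def truncated_tuples(N):
    mu_range = [m for m in range(-N, N + 1) if m != 0]
    return [mus
            for s in range(2, N + 1)
            for mus in product(mu_range, repeat=s)
            if sum(mus) == 0]

for N in (2, 3, 4):
    print(N, len(truncated_tuples(N)))
\end{verbatim}
Many of the tuples produced in this way still contribute only at orders $\geq O(q^N)$, which illustrates why a simple general recipe for the minimal f\/inite truncation is hard to give.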
\begin{Theorem}\label{Thm1}
Let $n\in\mathbb{Z}$, $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$, and let $\big\{f^{(\ell)}_m(z)\big\}_{m\in\mathbb{Z}}$ be the functions defined and characterized in Lemma~{\rm \ref{Lemma:fn2}}. Then the Heun equation in \eqref{Heun} for $\kappa=0$ and $g_0+g_1\notin\mathbb{Z}$ has a unique solution as in \eqref{solution} and \eqref{Pnseries}--\eqref{cNn} provided that
\begin{gather*}
\mathcal{E}_n = E^{(0)}_n+\tilde{\mathcal{E}}_n,\qquad \alpha_n(m)=G_n\big(\tilde{\mathcal{E}}_n;m\big)
\end{gather*}
with $\tilde{\mathcal{E}}_n$ the unique solution of the equation
\begin{gather*}
\tilde{\mathcal{E}}_n = \Phi_n\big(\tilde{\mathcal{E}}_n\big)
\end{gather*}
vanishing like $O(q)$ as $q\to 0$.
\end{Theorem}
(The proof can be found at the end of this section.)
From this result one can obtain the following fully explicit formula for the eigenvalues of the Heun equation,
\begin{gather}\label{Enexplicit}
\mathcal{E}_n = E^{(0)}_n + \sum_{m=1}^\infty \sum_{k_0,k_1,\ldots,k_{m-1}\in\mathbb{N}}\!\delta\left(\sum_{r=0}^{m-1}k_r, m\right)\!
\delta\left(\sum_{r=1}^{m-1}rk_r, m-1\right)\! (m-1)! \prod_{r=0}^{m-1} \frac{[\Phi_n^{(r)}]^{k_r}}{k_r!}\!\!\!
\end{gather}
with
\begin{gather}
\Phi^{(r)}_n \equiv -\sum_{s=2}^\infty \sum_{\mu_1,\ldots,\mu_s\in{\mathbb{Z}'}}\sum_{r_1,\ldots,r_{s-1}\in\mathbb{N}_0} S_{\mu_1}\cdots S_{\mu_s} \nonumber\\
\hphantom{\Phi^{(r)}_n \equiv}{} \times
\frac{\delta(0,\mu_1+\cdots+\mu_s)\delta(r_1+\cdots+r_{s-1},r)}{[b^{(0)}_{n}(\mu_1)]^{1+r_1}_n[b^{(0)}_{n}(\mu_1+\mu_2)]^{1+r_2}_n\cdots [b^{(0)}_{n}(\mu_1+\cdots+\mu_{s-1})]^{1+r_{s-1}}_n},\label{Phin111}
\end{gather}
and similarly for $\alpha_n(m)$; see Theorem~4.3.1 in \cite{ELeCS2} (one can check that the proof of the latter theorem applies as it stands to the present case). As explained in Appendix~\ref{appCombinatorics}, this formula can be used to turn the computation of the eigenvalues $\mathcal{E}_n$ of the Heun equation into a combinatorial problem; see~\eqref{Enell11} {\em ff}.
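For orientation we note that, writing out the f\/irst few terms of \eqref{Enexplicit}, one obtains
\begin{gather*}
\mathcal{E}_n = E^{(0)}_n + \Phi^{(0)}_n + \Phi^{(0)}_n\Phi^{(1)}_n
+ \big(\Phi^{(0)}_n\big)^2\Phi^{(2)}_n + \Phi^{(0)}_n\big(\Phi^{(1)}_n\big)^2 + \cdots,
\end{gather*}
which is also what one obtains by iterating the f\/ixed-point equation $\tilde{\mathcal{E}}_n=\Phi_n(\tilde{\mathcal{E}}_n)$ in Theorem~\ref{Thm1} starting from $\tilde{\mathcal{E}}_n=0$.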
\begin{Remark}
The elliptic generalizations of the Jacobi polynomials $P^{\big(g_0-\frac{1}{2},g_1-\frac{1}{2}\big)}_n(z)$ provided by Theorem~\ref{Thm1} can be formally written as
\begin{gather*}
\mathcal{P}_n(z) = {\mathcal N}_n\oint_{\mathcal{C}}\frac{d\xi}{2\pi{\rm i}\xi}\frac{\prod\limits_{\nu=0}^3\Theta_{\nu+1}(\xi)^{\tilde g_\nu}}{\Theta(z,\xi)^\lambda}\tilde{\mathcal{P}}_n(\xi)
\end{gather*}
with
\begin{gather*}
\tilde{\mathcal{P}}_n(\xi) = \xi^{-n} +\sum_{s=1}^\infty \prod_{j=1}^s\left( \sum_{\mu_j\in{\mathbb{Z}'}} \frac{ S_{\mu_j}}{\Big[b_n^{(0)}(m-n+\sum\limits_{\ell=1}^{j-1}\mu_{\ell})-\tilde{\mathcal{E}}_n \Big]_n} \right) \xi^{-n-\mu_1-\cdots-\mu_s}.
\end{gather*}
It is possible to identify $\tilde{\mathcal{P}}_n(\xi)$ with a singular eigenfunction of the Inozemtsev Hamiltonian $H(y;\{\tilde g_\nu\}_{\nu=0}^3)$ appearing in the generalized kernel function identity in Lemma~\ref{Lemma:kernel} and, in this way, extend the interpretation of the kernel function method given in \cite{ELsigma} to the present case.
\end{Remark}
\begin{proof}[Proof of Theorem~\ref{Thm1}] (We are brief since interested readers can f\/ind further details in \cite{ELeCS2}. In particular, it is explained in \cite{ELeCS2} why we can ignore questions of convergence in the argument below.)
Let $V$ be the vector space of sequences $\{\alpha(m)\}_{m\in\mathbb{Z}}$ and regard $\mathbb{A}$ and $\mathbb{B}$ in \eqref{AbbBbb} as linear operators $V\to V$. Def\/ine a projection $\mathbb{P}$ on $V$ as follows,\footnote{We write $\mathbb{P}$ short for $\mathbb{P}_n$.}
\begin{gather}\label{Pbb}
(\mathbb{P}\alpha)(m) \equiv \delta(n,m)\alpha(m)\qquad \forall\, \alpha\in V
\end{gather}
and $\mathbb{P}^\perp\equiv I-\mathbb{P}$. Then $\mathbb{A}$ in \eqref{AbbBbb} commutes with $\mathbb{P}$, and the equation $\mathbb{P}\alpha_0=\alpha_0$ is solved by $(\alpha_0)(m)\equiv \delta(n,m)$.
Thus Lemma~4.1.1 in \cite{ELeCS2} implies that
\begin{gather}\label{Feshbach}
\alpha = \big[I+\big(\mathbb{A}-\mathcal{E}-\mathbb{P}^\perp\mathbb{B}\big)^{-1}\mathbb{P}^\perp\mathbb{B}\big]\alpha_0 , \qquad
\mathcal{E}\alpha_0 = \mathbb{A}\alpha_0 -\mathbb{P}\mathbb{B}\alpha
\end{gather}
is a solution of \eqref{eCSeq}. Expanding the resolvent in a Neumann series we obtain \cite{ELeCS2}
\begin{gather}\label{alphanmEn}
\alpha(m) = \sum_{s=0}^\infty \big( \big([\mathbb{A}-\mathcal{E}]^{-1}\mathbb{P}^\perp \mathbb{B}\big)^s\alpha_0\big)(m),\!\!\!\qquad \mathcal{E} = E^{(0)}_n-\sum_{s=0}^\infty \big(\mathbb{B}\big([\mathbb{A}-\mathcal{E}]^{-1}\mathbb{P}^\perp \mathbb{B}\big)^s\alpha_0\big)(n).\!\!\!\!\!
\end{gather}
The ansatz $\mathcal{E}=E^{(0)}_n+ \tilde{\mathcal{E}}$ implies
\begin{gather}\label{Abbinv}
\big([\mathbb{A}-\mathcal{E}]^{-1} \mathbb{P}^\perp \alpha\big)(m) = \frac1{\big[b^{(0)}_{n}(m-n)-\tilde{\mathcal{E}} \big]_n}\alpha(m)
\end{gather}
using the shorthand notation in \eqref{DEmn}. Using \eqref{Abbinv} to compute \eqref{alphanmEn} we obtain $\alpha(m)=G_n(\tilde{\mathcal{E}};m)$ and $\tilde{\mathcal{E}} = \Phi_n(\tilde{\mathcal{E}})$ with the functions def\/ined in \eqref{Phin}--\eqref{Gnm} \cite{ELeCS2}. The latter should be interpreted as a non-linear equation whose solution $\tilde{\mathcal{E}}=\tilde{\mathcal{E}}_n$ vanishing like $O(q)$ determines the solution of the Heun equation we are interested in; see \cite{ELeCS2} for details.
\end{proof}
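The structure of this argument can be illustrated in a f\/inite-dimensional toy model (an illustration only; the setting above is, of course, inf\/inite-dimensional): for a non-degenerate diagonal matrix $A$ and a small perturbation $B$, a simple f\/ixed-point iteration of the two relations in \eqref{Feshbach}, written in components and with the normalization $\alpha(n)=1$, converges to an eigenvalue of $A-B$, since $[\mathbb{A}-\mathcal{E}]\alpha=\mathbb{B}\alpha$ amounts to $(A-B)\alpha=\mathcal{E}\alpha$. The following Python sketch (all matrices and parameters are illustrative choices) demonstrates this:
\begin{verbatim}
# Finite-dimensional toy version of the Feshbach/Neumann construction.
import numpy as np

d, n = 6, 2
A = np.diag(np.arange(d, dtype=float)**2)      # nondegenerate "E^(0)_m"
rng = np.random.default_rng(0)
B = 0.05*rng.standard_normal((d, d))           # small perturbation

alpha = np.zeros(d); alpha[n] = 1.0
E = A[n, n]
for _ in range(200):
    w = B @ alpha
    E = A[n, n] - w[n]                         # projected equation at m = n
    for m in range(d):
        if m != n:                             # resolvent part for m != n
            alpha[m] = w[m]/(A[m, m] - E)

print(E, np.min(np.abs(np.linalg.eigvals(A - B) - E)))  # second number ~ 0
\end{verbatim}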
\subsection{Time-dependent Heun equation}\label{secAllOrders1}
We now present generalizations of the results in the previous section to the non-stationary case $\kappa\neq 0$.
The argument to solve \eqref{AbbBbb}--\eqref{eCSeq} in the proof of Theorem~\ref{Thm1} can be generalized to $\kappa\neq 0$ if the following projection is used,
\begin{gather}\label{Pbb1}
(\mathbb{P}\alpha)^{(\ell)}(m) = \delta(\ell,0)\delta(n,m)\alpha^{(\ell)}(m)
\end{gather}
where $\alpha^{(\ell)}(m)$ are def\/ined as coef\/f\/icients of the formal power series in $q$ (see \eqref{anEnseries}); with that, the results in \eqref{Feshbach} hold true as they stand.
However, in the present case, the second of these equations is trivially solved by $\mathcal{E}=E^{(0)}_n$, and this implies a stronger result:
\begin{Theorem}\label{Thm2}
Let $n\in\mathbb{Z}$, $-\lambda\notin\mathbb{N}_0$ for $n>0$ and $-(g_0+g_1)\notin\mathbb{N}_0$ for $n<0$, let $\big\{f^{(\ell)}_m(z)\big\}_{m\in\mathbb{Z}}$ be the functions defined and characterized in Lemma~{\rm \ref{Lemma:fn2}}, and assume that $\Im(\kappa)\neq 0$ and $g_0+g_1\in\mathbb{R}$.\footnote{The latter are no-resonance conditions discussed in Section~\ref{secResonances}.}
Then the non-stationary Heun equation in \eqref{Heun} has a unique solution $\psi_n(x)$, $E_n$ as in \mbox{\eqref{solution}--\eqref{P}} given by \eqref{cNn}, \eqref{Pnellz}, \eqref{Ensimple} and the coefficients
\begin{gather}
\alpha^{(\ell)}_n(m) = \sum_{s=0}^\infty \sum_{k_1,\ldots,k_s\in\mathbb{N}_0} \sum_{\mu_1,\ldots,\mu_s\in{\mathbb{Z}'}} S_{\mu_1}(k_1)\cdots S_{\mu_s}(k_s)
\nonumber\\ \times
\frac{\delta(\ell,|\mu_1|k_1+\cdots+|\mu_s|k_s)\delta(n,m+\mu_1+\cdots+\mu_s)}{\big[b^{(\ell)}_{n}(m\!-\!n)\big] \big[b^{(\ell-|\mu_1|k_1)}_{n}(m\!-\!n\!+\!\mu_1)\big] \cdots \big[b^{(\ell-|\mu_1|k_1\!-\!\cdots\!-\!|\mu_{s-1}|k_{s-1})}_{n}(m\!-\!n\!+\!\mu_1\!+\!\cdots\!+\!\mu_{s-1})\big]} \nonumber\\ =
\delta(\ell,0)\delta(n,m) + \sum S_\mu(k) \frac{\delta(\ell,|\mu|k)\delta(n,m+\mu)}{[b_n^{(\ell)}(m-n)]} \nonumber\\
+ \sum S_{\mu_1}(k_1) S_{\mu_2}(k_2) \frac{\delta(\ell,|\mu_1|k_1+|\mu_2|k_2)\delta(n,m+\mu_1+\mu_2)}{\big[b_n^{(\ell)}(m-n)\big]\big[b_n^{(\ell-|\mu_1|k_1)}(m-n+\mu_1)\big]} + \cdots\label{Gnm1}
\end{gather}
using the shorthand notations
\begin{gather}\label{Sellmu1}
S_{\mu}(k) \equiv \begin{cases} \mu \gamma_0^\mu & \mbox{if $k=0$ and $\mu>0$}, \\
|\mu| \gamma_{k}^\mu & \mbox{if $k\in\mathbb{N}$}, \\
0 & \mbox{otherwise }
\end{cases}
\end{gather}
with $\gamma_k^\mu$ in \eqref{gammanu}, and
\begin{gather}\label{bnml1}
\frac1{[b^{(\ell)}_n(k)]}\equiv \begin{cases} 0 & \mbox{if } \ell=0 \mbox{ and } k=0, \\
1/b_n^{(\ell)}(k) & \mbox{otherwise} \end{cases}
\end{gather}
with $b_n^{(\ell)}(k)$ in \eqref{bnml}.
\end{Theorem}
(The proof can be found at the end of this section.)
It is not dif\/f\/icult to convince oneself that the coef\/f\/icients in \eqref{Gnm1} are identical with the ones determined by the algorithm in Proposition~\ref{prop2}.
Thus, by setting $m=n$ and using the second identity in \eqref{relprops}, we obtain a formula to all orders for the generalized eigenvalues $\mathcal{E}_n$ of the time-dependent Heun equation:
\begin{gather}\label{Enexplicit1}
\mathcal{E}_n = E^{(0)}_n + \frac{\sum\limits_{\ell =1}^\infty \kappa\ell\alpha_n^{(\ell)}(n)q^\ell}{1+\sum\limits_{\ell=1}^\infty \alpha_n^{(\ell)}(n)q^\ell}
\end{gather}
with $\alpha_n^{(\ell)}(n)$ in \eqref{Gnm1} for $m=n$; the limit $\kappa\to 0$ of this formula is non-trivial but well-def\/ined. It would be interesting to f\/ind a re-summation which makes this manifest.
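At the f\/irst two orders the cancellation can be checked directly: expanding \eqref{Enexplicit1} in $q$ gives
\begin{gather*}
\mathcal{E}_n = E^{(0)}_n + \kappa\alpha_n^{(1)}(n)q
+ \kappa\Big(2\alpha_n^{(2)}(n)-\big(\alpha_n^{(1)}(n)\big)^2\Big)q^2 + O\big(q^3\big),
\end{gather*}
and inserting the coef\/f\/icients $\alpha_n^{(II,1)}(n)$ and $\alpha_n^{(II,2)}(n)$ given before Remark~\ref{remComplexity} yields $E^{(0)}_n+\mathcal{E}_n^{(1)}q+\mathcal{E}_n^{(2)}q^2+O(q^3)$, i.e., the $1/\kappa$ singularities of the individual coef\/f\/icients cancel order by order.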
We thus obtained two explicit formulas for the eigenvalues of the Heun equation in \eqref{Heun} for $\kappa=0$: the formula in \eqref{Enexplicit}, and the limit $\kappa\to 0$ of the formula in \eqref{Enexplicit1}.
The former has the advantage that it is manifestly f\/inite for $\kappa=0$, whereas the latter requires a non-trivial limit. However, as explained in Appendix~\ref{appCombinatorics}, the latter formula is much simpler than the former.
\begin{proof}[Proof of Theorem~\ref{Thm2}]
Let $V$ be the vector space of all sequences $\alpha=\{\alpha^{(\ell)}(m)\}_{\ell\in\mathbb{N}_0,m\in \mathbb{Z}}$ identif\/ied with $\alpha=\{\alpha(m)\}_{m\in\mathbb{Z}}$ as in \eqref{anEnseries}.
Then $\mathbb{A}$ and $\mathbb{B}$ in \eqref{AbbBbb} can be written as linear operators $V\to V$ as follows,
\begin{gather}\label{AbbBbb1}
(\mathbb{A}\alpha)^{(\ell)}(m) = \big({-}\kappa\ell+E^{(0)}_m\big)\alpha^{(\ell)}(m),\qquad (\mathbb{B}\alpha)^{(\ell)}(m) = \sum_{\ell'=0}^\ell \sum_{\mu\in{\mathbb{Z}'}} S^{(\ell')}_\mu \alpha^{(\ell-\ell')}(m+\mu)
\end{gather}
with
\begin{gather}\label{Sellmu}
S_{\mu}^{(\ell)} \equiv \begin{cases} \mu \gamma_0^\mu & \mbox{if $\ell=0$, $\mu>0$}, \\
|\mu| \gamma_{\ell/|\mu|}^\mu & \mbox{if $\ell/|\mu|\in\mathbb{N}$}, \\
0 & \mbox{otherwise }
\end{cases}
\end{gather}
for $\mu\in{\mathbb{Z}'}$ and $\ell\in\mathbb{N}_0$ (the latter follows with \eqref{gammanu}, \eqref{Snumu}, and \eqref{gammanu1}).
This allows us to rewrite~\eqref{aneqs} as in~\eqref{eCSeq}.
It is clear that the projection $\mathbb{P}$ in \eqref{Pbb1} commutes with $\mathbb{A}$ in \eqref{AbbBbb1}, and that the equation $\mathbb{P}\alpha_0=\alpha_0$ is solved by
\begin{gather}\label{alpha00}
(\alpha_0)^{(\ell)}(m)=\delta(\ell,0)\delta(n,m).
\end{gather}
Thus Lemma~4.1.1 in \cite{ELeCS2} implies that the solutions $\alpha$ and $\mathcal{E}$ of \eqref{aneqs} satisfy \eqref{Feshbach} with this $\alpha_0$ ($\mathbb{P}^\perp=I-\mathbb{P}$).
The def\/initions imply
\begin{gather*}
(\mathbb{P}\mathbb{B}\alpha)^{(\ell)}(m) = \delta(\ell,0)\delta(n,m)\sum_{\mu=1}^\infty \mu\gamma_0^{\mu}\alpha^{(0)}(n+\mu).
\end{gather*}
We showed already in Section~\ref{secTrigonometric} that the solution $\alpha$ of \eqref{aneqs} is such that $\alpha^{(0)}(m>n)=0$, and thus $\mathbb{P}\mathbb{B}\alpha=0$.
With that we f\/ind that the second equation in \eqref{Feshbach} is solved by $\mathcal{E}=E^{(0)}_n$.\footnote{This result was already obtained by a dif\/ferent argument in the proof of Proposition~\ref{prop2}.}
Using this the f\/irst equation in \eqref{alphanmEn} simplif\/ies to
\begin{gather}\label{alphanmEn1}
\alpha(m) = \sum_{s=0}^\infty \big( \big(\big[\mathbb{A}-E^{(0)}_n\big]^{-1}\mathbb{P}^\perp \mathbb{B}\big)^s\alpha_0\big)(m)
\end{gather}
Inserting def\/initions we f\/ind
\begin{gather*}
\big(\big[\mathbb{A}-E^{(0)}_n\big]^{-1}\mathbb{P}^\perp\mathbb{B}\alpha\big)^{(\ell)}(m) = \frac1{\big[b^{(\ell)}_n(m-n)\big]}\sum_{\ell'=0}^\ell \sum_{\mu\in{\mathbb{Z}'}} S^{(\ell')}_\mu \alpha^{(\ell-\ell')}(m+\mu)
\end{gather*}
using the notation in \eqref{bnml1}. With that one can prove by induction that
\begin{gather*}
\big(\big(\big[\mathbb{A}-E^{(0)}_n\big]^{-1}\mathbb{P}^\perp\mathbb{B}\big)^s\alpha\big)^{(\ell)}(m) = \sum_{\ell_1=0}^{\ell}\sum_{\ell_2=0}^{\ell-\ell_1}\cdots \sum_{\ell_s=0}^{\ell-\ell_1-\cdots-\ell_{s-1}}
\sum_{\mu_1,\ldots,\mu_s\in{\mathbb{Z}'}} S^{(\ell_1)}_{\mu_1}\cdots S^{(\ell_s)}_{\mu_s} \\ \qquad{}\times
\frac1{\big[b^{(\ell)}_n(m-n)\big]\big[b^{(\ell-\ell_1)}_n(m-n+\mu_1)\big]\cdots \big[b^{(\ell-\ell_1-\cdots-\ell_{s-1})}_n(m-n+\mu_1+\cdots+\mu_{s-1})\big]} \\
\qquad{} \times
\alpha^{(\ell-\ell_1-\cdots-\ell_s)}(m+\mu_1+\cdots+\mu_s)
\end{gather*}
for all $s=1,2,\ldots$. Inserting this in \eqref{alphanmEn1} and using \eqref{alpha00} give
\begin{gather}\label{Gnm11}
\alpha^{(\ell)}_n(m) = \sum_{s=0}^\infty \sum_{\ell_1,\ldots,\ell_s\in\mathbb{N}_0} \sum_{\mu_1,\ldots,\mu_s\in{\mathbb{Z}'}} S^{(\ell_1)}_{\mu_1}\cdots S^{(\ell_s)}_{\mu_s}
\\ \hphantom{\alpha^{(\ell)}_n(m) =}{} \times
\frac{\delta(\ell,\ell_1+\cdots+\ell_s)\delta(n,m+\mu_1+\cdots+\mu_s)}{b^{(\ell)}_{n}(m-n)b^{(\ell-\ell_1)}_{n}(m-n+\mu_1)\cdots b^{(\ell-\ell_1-\cdots-\ell_{s-1})}_{n}(m-n+\mu_1+\cdots+\mu_{s-1})}\nonumber
\end{gather}
(to simplify notation we extended the $\ell_j$-summations to all non-negative integers, which is possible due to the f\/irst Kronecker delta in the sum; the term for $s=0$ is by def\/inition equal to the r.h.s.\ in~\eqref{alpha00}).
The def\/inition of $S_\mu^{(\ell)}$ in \eqref{Sellmu} suggests changing summation variables from $\ell$ to $k$ with $\ell=|\mu| k$ (to reduce the number of terms in the formula which give zero contributions). Wri\-ting~$S_\mu(k)$ short for $S_\mu^{(|\mu|k)}$ we obtain the result in \eqref{Gnm1}--\eqref{Sellmu1}.
\end{proof}
\section{Final remarks}\label{secFinal}
We showed that a solution method based on kernel functions and developed to solve the elliptic Calogero--Sutherland (eCS) model \cite{ELeCS0,ELeCS2} generalizes to the non-stationary Heun equation in~\eqref{Heun}. This suggests to us that other elliptic problems, too, can be treated by this method; as a possible candidate we mention the non-stationary generalization of the eigenvalue problem for the $BC_{N}$ Inozemtsev model for $N\geq 2$ \cite{I}, which def\/ines a natural many-variable generalization of the non-stationary Heun equation:
\begin{gather}
\Biggl( \frac{{\rm i}}{\pi}\kappa \frac{\partial}{\partial\tau} + \sum_{j=1}^N\Biggl( -\frac{\partial^2}{\partial x_j^2} +\sum_{\nu=0}^3 g_\nu(g_\nu-1) \wp(x_j+\omega_\nu) \Biggr)\nonumber\\
\qquad{} + 2\lambda(\lambda-1)\sum_{1\leq j<k\leq N}\left\{\wp(x_j-x_k)+\wp(x_j+x_k) \right\} \Biggr) \psi(x)=E\psi(x) ;\label{BCN}
\end{gather}
note that generalized kernel function identities for this problem and a discrete set of $\kappa$-values (depending on the other model parameters) are known \cite{LT}.
We also obtained results beyond previous results about the eCS model: the non-stationary Heun equation in \eqref{Heun} is invariant under the transformations in \eqref{symmetry}, and we found that, for $\kappa\neq 0$, this symmetry can be exploited to construct simpler representations of the generalized eigenvalues $E$ which are useful even in the case $\kappa=0$; see \eqref{Enexplicit1}.
We expect that a formula similar to the one in \eqref{Enexplicit1} for the eigenvalues of the eCS model can be found, and that this would be interesting since it might allow to better understand the relations between the solutions of the eCS model in \cite{ELeCS2} and the one by Nekrasov and Shatiashvili \cite{NS}.
As discussed in Appendix~\ref{appCombinatorics}, the explicit formulas for the solutions of the Heun equation in Theorem~\ref{Thm1} can be regarded as translations of the problem of solving this equation into a combinatorial problem. It is remarkable that dif\/ferent elliptic problems lead to the same combinatorial problem: we f\/ind that the combinatorial structures of the solutions of the eCS model and of the non-stationary Heun equation are the same, and model details only af\/fect the basic building blocks of the solutions. We expect the same to be true for other elliptic problems like the one in \eqref{BCN}. We believe that a similar remark applies to non-stationary extensions of elliptic problems. It would be interesting to explore these combinatorial aspects of our solution in future work.
One important question about the non-stationary Heun equation is uniqueness: which conditions determine a unique solution?
Our results shed light on this question: we f\/ind that the conditions in \eqref{solution}--\eqref{eJacobi} do not f\/ix the solution uniquely, and our results suggest the following further requirements:
\begin{gather}\label{condlast}
\mathcal{E}_n^{(\ell)}=0,\qquad \mathcal{P}_n^{(\ell)}(z)=O\big(z^{n+\ell}\big) \qquad \forall\, n+\ell\geq 0,\; \ell\geq 1.
\end{gather}
It would be interesting to prove that the conditions \eqref{solution}--\eqref{eJacobi} and \eqref{condlast} imply uniqueness.
We f\/inally note that kernel functions have been used for a long time to transform the Heun equation into integral equations \cite{Erd,LW}; see also \cite{LaSl,Novikov}. This has provided powerful tools to study analytical properties of solutions; for example, this was used by Ruijsenaars in his work on the hidden permutation symmetry mentioned after \eqref{S4} \cite{RHeun}. Our approach is similar in that the kernel function we use determines the analytic structure of the solutions we construct.
However, there are also important dif\/ferences. For example, our kernel functions are {\em not} given by hypergeometric functions as the ones in \cite{Erd,LW}, and they are singular and {\em not} $L^2$ as the ones used in \cite{RHeun}. Moreover, our emphasis is on constructing explicit representations of solutions.
Many models in $D$ dimensions, with $D/2$ odd, contain chiral bosons
i.e. $p$--form gauge fields (scalars in $D = 2$) with self--dual or
anti--selfdual field strength. The description of the dynamics
of such theories through Lorentz--invariant
actions has been an unsolved problem for a long time \cite{MS}.
Previous attempts to face this problem were based on
non manifestly covariant actions \cite{nmcov,PS} or on formulations
involving an infinite set of auxiliary fields \cite{infin}.
On the other hand Siegel's approach to two dimensional chiral bosons
\cite{Siegel} admits only rather problematic extensions to
$D > 2$ \cite{armeni}.
Recently a new manifestly Lorentz--invariant
approach \cite{PST,PST3,M5} for chiral $p$-forms in $D = 2(p+1)$ with $p$
even has been proposed. This approach is based on a single auxiliary
scalar field and allows one to write actions which are manifestly
invariant under Lorentz transformations and under two
new bosonic local symmetries. The gauge fixing of these symmetries
allows one to remove the auxiliary field and to eliminate half of the
physical degrees of freedom of the $p$--form, so that its
field strength becomes (anti)--selfdual in the free case, or satisfies
a generalized selfduality condition in the interacting case.
Moreover, this approach reproduces on one hand, through an appropriate
gauge fixing, the non manifestly covariant approach of
\cite{PS} and it is, on the other hand, related to the
approach in \cite{infin} in that it provides a consistent truncation of its
infinite set of auxiliary fields.
The new approach has been first introduced in \cite{PSTplb} to
reformulate the four--dimensional Maxwell theory
without sources in such a way that its invariance under
electric/magnetic duality and under Lorentz transformations is manifest;
the coupling to electric and magnetic sources was performed in \cite{berk}.
The approach also allowed one to rewrite the heterotic string effective
action of \cite{M5alt} in a form such that duality symmetry is manifest
\cite{PSTprd}.
The method also appeared perfectly suitable \cite{PST,PST3} for
providing lagrangian formulations for theories with chiral $p$--forms.
It has been used, in particular, to obtain a manifestly Lorentz invariant
and $\kappa$--invariant action for the eleven--dimensional
$M$--theory five--brane \cite{M5} and to obtain a new
formulation for Green--Schwarz heterotic strings which involves chiral
bosons instead of heterotic chiral fermions \cite{stringa}.
An equivalent non--manifestly covariant formulation of the
$M$--theory five--brane has been given in \cite{APPS}.
This theory has also
been worked out in \cite{HS} on the basis of a purely geometrical
doubly supersymmetric approach which, however, does not furnish
a Lagrangian formulation but only the equations of motion.
Subsequently in ref. \cite{BNLPST} it has been shown that
this formulation, at the level of the field equations,
is equivalent to the ones presented in \cite{M5} and \cite{APPS}.
The coupling of all these models with chiral bosons to gravity can be
easily achieved since the approach is manifestly covariant
under Lorentz transformations; as a consequence it is
obvious that the two above mentioned bosonic symmetries, which are
a crucial ingredient of the new approach, are compatible with
diffeomorphism invariance. Therefore, to establish the general
validity of the approach, it remains to establish its compatibility
with global and local supersymmetry. This is the aim of the
present paper.
Chiral $p$--forms are, in fact, present in many supersymmetric and
supergravity models in two, six and ten dimensions.
The covariant action for the bosonic
sector of $D = 10$, $IIB$ supergravity has already been described
in \cite{IIB}.
The problem we address is whether one can deduce the dynamics of
supersymmetric models with chiral bosons from supersymmetry
invariant actions which respect also the new bosonic
symmetries of the covariant approach. There is already some
evidence that supersymmetry is compatible with the new bosonic
symmetries in a rather natural way. Indeed, on one hand in refs.
\cite{PSTplb,PST2} a four dimensional model has
been considered where these bosonic symmetries and rigid
supersymmetry are both simultaneously present:
this was obtained by a simple modification of the supersymmetry
transformations of the fermions, which vanishes on--shell.
On the other hand, in the $M$--five--brane action invariance under
$\kappa$--symmetry was achieved in a very natural way.
The present paper deals, in particular, with the problem of
constructing covariant
actions for supersymmetric and supergravity models with chiral
bosons in {\it six} dimensions.
In six dimensions there are four kinds of multiplets: the supergravity
multiplet, the tensor multiplet, the vector multiplet and the
hypermultiplet. The supergravity multiplet and the tensor multiplets
contain a chiral two--form with anti--selfdual and selfdual field
strength respectively.
The on--shell supersymmetry transformation rules as well as the field
equations for these supermultiplets are well known \cite{sugra6d}.
Moreover, in
\cite{DFR} the group--manifold action for $D = 6$ pure supergravity has been
obtained. Usually the group manifold action, when restricted to ordinary
space--time, gives rise to an action for the component fields which
is invariant under supersymmetry. In the presence of chiral bosons,
however, the component action obtained in this way fails to be susy
invariant. Proposals for covariant actions for pure supergravity in $D=
6$ have been made in \cite{armeni}, which constitutes a generalization
of Siegel's approach \cite{Siegel} from two to six dimensions, and
in \cite{Sok} where a harmonic superspace lagrangian
approach is used, which involves, however, an infinite number
of auxiliary fields. The same kind of problems arises also in $D=10$,
$IIB$ supergravity which involves a four-form with selfdual field
strength\footnote{Also for $D = 10$, $IIB$ supergravity
the on shell susy transformation rules and the field equations are well known
\cite{sugraIIB} and a group manifold action exists \cite{genIIB};
an action based on the Siegel approach has been proposed in
ref. \cite{armeni}.}.
In the present paper we shall show that the new covariant approach
for chiral $p$--forms allows one to obtain, in a natural and elegant way,
covariant and supersymmetric actions for six dimensional
models with chiral bosons which
involve the supergravity multiplet and/or tensor
multiplets. For completeness in section two we review the
covariant approach for a chiral two--form in six dimensions.
In section three we illustrate our technique for writing supersymmetric
covariant actions with chiral bosons in the case of a single free tensor
multiplet in flat space. This simple example appears rather instructive
in that it exhibits all principal features of our technique.
Along the lines of this example we construct in section four the action for
pure $N = 1$, $D = 6$ supergravity and present in section five
the action for the more general case of $N = 1$, $D = 6$ supergravity coupled
to an arbitrary number, $n$, of tensor supermultiplets.
The couplings of these multiplets with
hypermultiplets and vector multiplets will be briefly
discussed in the concluding section six.
The general strategy developed in this paper extends in a rather
straightforward way to two and ten dimensions. Particularly interesting
is the case of $IIB$, $D = 10$ supergravity whose covariant
action we hope to present elsewhere.
\section{Chiral bosons in six dimensions: the general method}
In this section we present the method for a chiral boson
in interaction with an external or dynamical gravitational
field in six dimensions. To this end we introduce the sechsbein one--forms
$e^a = d x^m {e_m}^a(x)$. With $m,n =0,\ldots,5$ we indicate
curved indices and with $a,b=0,\ldots,5$ we indicate tangent
space indices, which are raised and lowered with the flat
metric $\eta_{ab}=(1,-1,\cdots,-1)$.
We define the tangent space components of a generic
$p$--form $\phi_p$ according to
\begin{equation}\label{dec}
\phi_p={1\over p!}e^{a_1}\cdots e^{a_p}\phi_{a_p \cdots a_1},
\end{equation}
where the wedge product between forms here and in the following
will always be understood.
To consider a slightly more general self-duality condition for
interacting chiral bosons we introduce the two-form potential
$B$ and its generalized curvature three--form $H$ as
\begin{equation}\label{forms}
H=dB+C\equiv {1\over 3!}e^a e^b e^c H_{cba},
\end{equation}
where $C$ is a three-form which depends on the fields to which
$B$ is coupled, such as the graviton, the gravitino and so on,
but not on $B$ itself. The free (anti)self--dual boson
will be recovered for $C=0$ and $e_m{}^a=\delta_m{}^a$.
The Hodge--dual of the three--form $H$ is again a three--form
$H^*$ with components
$$
H^*_{abc} = \frac{1}{3!} \varepsilon_{abcdef} H^{def}.
$$
The self--dual and anti self--dual parts of $H$ are defined respectively
as the three--forms
$$
H^{\pm} \equiv \frac{1}{2} (H \pm H^*).
$$
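As a quick consistency check one can verify that this star operation squares to $+1$ on three--forms in six--dimensional Minkowski space, so that real self--dual and anti self--dual parts indeed exist. The following numerical sketch (in Python; the convention $\varepsilon_{012345}=1$ is assumed here) does this for a random three--form:
\begin{verbatim}
# Check that ** = +1 on 3-forms in 6D Minkowski space (signature +-----).
import itertools
import numpy as np

eta = np.diag([1.0, -1, -1, -1, -1, -1])
eps = np.zeros((6,)*6)
for p in itertools.permutations(range(6)):
    eps[p] = np.linalg.det(np.eye(6)[list(p)])   # sign of permutation p

def star(T):                                     # (1/3!) eps_{abcdef} T^{def}
    Tup = np.einsum('ad,be,cf,def->abc', eta, eta, eta, T)
    return np.einsum('abcdef,def->abc', eps, Tup)/6

rng = np.random.default_rng(1)
T = rng.standard_normal((6, 6, 6))
perms = [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
         ((0, 2, 1), -1), ((1, 0, 2), -1), ((2, 1, 0), -1)]
H = sum(s*T.transpose(p) for p, s in perms)/6    # antisymmetrize

print(np.allclose(star(star(H)), H))             # prints True
\end{verbatim}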
The equations of motion for interacting chiral bosons in supersymmetric
and supergravity theories, as we will see in the examples worked out in
the next sections, are in general of the form
\begin{equation}
\label{eqm}
H^{\pm}=0,
\end{equation}
for a suitable three--form $C$ whose explicit expression is usually
determined by supersymmetry.
To write a covariant action which gives eventually rise to \eref{eqm}
we introduce as new ingredient the scalar auxiliary field $a(x)$
and the one--forms
\begin{eqnarray}
u&=&da\equiv e^a u_a\\
v&=&{1\over \sqrt{-u^2}}\, u\equiv e^a v_a.
\end{eqnarray}
In particular we have $v_a={u_a\over \sqrt{-u^2}}$ and $v_av^a=-1$.
Using the vector $v^a$ we can then associate to the
three--forms $H$, $H^*$ and $H^\pm$ the two--forms $h$, $h^*$
and $h^\pm$ according to
$$
h_{ab}=v^cH_{abc}, \qquad h={1\over 2} e^a e^b h_{ba},
$$
and similarly for $h^*$ and $h^\pm$.
The action we search for can now be written equivalently in one of the
following ways
\begin{eqnarray}
\label{S0}
S_0^\pm &=& \pm\int \left(v h^{\pm} H + {1\over 2} dB C\right)\nonumber\\
&=& \pm\int \left(v h^{\pm} H + {1\over 2} H C\right)\nonumber\\
&=& \int d^6x\sqrt{g}\left({1\over 24}H_{abc}H^{abc}
+{1\over 2}h_{ab}^\pm h^{\pm ab}\right) \pm\int {1\over 2}dBC
\nonumber\\
&=& {1\over 4}\int d^6x\sqrt{g} \, h^*_{ab}\left(h^{*ab}\pm h^{ab}
\right)\pm\int {1\over 2}dBC.
\end{eqnarray}
$S_0^+$ will describe anti self--dual bosons ($H^+=0$)
and $S_0^-$ self--dual bosons ($H^-=0$).
The last term, $\int dBC$, is of the Wess--Zumino type and is absent
for free chiral bosons.
What selects this form of the action is essentially the set of local
symmetries it possesses. Under a general variation of the fields
$B$ and $a$ it varies, in fact, as
\begin{equation}
\label{dS0}
\delta S_0^\pm = \pm2\int \left(vh^\pm d\delta B +
{v\over \sqrt{-u^2}} h^{\pm}h^{\pm} d\delta a\right).
\end{equation}
From this formula it is rather easy to see that $\delta S^\pm_0$ vanishes
for the following three bosonic transformations, with transformation
parameters $\Lambda$ and $\psi$, which are one--forms, and $\varphi$, which
is a scalar (for $I)$ this is immediate since $d\delta B=dd\Lambda=0$, and
for $III)$ it follows since $d\delta B=d\psi\,da$ and $v\propto da$ imply
$vh^\pm d\delta B\propto da\,h^\pm d\psi\,da=0$; for $II)$ a short
computation using $v=da/\sqrt{-u^2}$ is needed):
\begin{eqnarray}\label{bos}
&I)&\qquad \delta B=d\Lambda,\qquad \delta a =0\nonumber\\
&II)&\qquad \delta B= -{2h^\pm\over \sqrt{-u^2}}\,\varphi,\qquad
\delta a =\varphi\nonumber\\
&III)&\qquad \delta B=\psi da ,\qquad \delta a =0.
\end{eqnarray}
The transformation $I)$ represents just the ordinary gauge invariance for
abelian two--form gauge potentials.
The symmetry $II)$ implies that $a(x)$ is an
auxiliary field which, therefore, does not correspond to a propagating
degree of freedom\footnote{Notice however that, since the action becomes
singular in the limit of a vanishing or constant $a(x)$, the gauge
$d a(x) = 0$ is not allowed.}. Finally, the symmetry $III)$ eliminates
half of the propagating degrees of freedom carried by $B$ and allows to
reduce the second order equation of motion for this field to the desired
first order equation, i.e. \eref{eqm}. To see this we note that the equations
of motion for $B$ and $a$, which can be read from \eref{dS0},
are given respectively by
\begin{eqnarray}
d\left(vh^\pm\right)&=&0\label{emb}\\
d\left({v\over \sqrt{-u^2}}h^\pm h^\pm\right)&=&0.
\end{eqnarray}
First of all it is straightforward to check that the $a$--equation is
implied by the $B$--equation, as expected, while the general solution of
the $B$--equation is given by
\begin{equation}
\label{sol}
vh^\pm={1\over 2}d\tilde{\psi}da,
\end{equation}
for some one--form $\tilde{\psi}$. On the other hand under the
transformation $III)$ we have
$$
\delta\left(vh^\pm\right)={1\over 2}d\psi da
$$
which is precisely of the same form as \eref{sol}.
Therefore we can use this symmetry to reduce the $B$--equation \eref{emb}
to $vh^\pm=0$, which amounts to $h^\pm=0$; this equation, in turn, is
equivalent to $H^\pm=0$: in a frame where $v^a=\delta^a_0$ one has
$h^\pm_{ab}=H^\pm_{ab0}$, and (anti)self--duality expresses the remaining,
purely spatial components of $H^\pm$ in terms of precisely these ones, so
that $h^\pm=0$ forces all components of $H^\pm$ to vanish. This is the
desired chirality condition.
This concludes the proof that the actions $S_0^\pm$ describe indeed
correctly the propagation of chiral bosons.
In a theory in which the $B$ field is coupled to other dynamical
fields, for example in supergravity theories, we can now conclude
that the complete action has to be of the form
$$
S=S_0^\pm+S_6,
$$
where $S_6$ contains the kinetic and interaction terms for the fields
to which $B$ is coupled. To maintain the symmetries $I)$--$III)$
one has to assume that those fields are invariant under these
transformations and, moreover, that $S_6$ is independent of the $B$
and $a$ fields themselves.
For more general chirality conditions describing self--interacting
chiral bosons, like e.g. those of the Born--Infeld type, see ref. \cite{PST3}.
To conclude this section we introduce two three--form fields,
$K^\pm$, which will play a central role in the next sections
due to their remarkable properties. They are defined as
\begin{equation}\label{k}
K^\pm\equiv H+2vh^\mp
\end{equation}
and are uniquely determined by the following peculiar properties:
\begin{itemize}
\item[i)] they are (anti) self--dual: $K^{\pm*} = \pm K^{\pm}$;
\item[ii)]they reduce to $H^\pm$ respectively if $H^\mp= 0$;
\item[iii)] they are invariant under the symmetries $I)$ and $III)$,
and under $II)$ modulo the field equations \eref{emb}.
\end{itemize}
These fields constitute therefore a kind of off--shell generalization
of $H^\pm$.
\section{The action for a free $N=1$, $D=6$ tensor multiplet}
In this section we illustrate the compatibility of the general
method for chiral bosons with supersymmetry in a
simple example i.e. the one involving only one free
tensor supermultiplet in flat space--time in six dimensions. The strategy
developed in this case admits natural extensions to more general
cases as we will see in the next two sections.
An $N=1,D=6$ tensor supermultiplet is made out of an antisymmetric tensor
$B_{[ab]}$, a symplectic Majorana--Weyl
spinor $\lambda_{\alpha i}$ ($\alpha = 1,\ldots,4; i = 1,2$) and a real scalar $\phi$.
For more details on our spinor conventions, see the appendix.
The equations of motion for this multiplet and its on--shell susy transformation
rules are well known. The scalar obeys the free Klein--Gordon equation,
the spinor the free Dirac equation and the $B$--field the self--duality
condition
$$
H^-=0,
$$
where $H=dB$, which means that in this case we have $C=0$.
The on-shell supersymmetry transformations, with rigid transformation
parameter $\xi^{\alpha i}$, are given by
\begin{eqnarray}
\label{susy}
\delta_\xi \phi &=& \xi^i \lambda_i, \nonumber\\
\delta_\xi \lambda_{ i} &=& \left( \Gamma^a \partial_a \phi +
\frac{1}{12} \Gamma^{abc}H_{abc}^+ \right)\xi_i,\nonumber\\
\delta_\xi B_{ab} &=& - \xi^i \Gamma_{ab} \lambda_i.
\end{eqnarray}
The $USp(1)$ indices
$i,j$ are raised and lowered according to
$ K_i = \varepsilon_{ij} K^j, K^i = - \varepsilon^{ij} K_j, $ where $\varepsilon_{12} =
\varepsilon^{12} = -\varepsilon_{21} = -\varepsilon^{21} = 1 $, \cite{DL}.
Since the equations of motion are free our ansatz for the action, which
depends now also on the auxiliary field $a$, is
\begin{equation}\label{SH}
S=S_0^-+S_6
=- \int v h^- H +{1\over 2}\int d^6x \left(\lambda^i \Gamma^a \partial_a \lambda_i +
\partial_a \phi \partial^a \phi \right).
\end{equation}
This action is invariant under the symmetries $I)$--$III)$ if we assume
that $\phi$ and $\lambda$ are invariant under these transformations.
For what concerns supersymmetry we choose first of all the transformation
for $a$
$$
\delta_\xi a=0,
$$
which is motivated by the fact that $a$ is non--propagating and
therefore does not need a supersymmetric partner. Next we should
find the off--shell generalizations of \eref{susy}. For dimensional
reasons only $\delta_\xi\lambda$ allows for such an extension. To find
it we compute the susy variation of $S_0^-$, which depends only on $B$ and
$a$, as
$$
\delta_\xi S_0^-=-2\int vh^-d\delta_\xi B=-\int K^+d\delta_\xi B
$$
in which the self-dual field $K^+$, defined in the previous section,
appears automatically. Since $\delta_\xi S_0^-$ should be cancelled by
$\delta S_6$, this suggests defining the off--shell
susy transformation of $\lambda$ by making the simple replacement
$H^+\rightarrow K^+$, i.e.
$$
\delta_\xi \lambda_{ i}\rightarrow \bar\delta_\xi \lambda_{ i} = \left( \Gamma^a \partial_a \phi +
\frac{1}{12} \Gamma^{abc}K_{abc}^+ \right)\xi_i.
$$
With this modification it is now a simple exercise to show that
the action \eref{SH} is indeed invariant under supersymmetry.
The relative coefficients of the terms in the action are actually
fixed by supersymmetry.
The general rules for writing covariant actions for supersymmetric
theories with chiral bosons,
which emerge from this simple example, are the following.
First one has to determine the on--shell susy transformations of the
fields and their equations of motion, in particular one has to determine
the form of the three-form $C$. The off--shell extensions of the susy
transformation laws are obtained by substituting in the transformations
of the fermions $H^\pm\rightarrow K^\pm$. The action has then to be written
as $S_0^\pm+S_6$ where the relative coefficients of the various terms
in $S_6$ have to be determined by susy invariance. The field $a$, finally,
is required to be supersymmetry invariant.
An essential step in this procedure is the determination of the susy
transformation laws and equations of motion for the fields. This can
generally be done most conveniently using superspace techniques,
especially in the case of supergravity theories, like the ones in the
subsequent sections. Here we illustrate the procedure by
rephrasing the results in \eref{susy} in a (flat) superspace language.
We follow here the superspace conventions of ref. \cite{DL} to which
we refer the reader for more details on our notations.
One introduces the supervielbeins $e^A=(e^a, e^{\alpha i}=d\vartheta^{\alpha i})$,
which are one--superforms, and the
three--superform $A = \hat dB$ where now $B$ is a two--{\it super}form
and $\hat d=e^a\partial_a+e^{\alpha i}D_{\alpha i} $
is the superspace differential. The superspace torsion
is the one--form $T^A=\hat de^A$ and a generic $p$-superform allows a
decomposition analogous to \eref{dec} with $e^a\rightarrow e^A$.
Then one imposes the rigid superspace constraints
\begin{eqnarray}
T^a &=& - e^i \Gamma^a e_i, \label{vincTa}\\
T^{\alpha i} &=& 0, \label{vincTai}\\
A_{\alpha i\beta j\gamma k} &=& 0, \label{vincH} \\
A_{a \alpha i \beta j} &=& - 2 \phi \varepsilon_{ij} (\Gamma_a)_{\alpha\beta}
\end{eqnarray}
and solves the Bianchi identity
\begin{equation}
\label{IBH}
\hat d A = 0.
\end{equation}
The solution gives
\begin{equation}\label{soluz}
\hat dB =A= \frac{1}{3!} e^a e^b e^c H_{cba}
- \left(e^i \Gamma_a e_i\right) e^a \phi - \frac{1}{2} e^b e^a
(e^i \Gamma_{ab} \lambda_i)
\end{equation}
and
\begin{eqnarray}\label{spi}
\lambda_{\alpha i} &=& D_{\alpha i} \phi,\nonumber \\
D_{\alpha i} \lambda_{\beta j} &=& \varepsilon_{ij} \left( \Gamma^a_{\alpha\beta} \partial_a \phi - \frac{1}{12}
(\Gamma^{abc})_{\alpha\beta} H^+_{abc} \right),\nonumber \\
D_{\alpha i} H_{abc} &=& -3 (\Gamma_{[ab})_\alpha{}^\beta \partial_{c]} \lambda_{\beta i},\nonumber\\
H^-_{abc} &=& 0.
\end{eqnarray}
The lowest components (obtained at $\vartheta=d\vartheta=0$)
of the superfields $\phi$, $\lambda_{\alpha i}$ and $B$ are
respectively the scalar, spinor and tensor component fields of the
supermultiplet. Notice also that the Bianchi identity
\eref{IBH} forces the tensor $B$ to be on--shell. In fact, taking
the $\vartheta=d\vartheta=0$ component of \eref{soluz} the last two
terms go to zero and the last equation in \eref{spi} becomes simply
$H^-=0$, where now $H\equiv dB$ in ordinary space;
here we retrieve the result $C=0$.
In superspace formalism the supersymmetry transformation of a superfield
is given by its Lie derivative along the vector field
$\xi = \xi^{\alpha i} D_{\alpha i}$. The transformation of the component
fields is just obtained by performing the superspace Lie derivative
and finally setting $\vartheta=d\vartheta=0$.
Using this one can read the transformations \eref{susy} directly from
the spinorial derivatives given in \eref{spi}.
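For instance, for the scalar one finds
$$
\delta_\xi \phi = \xi^{\alpha i} D_{\alpha i}\phi\,\Big|_{\vartheta=d\vartheta=0}
= \xi^i \lambda_i,
$$
which is precisely the first transformation in \eref{susy}; the transformation of $\lambda_{\alpha i}$ follows in the same way from the second relation in \eref{spi}.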
\section{The action for pure $N = 1$, $D=6$ supergravity}
The supergravity multiplet in six dimensions contains the graviton,
a gravitino and an
antisymmetric tensor with anti--selfdual (generalized) field strength.
The graviton is described by the vector--like vielbein $e^a = dx^m
{e_m}^a$, the gravitino by the spinor--like one--form $e^{\alpha i} =
dx^m {e_m}^{\alpha i}$ and the tensor by the two--form $B$. We
introduce also the Lorentz connection one--form
$\omega^{ab} = dx^m \omega_{m}{}^{ab}$
in order to implement a first order formalism.
As outlined at the end of
the previous section we promote these forms to superforms and
define super--torsions and super--curvatures as
\begin{eqnarray}
T^a &\equiv& D e^a = \hat d e^a + e^{b} {\omega_b}^a,\nonumber\\
T^{\alpha i} &\equiv& D e^{\alpha i} = \hat d e^{\alpha i} + \frac{1}{4} e^{\beta i}
(\Gamma_{ab})_\beta{}^\alpha \omega^{ab}, \nonumber\\
R^{ab} &\equiv& \hat d \omega^{ab} + \omega^{ac} {\omega_c}^{b}\nonumber\\
A&\equiv& \hat d B.
\end{eqnarray}
The standard constraints read now
\begin{eqnarray}
T_{\alpha i\beta j}{}^a &=&-2\varepsilon_{ij}\Gamma^a_{\alpha\beta} \\
T_{ab}{}^c &=& 0\\
A_{\alpha i\beta j\gamma k} &=& 0 \\
A_{a \alpha i \beta j} &=& - 2 \varepsilon_{ij} (\Gamma_a)_{\alpha\beta},
\end{eqnarray}
and the solution of the relevant superspace Bianchi identities
leads to the following parametrizations of these torsions and curvatures:
\begin{eqnarray}
\label{vinc1}
T^a &=& - e^i\Gamma^a e_i,\\
T^{\alpha i} &=& \frac{1}{8} e^{\beta i} e^a (\Gamma^{bc})_\beta{}^\alpha
H^-_{abc} + \frac{1}{2} e^{a} e^b T_{ba}{}^{\alpha i}, \label{vinc2}\\
R_{ab} &=& \frac{1}{2} e^i \Gamma^c e_i
H^-_{abc} - e^{\alpha i} e^c [ (\Gamma_c)_{\alpha\beta} T_{ab}{}^\beta_i - 2
(\Gamma_{[a})_{\alpha\beta} T_{b]c}{}^\beta_i] \label{vinc3}
+ \frac{1}{2} e^d e^c R_{cdab}, \\
\hat dB =A&=& \frac{1}{3!} e^a e^b e^c H_{cba}
- \left(e^i\Gamma_a e_i\right) e^a.\label{vinc4}
\end{eqnarray}
The constraint of a vanishing purely bosonic torsion $T_{abc}$
is, in general, conventional in that, through a redefinition of the
connection, it can be set equal to any tensor. Sometimes it
is more convenient, in fact, to have $T_{abc}\neq 0$, see e.g.
\cite{DL}. In the present case, however, having $T_{abc}=0$
constrains $\omega_{m}{}^{ab}$ to be the usual super--covariant
connection which depends only on the graviton and the gravitino;
it has, in particular, no spurious dependence on $B$. This implies,
in turn, that the covariant derivatives are automatically invariant
under the bosonic transformations $I)$--$III)$ and,
to maintain those symmetries, one only has to avoid
the explicit appearance of $B$ in $S_6$.
The $A$--Bianchi identity implies, in particular, also the
$B$--equation of motion
$$
H_{abc}^+=0.
$$
Due to \eref{vinc4} this implies, in ordinary space--time, $H^+=0$,
where now
$$
H=dB+ \left(e^i\Gamma_a e_i\right) e^a= \frac{1}{3!} e^a e^b e^c H_{cba}.
$$
This means that in this case the three--form $C$ is non vanishing
being given by
\begin{equation}\label{C3}
C=\left(e^i\Gamma_a e_i\right) e^a.
\end{equation}
The supersymmetry transformations of $e^a$, $e^{\alpha i}$, $\omega^{ab}$ and
$B$ can be
obtained as (covariant) Lie derivatives of these forms along the local
superspace vector $\xi(x) = \xi^{\alpha i} (x) D_{\alpha i}$ and, therefore,
they can be read directly from \eref{vinc1}--\eref{vinc4}:
\begin{eqnarray}
\label{susyi}
\delta_\xi e^a &=& i_{\xi} T^a = -2 \xi^i \Gamma^a e_i, \\
\delta_\xi e^{\alpha i} &=& D \xi^{\alpha i} + i_\xi T^{\alpha i} = D
\xi^{\alpha i} - \frac{1}{8} \xi^{\beta i} e^a (\Gamma^{bc})_\beta{}^\alpha
K^-_{abc}, \label{28b}\\
\delta_\xi \omega_{ab} &=& i_\xi R_{ab} = \xi^i \Gamma^c e_i K^-_{cab} -
\xi^{\alpha i} e^c \left( (\Gamma_c)_{\alpha\beta} T_{ab}{}^\beta_i - 2 (\Gamma_{[a})_{\alpha\beta}
T_{b]c}{}^\beta_i\right), \\
\delta_\xi B &=& i_\xi A = -2 (\xi^i \Gamma_a e_i) e^a,\\
\delta_\xi a &=& 0.
\end{eqnarray}
In these relations, according to our general rule, we have already made the
replacements $H^-_{abc}\rightarrow K^-_{abc}$ -- which occur, in fact, only
in the gravitino transformation in that the connection $\omega_{ab}$ is not
an independent field but depends on the graviton and the gravitino --
and we added the trivial
transformation law for the auxiliary field $a$.
As it stands, this trivial
transformation law does not seem to preserve the susy algebra in that
the commutator of two supersymmetries does not amount to a translation.
On the other hand it is known that the supersymmetry algebra closes on
the other symmetries of a theory; in the present case it is easily seen
that the anticommutator of two susy transformations on the $a$
field closes on the bosonic transformations $II)$. It is also
interesting to observe that the new terms in the susy transformations
of the gravitino are exactly the ones that ensure the closure of the total
symmetry algebra on the $B$--field, and again susy closes on the
transformations $II)$\footnote{We thank D.Sorokin for these remarks.}.
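Schematically this works as follows (a sketch; signs and factors depend on
the conventions above): since $\delta_\xi a = 0$ one has
$[\delta_{\xi_1},\delta_{\xi_2}]\,a = 0$, while the general coordinate
transformation contained in this commutator, with parameter
$v^a \sim -2\,\xi_1^i\Gamma^a\xi_{2i}$ as suggested by \eref{susyi}, would
shift $a$ by $v^m\partial_m a$; the mismatch is compensated precisely by a
transformation $II)$ with parameter $\varphi = -v^m\partial_m a$.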
The covariant action for pure $N = 1$, $D = 6$ supergravity can
now be written as
\begin{eqnarray}\label{azsu}
S&=&S_0^+ +\int L_6\\
S_0^+&=& \int \left(v h^+ H + {1\over 2} dB C\right)\\
L_6 &=&
\frac{1}{48} \varepsilon_{a_1 \ldots a_6 } e^{a_1} e^{a_2} e^{a_3} e^{a_4}
R^{a_5a_6} - \frac{1}{3} e^{a_1}e^{a_2}e^{a_3} (De^i\Gamma_{a_1 a_2 a_3}e_i).
\end{eqnarray}
The three--form $C$ is given in eq. \eref{C3} and for convenience
we wrote the term $S_6$ as an integral of a six--form, $L_6$. This
six--form contains just the Einstein term, relative to the
super--covariantized spin connection, and the kinetic term for
the gravitino. The relative coefficients are fixed by susy invariance,
see below. In this case $S_0^+$ contains also the couplings of
$B$ to the gravitino and the graviton.
This action is invariant under the symmetries $I)$--$III)$ because
$L_6$ does not contain $B$ and we assume the graviton and the gravitino
to be invariant under those transformations.
The evaluation of the supersymmetry variation of $S$ is now a merely
technical point. The variation of $\int(vh^+H)$ has to be computed
"by hand" while the variation of the remaining terms, which are forms,
is most conveniently evaluated by lifting them to superspace,
performing their superspace differential and taking the interior
product with the vector $\xi$. The results are
\begin{eqnarray}\label{var}
\delta_\xi S_0^+&=&
\int \left(i_\xi R + \frac{1}{2} (\xi^i \Gamma^a e_i) e^b
e^c K^-_{cba}\right)K^- -{1\over 2} \int i_\xi\left(RC\right)\\
\delta_\xi \int L_6 &=& \int i_\xi \hat dL_6=
\int i_\xi\left({1\over 2}RC-{1\over 3} e^a e^b e^c (T^i \Gamma_{abc} T_i)
\right).
\end{eqnarray}
Here we defined
$$
R=\hat d C=2 (T^i \Gamma_a e_i) e^a,
$$
and the parametrization of $T^i=De^i$ is given in \eref{vinc2}
(with $H_{abc}^-\rightarrow K_{abc}^-$).
We see that the susy variation of $S_0^+$ depends on $B$ only through
the combination $K^-$, justifying again our rule for the modified susy
transformation rules for the fermions.
The susy variation of the total action then becomes
$$
\delta_\xi S=
\int \left(i_\xi R + \frac{1}{2} (\xi^i \Gamma^a e_i) e^b
e^c K^-_{cba}\right)K^- -{2\over 3} e^a e^b e^c
(i_\xi T^i) \Gamma_{abc} T_i
$$
and is easily seen to vanish using the expression for $i_\xi T^i$
given in \eref{28b}.
\section{$N = 1$ supergravity coupled to $n$ tensor multiplets}
As last example we consider the case in which the supergravity
multiplet is coupled to an arbitrary number $n$ of tensor multiplets.
This situation arises, for example, in $M$--theory compactified on
$(K_3\times S^1)/Z_2$ \cite{Sen}
and, until now, for this system a covariant lagrangian
formulation did not exist. The purpose of this section is to fill this
gap. For simplicity we will disregard vector and hypermultiplets.
The field equations for supergravity coupled to an arbitrary number
of tensor multiplets have been obtained in \cite{sugra6d}. For a recent
account see ref. \cite{new6d}, where this system has been generalized to
include
hypermultiplets and vector multiplets, see also \cite{sa}.
As far as the geometrical aspects of this system are concerned, we will
basically follow the notation of \cite{new6d}.
The supergravity multiplet is described as before by the vielbein one--form
$e^a$, the gravitino one--form $e^{\alpha i}$ and by the two--form
$B$. The $n$ tensor supermultiplets are composed by $n$
two--forms $B^{\underline{m}}$, $n$ symplectic Majorana--Weyl spinors
$\lambda^{\underline{m}}_{\alpha i}$ and by $n$ scalars $\phi^{\underline{\a}}$ ($\underline{\a} = 1,\ldots,n$)
which parametrize as local coordinates the coset space $SO(1,n)/SO(n)$.
The indices $\underline{m} = (1,\ldots,n)$ span the fundamental representation
of $SO(n)$. The two--form of the supergravity multiplet and the $n$
two--forms of the tensor multiplets are collectively denoted by
$B^I$, where the indices $I,J = (0,1,\ldots, n)$ span the
fundamental representation of $SO(1,n)$.
A convenient way to parametrize
the coset geometry is to introduce the $SO(1,n)$ group element
$L(\phi)\equiv (L^I, L^I_{\underline{m}})$ and its inverse $L^{-1}(\phi)
\equiv (L_I, L_I^{\underline{m}} )$, which are local functions on the coset space
and obey
\begin{equation}
\label{LL}
L^I L_I = 1, \quad L^I_{\underline{m}} L_I = 0 = L^I L_I^{\underline{m}}, \quad L^I_{\underline{m}}
L_I^{\underline{n}} = \delta_{\underline{m}}^{\underline{n}}.
\end{equation}
The coset connection $A_{\underline{\a}}{}^{\underline{m}}{}_{\underline{n}}(\phi)$ and the coset vielbein
$V_{\underline{\a}}{}^{\underline{m}}(\phi)$ are defined by
\begin{equation}
D_{\underline{\a}} L_I^{\underline{m}}\equiv \partial_{\underline{\a}} L_I^{\underline{m}} + A_{\underline{\a}}{}^{\underline{m}}{}_{\underline{n}}
L_I^{\underline{n}} = {V_{\underline{\a}}}^{\underline{m}} L_I,
\end{equation}
\begin{equation}
\label{DL}
\partial_{\underline{\a}} L_I = {V_{\underline{\a}}}^{\underline{m}} L_{I\underline{m}}.
\end{equation}
These relations imply also that
\begin{equation}
\label{defeta}
\eta_{IJ} = -L_I L_J + L_I^{\underline{m}}L_{J\underline{m}}
\end{equation}
is the constant Minkowski metric of $SO(1,n)$.
It is also convenient to define the matrix
\begin{equation}
\label{defG}
G_{IJ}(\phi) = L_I L_J + L_I^{\underline{m}}L_{J\underline{m}}
\end{equation}
which is a local function on the coset.
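As an illustration, for $n=1$ the above relations are easily checked symbolically; the hyperbolic parametrization used in the following sketch (in Python, with the computer algebra package {\tt sympy}) is an assumption made only for this example:
\begin{verbatim}
# Check of (LL), (defeta), (defG) for the simplest coset SO(1,1) (n = 1).
import sympy as sp

f = sp.symbols('f', real=True)
L_up  = sp.Matrix([sp.cosh(f), sp.sinh(f)])     # L^I
Lm_up = sp.Matrix([sp.sinh(f), sp.cosh(f)])     # L^I_1
L_dn  = sp.Matrix([sp.cosh(f), -sp.sinh(f)])    # L_I
Lm_dn = sp.Matrix([-sp.sinh(f), sp.cosh(f)])    # L_I^1

print(sp.simplify(L_up.dot(L_dn)),      # L^I L_I     = 1
      sp.simplify(Lm_up.dot(L_dn)),     # L^I_1 L_I   = 0
      sp.simplify(L_up.dot(Lm_dn)),     # L^I L_I^1   = 0
      sp.simplify(Lm_up.dot(Lm_dn)))    # L^I_1 L_I^1 = 1

eta = sp.simplify(-L_dn*L_dn.T + Lm_dn*Lm_dn.T)  # constant metric diag(-1,1)
G   = sp.simplify( L_dn*L_dn.T + Lm_dn*Lm_dn.T)  # f-dependent matrix G_IJ
print(eta); print(G)
\end{verbatim}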
The above relations imply, in
particular, that the coset manifold is a maximally symmetric space
with constant negative curvature. This property of the manifold ensures,
ultimately,
the supersymmetry of the equations of motion and the closure of the
susy algebra \cite{new6d}.
It is also convenient to introduce for any $SO(1,n)$ vector $W^I$
its components $W=L_IW^I$ and $W^{\underline{m}}=L_I^{\underline{m}}W^I$.
The equations of motion and susy transformations can now be derived
through the superspace techniques outlined in the previous sections.
In addition to the super--torsion we introduce $n+1$ two--superforms
$B^I$ and their supercurvatures $A^I=\hat dB^I$ and impose the constraints
\begin{eqnarray}
T_{\alpha i\beta j}{}^a &=&-2\varepsilon_{ij}\Gamma^a_{\alpha\beta} \\
T_{ab}{}^c &=& 0\\
A_{\alpha i\beta j\gamma k}^I &=& 0 \\
A_{a \alpha i \beta j}^I &=& - 2 \varepsilon_{ij}L^I (\Gamma_a)_{\alpha\beta}.
\end{eqnarray}
The solution of the torsion Bianchi identities and of $\hat dA^I=0$
leads now to the parametrizations
\begin{eqnarray}
\label{56}
T^a &=& - e^i \Gamma^a e_i,\\
T^{\alpha i} &=& e^{\beta j} e^a \left( 3 \delta_\beta{}^\alpha
V_{a\,j}{}^i + (\Gamma_{ab})_\beta{}^\alpha V^b{}_{j}{}^i + \frac{1}{8}
\delta_j{}^i (\Gamma^{bc})_\beta{}^\alpha A^-_{abc} \right) + \frac{1}{2} e^a
e^b T_{ba}{}^{\alpha i},\\
R_{ab} &=& \frac{1}{2} e^{\alpha i} e^{\beta j} \left( \varepsilon_{ij}
(\Gamma^c)_{\alpha\beta} A^-_{cab} - 4 (\Gamma_{abc})_{\alpha\beta} V^c_{ij}\right) \nonumber\\
&-& e^{\alpha i}e^c \left((\Gamma_c)_{\alpha\beta} T_{ab}{}^\beta_i - 2 (\Gamma_{[a})_{\alpha\beta}
T_{b]c}{}^\beta_i\right)
+ \frac{1}{2} e^d e^c R_{cdab},\\
\hat dB^I=A^I&=&\frac{1}{3!} e^a e^b e^c A_{cba}^I
- \left(e^i\Gamma_a e_i\right) e^a L^I
+{1\over 2}e^be^a e^i(\Gamma_{ab})\lambda_i^{\underline{m}}L^I_{\underline{m}} \label{defh},\\
D \phi^{\underline{\a}} &=& e^i \lambda_i^{\underline{m}} V_{\underline{m}}{}^{\underline{\a}} + e^a D_a
\phi^{\underline{\a}}, \\
D\lambda_{\alpha i}^{\underline{m}} &=& -e^\beta_i\left((\Gamma^a)_{\beta\alpha} D_a \phi^{\underline{\a}}
V_{\underline{\a}}{}^{\underline{m}} + \frac{1}{12} (\Gamma^{abc})_{\alpha\beta}
A^{\underline{m} +}_{abc}\right) + e^a D_a \lambda_{\alpha i}^{\underline{m}}. \label{62}
\end{eqnarray}
Here the $n$ fermions in the tensor multiplet are represented by
the superfields $\lambda_{\alpha i}^{\underline{m}}=D_{\alpha i}\phi^{\underline{\a}}V^{\underline{m}}_{\underline{\a}}$
and we defined
\begin{equation}
V^a_{ij} = - \frac{1}{16} \lambda_{i}^{\underline{m}} \Gamma^a \lambda_{j\underline{m}}.
\end{equation}
According to our convention above we have also $A_{abc}=L_IA_{abc}^I$
and $A^{\underline{m}}_{abc}=L_I^{\underline{m}} A^I_{abc}$.
Most importantly, the closure of the susy algebra imposes also
the (anti)--selfduality equations of motion
\begin{eqnarray}\label{asd}
A^{\underline{m}-}_{abc}&=&0\\
A^+_{abc}&=&{1\over8}\lambda^i_{\underline{m}}(\Gamma_{abc})\lambda_i^{\underline{m}}.
\end{eqnarray}
Due to \eref{defh} these equations amount to
\begin{eqnarray}\label{s1}
H^{\underline{m} -}&=&0\\
H^+&=&0,\label{s2}
\end{eqnarray}
where the generalized curvatures $H^I$ are defined, now in ordinary space,
as
\begin{eqnarray}
H^I&=&dB^I+C^I,\\
C^I&=&\left((e^i \Gamma_a e_i) e^a - \frac{1}{48} e^a e^b
e^c ( \lambda^i_{\underline{m}} \Gamma_{cba} \lambda_i^{\underline{m}})\right)L^I
-{1\over 2}e^a e^b (e^i \Gamma_{ba} \lambda_i^{\underline{m}})L^I_{\underline{m}}\label{ci}\\
&\equiv& CL^I+C^{\underline{m}} L^I_{\underline{m}}. \nonumber
\end{eqnarray}
Introducing now again a single auxiliary field $a$ and the
related vector $v^a$ as in section two, we define the
$n+1$ two--forms $h^I$, which replace the two--forms $h^\pm$
introduced in the preceding sections, as
\begin{eqnarray}\label{hpicc}
h^I&=&{1\over 2}e^ae^b h^I_{ba}\\
h^I_{ba}&=&{1\over 2}v^c\left(H_{cba}^I-\eta^{IK}G_{KJ}H_{cba}^{J*}
\right)\label{hpicc1}\\
&=&v^c\left(L^IH_{cba}^++L_{\underline{m}}^IH^{\underline{m}-}_{cba}\right).
\label{hpicc2}
\end{eqnarray}
The three--forms $K^I$, corresponding to $K^\pm$, are then given by
$$
K^I=H^I+2vh^I.
$$
Their components along $L_I$ and $L_I^{\underline{m}}$
satisfy identically the (anti)--selfduality conditions
$$
K^{*}=-K, \qquad K^{\underline{m}*}=K^{\underline{m}}
$$
and reduce respectively to $H^{-}$ and $H^{\underline{m}+}$ if
the (anti)--selfduality conditions \eref{s1} and \eref{s2}
are satisfied.
The off--shell susy transformations are now obtained from
\eref{56}--\eref{62} in the usual way with the replacements
\begin{equation}
\label{RP}
A_{abc}^-\rightarrow K_{abc}, \qquad A^{\underline{m}+} _{abc}\rightarrow K^{\underline{m}}_{abc}.
\end{equation}
The covariant action for this system can now be written
again in the form
\begin{equation}
S = S_0 +\int L_6
\end{equation}
where
\begin{eqnarray}\label{Sn}
S_0&=&-\int\eta_{IJ}\left(vh^IH^J+{1\over 2}dB^I C^J\right)\\
&=& \int \left(v h H+ {1\over 2} HC\right)
-\left(vh_{\underline{m}} H^{\underline{m}} +{1\over 2}H_{\underline{m}} C^{\underline{m}}\right)
\end{eqnarray}
and
\begin{eqnarray}
L_6 &=&
\frac{1}{48} \varepsilon_{a_1 \cdots a_6 } e^{a_1} e^{a_2} e^{a_3} e^{a_4}
R^{a_5a_6} - \frac{1}{3} e^{a_1}e^{a_2}e^{a_3} (De^i\Gamma_{a_1 a_2 a_3}e_i)
\nonumber\\
&-& \frac{1}{2} \frac{\varepsilon_{a_1 \cdots a_6}}{5!}
e^{a_1} \cdots e^{a_5} (\lambda^{i}_{\underline{m}} \Gamma^{a_6} D \lambda_{i}^{\underline{m}})
\nonumber\\
&+& \varepsilon_{a_1 \cdots a_6} \left( \frac{e^{a_1} \cdots e^{a_6} }{6!}
\frac{1}{2} Q_{b\underline{\a}} Q^{b\underline{\a}} -
\frac{e^{a_1} \cdots e^{a_5} }{5!} (D \phi^{\underline{\a}}-
e^i \lambda_i^{\underline{m}} V_{\underline{m}}{}^{\underline{\a}}) Q_{\underline{\a}}^{a_6} \right) \nonumber \\
&-& \frac{1}{2}
\frac{\varepsilon_{a_1 \cdots a_6}}{4!} e^{a_1} \cdots e^{a_4}
(e^i \Gamma^{a_5 a_6} \lambda_{i\underline{m}}) (D \phi^{\underline{\a}}
V_{\underline{\a}}{}^{\underline{m}} - e^j \lambda_j^{\underline{m}}) \nonumber \\
&-& 16 \frac{\varepsilon_{a_1 \cdots
a_6}}{6!} e^{a_1} \cdots e^{a_6} V_{b\,ij} V^{b ij} -
\frac{1}{3} e^{a_1} \cdots e^{a_4} (e^i \Gamma_{a_1 a_2
a_3} e^j) V_{a_4 ij}.
\end{eqnarray}
In $L_6$ there are no quartic terms in the
gravitino because the unique term which respects
all the symmetries would be $T^a T^b E_a E_b$ and this vanishes
due to the cyclic gamma matrix identity in six dimensions.
The relative coefficients of the various terms are fixed by
susy, see below.
The field $Q_{b\underline{\a}}$ is an auxiliary field which has been introduced
in order to write $L_6$ as a six--form. Its equation of motion gives
$e^bQ_{b}^{\underline{\a}}=D\phi^{\underline{\a}}- e^i\lambda_i^{\underline{m}}V^{\underline{\a}}_{\underline{m}}$ and, upon substituting
this back in $\int L_6$, one obtains the usual super--covariantized
kinetic term for the scalars in the tensor multiplet.
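Schematically, and only as an illustration of this mechanism: since the parametrizations \eref{56}--\eref{62} give $D\phi^{\underline{\a}}-e^i\lambda_i^{\underline{m}}V_{\underline{m}}{}^{\underline{\a}}=e^aD_a\phi^{\underline{\a}}$, the solution is $Q_a{}^{\underline{\a}}=D_a\phi^{\underline{\a}}$, and the $Q$--dependent terms in $L_6$ collapse, up to the overall volume form $\frac{\varepsilon_{a_1\cdots a_6}}{6!}\,e^{a_1}\cdots e^{a_6}$, according to
$$
{1\over 2}\,Q_{a\underline{\a}}Q^{a\underline{\a}}-Q_{a\underline{\a}}D^a\phi^{\underline{\a}}
\;\longrightarrow\;
-{1\over 2}\,D_a\phi^{\underline{\a}}D^a\phi_{\underline{\a}}.
$$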
Since under a generic variation of $B^I$ and $a$ one has
\begin{equation}\label{dds}
\delta S_0 = -2\int\eta_{IJ} \left(vh^I d\delta B^J +
{v\over \sqrt{-u^2}} h^I h^J d\delta a\right),
\end{equation}
$S_0$ is now evidently invariant under the bosonic symmetries
$I)$--$III)$ which in the present case take the form
\begin{eqnarray}
&I)&\qquad \delta B^I=d\Lambda^I,\qquad \delta a =0\\
&II)&\qquad \delta B^I= -{2h^I \over \sqrt{-u^2}}\,\varphi,\qquad
\delta a =\varphi\\
&III)&\qquad \delta B^I=\psi^I da ,\qquad \delta a =0.
\end{eqnarray}
Under these transformations $\int L_6$ is trivially invariant.
From \eref{dds} one sees that the equations of motion for $B^I$ become
in this case $d(vh^I)=0$; by fixing the symmetries $III)$, these
can be reduced to $h^I=0$, which corresponds precisely to
\eref{s1} and \eref{s2} (see \eref{hpicc2}). The $a$--equation of motion
is again a consequence of the $B^I$--equations.
The supersymmetry variation of the action can be computed as in
the previous section:
\begin{eqnarray}
\delta_\xi S_0&=&
-\int \eta_{IJ} \left(i_\xi R^I + \frac{1}{2} (\xi^i \Gamma^a e_i) e^b
e^c K^I_{cba}\right)K^J\nonumber\\
&+& \int\left[{1\over 2}\, i_\xi \left(\eta_{IJ}R^IC^J\right)+
(\xi^i \lambda_i^{\underline{m}}) K K_{\underline{m}}\right],\label{var0}\\
\delta_\xi \int L_6 &=& \int i_\xi \hat dL_6.
\end{eqnarray}
Here we defined the four--forms
$$
R^I=\hat d C^I.
$$
The variation of $S_0$ depends on $B^I$ again only through
$K^I$ and, with respect to the expression found in \eref{var}, there
is an additional term, proportional to $\lambda_{\underline{m}}$, which comes
from the self--interactions of the tensor multiplet. The action $S_0$
depends on the geometry of the coset manifold, in fact, only
through the matrix $G_{IJ}$ (see eqs. \eref{hpicc},\eref{hpicc1}). Since
\begin{equation}
\hat d G_{IJ} =2D \phi^{\underline{\a}} V_{\underline{\a}\underline{m}} (L_I^{\underline{m}} L_J + L_I L_J^{\underline{m}})
\end{equation}
we have
\begin{equation}
\delta_\xi G_{IJ} = 2 \xi^i \lambda_i^{\underline{m}} (L_{I\underline{m}} L_J + L_I L_{J\underline{m}})
\end{equation}
and this leads to the additional term in $\delta_\xi S_0$.
The explicit expressions for $R^I=\hat d C^I$ and $\hat dL_6$
can be obtained using \eref{56}--\eref{62}, with the replacements
\eref{RP}, and
with a long but straightforward calculation one can show that
$\delta_\xi S$ indeed vanishes. The explicit expression for
$R^I$ can be found in the appendix.
\section{Concluding remarks}
In this paper we analyzed, in the framework of the approach of
\cite{PST,PST3,M5}, supersymmetric and supergravity models
with chiral bosons in six dimensions, and, outlining a general
procedure, we wrote covariant and supersymmetric actions for some
of these models.
All these actions have the structure of \eref{S0} and are invariant
under the bosonic symmetries $I)$--$III)$, typical of the approach of
\cite{PST}-\cite{PSTprd}, as
well as under (modified) supersymmetry transformations. Our general recipe is
that these modified transformations are obtained from the standard
ones in a natural and universal way by simply replacing the
(anti)--selfdual field strength tensors $H^{(\pm)}_{abc}$, which arise in the
standard supersymmetry transformations of the fermions,
with the special tensors $K^{(\pm)}_{abc}$.
We treated in detail the case of a free tensor supermultiplet, pure
$N = 1$ supergravity and $N = 1$ supergravity coupled with $n$
tensor supermultiplets. More general models containing also
hypermultiplets and vector multiplets have not been considered
explicitly in this paper. The inclusion of an arbitrary number
of hypermultiplets does not present any new conceptual
difficulty; we did not consider them here only for the sake
of simplicity.
The same holds for the inclusion of Yang--Mills supermultiplets \cite{sa},
apart from the following important observation. In this case, in fact, the
tensor multiplets can be coupled to a certain number of Lie--algebra
valued Yang--Mills
fields $A_k$, with curvatures $F_k=dA_k+A_kA_k$, where $k=1,\cdots,N_{YM}$
and $N_{YM}$ is the number of factor gauge groups. The couplings are
realized in a standard way by defining the generalized curvatures now
as
\begin{equation}\label{hym}
H^I=dB^I+\tilde C^I + c^I_k \, \omega_{YM}(A^k),
\end{equation}
where $d\omega_{YM}(A^k)={\rm tr}(F^kF^k)$ defines the usual Chern--Simons
three--forms, one for each factor group, and $\tilde C^I$ is given by
$C^I$ in \eref{ci} augmented by a term proportional to the gluino
bi--linears, see e.g. \cite{new6d}. The $c^I_k$ form a {\it constant}
matrix, which weighs the couplings of the various Chern--Simons terms
to the tensor multiplets,
and in \eref{hym} a sum over $k$ is understood.
The (self)--duality equations of motion for these $H^I$ are
then again given by \eref{s1},\eref{s2}.
The appearance of the
Chern--Simons terms leads then in the action $S_0$ in \eref{Sn}
to a contribution given by
\begin{equation}\label{GS}
-{1\over 2}\int \eta_{IJ}\, dB^I c^J_k \omega_{YM}(A^k)
\end{equation}
which, in general, is not gauge invariant, its variation leading to
\begin{equation}
\delta_{YM}S_0=
{1\over 2}\int \eta_{IJ}c^I_k c^J_ltr(\lambda^ldA^l)tr(F^kF^k),
\end{equation}
where the $\lambda^l$ are the Yang--Mills transformation parameters.
For cohomological reasons this would then imply that the action
necessarily breaks also supersymmetry.
To save gauge invariance, and
also supersymmetry, one has to constrain the constants which weigh
the various Chern--Simons terms in \eref{hym} by
\begin{equation}\label{const}
\eta_{IJ}c^I_k c^J_l=0.
\end{equation}
If this relation is not satisfied one has a set of supersymmetric and
gauge invariant equations of motion which are non integrable, in the sense
that they cannot be deduced from
an action. In such a situation conservation of energy--momentum
and of the Yang--Mills currents is not guaranteed. In ref.
\cite{new6d} it was, indeed, found that the Yang--Mills
currents are conserved if and only if the constraint \eref{const} holds.
It can also be seen that, if this constraint holds, then
the Yang--Mills equation derived from our covariant Lagrangian
coincides with the one given in \cite{new6d} upon fixing the
symmetry $III)$ according to $h^I=0$.
This situation is not new. It was, in particular, noticed in
\cite{DL} that in the case of tensor multiplets coupled
in flat six--dimensional space--time to Yang--Mills supermultiplets
through Chern--Simons terms
one obtains a set of supersymmetric and gauge invariant equations
of motion which, however, do not admit an action. This system represents,
in some sense, a limiting case in which the supergravity multiplet
decouples. In this ``limit'', however, the geometry of the $n$ tensor
multiplets becomes trivial and the constraint \eref{const} would
reduce to $\delta_{IJ}c^I_k c^J_l=0$, where now $I,J=(1,\cdots,n)$,
whose unique solution is $c_k^I=0$. This means that the flat
tensor--Yang--Mills system is consistent only in the absence of
Chern--Simons couplings, i.e. when both systems are free,
in which case the action becomes simply a sum of $n$ terms like
the ones given in \eref{SH} plus the free super--Yang--Mills
action.
In the case when the supergravity multiplet is present the meaning
of the constraint \eref{const} can be understood as follows. Its
general solution is in fact
$$
c_k^I=\alpha_k c^I,
$$
where the $\alpha_k$ are $N_{YM}$ arbitrary but non-vanishing constants
and $c^I$ is an $SO(1,n)$ null vector,
$$
\eta_{IJ}c^I\,c^J=0.
$$
This means that there is, actually, a unique total Chern--Simons
three--form appearing in the theory, given by
$$
\omega_{YM}=\sum_{k=1}^{N_{YM}}\alpha_k\, \omega_{YM}(A^k),
$$
and the generalized curvatures become then
$$
H^I=dB^I+\tilde C^I + c^I \omega_{YM}.
$$
Choosing for $c^I$ a standard representative, for example
$c^I=(1,0,\cdots,0,1)$, it is easily seen that there is only {\it one}
two--form which carries a Chern--Simons correction, i.e. $B^0+B^n$,
while all the others, i.e. $B^0-B^n$ and $B^I$ with $1\leq I \leq n-1$,
are gauge invariant. In the case of one tensor multiplet $(n=1)$
this means that one can form a two--form with closed field strength
and one whose curvature carries a Chern--Simons correction, the
two field strengths being related on--shell by Hodge duality. It is
indeed known, see e.g. \cite{new6d},\cite{DL}, that for $n=1$ only
under these circumstances one can construct a local, gauge invariant,
and supersymmetric theory.
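As a quick check, with the signature implied by \eref{defeta} one has $\eta_{IJ}c^Ic^J=-(c^0)^2+(c^n)^2=0$ for this representative, and the generalized curvatures split as
$$
H^0+H^n=d(B^0+B^n)+\tilde C^0+\tilde C^n+2\,\omega_{YM},\qquad
H^0-H^n=d(B^0-B^n)+\tilde C^0-\tilde C^n,
$$
so that the Chern--Simons form indeed appears in only one combination of field strengths.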
If \eref{const} is not satisfied one can interpret
$\delta_{YM}S_0$ as a ``classical'' gauge anomaly which, as observed
above, would then also have a supersymmetric partner
\cite{anom}. For an appropriate field content it can then happen
that these ``classical'' anomalies are cancelled
by the one-loop quantum ABBJ anomalies through a generalized
Green--Schwarz mechanism which involves all the $B^I$ fields \cite{sa},
since \eref{GS} has precisely the structure of the
Green--Schwarz counterterm. In this case, however, since supersymmetry
and gauge invariance require also the inclusion of one--loop
quantum corrections, it is no longer meaningful to search
for a {\it local} invariant action
(see also the discussion in \cite{new6d}).
\paragraph{Acknowledgements.}
\ We are grateful to I. Bandos, P. Pasti and D. Sorokin for their interest
in this work and useful discussions. This work was supported by the
European Commission TMR programme ERBFMPX-CT96-0045 to which K.L. and
M.T. are associated.
\section{Appendix}
In this appendix we give some details on our notations and conventions
and report the explicit expression for the four forms $R^I$.
We write the six--dimensional symplectic Majorana-Weyl spinors as
$\psi_{\alpha i}$
(left-handed) and $\psi^{\alpha i}$ (right-handed) where $i = 1,2$ is
a $USp(1)$ index which can be
raised and lowered with the invariant antisymmetric tensor $\varepsilon_{ij}$
\begin{equation}
\psi_i = \varepsilon_{ij} \psi^j, \qquad \psi^i = \varepsilon^{ji} \psi_j,
\end{equation}
while $\alpha=1,\cdots,4$ is a chiral $SO(1,5)$ spinor index which cannot be raised
or lowered. The symplectic Majorana-Weyl condition reads
\begin{equation}
\varepsilon^{ij} \psi^{\alpha j} = O^{\alpha\beta} \psi^{\star \beta i}
\label{A1}
\end{equation}
where the matrix $O$ satisfies
\begin{equation}
O^T = -O, \quad O^\star = O, \quad O^2 = -1.
\end{equation}
The $4\times 4$ matrices $(\Gamma^{a})_{\alpha\beta}$ and $(\Gamma^a)^{\alpha\beta}$ span a
Weyl-algebra,
$(\Gamma_{(a})_{\alpha\beta} (\Gamma_{b)})^{\beta\gamma} = \eta_{ab} \delta_\alpha^\gamma$,
and satisfy the hermiticity condition
\begin{equation}
O \Gamma^a{}^\dagger O = \Gamma^a. \label{A2}
\end{equation}
Since our formalism is manifestly $USp(1)$ invariant the relations
\eref{A1}--\eref{A2} need, however, never be used explicitly.
The duality relation for the anti-symmetrized $\Gamma$-matrices is
\begin{equation}
(\Gamma_{a_1\ldots a_k})_{\alpha\beta} = - (-1)^{k(k+1)/2} \frac{1}{(6-k)!}
\varepsilon_{a_1\ldots a_6} \left(\Gamma^{a_{k+1}\ldots a_6}\right)_{\alpha\beta}
\end{equation}
where no ``$\gamma_7$'' appears since our $\Gamma$-matrices are $4\times 4$
Weyl matrices. The cyclic identity reads
\begin{equation}
(\Gamma^a)_{\alpha(\beta} (\Gamma_a)_{\gamma)\delta} = 0,
\end{equation}
and another fundamental identity is
\begin{equation}
(\Gamma_a)_{\alpha\beta}(\Gamma^a)^{\gamma\delta} = -4 \delta^{\gamma}_{[\alpha} \delta^\delta_{\beta]}.
\end{equation}
The explicit expression for the four--forms $R^I=\hat d C^I$,
appearing in the variation \eref{var0}, is
\begin{eqnarray}
R^I&=&-{1\over 2}e^{a_1}e^{a_2} (e^i\Gamma^{a_3}e_i)K^I_{a_1a_2a_3}\\
&+&\left[e^{a_1}e^{a_2}e^{a_3}e_i\Gamma_{a_1}T^i_{a_2a_3}
+{3\over 8}e^{a_1}e^{a_2}e^{a_3}e^i\Gamma_{a_1a_2}\lambda_{i\underline{m}}V_{\underline{\a}}^{\underline{m}}
D_{a_3}\phi^{\underline{\a}}\right.\\
&-&{1\over 48}\varepsilon_{a_1\cdots a_6}e^{a_1}e^{a_2}e^{a_3}e^i\Gamma^{a_4a_5}
\lambda_{i\underline{m}}V_{\underline{\a}}^{\underline{m}}D^{a_6}\phi^{\underline{\a}}
-{1\over8}e^{a_1}e^{a_2}e^{a_3}(e^i\Gamma^{a_1}{}_b \lambda_{i\underline{m}})K^{\underline{m}}_{ba_2a_3}\\
&+&\left.{1\over 4}e^i\lambda_{i\underline{m}}K^{\underline{m}}
+2e^{a_1}e^{a_2}e^{a_3}e^{a_4}D_{a_1}\lambda_{a_2a_3a_4}\right]L^I+\\
&+&\left[-{1\over 2}e^{a_1}e^{a_2}e^{a_3}e^i\Gamma_{a_1a_2}D_{a_3}\lambda_i^{\underline{m}}
+{1\over 2}e^{a_1}e^{a_2}T^i\Gamma_{a_1a_2}\lambda_i^{\underline{m}}\right.\\
&+&\left.2e^{a_1}e^{a_2}e^{a_3}V_{\underline{\a}}^{\underline{m}}D\phi^{\underline{\a}}\lambda_{a_1a_2a_3}\right]
L_{\underline{m}}^I,
\end{eqnarray}
where $\lambda_{abc}\equiv-{1\over 96}\lambda_{\underline{m}}^i(\Gamma_{abc})\lambda^{\underline{m}}_i$.
\IEEEPARstart{I}{n} order to support demanding multimedia services and applications in future $ 5^{th} $ generation wireless networks, existing communication technologies have to be combined with novel ones. Since optical fiber deployment has proved to be quite complicated and expensive in some areas, research interest in optical wireless technology has been renewed due to its many useful benefits. Free-space optics (FSO) represents an outdoor link, which uses the infrared band and provides high bandwidth capacity and operation in licence-free unregulated spectrum, with very easy and low-cost implementation and repositioning possibilities \cite{book, survey, new1, new2}.
The FSO signal transmission via the atmospheric channel is seriously affected by a few phenomena, such as the atmospheric turbulence and the misalignment between the FSO transmitter and receiver (pointing errors) \cite{book, survey, PE2, PE4}. Furthermore, an aggravating requirement for the FSO system application is the obligatory line-of-sight (LOS) existence between the FSO transmitter and receiver. Some environment scenarios, such as difficult terrains and crowded urban streets, make it very hard or even impossible to provide the LOS component over wide areas. For that reason, utilization of relaying technology within FSO systems has been proposed to extend the coverage area where LOS cannot be achieved. In \cite{lee}, a mixed dual-hop amplify-and-forward (AF) relaying system, which is composed of radio frequency (RF) and FSO links, has been proposed, providing a convenient way to enable multiple RF users to be multiplexed via a single FSO link, and to simultaneously use FSO as a last mile access network. The performance of the mixed RF/FSO system with a fixed AF gain relay was analyzed in \cite{lee, endend, Ansari-Impact, Anees2, Zhang_JLT, Zedini_PhotonJ}, while the RF/FSO system with a variable AF gain relay was observed in \cite{nova,Zedini_PhotonJ, JSAC1, JSAC2,var2,var3}.
In order to ensure further improvement of the system performance, the implementation of multiple relays within FSO systems has also been investigated \cite{FSO1,FSO4,FSO5}. The idea of utilizing multiple relays within a mixed RF/FSO system was proposed in \cite{JLT}. The study in \cite{JLT} concentrated on the RF/FSO system with multiple fixed gain AF relays, while the partial relay selection (PRS) procedure was employed to choose the active relay for further transmission. The PRS technique, first presented in \cite{PRS}, is an effective and low-cost technique, since relay selection proceeds based on single-hop instantaneous channel state information (CSI), avoiding additional network delays and saving power. In \cite{JLT}, the outage probability expressions were derived, assuming that the first RF hops experience Rayleigh fading, and the second FSO hops are influenced by Gamma-Gamma (GG) atmospheric turbulence. Further, the same RF/FSO system with fixed gain AF relays was analyzed in \cite{chapter}, where the outage probability expression was provided for the case when the pointing errors effect is taken into account. Additionally, \cite{prsF1,prsF2} considered the RF/FSO system with fixed gain AF relays taking into account aggregate hardware impairments.
In contrast to \cite{JLT,chapter,prsF1,prsF2}, which assumed fixed AF gain relays, this paper analyses the PRS based multiple RF/FSO system with variable AF gain relays. The main contribution of the paper is to provide an exact expression for the outage probability, considering that
the optical signal intensity fluctuations due to atmospheric turbulence are modeled by the general M$\acute{{\rm{a}}}$laga ($ \mathcal{M} $) distribution, which takes into account multiple scattering effects and represents a more general model compared to the GG distribution \cite{JSAC1,FSO5,M1,M4}. In addition, the pointing errors are taken into account \cite{endend,M5,M6,M7}.
Assuming that the RF signal transmission from the source to the relay station is performed in the frequency range from 900 MHz to 2.4 GHz, the estimation of the RF channel turns out to be imperfect due to the fast fading statistics, so the \textit{outdated} CSI is used for both relay selection and relay gain adjustment.
The first RF hops are introduced since the LOS is not provided in that area, so the RF links are assumed to be subject to Rayleigh fading.
In a practical system scenario, it is possible that the relay with the best estimated CSI is not able to forward the signal. This problem is also taken into consideration in \cite{prs1}.
A novel outage probability expression is derived, which is further simplified to some special cases.
Approximate outage probability expressions are also provided, which are utilized to determine the outage probability floor.
Furthermore, as a special case when only one relay is assumed, the outage probability expression simplifies to the corresponding result already reported in \cite{JSAC2}.
Based on the derived analytical expressions, numerical results are obtained and validated by Monte Carlo simulations, i.e., widely used computing algorithms that confirm numerical results by repeated random sampling.
The rest of the paper is organized as follows. Section II presents
the system and channel model. The outage probability analysis is described in Section III, which also contains some special cases and approximation expressions. Numerical results and simulations with discussions are given in Section IV. Some concluding remarks are presented in Section V.
\section{System and channel model}
Fig.~\ref{Fig_1} presents the mixed RF/FSO system with multiple variable gain relays. The observed system consists of a source, $S$, a destination, $D$, and $M \ge 1$ relays. The node $S$ continuously monitors and periodically estimates the conditions of the $ S-R$ RF channels, and selects the active relay $ R_l $ as the one with the best estimated CSI of the hop between source and relay. If the selected relay is not able to perform further communication, the next best relay is chosen, and so on. In other words, the $ l $th worst or $ (M-l) $th best relay is selected \cite{prs1}.
The RF hops are considered in the first part of the system since the LOS component is not provided in that area. For that reason, the Rayleigh distribution is assumed to describe the RF channel fading conditions. In practice, temporal variations of the RF channel occur. Hence, errors in channel estimation can happen, and the estimated CSI used for relay selection is not the same as the CSI at the time when transmission is performed. It means that the channel estimation at the relay is imperfect, and PRS is performed based on outdated CSI.
Since variable gain relays are employed, the gain is determined based on the short-term statistics of the RF hops. In order to avoid additional channel estimation, the outdated CSI used for relay selection is also utilized to adjust the relay gain.
\begin{figure}[!b]
\centering
\includegraphics[width=3.4in]{Fig1.pdf}
\caption{Multiple mixed RF/FSO system based on PRS with outdated CSI.}
\label{Fig_1}
\end{figure}
In the first phase of the transmission, after selection of the active relay, $ R_l $, the signal is transmitted over the RF link. The received electrical signal at the relay $ R_l $ is defined as
\begin{equation}
{r_{R_l}} = {h_{SR_l}}r + {n_{SR}},
\label{signal_r}
\end{equation}
where $ r $ denotes the signal with an average power $ P_s $, sent from the node $ S $. The fading amplitude over the RF link is denoted by $ h_{SR_l} $, with $ {\rm E}[ {h_{SR_l}^2}] = 1 $ $ ({\rm E}[\cdot] $ is mathematical expectation$)$. An additive white Gaussian noise (AWGN) with zero mean and variance $ \sigma_{SR}^2$ is denoted by $n_{SR} $.
The signal at the relay is amplified by the gain $ G $, determined based on outdated CSI as \cite{JSAC2, prs3}
\begin{equation}
{G^2} = \frac{1}{{\tilde h_{SR_l}^2{P_s}}},
\label{gain}
\end{equation}
where $ \tilde h_{SR_l} $ represents the estimated version of $ h_{SR_l} $.
In the next phase, the amplified signal is converted to an optical one by subcarrier intensity modulation. A DC bias is added to satisfy the non-negativity constraint. The optical signal at the relay is defined as
\begin{equation}
{r_{o}} ={P_t} \left( {1 + mG{r_{R_l}}} \right),
\label{signal_ro}
\end{equation}
where $ P_t $ represents the average transmitted optical power, $ m $ denotes the modulation index $(m=1)$, and $ r_{R_l} $ is defined in (\ref{signal_r}). The optical signal is further forwarded to the destination via the atmospheric turbulence channel. At the destination, direct detection is performed, the DC bias is removed, and optical-to-electrical conversion is carried out by a PIN photodetector. The received electrical signal is expressed as
\begin{equation}
\begin{split}
{r_D}&= {I_{{R_l}D}}\eta{P_t}G r_{R_l} + {n_{RD}} \\
&= {I_{{R_l}D}}\eta{P_t}G \left( {{h_{SR}}_lr + {n_{SR}}} \right) + {n_{RD}},
\end{split}
\label{signal_rd}
\end{equation}
where $ I_{R_lD} $ represents the intensity of an optical signal, and $ \eta $ denotes an optical-to-electrical conversion coefficient. The AWGN over FSO link with zero mean and variance $ \sigma _{RD}^2 $ is denoted by $ n_{RD} $.
Following (\ref{gain}) and (\ref{signal_rd}), the equivalent instantaneous overall signal-to-noise ratio (SNR) at the destination is defined as
\begin{equation}
{\gamma _{eq}} = \frac{I_{{R_l}D}^2{\eta ^2}{P_t^2{G^2}h_{S{R_l}}^2{P_s}}}{{I_{{R_l}D}^2{\eta ^2}P_t^2{G^2}\sigma _{SR}^2 + \sigma _{RD}^2}} = \frac{{{\gamma _{1_l}}{\gamma _{2_l}}}}{{{\gamma _{2_l}} + {{\tilde \gamma }_{1_l}}}},
\label{eq_snr}
\end{equation}
where $ {\gamma _{{1_l}}} = {h_{S{R_l}}^2{P_s}}/{\sigma _{SR}^2} = h_{S{R_l}}^2{\mu _1} $ represents the instantaneous SNR of the first RF hop with the average SNR defined as $ {\mu _1} = {\rm{E}}\left[ {{\gamma _{{1_l}}}} \right] = {P_s}/{\sigma _{SR}^2} $; $ {\tilde \gamma _{{1_l}}} = {\tilde h_{S{R_l}}^2{P_s}}/{\sigma _{SR}^2}$ is its estimated version; and the instantaneous SNR of the FSO hop is defined as $ {\gamma _{{2_l}}} = {I_{{R_l}D}^2{\eta ^2}P_t^2}/{\sigma _{RD}^2} $. The electrical SNR is defined as $ {\mu _2} = {{{\rm{E}}^2}\left[ {{I_{{R_l}D}}} \right]{\eta ^2}P_t^2}/{\sigma _{RD}^2} $.
\subsection{RF channel model}
Since the RF hops experience Rayleigh fading, the instantaneous SNR of the first RF hop and its estimated version are two correlated, exponentially distributed random variables (RVs). Since PRS with outdated CSI is considered, as well as the possibility that the best relay is not able to perform transmission so that the $ l $th worst (or $ (M-l) $th best) relay is selected, the joint probability density function (PDF) of the SNRs ${\gamma _{{1_l}}}$ and $ {\tilde\gamma _{{1_l}}}$ is expressed as \cite[(48)]{prs3}
\begin{equation}
\begin{split}
{f_{{\gamma _{1_l}},{{\tilde \gamma }_{1_l}}}}\left( {x,y} \right) & = l{M \choose l}\frac{{e^{ - \frac{x}{{\left( {1 - \rho } \right){\mu _1}}}}}}{{\left( {1 - \rho } \right)\mu _1^2}}{I_0}\left( {\frac{{2\sqrt {\rho xy} }}{{\left( {1 - \rho } \right){\mu _1}}}} \right)\\
& \times \sum\limits_{i = 0}^{l - 1}\! {l-1 \choose i} {\left( { - 1} \right)^i}{e^{ - \frac{{\psi_i y}}{{\left( {1 - \rho } \right){\mu _1}}}}},
\end{split}
\label{pdf_rf}
\end{equation}
where $ \rho $ is the correlation coefficient, $\psi_i = \left( {M - l + i} \right)\left( {1 - \rho } \right) + 1 $, and $ {I_\nu} \left( \cdot \right)$ represents the $ \nu $th order modified Bessel function of the first kind \cite[(8.406)]{Grad}.
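As a cross--check of this bivariate model, the pair $({\gamma _{1_l}},{{\tilde \gamma }_{1_l}})$ can be sampled by mixing complex Gaussian channel gains; the following is a minimal sketch (the function name and parameter values are ours, chosen for illustration only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_gamma1_pair(M, l, rho, mu1, n):
    # i.i.d. CN(0,1) estimates of the M source-relay channels
    ht = (rng.standard_normal((n, M)) +
          1j * rng.standard_normal((n, M))) / np.sqrt(2)
    # PRS: select the l-th worst (l = M is the best) estimate
    idx = np.argsort(np.abs(ht)**2, axis=1)[:, l - 1]
    sel = ht[np.arange(n), idx]
    # actual channel at transmission time, correlated with the estimate
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h = np.sqrt(rho) * sel + np.sqrt(1 - rho) * w
    return mu1 * np.abs(h)**2, mu1 * np.abs(sel)**2

# empirical power correlation approaches rho for M = l = 1
g1, g1t = sample_gamma1_pair(M=1, l=1, rho=0.5, mu1=1.0, n=10**6)
print(np.corrcoef(g1, g1t)[0, 1])
\end{verbatim}
Here the complex correlation $\sqrt{\rho}$ between the estimate and the actual channel yields the power correlation $\rho$ appearing in (\ref{pdf_rf}).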
\subsection{FSO channel model}
The intensity fluctuations of the received optical signal, caused by atmospheric turbulence, are modeled by the recently presented M$\acute{{\rm{a}}}$laga ($ \mathcal{M} $) distribution \cite{M1,M4,M5,M6,M7}. The advantage of the $ \mathcal{M} $ distribution over the models considered earlier in the literature (such as Log-normal, K, GG, Exponential, etc.) is that it takes the multiple scattering effects into account \cite{M1}. More precisely, the model includes three components: besides the component $ U_L $, which originates from the LOS contribution, there are two components arising from scattering, namely the component scattered by the eddies on the propagation axis, $ U_S^{C} $, and the one scattered by the off-axis eddies, $ U_S^{G} $. The component $ U_S^{C} $ is coupled to $ U_L $, while the component $ U_S^{G} $ is statistically independent from both $ U_L $ and $ U_S^{C} $. More details about the $ \mathcal{M} $ distribution can be found in \cite{M1}. In addition, pointing errors are taken into consideration. The PDF of the optical signal intensity is derived as \cite[(5)]{M7}
\begin{equation}
\begin{split}
{f_{{I_{{R_l}D}}}}\left( I \right)& = \frac{{{\xi ^2}{\rm A}}}{2}{I^{ - 1}}\sum\limits_{k = 1}^\beta {{a_k}} {\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{ - \frac{{\alpha + k}}{2}}} \\
&\times \MeijerG*{3}{0}{1}{3}{\xi^2+1}{\xi ^2, \, \alpha, \, k}{{\frac{{\alpha \beta }}{{g\beta + \Omega '}}\frac{I}{{{A_0}{I_l}}}}},
\end{split}
\label{pdf_I}
\end{equation}
where $ \beta $ is a natural number representing the amount of fading parameter, $ G_{p,q}^{m,n}\left( \cdot \right) $ is the Meijer's \textit{G}-function \cite[(9.301)]{Grad}, and the constants $ \rm A $ and $ a_k $ are defined as \cite[(25)]{M1}
\begin{equation}
{\rm A} \buildrel \Delta \over = \frac{{2{\alpha ^{\frac{\alpha }{2}}}}}{{{g^{1 + \frac{\alpha }{2}}}\Gamma \left( \alpha \right)}}{\left( {\frac{{g\beta }}{{g\beta + \Omega '}}} \right)^{\beta + \frac{\alpha }{2}}},
\label{const1}
\end{equation}
\begin{equation}
{a_k} \buildrel \Delta \over = {\beta-1 \choose k-1}
\frac{{{{\left( {g\beta + \Omega '} \right)}^{1 - \frac{k}{2}}}}}{{\left( {k - 1} \right)!}}{\left( {\frac{{\Omega '}}{g}} \right)^{k - 1}}{\left( {\frac{\alpha }{\beta }} \right)^{\frac{k}{2}}},
\label{const2}
\end{equation}
with a positive parameter $\alpha $ related to the effective number of large-scale cells of the scattering process. Further,
$g~=~{\rm E}\left[ {{{\left| {U_S^G} \right|}^2}} \right] = 2{b_0}\left( {1 - \rho_M } \right) $ represents the average power of the scattering component received by off-axis eddies, where $ 2{b_0} = {\rm E}\left[ {{{\left| {U_S^C} \right|}^2} + {{\left| {U_S^G} \right|}^2}} \right] $ defines the average power of the total scatter components. The amount of scattering power coupled to the LOS component is denoted by $ 0~\le~\rho_M~\le 1$. The average power of the coherent contributions is expressed as $ {\Omega'} = \Omega + 2{b_0}\rho_M + 2\sqrt {2{b_0}\rho_M \Omega } \cos \left( {{\phi _A} - {\phi _B}} \right) $, where $ \Omega = {\rm E}\left[ {{{\left| {{U_L}} \right|}^2}} \right] $ represents the average power of the LOS component. The deterministic phases of the LOS and the coupled-to-LOS scatter terms are denoted by $ \phi _A$ and $ \phi _B $, respectively.
The path loss component is defined by a deterministic model as $ {I_l}=~\exp \left( { - \chi d} \right) $ \cite{PE2}, where $ \chi $ denotes the atmospheric attenuation coefficient and $ d $ represents the length of the FSO link. The parameter $ {\xi} $ is defined as
\begin{equation}
\xi = \frac{{{a_{{d_{eq}}}}}}{{2{\sigma _s}}},
\label{ksi}
\end{equation}
with the equivalent beam radius at the receiver and the pointing error (jitter) standard deviation at the receiver denoted by $ {a_{{d_{eq}}}} $ and $ \sigma_s $, respectively.
Further, the parameter $ {a_{{d_{eq}}}} $ is related to the beam radius at the distance $ d $, $ a_d $, as $a_{{d_{eq}}}^2=~a_d^2\sqrt \pi \operatorname{erf}(v)/(2v\exp ( - {v^2})) $, with $ v =~{\sqrt \pi a}/{(\sqrt 2 {a_d})} $, where the parameter $ a$ denotes the radius of a circular detector aperture.
The parameter $ A_0 $ is defined as $ {A_0} = {\left[ {\operatorname{erf} \left( v \right)} \right]^2} $, where $ \operatorname{erf} \left( \cdot \right) $ is the error function \cite[(8.250.1)]{Grad}. Next, the parameter $ a_d $ depends on the optical beam radius at the waist, $ a_0 $, and on the radius of curvature, $ F_0 $, as ${a_d}\!=~\!{a_0}{\left( {({\Theta _o} + {\Lambda _o})(1 + 1.63\sigma_R^{12/5}{\Lambda _1})} \right)^{1/2}}$, where $ {\Theta _o} =1-{d}/{{F_0}}$, $ {\Lambda _o} = {2d}/{(\iota a_0^2)}$, and $ {\Lambda _1} = {{\Lambda _o}}/{(\Theta _o^2 + \Lambda _o^2)} $ \cite{PE4}. The Rytov variance determines the strength of the optical intensity fluctuations due to atmospheric turbulence. It is defined as $ \sigma_R^{2}=1.23C_n^{2}\iota^{7/6}d^{11/6} $, where $ \iota = 2\pi/\lambda $ is the wave number with the wavelength $ \lambda $, and $ C_n^{2} $ is the refractive index structure parameter.
Based on the definition of the instantaneous SNR of the FSO hop, $ \gamma_{2_l} $, and the PDF of $ I_{R_lD} $ in (\ref{pdf_I}), after some mathematical manipulations, the PDF of $ \gamma_{2_l} $ is derived as \cite[(7)]{M7}
\begin{equation}
\begin{split}
{f_{{\gamma _{2}}}}\left( \gamma \right)& = \frac{{{\xi ^2}{\rm A}}}{{4\gamma }}\sum\limits_{k = 1}^\beta {{a_k}} {\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{ - \frac{{\alpha + k}}{2}}} \\
&\times \MeijerG*{3}{0}{1}{3}{\xi^2+1}{\xi ^2, \, \alpha, \, k}{\frac{{\alpha \beta \kappa \left( {g + \Omega '} \right)}}{{g\beta + \Omega '}}\sqrt {\frac{\gamma }{{{\mu _2}}}} },
\end{split}
\label{pdf_g2}
\end{equation}
where $ \kappa = {{\xi ^2}}/{\left( {{\xi ^2} + 1} \right)} $. Based on the definition of the moments of the combined model ($ \mathcal{M} $ distribution + pointing errors) presented in \cite[(33)]{M5}, the electrical SNR is determined as ${\mu _2} = {{\eta ^2}P_t^2{A_0}^2{I_l}^2{\kappa ^2}{{\left( {g + \Omega '} \right)}^2}}/{\sigma _{RD}^2} $ \cite{M7}.
The cumulative distribution function (CDF) of $ \gamma_{2_l} $ is obtained as
\begin{equation}
\begin{split}
{F_{{\gamma _{2}}}}\left( \gamma \right)& =\! \!\int\limits_0^\gamma \!\! {{f_{{\gamma _{2}}}}\left( x \right)} dx = \frac{{{\xi ^2}{\rm A}}}{2}\sum\limits_{k = 1}^\beta {{a_k}} {\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{ - \frac{{\alpha + k}}{2}}} \\
&\times \MeijerG*{3}{1}{2}{4}{1, \, \xi^2+1}{\xi ^2, \, \alpha, \, k, \, 0}{\frac{{\alpha \beta \kappa \left( {g + \Omega '} \right)}}{{g\beta + \Omega '}}\sqrt {\frac{\gamma }{{{\mu _2}}}}}.
\end{split}
\label{cpdf_g2}
\end{equation}
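For numerical evaluation, (\ref{cpdf_g2}) can be computed with a library implementation of the Meijer's $G$-function. The following is a minimal sketch using the \texttt{mpmath} package, written for the GG special case ($\rho_M=1$, i.e., $g=0$, $\Omega'=1$) discussed in Section III below; all numerical values are illustrative only:
\begin{verbatim}
import mpmath as mp

def cdf_gamma2_gg(gamma, mu2, alpha, beta, xi):
    # GG special case of the CDF: A*a_beta reduces to
    # 2*(alpha*beta)^((alpha+beta)/2)/(Gamma(alpha)*Gamma(beta))
    kappa = xi**2 / (xi**2 + 1)
    z = alpha * beta * kappa * mp.sqrt(gamma / mu2)
    G = mp.meijerg([[1], [xi**2 + 1]],
                   [[xi**2, alpha, beta], [0]], z)
    return xi**2 / (mp.gamma(alpha) * mp.gamma(beta)) * G

print(cdf_gamma2_gg(gamma=0.1, mu2=10.0, alpha=8.1, beta=4, xi=1.1))
\end{verbatim}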
\section{Outage probability analysis}
As one of the most important system performance metrics, the outage probability indicates how often the system falls below a desired performance threshold.
The outage probability, defined as the probability that the instantaneous overall SNR, $\gamma_{eq}$, given in (\ref{eq_snr}), falls below a predetermined outage threshold, $\gamma_{th}$, is written as
\begin{equation}
\begin{split}
{P_{out}} = {F_{eq}}\left( {{\gamma _{th}}} \right) = \Pr \left( \gamma_{eq}< {\gamma _{th}} \right),
\end{split}
\label{pout01}
\end{equation}
where $\Pr\left( \cdot \right)$ denotes probability. After substituting (\ref{eq_snr}) into (\ref{pout01}) and applying some mathematical manipulations, (\ref{pout01}) is rewritten as
\begin{equation}
\begin{split}
&{P_{out}} = \Pr \left( {\frac{{{\gamma _{{1_l}}}{\gamma _{{2_l}}}}}{{{\gamma _{{2_l}}} + {{\tilde \gamma }_{{1_l}}}}} < {\gamma _{th}}\left| {{\gamma _{{1_l}}},{{\tilde \gamma }_{{1_l}}}} \right.} \right) \\
&= \!\!1 - \!\!\!\int\limits_0^\infty \!\! {\int\limits_0^\infty \!\!{\Pr \left(\! {{\gamma _{{2_l}}} > \frac{{{\gamma _{th}}y}}{x}} \right)\!\!{f_{{\gamma _{1_l}},{{\tilde \gamma }_{1_l}}}}\left( {x\!+\!{\gamma _{th}},y} \right)dxdy} } \\
& =\!\! 1 -\!\!\! \int\limits_0^\infty \!\!{\int\limits_0^\infty {{{\bar F}_{{\gamma _2}}}\left( {\frac{{{\gamma _{th}}y}}{x}} \right){f_{{\gamma _{1_l}},{{\tilde \gamma }_{1_l}}}}\left( {x\!+\!{\gamma _{th}},y} \right)dxdy} },
\end{split}
\label{pout2}
\end{equation}
where ${\bar F_{{\gamma _2}}}\left( \cdot \right) = 1 - {F_{{\gamma _2}}}\left( \cdot \right)$ is the complementary CDF (CCDF).
After substituting (\ref{pdf_rf}) and (\ref{cpdf_g2}) into (\ref{pout2}), and following the mathematical derivation presented in Appendix~\ref{App1}, the analytical expression for the outage probability is derived as
\begin{equation}
\begin{split}
&{P_{out}} = 1 - l {M \choose l}\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i} \left( { - 1} \right)}^i}}}{{M - l + i + 1}}{e^{ - \frac{{{\gamma _{th}}\left( {M - l + i + 1} \right)}}{{\psi_i {\mu _1}}}}}\\
& \!+ l{M \choose l}\!\!\sum\limits_{k = 1}^\beta {\sum\limits_{i = 0}^{l - 1} {\sum\limits_{t = 0}^\infty {\sum\limits_{d = 0}^t {\frac{{{{{l-1 \choose i}{t \choose d}\left( { - 1} \right)}^i}{\rho ^t}{\psi_i ^{ - (t + 1)}}}}{{\pi t{!^2}{{\left( {1 - \rho } \right)}^{t - d - 1}}\mu _1^{t - d}}}}} } } \\
&\! \times {\rm A}{a_k}{\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{ - \frac{{\alpha + k}}{2}}}{\xi ^2}{2^{\alpha + k - 4}}\gamma _{th}^{t - d}{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}}\\
& \!\!\!\times \!\MeijerG*{6}{2}{3}{7}{1, \, -t, \, {\frac{{{\xi ^2} + 2}}{2}}}{{\frac{{{\xi ^2} }}{2}},\! \, \!{\frac{{\alpha}}{2},}\! \, \!{\frac{{\alpha+1}}{2},}\! \,\!{\frac{{k }}{2},} \!\,\!{\frac{{ k+1}}{2},}\! \,\!1+d,\!\,0}{\frac{{{\alpha ^2}{\beta ^2}{\kappa ^2}{{\left( {g\!+\!\Omega '} \right)}^2}\!\!{\gamma _{th}}}}{{16{{\left( {g\beta\!+\!\Omega '} \right)}^2}\psi_i{\mu _2}}}}\!.
\end{split}
\label{pout}
\end{equation}
\subsection{Special cases}
If the amount of scattering power coupled to the LOS component is $ \rho_M=1 $ (i.e., the average power of the scattering component received by off-axis eddies, $ g $, is equal to zero), and the average power from the coherent contributions is expressed as $ {\Omega'} =1$, the $ \mathcal{M} $ distribution is reduced to the GG distribution. In that case, the product $ {{\rm A}{a_k}} $ is nonzero only when $ k=\beta $, and based on (\ref{const1}) and (\ref{const2}), it holds ${\rm{A}}{a_k}~=~{2{\alpha ^{\frac{{\alpha + \beta }}{2}}}{\beta ^{\frac{{\alpha + \beta }}{2}}}}/{\left( {\Gamma \left( \alpha \right)\Gamma \left( \beta \right)} \right)}$. Hence, the outage probability in (\ref{pout}) is reduced to the one when the FSO hop is influenced by the GG atmospheric turbulence with the pointing errors, as
\begin{equation}
\begin{split}
&{P_{out}^{GG}} = 1 - l {M \choose l}\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i}\left( { - 1} \right)}^i}}}{{M - l + i + 1}}{e^{ - \frac{{{\gamma _{th}}\left( {M - l + i + 1} \right)}}{{\psi_i {\mu _1}}}}}\\
& \!+ \!\! \frac{{l{M \choose l}{\xi ^2}{2^{\alpha + \beta - 3}}}}{{\pi \Gamma \left( \alpha \right)\Gamma \left( \beta \right)}}
\sum\limits_{i = 0}^{l - 1} {\sum\limits_{t = 0}^\infty {\sum\limits_{d = 0}^t {\frac{{{l-1 \choose i}{t \choose d}{{\left( { - 1} \right)}^i}{\rho ^t}{\psi_i ^{ - (t + 1)}}}}{{ t{!^2}{{\left( {1 - \rho } \right)}^{t - d - 1}}\mu _1^{t - d}}}}} } \\
&\! \!\times \!\!\gamma _{th}^{t - d}\!{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 \!-\!\rho } \right){\mu _1}}}}}\!\!\MeijerG*{6}{2}{3}{7}{1, \, -t, \, {\frac{{{\xi ^2} + 2}}{2}}}{{\frac{{{\xi ^2} }}{2}},\! \,\! {\frac{{\alpha}}{2},}\! \,\!{\frac{{\alpha+1}}{2},} \!\,\!{\frac{{\beta}}{2},} \!\,\!{\frac{{ \beta+1}}{2},}\! \,\!1+d,\!\,0}{\frac{{{\alpha ^2}\!{\beta ^2}\!{\kappa ^2}\!{\gamma _{th}}}}{{16\psi_i{\mu _2}}}}\!.
\end{split}
\label{poutGG}
\end{equation}
When the pointing errors are neglected, i.e., $ \xi \to \infty $, the FSO hop is only affected by GG atmospheric turbulence. After using \cite[(07.34.25.0007.01),
(07.34.25.0006.01) and (06.05.16.0002.01)]{sajt} to find the limit of (\ref{poutGG}) for $ \xi \to \infty $, and assuming that the relay with the best estimated CSI is always available $( M=l )$, the result in (\ref{poutGG}) is simplified to the corresponding one in \cite[(15)]{telfor}.
If the system consists of only one relay, the outage probability
is derived by substituting $M=l=1$ into (\ref{pout}) as
\begin{equation}
\begin{split}
&{P_{out}^{M=1}}\!= \!1\! -\!{e^{ - \frac{{{\gamma _{th}}}}{{ {\mu _1}}}}}\!+\!\sum\limits_{k = 1}^\beta\sum\limits_{t = 0}^\infty \sum\limits_{d = 0}^t{\frac{{{t \choose d}{\rho ^t}{\xi ^2}{2^{\alpha + k - 4}}}}{{\pi t{!^2} \left( {1 - \rho } \right)^{-1}}}} \\
&\times {\rm A}{a_k}{\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{ - \frac{{\alpha + k}}{2}}} \!\!\!{\left( {\frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}} \right)^{t - d}} \!\!{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}} \\
& \! \!\times \!\MeijerG*{6}{2}{3}{7}{1, \, -t, \, {\frac{{{\xi^2} + 2}}{2}}}{{\frac{{{\xi ^2} }}{2}},\! \, \!{\frac{{\alpha}}{2},}\! \,\!{\frac{{\alpha+1}}{2},} \!\,\!{\frac{{k }}{2},} \!\,\!{\frac{{ k+1}}{2},} \!\,\!1+d,\,0}{\frac{{{\alpha ^2}{\beta ^2}{\kappa ^2}{{\left( {g \!+ \!\Omega '} \right)}^2}{\gamma _{th}}}}{{16{{\left( {g\beta + \Omega '} \right)}^2}{\mu _2}}}}\!.
\end{split}
\label{poutM1}
\end{equation}
When the system consists of only one relay and the second FSO hop is affected by the GG atmospheric turbulence with the pointing errors, the outage probability is derived by substituting $M=l=1$ into (\ref{poutGG}), which recovers the result already reported in \cite[(23)]{JSAC2}.
\subsection{High SNR Approximations}
When the value of the electrical SNR of the FSO link is very high, the outage probability for any value of $\mu_1$ can be derived by taking the limit of (\ref{pout}), i.e., $ P_{out}^{{\mu _2}}=\mathop {\lim }\limits_{{\mu _2} \to \infty } {P_{out}}$. Observe that the Meijer's $ G $-function is the only term dependent on $ {{\mu _2} } $ in (\ref{pout}). After utilizing \cite[(07.34.06.0001.01)]{sajt}, it can be concluded that the Meijer's $ G $-function tends to zero when $ {\mu _2}\! \to \!\infty $. The high electrical SNR approximation is derived as
\begin{equation}
P_{out}^{{\mu _2}}\!\! =\!\!\!\! \mathop {\lim }\limits_{{\mu _2} \to \infty } \!\!\!\!{P_{out}}\!\!\approx\!\!1\! -\! l {M \choose l}\!\!\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i}\left( { - 1} \right)}^i}{e^{ - \frac{{{\gamma _{th}}\left( {M\!-\!l\!+\!i\!+\!1} \right)}}{{\psi_i {\mu _1}}}}}}}{{M - l + i + 1}}\!.
\label{mi2inf}
\end{equation}
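As a quick sanity check, for $M=l=1$ one has $\psi_0=1$, and (\ref{mi2inf}) collapses to $P_{out}^{\mu_2}\approx 1-e^{-\gamma_{th}/\mu_1}$, i.e., the floor is set by the outage of the RF hop alone.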
When $\mu_1$ tends to infinity, the third addend in (\ref{pout}) is nonzero only when $ t=d $. Considering this, as well as $\mathop {\lim }\limits_{{\mu _1} \to \infty }{e^{ - \frac{{{\gamma _{th}}\left( {M - l + i + 1} \right)}}{{\psi_i {\mu _1}}}}}=1 $ and $\mathop {\lim }\limits_{{\mu _1} \to \infty }{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}}=1$, the high average SNR approximation is obtained as
\begin{equation}
\begin{split}
&P_{out}^{{\mu _1}}\!\! =\!\!\mathop {\lim }\limits_{{\mu _1} \to \infty } \!\!\!\!{P_{out}}\approx1\!-\!l {M \choose l}\!\!\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i} \left( { - 1} \right)}^i}}}{{M - l + i + 1}} \\
& \!+ l{M \choose l}\!\!\sum\limits_{k = 1}^\beta {\sum\limits_{i = 0}^{l - 1} {\sum\limits_{t = 0}^\infty {\frac{{{{ {l-1 \choose i}\left( { - 1} \right)}^i}{\rho ^t}(1-\rho){\psi ^{ - (t + 1)}}}}{{\pi t{!^2}}}}} } \\
&\! \times {\rm A}{a_k}{\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{ - \frac{{\alpha + k}}{2}}}{\xi ^2}{2^{\alpha + k - 4}}\\
& \!\times \!\MeijerG*{6}{2}{3}{7}{1, \, -t, \, {\frac{{{\xi ^2} + 2}}{2}}}{{\frac{{{\xi ^2} }}{2}},\! \, \!{\frac{{\alpha}}{2},} \!\,\!{\frac{{\alpha+1}}{2},}\! \,\!{\frac{{k }}{2},}\! \,\!{\frac{{ k+1}}{2},} \!\,1+t,\,0}{\frac{{{\alpha ^2}{\beta ^2}{\kappa ^2}{{\left( {g + \Omega '} \right)}^2}{\gamma _{th}}}}{{16{{\left( {g\beta + \Omega '} \right)}^2}\psi_i{\mu _2}}}}\!.
\end{split}
\label{mi1inf}
\end{equation}
Based on (\ref{mi2inf}) and (\ref{mi1inf}), the outage probability floors can be efficiently calculated. The outage floor means that a further increase of the signal power will not improve the system performance, which will be illustrated in the next Section.
Since the approximation in (\ref{mi1inf}) is represented in the infinite series form, a simpler outage probability approximation when $\mu_1$ tends to infinity is obtained by considering only the first term of the infinite summation in (\ref{mi1inf}) as
\begin{equation}
\begin{split}
&P_{out}^{{\mu _{1_{app}}}}\!\!\approx1\!-\!l {M \choose l}\!\!\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i}\left( { - 1} \right)}^i}}}{{M - l + i + 1}} \\
& \!\! \!\!+ l{M \choose l}\!\!\sum\limits_{k = 1}^\beta \! {\sum\limits_{i = 0}^{l - 1} {\frac{{{{{l-1 \choose i}\left( { - 1} \right)}^i}{\xi ^2}{2^{\alpha \!+ \!k\! -\! 4}}{\rm A}{a_k}}}{{\pi\psi_i (1-\rho)^{-1}}}}} {\left( {\frac{{\alpha \beta }}{{g\beta\! +\! \Omega '}}} \right)^{\!\!\!-\frac{{\alpha\!+\!k}}{2}}}\!\!\! \\
& \!\times \!\MeijerG*{6}{2}{3}{7}{1, \, 0, \, {\frac{{{\xi ^2} + 2}}{2}}}{{\frac{{{\xi ^2} }}{2}}, \, {\frac{{\alpha}}{2},} \,{\frac{{\alpha+1}}{2},} \,{\frac{{k }}{2},} \,{\frac{{ k+1}}{2},} \,1,\,0}{\frac{{{\alpha ^2}{\beta ^2}{\kappa ^2}{{\left( {g + \Omega '} \right)}^2}{\gamma _{th}}}}{{16{{\left( {g\beta + \Omega '} \right)}^2}\psi_i{\mu _2}}}}\!.
\end{split}
\label{mi1inf1}
\end{equation}
This approximation is valid only under certain conditions. Since the infinite series in (\ref{mi1inf}) originates from the series representation of the modified
Bessel function of the first kind (see Appendix \ref{App1}), the approximation in (\ref{mi1inf1}) is valid when the argument of $ I_0(\cdot) $ tends to zero, i.e., for lower values of $\rho $.
\section{Numerical results and simulations}
In this Section, numerical results obtained from the derived outage probability expressions are presented. In addition, Monte Carlo simulations are provided to confirm the accuracy of the derived expressions. The second FSO hop is influenced by the $ \mathcal{M} $-distributed atmospheric turbulence channel, with the pointing errors taken into account. Based on \cite{book}, the intensity of the atmospheric turbulence is determined by the Rytov variance, previously defined as $ \sigma_R^{2}=~1.23C_n^{2}\iota^{7/6}d^{11/6} $, with the refractive index structure parameter, $ C_n^{2} $, varying in the range from $ 10^{-17} $ to $ 10^{-13} $ m$ ^{-2/3} $. In this paper, the values of the parameters are taken from \cite{M1,M4,M5}, where they were determined by experimental measurements. The FSO link is assumed to be $ 1 $ km long, and the wavelength employed in the optical second hop is $ 785 $ nm. In addition, the average optical power of the FSO hop is normalized, i.e., $\Omega+2b_0=1$ $(\Omega=0.5$, $b_0=0.25)$, and $ {{\phi _A} - {\phi _B}}=\pi/2 $. The pointing errors strength is determined by the normalized jitter standard deviation, $\sigma_s/a$, where the radius of a circular detector aperture takes the value $a=5$ cm. Further, the value of the optical beam radius at the waist is $ a_0=5 $ cm, and the radius of curvature is $ F_0=-10$ \cite{PE4,chapter}. The value of the outage threshold is $ \gamma_{th}=-10$ dB.
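The Monte Carlo validation follows (\ref{eq_snr}) directly. A minimal sketch for the GG special case of Section III is given below; the parameter values are illustrative only, and the pointing loss is sampled as $I_p/A_0=U^{1/\xi^2}$ with $U$ uniform on $(0,1)$, which follows from the Rayleigh-distributed radial displacement and (\ref{ksi}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def outage_mc(M, l, rho, mu1, mu2, alpha, beta, xi, g_th, n=500000):
    # first hop: PRS on outdated CSI over M i.i.d. Rayleigh links
    ht = (rng.standard_normal((n, M)) +
          1j * rng.standard_normal((n, M))) / np.sqrt(2)
    idx = np.argsort(np.abs(ht)**2, axis=1)[:, l - 1]
    sel = ht[np.arange(n), idx]
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h = np.sqrt(rho) * sel + np.sqrt(1 - rho) * w
    g1, g1t = mu1 * np.abs(h)**2, mu1 * np.abs(sel)**2
    # second hop: unit-mean Gamma-Gamma irradiance times pointing loss
    Ia = rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)
    Ip = rng.uniform(size=n) ** (1.0 / xi**2)    # I_p / A_0
    kappa = xi**2 / (xi**2 + 1)
    g2 = mu2 * (Ia * Ip / kappa)**2
    geq = g1 * g2 / (g2 + g1t)                   # overall SNR gamma_eq
    return np.mean(geq < g_th)

# illustrative run: M = 3 relays, best relay active, gamma_th = -10 dB
print(outage_mc(M=3, l=3, rho=0.8, mu1=100.0, mu2=100.0,
                alpha=8.1, beta=4, xi=1.1, g_th=0.1))
\end{verbatim}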
\begin{table}[b]
\centering
\caption{ Values of $\alpha,\beta, \rho_M$ for different strengths of the scattering component $U_S^G $ for the same intensity of atmospheric turbulence ($ \sigma_R^{2}=0.36$, $C_n^2=0.83 \times 10^{-14} $ m$ ^{-2/3} $) \cite{M5}}
\begin{tabular}{cc}
\hline
$ (\alpha,\beta, \rho_M) $ & impact of $ U_S^G $ \\
\hline
$ (11,4,1) $ & low $(g=0)$ \\
$ (10,5,0.95) $ & medium \\
$ (25,10,0.75) $ & great \\
\hline
\end{tabular}
\label{table}
\end{table}
\begin{figure}[!b]
\centering
\includegraphics[width=3.5in]{Fig22.pdf}
\caption{Outage probability vs. $ \mu_1=\mu_2 $ when the best and the worst relay is selected to perform transmission.}
\label{Fig_22}
\end{figure}
Fig.~\ref{Fig_22} presents the outage probability dependence on $ \mu_1~=~\mu_2 $ when the relay with the best estimated CSI is able to perform further transmission $ (l=M) $, as well as when all relays except the one with the worst estimated CSI are unavailable $ (l=1) $. The atmospheric turbulence intensity is determined to be $ \sigma_R^{2}=0.36 $ and $ C_n^2=0.83 \times 10^{-14} $ m$ ^{-2/3} $, based on experimental measurements performed at Waseda University, Japan, on 15 October 2009 (see \cite{M5}).
For the same Rytov variance, the following parameters $ (\alpha,\beta, \rho_M) $ are considered: $ (11,4,1) $, $ (10,5,0.95) $, and $ (25,10,0.75) $, corresponding to different strengths of the scattering component $ {U_S^G} $ (see Table I). Since the intensity of the turbulence is the same, in Fig.~\ref{Fig_22} the value of the parameter $\rho_M$, representing the amount of the scattering power coupled to the LOS component, determines the outage probability performance. Greater values of $ \rho_M $ result in improved system performance, meaning that the average power of the scattering component $ {U_S^G} $, $ g $, is lower. In fact, when $ \rho_M =1 $, it holds that $ g=0 $, i.e., the total scattering power is related to the component $ {U_S^C} $. This case implies that the atmospheric turbulence is modeled by the GG distribution, which does not take into consideration the scattering component received by off-axis eddies. As the value of $ \rho_M $ decreases, the average power $ g $ becomes greater and the scattered component $ {U_S^G} $ has a greater impact on the system performance. Furthermore, as expected, the outage probability is lower when the selected relay is the one with the best estimated CSI.
\begin{figure}[!b]
\centering
\includegraphics[width=3.5in]{Fig21.pdf}
\caption{Outage probability vs. $ \mu_1 $ for different values of correlation coefficient and the amount of the scattering power coupled to the LOS component.}
\label{Fig_21}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{Fig24.pdf}
\caption{Outage probability vs. $ \mu_1 $ for different values of correlation coefficient in various atmospheric turbulence conditions.}
\label{Fig_24}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{Fig17b.pdf}
\caption{Outage probability vs. $ \mu_2 $ for different values of correlation coefficient and various pointing errors strength.}
\label{Fig_17b}
\end{figure}
The outage probability dependence on the electrical SNR of the FSO hop for the same intensity of the atmospheric turbulence is presented in Fig.~\ref{Fig_21}. Different values of the amount of the scattering power coupled to the LOS component, $ \rho_M $, and of the correlation coefficient, $ \rho $, are assumed. Greater values of the parameter $\rho$ mean that the outdated CSI, used for PRS and relay gain determination, and the actual CSI of the RF link at the time of transmission are more correlated, leading to better system performance. The impact of $ \rho_M $ on the outage probability is more pronounced when the value of $\rho$ is greater. When the correlation coefficient is lower, the RF channel conditions are worse, and the impact of the scattering conditions of the FSO hop on the system performance is lower.
In addition, the results obtained based on the high SNR approximation in (\ref{mi1inf}) are also presented in Fig.~\ref{Fig_21}. When the average SNR over the RF link is very high, the outage probability floor exists, and a further increase in signal power at the source node will not result in improved system performance. The results obtained by (\ref{mi1inf}) overlap with the results achieved by (\ref{pout}) for greater $ \mu_1 $. Further, the outage probability floor occurs at lower values of $ \mu_1 $ when $\rho $ or $\rho_M$ is lower.
\begin{table}[b]
\centering
\caption{ Values of $\sigma_R,C_n^2, \rho_M$ for different intensities of the atmospheric turbulence ($\alpha=8.1$, $\beta=4 $ ) \cite{M4}}
\begin{tabular}{cc}
\hline
$ (\sigma_R,C_n^2, \rho_M) $ & atmospheric turbulence strength \\
\hline
$ (0.52, 1.2 \times 10^{-14}$ m$ ^{-2/3} $, 0.88) & weak (in sunrise) \\
$ (1.2, 2.8 \times 10^{-14}$ m$ ^{-2/3} $, 0.1) & strong (in mid-day) \\
\hline
\end{tabular}
\label{table2}
\end{table}
Fig.~\ref{Fig_24} shows the outage probability dependence on $ \mu_1$ for different values of the parameter $\rho $. The atmospheric turbulence is determined by the Rytov variance and the refractive index structure parameter given in Table II.
As expected, the system performs better in weak turbulence conditions. Further, the correlation effect on the outage probability is more pronounced when the second FSO hop is influenced by weak atmospheric turbulence.
The high SNR approximation results obtained based on (\ref{mi1inf1}) are also provided in Fig.~\ref{Fig_24}. The outage probability floor is present for great values of $ \mu_1$, and it is in agreement with the curves obtained based on (\ref{mi1inf1}), but only when the correlation coefficient is low.
The impact of the pointing errors strength on the outage probability is depicted in Fig.~\ref{Fig_17b}, assuming different values of the parameter $\rho $. The system has better performance when the normalized jitter standard deviation is lower, meaning that the alignment between the transmitter laser at the relay and the receiver telescope at the destination is very good and the pointing error is low. The impact of the correlation on the overall outage probability is more prominent when the pointing error is low.
In addition, the outage probability floor exists for great values of the electrical SNR over the FSO link, so a further increase of the optical power will not improve the system performance. This outage probability floor is in agreement with the high electrical SNR approximation results obtained based on (\ref{mi2inf}). As can be concluded from both (\ref{mi2inf}) and Fig.~\ref{Fig_17b}, this outage probability floor does not depend on the FSO channel conditions, but only on the RF channel parameters. With increasing electrical SNR over the FSO link, the curves for $\sigma_s/a=1 $, $\sigma_s/a=5 $ and $\sigma_s/a=6 $ with the same correlation coefficient will overlap.
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{Fig11.pdf}
\caption{Outage probability vs. $ \sigma_s $ for different values of correlation coefficient.}
\label{Fig_11}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.5in]{Fig19.pdf}
\caption{Outage probability vs. number of relays $ M$.}
\label{Fig_19}
\end{figure}
Fig.~\ref{Fig_11} presents the outage probability dependence on the jitter standard deviation. In accordance with the conclusions from Fig.~\ref{Fig_24} and Fig.~\ref{Fig_17b}, the correlation has more influence on the outage probability in weak atmospheric turbulence conditions and when the pointing error is low. In addition, the pointing errors effect is more dominant when the optical signal transmission suffers from weak atmospheric turbulence.
The usefulness of the multiple relay implementation within RF/FSO system in regards to various atmospheric turbulence and pointing errors conditions is presented in Fig.~\ref{Fig_19}. The greatest SNR gain is accomplished by using PRS with two relays in regard to one relay. Also, when the FSO hop suffers from damaging conditions (strong atmospheric turbulence or intense pointing errors), the multiple relays implementation within RF/FSO system will not provide significant system performance improvement.
\section{Conclusion}
This paper has presented an outage probability analysis of a mixed multiple-relay AF RF/FSO system. Contrary to previously published studies, this system employs variable gain AF relays, and the selection of the active relay is performed by the PRS procedure. The RF links experience a Rayleigh fading environment characterized by fast fading statistics. For that reason, the outdated CSI of the RF channel is used for PRS and relay gain determination. The intensity fluctuations of the optical signal are assumed to follow the M$\acute{{\rm{a}}}$laga ($ \mathcal{M} $) distribution, and to be further impaired by pointing errors. The outage probability expression is derived, and simplified to some special cases previously reported in the literature. Approximate high SNR expressions are also provided.
Based on the derived expressions, numerical results have been presented and confirmed by Monte Carlo simulations, and the effects of system and channel parameters have been examined. The existence of an outage probability floor has been observed, which is an important limiting factor of the RF/FSO system. The outage floor can be efficiently calculated by the derived approximate expressions in the high average/electrical SNR region. It has been illustrated that the outdated CSI used for relay gain adjustment and the PRS procedure can significantly degrade the outage probability performance, particularly when the optical signal transmission over the FSO hop takes place under favorable conditions (weak atmospheric turbulence at sunrise, very low average power of the scattering component received by off-axis eddies, and/or weak pointing errors). Furthermore, the pointing errors phenomenon is more important to the system performance when transmission is performed under weak atmospheric turbulence conditions.
The most significant conclusion of the presented analysis is that the implementation of multiple relays within the RF/FSO system will not provide a performance improvement that justifies the cost and difficulty of implementation
when the optical signal transmission via free space in the second hop is impaired by harmful conditions, such as strong atmospheric turbulence or pronounced misalignment between the FSO apertures.
\section*{Acknowledgement}
This work has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No 734331. The work was supported by the Ministry of Education, Science and Technology Development of the Republic of Serbia under grants TR-32025 and III-47020.
\appendices
\section{}
\label{App1}
After substituting (\ref{pdf_rf}) and (\ref{cpdf_g2}) into (\ref{pout2}), the outage probability is expressed as
\begin{equation}
{P_{out}} = 1 - {\Im _1} + {\Im _2},
\label{A1}
\end{equation}
where integral $ {\Im _1} $ is defined and solved by \cite[(6.614.3)]{Grad}, \cite[(07.44.03.0007.01) and (07.02.03.0002.01)]{sajt} as
\begin{equation}
\begin{split}
&{\Im _1} = l{M \choose l}\sum\limits_{i = 0}^{l - 1} {l-1 \choose i}\frac{{{{\left( { - 1} \right)}^i}}}{{\left( {1 - \rho } \right)\mu _1^2}}{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}}\\
& \times \!\!\!\int\limits_0^\infty \!\!\! {\int\limits_0^\infty {{e^{ - \frac{x}{{\left( {1 - \rho } \right){\mu _1}}}}}{e^{ - \frac{{\psi_i y}}{{\left( {1 - \rho } \right){\mu _1}}}}}} {I_0}}\!\! \left( {\frac{{2\sqrt {\rho \left( {x + {\gamma _{th}}} \right)y} }}{{\left( {1 - \rho } \right){\mu _1}}}} \right)\!\!dxdy\\
& = \! l{M \choose l}\!\sum\limits_{i = 0}^{l - 1} {l-1 \choose i} \frac{{{{\left( { - 1} \right)}^i}}}{{\left( {M - l + i + 1} \right)}}{e^{ - \frac{{{\gamma _{th}}\left( {M - l + i + 1} \right)}}{{\psi_i {\mu _1}}}}}.
\end{split}
\label{A2}
\end{equation}
Integral $ {\Im _2} $ is defined as
\begin{equation}
\begin{split}
&{\Im _2}\!= l{M \choose l}\!\!\sum\limits_{k = 1}^\beta {\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i} \left( { - 1} \right)}^i}{\xi ^2}{\rm A}{a_k}}}{{2\left( {1 - \rho } \right)\mu _1^2}}} {\left(\! {\frac{{\alpha \beta }}{{g\beta + \Omega '}}}\! \right)^{\!\! \!\!-\! \frac{{\alpha + k}}{2}}}{\kern 1pt} \\
&\!\! \times \!\!{\kern 1pt} {e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}}\!\!\!\!\int\limits_0^\infty\!\! \!\! {\int\limits_0^\infty \!\! {{e^{ - \frac{x}{{\left( {1 - \rho } \right){\mu _1}}}}}} } {e^{ - \frac{{\psi_i y}}{{\left( {1 - \rho } \right){\mu _1}}}}}{I_0}\!\!\left( {\frac{{2\sqrt {\rho \left( {x\!+ \!{\gamma _{th}}} \right)y} }}{{\left( {1 - \rho } \right){\mu _1}}}} \right)\\
&\times \MeijerG*{3}{1}{2}{4}{1, \, \xi^2+1}{\xi ^2, \, \alpha, \, k, \, 0}{\frac{{\alpha \beta \kappa \left( {g + \Omega '} \right)}}{{g\beta + \Omega '}}{\sqrt {\frac{{{\gamma _{th}}y}}{{{\mu _2}x}}} }} dxdy.
\end{split}
\label{A3}
\end{equation}
After utilization of \cite[(03.02.06.0037.01)]{sajt} as $ {I_0}\! \! \left( {\frac{{2\sqrt {\rho \left( {x + {\gamma _{th}}} \right)y} }}{{\left( {1 - \rho } \right){\mu _1}}}} \right)\!\! \! \! = \! \! \! \! \sum\limits_{t = 0}^\infty {\frac{{{\rho ^t}{{\left( {x + {\gamma _{th}}} \right)}^t}{y^t}}}{{{{(t!)}^2}{{\left( {1 - \rho } \right)}^{2t}}\mu _1^{2t}}}} $, integral $ \! {\Im _2} $ is rewritten as
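The series identity used above is straightforward to verify numerically. The following is a small sketch of ours (assuming NumPy and SciPy are available) comparing the truncated series with the built-in Bessel function:
\begin{lstlisting}
import math
import numpy as np
from scipy.special import i0

# Check: I_0(2*sqrt(z)) = sum_t z^t/(t!)^2
z = 0.7
series = sum(z**t/math.factorial(t)**2 for t in range(30))
print(series, i0(2.0*np.sqrt(z)))  # both print 1.832456..., equal to machine precision
\end{lstlisting}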
\begin{equation}
\begin{split}
&{\Im _2}\! =\! l{M \choose l}\!\sum\limits_{k = 1}^\beta {\sum\limits_{i = 0}^{l - 1} \frac{{{{{l-1 \choose i}\left( { - 1} \right)}^i}{\xi ^2}{\rm A}{a_k}}}{{2\left( {1 - \rho } \right)\mu _1^2}}} {\left(\! {\frac{{\alpha \beta }}{{g\beta + \Omega '}}}\! \right)^{ \!\!\!\!- \frac{{\alpha + k}}{2}}}{\kern 1pt} \\
& \!\!\times \!\!\sum\limits_{t = 0}^\infty \! {\frac{{{\rho ^t}{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}}}}{{{{(t!)}^2}{{\left( {1 - \rho } \right)}^{2t}}\mu _1^{2t}}}} \!\!\int\limits_0^\infty \!\! {{{\left( {x + {\gamma _{th}}} \right)}^t}{e^{ - \frac{x}{{\left( {1 - \rho } \right){\mu _1}}}}}} dx \times {\Im _{21}},
\end{split}
\label{A4}
\end{equation}
where
\begin{equation}
\begin{split}
{\Im _{21}}& = \int\limits_0^\infty {{y^t}} {e^{ - \frac{{\psi_i y}}{{\left( {1 - \rho } \right){\mu _1}}}}} \\
& \times \MeijerG*{3}{1}{2}{4}{1, \, \xi^2+1}{\xi ^2, \, \alpha, \, k, \, 0}{\frac{{\alpha \beta \kappa \left( {g + \Omega '} \right)}}{{g\beta + \Omega '}}{\sqrt {\frac{{{\gamma _{th}}y}}{{{\mu _2}x}}} }} dy.
\end{split}
\label{A5}
\end{equation}
After representing the exponential function in terms of the Meijer's $ G $-function as $ {e^{ - \frac{{\psi_i y}}{{\left( {1 - \rho } \right){\mu _1}}}}} = \MeijerG*{1}{0}{0}{1}{-}{0}{{\frac{{\psi_i y}}{{\left( {1 - \rho } \right){\mu _1}}}}} $ by using \cite[(01.03.26.0004.01)]{sajt}, integral $ {\Im _{21}} $ is solved with the help of \cite[(07.34.21.0013.01)]{sajt}. The Meijer's $ G $-function in the obtained expression is simplified and transformed by \cite[(07.34.03.0002.01), (07.34.03.0001.01) and (07.34.16.0002.01)]{sajt}, and the final form of integral $ {\Im _{21}} $ is
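The representation of the exponential in terms of the Meijer's $ G $-function can likewise be checked numerically, e.g.\ with the mpmath library (a small sketch of ours):
\begin{lstlisting}
import mpmath as mp

# Check: exp(-x) = G^{1,0}_{0,1}(x | -; 0)
x = mp.mpf('0.35')
g = mp.meijerg([[], []], [[0], []], x)
print(g, mp.exp(-x))  # both print 0.704688...
\end{lstlisting}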
\begin{equation}
\begin{split}
{\Im _{21}}&= \frac{{{2^{\alpha + k - 3}}}}{\pi }{\left( {\frac{\psi_i }{{\left( {1 - \rho } \right){\mu _1}}}} \right)^{ - (t + 1)}} \\
& \times \MeijerG*{2}{5}{6}{3}{{\frac{2-\xi^2}{2}},\,{\frac{2-\alpha}{2}},\,{\frac{1-\alpha}{2}},\,{\frac{2-k}{2}}, \, {\frac{1-k}{2}},\,1}{0, \, 1+t, \, -\frac{\xi^2}{2}} {{\Psi x}},
\end{split}
\label{A6}
\end{equation}
where $ \Psi = \frac{{16{{\left( {g\beta + \Omega '} \right)}^2}\psi_i {\mu _2}}}{{{\alpha ^2}{\beta ^2}{\kappa ^2}{{\left( {g + \Omega '} \right)}^2}{\gamma _{th}}\left( {1 - \rho } \right){\mu _1}}} $.
Next, after substituting (\ref{A6}) into (\ref{A4}), integral $\Im _{2}$ is found as
\begin{equation}
\begin{split}
&{\Im _2} = l{M \choose l}\sum\limits_{k = 1}^\beta {\sum\limits_{i = 0}^{l - 1} \frac{{{{ {l-1 \choose i}\left( { - 1} \right)}^i}{\xi ^2}{\rm A}{a_k}{2^{\alpha + k - 3}}}}{{2\pi }}} \\
& \!\!\times \!\!{e^{ - \frac{{{\gamma _{th}}}}{{\left( {1 - \rho } \right){\mu _1}}}}}{\left( {\frac{{\alpha \beta }}{{g\beta + \Omega '}}} \right)^{\!\!\!\!\!- \frac{{\alpha + k}}{2}}}\!\!\! \sum\limits_{t = 0}^\infty {\frac{{{\rho ^t}{\psi_i ^{ - (t + 1)}}}}{{{{(t!)}^2}{{\left( {1 - \rho } \right)}^t}\mu _1^{t + 1}}}} \times {\Im _{22}},
\end{split}
\label{A7}
\end{equation}
where
\begin{equation}
\begin{split}
{\Im _{22}}&= \int\limits_0^\infty {{{\left( {x + {\gamma _{th}}} \right)}^t}{e^{ - \frac{x}{{\left( {1 - \rho } \right){\mu _1}}}}}} \\
& \times \MeijerG*{2}{5}{6}{3}{{\frac{2-\xi^2}{2}},\,{\frac{2-\alpha}{2}},\,{\frac{1-\alpha}{2}},\,{\frac{2-k}{2}}, \, {\frac{1-k}{2}},\,1}{0, \, 1+t, \, -\frac{\xi^2}{2}} {{\Psi x}}.
\end{split}
\label{A8}
\end{equation}
The binomial theorem \cite[(1.111)]{Grad} is applied as $ {\left( {x + {\gamma _{th}}} \right)^t}\! \! \! = \sum\limits_{d = 0}^t {t \choose d} {x^d}\gamma _{th}^{t - d} $, and by using \cite[(01.03.26.0004.01)]{sajt} the exponential function is represented in terms of the Meijer's $ G $-function as $ {e^{ - \frac{{x}}{{\left( {1 - \rho } \right){\mu _1}}}}}\! \! =\! \! \MeijerG*{1}{0}{0}{1}{-}{0}{{\frac{{x}}{{\left( {1 - \rho } \right){\mu _1}}}}} $. Afterwards, integral $\! {\Im _{22}} $ is solved with the help of \cite[(07.34.21.0011.01)]{sajt} as
\begin{equation}
\begin{split}
&{\Im _{22}} = \sum\limits_{d = 0}^t {t \choose d} \gamma _{th}^{t - d}{\left( {1 - \rho } \right)^{d + 1}}{\mu _1}^{d + 1} \\
&\times \!\MeijerG*{2}{6}{7}{3}{{\frac{2-\xi^2}{2}},\,{\frac{2-\alpha}{2}},\,{\frac{1-\alpha}{2}},\,{\frac{2-k}{2}}, \, {\frac{1-k}{2}},\,-d,\,1}{0, \, 1+t, \, -\frac{\xi^2}{2}} {{\Psi \left( {1 - \rho } \right){\mu _1}\!}}.
\end{split}
\label{A9}
\end{equation}
The Meijer's $ G $-function in (\ref{A9}) is transformed by using \cite[(07.34.16.0002.01)]{sajt} as
\begin{equation}
\begin{split}
&\MeijerG*{2}{6}{7}{3}{{\frac{2-\xi^2}{2}},\,{\frac{2-\alpha}{2}},\,{\frac{1-\alpha}{2}},\,{\frac{2-k}{2}}, \, {\frac{1-k}{2}},\,-d,\,1}{0, \, 1+t, \, -\frac{\xi^2}{2}} {\Psi \left( {1 - \rho } \right){\mu _1}}\\
& = \MeijerG*{6}{2}{3}{7}{1, \, -t, \, \frac{\xi^2+2}{2}}{{\frac{\xi^2}{2}},\,{\frac{\alpha}{2}},\,{\frac{\alpha+1}{2}},\,{\frac{k}{2}}, \, {\frac{k+1}{2}},\,1+d,\,0} {\frac{1}{{\Psi \left( {1 - \rho } \right){\mu _1}}}}.
\end{split}
\label{A10}
\end{equation}
First, (\ref{A10}) is substituted into (\ref{A9}), and then (\ref{A9}) into (\ref{A7}). Next, $\Psi$ is replaced by its definition. The resulting form of $ {\Im _{2}} $, together with $ {\Im _{1}} $ in (\ref{A2}), is substituted into (\ref{A1}), which yields the final outage probability expression in (\ref{pout}).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
The following is a Python implementation of the DNN-SP2-PT method using an example Hamiltonian and an example perturbation. This Python script is a translation of the pseudocode in Alg. 1 to Alg. 3.
\begin{lstlisting}
import numpy as np
def UPDATE_LAYER_0(X0,S0,TrS0,TrX0,D0_converged,v_sigma,\
idemp_err,layer,frob_err_0):
M=0
#### CONVERGENCE TEST FOR S0 USING S0 CONVERGENCE CRITERIA
if (idemp_err[layer] <= 0):
D0_converged = True
print("===================")
print("S0 converged to D0.")
print("===================")
frob_err_0 = np.linalg.norm(S0-X0)
M = layer
elif (layer > 2 and v_sigma[layer-1] != v_sigma[layer-2]\
and idemp_err[layer] >= 4.5 * idemp_err[layer-2] * idemp_err[layer-2]):
D0_converged = True
print("===================")
print("S0 converged to D0.")
print("===================")
frob_err_0 = np.linalg.norm(S0-X0)
M = layer
else:
#### UPDATE S0 IF IT HAS NOT CONVERGED
sigma = v_sigma[layer]
W = sigma # Weight function
B = (1-sigma)*S0 # Bias function
S0 = W*X0 + B # Apply weight and bias
TrS0 = W*TrX0 + (1-sigma)*TrS0 # Update occupation
return S0, TrS0, D0_converged, M, frob_err_0
def UPDATE_LAYER_1(X1,S1,sigma,D0_converged,D1_converged,\
frob_err_0,frob_err_1,idemp_err,layer,M):
#### CONVERGENCE TEST FOR S1 ONLY AFTER S0 CONVERGES
if (D0_converged==True):
#### CONVERGENCE CRITERIA FOR S1
if (frob_err_1[layer] > 9 * frob_err_1[layer-2] * frob_err_0) \
and (layer > M + 1):
D1_converged=True
print("===================")
print("S1 converged to D1.")
print("===================")
#### UPDATE S1 IF IT HAS NOT CONVERGED
if (D1_converged==False):
W = sigma # Weight function
B = (1-sigma)*S1 # Bias function
S1 = W*X1 + B # Apply weight and bias
return S1, D1_converged
def generate_H(N):
#### INITIALIZE SYMMETRIC TEST HAMILTONIAN
H=np.zeros((N,N))
for i in range(0,N):
for j in range(i,N):
H[i,j] = np.exp(-.5*np.abs(i-j))*np.sin(i+1);
H[j,i] = H[i,j];
return H
def generate_H1(N):
#### INITIALIZE SYMMETRIC TEST PERTURBATION
H1 = np.zeros((N,N))
for i in range(0,N):
for j in range(i,N):
H1[i,j] = np.exp(-2*np.abs(i-j))*np.sin(3*i+1)/(i+1);
H1[j,i] = H1[i,j];
return H1
def gersgorin(M):
#### FIND EIGENVALUE ESTIMATES USING THE GERSGORIN CIRCLE THEOREM
min_e=0
max_e=0
for i in range(0,np.shape(M)[0]):
#### EVALUE CIRCLE CENTER
e=M[i,i]
r=0
#### SUM OF ABS VALUE OF COMPONENTS IN ROW I
for j in range(0,np.shape(M)[0]):
r+=np.abs(M[i,j])
#### GERSGORIN EVALUE CIRCLE RADIUS
r-=np.abs(e)
#### UPDATE MIN AND MAX EVALUES WHILE LOOPING OVER ROWS
if e-r < min_e:
min_e = e-r
elif e+r > max_e:
max_e = e+r
return (min_e,max_e)
#### DNN FORMULATION OF DENSITY MATRIX PERTURBATION THEORY
if __name__=="__main__": # Execute only when called from
# command line
#### PRINT 15 DIGITS
np.set_printoptions(precision=15)
#### INITIALIZE
N = 1200 # Number of basis orbitals
Nocc = int(N*0.66) # Number of occupied orbitals
eps = 1e-16 # Small finite value in FP32
sigma = 0 # Initial value of sgn
I = np.eye(N) # Identity matrix
maxlayer = 200 # Maximum number of layers
v_sigma = np.zeros(maxlayer) # Vector to record sigma
idemp_err = np.zeros(maxlayer) # Zeroth-order Idemp. error estimate
frob_err_0 = 0 # Frobenius norm of S0 Idemp. error
frob_err_1 = np.zeros(maxlayer) # Frobenius norm of S1 Idemp. error
layer = 0 # Deep layer counter
M = 0 # Layer where S0 converges to D0
#### LOAD/CONSTRUCT HAMILTONIAN AS INPUT LAYER
H0 = generate_H(N) # Symmetric NxN Hamiltonian matrix
X0 = H0 # Initial input layer
H1 = generate_H1(N) # Perturbation
X1 = H1 # Initial input layer
#### INITIAL IN-PLACE LEARNING FOR FIRST LAYER
(hN,h1) = gersgorin(X0) # Obtain eigenvalue estimates
# using Gersgorin circle theorem
#### INITIAL LINEAR TRANSFORM
W0 = -1/(hN-h1) # Weight (scalar)
B0 = (hN/(hN-h1))*I # Bias (diagonal matrix)
S0 = W0*X0 + B0 # Zeroth-order initial transform
S0 = np.single(S0) # Store in FP32
TrS0 = np.trace(S0) # Keep track of occupation
S1 = W0*X1 # First-order initial transform
S1 = np.single(S1) # Store in single precision
#### SET CONVERGENCE FLAGS
D0_converged=False
D1_converged=False
#### COMPUTATIONAL DEEP LAYERS
while D0_converged==False or D1_converged==False:
#### ACTIVATION FUNCTION f^(1) FROM THREE DUAL
#### HALF-PRECISION MATRIX-MATRIX MULTIPLICATIONS
#### (evaluated every layer: S1 continues to be refined
#### with a fresh X1 even after S0 has converged to D0)
X0_h = np.single(np.half(S0)) # FP16[S0], FP32 acc.
X0_l = np.single(np.half(S0-X0_h)) # FP16[S0-X0_h], FP32 acc.
X1_h = np.single(np.half(S1)) # FP16[S1], FP32 acc.
X1_l = np.single(np.half(S1-X1_h)) # FP16[S1-X1_h], FP32 acc.
X0X1_hh = np.single(np.matmul(X0_h,X1_h)) # FP16 mult., FP32 acc.
X0X1_hl = np.single(np.matmul(X0_h,X1_l)) # FP16 mult., FP32 acc.
X0X1_lh = np.single(np.matmul(X0_l,X1_h)) # FP16 mult., FP32 acc.
X0X1 = np.single(X0X1_hh+X0X1_hl+X0X1_lh) # Accumulation in FP32
X1X0 = np.transpose(X0X1) # Use symmetry of S0 and S1
X1 = np.single(X0X1 + X1X0)
if D0_converged==False:
#### ACTIVATION FUNCTION f^(0) FROM TWO DUAL
#### HALF-PRECISION MATRIX-MATRIX MULTIPLICATIONS
X0_hh = np.single(np.matmul(X0_h,X0_h)) # FP16 mult., FP32 acc.
X0_hl = np.single(np.matmul(X0_h,X0_l)) # FP16 mult., FP32 acc.
X0_lh = np.transpose(X0_hl) # Use symmetry of S0
X0 = np.single(X0_hh+X0_hl+X0_lh) # Accumulation in FP32
TrX0 = np.trace(X0) # Approx. occupation
#### ZEROTH ORDER IDEMPOTENCY ERROR ESTIMATE
idemp_err[layer] = TrS0-TrX0
print("Layer = " + str(layer) + ", Error = " + str(idemp_err[layer]))
#### LEARNING THROUGH BINARY ON-THE-FLY
#### IN-PLACE OCCUPATION ERROR MINIMIZATION
sigma = np.sign(np.abs(2*TrS0 - TrX0 - Nocc) \
- np.abs(TrX0 - Nocc) + eps)
v_sigma[layer] = sigma
#### UPDATE D0 APPROXIMATION AND CHECK CONVERGENCE
(S0, TrS0, D0_converged, M, frob_err_0) \
= UPDATE_LAYER_0(X0,S0,TrS0,TrX0,D0_converged,\
v_sigma,idemp_err,layer,frob_err_0)
else:
#### IF S0 HAS CONVERGED, DO NOT UPDATE S0 AND FLIP SIGMA
S0 = S0
sigma = (-1)*v_sigma[layer-1] # Flip sigma
v_sigma[layer] = sigma
W = v_sigma[layer] # Weight function
frob_err_1[layer] = np.linalg.norm(S1-X1)
print("Frobenius error for S1 = " + str(frob_err_1[layer]))
#### UPDATE D1 APPROXIMATION AND CHECK CONVERGENCE
(S1,D1_converged) \
= UPDATE_LAYER_1(X1,S1,sigma,D0_converged, D1_converged,\
frob_err_0,frob_err_1,idemp_err,layer,M)
#### UPDATE LAYER
layer += 1
D0 = np.double(S0) # Output layer estimate of D^{(0)}
D1 = np.double(S1) # Output layer estimate of D^{(1)}
\end{lstlisting}
\end{widetext}
\end{document}
\section{Introduction}
A material's properties are in general characterized by its response to perturbations. Quantum perturbation theory and response calculations are therefore critical to our understanding of materials. Rayleigh-Schr\"{o}dinger perturbation theory and Green's function approaches are commonly used in response calculations \cite{ESchrodinger26,EAHylleraas30,RMSternheimer54,MKarplus60,RMStevens63,JPople79,SBaroni87,SBaroni01,thelgaker02,JJSakuraiBook}, but they require explicit knowledge of the eigenstates or complex contour evaluations over residues, which can be a major hindrance for fast calculations. A less frequently used method is density matrix perturbation theory (DMPT), which is well-suited for time-independent response calculations. DMPT was pioneered by McWeeny in the 1960's \cite{rmcweeny62}, but was reintroduced in a new form based on recursive Fermi-operator expansion methods in the early 2000's \cite{ANiklasson04,VWeber04,ANiklasson05,VWeber05,ANiklasson07c,ANiklasson15,LTruflandier20,HShang21b}. This new form of DMPT represents a surprisingly simple methodology for calculating time-independent quantum response properties at any order. Costly summations and evaluations of eigenstates, as in Rayleigh-Schr\"{o}dinger perturbation theory, or the summations and evaluations of complex residues, as in Green's function methods, are replaced by matrix-matrix multiplications of density matrix response terms and perturbations in the Hamiltonian, leading to rapidly converging recursive expansions \cite{ANiklasson04,LTruflandier20,HShang21b,HShang21c} which are well-adapted for modern hybrid and high-performance computing.
In this article, we explore the use of Nvidia Tensor cores \cite{nvidia-tc} for increasing the performance of time-independent DMPT calculations. Tensor cores, and the related Tensor Processing Units (TPUs) \cite{TPU} by Google and Matrix cores \cite{matrix-cores} from AMD, are a relatively new type of accelerated hardware developed to address the explosion in popularity of machine learning and artificial intelligence (AI) algorithms.
These specialized hardware units are designed to be extraordinarily efficient at dense tensor contractions, i.e.\ matrix-matrix multiplications, in low floating point (mixed) precision, which is the dominating operation required in deep neural network (DNN) models. To tap into the raw computational power of Tensor core units, and other similarly accelerated hardware, for more general scientific applications, we must therefore develop stable computational algorithms that can take advantage of fast matrix-matrix multiplications and that are robust to low precision floating point operations. We achieve this by mapping the recursive DMPT-based response calculations onto the computational structure of a generalized deep neural network (DNN). This deep neural network density matrix perturbation theory (DNN-DMPT) framework is ideally suited for Tensor core computation. Matrix-matrix multiplications, which appear in the activation functions of our generalized DMPT network, dominate the cost of the DMPT-based response calculations and are naturally carried out on the Tensor core hardware. The multiplications can be performed in mixed precision arithmetics using a dual half-precision matrix representation to enhance the numerical accuracy \cite{JFinkelstein21a}. We also derive a robust stopping criterion that determines at what deep layer the approximate density matrix response has converged under the numerically noisy conditions of the low precision.
DMPT and related approaches to time-independent quantum response theory can be designed to take advantage of matrix sparsity. This makes it possible to achieve a reduced complexity scaling of the computational cost in quantum response calculations as a function of system size \cite{VWeber04,VWeber05,ANiklasson07c,LTruflandier20, HShang10, HShang21a}, which is suitable for the study of large systems. However, in this article, we describe a dense matrix algebra approach, extending previous work \cite{ANiklasson04,ANiklasson15,JFinkelstein21a}, which can utilize the substantial computing power offered by the specialized Tensor core platform. In this way, we show how to achieve high-performance response calculations of 120 Tflops on the Tensor cores of a single Nvidia A100 GPU. In this article, a combined multiplication and addition are counted as a single floating-point operation. Furthermore, by taking advantage of the computational structure of our DNN-DMPT framework, we also develop a multi-GPU based approach. We demonstrate a performance of almost 200 Tflops using two sets of Tensor cores on two separate Nvidia A100 GPUs for a first-order density matrix response calculation.
The development of accelerated quantum response calculations presented in this article continues a currently ongoing theme of using Tensor cores (or other similar machine learning inspired hardware) for more general scientific applications \cite{JFinkelstein21a,JFinkelstein21c,abdelfattah2019-pb,AHaidar20, BLi21, ALewis21, MHauru21, AMorningstar21, RPederson22}. This work is similar in spirit to the transition from CPUs to GPUs for scientific computations that started over a decade ago \cite{JStone10,TGermann09,TMartinez08,TMartinez09a,TMartinez09b,TMartinez11,JMaia12,MHacene12,FLiu2015,WHuhn20,MGordon20,ZGuoqing20,SGoedecker09}, and in some sense, represents the next phase of a Darwinian-like computational evolution driven by new hardware environments. Currently, seven of the ten most powerful computers in the world utilize chips built with Tensor core accelerators, and GPU-accelerated architectures are common among the top 500 supercomputers \footnote{\url{https://top500.org}. Accessed: 2022-3-10.}. Although this article is focused on response calculations using Tensor cores, our approach is general and should also apply to other specialized AI hardware platforms that are designed for high-performance matrix-matrix multiplications with low precision floating point operations, such as AMD's Matrix cores or Google's TPUs.
In our presentation of time-independent quantum response calculations with Tensor cores, we first (in Section II) discuss some background on Tensor cores, electronic structure theory and the density matrix formalism, and how the unperturbed density matrix can be calculated with a deep neural network \cite{JFinkelstein21a}. In Section III we present our approach to time-independent density matrix perturbation theory, and in Section IV, how we can formulate the response calculations through a deep neural network. In Section IV we also derive a robust parameter-free stopping criterion for the layered network, which is of critical importance for the numerically noisy environment. In Section V, we show some examples of density matrix linear response calculations using our deep neural network. We analyze the numerical accuracy and the performance. These examples highlight the utility and efficiency of our framework. We conclude with a brief summary and some final remarks.
\section{Deep Neural Network approximation of the electronic structure using Tensor cores}
Tensor cores can be used to calculate the electronic structure of a molecular system using the computational framework of a convolutional deep neural network, where the electronic structure, as in Hartree-Fock or density functional theory \cite{VFock30,croothaan51,PHohenberg64,RParr89,RDreizler90,thelgaker02}, is represented by an effective single-particle density matrix \cite{rmcweeny56}. This technique, described in the subsections below, forms the basis of our approach to quantum response calculations using DMPT. These response calculations can then similarly be formulated within the framework of a deep neural network.
\subsection{Tensor cores}
A Tensor core is a specialized compute unit designed by Nvidia that exclusively performs a single dense matrix-matrix multiplication each GPU clock cycle. A schematic drawing of a GPU including four Tensor core units is shown in Figure\ \ref{fig:a100-arch}. In our local cluster environment, an A100 GPU has 432 Tensor cores and each A100 Tensor core can compute a $4\times 8$ matrix times an $8 \times 8$ matrix per clock cycle. Nvidia makes Tensor cores available through low-level CUDA API commands to individual blocks of $16 \times 16$ matrices and additionally through more high-level optimized CUDA libraries such as cuBLAS \cite{cublas}, which is what is used in this article. Since each A100 GPU has a maximum clock rate of 1.41 GHz, this equates to a peak theoretical flop rate of approximately 156 Tflops, where we count a combined multiplication and addition as a single floating point operation. Compared to the standard single-precision A100 GPU compute units, this is a more than order-of-magnitude increase in performance. Although impressive, this gain in performance comes at the cost of reduced accuracy, since the Tensor cores operate at peak performance only when utilizing half-precision (16-bit) multiplications with single-precision accumulations. It is therefore paramount that any numerical algorithms used with the Tensor core hardware are made robust to this introduction of lower precision arithmetic. We have previously shown how this is possible for quantum-based molecular dynamics simulations and density-matrix electronic structure calculations using the computational framework of a generalized convolutional deep neural network \cite{JFinkelstein21a,JFinkelstein21c}.
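The quoted peak rate follows directly from these hardware numbers; a one-line sanity check (counting one fused multiply-add as a single flop, as above):
\begin{lstlisting}
tensor_cores = 432         # Tensor cores per A100 GPU
fma_per_cycle = 4 * 8 * 8  # entries of a (4x8)(8x8) product, one FMA each
clock_hz = 1.41e9          # maximum A100 clock rate
print(tensor_cores*fma_per_cycle*clock_hz/1e12)  # ~155.9 Tflops
\end{lstlisting}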
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{Figures/nvidia-a100-SM.pdf}
\caption{Schematic drawing of an Nvidia A100 GPU streaming multiprocessor (SM) with 4 Tensor core (TC) units and 64 FP32 CUDA cores. Each A100 GPU on our local cluster has 108 SM's. INT and FP64 cores are not displayed.}
\label{fig:a100-arch}
\end{figure}
\subsection{Density matrix construction using a deep neural network} \label{sec:dnn-sp2}
In Kohn-Sham density functional theory \cite{PHohenberg64,RParr89,RDreizler90} the electron density, $\rho({\bf r})$, is given as the sum of densities from occupied effective single-particle orbitals, $\{ \psi_i \}$, i.e.\ $\rho({\bf r}) = \sum_{i \in {\rm occ}} \vert \psi_i({\bf r})\vert^2$. The single-particle molecular orbitals that represent the electronic ground state density are obtained from the solution of the non-linear Kohn-Sham eigenvalue equation,
\begin{equation}\label{KS_0}
\left( -\frac{1}{2} \nabla^2 + V_{\rm KS}\left[ \mathbf{R},\rho \right](\r)\right) \psi_i(\r) = \varepsilon_i \psi_i(\r) \;.
\end{equation}
Here $ V_{\rm KS}\left[ \mathbf{R},\rho \right](\r)$ is the density-dependent effective single-particle Kohn-Sham potential (see Appendix for details). The single-particle orbitals, $\{\psi_i({\bf r})\}$, can be expanded in some finite basis set,
\begin{align}
\psi_i({\bf r}) = \sum_l C_{i,l}\phi_l({\bf r}),
\end{align}
so that the ground state density is given by
\begin{align}
\rho({\bf r}) = \sum_{i\in{\rm occ}} \vert \psi_i\vert^2 &=
\sum_{i\in{\rm occ}} \sum_{ll'}
C_{i,l}^*C_{i,l'} \phi_l^*({\bf r}) \phi_{l'}({\bf r}) \\
&\equiv \sum_{l,l'} D_{l,l'} \phi_l^* ({\bf r}) \phi_{l'}({\bf r}) \;,
\end{align}
which defines the density matrix, $D$.
In this density matrix formulation, the non-linear Kohn-Sham eigenvalue equation is replaced by the construction of the \emph{density matrix} (see Appendix for details), which is given implicitly by the Heaviside matrix step function,
\begin{align}\label{eq:theta}
D^\perp = \theta\left(\mu I - H^\perp [D]\right)\;.
\end{align}
Here $H^\perp[D]\equiv H^\perp[\rho]$ is the orthogonalized matrix representation, $H^\perp = Z^THZ$, of the density-matrix dependent or density dependent Kohn-Sham Hamiltonian matrix,
\begin{align}
H_{ij}[\rho] = \int \phi_i^*({\bf r}) \left( -\frac{1}{2} \nabla^2 + V_{\rm KS}\left[ \mathbf{R},\rho \right](\r)\right) \phi_j(\r) d{\bf r},
\end{align}
and $Z$ is an inverse factor of the overlap matrix, $O_{ij} = \int \phi_i^*({\bf r}) \phi_j({\bf r}) d{\bf r}$, such that $Z^TOZ = I$. There are a number of efficient techniques to calculate $Z$ \cite{CNegre16,ANiklasson04b,ERubensson08c,mbenzi96,ERubensson21}. A favorite of ours is based on a matrix recursion technique that is also well-suited for Tensor core multiplications (see Appendix for details). The Hamiltonian matrix, $H[\rho] \equiv H[D]$, depends on the density matrix, $D$, through the density, $\rho({\bf r})$ in the Kohn-Sham potential, $V_{\rm KS}\left[ \mathbf{R},\rho \right](\r)$. The solution of the electronic structure problem is therefore equivalent to a direct self-consistent calculation of the Heaviside matrix step function of $H^\perp[D]$, with the step formed at the chemical potential $\mu$, and where $\mu$ is set such that the trace of the density matrix equals the number of occupied states, i.e.\ ${\rm Tr}[D^\perp] = N_{\rm occ}$ .
A practical way of calculating a matrix step function is by expressing $\theta$ as a recursive expansion \cite{ANiklasson04,rmcweeny59,ANiklasson2011,LTruflandier20,NHighamBook}. A particularly simple and efficient technique for constructing such an expansion is the recursive second-order spectral projection scheme (SP2) \cite{ANiklasson02,ERubensson11,ERubensson14}. In this method, repeated compositions of the polynomials $g(x)=x^2$ and $g(x) = 2x-x^2$, applied to a re-scaled Hamiltonian matrix (so that the spectrum is inside [0,1]), leads to an approximation of the Heaviside step function in \cref{eq:theta}. In each iteration, the polynomial is chosen based on which polynomial gives a matrix trace closest to the true occupation number. In this way, the step is formed automatically at the correct chemical potential.
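As an illustration, a minimal double-precision NumPy sketch of the plain SP2 recursion (our own, without the DNN mapping and mixed-precision machinery discussed below) might read:
\begin{lstlisting}
import numpy as np

def sp2_density_matrix(H, nocc, tol=1e-10, maxiter=100):
    # Gershgorin estimates of the spectral bounds of H
    r = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
    e1 = np.min(np.diag(H) - r)
    eN = np.max(np.diag(H) + r)
    # Initial normalization: spectrum mapped, in reverse order, into [0,1]
    S = (eN*np.eye(H.shape[0]) - H)/(eN - e1)
    for _ in range(maxiter):
        S2 = S @ S                 # the matrix-square "activation"
        trS, trS2 = np.trace(S), np.trace(S2)
        # Choose x^2 or 2x - x^2, whichever trace is closer to nocc
        if abs(trS2 - nocc) <= abs(2.0*trS - trS2 - nocc):
            S = S2
        else:
            S = 2.0*S - S2
        if abs(trS - trS2) < tol:  # Tr[S - S^2] -> 0 at idempotency
            break
    return S
\end{lstlisting}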
The SP2 scheme can easily be mapped onto the computational structure of a convolutional deep neural network (DNN) \cite{JFinkelstein21a}, where the orthogonalized density matrix, given by the step function of $H^\perp$, is generated through a recursive DNN expansion,
\begin{equation}\label{Deep_NN}
D_{\perp} = f\left( \ldots f(W_1f(W_0X_0+B_0) + B_1) \ldots \right).
\end{equation}
Here $\{W_n\}$ and $\{B_n\}$ are the weight and bias values of the network, with the matrix square activation function, $f(S) = S^2$. This DNN formulation of the SP2 scheme is formally the same scheme as the original SP2 method, though it has some conceptual advantages \cite{JFinkelstein21a}. The DNN formulation may also appear more familiar to many readers thanks to the widespread popularity of machine learning methods using neural networks. In general, we can expect Tensor cores to accelerate any recursive expansion that is dominated by matrix-matrix multiplications. This includes matrix inversions using Schulz iterations \cite{GSchulz33} or the inverse factorization of the overlap matrix \cite{ANiklasson04b}, as well as higher-order recursive Fermi-operator expansion methods \cite{ANiklasson2011,DBowler12}. However, in most cases these schemes do not naturally fit into a DNN structure.
After an initial normalization layer, the orthogonalized density matrix approximation at the $n$-th deep layer of the DNN-SP2 scheme is constructed recursively by the network and is given by $S_n$, where
\begin{align}
&S_n = W_n X_n + B_n \;,\label{LinTransf}\\
&X_n = f(S_{n-1})=S_{n-1}^2 \;,\label{DNN-SP2_act}\\
&W_n = \sigma_nI, ~~\sigma_n = \pm 1\;,\\
&B_n = (1-\sigma_n)S_{n-1}\;.
\end{align}
A schematic diagram illustrating the deep neural network structure is given in Figure~\ref{fig:Deep_NN-schematic}.
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{Figures/DNN-schematic.pdf}
\caption{
The DNN-SP2 electronic structure method represented as a deep neural network. Given an orthogonalized Hamiltonian matrix, $H^\perp$, as the input layer, the neural network outputs the (orthogonalized) density matrix, $D^\perp$, given by the Heaviside matrix step function $D^\perp = \theta(\mu I-H^\perp)$, with step formed at the Fermi level, $\mu$. The dominating computational cost for the network is the activation function, $f(S_{n-1})=S_{n-1}^2$, a matrix square operation, which Tensor cores are optimized for. After an initial normalization layer, the weights ($W_n$), biases ($B_n$) and layers ($X_n$) are given by $W_n = \sigma_n I$, $B_n = (1-\sigma_n)S_{n-1}$, $S_n = W_n X_n + B_n$, and $X_n = f(S_{n-1}) = S_{n-1}^2$ for the deep layers. The number $\sigma_n$ is chosen to be $\pm 1$ and $S_n$ represents the $n$-th density matrix approximation.
\label{fig:Deep_NN-schematic}
\end{figure}
The recursive deep neural network procedure in \cref{Deep_NN} starts with an input layer, $X_0$, which is given by the orthogonalized Hamiltonian, $X_0 = H^\perp$. Next, specific weight and bias values, $W_0$ and $B_0$, are chosen so that the eigenvalue spectrum of $H^\perp$ is brought to within the interval $[0,1]$ in reverse order, i.e.\ with the lowest lying eigenvalues closer to 1 and the highest closer to 0. This initial normalization step requires some approximate upper and lower bounds of the eigenvalue spectrum of $H^\perp$, which can be estimated, for example, by Gershgorin circles. Application of the activation function (the matrix square) to this linearly transformed initial layer moves us to the next layer, $X_1$. This sequence of steps represents the initial step $f(W_0X_0+B_0)$ in \cref{Deep_NN}. After this first layer, at each deep layer of the DNN-SP2 network in Eq.\ (\ref{LinTransf}), the weights are chosen as $W_n = \sigma_n I$ and biases as $B_n = (1-\sigma_n)S_{n-1}$, with $\sigma_n=\pm 1$. The signs are selected such that the density matrix converges to the correct occupation by projecting the eigenvalues of the occupied states to 1 and the unoccupied states to 0 \cite{ANiklasson02,ANiklasson2011}. The layers are repeated until all eigenvalues of the output matrix are sufficiently close to 1 or 0, i.e.\ $f(S_{n-1}) \approx S_{n-1}$, in which case we have an idempotent approximation of the density matrix, i.e.\ $D^\perp = D^\perp D^\perp$ or $D = DOD$. This idempotency condition can be used to check the convergence of the DNN-SP2 scheme \cite{ANiklasson2011,JFinkelstein21a,AKruchinina16}.
\subsection{The activation function and its dual matrix representation}
To make full use of the Tensor core hardware, the DNN-SP2 algorithm needs to be re-purposed for low precision arithmetic, as the Tensor cores are at peak performance only when multiplying matrices in half-precision (FP16) and accumulating the products in single-precision (FP32). This can be accomplished by approximating a given matrix $X$ as a sum of two low precision parts \cite{JFinkelstein21a,SMarkidis18,HOotomo22},
\begin{equation}\label{DualMat}
X \approx X_{\rm high} + X_{\rm low},
\end{equation}
where, in the case of half-precision representations, the two matrices are given by
\begin{equation}\label{DualMat2}
\begin{array}{ll}
X_{\rm high} = {\rm FP16}\left[X\right] \\
X_{\rm low} = {\rm FP16}\left[X - X_{\rm high}\right].
\end{array}
\end{equation}
Here ``${\rm FP16}[X]$" denotes the half-precision (16 bit) floating point (FP) representation of $X$. Because we represent the error between $X$ and its half-precision representation, $X_{\rm high}$, also in half-precision, we expect the overall approximation in \cref{DualMat} to have an accuracy of about $10^{-6}$. Machine epsilon for FP16 arithmetic is approximately $10^{-3}$. The matrix square activation function, $f$, can then be approximated by two separate half-precision Tensor core matrix-matrix multiplications with their product accumulated in single-precision (32 bit), that is,
\begin{align}
\begin{split}\label{S0 tensor core}
f(X) &\approx {\rm FP32}[(X_{\rm high} + X_{\rm low})^2] \\
&\approx {\rm FP32}\left[ \mathcal{A} + \mathcal{B} + \mathcal{B}^T \right],
\end{split}
\end{align}
where
\begin{align}
\mathcal{A} &= X_{\rm high} \times X_{\rm high} ~({\rm Tensor~ core~ mult.}) \\
\mathcal{B} & = X_{\rm high} \times X_{\rm low} ~~({\rm Tensor~ core~ mult.}),
\end{align}
and the $X_{\rm low} \times X_{\rm low}$ term has been discarded. To further enhance accuracy, a refinement, or purification, layer may be used as the final output layer of the network in \cref{Deep_NN}, where all matrix algebra is carried out in double-precision, FP64 \cite{JFinkelstein21a,JFinkelstein21c}. This purification step is equivalent to a final double flip, e.g.\ $\sigma =1$ followed by $\sigma = -1$ or vice versa, and was shown to improve accuracy by second order for ground-state density matrix calculations \cite{JFinkelstein21a}. However, experience has shown this additional refinement step to be unnecessary in DMPT and in certain other cases, e.g.\ for QMD simulations \cite{JFinkelstein21c}, and we will not consider any refinement layers in this article. In the case of DMPT, this ultimately stems from there being two main sources of error: the idempotency error and the commutation error. In a variational total energy expression, the energy error is first order in the idempotency error and second order in the commutation error. This is not true for the response calculations, so the additional refinement does not help.
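The accuracy gain of the dual matrix representation in Eqs.\ (\ref{DualMat}) and (\ref{DualMat2}) is easy to see in isolation. The following small sketch of ours emulates the FP16 split with NumPy on a CPU (on the GPU, the corresponding multiplications are executed by cuBLAS on the Tensor cores):
\begin{lstlisting}
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((512, 512)).astype(np.float32)
S = (S + S.T)/2  # symmetric test matrix

S_high = S.astype(np.float16)                               # FP16[S]
S_low = (S - S_high.astype(np.float32)).astype(np.float16)  # FP16[S - S_high]

# Representation error of the dual split vs. a single FP16 cast
err_dual = np.linalg.norm(S - (np.float32(S_high) + np.float32(S_low)))
err_single = np.linalg.norm(S - np.float32(S_high))
print(err_dual, err_single)  # the dual split is orders of magnitude closer
\end{lstlisting}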
\section{Density matrix perturbation theory as a Deep Neural Network}
\subsection{Density matrix perturbation theory}
In density functional theory, a time-independent external perturbation, e.g.\ from a static electric or a magnetic field, can be introduced to the electronic structure through the Kohn-Sham Hamiltonian matrix, $H$. If $\lambda$ is the magnitude of the perturbation we expand the Hamiltonian in powers of $\lambda$ as,
\begin{align}\label{eq:perturbed hamiltonian}
H(\lambda) = H^{(0)} + \lambda H^{(1)} + \lambda^2 H^{(2)} + \cdots \;,
\end{align}
where $H^{(1)}, H^{(2)}, \ldots$ are the perturbations to various order in $\lambda$. An example might be where $H^{(1)}$ is the dipole moment operator that couples the system to an external field in some direction, and $\lambda$ is the field strength in that direction. The corresponding expansion of $H(\lambda)$ in an orthogonalized representation is given by
\begin{align}\label{eq:perturbed hamiltonian_orth}
H^\perp(\lambda) = H^{(0)}_\perp + \lambda H^{(1)}_\perp + \lambda^2 H^{(2)}_\perp + \cdots \;,
\end{align}
where $ H^{(k)}_\perp = Z^T H^{(k)} Z$.
The expression in \cref{eq:perturbed hamiltonian} will induce a similar $\lambda$-expansion in the density matrix, $D$, i.e.\ describing the response in the electronic structure due to the perturbation. In general (as described in \cref{sec:dnn-sp2}) this is a non-linear matrix function of the Hamiltonian and the perturbation, where the density matrix and its response $D^\perp (H(\rho, \lambda)) = \theta(\mu I - H^\perp (\rho,\lambda))$, has to be calculated self-consistently, both with respect to the unperturbed ground-state density (or density matrix) and its response to the original perturbation in $H$. This DMPT approach is a density matrix formulation of what is usually referred to as the coupled-perturbed self-consistent field or density functional perturbation theory \cite{RMStevens63,JPople79,SBaroni87,SBaroni01,COchsenfeld97}.
Expanding $D$ in $\lambda$,
\begin{align}
D(\lambda) = D^{(0)} + \lambda D^{(1)} + \lambda^2 D^{(2)} + \cdots \;,
\end{align}
we can identify $D^{(k)}$ from
\begin{align}\label{eq:D_k}
D^{(k)}_\perp = \frac{1}{k!}\frac{\partial^k \theta(\mu I - H^\perp(\rho,\lambda))}{\partial \lambda^k} {\bigg |}_{\lambda = 0} \;,
\end{align}
through a Taylor series expansion for $D(\lambda)$, where $D^{(k)} = ZD^{(k)}_\perp Z^T$. This response in the density matrix leads to the corresponding response in the electron density,
\begin{align}\label{eq:rho_k}
\rho({\bf r}) = \rho^{(0)}({\bf r}) + \lambda \rho^{(1)}({\bf r}) + \lambda^2\rho^{(2)}({\bf r}) + \ldots \;,
\end{align}
where
\begin{align}
\rho^{(k)}({\bf r}) = \sum_{l,l'} D_{l,l'}^{(k)} \phi_l^*({\bf r}) \phi_{l'}({\bf r}).
\end{align}
The response in the density matrix using a recursive expansion of the matrix step function forms the basis of DMPT and together with the electron density couples to the perturbation in the Hamiltonian. In general, the response equations therefore have to be solved self-consistently.
In its formulation above, DMPT is only valid for static, time-independent perturbations of non-degenerate systems at zero electronic temperature.
The recursive nature of the DNN-SP2 algorithm allows us to map DMPT onto the computational structure of a DNN. In the same way as in Ref.\ \onlinecite{ANiklasson04}, we can determine how perturbations of the Hamiltonian of each order in $\lambda$ propagate, layer by layer, through the DNN-SP2 network. An approximate density matrix and its response to various orders in $\lambda$ are then given as the final output layer. To understand how this works, we can first look at the non-perturbative case in Eqs.\ (\ref{LinTransf}) and (\ref{DNN-SP2_act}). The $n$-th layer of the DNN-SP2 algorithm, $X_n$, is given by $X_n = f(S_{n-1}) = S_{n-1}^2$, and the $n$-th density matrix approximation, $S_n$, is given by
\begin{align}
\begin{split} \label{eq:Sn}
S_{n} & = W_n X_{n} + B_n \\
&= \sigma_{n} S_{n-1}^2 + (1-\sigma_{n})S_{n-1}\;.
\end{split}
\end{align}
If we use this update rule in Eq.~(\ref{eq:Sn}) with an approximate density matrix expanded up to some order in $\lambda$ at layer $n$, i.e.\
\begin{align}\label{density matrix perturbation expansion}
S_n(\lambda) = S_n^{(0)} + \lambda S_n^{(1)} + \lambda^2 S_n^{(2)} + \cdots ~,
\end{align}
we are able to calculate the updated response order by order. For $n>0$, the first three orders in $\lambda^k$ ($k = 0,1,2)$, of the approximate density matrix and its response are given by,
\begin{align}
\begin{split}\label{initialization}
S_{n}^{(0)} &= W_{n}(S_{n-1}^{(0)})^2 + B_{n}^{(0)} \\
S_{n}^{(1)} &= W_{n}(S_{n-1}^{(0)}S_{n-1}^{(1)}+S_{n-1}^{(1)}S_{n-1}^{(0)}) + B_{n}^{(1)} \\
S_{n}^{(2)} &= W_{n}(S_{n-1}^{(0)}S_{n-1}^{(2)}+S_{n-1}^{(1)}S_{n-1}^{(1)}\\
& \qquad + S_{n-1}^{(2)}S_{n-1}^{(0)}) + B_{n}^{(2)} ,\\
\end{split}
\end{align}
and more generally, for all $k$ by,
\begin{align}\label{S_n detailed}
S_{n}^{(k)} = W_{n} \sum_{j=0}^{k} S_{n-1}^{(j)} S_{n-1}^{(k-j)} + B_{n}^{(k)},
%
\end{align}
where the bias functions, $B_n^{(k)}$, are given by
\begin{align}\label{dB}
B_n^{(k)} = (1-\sigma_n)S_{n-1}^{(k)}\;.
\end{align}
%
We can then expect that $D^{(k)}_\perp = \lim_{n\to \infty} S_n^{(k)}$. At $n=0$, the initial transformation is given via the expansion
\begin{align*}
S_0(\lambda) &= \frac{\varepsilon_N I - H^\perp(\lambda)}{\varepsilon_N - \varepsilon_1} \\
& = \frac{\varepsilon_N I - H^{(0)}_\perp}{\varepsilon_N - \varepsilon_1} - \lambda \frac{1}{\varepsilon_N - \varepsilon_1} H^{(1)}_\perp - \lambda^2 \frac{1}{\varepsilon_N - \varepsilon_1} H^{(2)}_\perp - \cdots \\
& = S_0^{(0)} + \lambda S_0^{(1)} + \lambda^2 S_0^{(2)} + \cdots \;,
\end{align*}
where $\varepsilon_1, \varepsilon_N$ are estimates of the smallest and largest eigenvalues of $H^{(0)}_\perp$, respectively. The recursive expansion of the density matrix response in \cref{S_n detailed,dB,initialization} forms the basis of DMPT in its DNN-SP2 form.
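The update rules in \cref{S_n detailed,dB} translate directly into code. The following sketch (our own illustration, in exact double precision and for an arbitrary maximum order $m$) performs one deep layer on the list of response matrices:
\begin{lstlisting}
import numpy as np

def dmpt_layer(S, sigma):
    # One deep layer acting on S = [S^(0), ..., S^(m)]:
    #   S_new^(k) = sigma*sum_j S^(j) @ S^(k-j) + (1 - sigma)*S^(k)
    m = len(S) - 1
    X = [sum(S[j] @ S[k - j] for j in range(k + 1)) for k in range(m + 1)]
    return [sigma*X[k] + (1 - sigma)*S[k] for k in range(m + 1)]
\end{lstlisting}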
Alternatively, we can formulate this DMPT algorithm purely in terms of the network parameters $X$, $W$ and $B$ which may be more desirable for direct machine learning applications \cite{JFinkelstein21a}. In this case, we use the relation
\begin{align}
X_{n+1} = (W_nX_{n} + B_n)^2 \;,
\end{align}
so that, assuming $W_n$ is a scaled identity matrix,
\begin{align}
X_{n+1}^{(k)} & = (W_n)^2 \sum_{j=0}^{k} X_n^{(j)} X_n^{(k-j)} \\
& \qquad + W_n \sum_{j=0}^{k} \{X_{n}^{(j)}, B_n^{(k-j)}\} \\
& \qquad \qquad
+ \sum_{j=0}^{k} B_n^{(j)} B_n^{(k-j)},
\end{align}
where we use the anti-commutator notation, $\{A,B\} = AB + BA$. Notice that for symmetric and commuting matrices, $(AB)^T = B^TA^T = BA = AB$, which is used to speed up the calculations.
In the next section, we adapt the DNN-SP2 method to calculate $D^{(k)}$, for any $k$, using a blocked matrix representation with the same convolutional deep neural network as the original DNN-SP2 scheme. This block-matrix formulation is valuable because it allows a more straightforward mathematical analysis of DMPT, while at the same time, generates an efficient way of calculating the matrix function derivatives in Eq.~(\ref{eq:D_k}), using dense matrix algebra as in Eq.\ (\ref{S_n detailed}), which is well-suited to the Tensor core design.
\subsection{Block matrix representation}
We now re-express DMPT as a DNN using upper-triangular block matrices through the DNN-SP2 method. We call this technique the DNN-SP2 Perturbation Theory (DNN-SP2-PT) method. Given a maximum response order $m$, we map the $m$-th order perturbative expansion for the Hamiltonian, $H = H^{(0)} + \lambda H^{(1)} + \lambda^2 H^{(2)} + \cdots + \lambda^m H^{(m)}$, onto an $(m+1)N \times (m+1)N$ block upper-triangular matrix,
\begin{align}
{\bf H} &=
\begin{pmatrix}
H^{(0)} & H^{(1)} & H^{(2)} & \cdots & H^{(m)} \\
& H^{(0)} & H^{(1)} & \cdots & H^{(m-1)} \\
& & H^{(0)} & \cdots & H^{(m-2)} \\
& & & \ddots & \vdots \\
& & & & H^{(0)} \\
\end{pmatrix} \;.
\end{align}
The bold notation indicates block matrices and the ``$\perp$'' symbols indicating an orthogonalized representation are dropped for convenience. All the lower triangular blocks of ${\bf H}$ are zero matrices.
%
Similarly, the $m$-th order expansion for the density matrix can be mapped onto the matrix
\begin{align}
{\bf D} &=
\begin{pmatrix}
D^{(0)} & D^{(1)} & D^{(2)} & \cdots & D^{(m)} \\
& D^{(0)} & D^{(1)} & \cdots & D^{(m-1)} \\
& & D^{(0)} & \cdots & D^{(m-2)} \\
& & & \ddots & \vdots \\
& & & & D^{(0)} \\
\end{pmatrix} \;.
\end{align}
The idempotency conditions for the $m$-th order density matrix expansion are then given by the blocks of ${\bf D}^2 = {\bf D}$. Similarly, the commutation conditions for DMPT can be captured for each order from the blocks using the block matrix equation ${\bf H}{\bf D}-{\bf D}{\bf H} = 0$.
This block representation can then be used in a straightforward manner to generalize the DNN-SP2 scheme. The input layer is set to be ${\bf X}_0={\bf H}$, in the same way as for DNN-SP2, where the initial weight and bias are also chosen to yield a shifting and rescaling of the spectrum for ${\bf H}$ (or, equivalently, $H^{(0)}$):
\begin{align}
{\bf S}_0 &= \frac{\varepsilon_N {\bf I}-{\bf H}}{\varepsilon_N - \varepsilon_1} \;.
\end{align}
The values $\varepsilon_1,\varepsilon_N$ are the smallest and largest eigenvalue estimates for the non-perturbed system, $H^{(0)}$, i.e.\ for $H(\lambda)$ in the limit $\lambda = 0$. In the same way as before, the $n$-th layer of the network, ${\bf X}_{n}$, is then defined through
\begin{align}
&{\bf S}_{n} = W_{n} {\bf X}_{n} + {\bf B}_{n} \\
&{\bf X}_{n} = f({\bf S}_{n-1} )
\end{align}
where $f$ is the matrix square activation function, i.e., $f({\bf X}) = {\bf X}^2$, the weights, $W_n$, are also defined as before through $W_n = \sigma_n {\bf I}$ (with $\sigma_n =\pm 1$) and the biases, ${\bf B}_n$, are defined by
\begin{align}
{\bf B}_n &= (1-\sigma_n) {\bf S}_{n-1} \;.
\end{align}
The upper-triangular block matrix
\begin{align}
{\bf S}_n &=
\begin{pmatrix}
S_n^{(0)} & S_n^{(1)} & S_n^{(2)} & \cdots & S_n^{(m)} \\
& S_n^{(0)} & S_n^{(1)} & \cdots & S_n^{(m-1)} \\
& & S_n^{(0)} & \cdots & S_n^{(m-2)} \\
& & & \ddots & \vdots \\
& & & & S_n^{(0)} \\
\end{pmatrix} \;,
\end{align}
corresponds to the approximate density matrix expansion at the $n$-th layer, $S_n^{(0)} + \lambda S_n^{(1)} + \lambda^2 S_n^{(2)} + \cdots + \lambda^m S_n^{(m)}$.
Figure~\ref{fig:dnn-schematic-block-matrix} presents a visual of the deep neural network for this generalized block matrix formulation of density matrix perturbation theory. As in the case of regular DNN-SP2, it is possible to append an additional double-precision refinement step to the final output layer, but again, we do not consider this here. No significant benefit in accuracy of the response matrices was observed with this extra refinement step.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/Deep_NN_Block_Matrix_Fig.pdf}
\caption{Deep neural network diagram for the generalized block matrix formulation of the DNN-SP2-PT method with $f({\bf X} )={\bf X}^2$, a matrix square activation function. The network is exactly the same as the network for the DNN-SP2 method we previously presented in Ref.~\onlinecite{JFinkelstein21a} except that here ${\bf B}_k, {\bf S}_k ,{\bf X}_k \in \mathbb{R}^{(m+1)N\times (m+1)N}$ are block matrices with the different response orders on the superdiagonals, where $m$ is the highest-order response considered and $N$ is the number of basis orbitals.}
\label{fig:dnn-schematic-block-matrix}
\end{figure}
\subsection{Activation function}\label{sec:activation-function}
Applying the matrix square activation function, $f$, to ${\bf S}_n$ from the previous section leads to
\begin{align}
f({\bf S}_n) &=
\begin{pmatrix}
f^{(0)} & f^{(1)} & f^{(2)} & \cdots & f^{(m)} \\
& f^{(0)} & f^{(1)} & \cdots & f^{(m-1)} \\
& & f^{(0)} & \cdots & f^{(m-2)} \\
& & & \ddots & \vdots \\
& & & & f^{(0)} \\
\end{pmatrix} \;,
\end{align}
where
\begin{align}\label{eq:activation-function}
f^{(k)} = f^{(k)} (S^{(0)}, S^{(1)}, ..., S^{(k)}) \equiv \sum_{j=0}^{k} S^{(j)}S^{(k-j)}\;.
\end{align}
It is straightforward to verify that this matrix square activation definition and the upper-triangular block matrix representation of the DNN-SP2 scheme yields the exact same update rules for each matrix component, $S_n^{(k)}$, as shown in the previous section in \cref{S_n detailed}.
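This equivalence is also easy to confirm numerically. The sketch below (our own check) assembles the upper-triangular block matrix from random symmetric matrices $S^{(k)}$, squares it, and compares each block in the first block row against \cref{eq:activation-function}:
\begin{lstlisting}
import numpy as np

rng = np.random.default_rng(1)
N, m = 6, 2
S = [rng.standard_normal((N, N)) for _ in range(m + 1)]
S = [(A + A.T)/2 for A in S]  # symmetric response orders

# Assemble the (m+1)N x (m+1)N upper-triangular block matrix
S_bold = np.zeros(((m + 1)*N, (m + 1)*N))
for i in range(m + 1):
    for j in range(i, m + 1):
        S_bold[i*N:(i + 1)*N, j*N:(j + 1)*N] = S[j - i]

F = S_bold @ S_bold
for k in range(m + 1):
    f_k = sum(S[j] @ S[k - j] for j in range(k + 1))
    assert np.allclose(F[0:N, k*N:(k + 1)*N], f_k)
\end{lstlisting}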
It is also worth noting in particular that $f^{(m)}$ depends only on $S^{(0)}, S^{(1)}, ..., S^{(m)}$. So, to obtain the $m$-th order response approximation, $S_n^{(m)}$, at layer $n$, information from only the first $m$ order terms are needed from the previous layer, $n-1$. This observation provides a natural way of parallelizing the matrix multiplications over multiple GPU devices. Given $m$ GPU devices, the $\ell$-th device, $1 \le \ell \le m$, can calculate the $f^{(\ell)}$ activation function, requiring only that $S^{(0)}, S^{(1)}, ..., S^{(\ell-1)}$ be sent from the previous $\ell-1$ devices unidirectionally. Because the information flow only moves one-way, efficiencies can be recovered as latencies are hidden more easily. This multi-GPU approach will be demonstrated in the case of a linear response calculation in \cref{sec:linear response}.
Though we now need $m+1$ matrices to calculate the $m$-th order response (as we see in \cref{eq:activation-function}), the additional storage requirement will turn out to not be the principal limiting factor for system sizes that can be treated compared to the straightforward ground state calculations \cite{JFinkelstein21a} of the density matrix. It is the Tensor core to GPU I/O that remains the bottleneck and limits system size to around 20,000 orbitals for optimal performance.
\subsection{Dual matrix representations for the activation function}
The computational cost for each deep layer is dominated by the matrix activation function, which requires repeated matrix-matrix multiplications. To enhance the accuracy of the low precision floating-point operations from the Tensor cores, we follow the same recipe as our original DNN-SP2 scheme \cite{JFinkelstein21a}, and use a dual matrix representation as in Eqs.\ (\ref{DualMat}) and (\ref{DualMat2}). In the activation functions, $f^{(m)}$, we use this dual matrix representation for each factor $S^{(j)}$, where
\begin{align}
\begin{split}
f^{(m)}(S^{(0)},...,S^{(m)}) &\equiv \sum_{j=0}^m S^{(j)}S^{(m-j)} \\
&= \sum_{j=0}^m (S^{(j)}_{\rm high} + S^{(j)}_{\rm low}) \\ & \qquad \times ( S^{(m-j)}_{\rm high} + S^{(m-j)}_{\rm low})\;.
\end{split}
\end{align}
In the same way as before, the half-precision representations of the matrices $S^{(j)}$ are given by
\begin{align}
S^{(j)}_{\rm high} &= {\rm FP16}\left[S^{(j)}\right] \\
S^{(j)}_{\rm low} &= {\rm FP16}\left[S^{(j)} - S^{(j)}_{\rm high}\right],
\end{align}
where ``${\rm FP16}[S]$" denotes the half-precision (16 bit) floating point (FP) representation of $S$ \cite{ANiklasson2011,JFinkelstein21a,AKruchinina16}. We then approximate the $m$-th order activation function, $f^{(m)}$, by
\begin{align}
f^{(m)} \approx \sum_{j=0}^m \big( S^{(j)}_{\rm high}S^{(m-j)}_{\rm high} + S^{(j)}_{\rm low}S^{(m-j)}_{\rm high} + S^{(j)}_{\rm high}S^{(m-j)}_{\rm low} \big)\;,
\end{align}
and discard the less significant low $\times$ low terms. Each summand in the activation function $f^{(m)}$ is then approximated by separate half-precision Tensor core matrix-matrix multiplications with their product accumulated in single-precision (FP32), i.e.\
\begin{align}\label{FP32S0S1}
S^{(j)} S^{(m-j)} \approx {\rm FP32}\left[ A + B + C \right]
\end{align}
where
\begin{align}
\begin{split} \label{S0S1 tensor core}
A &= S^{(j)}_{\rm high}S^{(m-j)}_{\rm high} ~~({\rm Tensor~ core~ mult.}), \\
B &= S^{(j)}_{\rm high}S^{(m-j)}_{\rm low} ~~({\rm Tensor~ core~ mult.}),\\
C &= S^{(j)}_{\rm low}S^{(m-j)}_{\rm high} ~~~({\rm Tensor~ core~ mult.}).
\end{split}
\end{align}
This is, once again, analogous to the original DNN-SP2 scheme. Symmetry of the Hamiltonian matrix $H$ and its perturbations $H^{(l)}$ implies
\[
S^{(m-j)}S^{(j)} = (S^{(j)}S^{(m-j)})^T
\]
for all $0 \le j < m$, so that we actually only need $\sim m/2$ matrix multiplications instead of $m$ (see Python code in Supplemental Information).
\section{Linear Response Calculations}\label{sec:linear response}
To illustrate and analyze our DNN-SP2 DMPT framework we focus on first-order linear response calculations for time- and basis-set-independent perturbations. Higher orders in the response are given by a straightforward generalization, and extensions to basis-set dependent response calculations are also possible \cite{ANiklasson05}.
First we look at calculations using self-consistent density functional-based tight-binding theory and then Hartree-Fock theory. A pseudocode for a first-order response calculation is shown in \Cref{alg:main,alg:D0,alg:D1} and a Python translation of it is given in the Supplemental Information.
Before analyzing the performance of \cref{alg:main} we need to decide how to determine when convergence is reached for the response. This is far from trivial, as the numerically noisy low precision Tensor core arithmetic makes it hard to determine convergence, and previous stopping criteria for the unperturbed DNN-SP2 scheme may no longer apply \cite{JFinkelstein21c}.
\subsection{Parameter-free stopping criterion}
The stopping criterion that we develop in this section is based on a generalization of the previous parameter-free convergence measures developed for the ground state density matrix \cite{JFinkelstein21a, AKruchinina16}. Parameter-free stopping criteria are based on the idea of identifying an analytical convergence rate that is valid under exact arithmetics. Whenever this convergence rate is broken, which eventually occurs because of the finite precision in the numerical calculations, we expect that we have the best possible answer using the available numerical precision and we terminate the calculation.
In exact arithmetic,
\begin{align}
\lim_{n\to \infty} {\bf S}_n = \lim_{n\to \infty} \begin{pmatrix}
S_n^{(0)} & S_n^{(1)} \\
0 & S_n^{(0)}
\end{pmatrix} & =
\begin{pmatrix}
D^{(0)} & D^{(1)} \\
0 & D^{(0)}
\end{pmatrix} \\
&= {\bf D}\;,
\end{align}
where $D^{(1)}$ is the matrix directional derivative (or G{\^a}teaux derivative) in the direction of the linear perturbation, $H^{(1)}$, i.e.
\begin{align}
D^{(1)} = \lim_{\lambda \to 0} \frac{D(H^{(0)}+\lambda H^{(1)})-D(H^{(0)})}{\lambda}\;.
\end{align}
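This directional derivative is exactly what the upper-right block of the block construction below propagates through the network. As a quick self-contained numerical check (a sketch, not part of the algorithm), using the single polynomial $x^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 5
H0 = rng.standard_normal((N, N)); H0 = 0.5 * (H0 + H0.T)
H1 = rng.standard_normal((N, N)); H1 = 0.5 * (H1 + H1.T)

# Upper-triangular block matrix carrying the perturbation
bold_H = np.block([[H0, H1], [np.zeros((N, N)), H0]])

sq = bold_H @ bold_H
# Diagonal blocks: H0^2; upper-right block: the directional
# derivative of x^2 at H0 in the direction H1
assert np.allclose(sq[:N, :N], H0 @ H0)
assert np.allclose(sq[:N, N:], H0 @ H1 + H1 @ H0)
\end{verbatim}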
We may write
\begin{equation}\label{eq:1st-order-upper-triangular-block-matrix}
g\begin{pmatrix}
H^{(0)} & H^{(1)} \\
0 & H^{(0)}
\end{pmatrix}
={\bf S}_n =
\begin{pmatrix}
S_n^{(0)} & S_n^{(1)} \\
0 & S_n^{(0)}
\end{pmatrix} \;,
\end{equation}
%
where $g = g_n \circ g_{n-1} \circ \cdots \circ g_1$ is the composition of all prior activation functions, weight and bias applications of the DNN-SP2 method (as in \cref{Deep_NN}), or equivalently, polynomials $x^2$ and $2x-x^2$, using the original SP2 framework. Consider a $\sigma$--flip iteration, that is, a double iteration choosing $\sigma_{n+1} = 1$ and $\sigma_{n+2} = -1$, or vice versa, for the next two iterations in the DNN-SP2-PT method. This is equivalent to applying the function $h(x) = 2x^2-x^4$ or $h(x) = (2x-x^2)^2$ (the two possible function compositions with $x^2$ and $2x-x^2$) to ${\bf S}_n$ in Eq.~(\ref{eq:1st-order-upper-triangular-block-matrix}) to get,
\begin{align}\label{DoubleStep}
{\bf S}_{n+2} = h\begin{pmatrix}
S_n^{(0)} & S_n^{(1)} \\
0 & S_n^{(0)}
\end{pmatrix} = \left\{ \begin{array}{l}
2{\bf S}_n^2 - {\bf S}_n^4\\
\left(2{\bf S}_n-{\bf S}_n^2\right)^2
\end{array} \right. \;.
\end{align}
The evaluation of $h$ in \cref{DoubleStep} yields the approximate density matrix and its response at layer $n+2$, ${\bf S}_{n+2}$. Due to the particular upper-triangular block form of ${\bf S}_n$, a result by Mathias~\cite{Mathias_derivative_1996, NHighamBook} shows that the evaluation of $h$ at the matrix ${\bf S}_n$
in Eq.\ (\ref{DoubleStep}) can be written as
\begin{align}
h({\bf S}_n) = \begin{pmatrix}
h(S_n^{(0)}) & \frac{\partial h (S_n^{(0)}+ \lambda S_n^{(1)})}{\partial \lambda} \big |_{\lambda = 0}\\
0 & h(S_n^{(0)})
\end{pmatrix} \;,
\end{align}
where the upper right block, $S_{n+2}^{(1)}$, is the directional derivative of $h$ at $S_n^{(0)}$ in the direction of $S_n^{(1)}$. Furthermore, assuming an eigendecomposition $S_n^{(0)} = V\Lambda V^{T}$, where $\ \Lambda = \textrm{diag}(\lambda_i)$ and $\lambda_1 \geq \dots \geq \lambda_N$, the Daletskii--Krein theorem \cite{NHighamBook,JDaletskii65} tells us that this derivative can be computed analytically and is given by
\begin{align}
\left. \frac{\partial h (S_n^{(0)}+ \lambda S_n^{(1)})}{\partial \lambda} \right\vert_{\lambda = 0} = V (G \odot (V^{T}S_n^{(1)}V)) V^{T} \;,
\end{align}
where in this expression the symbol $\odot$ represents the elementwise, or Hadamard, product and the entries of the matrix $G$ are defined by $G_{ij} = h[\lambda_i,\lambda_j]$, the divided differences quotient for $h$ using eigenvalues $\lambda_i$ and $\lambda_j$. The divided differences quotients are defined as
\begin{align}
h[\lambda_i,\lambda_j] =
\begin{cases}
\frac{h(\lambda_i) - h(\lambda_j)}{\lambda_i-\lambda_j}, & \lambda_i \neq \lambda_j \\
h'(\lambda_i), & \lambda_i = \lambda_j
\end{cases}\;.
\end{align}
The expression in Eq.\ (\ref{DoubleStep}) then becomes
\begin{align}
{\bf S}_{n+2} = \begin{pmatrix}
h(S_n^{(0)}) & V (G \odot (V^{T} S_n^{(1)} V)) V^{T}\\
0 & h(S_n^{(0)})
\end{pmatrix} \;.
\end{align}
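For concreteness, the Daletskii--Krein expression can be evaluated directly. The following sketch (dense FP64 for clarity, with illustrative variable names) compares it against a central finite difference of the matrix function $h(x)=2x^2-x^4$:
\begin{verbatim}
import numpy as np

h     = lambda x: 2 * x**2 - x**4
dh    = lambda x: 4 * x - 4 * x**3
h_mat = lambda M: 2 * (M @ M) - np.linalg.matrix_power(M, 4)

rng = np.random.default_rng(2)
N = 6
S0 = rng.standard_normal((N, N)); S0 = 0.5 * (S0 + S0.T)
S1 = rng.standard_normal((N, N)); S1 = 0.5 * (S1 + S1.T)

lam, V = np.linalg.eigh(S0)
Li, Lj = np.meshgrid(lam, lam, indexing="ij")
diag = np.eye(N, dtype=bool)
# Loewner matrix of divided differences, G_ij = h[lam_i, lam_j]
G = np.where(diag, dh(Li),
             (h(Li) - h(Lj)) / np.where(diag, 1.0, Li - Lj))

deriv = V @ (G * (V.T @ S1 @ V)) @ V.T

eps = 1e-6
fd = (h_mat(S0 + eps * S1) - h_mat(S0 - eps * S1)) / (2 * eps)
print(np.linalg.norm(deriv - fd))  # agrees to finite-difference level
\end{verbatim}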
As a first attempt at deriving a stopping criterion, we assume that the zeroth order approximation, $S_n^{(0)}$, has converged perfectly after $n$ iterations to $D^{(0)}$. Then all of its eigenvalues, $\lambda_i$, will be $0$ or $1$, so that the divided differences quotients, $h[\lambda_i,\lambda_j]$, are also $0$ or $1$ because:
\begin{align}
h[\lambda_i,\lambda_j] =
\begin{cases}
0 = h'(0), & \lambda_i = 0, \lambda_j =0 \\
1 = \frac{h(0) - h(1)}{0-1}, & \lambda_i = 0, \lambda_j = 1 \\
1 = \frac{h(1) - h(0)}{1-0}, & \lambda_i = 1, \lambda_j =0 \\
0 = h'(1), & \lambda_i = 1, \lambda_j =1 \\
\end{cases}\;.
\end{align}
%
Therefore we have that
\begin{align}
\Lambda = \begin{pmatrix}
I & 0 \\
0 & 0
\end{pmatrix}
\end{align}
and
\begin{align}
G = \begin{pmatrix}
0 & & {\bf 1} \\
{\bf 1} & & 0 \\
\end{pmatrix} \;,
\end{align}
where ${\bf 1}$ is a matrix of ones and $I$ is the identity matrix. The form of $G$ then implies that the operation $G\odot $ is idempotent, so that application of $h$ a second time to the expression in Eq.~\eqref{DoubleStep} does not change it. In other words, applying another $\sigma$--flip iteration does not improve or worsen the approximation to $D^{(1)}$.
%
This would suggest that a single $\sigma$-flip iteration following the convergence of $S_n^{(0)}$ should be sufficient for the convergence of $S_n^{(1)}$. However, it was sometimes observed numerically that the first order idempotency error, $ \text{Idemp}_{n}^{(1)} = S_n^{(1)} - S_n^{(0)}S_n^{(1)} - S_n^{(1)}S_n^{(0)}$, continues to decrease if one performs additional $\sigma$--flip iterations.
To further understand this behavior let us again assume that $S_n^{(0)}$ has converged exactly. The first order idempotency error is given by
\begin{align}
\text{Idemp}_{n}^{(1)}
& = S_n^{(1)} - S_n^{(0)}S_n^{(1)} - S_n^{(1)}S_n^{(0)} \\
& = VV^TS_n^{(1)}V V^T - V\Lambda V^T S_n^{(1)} VV^T \\
& \hspace{2cm} - VV^T S_n^{(1)} V\Lambda V^T \\
&= V( (I - \Lambda) Y - Y \Lambda) V^T \\
&= V
\begin{pmatrix}
-Y_{1,1} & 0 \\
0 & Y_{2,2}
\end{pmatrix}V^T,
\end{align}
where $Y = V^TS_n^{(1)}V$ and $Y_{i,j},\ i,j \in \{1,2\}$ are the submatrices of $Y$ conforming to the previous block partitions of $\Lambda$ and $G$. This means that the purpose of any additional iterations should be only to zero out the diagonal blocks of $Y$, ideally leaving the off-diagonal blocks, which do not contribute to the idempotency error, unchanged. Any changes to the off-diagonal blocks are the result of spurious fluctuations at the level of the numerical accuracy. Failure to converge using a single $\sigma$-flip iteration is due to the supposedly zero diagonal blocks of $G$ not being exactly zero. Numerical errors in those blocks play a key role for the final convergence of $S_n^{(1)}$.
We therefore model the convergence in this final phase by including a block-diagonal perturbation of $G$:
\begin{equation}
G = \begin{pmatrix}
\delta_1 & {\bf 1} \\
{\bf 1} & \delta_2
\end{pmatrix}
\end{equation}
with
\begin{equation}
(\delta_1)_{i,j} = h[\lambda_i,\lambda_j],\quad i,j = 1,\dots,N_{\text{occ}}
\end{equation}
and
\begin{equation}
(\delta_2)_{i,j} = h[\lambda_{N_{\text{occ}}+i},\lambda_{N_{\text{occ}}+j}], \quad i,j = 1,\dots, N-N_{\text{occ}},
\end{equation}
where $\lambda_1 \approx \dots \approx \lambda_{N_{\text{occ}}} \approx 1$, $\lambda_{N_{\text{occ}}+1} \approx \dots \approx \lambda_{N} \approx 0$ are the actual eigenvalues of $S_n^{(0)}$.
Given this model, we derive a worst case reduction of the first order idempotency error by a $\sigma$-flip iteration. The first order idempotency error after a $\sigma$-flip iteration is then given by
\begin{align}
\text{Idemp}_{n+2}^{(1)}
& = S_{n+2}^{(1)} - S_n^{(0)}S_{n+2}^{(1)} - S_{n+2}^{(1)}S_n^{(0)} \label{eq:idem_np2} \\
& = V(G\odot Y)V^T - V\Lambda V^T V(G\odot Y)V^T \\
& \hspace{2cm} - V(G\odot Y)V^TV\Lambda V^T \\
&= V( (G-\Lambda G -G \Lambda ) \odot Y ) V^T \\
&= V
\begin{pmatrix}
-\delta_1 \odot Y_{1,1} & 0 \\
0 & \delta_2 \odot Y_{2,2}
\end{pmatrix}V^T.
\end{align}
Thus, instead of immediate convergence after a single $\sigma$-flip iteration, we get an iteration with fast linear convergence and a small prefactor determined by $\delta_1$ and $\delta_2$. Note that we have used $S_{n}^{(0)}$ rather than $S_{n+2}^{(0)}$ in \eqref{eq:idem_np2} since we have assumed our algorithm stops updating $S_{n}^{(0)}$ once it has converged.
The absolute values of the entries of $\delta_1$ can be bounded as
\begin{align}
|(\delta_1)_{i,j}|
& = |h[\lambda_i,\lambda_j]| \\
& \leq \max(|h'(\lambda_i)|,|h'(\lambda_j)|) \label{eq:delta_bound_a} \\
& \leq \max_k |h'(\lambda_k)| \\
& \leq C_{\text{pt}} \max_k |\lambda_k-\lambda_k^2| \label{eq:delta_bound_c} \\
& = C_{\text{pt}} \|S_n^{(0)} - (S_n^{(0)})^2 \|_2 \\
& \leq C_{\text{pt}} \|S_n^{(0)} - (S_n^{(0)})^2 \|_F.
\end{align}
The inequality in \eqref{eq:delta_bound_a} is based on the assumption that all eigenvalues $\lambda_1,\dots,\lambda_{N_{\text{occ}}}$ are larger than the inflection point of $h(x)$ so that $h$ is concave on their convex hull. The inflection points of $h(x)= (2x-x^2)^2$ and $h(x)= 2x^2-x^4$ are $1-1/\sqrt{3} \approx 0.42$ and $1/\sqrt{3} \approx 0.58$, respectively. The inequality in \eqref{eq:delta_bound_c} follows from the factorizations $h'(x)=4(x-x^2)(1+x)$ for $h(x)=2x^2-x^4$ and $h'(x)=4(x-x^2)(2-x)$ for $h(x)=(2x-x^2)^2$. The constant introduced in \eqref{eq:delta_bound_c} can be chosen as $$C_{\text{pt}} \geq 6+4 \sqrt{0.25 + \|S_n^{(0)} - (S_n^{(0)})^2 \|_F}\;,$$ which ensures that the inequality is valid on an interval containing all eigenvalues, including the one that gives the maximum. In practice, one may for example set $C_{\text{pt}} = 9$, which makes the inequality valid on the interval $[-0.25, 1.25]$; this should always include all eigenvalues.
The corresponding \hbox{$|(\delta_2)_{i,j}| \leq C_{\text{pt}} \|S_n^{(0)} - (S_n^{(0)})^2 \|_F$} bound can be derived similarly.
Using all of this, and that the Frobenius norm is unitarily invariant, we get that the reduction of the first order idempotency error in the Frobenius norm is bounded as follows:
\begin{align}\label{ConvCrit}
\begin{split}
\|\text{Idemp}_{n+2}^{(1)}\|_F
& = \sqrt{\|\delta_1 \odot Y_{1,1}\|_F^2 + \|\delta_2 \odot Y_{2,2}\|_F^2} \\
& \leq C_{\text{pt}} \|S_n^{(0)} - (S_n^{(0)})^2\|_F\\
& \hspace{1cm} \times \sqrt{\|Y_{1,1}\|_F^2 + \|Y_{2,2}\|_F^2} \\
& = C_{\text{pt}} \|S_n^{(0)} - (S_n^{(0)})^2 \|_F \\
& \hspace{2cm} \times \|\text{Idemp}_{n}^{(1)}\|_F.
\end{split}
\end{align}
Our proposed parameter-free convergence criterion is then that this bound no longer holds. We also remark that there is negligible cost to evaluating the Frobenius norm and idempotency error for determining the convergence of $S_n^{(1)}$, because the matrix square activation function of $S_n^{(0)}$ is no longer computed once $S_n^{(0)}$ has converged. The parameter-free convergence criterion often leads to a few extra (around three or four) iterations beyond what is required to reach convergence for $S_n^{(0)}$, and no additional accuracy is achieved by performing more iterations once the convergence in Eq.\ (\ref{ConvCrit}) has been reached. The convergence criterion is given in \Cref{alg:D1} as part of the DNN-SP2-PT scheme in \Cref{alg:main,alg:D0,alg:D1}.
\begin{algorithm}
\caption{{\small Pseudocode for the DNN-SP2-PT formulation of the recursive Fermi-operator expansion algorithm for linear response calculations. }}
\label{alg:main}
\algsetup{indent=1em}
\begin{algorithmic}
\STATE $ N$, ~~ \comm{Number of orbitals}
\STATE $ N_{\rm occ}$, ~ \comm{Number of occupied orbitals}
\STATE $ H^{(0)}$ , ~ \comm{Hamiltonian matrix}\\
\STATE $ H^{(1)}$ , ~ \comm{First order perturbation of the Hamiltonian}
\STATE $ \varepsilon_1, ~\varepsilon_N, ~\mbox{Spectral bound estimates of } H^{(0)}$
\STATE $ X_0^{(0)} = Z^T H^{(0)} Z$, ~\comm{Orthogonalized input}
\STATE $ X_0^{(1)} = Z^T H^{(1)} Z$, ~\comm{Orthogonalized input response}
\STATE $ W_0 = - (\varepsilon_N - \varepsilon_1)^{-1}, ~~ B_0^{(0)} = \varepsilon_N(\varepsilon_N-\varepsilon_1)^{-1}I, ~~ B_0^{(1)} = 0$
\STATE $S_0^{(0)} = W_0X_0^{(0)} + B_0^{(0)}$, ~\comm{Initial transform, zeroth order}
\STATE $N_S^{(0)} = {\rm Tr}[S_0^{(0)}]$, ~\comm{Occupation of $S_0^{(0)}$}
\STATE $S_0^{(1)} = W_0X_0^{(1)} + B_0^{(1)}$, ~\comm{Initial transform, first order}
\STATE $ n = 1$, ~~\comm{Number of layers}
\STATE $D^{(0)}, D^{(1)}$ converged = \FALSE
\WHILE{$D^{(0)}$ converged = \FALSE ~\OR~ $D^{(1)}$ converged = \FALSE}
\STATE $ X_n^{(1)} = S_{n-1}^{(0)}S_{n-1}^{(1)}+S_{n-1}^{(1)}S_{n-1}^{(0)}$, \comm{Activation function $f^{(1)}$}
\IF {$D^{(0)}$ converged = \FALSE}
\STATE $ X_n^{(0)} = ({S_{n-1}^{(0)}})^2$, \comm{Activation function $f^{(0)}$}
\STATE $N_X^{(0)} = {\rm Tr}[X_n^{(0)}]$
\STATE ${\rm IdErr}_n^{(0)} = N_S^{(0)} - N_X^{(0)}$, ~~\comm{Idemp. error estimate}
\STATE $\sigma_n = \mbox{sign} \left( |2N_S^{(0)} - N_X^{(0)} -
N_{\rm occ}| - | N_X^{(0)} -N_{\rm occ}| \right)$
\STATE Call Layer Update $D^{(0)}$ (Alg. 2)
\ELSE
\STATE $S_n^{(0)} = S_{n-1}^{(0)}$
\STATE $\sigma_n = (-1) \times \sigma_{n-1}$, \comm{Do the $\sigma$ flip}
\STATE $W_n = \sigma_n I$
\ENDIF
\STATE Call \text {Layer Update $D^{(1)}$} (Alg. 3)
\STATE $ n = n + 1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{{\small Parameter-free convergence criterion and deep layer update for $D^{(0)}$. The convergence criterion shown here for ${\rm IdErr}_{n}^{(0)}$ was derived in a previous work \cite{JFinkelstein21a} by establishing an analytical bound on ${\rm IdErr}_{n}^{(0)}$. }}
\label{alg:D0}
\algsetup{indent=1em}
\begin{algorithmic}
\IF{${\rm IdErr}_n^{(0)} \leq 0$}
\STATE {\rm $D^{(0)}$ converged} = \TRUE\\
$\text{FroErr}^{(0)} = || S_{n-1}^{(0)} - (S_{n-1}^{(0)})^2||_F$, ~~\comm{Idemp. error}
\ELSIF{$n > 2$ \AND $\sigma_{n-1} \ne \sigma_{n-2}$ \AND ${\rm IdErr}_{n}^{(0)} > 4.5 \times ({\rm IdErr}_{n-2}^{(0)})^2$}
\STATE {\rm $D^{(0)}$ converged} = \TRUE\\
$\text{FroErr}^{(0)} = || S_{n-1}^{(0)} - (S_{n-1}^{(0)})^2||_F$, ~~\comm{Idemp. error}
\ELSE
\STATE $ W_n = \sigma_n I, ~ B_n^{(0)} = (I-W_n)S_{n-1}^{(0)}$
\STATE $ S_n^{(0)} = W_nX_n^{(0)} + B_n^{(0)} $, \comm{Update deep layer, zeroth-order}
\STATE $ N_S^{(0)} = \sigma_n N_X^{(0)} + (1-\sigma_n) N_S^{(0)}$, \comm{Update occ. of $S_n^{(0)}$}
\ENDIF
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{{\small Parameter-free convergence criterion and deep layer update for $D^{(1)}$.}}
\label{alg:D1}
\algsetup{indent=1em}
\begin{algorithmic}
\IF {$D^{(0)}$ converged = \TRUE}
\STATE $\text{FroErr}_n^{(1)} = ||S_{n-1}^{(1)}- (S_{n-1}^{(0)}S_{n-1}^{(1)}+S_{n-1}^{(1)}S_{n-1}^{(0)}) ||_F,$ \comm{First order idemp. error}
\IF{$\text{FroErr}_n^{(1)} > 9 \times \text{FroErr}^{(0)} \times \text{FroErr}_{n-2}^{(1)}$}
\STATE {\rm $D^{(1)}$ converged} = \TRUE
\ENDIF
\ENDIF
\IF {$D^{(1)}$ converged = \FALSE}
\STATE $ B_n^{(1)} = (I-W_n)S_{n-1}^{(1)}$
\STATE $ S_n^{(1)} = W_nX_n^{(1)} + B_n^{(1)},$ \comm{Update deep layer, first order} \\
\ENDIF
\end{algorithmic}
\end{algorithm}
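In a Python translation of \Cref{alg:D1}, the response stopping test amounts to only a few lines. The fragment below is a sketch with illustrative variable names, not the full Supporting Information code:
\begin{verbatim}
import numpy as np

def response_converged(S0, S1, fro_err0, fro_err1_prev, C_pt=9.0):
    """Parameter-free stopping test for S^(1), cf. Eq. (ConvCrit).

    S0            : converged zeroth-order iterate S_{n-1}^{(0)}
    S1            : current first-order iterate S_{n-1}^{(1)}
    fro_err0      : ||S^(0) - (S^(0))^2||_F at zeroth-order convergence
    fro_err1_prev : first-order idempotency error two layers earlier
    """
    fro_err1 = np.linalg.norm(S1 - (S0 @ S1 + S1 @ S0))  # Frobenius
    # Stop when the analytic linear convergence rate is broken
    return fro_err1 > C_pt * fro_err0 * fro_err1_prev, fro_err1
\end{verbatim}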
\subsection{Uniform electric field perturbations in DFTB theory}
To demonstrate and evaluate the DNN-SP2-PT scheme in \Cref{alg:main,alg:D0,alg:D1} using Tensor cores, we will first use second-order self-consistent charge density functional tight-binding (SCC-DFTB) theory \cite{DPorezag95,MElstner98,MFinnis98,TFrauenheim00,BHourahine20}, which is an approximation of Kohn-Sham DFT. SCC-DFTB theory is based on a second-order expansion of the Kohn-Sham DFT energy functional around a reference electron charge distribution of overlapping neutral atomic electron densities, where the electrostatic interaction is approximated by overlapping atom-centered electron charge distributions at short distances and monopole net partial atomic charges at large distances. In all our SCC-DFTB examples we use periodic boundary conditions, but without any k-points or Bloch functions (Gamma-point only). Our implementation is based on the SCC-DFTB software package LATTE \cite{LATTE}. As a perturbation we use an external electric field, ${\boldsymbol {\cal E}} = [{\cal E}_x, {\cal E}_y, {\cal E}_z]$ that interacts with the system through the dipole moment. Because of the periodic boundary conditions and the real-space representation, the external field interaction term has a simple sawtooth form and our response calculations do not use the geometric (Berry) phase approach that plays a crucial role in the modern theory of polarization \cite{KingSmith93,Resta93}. This theory is required to better estimate the polarizability measured in experiments for systems with periodic cells. In this way our calculations only represent an approximate model problem, but still fully illustrate the efficiency and accuracy of our approach to quantum response calculations.
In our model the perturbed DFTB approximation of the Kohn-Sham Hamiltonian matrix is,
\begin{equation}
H = H^{(0)} + \sum_\alpha {\cal E}_\alpha H_{\alpha}^{(1)}, ~~ \alpha = x,y,z,
\end{equation}
where
\begin{equation}
H^{(1)}_\alpha = \frac{1}{2} \left(OR_\alpha + R_\alpha O\right),
\end{equation}
and $H^{(0)}$ is the DFTB Hamiltonian of the unperturbed system.
Here $O_{kl} = \int \phi_k^*({\bf r}) \phi_l({\bf r}) d{\bf r}$ is the basis-set overlap matrix and $R_\alpha$ is the diagonal matrix with the atomic coordinates (in the $\alpha = x,y,z$ direction) for each atom-centered basis-set orbital at position ${\bf R}$.
Orthogonalization by the square root of the inverse overlap matrix, $Z = O^{-1/2}$, leads to the transformed input Hamiltonian matrices,
\begin{align}
{H^{(0)}}^{\perp} = Z^T H^{(0)} Z, \quad {H^{(1)}}^{\perp} = Z^T H^{(1)} Z\;,
\end{align}
that can be used to demonstrate and evaluate the DNN-SP2-PT calculations with Tensor cores. In the first calculations below we will only investigate non-self-consistent perturbation calculations of the linear response in the electron density represented by the first-order perturbation in the density matrix $D^{(0)}$. No coupled perturbed self-consistent field optimizations \cite{RMStevens63,JPople79,SBaroni87,SBaroni01,VWeber04,VWeber05} are performed for the DFTB examples. The computational cost of repeated coupled-perturbed self-consistent-field iterations would depend linearly on the number of iterations required to reach some chosen convergence tolerance, which here can be ignored.
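A dense sketch of this congruence transformation is given below; in production one may instead build $Z$ with sparse or recursive schemes, but an eigendecomposition of the symmetric positive definite overlap suffices to illustrate the step (the toy overlap is, of course, only for the self-test):
\begin{verbatim}
import numpy as np

def lowdin_z(O):
    """Inverse square root Z = O^{-1/2} of the overlap matrix."""
    w, U = np.linalg.eigh(O)
    return U @ np.diag(w**-0.5) @ U.T

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 10))
O = A @ A.T + 10.0 * np.eye(10)   # toy SPD overlap

Z = lowdin_z(O)
assert np.allclose(Z.T @ O @ Z, np.eye(10))
# H0_perp = Z.T @ H0 @ Z;  H1_perp = Z.T @ H1 @ Z
\end{verbatim}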
The use of Tensor cores makes a substantial difference in performance. Figure \ref{fig:dnn-sp2-pt performance} displays the computational performance of the DNN-SP2-PT method when calculating the first order response for water systems of various sizes (periodic boxes containing varying number of water molecules randomly distributed) using Nvidia A100 GPUs. For the water system with Hamiltonian matrix size of $19,008 \times 19,008$ (corresponding to 3,168 water molecules), the observed peak performance of the DNN-SP2-PT calculations on the Tensor cores of a single A100 GPU (not including memory data transfers or I/O) is approximately 120 Tflops, consistent with previous results for the DNN-SP2 algorithm \cite{JFinkelstein21a,JFinkelstein21c}.
Using the Tensor cores increases the flop rate by an order of magnitude in comparison to the GPU-only performance. Note, though, that the GPU-only calculations are single-precision (FP32), and that the dual matrix representation needed to achieve a comparable level of accuracy approximately doubles the cost of the half-precision (FP16) calculations on the Tensor cores.
To further increase performance, we take advantage of the computational structure inherent in the DNN-DMPT formalism discussed above in \cref{sec:activation-function}. There, it was suggested that a multi-GPU approach could be used to exploit the unidirectional dependency in the response calculation. The two activation functions, $f^{(0)}$ and $f^{(1)}$, can utilize the Tensor cores of two separate A100 GPUs to calculate $D^{(0)}$ and $D^{(1)}$. The algorithm only requires that $S_n^{(0)}$ be passed, once, unidirectionally between the two GPUs at each step. Using the Tensor cores of two GPUs, in parallel, our peak performance is approximately 195 Tflops, an almost 63\% gain in performance over a single A100.
The flop rate for both implementations, that is, the single-GPU and the multi-GPU configurations, was estimated using the formula:
\begin{align}\label{flop rate formula}
\text{Est. flop rate} = \frac{5 \times N^3 \times \text{number of iterations}}{\text{time for main loop [s]}} \;
\end{align}
where timings for \cref{flop rate formula} were measured using CUDA event timers for a single run. The variation in time over consecutive runs is small, less than 5\% for the smallest system and less than 1\% for the largest one. The factor 5 comes from the fact that five total Tensor core matrix multiplications in half-precision are needed in each iteration using the dual matrix representation, as can be seen from \cref{S0 tensor core} and \cref{S0S1 tensor core}. The update to $S^{(0)}$ requires two Tensor core multiplications and the update to $S^{(1)}$ requires three Tensor core multiplications.
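In code, the estimate is a one-liner; the numbers in the usage comment are hypothetical:
\begin{verbatim}
def estimated_flop_rate(N, num_iterations, loop_time_s):
    # Eq. (flop rate formula): five Tensor-core multiplications
    # per iteration (two for S^(0), three for S^(1))
    return 5 * N**3 * num_iterations / loop_time_s

# e.g. estimated_flop_rate(19008, 30, 1.2) / 1e12  ->  Tflops
\end{verbatim}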
To test the numerical accuracy of the DNN-SP2-PT method, a reference calculation using double-precision arithmetic instead of mixed precision is produced for each system of water. We then compare the density matrix response generated by the DNN-SP2-PT method using Tensor cores and the dual matrix representation with that of the reference calculation by computing the relative 2-norm error between the two. The result is shown in the bottom panel of Figure~\ref{fig:dnn-sp2-pt performance}. These relative errors appear to be independent of system size and hover around $5 \times 10^{-5}$, which is about a factor 20 less than machine epsilon for FP16 arithmetic. With 10 bits in the mantissa, FP16 provides about 3 digits accuracy, because $1/2^{10} \approx 10^{-3}$. Ideally we would therefore have an accuracy of 6 digits with the dual matrix representation. Even if we don't fully reach this limit, we get an accuracy of $\sim 10^{-5}$. A thorough investigation of Tensor core FP16/FP32 precision can be found in Refs.~\onlinecite{HOotomo22} and \onlinecite{MFasi21}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[xbar = .05cm,
enlargelimits=0.055,
height=0.5\textwidth,
width=0.46\textwidth,
xlabel= Matrix size ($N$),
ylabel = Tflops,
symbolic x coords ={5904,10266,12720,16512, 19008},
ylabel near ticks,
xlabel near ticks,
bar width = 0.2 cm,
xtick style={draw=none},
ytick pos=left,
xticklabels={,5904,10266,12720, 16512, 19008},
ytick={50,100,150,200},
ymajorgrids=true,
grid style=dashed,
enlarge x limits=0.1,
x label style={font=\large},
y tick label style={font=\large},
y label style={font=\large},
legend pos=north west,
legend cell align={left}]
\addplot[ybar,fill=orange] coordinates {
(5904, 8.95749)
(10266, 9.25438)
(12720, 9.37222)
(16512, 9.47657)
(19008, 9.46085)
};
\addplot[ybar,fill=blue] coordinates {
(5904, 79.2236)
(10266, 86.0932)
(12720, 88.9632)
(16512, 98.5815)
(19008, 118.962)
};
\addplot[ybar,fill=red] coordinates {
(5904, 121.719)
(10266, 140.48)
(12720, 146.909)
(16512, 163.227)
(19008, 194.184)
};
\legend {GPU-only (FP32) ,Tensor core (FP16/FP32), 2x Tensor core (FP16/FP32)};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
height=0.25\textwidth,
width=0.46\textwidth,
xlabel={Matrix size ($N$)},
ylabel={Relative error},
xmin=4900, xmax=20000,
ymin=0, ymax=1.1E-4,
xtick={5000,10000,15000,20000},
xticklabels={5000,10000,15000,20000},
scaled x ticks=false,
ytick={0,5e-5, 1e-4},
yticklabels={0,5e-5, 1e-4},
legend pos=north west,
ymajorgrids=true,
grid style=dashed,
enlarge x limits=0.05,
x label style={font=\large},
y tick label style={font=\large},
scaled y ticks=false,
y label style={font=\large},
]
\addplot[ color = blue,
line width = 0.75,
mark size = 3,
mark options={fill=orange,scale=1},
mark = square*,
]
coordinates {
(5904, 5.269263e-05)
(10248, 2.83086e-05)
(12720, 4.3089376e-05)
(16512, 3.3885891e-05)
(19008, 4.638253e-05)
};
\legend{DNN-SP2-PT + Tensor cores}
\end{axis}
\end{tikzpicture}
\caption{Top figure displays the flop rate for periodic box water systems with 66\% occupation, $N_{\rm occ} = (2/3) \times N$, as a function of number of orbitals, $N$, for the DNN-SP2-PT algorithm using only one Nvidia A100 GPU with Tensor cores (Tensor core) and two separate sets of A100 Tensor cores (2x Tensor core), along with a standard single-precision version using one A100 GPU's single-precision compute cores only (GPU-only). The theoretical flop rate for a single A100 GPU using the Tensor cores is 156 Tflops and for the A100 only using its FP32 compute cores is 9.75 Tflops \cite{nvda-a100}. Bottom figure displays $|| \cdot ||_2$ norm error in the first-order density matrix response produced by the DNN-SP2-PT method using Tensor cores on the test water boxes. The reference was obtained from a double-precision DNN-SP2-PT response calculation. }
\label{fig:dnn-sp2-pt performance}
\end{figure}
Because the DNN-SP2-PT method calculates density matrix derivatives in the direction of the perturbation, we could, alternatively, compute these derivatives through finite differencing. How does such a direct approach compare to the DNN-SP2-PT scheme?
To investigate this we consider two- and four-point finite differencing stencils, with accuracies of second and fourth order in the step size, $h$. The first-order density matrix, $D^{(1)}$, can then be approximated by
\begin{equation}\label{FnD_2}\begin{array}{ll}
{\displaystyle D^{(1)} = \frac{D(H^{(0)} + h H^{(1)}) - D(H^{(0)} - h H^{(1)})}{2h} + \mathcal{O}(h^2)}\;,
\end{array}
\end{equation}
or by
\begin{equation}\label{FnD_4} \begin{array}{l}
{\displaystyle D^{(1)} = \frac{-D(H^{(0)} + 2h H^{(1)}) + 8D(H^{(0)} + h H^{(1)})}{12h}} \\
~~\\
{\displaystyle \hspace{.50cm} + \frac{- 8D(H^{(0)} - h H^{(1)}) + D(H^{(0)} - 2h H^{(1)})}{12h} + \mathcal{O}(h^4) \;,}
\end{array}
\end{equation}
for a suitably chosen step size $h$, and where each evaluation of $D$ uses the regular DNN-SP2 method with Tensor core acceleration. The second-order approximation costs about the same as a first-order DNN-SP2-PT calculation (4 instead of 5 matrix multiplications in each layer), whereas the four-point stencil takes about twice as long (8 instead of 5 multiplications).
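A sketch of the two stencils is given below; here \texttt{density} stands for one full Tensor-core DNN-SP2 evaluation $H \mapsto D(H)$ and is an assumed callable, not an actual API:
\begin{verbatim}
def d1_two_point(density, H0, H1, h):
    # Eq. (FnD_2): two density-matrix builds, O(h^2) accurate
    return (density(H0 + h * H1) - density(H0 - h * H1)) / (2 * h)

def d1_four_point(density, H0, H1, h):
    # Eq. (FnD_4): four density-matrix builds, O(h^4) accurate
    return (-density(H0 + 2 * h * H1) + 8 * density(H0 + h * H1)
            - 8 * density(H0 - h * H1)
            + density(H0 - 2 * h * H1)) / (12 * h)
\end{verbatim}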
A water box with $N=5904$ orbitals is used for testing. Figure~\ref{fig:finite-diff} shows the relative error in approximating $D^{(1)}$ using both finite differencing stencils for several values of the step size. The second-order 2-point stencil approximation is, even in the best case, almost an order of magnitude less accurate, and it requires finding the optimal value of the step size, $h$, which depends on the machine epsilon of the numerical precision of the algorithm. Using the higher-order 4-point stencil yields better accuracy, though for a different $h$ value, but is still a factor of 2 less accurate than our DMPT framework, even for the best step size displayed. The four-point stencil requires 4 separate density matrix calculations, which represents a considerable overhead compared to the DNN-SP2-PT calculation. From this analysis it is clear that our DNN-SP2-PT method offers significant advantages: no search for an optimal step size $h$ is needed (which is non-trivial due to the mixed FP16/FP32 precision), the numerical accuracy is higher, and the cost is significantly lower compared to the more accurate four-point stencil finite difference approximation.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/finite-diff.pdf}
\caption{Standard finite differencing 2-point and 4-point stencil for approximating the first-order term in the density matrix response, Eqs.\ (\ref{FnD_2}) and (\ref{FnD_4}). The dotted line shows the relative error obtained from using our Tensor core based DNN-SP2-PT approach. }
\label{fig:finite-diff}
\end{figure}
\subsection{Hartree-Fock based polarizability calculations}
\begin{table}[]
\centering
\begin{tabular}{ |l|l|c|c| }
\hline
$N$ & $N_{\rm occ}$ & DMPT & Ergo Reference \\
& & TC FP16 & CPU FP64\\
\hline
1128 & 235 & -286.118378567 & -286.104481906 \\
1824 & 380 & -463.263240067 & -463.26617634 \\
2400 & 500 & -610.91459163 & -610.92738903 \\
4560 & 950 & -1169.0446183 & -1169.104407943 \\
7224 & 1505 & -1856.85780567 & -1856.9148556467\\
\hline
\end{tabular}
\caption{Isotropic polarizabilities, $(\alpha_{xx}+\alpha_{yy}+\alpha_{zz})/3$, of water clusters using DNN-SP2-PT with Tensor cores. Double-precision Ergo calculations on CPUs are given as reference values, which use the regular SP2 Fermi-operator expansion scheme for the density matrix construction. Here, $N$ is the number of basis functions and $N_{\rm occ}$ is the occupation number. The 6-31G$^{**}$ basis set is used for the Ergo calculations.} \label{table:polarizability}
\end{table}
In the demonstration and analysis of the DNN-SP2-PT approach above we used SCC-DFTB theory with periodic boundary conditions. In this section, we use restricted Hartree-Fock theory, where the response is given from the solution of the coupled-perturbed self-consistent field equations \cite{RMStevens63,JPople79,VWeber04,VWeber05}. Molecular clusters without periodic boundary conditions were used. To generate the relevant self-consistent Fockians (i.e.\ the effective single-particle Hamiltonians) and their first-order perturbations we have used the electronic structure software package Ergo \cite{Ergo}. Our calculations only include the last step of this coupled-perturbed self-consistent field calculation, with the already converged self-consistent Fockian, and its response obtained from double-precision arithmetic. As a scalable testbed system we have used water clusters of approximately spherical shape and the Gaussian 6-31G$^{**}$ basis set. The water clusters were extracted from a molecular dynamics simulation of bulk water at standard temperature and pressure by including all water molecules within spheres of varying radius. The water cluster geometries are publicly available at \url{ergoscf.org}. The coupled perturbed self-consistent field iterations were converged to within a tolerance $\|Y-Y^T\|_F < 10^{-3}$, $Y = (H^{(0)}D^{(1)}+H^{(1)}D^{(0)})O$, for the first order commutation conditions.
\Cref{table:polarizability} shows the calculated isotropic polarizabilities using the DNN-SP2-PT algorithm with Tensor cores in low mixed precision, FP16/FP32 floating-point arithmetics, in comparison to the Ergo reference CPU calculations in double-precision. The accuracy (4 to 5 digits) is about the same as for the SCC-DFTB calculations with the same system-size independence. This is the expected accuracy for the dual matrix representations using mixed half-precision calculations with single-precision accumulation as in Eqs.\ (\ref{FP32S0S1}) and (\ref{S0S1 tensor core}).
\section{Conclusions}
We have shown how time-independent quantum response calculations based on density matrix perturbation theory can be generated through a deep neural network. Tensor cores, a new type of accelerated hardware developed for machine learning, are ideally suited for computations using this network structure. Our results add to the nascent, yet growing, body of work that utilizes specialized machine learning hardware for non-AI scientific purposes, e.g.\ see Refs.\ \onlinecite{AMorningstar21,ALewis21,MHauru21,JFinkelstein21a,JFinkelstein21c}. With this article, we have further broadened the applicability of Tensor cores to quantum response calculations, which represents yet another example of a more general non-AI science application. This further demonstrates their, mostly untapped, potential for accelerating numerical methods in chemistry and materials science.
Despite the low precision arithmetics of Tensor cores, we maintain sufficient accuracy for DFTB-based linear response calculations while obtaining significant performance gains, more than an order of magnitude, over what can be achieved with a high-performance GPU-only implementation. The computational structure of density matrix perturbation theory, and our corresponding deep neural network formulation, is uniquely suited toward leveraging Tensor cores. This structural framework further allows us to design a natural multi-GPU approach to perturbation theory. With two separate intra-node A100 devices, we achieved close to 200 Tflops when computing the density matrix and its first order response.
Though we have focused our applications to Nvidia Tensor core computations, our DNN-SP2-PT scheme is quite general and similar gains in performance should be expected from other accelerated hardware architectures such as TPUs and Matrix cores \cite{TPU,matrix-cores}.
\section{Acknowledgments}
This work is supported by the LANL LDRD-ER program, and by the U.S. Department of Energy through the Los Alamos National Laboratory, as well as the Swedish national strategic e-science research program (eSSENCE). We thank the CCS-7 group and Darwin cluster at Los Alamos National Laboratory for computational resources. Darwin is funded by the Computational Systems and Software Environments (CSSE) subprogram of LANL’s ASC program (NNSA/DOE).
\section{Supporting Information}
Complete Python implementation of linear response calculation using DNN-SP2-PT; includes Python implementation of pseudocode in Algorithms 1-3.
\section{Introduction}
Today it is well known that strongly correlated systems
in condensed matter can be successfully described with the help of non-relativistic holography; for a review see for example \cite{Hartnoll:2016apf}. This duality is based on the idea that the strongly coupled theory on the boundary
can be described by string theory in the bulk. Further, when the curvature of the space-time is small we can use
classical gravity instead of the full string theory machinery. In the case of non-relativistic holography the situation is even more interesting since we have basically two possibilities: Either we use an Einstein metric with non-relativistic isometries
\cite{Son:2008ye,Balasubramanian:2008dm,Herzog:2008wg} or we
introduce non-relativistic gravities in the bulk
\cite{Son:2013rqa,Janiszewski:2012nb}, like Newton-Cartan gravity \cite{Cartan:1923zea}
\footnote{For some recent works, see
\cite{Bergshoeff:2017dqq,Bergshoeff:2016lwr,Hartong:2016yrf,Hartong:2015zia,
Afshar:2015aku, Bergshoeff:2015ija,Bergshoeff:2015uaa,Bergshoeff:2014uea,
Andringa:2010it}.}
or Ho\v{r}ava gravity \cite{Horava:2009uw}. It is then certainly very interesting
to study matter coupled to non-relativistic gravity. We can either study
field theories on a non-relativistic background as in
\cite{Bergshoeff:2015sic,Bagchi:2015qcw,Festuccia:2016caf,Geracie:2015dea,Jensen:2014aia,Hartong:2014pma} or particles \cite{Kuchar:1980tw,Barducci:2017mse,Bergshoeff:2014gja,Kluson:2017pzr},
or even higher dimensional objects, as for example non-relativistic strings
and p-branes \cite{Andringa:2012uz,Harmark:2017rpg}.
In this work we would like to focus on the canonical formulation of the non-relativistic
string and p-brane in Newton-Cartan background. The starting point of our analysis is
the relativistic string in a general background that couples to the NSNS two form.
Then we use the limiting procedure that was proposed in
\cite{Bergshoeff:2015sic} and try to find the corresponding string action.
Note that this is a different limiting procedure from the one used for the non-relativistic string in flat background, where the non-relativistic limit is performed
on the coordinates \cite{Gomis:2000bd,Gomis:2005bj,Gomis:2005pg}
\footnote{For recent work, see for example \cite{Batlle:2016iel,Kluson:2017djw,Kluson:2017ufb,Kluson:2017vwp}.}.
It is important to stress that if we apply this limiting procedure, which leads to the corank-1 spatial metric and rank-1 temporal metric of Newton-Cartan gravity, to the case of the string action,
we find that there is no way to ensure that this action is finite. In order
to resolve this problem we have to select two flat target-space longitudinal directions, exactly in the same way as in \cite{Andringa:2012uz}. Then we propose an ansatz for the NSNS two form field that is constructed with
the help of the fields that define the Newton-Cartan geometry and for which the divergent contribution from the coupling to the NSNS two form exactly cancels the divergent
contribution coming from the Nambu-Goto part of the action. As a result we obtain
an action for the string in Newton-Cartan background that was proposed in
\cite{Andringa:2012uz} using a different procedure.
As the next step we proceed to the canonical formulation of this theory. Here, however,
we encounter an obstacle: we are not able to invert the relation
between the conjugate momenta and velocities in the case of a non-zero gauge field $m_\mu^{ \ a}$, whose explicit definition will be given in the next section. For that reason
we restrict ourselves to the case of vanishing gauge field, keeping in mind that
the case of non-zero gauge field deserves further study. We then find the Hamiltonian for this non-relativistic string, which is a linear combination of two first class constraints;
this is a manifestation of the fact that the two-dimensional string action
is invariant under world-sheet diffeomorphisms. As the next step we generalize this analysis to the case of the $p-$brane.
We first determine a well defined action for the non-relativistic p-brane
for a specific form of the background $p+1$ form that couples to the world-volume of the p-brane. Then we introduce an equivalent form of the $p-$brane action
that allows us to perform the canonical analysis of this theory. Finally we determine the constraint structure of this theory and we show that there are $p+1$ first class constraints: $p$ spatial diffeomorphism constraints and one Hamiltonian constraint.
This paper is organized as follows. In the next section (\ref{second}) we determine the form of the non-relativistic string in Newton-Cartan background and perform its Hamiltonian analysis. Then in section (\ref{third}) we generalize this analysis to the case of the
$p-$brane. Finally, in the conclusion (\ref{fourth}) we outline our results and suggest possible extensions of this work.
\section{Review of the Non-relativistic Limit for the Nambu-Goto String}
\label{second}
In this section we derive the non-relativistic form of the string action
in Newton-Cartan background using the limiting procedure developed in
\cite{Bergshoeff:2015uaa}. We start with
the Nambu-Goto form of the action in a general background
\begin{equation}\label{funstringact}
S=-\tilde{\tau}_F \int d\tau d\sigma\sqrt{-\det (E_\mu^{ \ A}E_\nu^{ \ B}\eta_{AB}
\partial_\alpha x^\mu\partial_\beta x^\nu)}+\tilde{\tau}_F
\int d\tau d\sigma B_{\mu\nu}\partial_\tau x^\mu \partial_\sigma x^\nu \ ,
\end{equation}
where $E_\mu^{ \ A}$ is the $d-$dimensional vierbein so that the metric components have the form
\begin{equation}
G_{\mu\nu}=E_\mu^{ \ A}E_\nu^{ \ B}\eta_{AB} \ , \eta_{AB}=\mathrm{diag}(-1,1,\dots,1)
\end{equation}
Note that the metric inverse $G^{\mu\nu}$ is defined with the help of the inverse vierbein $E^\mu_{ \ B}$ that obeys the relation
\begin{equation}
E_\mu^{ \ A}E^\mu_{ \ B}=\delta^A_{ \ B} \ , \quad E_\mu^{ \ A}E^\nu_{ \ A}=
\delta_\mu^{ \ \nu} \ .
\end{equation}
Further, $B_{\mu\nu}$ is the NSNS two form field that plays a crucial role in the
limiting procedure. Finally, $x^\mu \ ,\mu=0,\dots,d-1$ are the embedding coordinates of the string, where the two-dimensional world-sheet is parameterized by $\sigma^\alpha\equiv(\tau,\sigma)$.
Let us now briefly describe the procedure that
leads to the Newton-Cartan background from a general background; for a more detailed discussion see the original paper \cite{Bergshoeff:2015uaa}. The starting point
is the following ansatz for the $d-$dimensional vierbein \cite{Bergshoeff:2015uaa}
\begin{equation}\label{vierbeinomega}
E_\mu^{ \ 0}=\omega\tau_\mu+\frac{1}{2\omega}m_\mu \ , \quad E_\mu^{ \ a'}= e_\mu^{ \ a'} \ ,
\end{equation}
where $a'=1,\dots,d-1$ and where $\omega$ is free parameter which goes to infinity in the Newton-Cartan limit. Note that in this case the metric has the form
\begin{eqnarray}
& &G_{\mu\nu}=E_\mu^{ \ A}E_\nu^{ \ B}\eta_{AB}
=-\omega^2 \tau_\mu \tau_\nu-\frac{1}{2}\tau_\mu m_\nu-\frac{1}{2}\tau_\nu m_\mu+h_{\mu\nu}-\frac{1}{4\omega^2}m_\mu m_\nu=\nonumber \\
&=&-\omega^2\tau_\mu\tau_\nu
+\bar{h}_{\mu\nu}-\frac{1}{4\omega^2}m_\mu m_\nu \ , \quad
\bar{h}_{\mu\nu}=h_{\mu\nu}-\frac{1}{2}\tau_\mu m_\nu-\frac{1}{2}\tau_\nu m_\mu \ , \quad
h_{\mu\nu}=e_\mu^{ \ a'}e_\nu^{ \ b'}\delta_{a'b'} \ . \nonumber \\
\end{eqnarray}
Inserting this metric into the Nambu-Goto action and expanding
with respect to $\omega$ we obtain
\begin{equation}\label{actnaive}
S=-\tilde{\tau}_F \omega^2\int d\tau d\sigma
\sqrt{-\det\mathbf{a}} -\frac{\tilde{\tau}_F}{2}\int d\tau d\sigma
\sqrt{-\det\mathbf{a}}\mathbf{a}^{\alpha\beta}\bar{h}_{\alpha\beta} \ ,
\end{equation}
where we defined
\begin{equation}
\mathbf{a}_{\alpha\beta}=\tau_{\mu\nu}\partial_\alpha x^\mu \partial_\beta x^\nu \ ,
\quad \mathbf{a}^{\alpha\beta}\mathbf{a}_{\beta\gamma}=\delta^\alpha_\gamma \ , \quad
\bar{h}_{\alpha\beta}=\bar{h}_{\mu\nu}\partial_\alpha x^\mu \partial_\beta x^\nu \ ,
\end{equation}
and where we used the fact that $\mathbf{a}_{\alpha\beta}$ is a non-singular $2\times 2$ matrix.
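For completeness, the expansion used here follows from the fact that $\mathbf{a}_{\alpha\beta}$ is a $2\times 2$ matrix, so that
\begin{eqnarray}
\sqrt{-\det(\omega^2\mathbf{a}_{\alpha\beta}+\bar{h}_{\alpha\beta})}=
\omega^2\sqrt{-\det\mathbf{a}}\left(1+\frac{1}{2\omega^2}
\mathbf{a}^{\alpha\beta}\bar{h}_{\alpha\beta}+O(\omega^{-4})\right) \ ,
\nonumber \\
\end{eqnarray}
using $\det(\omega^2\mathbf{a}+\bar{h})=\omega^4\det\mathbf{a}\,\det(\delta^\alpha_{ \ \beta}+\omega^{-2}\mathbf{a}^{\alpha\gamma}\bar{h}_{\gamma\beta})$ together with $\det(1+\epsilon M)=1+\epsilon \,\mathrm{tr}\,M+O(\epsilon^2)$.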
From (\ref{actnaive}) we see that there is a term proportional to $\omega^2$ that cannot be made finite by a rescaling of $\tilde{\tau}_F$. In the case of the string in a flat non-relativistic background such a term
can be canceled by a suitable form of the background $NSNS$ two form. Further,
this two form field should be built from the fields that define the Newton-Cartan theory, i.e. $m_\mu,\tau_\nu$. However, it turns out that it is not possible to find such an NSNS two form, due to the fact that it has to be antisymmetric in the space-time indices. In order to solve this problem we implement the generalization of Newton-Cartan gravity that was introduced in
\cite{Andringa:2012uz}. Explicitly, we split the target-space indices $A$ into $A=(a',a)$ where now $a=0,1$ label the longitudinal directions and $a'=2,\dots,d-1$ the transverse ones. Then we introduce $\tau_\mu^{ \ a}$ so that we write
\begin{equation}
\tau_{\mu\nu}=\tau_\mu^{ \ a}\tau_\nu^{ \ b}
\eta_{ab} \ , \quad a,b=0,1 \ , \quad \eta_{ab}=\mathrm{diag}(-1,1) \ .
\end{equation}
In the same way we introduce the vierbein $e_\mu^{ \ a'}, a'=2,\dots,d-1$, and we also generalize $m_\mu$ into $m_\mu^{ \ a}$. The $\tau_\mu^{ \ a}$ can be interpreted as the gauge fields of the longitudinal translations while the $e_\mu^{ \ a'}$ as the gauge fields of the transverse translations
\cite{Andringa:2012uz}. Then we can also introduce their inverses with respect to their longitudinal and transverse subspaces
\begin{eqnarray}
e_\mu^{ \ a'}e^\mu_{ \ b'}=\delta^{a'}_{b'} \ , \quad
e_\mu^{ \ a'}e^\nu_{ \ a'}=\delta_\mu^\nu-\tau_\mu^{ \ a}
\tau^\nu_{ \ a} \ , \quad \tau^\mu_{ \ a}\tau_\mu^{ \ b}=\delta_a^b \ , \quad
\tau^\mu_{ \ a}e_\mu^{ \ a'}=0 \ , \quad
\tau_\mu^{ \ a}e^\mu_{ \ a'}=0 \ . \nonumber \\
\end{eqnarray}
This generalization implies the following form
of the vierbein
\begin{equation}
E_\mu^{ \ a}=\omega \tau_\mu^{ \ a}+\frac{1}{2\omega}m_\mu^{ \ a} \ , \quad
E_\mu^{ \ a'} =e_\mu^{ \ a'}
\end{equation}
so that we find following form of the metric
\begin{eqnarray}
G_{\mu\nu}&=&E_\mu^{ \ a}E_\nu^{ \ b}\eta_{ab}+E_\mu^{ \ a'}E_\nu^{ \ b'}\delta_{a'b'}
=\nonumber \\
&=&\omega^2 \tau_{\mu\nu}+h_{\mu\nu}+\frac{1}{2}\tau_\mu^{ \ a}m_\nu^{ \ b}\eta_{ab}+
\frac{1}{2}m_\mu^{ \ a}\tau_\nu^{ \ b}\eta_{ab}+\frac{1}{4\omega^2}m_\mu^{ \ a}m_\nu^{ \ b}
\eta_{ab} \ . \nonumber \\
\end{eqnarray}
It was shown in \cite{Bergshoeff:2015uaa} that in order to find the right form
of the particle action in Newton-Cartan background we should consider the following
ansatz for the background gauge field $A_\mu=\omega \tau_\mu-\frac{1}{2\omega}m_\mu$.
In order to find the correct form of the action for the string in Newton-Cartan background
we propose an analogous form of the NSNS two form
\begin{eqnarray}
B_{\mu\nu}&=&\left(\omega\tau_\mu^{ \ a}-\frac{1}{2\omega}m_\mu^{ \ a}\right)\left(\omega\tau_\nu^{ \ b}-\frac{1}{2\omega}m_\nu^{ \ b}\right)\epsilon_{ab}=
\nonumber \\
&=&\omega^2\tau_\mu^{ \ a}\tau_\nu^{ \ b}\epsilon_{ab}-
\frac{1}{2}(m_\mu^{ \ a}\tau_{\nu}^{ \ b}+
\tau_\mu^{\ a}m_\nu^{ \ b})\epsilon_{ab}+\frac{1}{4\omega^2}
m_\mu^{ \ a}m_\nu^{ \ b}\epsilon_{ab} \ , \quad
\epsilon_{ab}=-\epsilon_{ba} \ , \quad
\epsilon_{01}=1 \ .
\nonumber \\
\end{eqnarray}
It is important that the term proportional to $\omega^{-2}$ vanishes
in the limit $\omega\rightarrow \infty$ while the divergent contribution cancels the divergence coming from the Nambu-Goto part of the action since
\begin{eqnarray}
&- &\tilde{\tau}_F\omega^2\int d^2\sigma\sqrt{-\det\mathbf{a}}+\frac{\tilde{\tau}_F}{2} \int d^2\sigma
\epsilon^{\alpha\beta}B_{\mu\nu}\partial_\alpha x^\mu
\partial_\beta x^\nu \nonumber \\
&=&
-\tilde{\tau}_F \omega^2\int d^2\sigma \det \tau_\alpha^{ \ a}+
\omega^2 \frac{\tilde{\tau}_F}{2}\int d^2\sigma \epsilon^{\alpha\beta}\epsilon_{ab}\tau_\alpha^{ \ a}\tau_\beta^{ \ b}=0 \ ,
\nonumber \\
\end{eqnarray}
where we introduced the $2\times 2$ matrix $\tau_\alpha^{ \ a}\equiv \tau_\mu^{ \ a}\partial_\alpha x^\mu$ and where we used the fact that $\det \tau_\alpha^{ \ a}=
\frac{1}{2}\epsilon^{\alpha\beta}\epsilon_{ab}\tau_\alpha^{ \ a}\tau_\beta^{ \ b}$, where $\epsilon^{\alpha\beta}=-\epsilon^{\beta\alpha} \ , \epsilon^{01}=1$
is the antisymmetric symbol with upper indices.
In summary, we obtain the action for the non-relativistic string in Newton-Cartan background in the form
\begin{equation}\label{nonrelstringNC}
S=-\frac{\tau_F}{2}\int d^2\sigma \sqrt{-\det\mathbf{a}}\mathbf{a}^{\alpha\beta}
\bar{h}_{\alpha\beta} \ ,
\end{equation}
where $\tilde{\tau}_F=\tau_F$. Note that the action (\ref{nonrelstringNC})
was derived previously in \cite{Andringa:2012uz}
using a slightly different procedure.
Our goal is to find the Hamiltonian formulation of this theory. To do this we
rewrite the Lagrangian density introduced in (\ref{nonrelstringNC}) in the form
\begin{eqnarray}\label{actionextended}
\mathcal{L}&=&\frac{1}{4\lambda^\tau}(\bar{h}_{\tau\tau}-2\lambda^\sigma
\bar{h}_{\tau\sigma}+(\lambda^\sigma)^2\bar{h}_{\sigma\sigma})-\lambda^\tau
\tau_F^2\bar{h}_{\sigma\sigma}+\nonumber \\
&+&B^\tau \left(\lambda^\tau-\frac{\sqrt{-\det\mathbf{a}}}{2\tau_F\mathbf{a}_{\sigma\sigma}}\right)
+B^\sigma \left(\lambda^\sigma-\frac{\mathbf{a}_{\tau\sigma}}{\mathbf{a}_{\sigma\sigma}}\right) \ .
\nonumber \\
\end{eqnarray}
It is easy to see the equivalence of these two Lagrangians, since the
equations of motion for $B^\tau$ and $B^\sigma$
give
\begin{equation}
\lambda^\tau=\frac{\sqrt{-\det\mathbf{a}}}{2\tau_F\mathbf{a}_{\sigma\sigma}} \ , \quad
\lambda^\sigma=\frac{\mathbf{a}_{\tau\sigma}}{\mathbf{a}_{\sigma\sigma}} \ .
\end{equation}
Inserting this result into (\ref{actionextended}) and using the fact that
\begin{eqnarray}
& &\frac{1}{\lambda^\tau}=
-2\tau_F \mathbf{a}^{\tau\tau}\sqrt{-\det\mathbf{a}} \ , \quad
\frac{\lambda^\sigma}{\lambda^\tau}
=2\tau_F \mathbf{a}^{\tau\sigma}
\sqrt{-\det\mathbf{a}} \ ,
\nonumber \\
& & \frac{1}{4\lambda^\tau}
((\lambda^\sigma)^2-4\tau_F^2(\lambda^\tau)^2)=
-\frac{\tau_F}{2}\mathbf{a}^{\sigma\sigma}\sqrt{-\det\mathbf{a}}
\nonumber \\
\end{eqnarray}
we find that (\ref{actionextended}) reduces into (\ref{nonrelstringNC}).
Then from (\ref{actionextended}) we obtain the conjugate momenta
\begin{eqnarray}\label{pmu}
p_\mu&=&\frac{1}{2\lambda^\tau}\bar{h}_{\mu\nu}\partial_\tau x^\nu-
\frac{\lambda^\sigma}{2\lambda^\tau}\bar{h}_{\mu\nu}\partial_\sigma x^\nu
-B^\tau\frac{1}{2\tau_F\mathbf{a}_{\sigma\sigma}}\tau_{\mu\nu}\partial_\alpha x^\nu \mathbf{a}^{\alpha\tau}\sqrt{-\det\mathbf{a}}-\frac{B^\sigma}{\mathbf{a}_{\sigma\sigma}} \tau_{\mu\nu}\partial_\sigma x^\nu \ , \nonumber \\
P^\tau_B&=&\frac{\partial L}{\partial \partial_\tau B^\tau}\approx 0 \ , \quad
P^\sigma_B=\frac{\partial L}{\partial\partial_\tau B^\sigma}\approx 0
\ , \quad P_\lambda^\tau=\frac{\partial L}{\partial \partial_\tau \lambda^\tau}\approx 0 \ , \quad P_\lambda^\sigma=\frac{\partial L}{\partial \partial_\tau \lambda^\sigma}\approx 0
\nonumber \\
\end{eqnarray}
Now we come to the most important problem in our analysis, which is the impossibility of inverting the relation between $p_\mu$ and $\partial_\tau x^\mu$ in order to express $\partial_\tau x^\mu$ using the canonical variables. The reason why we are not able to do it is the presence of the vector field $m_\mu^{ \ a}$, due to which the contraction of the metric $\bar{h}_{\mu\nu}$ with $\tau^\mu$ is non-zero. For that reason we restrict ourselves to the simpler case $m_\mu^{\ a }=0$. Despite this simplification we will see that even in this case the Hamiltonian formulation of the non-relativistic string in Newton-Cartan background is a non-trivial task.
In the case when $m_\mu^{ \ a}=0$ we have
$\bar{h}_{\mu\nu}=h_{\mu\nu}$ and $\bar{h}_{\mu\nu}h^{\nu\rho}
\bar{h}_{\rho\sigma}=h_{\mu\sigma}$, and hence from (\ref{pmu}), using the fact that $h^{\mu\nu}\tau_{\nu\rho}=0$ so that the $B^\tau$ and $B^\sigma$ terms drop out, we obtain
\begin{eqnarray}
\left(p_\mu+\frac{\lambda^\sigma}{2\lambda^\tau}h_{\mu\rho}
\partial_\sigma x^\rho\right) h^{\mu\nu}\left(p_\nu
+\frac{\lambda^\sigma}{2\lambda^\tau}h_{\nu\omega}
\partial_\sigma x^\omega\right)=\frac{1}{4(\lambda^\tau)^2}
\partial_\tau x^\mu h_{\mu\nu}\partial_\tau x^\nu \ .
\end{eqnarray}
On the other hand, let us multiply both sides of expression
(\ref{pmu}) by
$\tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho$; we obtain
\begin{eqnarray}\label{pmumilt}
& &p_\mu \tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho
=-B^\tau\frac{1}{2\tau_F \mathbf{a}_{\sigma\sigma}}\partial_\alpha x^\mu \tau_\mu^{ \ a}\epsilon_{ab}
\tau_\rho^{ \ b}\partial_\sigma x^\rho\mathbf{a}^{\alpha\tau}\sqrt{-\det\mathbf{a}}-B^\sigma\frac{1}{\mathbf{a}_{\sigma\sigma}}
\partial_\sigma x^\mu \tau_\mu^{ \ a}\epsilon_{ab}\tau_\nu^{ \ b}\partial_\sigma x^\nu\nonumber \\
\nonumber \\
&=&-B^\tau\frac{1}{2\tau_F\mathbf{a}_{\sigma\sigma}}\partial_\tau x^\mu \tau_\mu^{ \ a}\epsilon_{ab}\tau_\rho^{ \ b}\partial_\sigma x^\rho
\frac{\mathbf{a}_{\sigma\sigma}}{\det\mathbf{a}}\sqrt{-\det\mathbf{a}}=\frac{1}{2\tau_F}B^\tau\nonumber \\
\end{eqnarray}
using
\begin{equation}
\tau^\mu_{ \ a}\tau_{\mu\nu}=\tau_\nu^{ \ b}\eta_{ba}
\end{equation}
and the fact that
\begin{equation}
\sqrt{-\det \mathbf{a}}=\det \tau_\alpha^{ \ a}=
\tau_\tau^a\tau_\sigma^b\epsilon_{ab} \ ,
\end{equation}
where $\tau_\alpha^{ \ a}=\tau_\mu^{ \ a}\partial_\alpha x^\mu$.
Then the relation (\ref{pmumilt}) implies the following primary constraint
\begin{equation}\label{prim1}
\Gamma^\tau\equiv
2\tau_Fp_\mu \tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho-B^\tau \approx 0 \ .
\end{equation}
On the other hand, multiplying the relation (\ref{pmu}) by $\tau^{\mu\nu}\tau_{\nu\rho}\partial_\sigma x^\rho$, where $\tau^{\mu\nu}=\tau^\mu_{ \ a}\eta^{ab}\tau^\nu_{ \ b}$, we obtain
\begin{eqnarray}
p_\mu \tau^{\mu\nu}\tau_{\nu\rho}\partial_\sigma x^\rho
=-
B^\tau \frac{1}{2\tau_F \mathbf{a}_{\sigma\sigma}}
\mathbf{a}_{\sigma\alpha}
\mathbf{a}^{\alpha\tau}\sqrt{-\det\mathbf{a}}-B^\sigma =
-B^\sigma \
\nonumber \\
\end{eqnarray}
and hence we obtain the second primary constraint
\begin{equation}
\Gamma^\sigma\equiv p_\mu \tau^{\mu\nu}\tau_{\nu\rho}\partial_\sigma x^\rho+B^\sigma\approx 0 \ .
\end{equation}
As a result the extended
Hamiltonian with all primary constraints included has the form
\begin{eqnarray}
H_E&=&\int d\sigma\left(\lambda^\tau (p_\mu h^{\mu\nu}p_\nu+\tau_F^2 h_{\sigma\sigma})
-B^\tau \lambda^\tau-B^\sigma \lambda^\sigma
+\lambda^\sigma p_\mu h^{\mu\nu}h_{\nu\rho}\partial_\sigma x^\rho+\right.
\nonumber \\
&+&\left. U_\tau \Gamma^\tau+U_\sigma \Gamma^\sigma +v
_\tau^B P^\tau_B+v_\sigma^B P_B^\sigma+v^\lambda_\tau P_\lambda^\tau+
v_\sigma^\lambda P_\lambda^\sigma\right) \ .
\nonumber\\
\end{eqnarray}
Let us now analyze the requirement of the preservation of the primary constraints
$P_\lambda^\tau\approx 0 , P_\lambda^\sigma\approx 0$. In case of $P_\lambda^\tau$ we obtain
\begin{eqnarray}
\partial_\tau P_\lambda^\tau&=&\pb{P_\lambda^\tau,H_E}=
-p_\mu h^{\mu\nu}p_\nu-\tau_F^2 h_{\sigma\sigma}+B^\tau=
\nonumber \\
&=&
-p_\mu h^{\mu\nu}p_\nu-\tau_F^2h_{\sigma\sigma}+2\tau_F p_\mu
\tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho-
2\tau_F \Gamma^\tau \approx \nonumber \\
&\approx &-p_\mu h^{\mu\nu}p_\nu-\tau_F^2h_{\sigma\sigma}+2\tau_F p_\mu
\tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho\equiv -\mathcal{H}_\tau
\approx 0 \nonumber \\
\end{eqnarray}
and also
\begin{equation}
\partial_\tau P_\lambda^\sigma=\pb{P_\lambda^\sigma,H}=
-p_\mu\partial_\sigma x^\mu+p_\mu \tau^{\mu\nu}\tau_{\nu\rho}\partial_\sigma x^\rho
+B^\sigma=-p_\mu\partial_\sigma x^\mu+\Gamma^\sigma\approx -p_\mu\partial_\sigma x^\mu
\equiv -\mathcal{H}_\sigma\approx 0 \ .
\end{equation}
We see that the requirement of the preservation of the primary constraints
$P_\lambda^\tau\approx 0 \ , P_\lambda^\sigma\approx 0$ implies the existence of two secondary constraints:
\begin{eqnarray}
\mathcal{H}_\sigma=p_\mu\partial_\sigma x^\mu\approx 0 \ , \quad
\mathcal{H}_\tau=p_\mu h^{\mu\nu}p_\nu+\tau_F^2h_{\sigma\sigma}-2\tau_F p_\mu
\tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho\approx 0 \ .
\nonumber \\
\end{eqnarray}
Further, since $\pb{P_B^\tau(\sigma),\Gamma^\tau(\sigma')}=\delta(\sigma-\sigma'),
\pb{P_B^\sigma(\sigma),\Gamma^\sigma(\sigma')}=-\delta(\sigma-\sigma')$ we see that these constraints are second class constraints that can be explicitly solved for $B^\tau$ and $B^\sigma$. Then these constraints vanish strongly and hence the
Hamiltonian is a linear combination of the constraints
\begin{equation}
\mathcal{H}_E=\lambda^\tau \mathcal{H}_\tau+\lambda^\sigma \mathcal{H}_\sigma+v_\tau^\lambda P_\lambda^\tau+
v_\sigma^\lambda P_\lambda^\sigma \ .
\end{equation}
As the next step we should check that $\mathcal{H}_\tau\approx 0 \ ,\mathcal{H}_\sigma\approx 0$
are the first class constraints. To do this we introduce the smeared
forms of these constraints
\begin{eqnarray}
\mathbf{T}_\tau(N)=\int d\sigma N \mathcal{H}_\tau \ ,
\mathbf{T}_\sigma(N^\sigma)=\int d\sigma N^\sigma \mathcal{H}_\sigma \ .
\nonumber \\
\end{eqnarray}
First of all we easily find
\begin{eqnarray}
\pb{\mathbf{T}_\sigma(N^\sigma),\mathbf{T}_\sigma(M^\sigma)}=
\mathbf{T}_\sigma(N^\sigma\partial_\sigma M^\sigma-M^\sigma
\partial_\sigma N^\sigma) \ . \nonumber \\
\end{eqnarray}
In the case of the Hamiltonian constraint the situation is more involved, since an explicit calculation gives
\begin{eqnarray}
& & \pb{\mathbf{T}_\tau(N),\mathbf{T}_\tau(M)}=\int d\sigma
(N\partial_\sigma M-M\partial_\sigma N)4\tau_F^2p_\mu h^{
\mu\nu}h_{\nu\rho}\partial_\sigma x^\rho
+
\nonumber \\
&+&2\int d\sigma \tau_F (N\partial_\sigma M-M\partial_\sigma N)
p_\mu V^\mu_{ \ \nu}h^{\nu\omega}p_\omega+
\nonumber \\
&+&\int d\sigma (N\partial_\sigma M-M\partial_\sigma N)
p_\rho V^\rho_{ \ \sigma}V^\sigma_{\ \omega}\partial_\sigma x^\omega+
\nonumber \\
&+&4\tau_F^2\int d\sigma (N\partial_\sigma M-M\partial_\sigma N)V^\mu_{ \ \nu}\partial_\sigma x^\nu h_{\mu\rho}\partial_\sigma x^\rho\ ,
\nonumber \\
\end{eqnarray}
where we defined
\begin{equation}
V^\mu_{ \ \nu}=-2\tau_F\tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}
\tau_\nu^{\ c} \ .
\end{equation}
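In evaluating the products of $V^\mu_{ \ \nu}$ below it is useful to note the identity
\begin{eqnarray}
\eta^{ab}\epsilon_{bc}\eta^{cd}\epsilon_{de}=\delta^a_{ \ e} \ ,
\nonumber \\
\end{eqnarray}
which one checks by direct evaluation for $\eta_{ab}=\mathrm{diag}(-1,1)$ and $\epsilon_{01}=1$, and which, together with $\tau_\mu^{ \ c}\tau^\mu_{ \ d}=\delta^c_{ \ d}$, gives $V^\mu_{ \ \rho}V^\rho_{ \ \nu}=4\tau_F^2\tau^\mu_{ \ a}\tau_\nu^{ \ a}$.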
To proceed further we calculate
\begin{eqnarray}
& &4\tau_F^2 p_\mu h^{\mu\nu}h_{\nu\rho}\partial_\sigma x^\rho=
4\tau_F^2p_\mu \partial_\sigma x^\mu-4\tau_F^2 p_\mu\tau^{\mu\rho}\tau_{\rho\nu}
\partial_\sigma x^\nu \ , \quad
\nonumber \\
& &p_\rho V^\rho_{ \ \mu}V^\mu_{ \ \nu}\partial_\sigma x^\nu=4\tau_F^2 p_\mu \tau^\mu_{ \ a}\tau^a_{ \ \nu}\partial_\sigma x^\nu=4\tau_F^2 p_\mu
\tau^{\mu\rho}\tau_{\rho\nu}\partial_\sigma x^\nu \ , \nonumber \\
& & V^\mu_{ \ \nu}\partial_\sigma x^\nu h_{\mu\rho}\partial_\sigma x^\rho=0 \ ,
\quad
p_\mu V^\mu_{ \ \nu}h^{\nu\omega}p_\omega=0 \ .
\nonumber \\
\end{eqnarray}
Collecting all these results together we finally obtain
\begin{equation}
\pb{\mathbf{T}_\tau(N),\mathbf{T}_\tau(M)}=\mathbf{T}_\sigma ((N\partial_\sigma M-M\partial_\sigma N)
4\tau_F^2 ) \ .
\end{equation}
Finally, we also calculate the Poisson bracket between $\mathbf{T}_\sigma(N^\sigma)$ and $\mathbf{T}_\tau(M)$ and we find
\begin{equation}
\pb{\mathbf{T}_\sigma(N^\sigma),\mathbf{T}_\tau(M)}=
\mathbf{T}_\tau(\partial_\sigma MN^\sigma-M\partial_\sigma N^\sigma) \ .
\end{equation}
These results show that $\mathcal{H}_\tau\approx 0,\mathcal{H}_\sigma \approx 0 $ are first class constraints and that the non-relativistic string is a well defined system from the Hamiltonian point of view.
Finally, we also say a few words about the gauge fixed theory. We showed above that the Hamiltonian and spatial diffeomorphism constraints are first class. The standard way to deal with such a theory is to gauge fix them. For example, we can impose the static gauge by introducing two gauge fixing functions
\begin{equation}
\mathcal{G}^0=x^0-\tau \approx 0 \ , \quad \mathcal{G}^1=x^1-\sigma \approx 0 \ .
\end{equation}
It is easy to see that
$\mathcal{G}^a\approx 0$ are second class constraints together with $\mathcal{H}_\tau\approx 0 , \mathcal{H}_\sigma\approx 0$. Since these constraints then vanish strongly, we can identify the
Hamiltonian density on the reduced phase space from the action principle
\begin{equation}
S=\int d\tau d\sigma (p_\mu \partial_\tau x^\mu-H)=
\int d\tau d\sigma (p_i\partial_\tau x^i+p_0)
\end{equation}
so that it is natural to identify $-p_0$ as the Hamiltonian on the reduced phase space,
$H_{red}=-p_0$. The explicit form of this Hamiltonian follows from the Hamiltonian constraint, which can be solved for $p_0$. Note also that $\mathcal{H}_\sigma$ can be solved for $p_1$ as $p_1=-p_I\partial_\sigma x^I$, $I=2,\dots,d-1$.
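To make the last statement explicit, note that in the static gauge $\partial_\sigma x^0=0$ and $\partial_\sigma x^1=1$, so that the spatial diffeomorphism constraint (which, as in the $p$-brane case below, has the form $\mathcal{H}_\sigma=p_\mu\partial_\sigma x^\mu$) reduces to
\begin{equation}
\mathcal{H}_\sigma=p_0\partial_\sigma x^0+p_1\partial_\sigma x^1
+p_I\partial_\sigma x^I=p_1+p_I\partial_\sigma x^I\approx 0 \ ,
\end{equation}
which is solved by $p_1=-p_I\partial_\sigma x^I$.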
\section{Generalization: Non-relativistic p-brane}\label{third}
In this section we generalize the analysis presented
previously to the case of the non-relativistic $p$-brane.
As the first step we determine an action for the non-relativistic p-brane in a Newton-Cartan background in the same way as in the string case. Explicitly, we start with the relativistic p-brane action coupled to a $(p+1)$-form $C^{(p+1)}$,
which has the form
\begin{equation}\label{pbraneaction}
S=-\tilde{\tau}_p\int d^{p+1}\xi\sqrt{-\det \mathbf{A}_{\alpha\beta}}
+\tilde{\tau}_p\int C^{(p+1)} \ , \quad \mathbf{A}_{\alpha\beta}=G_{\mu\nu}\partial_\alpha x^\mu
\partial_\beta x^\nu \ ,
\end{equation}
where $\xi^\alpha \ , \alpha=0,\dots,p$ label world-volume of p-brane
and where
\begin{equation}
C^{(p+1)}=\frac{1}{(p+1)!}C_{\mu_1\dots \mu_{p+1}}dx^{\mu_1}\wedge \dots \wedge dx^{\mu_{p+1}}=
\frac{1}{(p+1)!}\epsilon^{\alpha_1\dots \alpha_{p+1}}
C_{\mu_1\dots \mu_{p+1}}\partial_{\alpha_1}x^{\mu_1}\dots \partial_{\alpha_{p+1}}x^{\mu_{p+1}} \ ,
\end{equation}
where again $\epsilon^{\alpha_1\dots \alpha_{p+1}}$ is the totally antisymmetric
symbol.
With the help of the action (\ref{pbraneaction}) we can proceed to the definition of the non-relativistic p-brane in a Newton-Cartan background. As we have seen in the case of the non-relativistic string,
the requirement that the action be finite
selects two longitudinal directions. We can then deduce that in the case of the non-relativistic p-brane we should select $p+1$ longitudinal directions. Explicitly, we presume that for the probe p-brane the background metric has the form
\begin{eqnarray}
G_{\mu\nu}&=&E_\mu^{ \ a}E_\nu^{ \ b}\eta_{ab}+E_\mu^{ \ a'}E_\nu^{ \ b'}\delta_{a'b'}
=\nonumber \\
&=&\omega^2 \tau_{\mu\nu}+h_{\mu\nu}+\frac{1}{2}\tau_\mu^{ \ a}m_\nu^{ \ b}\eta_{ab}+
\frac{1}{2}m_\mu^{ \ a}\tau_\nu^{ \ b}\eta_{ab}+\frac{1}{4\omega^2}m_\mu^{ \ a}m_\nu^{ \ b}
\eta_{ab} \ , \nonumber \\
\end{eqnarray}
where now $a,b=0,\dots,p$ and $a',b'=p+1,\dots,d-1$. Further, $\tau_{\mu\nu}$
and $h_{\mu\nu}$ are defined as
\begin{equation}
\tau_{\mu\nu}=\tau_\mu^{ \ a}\tau_\nu^{ \ b}
\eta_{ab} \ , \quad \eta_{ab}=\mathrm{diag}(-1,1,\dots,1) \ , \quad h_{\mu\nu}=
e_\mu^{ \ a'}e_\nu^{ \ b'}\delta_{a'b'} \ .
\end{equation}
We also introduce their inverses with respect to their longitudinal and transverse dimensions
\begin{eqnarray}
e_\mu^{ \ a'}e^\mu_{ \ b'}=\delta^{a'}_{b'} \ , \quad
e_\mu^{ \ a'}e^\nu_{ \ a'}=\delta_\mu^\nu-\tau_\mu^{ \ a}
\tau^\nu_{ \ a} \ , \quad \tau^\mu_{ \ a}\tau_\mu^{ \ b}=\delta_a^b \ , \quad
\tau^\mu_{ \ a}e_\mu^{ \ a'}=0 \ , \quad
\tau_\mu^{ \ a}e^\mu_{ \ a'}=0 \ . \nonumber \\
\end{eqnarray}
In the case of the $(p+1)$-form $C^{(p+1)}$ we presume, in analogy with the
string case, that it has the form
\begin{eqnarray}
C_{\mu_1\dots \mu_{p+1}}=\left(\omega\tau_{\mu_1}^{ \ a_1}-\frac{1}{2\omega}m_{\mu_1}^{ \ a_1}\right)\times \dots\times \left(\omega \tau_{\mu_{p+1}}^{ \ a_{p+1}}-
\frac{1}{2\omega}m_{\mu_{p+1}}^{ \ a_{p+1}}\right)\epsilon_{a_1\dots a_{p+1}} \ ,
\end{eqnarray}
where $\epsilon_{a_1\dots a_{p+1}}$ is the totally antisymmetric symbol.
Now we are ready to define the non-relativistic limit of the p-brane action. We start with the kinetic term and obtain
\begin{eqnarray}
S_{DBI}=-\tilde{\tau}_p\omega^{p+1}\int d^{p+1}\xi\sqrt{-\det\mathbf{a}}-
\frac{\tilde{\tau}_p}{2}\omega^{p-1}\int d^{p+1}\xi\sqrt{-\det\mathbf{a}}\tilde{\mathbf{a}}^{\alpha\beta}
\bar{h}_{\alpha\beta} \ ,
\nonumber \\
\end{eqnarray}
where $\tilde{\mathbf{a}}^{\alpha\beta}$ is the inverse of $\mathbf{a}_{\alpha\beta}$. In fact, it is reasonable to write $\mathbf{a}_{\alpha\beta}=\tau_{\mu\nu}\partial_\alpha x^\mu\partial_\beta x^\nu=\tau_\alpha^{ \ a}\eta_{ab}\tau_\beta^{ \ b}$, which is non-singular since
$\tau_\alpha^{ \ a}$ and $\eta_{ab}$ are non-singular $(p+1)\times (p+1)$ matrices.
From the requirement that the kinetic term be finite we have to perform the following rescaling
\begin{equation}
\tilde{\tau}_p \omega^{p-1}=\tau_p \ .
\end{equation}
Further, the divergent term can be written as
\begin{equation}
\tilde{\tau}_p \omega^{p+1}\int d^{p+1}\xi \sqrt{-\det\mathbf{a}}=
\tau_p \omega^2\int d^{p+1}\xi\det \tau_\alpha^{ \ b} \ , \quad \tau_\alpha^{ \ a}=
\tau_\mu^{ \ a}\partial_\alpha x^\mu \ .
\end{equation}
Let us now concentrate on the second term in the action
(\ref{pbraneaction}). If we express $\tilde{\tau}_p$ using $\tau_p$ as
$\tilde{\tau}_p=\frac{1}{\omega^{p-1}}\tau_p$ we find that the only non-zero
contribution comes from the product of $\tau_\mu^{ \ a}$'s, while the remaining terms vanish in the limit $\omega\rightarrow \infty$. Then we obtain
\begin{eqnarray}
S_{WZ}&=&\frac{1}{\omega^{p-1}}\tau_p\int d^{p+1}\xi \frac{1}{(p+1)!}\epsilon^{\alpha_1\dots \alpha_{p+1}}
\omega\tau_{\mu_1}^{ \ a_1}\partial_{\alpha_1}x^{\mu_1}\dots \omega\tau_{\mu_{p+1}}
^{ \ a_{p+1}}\partial_{\alpha_{p+1}}x^{\mu_{p+1}}\epsilon_{a_1\dots a_{p+1}}
\nonumber \\
&=&\omega^2\tau_p\int d^{p+1}\xi \frac{1}{(p+1)!}
\epsilon^{\alpha_1\dots \alpha_{p+1}}\tau_{\alpha_1}^{ \ a_1}
\dots \tau_{\alpha_{p+1}}^{ \ a_{p+1}}\epsilon_{a_1\dots a_{p+1}}=
\omega^2\tau_p \int d^{p+1}\xi \det\tau_\alpha^{ \ a}
\nonumber \\
\end{eqnarray}
and we again see that the two divergent contributions cancel. As a result we obtain a well defined action for the non-relativistic p-brane in the Newton-Cartan background
\begin{equation}\label{nonrelpbraneaction}
S=-\frac{\tau_p}{2}\int d^{p+1}\xi
\sqrt{-\det\mathbf{a}}\tilde{\mathbf{a}}^{\alpha\beta}\bar{h}_{\mu\nu}\partial_\alpha x^\mu
\partial_\beta x^\nu \ .
\end{equation}
Now we proceed to the Hamiltonian formulation of this theory. In analogy with the string case we write the action as
\begin{eqnarray}\label{pbraneactionextended}
S&=&\int d^{p+1}
\xi \left(\frac{1}{4\lambda^0}(\partial_0 x^\mu-\lambda^i\partial_i x^\mu)
h_{\mu\nu}(\partial_0 x^\nu-\lambda^j\partial_j x^\nu)-\lambda^0 \tau^2_p\det\mathbf{a}_{ij}\mathbf{a}^{ij}h_{ij} \right.
\nonumber \\
& &\left.+B^0\left(\lambda^0-\frac{\sqrt{-\det \mathbf{a}}}{2\tau_p
\det \mathbf{a}_{ij}}\right)+B^i(\lambda_i-\mathbf{a}_{i0})\right) \ ,
\nonumber \\
\end{eqnarray}
where
\begin{equation}
\lambda^i=\mathbf{a}^{ij}\mathbf{a}_{j0} \ , \quad
\mathbf{a}_{ij}\mathbf{a}^{jk}=\delta_i^ k \ .
\end{equation}
In order to see
the equivalence between the actions (\ref{pbraneactionextended}) and
(\ref{nonrelpbraneaction}) we note that the matrix $\tilde{\mathbf{a}}^{\alpha\beta}$ inverse to the matrix $\mathbf{a}_{\alpha\beta}$ has the form
\begin{eqnarray}\label{inversematrix}
\tilde{\mathbf{a}}^{00}&=&\frac{\det\mathbf{a}_{ij}}{\det\mathbf{a}} \ , \quad
\tilde{\mathbf{a}}^{0i}=-\mathbf{a}_{0k}\mathbf{a}^{kj}\frac{\det\mathbf{a}_{ij}}{\det\mathbf{a}} \ , \nonumber \\
\tilde{\mathbf{a}}^{i0}&=&-\mathbf{a}^{ik}\mathbf{a}_{k0}\frac{\det\mathbf{a}_{ij}}{\det\mathbf{a}} \ , \quad
\tilde{\mathbf{a}}^{ij}=\mathbf{a}^{ij}+\frac{\det\mathbf{a}_{ij}}{\det\mathbf{a}}
\mathbf{a}^{ik}\mathbf{a}_{k0}\mathbf{a}_{0l}\mathbf{a}^{lj} \ , \nonumber \\
\end{eqnarray}
where $\mathbf{a}^{ij}\mathbf{a}_{jk}=\delta^i_k$.
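As a quick consistency check, the block-inverse formula (\ref{inversematrix}) can be verified numerically. The following self-contained sketch (our own addition, with the illustrative choice $p=2$ and a random symmetric matrix) confirms that it agrees with the direct inverse:
\begin{verbatim}
# numerical sanity check of the block-inverse formula (p = 2 assumed)
import numpy as np

p = 2
rng = np.random.default_rng(0)
A = rng.normal(size=(p + 1, p + 1))
a = A + A.T                      # generic symmetric a_{alpha beta}
a_ij = a[1:, 1:]                 # spatial block a_{ij}
a_inv_ij = np.linalg.inv(a_ij)   # a^{ij}
det_a, det_aij = np.linalg.det(a), np.linalg.det(a_ij)

tilde = np.empty_like(a)
tilde[0, 0] = det_aij / det_a
tilde[0, 1:] = -a[0, 1:] @ a_inv_ij * det_aij / det_a
tilde[1:, 0] = -a_inv_ij @ a[1:, 0] * det_aij / det_a
tilde[1:, 1:] = a_inv_ij + np.outer(a_inv_ij @ a[1:, 0],
                                    a[0, 1:] @ a_inv_ij) * det_aij / det_a
print(np.allclose(tilde, np.linalg.inv(a)))   # True
\end{verbatim}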
Then the equations of motion for
$B^0$ and $B^i$ imply
\begin{equation}
\lambda^0=\frac{\sqrt{-\det\mathbf{a}}}{2\tau_p \det\mathbf{a}_{ij}} \ , \quad \lambda_i=\mathbf{a}_{0i} \ .
\end{equation}
Inserting these results into (\ref{pbraneactionextended}) we find that it is equal to the action (\ref{nonrelpbraneaction}).
Let us now return to the action (\ref{pbraneactionextended})
and determine the conjugate momenta from it
\begin{eqnarray}\label{pmubrane}
& &p_\mu=\frac{\partial \mathcal{L}}{\partial (\partial_0 x^\mu)}\nonumber \\
&=&\frac{1}{2\lambda^0}\bar{h}_{\mu\nu}\partial_0 x^\nu-\frac{\lambda^i}{2\lambda^0}
\bar{h}_{\mu\nu}\partial_i x^\nu-B^0\frac{1}{2\tau_p\det\mathbf{a}_{ij}}
\tau_{\mu\nu}\partial_\alpha x^\nu \tilde{\mathbf{a}}^{\alpha 0}\sqrt{-\det\mathbf{a}}-
B^i\tau_{\mu\nu}\partial_i x^\nu \ , \nonumber \\
\nonumber \\
& &P_0=\frac{\partial \mathcal{L}}{\partial(\partial_0 \lambda^0)}\approx 0 \ , \quad
P_i=\frac{\partial \mathcal{L}}{\partial(\partial_0 \lambda^i)}\approx 0 \ , \quad
P^B_0=\frac{\partial \mathcal{L}}{\partial(\partial_0 B^0)}\approx 0 \ , \quad
P^B_i=\frac{\partial \mathcal{L}}{\partial(\partial_0 B^i)}\approx 0 \ .
\nonumber \\
\end{eqnarray}
For the same reason as in the case of the fundamental string we have to restrict
to the case $m_\mu^{ \ a}=0$, so that $\bar{h}_{\mu\nu}=h_{\mu\nu}$. Then, multiplying
(\ref{pmubrane}) with $h^{\mu\nu}$, we obtain
\begin{eqnarray}
\left(p_\mu+\frac{\lambda^i}{2\lambda^0}h_{\mu\rho}\partial_i x^\rho\right)
h^{\mu\nu}\left(p_\nu+\frac{\lambda^j}{2\lambda^0}h_{\nu\sigma}\partial_jx^\sigma\right)=
\frac{1}{4(\lambda^0)^2}\partial_0 x^\mu h_{\mu\nu}\partial_0 x^\nu \ . \nonumber \\
\end{eqnarray}
On the other hand let us multiply both sides of (\ref{pmubrane}) with $\tau^{\mu\nu}\tau_{\nu\rho}
\partial_i x^\rho$ and we obtain
\begin{eqnarray}
p_\mu \tau^{\mu\nu}\tau_{\nu\rho}\partial_i x^\rho=
-B^\tau \frac{1}{2\tau_p\det\mathbf{a}_{ij}}
\partial_i x^\mu \tau_{\mu\nu}\partial_\alpha x^\nu \tilde{\mathbf{a}}^{\alpha 0}
\sqrt{-\det\mathbf{a}}-\partial_i x^\mu \tau_{\mu\nu}\partial_j x^\nu B^j=
-\mathbf{a}_{ij}B^j
\nonumber \\
\end{eqnarray}
which implies $p$ primary constraints
\begin{eqnarray}\label{primpcon}
\Sigma^i\equiv p_\mu \tau^{\mu\nu}\tau_{\nu\sigma}\partial_j x^\sigma \mathbf{a}^{ji}+B^i \approx 0 \ .
\nonumber \\
\end{eqnarray}
Next, let us multiply (\ref{pmubrane}) with the following expression
\begin{equation}
\frac{1}{p!}\tau^\mu_{ \ a}\eta^{aa_1}\epsilon_{a_1\dots a_{p+1}}
\tau^{\ a_2}_{\nu_2}\dots \tau^{\ a_{p+1}}_{\nu_{p+1}}
\epsilon^{j_2\dots j_{p+1}}\partial_{j_2}x^{\nu_2}\dots \partial_{j_{p+1}}
x^{\nu_{p+1}} \ .
\end{equation}
Then using the fact that
\begin{eqnarray}
& &\partial_i x^\nu\tau_{\nu\mu}\tau^\mu_{ \ a}\eta^{aa_1}
\epsilon_{a_1\dots a_{p+1}}\tau_{\nu_2}^{ \ a_2}
\dots \tau_{\nu_{p+1}}^{\ a_{p+1}}\epsilon^{j_2\dots j_{p+1}}
\partial_{j_2}x^{\nu_2}\dots \partial_{j_{p+1}}x^{\nu_{p+1}}=0 \ ,
\nonumber \\
& &\frac{1}{p!}\tilde{\mathbf{a}}^{0\alpha}\partial_\alpha x^\nu \tau_{\nu\mu}
\tau^\mu_{ \ a}\eta^{aa_1}\epsilon_{a_1\dots a_{p+1}}
\tau_{\nu_2}^{ \ a_2}\dots \tau_{\nu_{p+1}}^{ \ a_{p+1}}
\epsilon^{j_2\dots j_{p+1}}\partial_{j_2}x^{\nu_2}\dots \partial_{j_{p+1}}
x^{\nu_{p+1}}\frac{\sqrt{-\det\mathbf{a}}}{\det\mathbf{a}_{ij}}=\nonumber \\
& &=-\frac{1}{(p+1)!}\epsilon_{a_1\dots a_{p+1}}
\epsilon^{\alpha_1\dots \alpha_{p+1}}\tau_{\nu_1}^{ \ a_1}\partial_{\alpha_1}x^{\nu_1}
\dots \tau_{\nu_{p+1}}^{ \ a_{p+1}}\partial_{\alpha_{p+1}}x^{\nu_{p+1}}
\frac{1}{\det \tau_\alpha^{ \ a}}
=-1 \nonumber \\
\end{eqnarray}
we obtain another primary constraint
\begin{equation}
\Sigma^0 \equiv
2\tau_p p_\mu \frac{1}{p!}\tau^\mu_{ \ a}\eta^{aa_1}\epsilon_{a_1\dots a_{p+1}}
\tau^{\ a_2}_{\nu_2}\dots \tau^{\ a_{p+1}}_{\nu_{p+1}}
\epsilon^{j_2\dots j_{p+1}}\partial_{j_2}x^{\nu_2}\dots \partial_{j_{p+1}}
x^{\nu_{p+1}}-B^0 \approx 0 \ .
\end{equation}
Using all these results we determine the extended Hamiltonian, with all primary constraints included, in the form
\begin{eqnarray}\label{Hamextendedbrane}
H_E=\int d^p\xi (\lambda^0 p_\mu h^{\mu\nu}p_\nu+\lambda^i p_\mu h^{\mu\nu}h_{\nu\sigma}
\partial_i x^\sigma+\lambda^0 \tau_p^2\det\mathbf{a}_{ij}\mathbf{a}^{ij}h_{ij}-
\nonumber \\
-B^0\lambda^0-B^i\lambda_i+v^0P_0+v^iP_i+v^0_BP_0^B+v_i^B P^i_B
+\Psi_0\Sigma^0+\Psi_i\Sigma^i) \ .
\nonumber \\
\end{eqnarray}
Since $\pb{P_0^B(\xi),\Sigma^0(\xi')}=\delta(\xi-\xi')$ and
$\pb{P^i_B(\xi),\Sigma^j(\xi')}=-\delta^{ij}\delta(\xi-\xi')$, we see that
$P_0^B,P^i_B$ together with $\Sigma^0,\Sigma^i$ form $2(p+1)$ second class constraints. Solving these constraints for $B^0$ and $B^i$, we obtain the Hamiltonian
in the form
\begin{eqnarray}
H_B
=\int d^p\xi (\lambda^0\mathcal{H}_0+\lambda^i\mathcal{H}_i+v^0P_0+v^i P_i) \nonumber \\
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{H}_0&=&p_\mu h^{\mu\nu}p_\nu
+\tau_p^2\det\mathbf{a}_{ij}\mathbf{a}^{ij}h_{ij} -\nonumber \\
&-&2\tau_p p_\mu \frac{1}{p!}\tau^\mu_{ \ a}\eta^{aa_1}\epsilon_{a_1\dots a_{p+1}}
\tau^{\ a_2}_{\nu_2}\dots \tau^{\ a_{p+1}}_{\nu_{p+1}}
\epsilon^{j_2\dots j_{p+1}}\partial_{j_2}x^{\nu_2}\dots \partial_{j_{p+1}}
x^{\nu_{p+1}}\approx 0 \ , \nonumber \\
\mathcal{H}_i&=&p_\mu\partial_ix^\mu \approx 0 \ .
\nonumber \\
\end{eqnarray}
Then the requirement of the preservation of the constraints $P_0\approx 0$, $P_i\approx 0$ implies $p+1$ secondary constraints
\begin{equation}
\mathcal{H}_0\approx 0 \ , \quad \mathcal{H}_i\approx 0 \ .
\end{equation}
Now we have to check that these constraints are first class. We
introduce their smeared forms
\begin{equation}
\mathbf{T}_T(N)=\int d^p\xi N\mathcal{H}_0 \ , \quad
\mathbf{T}_S(N^i)=\int d^p\xi N^i\mathcal{H}_i \
\end{equation}
and calculate corresponding Poisson brackets. First of all we have
\begin{eqnarray}
\pb{\mathbf{T}_S(N^i),\mathbf{T}_S(M^j)}=\mathbf{T}_S(N^j\partial_j M^i-M^j\partial_j N^i) \ .
\nonumber \\
\end{eqnarray}
In the calculation of the Poisson bracket between $\mathbf{T}_S(N^i)$ and
$\mathbf{T}_T(M)$ we have
to be more careful. First of all we have
\begin{equation}
\pb{\mathbf{T}_S(N^ i),\tau_i^{ \ a}}=-N^k\partial_k \tau_i^{ \ a}-
\partial_i N^j \tau_j^{ \ a} \ , \quad \tau_i^{ \ a}\equiv \partial_i x^\mu
\tau_\mu^{ \ a} \ .
\end{equation}
Then we obtain
\begin{eqnarray}
& &\pb{\mathbf{T}_S(N^i),\mathbf{a}_{ij}}=-N^k\partial_k \mathbf{a}_{ij}-\partial_i N^k\mathbf{a}_{kj}-
\mathbf{a}_{ik}\partial_j N^k \ , \nonumber \\
& &\pb{\mathbf{T}_S(N^i),\mathbf{a}^{ij}}=-N^k\partial_k \mathbf{a}^{ij}+\partial_k N^i\mathbf{a}^{kj}+
\mathbf{a}^{ik}\partial_k N^j \ , \nonumber \\
& &\pb{\mathbf{T}_S(N^i),\det\mathbf{a}_{ij}}=-N^k\partial_k (\det\mathbf{a}_{ij})-
2\partial_i N^i\det\mathbf{a}_{ij} \ . \nonumber \\
\end{eqnarray}
Using also the fact that
\begin{eqnarray}
& &\pb{\mathbf{T}_S(N^i),\partial_i x^\mu}=-N^k\partial_k(\partial_i x^\mu)-\partial_i
N^k\partial_k x^\mu \ , \nonumber \\
& &\pb{\mathbf{T}_S(N^i),h_{ij}}=-N^k\partial_k h_{ij}-\partial_i N^k h_{kj}
-h_{ik}\partial_j N^k \ \nonumber \\
\end{eqnarray}
we finally obtain
\begin{equation}
\pb{\mathbf{T}_S(N^i),\det\mathbf{a}_{ij}\mathbf{a}^{kl}h_{kl}}=-N^k\partial_k (\det\mathbf{a}_{ij}
\mathbf{a}^{kl}h_{kl})-2\partial_i N^i\det\mathbf{a}_{ij}\mathbf{a}^{kl}h_{kl} \ .
\end{equation}
Let us introduce following vector
\begin{equation}
V^\mu=-2\tau_p \frac{1}{p!}\tau^\mu_{ \ a}\eta^{aa_1}\epsilon_{a_1\dots a_{p+1}}
\tau^{\ a_2}_{\nu_2}\dots \tau^{\ a_{p+1}}_{\nu_{p+1}}
\epsilon^{j_2\dots j_{p+1}}\partial_{j_2}x^{\nu_2}\dots \partial_{j_{p+1}}
x^{\nu_{p+1}} \ .
\end{equation}
Then after some algebra we obtain
\begin{equation}
\pb{\mathbf{T}_S(N^i),V^\mu}=-N^k\partial_k V^\mu-2\partial_k N^k V^\mu \ .
\end{equation}
Collecting all these results together we finally find
\begin{equation}
\pb{\mathbf{T}_S(N^i),\mathbf{T}_T(M)}=\mathbf{T}_T(N^i\partial_iM-\partial_i N^i M) \ .
\end{equation}
Finally we calculate the Poisson bracket of the smeared forms of the Hamiltonian
constraints
and obtain
\begin{eqnarray}\label{pbTTpbrane}
& &\pb{\mathbf{T}_T(N),\mathbf{T}_T(M)}=\int d^p\xi (N\partial_iM-M\partial_iN)4\tau_p^2
\det\mathbf{a}_{ij}\mathbf{a}^{ij}p_\mu h^{\mu\nu}h_{\nu\sigma}\partial_j x^\sigma+
\nonumber \\
&+&2\tau_p\int d^p\xi
(N\partial_i M-M\partial_iN)\frac{1}{(p-1)!}p_\nu\tau^\nu_{ \ a}\eta^{aa_1}
\epsilon_{a_1\dots a_{p+1}}\tau_{\mu}^{ \ a_2}\tau_{\nu_3}^{ \ a_3}
\dots \tau_{\nu_{p+1}}^{ \ a_{p+1}}\times \nonumber \\
&\times &
\epsilon^{i j_3\dots j_{p+1}}\partial_{j_3}x^{\nu_3}\dots \partial_{j_{p+1}}
x^{\nu_{p+1}}V^\mu \ . \nonumber \\
\end{eqnarray}
Then after some lengthy calculations we find
that the last expression is equal to
\begin{equation}
4\tau_p^2\int d^p\xi (N\partial_i M-M\partial_i N)\mathbf{a}^{ij}p_\mu \tau^{\mu\nu}
\tau_{\nu\sigma}\partial_j x^\sigma \det \mathbf{a}_{ij} \ .
\end{equation}
Inserting this result into (\ref{pbTTpbrane}) we obtain the final result
\begin{equation}
\pb{\mathbf{T}_T(N),\mathbf{T}_T(M)}=\mathbf{T}_S((N\partial_iM-M\partial_iN)4\tau_p^2\mathbf{a}^{ij}
\det\mathbf{a}_{ij}) \ .
\end{equation}
These results show that $\mathcal{H}_0$ and $\mathcal{H}_i$ are first class constraints,
which reflects the diffeomorphism invariance of the non-relativistic p-brane.
\section{Conclusion}\label{fourth}
In this paper we formulated non-relativistic actions for the string and p-brane in a Newton-Cartan background. We then found their Hamiltonian formulations and determined the structure of the constraints in the special case where the gauge field $m_\mu^{ \ a}$ is zero. We restricted ourselves to this case since we were not able to express the time derivatives of $x^\mu$, or their combinations, as functions of the canonical variables when $m_\mu^{ \ a}\neq 0$. Certainly this more general case deserves further study.
One possibility is to address this problem from a different point of view, where we
start with the Hamiltonian formulation of the string in a general background, then
perform a limiting procedure on the background metric and NSNS two-form field
and derive the corresponding Hamiltonian. This problem is currently under study and we
hope to report on this analysis in the near future.
\acknowledgments{This work was
supported by the Grant Agency of the Czech Republic under the grant
P201/12/G028. }
\section{INTRODUCTION}
The kiloHertz quasi-periodic oscillations (kHz QPOs) have been
measured in more than twenty neutron star low-mass X-ray binaries
(NS LMXBs) in their persistent emission with the {\em Rossi X-ray
Timing Explorer} ({\em RXTE}), which offered unique insights
into the physics of strong gravity and dense matter (see van der
Klis 2000, 2005 for reviews). In many cases the twin kHz QPOs
appear simultaneously and the correlations between the pair
frequencies have been investigated extensively (e.g., Psaltis et
al. 1998, 1999b; Belloni et al. 2005).
Moreover, their frequencies also follow rather tight correlations
with other timing features of the X-ray emission (Ford \& van der
Klis 1998; Psaltis et al. 1999a; Belloni et al. 2002).
There is currently no consensus as to the origin of these QPOs,
nor on what physical parameters determine their frequencies, which
have been identified with various characteristic frequencies in
the inner accretion flow (see e.g. Stella \& Vietri 1999;
Osherovich \& Titarchuk 1999; Lamb \& Miller 2001; Abramowicz et
al. 2003).
With the discovery of the twin kHz QPOs in the accretion-powered
millisecond pulsar SAX J1808.4$-$3658, it was found that the
frequency separation $\mbox{$\Delta\nu$}$ is almost half the spin frequency
(Wijnands et al. 2003). For other sources with detected spin
frequencies $\mbox{$\nu_{\rm s}$}$ from the burst oscillations, $\mbox{$\Delta\nu$}$ are shown
to be close to either the spin frequencies or half of them (van
der Klis 2005 and references therein).
%
These findings seem to hint at some underlying mechanism relating
$\mbox{$\nu_{\rm s}$}$ to the upper and lower kHz QPO frequencies ($\mbox{$\nu_2$}$ and
$\mbox{$\nu_1$}$). However, more detailed measurements show that $\mbox{$\Delta\nu$}$
is generally inconsistent with a constant value of $\mbox{$\nu_{\rm s}$}$, but
varies with $\mbox{$\nu_2$}$ or $\mbox{$\nu_1$}$ (van der Klis 2000, 2005 and
references therein), which casts doubt on the validity of the
1998; see Lamb \& Miller 2001 for a modified version). Osherovich
\& Titarchuk (1999) suggested the lower kHz QPO frequency $\mbox{$\nu_1$}$
to be the Keplerian frequency in the disk and the higher kHz QPO
frequency $\mbox{$\nu_2$}$ the hybrid between $\mbox{$\nu_1$}$ and $2\mbox{$\nu_{\rm s}$}$. Klu\'zniak
et al. (2004) showed that the twin kHz QPOs can be explained by a
nonlinear resonance in the epicyclic motion in the accretion
disks, which can lead to the 3:2 ratio for the two main resonances
(see also Abramowicz \& Klu\'zniak 2001; Abramowicz et al. 2003).
In this paper we propose an alternative interpretation for the
origin of the twin kHz QPOs, by considering the interaction
between the neutron star magnetic field and the surrounding
accretion disk. We introduce the model in \S 2, and present its
applications to several NS LMXBs with the simultaneously detected
kHz QPOs and known spin frequencies in \S 3. The possible physical
implications and conclusions are given in \S 4.
\section{MODEL}
Neutron stars in LMXBs generally accrete via an accretion disk.
In most of the disk, the plasma rotates in Keplerian
orbits. Close to the NS surface, the stellar magnetic fields begin
to truncate the disk and control the motion of the plasma,
resulting in a non-Keplerian boundary layer lying between the
magnetosphere corotating with the star and the outer Keplerian
disk. We assume that the boundary layer is confined by the inner
and outer radii, $\mbox{$R_{\rm in}$}$ and $\mbox{$R_0$}$ respectively. As conceivable,
the plasma corotates with the magnetosphere at $\mbox{$R_{\rm in}$}$, and the
plasma's motion begins to deviate from Keplerian rotation and take
its maximum value at $\mbox{$R_0$}$ (see Fig.~1).
As for the construction of the model, {\em we identify the upper
kHz QPO frequency $\mbox{$\nu_2$}$ to be the rotational frequency at $\mbox{$R_0$}$},
i.e.,
\begin{equation}
\mbox{$\nu_2$}\equiv\nu(\mbox{$R_0$})\equiv\xi\nu_{\rm K}(\mbox{$R_0$}),
\end{equation}
where $\nu_{\rm K}(\mbox{$R_0$})$ is the Keplerian rotation frequency at
$\mbox{$R_0$}$, and $0<\xi\lsim 1$. The value of $\xi$ depends on the
rotational frequency distribution inside the boundary layer.
Unfortunately, there is no analytic solution to the structure of
the boundary layer, and the value of $\xi$ can be evaluated only
through numerical calculations.
However, based on the qualitative characteristics of the rotation
rate in the boundary layer, Campbell (1987) has suggested the
following form for the disk rotation profile close to the
magnetosphere
\begin{equation}
\nu(R)=\nu_{\rm K}(R)-\nu_{\rm K}(\mbox{$R_{\rm in}$})(1-\mbox{$\omega_{\rm in}$})
\exp[-\frac{3(R/\mbox{$R_{\rm in}$}-1)}{2(1-\mbox{$\omega_{\rm in}$})}],
\label{nur}
\end{equation}
where $\mbox{$\omega_{\rm in}$}=\mbox{$\nu_{\rm s}$}/\nu_{\rm K}(\mbox{$R_{\rm in}$})$.
When $R\gg\mbox{$R_{\rm in}$}$, $\nu(R)\rightarrow\nu_{\rm K}(R)$; when
$R\rightarrow\mbox{$R_{\rm in}$}$, $\nu(R)\rightarrow\mbox{$\nu_{\rm s}$}$.
Equation (2) gives a reasonable description of $\nu(R)$ close to
$\mbox{$R_{\rm in}$}$, but our interest here focuses on the rotational behavior
around $\mbox{$R_0$}$, at which Eq.~(2) fails to be valid for a
sufficiently wide range of $\mbox{$\omega_{\rm in}$}$.
Instead, we take a slightly modified version of Eq.~(2) to
account for disk rotation,
\begin{equation}
\nu(R)=\nu_{\rm K}(R)-\nu_{\rm K}(\mbox{$R_{\rm in}$})(1-\mbox{$\omega_{\rm in}$})
\exp[-\frac{2(R/\mbox{$R_{\rm in}$}-1)}{(1-\mbox{$\omega_{\rm in}$})}].
\label{nur2}
\end{equation}
As an illustration, Fig.~1 shows $\nu(R)/\nu_{\rm K}(\mbox{$R_{\rm in}$})$
against $R/\mbox{$R_{\rm in}$}$ for three selected values of $\mbox{$\omega_{\rm in}$}=0.2$, 0.5 and
0.8 in solid curves. The dashed curve corresponds to Keplerian
rotation for comparison.
Although the hypothesized form (3) is phenomenological, it shares
the main features with the numerically calculated results (e.g.,
Ghosh \& Lamb 1979), and the following analysis also indicates
that it is adequate for describing the disk rotation.
From Eq.~(3) we can determine the location of $\mbox{$R_0$}$ using the
condition ${\rm d}\nu/{\rm d}R=0$ at $\mbox{$R_0$}$, and obtain the values
of $\xi\equiv\nu(\mbox{$R_0$})/\nu_{\rm K}(\mbox{$R_0$})$ for different values of
the so-called fastness parameter $\mbox{$\omega$}\equiv\mbox{$\nu_{\rm s}$}/\nu_{\rm K}(\mbox{$R_0$})$.
Figure 2 shows $\xi$ as a function of $\mbox{$\omega$}$.
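Since $\xi$ must be obtained numerically, a short script along the following lines reproduces the $\xi(\mbox{$\omega$})$ curve of Fig.~2 from Eq.~(3) (our own illustrative sketch, not a published code; radii are measured in units of $\mbox{$R_{\rm in}$}$, frequencies in units of $\nu_{\rm K}(\mbox{$R_{\rm in}$})$, and $\nu_{\rm K}\propto R^{-3/2}$ is assumed):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def xi_of_omega(omega_in):
    # Returns (omega, xi), where omega = nu_s/nu_K(R0) and
    # xi = nu(R0)/nu_K(R0), for a given omega_in = nu_s/nu_K(R_in).
    # In these units Eq. (3) reads
    #   nu(x) = x**-1.5 - (1 - omega_in)*exp(-2*(x - 1)/(1 - omega_in))
    d = 1.0 - omega_in
    dnu = lambda x: -1.5 * x**-2.5 + 2.0 * np.exp(-2.0 * (x - 1.0) / d)
    x0 = brentq(dnu, 1.0, 50.0)      # d(nu)/dx = 0 locates x0 = R0/R_in
    nu0 = x0**-1.5 - d * np.exp(-2.0 * (x0 - 1.0) / d)
    return omega_in * x0**1.5, nu0 / x0**-1.5

for w_in in (0.2, 0.5, 0.8):
    print(w_in, xi_of_omega(w_in))
\end{verbatim}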
Before discussing the origin of the lower kHz QPOs, we note that
in solar physics, QPOs with periods of several minutes in coronal
loops have been detected and successfully interpreted as
standing kink modes of MHD waves (Aschwanden et al. 1999;
Nakariakov et al. 1999). Thus, following the similar ideas in
treating the QPOs in the coronal loops, we suppose that these
MHD waves could also be excited at the inner edge of the accretion
disks, where the poloidal field lines are dragged along the
azimuthal direction due to shear motion between the star and the
disk, and various types of MHD instabilities take place (Ghosh \&
Lamb 1979). The reconnection of the azimuthal magnetic field lines
could result in circular flux tubes (loops) in the boundary layer,
where plasma is confined along a magnetic field line with some
cross section. It is well known that the coronal loops may be set
into oscillations with various modes leading to brightness
oscillations (Roberts 2000 for a review). While the similar
oscillations may also occur in the accretion disks, generated by
MHD turbulence, here we focus on the fast kink mode of standing
MHD waves. These fast waves arise as a free mode only for high
density loops (Edwin \& Roberts 1982), which may be appropriate in
accretion disks in NS LMXBs. Assuming the loop length to be the
circumference of the magnetosphere, $2\pi\mbox{$R_0$}$, the oscillation
frequency is
\begin{equation}
\nu=\frac{Nc_{\rm k}}{4\pi\mbox{$R_0$}},
\end{equation}
where $c_{\rm k}$ is the kink speed. The wave number
$k=N\pi/(2\pi\mbox{$R_0$})=N/(2\mbox{$R_0$})$. The integer $(N-1)$ stands for the
node number of the vibration along the tube axis,
with $N=1$ being the principal mode. At the inner edge of
the accretion disk, the sound speeds are much smaller than the
azimuthal Alfv\'en speeds $c_{\rm A\phi}$, and the plasma inside
a loop is much denser than its surroundings for the comparable
magnetic field strengths, therefore we have the kink speed
$c_{\rm k}\simeq \sqrt{2}c_{\rm A\phi}$ (Roberts 2000),
and in turn the frequency of a standing
kink mode
\begin{equation}
\nu=\frac{Nc_{\rm A\phi}}{2\sqrt{2}\pi\mbox{$R_0$}}.
\end{equation}
In the following, we estimate the azimuthal field strength at
$\mbox{$R_0$}$. We assume that the stellar magnetic field lines are
initially dipolar and penetrate the accretion disk. The
differential rotation between the star and the disk generates the
azimuthal field component $\mbox{$B_{\phi}$}$ from the vertical component $\mbox{$B_{\rm z}$}$
(Ghosh \& Lamb 1979).
According to Wang (1995), if the growth of $\mbox{$B_{\phi}$}$ is limited by
the diffusive decay produced by the turbulent mixing within the
disk, $\mbox{$B_{\phi}$}$ is given by\footnote{Other processes related to the
dissipation of the magnetic field give similar expressions of
$\mbox{$B_{\phi}$}$ (Wang 1995).}
\begin{equation}
\mbox{$B_{\phi}$}(R)=\gamma\frac{\nu(R)-\mbox{$\nu_{\rm s}$}}{\nu(R)}\mbox{$B_{\rm z}$}(R),
\end{equation}
where the parameter $\gamma\sim 1$ (Aly 1984; Uzdensky,
K\"{o}nigl, \& Litwin 2002).
We assume that the magnitude of $\mbox{$R_0$}$ is close to that of the
Alfv\'en radius where the magnetic energy density equals the total
kinetic energy density (Davidson \& Ostriker 1973), i.e.,
\begin{equation}
\frac{\mbox{$B_{\rm z}$}(R_0)^2}{8\pi}=\frac{1}{2}\eta\rho v_{\rm K}^2|_{R_0},
\end{equation}
where $\rho$ is the mass density, and $\eta\sim 1$.
Combining Eqs.~(6) and (7) we get the azimuthal Alfv\'en speed at
$\mbox{$R_0$}$,
\begin{equation}
c_{\rm A\phi}(\mbox{$R_0$})=\gamma\eta^{1/2}\frac{(\mbox{$\nu_2$}-\mbox{$\nu_{\rm s}$})}{\mbox{$\nu_2$}}v_{\rm
K}(\mbox{$R_0$}).
\end{equation}
Furthermore, {\em we suggest the lower kHz QPOs to be the
principal fast kink mode of the standing MHD waves along the
$\mbox{$B_{\phi}$}$ field lines at $\mbox{$R_0$}$}. Inserting Eq.~(8) into Eq.~(5) we
find the frequency to be
\begin{equation}
\mbox{$\nu_1$}=\gamma(\frac{\eta}{2})^{1/2}\frac{(\mbox{$\nu_2$}-\mbox{$\nu_{\rm s}$})}{\mbox{$\nu_2$}}\nu_{\rm
K}(\mbox{$R_0$}).
\end{equation}
Combining Eqs.~(1) and (9) we have
\begin{equation}
\mbox{$\nu_1$}=(\frac{\alpha}{\xi})(\mbox{$\nu_2$}-\mbox{$\nu_{\rm s}$}),
\end{equation}
where $\alpha=\gamma(\eta/2)^{1/2}\sim 1$ is taken as a free
parameter that absorbs the uncertainties in determining $\mbox{$R_0$}$ and
$\mbox{$B_{\phi}$}(\mbox{$R_0$})$. Since the $\xi(\mbox{$\omega$})$ relation is known from Fig.~2,
Eq.~(10) suggests a unique relation between $\mbox{$\nu_1$}$ and $\mbox{$\nu_2$}$ for
given values of $\mbox{$\nu_{\rm s}$}$.
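Given the $\xi(\mbox{$\omega$})$ curve, Eq.~(10) can be solved self-consistently for $\mbox{$\nu_1$}$, since $\mbox{$\omega$}=\xi\mbox{$\nu_{\rm s}$}/\mbox{$\nu_2$}$ by Eq.~(1). A hypothetical sketch building on the routine above (the fixed-point iteration, the tabulation range, and the illustrative values $\alpha=1$, $\mbox{$\nu_2$}=900$ Hz, $\mbox{$\nu_{\rm s}$}=363$ Hz are our own choices, not fits from this work):
\begin{verbatim}
import numpy as np
from scipy.interpolate import interp1d

# tabulate (omega, xi) pairs with xi_of_omega() from the sketch above
omegas, xis = np.transpose([xi_of_omega(w)
                            for w in np.linspace(0.02, 0.9, 100)])
xi_of = interp1d(omegas, xis, fill_value="extrapolate")

def nu1_from_nu2(nu2, nu_s, alpha=1.0):
    xi = 0.9                                 # initial guess
    for _ in range(100):                     # fixed-point iteration
        xi = float(xi_of(xi * nu_s / nu2))   # omega = xi*nu_s/nu2
    return (alpha / xi) * (nu2 - nu_s)       # Eq. (10)

print(nu1_from_nu2(nu2=900.0, nu_s=363.0))   # e.g. 4U 1728-34
\end{verbatim}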
Obviously, the MHD oscillations represent the continuum, i.e.,
the oscillation frequency is a continuous function of a wave
number. We ascribe the lower kHz QPOs to be the principal fast
kink mode with the loop length of $L=2\pi\mbox{$R_0$}$ due to the following
facts. Firstly, the global modes with $N=1$ or 2 are easiest to
excite in the loops (Roberts, Edwin, \& Benz 1984). Secondly, for
a standing wave the decay time of the MHD oscillations is
$\tau\propto (L/N)^2$ (Roberts 2000). Apparently, the oscillations
with longer $L$ and smaller $N$ have longer lifetimes, and
hence are more easily detected. Thus, for the fast kink mode
at $\mbox{$R_0$}$, oscillations along $\mbox{$B_{\phi}$}$ have the maximum loop length
and smallest wave number when $L=2\pi\mbox{$R_0$}$ and $N=1$.
There may exist other oscillation modes (e.g., the fast sausage
mode of standing fast MHD waves, standing slow MHD waves, and
propagating MHD waves) in the accretion disks, and some of them
could also show up as QPOs. However, their
oscillation frequencies are either too high or too low compared to
those of the kHz QPOs in NS LMXBs. Additionally, it seems that the
kink mode is the most likely to be seen as a standing wave (Roberts
2000).
\section{COMPARISON WITH OBSERVATIONS}
We have compared the predicted $\mbox{$\nu_1$}-\mbox{$\nu_2$}$ relation with the
observations of kHz QPOs in four NS LMXBs (4U1608$-$52,
4U1636$-$53, 4U1728$-$34, and 4U1915$-$05), in which both the
spin and twin kHz QPO frequencies have been measured (data were
provided by T. Belloni, M. M\'endez and D. Psaltis). In Fig.~3 the
crosses represent the measurements and the solid lines stand for
the theoretical relations. The spin frequency and the adopted
value of $\alpha$ for each source are also displayed. In all cases
the value of $\alpha$ is of order unity, as expected.
The theoretical predictions and the measured kHz QPO data match
quite well for 4U 1728$-$34 and 4U 1915$-$05, while only
approximate consistency is obtained for 4U 1608$-$52 and 4U
1636$-$53; this may be because the real magnetic
field structure in these sources is more complicated than in
the simplified model considered here.
\section{DISCUSSION AND CONCLUSIONS}
In this paper we have presented a plausible mechanism for the
production of twin kHz QPOs in NS LMXBs, invoking the magnetic
field-accretion disk interaction. Although NSs in LMXBs are
thought to have weak magnetic fields ($\sim10^{8-9}$ G), their
influence on the disk rotation has received little attention in
existing works on this subject. Titarchuk, Lapidus, \& Muslimov
(1998) have already suggested that a shock occurs in the transition
layer where the Keplerian disk adjusts to the sub-Keplerian flow.
disk can undergo various types of oscillations under the influence
of the gas, radiation, magnetic pressure and gravitational force.
As to the origin of the oscillations, the boundary layer in our
work is similar to, but closer to the NS than, the
centrifugal barrier region in Titarchuk et al. (1998).
One of the twin kHz QPO frequencies is usually interpreted as the
Keplerian rotation frequency at some preferred radius, most likely
the inner edge of the disk (e.g., Miller et al. 1998; Titarchuk \&
Osherovich 1999). We have followed this idea, but suggested that
the real (non-Keplerian) rotation at $\mbox{$R_0$}$ leads to the upper kHz
QPOs. Unfortunately, since there is no general analytic form for
the rotational profile within the boundary layer, we have to use a
phenomenological description for the rotation of disk plasma
around the inner edge\footnote{Even if we take $\xi=1$, that is,
Keplerian rotation at $\mbox{$R_0$}$, Eq.~(10) still fits the observational
data fairly well.}. Although it seems to be adequate, its validity
and accuracy should be tested more carefully in the future by
both observational and theoretical investigations.
Moreover, we interpret the lower kHz QPOs to be the fast kink
modes of MHD oscillations in loops along the $\mbox{$B_{\phi}$}$ field lines at
$\mbox{$R_0$}$, sharing the same physical mechanism as coronal loop
oscillations. Since $\mbox{$B_{\phi}$}$ is generated by $\mbox{$B_{\rm z}$}$ through shear
motion between the NS and the disk, this naturally links the twin
kHz QPOs with the stellar spin. But it is distinct from the
traditional beat-frequency model in that, in our work, the
twin QPOs originate from different physical processes. We note
that a similar hypothesis was discussed by Muslimov (1995), who
suggested that the QPOs in LMXBs may be caused by the excitation
of the so-called nonlinear global Alfv\'en modes in the boundary
layer plasma. It is interesting that these modes were observed in
the studies of an ideal MHD spectrum of a toroidal plasma
(Goedbloed 1975). In addition, the detailed numerical
investigation of these modes was performed by Appert et al.
(1982), and their existence was confirmed experimentally by Behn
et al. (1984) and Evans et al. (1984).
Recent analysis by Barret, Olive, \& Miller (2005) showed that the
quality factor for the lower kHz QPOs in 4U 1636$-$53 increases
with frequency up to a factor of $\simeq 200$ when $\mbox{$\nu_1$}\simeq
850$ Hz, then drops at higher frequencies. A ceiling of the lower
kHz QPO frequency at 920 Hz is also seen. In the frame of the
present work, we ascribe these features to the evolution of the
twisted field lines. Several theoretical studies of the star-disk
interaction (e.g. Aly 1984, 1985; Uzdensky et al. 2002) have shown
that as a dipole field is twisted due to differential rotation,
the field lines inflate and effectively open up when a critical
twist angle is attained (Uzdensky et al. 2002). This limits the
azimuthal pitch at $\mbox{$R_0$}$,
$|\mbox{$B_{\phi}$}(\mbox{$R_0$})/\mbox{$B_{\rm z}$}(\mbox{$R_0$})|=\gamma(\mbox{$\nu_2$}-\mbox{$\nu_{\rm s}$})/\mbox{$\nu_2$}$, to some critical
value, say, $\gamma_{\rm c}$, demonstrating that a steady state
configuration could be established only if the rotation shear
$\mbox{$\nu_2$}-\mbox{$\nu_{\rm s}$}$ is small enough. These arguments suggest that there
may exist a maximum value of $\mbox{$\nu_2$}$, beyond which the $\mbox{$B_{\phi}$}$ field
becomes unstable, resulting in decreasing quality factor of the
lower kHz QPOs at higher frequencies and a saturation frequency of
$\mbox{$\nu_1$}$ when most of the field lines become open.
We have presented a qualitative description of the kHz QPO
production mechanism and a crude quantitative expression of the
kHz QPO frequencies, to interpret the observed kHz QPO phenomena.
As a preliminary exploration, many physical details have not been
considered, such as under what conditions and how much MHD wave energy
is produced to account for the observed Fourier power spectrum of
kHz QPOs. These should be investigated more carefully in
future work.
\acknowledgements This work was supported by NSFC under grant
number 10025314 and MSTC under grant number NKBRSF G19990754.
We are grateful to T. Belloni, M. M\'endez and D. Psaltis for
providing the QPO data, and T. P. Li and P. F. Chen for helpful
discussions.
The authors express thanks to an anonymous referee for the
critical comments that greatly helped improve the manuscript.
\section{Analysis}
\label{analysis}
For our experiments, we reused most of the original DCRNN code\footnote{Code available at \url{https://github.com/victorchan314/DCRNN}}, with some adaptations to test variations of the algorithm. The hyperparameters used in the original paper produced positive results for us. We train for 100 epochs with batch size 64, decay the learning rate by a factor of 0.1 from an initial value of 0.1 every 10 epochs, and use scheduled sampling with an inverse sigmoid decay function to slowly introduce predicted data as labels. We focus on Mean Absolute Percentage Error (MAPE) as a normalized metric, although we include Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) for reference. We compare the results to the following baselines: Constant Mean, Seasonal Naive/Historical Average, ARIMAX $(2, 1, 0, 1)$, and GRU. For ARIMAX, we experimented with a seasonal term and an online version, but saw no improvement over plain ARIMAX. Our GRU architecture follows the architecture used in \cite{DCRNN}: as with DCRNN, two encoder cells and two decoder cells are combined in a seq2seq architecture. Thus, the only difference between DCRNN and GRU is the diffusion convolution with the incorporation of signal phase timing data.
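For reference, these error metrics can be computed as in the following sketch (our own illustration rather than code from the DCRNN repository; masking zero-flow entries is an assumption we make to keep MAPE well defined):
\begin{verbatim}
import numpy as np

def masked_errors(y_true, y_pred):
    # MAE, RMSE and MAPE, ignoring zero-flow ground-truth entries
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    mask = y_true != 0                 # avoid division by zero in MAPE
    err = y_pred[mask] - y_true[mask]
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mape = np.abs(err / y_true[mask]).mean() * 100.0
    return mae, rmse, mape
\end{verbatim}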
Experiments were conducted separately on three of the traffic signal timing plans: P1, P2, and P3. The difference in traffic flow between them is significant, so we trained separate models for each plan instead of learning one model for the entire dataset. Data from the nighttime plan E is sparse and noisy; moreover, because nighttime periods exhibit little congestion, it is less useful to predict for them than for the daytime, so we did not run experiments on plan E.
All experiments predict with a horizon of six, which is equivalent to half an hour. We test four window sizes: 15 minutes (15m), half an hour (30m), one hour (1hr), and two hours (2hr), corresponding to 3, 6, 12, and 24 data points. Because the lengths of some periods during the plans were short, for example only three hours or 36 points for the morning peak, we include a \textit{start buffer} of the previous plan's data at the beginning of each period of each plan in order to have access to more data points. The length of the start buffer is equal to the window size. For example, for data from plan P2 with a 1hr window size, we include the hour of data from plan E from 5:00 to 6:00. This way, the number of data points for each of the different window sizes remains the same.
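A minimal sketch of this windowing scheme, including the start buffer, is given below (our own illustration; the array shapes are assumptions). Note that the number of (input, label) pairs it produces is independent of the window size, as described above:
\begin{verbatim}
import numpy as np

def make_windows(period, buffer_, window, horizon=6):
    # period:  (T, ...) 5-minute readings for one period of a plan
    # buffer_: (window, ...) readings from the preceding plan
    series = np.concatenate([buffer_, period], axis=0)
    xs, ys = [], []
    for t in range(window, series.shape[0] - horizon + 1):
        xs.append(series[t - window:t])    # input window
        ys.append(series[t:t + horizon])   # half-hour label sequence
    return np.stack(xs), np.stack(ys)
\end{verbatim}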
\subsection{DCRNN with Full Information}
The DCRNN was our model of choice because we had access to full information with which we could populate a transition matrix. From the signal phase timing data, we knew the weights between all pairs of detectors in the network.
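A minimal sketch of how such a weighted matrix can be populated from phase splits follows (the record format, field names, and values below are hypothetical placeholders rather than the actual data schema; only the detector IDs are taken from this study):
\begin{verbatim}
import numpy as np

# hypothetical signal phase timing records: for each ordered pair of
# adjacent detectors (u -> v), the green time (phase split) of the
# phase serving that movement and the cycle length, in seconds
phases = {
    ("508302", "508306"): {"split": 40.0, "cycle": 120.0},
    ("508306", "508302"): {"split": 25.0, "cycle": 120.0},
}
detectors = sorted({d for pair in phases for d in pair})
idx = {d: i for i, d in enumerate(detectors)}

W = np.zeros((len(detectors), len(detectors)))
for (u, v), rec in phases.items():
    # weight = fraction of the cycle during which u -> v flow is served
    W[idx[u], idx[v]] = rec["split"] / rec["cycle"]
\end{verbatim}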
\input{results/full-information/flow/full-information_P2}
\input{results/full-information/flow/full-information_P1}
\input{results/full-information/flow/full-information_P3}
\paragraph{Morning Peak}
The results for the morning peak are available in Table \ref{tab:full-information_P2}. We note several trends in DCRNN results. Prediction error increases drastically as we predict flow at longer horizons. Unsurprisingly, data points further from the input data have higher uncertainty. Moreover, in general, error decreases as window size increases. However, in some situations, error increases when we use a 2hr window. This seems to be a common pattern across other experiments as well. There is enough variation in the training procedure that performance saturates once window size reaches 1hr. Because using longer windows requires longer times to train the model, a practical implementation of DCRNN could use a shorter window and still achieve near-optimal performance. For this location, 1hr is the plateau for window size.
We compare DCRNN results to that of ARIMAX and GRU. ARIMAX has decent performance on the data. However, for the longer-horizon 15m and 30m predictions, DCRNN performs much better than ARIMAX. The difference is less pronounced between DCRNN and GRU. In fact, GRU performance is very close to DCRNN performance, achieving lower error in several cases, even though the difference is within the bounds of random variation. Notably, the gap narrows for 15m predictions, and for 30m predictions, DCRNN achieves lower errors.
The difference between DCRNN and the other models becomes clear when we predict for longer horizons. DCRNN is able to learn long-term temporal characteristics of the system more effectively than GRU, the next-best model, can. As a result, DCRNN consistently outperforms GRU for long horizon predictions, even when GRU achieves lower error for the 5m and 15m predictions. Moreover, whereas GRU error plateaus at the 1hr window size, DCRNN error continues to drop when we use a 2hr window size, indicating that with more training data, we would see even better predictions from DCRNN. Because traffic during the morning peak is very irregular and complicated, the model requires more examples to learn the rich patterns present in the data.
\begin{figure}[t]
\centering
\includegraphics[clip,trim={1.9cm 0cm 1.3cm 0.6cm},width=\columnwidth]{images/detector_508306_predictions.png}
\caption{Test predictions of DCRNN on flow measured in the month of December by detector 508306. The model was trained with a window size of 1hr and a horizon of 6 data points, or half an hour.}
\label{fig:detector_508306_predictions}
\end{figure}
The comparison between DCRNN predictions and the ground truth is illustrated in Fig. \ref{fig:detector_508306_predictions}. All three horizons result in predictions that are very close to the ground truth, although the half-hour predictions are clearly less accurate. We note a consistent overshooting problem in the graph. When the ground truth data maintains its cyclical course, even with sawtooth edges, the predictions match very closely. However, for irregular changes in shape, the model's predictions at each time step continue along the previous trajectory and diverge from the ground truth. For example, on 12/12, traffic grinds to a halt in the middle of the morning peak, likely due to an exogenous event such as an accident. However, the model's predictions overshoot this drop and continue upwards in the direction that traffic flow would usually travel. After seeing the atypically low measurements, the model tries to correct by predicting a very sudden sharp downward trajectory. At this point, the output of DCRNN is actually negative, overshooting the flattening out of flow, so we clip the value to zero when predicting. While DCRNN is able to learn the cyclical patterns, it struggles to adapt to outliers and rare exogenous events.
\begin{figure}[t]
\centering
\includegraphics[clip,trim={1.9cm 0cm 1.3cm 0.6cm},width=\columnwidth]{images/detector_508306_baselines.png}
\caption{Test predictions of DCRNN, GRU, and ARIMAX on flow measured in the month of December by detector 508306. The model was trained with a window size of 1hr and a horizon of 6 data points, or half an hour. Here we show the predictions with a horizon of 6, where DCRNN improves the most compared to the other methods.}
\label{fig:detector_508306_baselines}
\end{figure}
In Fig. \ref{fig:detector_508306_baselines}, we see the comparison between DCRNN and its two closest baselines for half-hour predictions. ARIMAX exhibits the cyclical shape of the ground truth data, but lags behind by several time steps, and thus is unable to predict the future accurately. GRU is similar to DCRNN, except that it tends to not follow the ground truth pattern as closely as does DCRNN. Especially during the congested morning periods, DCRNN reaches the same amplitude that the ground truth reaches, but GRU peaks too soon and turns downwards before the ground truth flow subsides. Even during winter vacation, from Christmas (12/25) to New Year's Eve (12/31), when traffic flow has quieted down, DCRNN adapts to the smaller peaks more quickly than GRU does.
\paragraph{Other Times}
In both the afternoon peak (Table \ref{tab:full-information_P3}) and off peak (Table \ref{tab:full-information_P1}) results, we recognize many of the same trends present in the morning peak experiments. Longer windows consistently produce lower prediction errors, but not past 1hr. For the off peak plan, we see the same pattern as with the morning peak. DCRNN outperforms all of the other models except for with a 2hr window, where GRU achieves lower error for 5m and 15m horizon predictions. However, for the afternoon peak plan, DCRNN outperforms every other model. Thus, the results substantiate the hypothesis that a major advantage of DCRNN is its ability to learn the pattern of traffic flow for long horizon predictions. The ARIMAX and GRU models are able to memorize the gist of the data trends, but fail to understand finer details of the data.
One notable difference between these two periods and the morning peak is the drastically lower prediction error, often dipping below four percent, even for the baselines. The magnitude of data from outside the morning peak is much lower, and therefore the peaks are not as pronounced. Because there is less up-and-down variation in the data, the trends are easier for the models to learn and predict. Moreover, there is less contrast between different days of the week. Traffic during the afternoon peak is simpler than during the morning peak, so even with a limited amount of data, DCRNN enjoys the full benefits of the signal phase timing plans.
For the afternoon peak, DCRNN still has a clear improvement over GRU, especially for the 30m predictions. However, for the off peak plan, although DCRNN consistently achieves lower error than GRU, the decrease is far smaller. We believe that this is due to the fact that signal timing plans have the most substantial effect on traffic flow when there is congestion but not queue spillback. During these peak hours, cars are fully subject to the signal phases. Thus, the transition matrix representing the diffusion process comes into play. During the off peak hours, there is little congestion, and therefore little benefit from modeling the intersection.
\paragraph{Flow and Occupancy}
In theory, if we provide additional sensor information to the model, it should perform at least as well as before. This expectation was realized in experiments for the off peak and afternoon peak periods, where the results did not change significantly when occupancy was included in the input data. However, when we included occupancy for the morning peak, the error increased by a not insignificant amount. The MAPE for 30m predictions regressed by one percent for DCRNN and two percent for GRU. Notably, we saw the same trend as before, where DCRNN always outperformed the other baselines for 30m predictions, but was occasionally outperformed by GRU for 5m and 15m predictions. However, DCRNN and GRU both suffered when occupancy was added as an extra feature. We do not fully understand why this phenomenon occurs; however, our hypothesis is that occupancy data from the detectors is noisy enough that it affects the long-term relationships that the models learn.
\subsection{Incomplete Information Scenarios}
We chose a specific group of detectors for our experiment because they represented a system that was very close to being fully-observed for both upstream and downstream lanes. However, cities may not have the resources to cover every lane on every road with detectors, and even then, real-world detectors occasionally fail, as exemplified by our data. Even in our network, detectors 508302 and 508306 are the only two detectors with extensive coverage, and the system is still not fully closed. In order to test situations in which full data is not available, we ran experiments with augmented data. We provide a brief summary here; for complete results, see \cite{Chan:EECS-2020-68}.
The first scenario is the case of incomplete information, when we do not have full coverage. We simulated this by creating a new transition matrix with only a subset of the detectors and predicting using data from only that subset. Omitting upstream detectors resulted in saturation of prediction accuracy for longer horizons, indicating that the full information scenario might have superior performance with more data. Omitting downstream detectors actually produced a consistent slight improvement for the morning peak. This phenomenon is likely caused by the imperfect closed system in the downstream direction, with an unhealthy detector and multiple stop sign intersections violating the diffusion assumptions of the transition matrix. Overall, however, these differences were not significant.
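In code, simulating reduced coverage amounts to restricting both the transition matrix and the data array to the retained detectors, e.g. (our own sketch):
\begin{verbatim}
import numpy as np

def restrict(W, X, keep):
    # W: (D, D) transition matrix; X: (T, D, F) detector data;
    # keep: indices of the retained detectors
    keep = np.asarray(keep)
    return W[np.ix_(keep, keep)], X[:, keep, :]
\end{verbatim}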
The second scenario is the case of unhealthy detector data. We used the same transition matrix as the full information case, but zeroed out part of the data to simulate a situation in which the number of detectors is fixed, but some of the detectors do not provide good data during training. The augmented data was evaluated in two ways to determine robustness to unreliable input: predicting on the data with trained models from the full information scenario, and training new models.
In general, using the full information models to predict in the presence of any unhealthy detectors greatly diminished accuracy. Retraining on the augmented data alleviated the error, although it was unable to close the gap. Clearly, the full information models rely on data from all detectors for the most accurate predictions. Surprisingly, even when all but the two noisy stopbar detectors were augmented, short horizon predictions were not affected after retraining. While flow from our two detectors of interest is sufficient for one-step predictions, flow from the other detectors is crucial for long horizon predictions.
Our final scenario investigated setting the detector data to zero on a certain proportion of randomly-selected days to simulate temporary outages. We analyzed four percentages of days to augment: 5\%, 10\%, 25\%, and 50\%. For all plans and horizons, using the full information matrix to predict significantly curtailed performance, with MAPE surging to past 50\% as a larger portion of the data was augmented. Even after retraining, the errors were large and exhibited high variation. We can conclude that data quality is of utmost importance to our model. Training data must be carefully preprocessed to avoid detrimental effects. Unlike in the previous scenarios, where the model is robust to misbehaving detectors, here the quality of the data itself is degraded. While the model is able to absorb some of the impact and produce decent results in some cases, it produces much more accurate and consistent results when no data is corrupt.
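For concreteness, a sketch of this day-level outage augmentation (assuming 5-minute data, i.e., 288 points per day):
\begin{verbatim}
import numpy as np

def zero_out_days(X, frac, points_per_day=288, seed=0):
    # zero a random fraction of whole days in X of shape (T, D, F)
    rng = np.random.default_rng(seed)
    n_days = X.shape[0] // points_per_day
    bad = rng.choice(n_days, size=int(frac * n_days), replace=False)
    X = X.copy()
    for d in bad:
        X[d * points_per_day:(d + 1) * points_per_day] = 0.0
    return X
\end{verbatim}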
\subsection{Other Experiments}
We also tested several DCRNN setup variations to investigate whether these alterations would generate more accurate predictions. First, we trained a different DCRNN for each day of the week, as \cite{lowrank} discovered significant changes in the traffic profile between each day of the week. Second, we trained six DCRNNs to make single-horizon predictions instead of predicting all six horizons at once. Neither of these experiments improved upon the original model; splitting by day of the week resulted in similar performance, while single-horizon predictions actually performed worse. Instead of just jumping ahead without intermediate context, the model requires all of the training labels to identify temporal patterns. The original DCRNN model has the structure and expressiveness to represent traffic in our system without these extra modifications.
\section{Conclusion}
\label{conclusion}
Arterial traffic prediction is far more challenging compared to freeway traffic prediction. Spatial information plays a much more salient role and must be effectively applied to optimize prediction accuracy. In this study, we explored using signal phase timing data to generate a weighted adjacency matrix based on traffic signal phase splits. Combined with our graph convolutional model of choice, the DCRNN, we show that the signal phase timing data enhances arterial flow predictions, especially long horizon forecasts. We achieve MAPE as low as 16\% for a prediction horizon of 30 minutes for morning peak congestion. For afternoon peak and off peak data, we achieve MAPE lower than 8\% and 10\% for the same horizon. Signal phase timing data defines the relationships between detectors in the network and allows the model to learn long-term temporal relationships for long horizon predictions.
In addition, we tested numerous variations of the measurements and the detector network to investigate the effects of detector coverage and data quality on prediction performance. One surprising discovery is that detector coverage is overshadowed by detector proximity and precise measurements; as a result, we saw no significant effects after omitting stopbar detectors and distant detectors. In every scenario with simulated unhealthy data, prediction accuracy and consistency diminished, but retraining the model mitigated that decline. Short horizon predictions were not particularly affected, but long-horizon prediction error skyrocketed with just one or two unhealthy detectors. When presented with more information, our model makes good use of it to generate excellent predictions, but it is also robust to faulty detectors during training. However, at least some of the detectors must be relatively reliable---errors soared even when only 5\% of days were zeroed out. Although the data can include some anomalies, it must be relatively consistent throughout the entire dataset.
In the future, we can study extensions and variations of this work. We can train deeper and more expressive models to better learn complex patterns. The area of deep unsupervised learning is burgeoning, and because traffic network matrices are polynomial with respect to the number of detectors and the size of the graph, it would be very useful to find a compressed feature representation for the entire network state. This would be particularly beneficial for the signal phase timing data. DCRNN applies a static transition matrix, so we used planned phase splits; however, traffic plans are dynamic and reactive to traffic conditions, so the actual phase splits are different for each point in time. With a latent embedding, we could encode the signal phases for each data point instead of aggregating them into a single static matrix. Some newer graph convolutional architectures, such as Graph WaveNet \cite{Graph_WaveNet}, allow adaptive filters, so they can be applied to the problem as well.
Another prospective direction is to include even more varied types of information, such as pedestrian activity at intersections. In addition, DCRNN allows prediction of all detectors at once. We could examine flow forecasts for an entire network of sensors, even one that isn't a closed system. Flow predictions can also be applied to signal control applications to determine the effect of forecasts on travel time and queue length on urban roads. Arterial traffic predictions have many applications, so we must leverage all the data and technology in our toolbox to tackle the challenge.
\section{Introduction}
\label{introduction}
\IEEEPARstart{T}{he} problem of efficient transportation has typically been a hardware and civil engineering problem, as companies have developed faster and cleaner cars, built carefully-designed freeways, and architected roads in cities. With the rise of intelligent transportation systems (ITS), the problem has shifted focus to the fields of mathematics, statistics, and computer science. As governments install more sensors in road networks and collect ever-increasing amounts of data, research has begun to concentrate on designing improved prediction and control techniques. Traffic flow and speed prediction has numerous applications, such as freeway metering, travel time prediction, intelligent intersection signal control, and traffic simulation software. With accurate traffic flow forecasts, cities can better plan logistics and allocation of resources for construction, road development, and safety. Predictions can also be leveraged to optimize signal control at intersections, saving commuters valuable time and reducing consumption of gas and electricity.
Historically, a wide variety of models have been used for traffic flow prediction. Although they are often grouped into parametric and nonparametric categories, or classified as statistics or machine learning, most models are closely-related and have overlapping properties \cite{comparison}. Statistical methods such as Kalman filters \cite{kalman_filtering}, exponential smoothing \cite{exponential_smoothing}, and techniques in the ARMA family \cite{ARIMA_OG, ARIMA, ARIMAX, SARIMA, ARMAX} typically rely on strong priors and assumptions about data distributions; as a result, traffic experts must carefully select and structure the models. Because of this, parametric methods present a trade-off between easy interpretation and practicality \cite{parametric_comparison}. Nonparametric methods, more flexible and expressive than parametric models, have surged in popularity as hardware has been upgraded and more powerful algorithms have been developed. Methods prevalent in the literature include nearest neighbors regression \cite{KNN_Similarity_Degree}, principal component analysis \cite{lowrank}, decision trees, support vector machines \cite{etaSVR}, and fuzzy rule-based systems \cite{FRBS}.
Even these machine learning methods have been overshadowed by the rise of neural networks. Deep learning has gained much traction in recent years as data has become more readily available and computing power has grown exponentially. Simple feed-forward neural networks have evolved into convolutional neural networks, long short-term memory networks, and graph convolutional architectures. State-of-the-art algorithms utilize meta-learning and distillation \cite{MetaST, ST-MetaNet}, residual connections \cite{ST-ResNet}, attention \cite{complicated_cnn_lstm}, and adversarial training \cite{GCGAN, TrafficGAN}. Deep architectures open the door for a new generation of nonparametric models that are constantly improving prediction accuracy.
Overall, most prediction methods are very proficient at forecasting freeway data. Freeways are a mostly-closed system, with leakages only from on-ramps and off-ramps. Traffic flows smoothly from one sensor to the next with few interruptions; thus, freeway traffic data is typically smooth and clean. In contrast, arterial traffic is much noisier and more difficult to predict. At intersections, traffic signals and stop signs introduce exogenous factors that affect the speed and movement of cars. Moreover, elements such as pedestrians, bikes, parking, and driveways further complicate traffic patterns. Much existing literature focuses on freeway traffic prediction, but less work explores its arterial counterpart.
One strategy that has proved useful in overall traffic flow prediction is the graph convolution, which applies to the setting of predicting a label for a graph, given a set of graphs with their associated labels \cite{DCNN}. In the most general case, the graphs are directed and weighted, and the labels can be associated with any part of the graph, including the nodes, edges, and the graphs themselves. The spatial information from graph convolutions is significant in arterial traffic flow prediction because the detector graph is much more complicated than that of freeways. Graph convolutions have spatial structure built into the architecture, so they naturally account for the spatial relationships between detectors when predicting traffic flow.
Another consideration is the inclusion of different types of data as input for prediction. Most models treat the data as a time series, thus relying only on historical values of the data to forecast future values. Sometimes, extra features such as date, time, day of week, and exogenous events are included \cite{RSLDS}. We employ signal phase timing data from the traffic signals at our study site. Previously, signal phase timing data has been combined with detailed traffic knowledge to develop a system of equations to predict traffic flow \cite{signal_timing}. However, the model only applied to very short-term predictions, as it was intended for real-time signal control.
In this study, we focus on the Diffusion Convolutional Recurrent Neural Network (DCRNN) \cite{DCRNN}. We apply DCRNN to predict arterial traffic flow for detectors with full coverage. In order to adapt DCRNN to arterial traffic, we use novel signal phase timing data to construct the weighted transition matrix of the graph. Instead of modeling transition probabilities with road distances, which are not suitable for intersections, we calculate the phase split fraction from the phase split and cycle length. We demonstrate that using signal phase timing information reduces prediction error, especially for long horizon predictions. Moreover, we find through many ablation studies that the model does indeed learn the relationships between the detectors in the network.\footnote{Code available at \url{https://github.com/victorchan314/arterial_traffic_flow_predictor}}
The rest of this paper is organized as follows. In section \ref{literature_review}, we summarize current literature on traffic flow prediction, especially in deep learning. In section \ref{method}, we present the model we use and our strategy to append signal phase timing data. In section \ref{studysite_dataset}, we introduce the study site and dataset used in this report. In section \ref{analysis}, we analyze our arterial traffic flow forecasts and evaluate their effectiveness through many ablation studies. In section \ref{conclusion}, we draw conclusions based on our analysis.
\section{Related Work}
\label{literature_review}
\subsection{ARMA Models}
Of the numerous statistical methods for traffic flow prediction, we focus on models in the ARMA family, which have seen much success in general time series prediction. Although not the first to do so, Ahmed and Cook applied an $\text{ARIMA}(0, 1, 3)$ model to forecast freeway occupancy and flow in 1979 \cite{ARIMA_OG}. Hamed, Al-Masaeid, and Said extended the model to predict arterial flow \cite{ARIMA}. Williams and Hoel showed that a weekly seasonal difference could make freeway flow stationary, thus cementing the theoretical justification for fitting ARMA models to traffic data \cite{SARIMA}. The field has been further expanded by the application of exogenous data to standard ARIMA models \cite{ARIMAX, ARMAX}.
\subsection{Deep Learning}
\paragraph{Recurrent Neural Networks}
Because traffic data is a time series, it makes sense to apply recurrent neural networks (RNN) to the prediction problem to learn temporal patterns. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures mitigate the vanishing gradient problem \cite{LSTM, LSTM_shallow, LSTM_GRU}. Zhao \textit{et al. } input an origin-destination correlation matrix, which captures the correlation between different points in the detector network, to LSTM \cite{LSTM_ODC}. The above methods dealt with freeway data; in contrast, Mackenzie, Roddick, and Zito used a sparse distributed representation of data with binary vectors and showed that it was comparable to LSTM for arterial flow prediction \cite{HTM_LSTM}.
\paragraph{Recurrent Convolutional Neural Networks}
RNN methods simply include data from multiple sensors to extract the encoded spatial information, which is not very effective. To directly handle the spatial dimension, models evolved to synthesize RNNs with convolutional neural networks (CNN). Yu \textit{et al. } filled an image of a road network with average link speeds and fed it sequentially into a 2D convolution and an LSTM to learn temporal relationships \cite{SRCN}. Yao \textit{et al. } used start and end flow values for a two-channel image \cite{complicated_cnn_lstm}.
\paragraph{Graph Convolutions}
While CNNs treat spatial relationships more directly, they still bear an inevitable mismatch with traffic data. Road networks are inherently graphs and not grids---they are not accurately represented as images. To this end, we turn to the graph convolution, which is a natural fit for traffic data. The graph structure is explicitly baked into the architecture of Graph Convolutional Networks (GCN) instead of being implicitly included with the data or imprecisely approximated with images.
Atwood and Towsley defined the Diffusion Convolutional Neural Network (DCNN), which uses the power series of the degree-normalized transition matrix to model a diffusion process; DCNN output high-quality results for citation graph datasets \cite{DCNN}. Li \textit{et al. } adapted the DCNN with a seq2seq GRU architecture to create DCRNN \cite{DCRNN}. Other studies approximate the filters with a first order Chebyshev polynomial \cite{STGCN} or use a graph convolution for feature extraction \cite{T-GCN}. More recent works incorporate state-of-the-art innovations such as Wavenet \cite{Graph_WaveNet}, U-Net \cite{ST-UNet}, and attention \cite{ASTGCN, GSTNet} into GCNs. Fewer works take advantage of GCNs to forecast arterial data. Cui \textit{et al. } used an adjacency matrix of nodes in a $k$-hop neighborhood to extract features of the graph before feeding them into an LSTM \cite{TGC-LSTM}. Guo \textit{et al. } optimized the Laplace matrix in a graph convolution in GRU cells and showed that the learned matrices had high correlation with physical proximity \cite{OGCRNN}.
\paragraph{Other Deep Learning Methods}
There are many deep learning methods that have shown promise in traffic flow prediction, but do not leverage the graph convolution. Simpler works for predicting freeway flow utilize multilayer perceptrons \cite{genetic_NN}, stacked autoencoders \cite{SAE}, vector autoregression \cite{sparse_VAR}, CNNs \cite{images, CapsNet}, and ResNets \cite{ST-ResNet}. In order to reap the benefits from multiple models, Zhan \textit{et al. } used a consensus ensemble system to prune outliers in arterial flow forecasts \cite{connected_corridors_consensus}.
The most recent works have incorporated elements of deep reinforcement learning and unsupervised learning into arterial flow prediction. They are often carefully assembled from complicated components that employ meta-learning \cite{MetaST, ST-MetaNet}, attention \cite{STANN}, and Generative Adversarial Networks (GAN) \cite{GCGAN, TrafficGAN} to learn feature representations of traffic. These methods provide flexible and expressive models that, if designed and trained properly, can easily outperform parametric and statistical methods. The additional parameters of these models also provide a way to incorporate extra data, such as signal phase timing information. There is still much to be explored and much room for improvement, especially with arterial traffic prediction.
\section*{Acknowledgment}
The authors express their thanks to Damian Dailisan, Umang Sharaf, Keith Anshilo Diaz, and Carissa Santos for providing thoughtful insights into the experiments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Method}
\label{method}
Most conventional traffic prediction methods only exploit temporal information. Spatial relations are ignored or not directly built into the architecture. Recently, a new method, DCRNN \cite{DCRNN}, has been proposed to directly integrate spatial information, such as sensor layouts, into the architecture. It models freeway traffic as a diffusion process where cars disperse from upstream sensors to downstream sensors. The parameters for the transition matrix rely on physical properties of the road network, such as distances between sensors, unlike other graph convolutions that use binary weights or learn the weights; thus, signal phase timing data is apt for the model. This spatio-temporal property of DCRNN has also been utilized in other applications and fields: travel time estimation \cite{travel_time_estimation}, ride-hailing demand \cite{ride_hailing}, air quality forecasting \cite{air_quality}, and distributed fleet control in reinforcement learning \cite{distributed_fleet_control}.
Different from the aforementioned studies, we apply DCRNN to arterial traffic prediction. We are one of the first studies to do so; moreover, we are the first to use signal phase timing data with deep learning for arterial traffic prediction. We establish that it is possible to model traffic at adjacent arterial intersections as diffusion processes if the architecture is correctly constructed with the right parameter settings. A more detailed description of our model architecture is provided in the following subsections.
\subsection{DCRNN}
DCRNN relies on its namesake diffusion process to incorporate spatial information into the architecture. This is represented with a transition matrix over the network of sensors which, when multiplied with the state vector at time $t$, outputs the data point for time $t+1$. The transition matrix defines a \textit{diffusion convolution} operation that replaces matrix multiplications in a seq2seq RNN to comprise the DCRNN.
Let us define $D$ as the number of detectors in our network and $F$ as the number of features from each detector (flow, occupancy, etc.). Let us also define $H$ as the \textit{prediction horizon}, the number of time steps that we predict into the future, and $S$ as the \textit{window size}, the number of time steps we use to predict. Then each data point is an $X \in \mathbb{R}^{D \times F}$, and our goal is to learn a model that uses input $(X^{(t - S + 1)}, \hdots, X^{(t)})$ to predict $(X^{(t + 1)}, \hdots, X^{(t + H)})$.
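For concreteness, the following NumPy sketch (our own illustration, not code from the released implementation; all names are hypothetical) shows how a detector series of shape $(T, D, F)$ is cut into such input--target pairs:
\begin{verbatim}
import numpy as np

def make_windows(series, S, H):
    """Slice a (T, D, F) detector series into supervised pairs.

    Returns inputs of shape (N, S, D, F) and targets of shape
    (N, H, D, F), where N = T - S - H + 1.
    """
    T = series.shape[0]
    X, Y = [], []
    for t in range(S - 1, T - H):
        X.append(series[t - S + 1:t + 1])  # (X^(t-S+1), ..., X^(t))
        Y.append(series[t + 1:t + H + 1])  # (X^(t+1), ..., X^(t+H))
    return np.stack(X), np.stack(Y)

# e.g. one year of 5-minute data, 12 detectors, flow and occupancy
series = np.random.rand(105120, 12, 2)
X, Y = make_windows(series, S=12, H=6)
\end{verbatim}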
We represent our system as a weighted directed graph $G = \{V, E, \bm{W}\}$, where the detectors are the vertices and the arterial roads are the edges. In our graph, $|V| = D$, and the transition matrix $\bm{W} \in \mathbb{R}^{D \times D}$ is a weighted adjacency matrix, with entry $\bm{W}_{i, j}$ representing the likelihood of transitioning from node $i$ to node $j$. These weights do not have to be probabilities and do not need to be normalized; they must simply be some function that is larger for nodes $j$ that are more likely destinations of cars from node $i$. We define $\bm{D_O} = \text{diag}(\bm{W}\bm{1})$ and $\bm{D_I} = \text{diag}(\bm{W^\top}{\bm{1}})$, where $\bm{1} \in \mathbb{R}^D$ is the all-ones vector and the $\text{diag}$ function takes in a vector and constructs a square matrix with the entries of the vector along its main diagonal. Thus, $\bm{D_O}, \bm{D_I} \in \mathbb{R}^{D \times D}$ are the normalization matrices for the forward and reverse diffusion processes, since traffic flow is affected by both upstream and downstream detectors.
These diffusion processes are represented as random walks on $G$ with a restart probability $\alpha \in [0, 1]$. Then the stationary distribution $\mathcal{P}$ of the forward diffusion process is
\begin{equation}
\mathcal{P} = \sum_{k=0}^\infty \alpha(1 - \alpha)^k(\bm{D_O}^{-1}\bm{W})^k
\end{equation}
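Because this is a geometric (Neumann) series and $\bm{D_O}^{-1}\bm{W}$ is row-stochastic by construction (assuming every detector has at least one outgoing edge), the series converges for $\alpha > 0$ and can be evaluated by truncation. A minimal NumPy check (our own sketch, with hypothetical names) is:
\begin{verbatim}
import numpy as np

def stationary_distribution(W, alpha, K=200):
    """Truncated evaluation of the forward diffusion series."""
    P_fwd = W / W.sum(axis=1, keepdims=True)  # D_O^{-1} W
    P = np.zeros_like(W)
    term = alpha * np.eye(W.shape[0])         # k = 0 term
    for _ in range(K):
        P += term
        term = (1 - alpha) * term @ P_fwd
    return P

# the same series has the closed form alpha (I - (1-alpha) P_fwd)^{-1}
\end{verbatim}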
The DCRNN model uses a truncated $K$-step diffusion process with learned weights for each step. The diffusion process, which we denote by $\bm{\mathfrak{F}}_\theta$, is parameterized by $\theta \in \mathbb{R}^{K \times 2}$ and acts on an input $X \in \mathbb{R}^{D \times F}$ to produce an output $Y \in \mathbb{R}^{D}$.
\begin{equation}
\begin{split}
\scalebox{0.95}{%
$\bm{\mathfrak{F}}_\theta(X; G, f) = \sum\limits_{k=0}^{K-1} \bigg(\theta_{k, 0}(\bm{D_O}^{-1}\bm{W})^k + \theta_{k, 1}(\bm{D_I}^{-1}\bm{W^\top})^k\bigg)X_{:, f}$
} \\
\text{for } f \in \{1, \hdots, F\}
\end{split}
\end{equation}
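A minimal NumPy sketch of this truncated bidirectional diffusion convolution (our own illustration; it applies the same filter to every feature column and assumes every node has nonzero in- and out-degree) is:
\begin{verbatim}
import numpy as np

def diffusion_conv(X, W, theta):
    """Truncated K-step diffusion convolution.

    X     : (D, F) signal on the detector graph
    W     : (D, D) weighted adjacency (transition) matrix
    theta : (K, 2) filter parameters, one pair per step
    """
    D = W.shape[0]
    P_fwd = np.diag(1.0 / W.sum(axis=1)) @ W    # D_O^{-1} W
    P_bwd = np.diag(1.0 / W.sum(axis=0)) @ W.T  # D_I^{-1} W^T
    out = np.zeros_like(X)
    T_f, T_b = np.eye(D), np.eye(D)             # k = 0 powers
    for k in range(theta.shape[0]):
        out += theta[k, 0] * (T_f @ X) + theta[k, 1] * (T_b @ X)
        T_f, T_b = P_fwd @ T_f, P_bwd @ T_b
    return out
\end{verbatim}
In practice the recursion would be applied to the signal itself ($\mathbf{x}_{k+1} = P\mathbf{x}_k$) rather than to dense matrix powers, which keeps the cost linear in the number of edges for sparse graphs; the dense form above is only for clarity.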
To incorporate diffusion convolutions into a model of the network, we use a Gated Recurrent Unit (GRU) \cite{GRU}, but with matrix multiplications replaced by the diffusion convolution. This constitutes the \textit{Diffusion Convolutional Gated Recurrent Unit} (DCGRU). Multiple DCGRUs are then stacked together in a seq2seq architecture, which finalizes the structure of DCRNN (Fig. \ref{fig:dcrnn}). In our paper, we use two cells in the encoder and two cells in the decoder. We feed in a sequence of $S$ inputs $X \in \mathbb{R}^{D \times F}$, and the next $H$ outputs (with earlier outputs recursively fed into the DCRNN to generate later outputs) are the predictions. The network is trained with backpropagation from loss incurred by our labeled data points. The authors also use scheduled sampling during training to switch between using ground truth labeled outputs and predictions from the DCRNN to generate later predictions.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{dcrnn.png}
\caption{The seq2seq architecture of DCRNN. The encoder and decoder are both composed of multiple DCGRU cells.}
\label{fig:dcrnn}
\end{figure*}
Through the diffusion convolution and transition matrix, spatial information is baked into the architecture and not learned, allowing the model to learn parameters only for the relationships between the spatial and temporal information. Because of this embedded architecture, we need to train the model with a new transition matrix if detectors are added to or removed from the network. In order to adapt DCRNN for use with arteries, we modify the transition matrix.
\subsection{Transition Matrix}
The weights for the transition matrix in \cite{DCRNN} are derived from the road network distances between sensors. These distances are appropriate parameterizations for freeway traffic, as longer distances are correlated with slower diffusion rates. However, for an arterial intersection, road distance is inappropriate. Intersection roads are closely clustered, rendering any variation in distance insignificant.
Instead, we use \textit{phase split fraction}, defined as the fraction of a cycle during which cars are allowed to travel from the inbound sensor to the outbound sensor. The phase split fraction is calculated for each unique combination of intersection and plan. We use the fraction of time and not the actual number of seconds in order to normalize between busy intersections with longer cycles and smaller intersections with shorter cycles. In addition, because DCRNN assumes a static transition matrix, we use the planned phase split for the signal phase timing plan and not the actual phase split.
Let $P$ represent the set of phases active at a set of intersections of interest. For a phase $p \in P$, we let $p_{in}$ denote the inbound direction and $p_{out}$ denote the set of outbound directions of the phase. Let $d^{(i)}$ denote the $i$th detector in our dataset of $D$ detectors, $d_{dir}^{(i)}$ denote the direction of detector $i$, $I_{d^{(i)}}$ denote the intersection of detector $i$, and $\text{adj}(d^{(i)}, d^{(j)})$ be a boolean denoting whether detector $j$ is directly downstream from detector $i$, i.e. there is a direct path from detector $i$ to detector $j$. Let $L(I, p)$ denote the phase split of phase $p$ of intersection $I$. We compute the weights of the transition matrix as follows:
\begin{equation}
\bm{W}_{i, j} = \begin{cases}
\hfil 1 & \hspace*{-1.5cm} \text{if } I_{d^{(i)}} = I_{d^{(j)}}\\[0.2cm]
\dfrac{\sum_{p \in P} \mathbbm{1}_{\text{adj}(d^{(i)}, d^{(j)})}\,\mathbbm{1}_{d_{dir}^{(i)} = p_{in}}\,\mathbbm{1}_{d_{dir}^{(j)} \in p_{out}}\,L(I_{d^{(i)}}, p)}{\frac{1}{2}\sum_{p \in P} L(I_{d^{(i)}}, p)} & \text{o.w.}
\end{cases}
\end{equation}
Because we use phase split fraction instead of road distances, we do not transform the weights with the Gaussian kernel as in \cite{DCRNN}. Instead, we leave the probabilities as the weights for the graph. As in \cite{DCRNN}, we zero out values in the matrix less than the threshold of $\varepsilon = 0.1$. In our transition matrices, we incorporate signal phase timing information for Through, Left Turn, and Right Turn directions. However, for both the upstream and downstream directions, we do not include U-Turns in our model. Overall, U-Turns contribute little flow to the data, especially during congested peak hours. In order to avoid this noise and not have to incorporate additional sensors in the opposite direction in our network, we ignore U-Turns.
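To make the construction concrete, a schematic NumPy version (our own sketch; the detector records, phase records, and split lookup below are hypothetical stand-ins for the quantities defined above) is:
\begin{verbatim}
import numpy as np

def build_transition_matrix(detectors, phases, split, eps=0.1):
    """Phase-split-fraction transition matrix (schematic).

    detectors : list of dicts with keys 'dir', 'intersection', 'down'
                ('down' = set of directly downstream detector indices)
    phases    : list of dicts with keys 'in' (direction), 'out' (set)
    split     : dict mapping (intersection, phase index) -> L(I, p)
    """
    D = len(detectors)
    W = np.zeros((D, D))
    for i, di in enumerate(detectors):
        I_i = di['intersection']
        denom = 0.5 * sum(split[(I_i, p)] for p in range(len(phases)))
        for j, dj in enumerate(detectors):
            if I_i == dj['intersection']:
                W[i, j] = 1.0
            else:
                W[i, j] = sum(split[(I_i, p)]
                              for p, ph in enumerate(phases)
                              if j in di['down']
                              and di['dir'] == ph['in']
                              and dj['dir'] in ph['out']) / denom
    W[W < eps] = 0.0  # threshold small entries, as described above
    return W
\end{verbatim}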
\subsection{Flow Prediction}
In our study, we use two types of detectors: \textit{advance detectors}, placed in lanes about 100-200 feet before the intersection, and \textit{stopbar detectors}, located just before the intersections. Both types of detectors measure flow, occupancy, and speed. We use flow data in our model because most of the detectors in our network are advance detectors, for which flow measurements are the most reliable. In some cases, we include occupancy measurements during training in order to determine whether occupancy provides any benefit for flow prediction, but we disregard occupancy predictions, as the results are not as accurate as those of flow. Predicting flow instead of speed does not introduce any major changes to the methodology.
\section{Study Site and Dataset}
\label{studysite_dataset}
\subsection{Study Site}
The data used in this report is part of a larger dataset collected for the I-210 Connected Corridors Project.\footnote{\url{https://connected-corridors.berkeley.edu/i-210-pilot-landing-page}} The project dataset includes traffic flow data from stopbar and advance detectors, maps of the cities and sensor layouts, and the corresponding signal timing sheets. We surveyed detectors along Huntington Dr. between Santa Clara St. and Second Ave. in the city of Arcadia (Fig. \ref{fig:studysite}).
In particular, we focus on detectors 508302 and 508306. These detectors were selected because they are heavily covered by both advance and stopbar detectors in both the upstream and downstream directions and for Through, Right Turn, and Left Turn movements. The advance detectors for the downstream turn directions are several blocks down; while there are some leakages that prevent the system from being fully closed, they are only at minor intersections with stop signs. We call this ideal situation the Full Information scenario. See Fig. \ref{fig:phases} for the signal phase cycle and Table \ref{tab:phase_plans} for the signal timing plans.
\begin{figure}
\centering
\includegraphics[clip,trim={10.2cm 7.9cm 28.9cm 4.2cm},width=\columnwidth]{detector_layout.pdf}
\caption{Detector layout at the study site in Arcadia. The detectors we examine in this study are 508302 and 508306.}
\label{fig:studysite}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.53\columnwidth}
\centering
\includegraphics[clip,trim={9.1cm 8cm 12.3cm 5cm},width=\linewidth]{images/phases_5083_condensed.png}
\end{subfigure}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[clip,trim={11.9cm 8cm 13.1cm 5cm},width=\linewidth]{images/phases_6081_condensed.png}
\end{subfigure}
\caption{The signal phases for the upstream (6081) and downstream (5083) intersections at our study site.}
\label{fig:phases}
\end{figure}
\setlength{\tabcolsep}{2pt}
\begin{table}
\footnotesize
\centering
\caption{Signal timing plans at intersection 5083 in Arcadia. ``G'', ``Y'', and ``R'' stand for ``Green Time'', ``Yellow Time'', and ``All-Red Time'', respectively. The green times provided in the table are the maximum ones from the controller settings. All values are provided in seconds.}
\label{tab:phase_plans}
\begin{tabular}[c]{|c|c|c|c|c|c|c|c|c|c|c|}
\Xhline{2pt}
\multicolumn{11}{|c|}{Huntington Dr \& Santa Anita Ave (ID: 5083)}\\
\hline
\multirow{3}{*}{\shortstack[c]{Plan\\Name}} & \multirow{3}{*}{\shortstack[c]{Activation\\Times}} & \multirow{3}{*}{\shortstack[c]{Cycle\\Length}} & \multicolumn{8}{c|}{Phases}\\
\cline{4-11}
& & &\multicolumn{2}{c|}{1 \& 5}&\multicolumn{2}{c|}{2 \& 6}&\multicolumn{2}{c|}{3 \& 7}&\multicolumn{2}{c|}{4 \& 8}\\
\cline{4-11}
& & &{G}&{Y+R}&{G}&{Y+R}&{G}&{Y+R}&{G}&{Y+R}\\
\hline
\multirow{2}{*}{$E$}&{0:00-6:00}&\multirow{2}{*}{110}&\multirow{2}{*}{20}&\multirow{2}{*}{3}&\multirow{2}{*}{27}&\multirow{2}{*}{5}&\multirow{2}{*}{20}&\multirow{2}{*}{3}&\multirow{2}{*}{27}&\multirow{2}{*}{5}\\
{}&{21:00-24:00}&{}&{}&{}&{}&{}&{}&{}&{}&{}\\
\hline
\multirow{2}{*}{$P_1$}&{9:00-15:30}&\multirow{2}{*}{120}&\multirow{2}{*}{15}&\multirow{2}{*}{3}&\multirow{2}{*}{39}&\multirow{2}{*}{5}&\multirow{2}{*}{14}&\multirow{2}{*}{3}&\multirow{2}{*}{36}&\multirow{2}{*}{5}\\
{}&{19:00-21:00}&{}&{}&{}&{}&{}&{}&{}&{}&{}\\
\hline
{$P_2$}&{6:00-9:00}&{120}&{11}&{3}&{46}&{5}&{11}&{3}&{36}&{5}\\
\hline
{$P_3$}&{15:30-19:00}&{120}&{15}&{3}&{41}&{5}&{12}&{3}&{36}&{5}\\
\Xhline{2pt}
\end{tabular}
\end{table}
\subsection{Dataset}
The dataset includes flow and occupancy measurements of advance and stopbar detectors from 1/1/2017 to 12/31/2017 aggregated into five-minute intervals. Visualizations of the data from an advance detector (Fig. \ref{fig:detector_508302_data}) confirm that the measurements are highly cyclical. Flow measurements are the cleanest, whereas occupancy measurements are slightly noisier. Furthermore, data for the morning and afternoon peaks are more consistent and have larger magnitude changes than off peak data. Stopbar detectors produce noisier flow measurements and much higher occupancy values compared to advance detectors, located further upstream from the intersection. We plot a flow-occupancy graph for the data from detector 508306 (Fig. \ref{fig:fundamental_diagrams}) and note that it exhibits the trapezoidal shape that is typical of traffic fundamental diagrams. The morning peak reaches congestion and queue spillback far more often than the other two periods. Each period has its own set of signal phase timing plans, which explains the varying parameters.
Detector health and signal phase timings are collected at a granularity of one day. The signals at these intersections use four different plans: P1, P2, P3, and E, which correspond respectively to off peak, morning peak, afternoon peak, and nighttime (Table \ref{tab:phase_plans}). As expected, data from P2 and P3 have larger magnitude than data from P1 and E and exhibit very obvious cyclical patterns. The P2 and P3 plans are only active on weekdays, so we train and predict only on weekday data for the morning and afternoon peaks.
\begin{figure}
\centering
\includegraphics[clip,trim={0cm 2.3cm 0cm 0cm},width=\columnwidth]{images/detector_508302_august_data.png}
\caption{Flow and occupancy measured in the month of August by detector 508302.}
\label{fig:detector_508302_data}
\end{figure}
\begin{figure}
\centering
\includegraphics[clip,trim={0cm 0cm 0cm 0.8cm},width=0.95\linewidth]{images/fundamental_diagram_508306.png}
\caption{Flow-occupancy diagram for detector 508306, one of our detectors of interest.}
\label{fig:fundamental_diagrams}
\end{figure}
\paragraph{Preprocessing}
We performed detector health analysis to filter out spurious data. Because the DCRNN requires data for all of the detectors at each timestamp, we kept data only from days where all 12 detectors in our system were healthy. However, one downstream through detector was faulty for the entire year; as a result, we ignored it at the expense of introducing another impurity in our closed system. Otherwise, the detectors were fairly healthy, with only a few outages throughout the year, so not much data was dropped.
|
1,477,468,750,380 | arxiv | \section{Introduction}\label{sec:intro}
Potential field surveys, gravity and magnetic, have been used for many years for a wide range of studies including oil and gas exploration, mining applications, and mapping basement topography \cite{nabighian:2005,Blakely}. The inversion of acquired data is one of the important steps in the interpretation process \cite{LiOl:98,PoZh:99,BoCh:2001,SiBa:2006,Far:2008,Liu:13,Liu:18}. The problem is ill-posed and then, generally, the solution is obtained via the minimization of a global objective function that consists of two terms, the data misfit term and the stabilizing, or regularizing, term. These two terms are balanced by a scalar regularization parameter that weights the contribution of the stabilizing term to the solution. Extensive background on the modeling and the solution of the regularized objective function is provided in the literature, e.g. \cite{LiOl:98,PoZh:99,VAR:2015}. While the concept of regularization is well-known, the numerical solution of a large-scale problem, in conjunction with effective and efficient estimation of a suitable regularization parameter, continues to be computationally challenging. For large-scale problems, the most effective and widely-used strategy is to transform the problem from the original large space to a much smaller subspace. The resulting subspace solution can then be projected back to the original full space, under reasonable assumptions that the subspace problem sufficiently captures the characteristics of the full space problem. For example the LSQR algorithm, which is based on the Golub-Kahan bidiagonalization of the model matrix, is frequently used for the inversion of geophysical data. Using the LSQR algorithm the large-scale problem is projected onto a Krylov subspace of smaller dimension and the subspace solution is then obtained relatively efficiently using a standard factorization such as the singular value decomposition (SVD), \cite{PaSa:1982a,PaSa:1982b,KiOl:2001,ChNaOl:2008,RVA:2017,VRA:2017}. Still the need to solve ever larger problems so as to provide greater resolution of the subsurface structures, while also automatically estimating a suitable regularization parameter, presents a challenge computationally. Even with the continually-increasing computational power and memory that is available, it is sometimes impossible, or computationally prohibitive to obtain effective solutions with these traditional computational algorithms.
Recently, the powerful concept of randomization has been introduced as an alternative strategy for dealing with large-scale inverse problems \cite{Halko:2011,XiZo:2013,VMN:2015,XiZo:2015,WXZ:2016,VRA:2018a,VRA:2018b}. The fundamental idea is that some amount of randomness can be employed to obtain a matrix of smaller dimension that effectively captures the essential spectral properties of the original model matrix, and thus provides an approximately optimal rank-$q$ randomized singular value decomposition (RSVD) of the original system. Effectively, the approach consists of a first, stochastic step, which finds a random but orthonormal matrix that samples the rows and columns of a given matrix, and a second, completely deterministic step, which finds the eigen-decomposition of the subsampled matrix and hence yields a rank-$q$ approximation of the original matrix.
For the $3$D inversion of gravity data, Vatankhah et al. \shortcite{VRA:2018b} developed a fast inversion methodology that combined an $L_1$-norm regularization strategy with the RSVD, for the generation of a focused image of the subsurface. For the under-determined problem in which there are $m$ data measurements to be used to find the subsurface structures on a volume with $n$ model parameters, $m \ll n$, their results indicate that acceptable results, nearly equivalent to those using the full SVD (FSVD) for the SVD calculations in the algorithm, are achievable using a target rank $q$ with {$q \gtrsim (m/6)$}.\footnote[1]{The solutions are the same when $q=m$.} Here we will denote the algorithms that use the RSVD and FSVD, respectively, for all steps in the $L_1$-norm regularization strategy the \texttt{Hybrid-RSVD} and \texttt{Hybrid-FSVD} algorithms. For the \texttt{Hybrid-RSVD} algorithm it is important to estimate an appropriate lower bound on $q$ in order that acceptable solutions are provided using an algorithm that is computationally efficient with respect to both time and memory.
The purpose of this discussion is to assess the application of the \texttt{Hybrid-RSVD} algorithm, as presented in Vatankhah et al. \shortcite{VRA:2018b}, for the inversion of magnetic potential data. Our results demonstrate that direct application of the \texttt{Hybrid-RSVD} algorithm does not lead to acceptable solutions. Rather, for a problem of equivalent size as given by the pair $(m,n)$, we find that $q$ must be larger; the lower bound $q \gtrsim m/6$ does not yield acceptable solutions. On the other hand, by introducing power iterations into the determination of the rank-$q$ approximation, improved results are achievable for smaller choices of $q$, for both magnetic and gravity inversion problems. Both cases are analyzed carefully and useful estimates for the appropriate choices of $q$ for both magnetic and gravity problems, when solved using power iterations to improve the approximate RSVD, are provided.
The remainder of the paper is organized as follows. In Section~\ref{inversionmethod} we briefly describe the standard $L_1$-norm stabilized inversion algorithm. A short explanation of the methodology used for estimating the regularization parameter is presented in Section~\ref{parameterchoice}. Then, the concept of the RSVD is reviewed in Section~\ref{RSVD}, with the extension applying power iterations presented in Section~\ref{powerRSVD}. In Section~\ref{synthetic} we show the results of using the \texttt{Hybrid-RSVD} algorithm on two different synthetic models. A small model in Section~\ref{twodikes} is used for further analysis of the algorithm and the impact of using power iterations to obtain the RSVD is also demonstrated.
To further examine the approach we consider a model with multiple bodies in Section~\ref{multiplebodies}. The inversion of magnetic data from a meta-sedimentary gneiss belt in Canada are used to illustrate the application of the presented methodology for real data in Section~\ref{real}. Conclusions and discussions on future work are given in Section~\ref{conclusion}.
\section{ Inversion methodology }\label{inversionmethod}
We discuss the general case for the linear inversion of potential field data in which the subsurface is discretized into a large number of cells of fixed size but with unknown physical properties. The unknown parameters of the cells are stacked in a vector $\mathbf{m} \in \mathcal{R}^{n}$, and measured potential field data are also stacked in a vector $\mathbf{d}_{\mathrm{obs}} \in \mathcal{R}^{m}$. The measurements are connected to the model parameters via the model matrix $G \in \mathcal{R}^{m \times n}$, $m\ll n$, which is the forward model operator which maps from model to data spaces, yielding the under-determined linear system
\begin{eqnarray}\label{d=gm}
\mathbf{d}_{\mathrm{obs}}= G\mathbf{m},
\end{eqnarray}
where element $G_{ij}$ represents the effect of a unit model parameter at cell $j$ on the datum at location $i$.
In the case of the inversion of magnetic data, vector $\mathbf{m}$ collects the values of the unknown susceptibilities of the cells and $\mathbf{d}_{\mathrm{obs}}$ collects the total magnetic field. For the gravity problem these are the cell densities and vertical component of gravity field, respectively. Model matrix $G$ depends on the problem. Rao $\&$ Babu \shortcite{RaBa:91} provided a fast approach for computing the total magnetic field anomaly of a cube that is then used here to form the elements of model matrix $G$ for the magnetic problem. For gravity inversion, the elements of $G$ are computed using the formula developed by Ha{\'a}z \shortcite{Haaz}, see for example Boulanger \& Chouteau \shortcite{BoCh:2001} for more details. The spectral properties of these two matrices impact the condition of the under-determined systems, and hence the performance of any algorithm that is used for data inversion.
Stabilization, or regularization, is required to find an acceptable solution of the ill-posed system given by \eqref{d=gm}; its solution is neither unique nor stable. Here we consider the $L_1$-norm stabilized solution of \eqref{d=gm} as presented by Vatankhah et al. \shortcite{VRA:2017}. The approach also includes weighting of the data based on the knowledge, or estimate, of the standard deviation of the independent noise in the data, weighting for the depths of the cells, and the inclusion of prior knowledge on the solution using provided information which may come from geology, logging or previous geophysical surveys, or may be taken to be the vector of $0$ values when no prior information is available. Overall, the formulation finds the minimum of the global objective function $P^{\alpha}(\mathbf{m})$ as given by
\begin{eqnarray}\label{globalfunction1}
\mathbf{m} = \argmin{\mathbf{m}} \{P^{\alpha}(\mathbf{m})\} =\argmin{\mathbf{m}}\{\| W_{\bfd}(G\mathbf{m}-\mathbf{d}_{\mathrm{obs}}) \|_2^2 + \alpha^2 \|W(\mathbf{m}-\mathbf{m}_{\mathrm{apr}}) \|_2^2\}.
\end{eqnarray}
The diagonal entries of the data weighting matrix $W_{\bfd}$ are estimates for the inverse of the standard deviations of the independent noise in the data, $\mathbf{m}_{\mathrm{apr}}$ encodes the prior knowledge of the solution, and stabilizing matrix $W$ is the product of three diagonal matrices, $W_{\mathrm{depth}}$ which is a depth-weighting matrix with diagonal entries $z^{-\beta}$ at depth $z$, $W_{\mathrm{h}}$ which is a hard constraint matrix, and $W_{{{L}}_1}$ which is a matrix that arises from the approximation of the $L_1$-norm stabilizer via an $L_2$-norm term. Details of all aforementioned matrices are provided in Vatankhah et al. \shortcite{VRA:2017,VRA:2018b}, but we note that it is $W_{{{L}}_1}$ which enables the algorithm to produce non-smooth and focused images of the subsurface that are more consistent with real geological structures.
The scalar regularization parameter $\alpha$ balances the two terms in the objective function and is discussed further in Section~\ref{parameterchoice}.
Noting that the inverse of a diagonal matrix is obtained at effectively zero computational cost, the objective function \eqref{globalfunction1}
is easily transformed to the standard Tikhonov form
\begin{eqnarray}\label{globalfunction2}
P^{\alpha}(\mathbf{h})=\| \tilde{\tilde{G}}\mathbf{h} - \tilde{\mathbf{r}} \|_2^2 + \alpha^2 \| \mathbf{h} \|_2^2,
\end{eqnarray}
see e.g. Vatankhah et al. \shortcite{VAR:2015}, where $\tilde{\tilde{G}}={W_{\bfd}}GW^{-1}$, $\tilde{\mathbf{r}}={W_{\bfd}}(\mathbf{d}_{\mathrm{obs}}-G\mathbf{m}_{\mathrm{apr}})$, and $\mathbf{h}=W(\mathbf{m}-\mathbf{m}_{\mathrm{apr}})$. The solution of \eqref{globalfunction2}, dependent on the choice of $\alpha$, is then
\begin{eqnarray}\label{hsolution}
\mathbf{h}({\alpha})=(\tilde{\tilde{G}}^T\tilde{\tilde{G}}+\alpha^2 I_n)^{-1}\tilde{\tilde{G}}^T\tilde{\mathbf{r}},
\end{eqnarray}
and the model update is given by
\begin{eqnarray}\label{modelupdate}
\mathbf{m}(\alpha)=\mathbf{m}_{\mathrm{apr}}+W^{-1}\mathbf{h} (\alpha).
\end{eqnarray}
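Given any SVD of $\tilde{\tilde{G}}$, whether full, truncated, or randomized, \eqref{hsolution} and \eqref{modelupdate} reduce to a filtered back-substitution. A minimal NumPy sketch, with our own (hypothetical) variable names, is:
\begin{verbatim}
import numpy as np

def tikhonov_solution(U, s, V, r_tilde, alpha):
    """h(alpha) = sum_i s_i/(s_i^2 + alpha^2) (u_i^T r_tilde) v_i."""
    return V @ ((U.T @ r_tilde) * s / (s**2 + alpha**2))

def model_update(m_apr, W_diag, h):
    """m(alpha) = m_apr + W^{-1} h; W is diagonal, so W^{-1} is cheap."""
    return m_apr + h / W_diag
\end{verbatim}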
An iteratively reweighted approach is used to find $\mathbf{m}(\alpha)$, dependent not only on $\alpha$, changing with each iteration, but also on matrix $W$ which changes at each step through the update of matrix $W_{{{L}}_1}$. The complete approach is described in Vatankhah et al. \shortcite[Algorithm~$1$]{VRA:2017} and Vatankhah et al. \shortcite[Algorithm~$2$]{VRA:2018b}. At each stage of the algorithm upper and lower bounds on the physical parameters are imposed in order that the recovered model is reliable within known acceptable ranges, and regularization parameter $\alpha$ is adjusted automatically. The iteration is terminated when either the data predicted by the reconstructed model satisfies the observed data to within a $\chi^2$ value relating to the noise level, or a maximum number of iterations $K_{\mathrm{max}}$ is reached without the predicted data satisfying the $\chi^2$ estimate.
When both $m$ and $n$ are small, $\mathbf{h}(\alpha)$ can be found using the FSVD $\tilde{\tilde{G}} = U \Sigma V^T$, where matrices $U\in \mathcal{R}^{m \times m}$ and $V\in\mathcal{R}^{n \times n}$ are orthogonal with columns $\mathbf{u}_i$ and $\mathbf{v}_i$, and $\Sigma \in \mathcal{R}^{m \times n}$ is the matrix of singular values $\sigma_i$, ordered from large to small. The solution obtained using the FSVD is described in Vatankhah et al \shortcite[Algorithm~$1$]{VRA:2017}, and see also Paoletti et al. \shortcite{PHHF:2014} and Vatankhah et al. \shortcite{VAR:2015}. The availability of the FSVD makes it possible to estimate regularization parameter $\alpha$ cheaply using standard parameter-choice techniques \cite{XiZo:2013,ChPa:2015}. Unfortunately the calculation of the SVD for large, or even moderate, under-determined systems is not practical; the cost is approximately $6nm^{2}+20 m^{3}$, Golub $\&$ Van Loan \shortcite{GoLo:2013}. The traditional alternative is the use of a hybrid method such as the iterative LSQR algorithm that can be used to project \eqref{globalfunction2} onto a Krylov subspace of smaller dimension and for which an SVD of the projected problem is then efficiently used to yield the subspace solution. This solution is projected back to the original full space at minimal additional computational cost. The SVD for the projected problem also facilitates the use of parameter-choice algorithms to find an optimal $\alpha$, denoted by $\alpha_{\mathrm{opt}}$, also at minimal additional computational cost.
\subsection{Regularization parameter-choice method}\label{parameterchoice}
Here we use the method of unbiased predictive risk estimation (UPRE) to find $\alpha_{\mathrm{opt}}$. This a-posteriori rule for choosing the Tikhonov regularization parameter is well described in Vogel \shortcite{Vogel:2002}, and has been extensively applied for the inversion of data when an estimate of the noise covariance is available, including for the inversion of geophysical data, \cite{VAR:2015,VRA:2017} and for more general inversion problems \cite{RVA:2017,ChPa:2015}. Using the SVD of matrix $\tilde{\tilde{G}}$, the UPRE function to be minimized is given by
\begin{eqnarray}\label{upresvd}
U(\alpha)=\sum_{i=1}^{m^*} \left( \frac{1}{\sigma_i^2 \alpha^{-2} + 1} \right)^2 \left(\mathbf{u}_i^T\tilde{\mathbf{r}} \right)^2 + 2 \left( \sum_{i=1}^{m^*} \frac{\sigma_i^2}{\sigma_i^2+\alpha^2}\right) - {m^*}.
\end{eqnarray}
Here, ${m^*}$ indicates the number of non-zero singular values. This is the numerical rank of the matrix, namely $m^*=m$ when UPRE is applied using the FSVD for the underdetermined problem. The optimum regularization parameter, $\alpha_{\mathrm{opt}}$, is found by evaluating \eqref{upresvd} on a range of $\alpha$, between minimum and maximum $\sigma_i$; equivalently $\alpha_{\mathrm{opt}}=\argmin{\sigma_{\mathrm{min}}\le \alpha \le\sigma_{\mathrm{max}}}\{U(\alpha)\}$.
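A direct NumPy transcription of this search (our own sketch; it assumes the singular values are supplied in decreasing order and are nonzero) is:
\begin{verbatim}
import numpy as np

def upre(alpha, s, UTr, m_star):
    """Evaluate U(alpha) for singular values s and projections
    UTr[i] = u_i^T r_tilde, i = 1..m*."""
    f = 1.0 / (s**2 / alpha**2 + 1.0)
    trace = np.sum(s**2 / (s**2 + alpha**2))
    return np.sum((f * UTr)**2) + 2.0 * trace - m_star

def alpha_opt(s, UTr, num=200):
    """Minimize U(alpha) on a log grid between sigma_min and
    sigma_max."""
    grid = np.logspace(np.log10(s[-1]), np.log10(s[0]), num)
    vals = [upre(a, s, UTr, len(s)) for a in grid]
    return grid[int(np.argmin(vals))]
\end{verbatim}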
\subsection{Randomized Singular Value Decomposition}\label{RSVD}
While the hybrid-LSQR methodology is effective and practical, with acceptable results obtained as compared with the \texttt{Hybrid-FSVD} solution, see for example Renaut et al. \shortcite{RVA:2017} and Vatankhah et al. \shortcite{VRA:2017}, the approach is still cost-limited due to the need to build a relatively large Krylov space for the solution when both $m$ and $n$ are large. Recent approaches based on randomization have presented an interesting alternative for tackling problems requiring high resolution of the subsurface, \cite{XiZo:2013,VMN:2015,VRA:2018a,VRA:2018b}. In this case, random sampling is used to construct a low-dimensional subspace that approximates the column space of the model matrix and maintains the most dominant spectrum of the original matrix, \cite{Halko:2011}. Standard deterministic matrix decomposition methods such as the SVD, or eigen-decomposition, can then be used to compute a low-rank approximation of the original matrix. Specifically, in the context of potential field inversion, it is desirable to find a $q$-rank matrix $\tilde{\tilde{G}}_q$, which is as close as possible to $\tilde{\tilde{G}}$ in the least-squares sense, while at the same time the target rank $q$ is as small as possible in order that the inversion process is fast. We note that of course the best rank $q$ approximation in the least squares sense is given by the exact truncated SVD of $\tilde{\tilde{G}}$ with $q$ terms \cite{GoLo:2013}. It is generally, however, not practical to calculate the truncated SVD for large scale problems. Thus here we focus on the RSVD and carefully describe the approach that was presented in Vatankhah et al \shortcite{VRA:2018b} for the solution of the gravity inversion problem.
The fundamental aspects of an RSVD algorithm consist of two stages. Here we present this for the under-determined matrix $\tilde{\tilde{G}}$. (i) A low-dimensional subspace is constructed that approximates the column space of $\tilde{\tilde{G}}^T$. The aim is to find a matrix $Q \in \mathcal{R}^{n \times q}$ with orthonormal columns such that $\tilde{\tilde{G}} \approx \tilde{\tilde{G}} Q Q^T$; (ii) Given the near-optimal basis spanned by the columns of $Q$, a smaller matrix $B=\tilde{\tilde{G}} Q \in \mathcal{R}^{m \times q}$ is formed. This means that $\tilde{\tilde{G}}$ is restricted to the smaller subspace spanned by the basis from the columns of $Q$. Moreover, $B$ is a projection from high-dimensional space into a low-dimensional space which preserves the geometric structure of the matrix in a Euclidean sense \cite{Eri:2016}. Subsequently, $B$ can then be used to compute an approximate matrix decomposition for $\tilde{\tilde{G}}$ using a traditional algorithm. Step (i) is completely random and depends on the selection of a specific approach to find $Q$, while (ii) is deterministic. The fundamental approach, as presented for the gravity problem by Vatankhah et al. \shortcite[Algorithm~$1$]{VRA:2018b}, as summarized in Algorithm~\ref{RSVDAlgorithm}, but with the inclusion of power iterations, is now discussed.
In step~\ref{omega} a random test matrix $\Omega \in \mathcal{R}^{\ell \times m} $ is generated from a standard normal distribution. Probability theory guarantees that the rows of $\Omega$ are, with probability one, linearly independent. Here, $q+p=\ell \ll m$ and $p$ is a small oversampling parameter that provides the flexibility and effectiveness of the algorithm \cite{Halko:2011,Eri:2016}. At step~\ref{Y} a set of independent randomly-weighted linear combinations of the rows of $\tilde{\tilde{G}}$, or equivalently columns of $\tilde{\tilde{G}}^T$, are formed. The sketch matrix $Y$ has a much smaller number of rows than $\tilde{\tilde{G}}$. Ignoring for the moment power iterations at step~\ref{powerstep}, step~\ref{Q} constructs $Q \in \mathcal{R}^{n \times \ell}$. The columns of $Q$ form an orthonormal basis for the range of $Y^T$; equivalently for the range of $\tilde{\tilde{G}}^T$. Given near-optimal $Q$, a smaller matrix $B$ is reconstructed in step~\ref{B}. Therefore, large matrix $\tilde{\tilde{G}}$ is projected onto a low-dimensional space that captures most of the action of $\tilde{\tilde{G}}$. Choosing an optimal $q$ is highly dependent on the task. Generally, $q$ should be as small as possible so that the algorithm is fast and efficient, but simultaneously $q$ should be large enough that the dominant spectral properties of $\tilde{\tilde{G}}$ are accurately captured. We discuss the effect of the choice of $q$ for the inversion of gravity and magnetic data in Section~\ref{synthetic}.
Having obtained matrix $B$ at step~\ref{B}, traditional SVD algorithms could then be used to compute the approximations to the first $q$ left singular vectors as well as the corresponding singular values for matrix $\tilde{\tilde{G}}$. The approximate right singular vectors could also be recovered, see e.g. Xiang and Zou \shortcite{XiZo:2013}. Alternatively, as discussed, with proof, in Vatankhah et al. \shortcite{VRA:2018b}, the much smaller matrix $B^TB\in \mathcal{R}^{\ell \times \ell}$ can be used to find the required SVD components for $B$ using the eigen-decomposition of $ B^TB$. It was also demonstrated that the computational cost of this algorithm, excluding power iterations, is $O(\ell mn)$. Thus, it is feasible to compute the large singular values of a given matrix efficiently.
\begin{algorithm}
\caption{RSVD algorithm with power iterations. Given matrix $\tilde{\tilde{G}} \in \mathcal{R}^{m \times n} (m < n)$, a target matrix rank $q $ and a small constant oversampling parameter $p$ satisfying $q+p=\ell \ll m$, compute an approximate SVD of $\tilde{\tilde{G}}$: $\tilde{\tilde{G}}$ $\approx$ $ U_q \Sigma_q V_q^T$ with $U_q \in \mathcal{R}^{m \times q}$, $\Sigma_q \in \mathcal{R}^{q \times q}$, $V_q \in \mathcal{R}^{n \times q}$.}\label{RSVDAlgorithm}
\begin{algorithmic}[1]
\STATE Generate a Gaussian random matrix $\Omega \in \mathcal{R}^{\ell \times m} $. \label{omega}
\STATE Form the matrix $Y=\Omega \tilde{\tilde{G}} \in \mathcal{R}^{\ell \times n}$. \label{Y}
\STATE Compute power scheme, see Algorithm~\ref{SubPowerAlgorithm}.\label{powerstep}
\STATE Compute orthonormal matrix $Q \in \mathcal{R}^{n \times \ell}$ via QR factorization $Y^T=QR$.\label{Q}
\STATE Form the matrix $ B=\tilde{\tilde{G}} Q \in \mathcal{R}^{m \times \ell}$.\label{B}
\STATE Compute the matrix $B^TB \in \mathcal{R}^{\ell \times \ell}$.\label{BTB}
\STATE Compute the eigendecomposition of $B^TB$; $[\tilde{V}_\ell, D_\ell]=\mathrm{eig}(B^TB)$. \label{eigen}
\STATE Compute $V_q=Q \tilde{V}_\ell(:,1:q)$; $\Sigma_q= \sqrt{D_\ell}(1:q,1:q)$; $\mathrm{and}$ $ U_q=B \tilde{V}_\ell(:,1:q)$ $ \Sigma_{q}^{-1}. $\label{SVDcomponent}
\STATE Note $\tilde{\tilde{G}}_q = U_q \Sigma_q V_q^T$ is a q-rank approximation of matrix $\tilde{\tilde{G}}$.
\end{algorithmic}
\end{algorithm}
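A direct NumPy transcription of Algorithm~\ref{RSVDAlgorithm} for the under-determined case $m<n$ (our own sketch; the power step~\ref{powerstep} is deferred to the listing that follows Algorithm~\ref{SubPowerAlgorithm} below) is:
\begin{verbatim}
import numpy as np

def rsvd_basic(G, q, p=10, rng=None):
    """Algorithm 1 without the power step; assumes m < n."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = G.shape
    ell = q + p
    Omega = rng.standard_normal((ell, m))  # step 1: Gaussian test matrix
    Y = Omega @ G                          # step 2: (ell, n) sketch
    Q, _ = np.linalg.qr(Y.T)               # step 4: orthonormal basis
    B = G @ Q                              # step 5: (m, ell) projection
    d, Vt = np.linalg.eigh(B.T @ B)        # steps 6-7 (ascending order)
    idx = np.argsort(d)[::-1][:q]          # keep the q largest
    sig = np.sqrt(np.maximum(d[idx], 0.0))
    V_q = Q @ Vt[:, idx]                   # step 8
    U_q = (B @ Vt[:, idx]) / sig
    return U_q, sig, V_q
\end{verbatim}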
\subsubsection{Randomized Singular Value Decomposition with Power Iterations}\label{powerRSVD}
The quality of the RSVD which is obtained using Algorithm~\ref{RSVDAlgorithm} without step~\ref{powerstep} depends on the quality of the basis matrix $Q$ as providing a basis for the column space of $\tilde{\tilde{G}}^{T}$. Halko et al \shortcite[p224]{Halko:2011} suggested an improvement of their \texttt{Proto} Algorithm for forming the basis matrix $Q$ and the associated RSVD that should lead to an improved approximation of the dominant spectrum. In the \texttt{Prototype} Algorithm for power iterations \cite[p227]{Halko:2011}, the sketch matrix $Y$ is obtained after first preprocessing matrix $\tilde{\tilde{G}}$ to give
\begin{eqnarray}\label{power}
\tilde{\tilde{G}}^{(s)}=\tilde{\tilde{G}}(\tilde{\tilde{G}}^T\tilde{\tilde{G}})^{s},
\end{eqnarray}
where integer $s$ specifies the number of power iterations. While this is specifically implemented as \cite[Algorithm 4.3]{Halko:2011}, Halko et al \shortcite{Halko:2011} note that the approach is sensitive to floating point rounding errors, which reduces the quality of the $Q$ basis. Thus, Halko et al \shortcite{Halko:2011} suggested their \texttt{Algorithm 4.4} which orthonormalizes the columns of $Y$ between each application of $\tilde{\tilde{G}}$ and $\tilde{\tilde{G}}^T$. It is this latter approach, described in Algorithm~\ref{SubPowerAlgorithm}, that we implement for step~\ref{powerstep} within Algorithm~\ref{RSVDAlgorithm}, but here with our extension of the approach for the under-determined case.
\begin{algorithm}
\caption{Subspace iteration of power scheme. For input matrix $\tilde{\tilde{G}} \in \mathcal{R}^{m \times n} (m < n)$, the sketch $Y \in \mathcal{R}^{\ell \times n}$, and parameter $s$, return an improved sketch matrix.}\label{SubPowerAlgorithm}
\begin{algorithmic}[1]
\STATE For $j=1,\cdots, s$.
\STATE $[Q,\sim]=qr(Y^{T},0)$. (economic QR decomposition)
\STATE $[Q,\sim]=qr(\tilde{\tilde{G}} Q,0)$.
\STATE $Y^T=\tilde{\tilde{G}}^T Q$.
\STATE End
\end{algorithmic}
\end{algorithm}
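Algorithm~\ref{SubPowerAlgorithm} can be transcribed in the same spirit; inserting \texttt{Y = power\_scheme(G, Y, s)} immediately after step 2 of the previous listing yields the power-iteration variant (again our own illustration):
\begin{verbatim}
import numpy as np

def power_scheme(G, Y, s):
    """Subspace iteration with interleaved QR re-orthonormalization.

    G is (m, n) with m < n; Y is the (ell, n) sketch from step 2."""
    for _ in range(s):
        Q, _ = np.linalg.qr(Y.T)    # economic QR of Y^T, Q is (n, ell)
        Q, _ = np.linalg.qr(G @ Q)  # re-orthonormalize G Q, (m, ell)
        Y = (G.T @ Q).T             # Y^T = G^T Q
    return Y
\end{verbatim}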
To analyze the effect of power iterations on improving the accuracy of the computed matrix, a simplified upper bound on the expected error between the original and the $q$-rank computed matrices is given by
\begin{eqnarray}\label{expectederror}
E \| \tilde{\tilde{G}}- \tilde{\tilde{G}}_q \| \leq \left[1+\sqrt{\frac{q}{p-1}}+\frac{e\sqrt{q+p}}{p}\cdot\sqrt{\min(m,n)-q} \right]^{\frac{1}{2s+1}} \sigma_{q+1},
\end{eqnarray}
\cite{Martinsson:2016,Eri:2016}. Here $e$ is Euler's number, $\sigma_{q+1}$ is the $(q+1)$st largest singular value of the matrix
$\tilde{\tilde{G}}$, $E$ denotes the expectation operator, and it is assumed that $p\ge 2$. Upper bound \eqref{expectederror} indicates how parameters $p$, $q$, and $s$ can be used to control the approximation error. Note immediately that for $q=\min(m,n)$ then $\sigma_{q+1}=0$ and $E \| \tilde{\tilde{G}}- \tilde{\tilde{G}}_q \|=0$. With increasing oversampling $p$, the second and third terms in the bracket tend toward zero, which means that the bound approaches the theoretically optimum value $\sigma_{q+1}$ \cite{Eri:2016}. For larger values of the subspace iteration parameter $s$, $1/(2s+1)$ goes to zero and the error bound is reduced.
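The bound is cheap to evaluate numerically. For example, for a matrix of the size used later in Section~\ref{twodikes}, $m=900$ and $n=9000$, with $q=150$ and $p=10$, the bracketed prefactor is roughly $99$ for $s=0$ but only about $4.6$ for $s=1$; the small sketch below (our own check) reproduces these numbers:
\begin{verbatim}
import numpy as np

def error_bound(sigma_q1, q, p, s, m, n):
    """Upper bound of the expected-error estimate; requires p >= 2."""
    factor = (1.0 + np.sqrt(q / (p - 1.0))
              + np.e * np.sqrt(q + p) / p * np.sqrt(min(m, n) - q))
    return factor**(1.0 / (2 * s + 1)) * sigma_q1

print(error_bound(1.0, 150, 10, 0, 900, 9000))  # ~99.3
print(error_bound(1.0, 150, 10, 1, 900, 9000))  # ~4.63
\end{verbatim}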
In Section~\ref{synthetic} we will illustrate the application of Algorithm~\ref{RSVDAlgorithm} for the inversion of synthetic gravity and magnetic data sets without power iterations at step~\ref{powerstep}. These results, especially for the magnetic data, suggest that power iterations as indicated in Algorithm~\ref{SubPowerAlgorithm} are needed to improve the approximation of the dominant spectral space. Our tests have shown that setting $s=1$ yields a good balance between accuracy and computational time. We will see that the singular values of the magnetic kernel are larger in magnitude for a given $q$ than their counterparts for the gravity kernel, and thus the upper bound estimate in \eqref{expectederror} for the approximation error is consistently higher for the magnetic problem. Equivalently this leads to the need to implement a power iteration scheme to reduce the factor multiplying $\sigma_{q+1}$ in \eqref{expectederror}.
\section{Synthetic examples}\label{synthetic}
In Section~\ref{twodikes} we first evaluate the performance of the RSVD algorithm without power iterations, comparing its performance for the solution of relatively small-scale gravity and magnetic problems under the same configuration, but the appropriate choice of model matrix $\tilde{\tilde{G}}$. The results are compared to those obtained using the \texttt{Hybrid-FSVD} in each case. To understand the performance of the algorithm we examine the spectrum of the approximate operator, as compared to that of $\tilde{\tilde{G}}$, for each problem in Section~\ref{sec:spectrum}, and then examine the improvement obtained using the power iterations for $s=1$ in Section~\ref{poweriteration}. A more complicated configuration for a structure with multiple bodies is then examined in Section~\ref{multiplebodies} and confirms the conclusions obtained for the dipping dike example in Section~\ref{twodikes}.
In the simulations for the generation of the total field anomaly, the intensity of the geomagnetic field, the inclination, and the declination are selected as $47000$ nT, $50^{\circ}$ and $2^{\circ}$, respectively. The density contrast and the susceptibilities of the model structures embedded in a homogeneous non-susceptible background are $\rho=1$~g~cm$^{-3}$ and $\kappa=0.1$ (SI unit), respectively. The bound constraints $0=\rho_{\mathrm{min}}\le \rho \le \rho_{\mathrm{max}}=1$ in units ~g~cm$^{-3}$ and $0=\kappa_{\mathrm{min}}\le \kappa \le \kappa_{\mathrm{max}}=0.1$ in SI units are imposed at each iteration of the gravity and magnetic inversions, respectively. Further, for all simulations we add Gaussian noise with zero mean and standard deviation $ (\tau_1~|\mathbf{d}_{\mathrm{exact}}|_i + \tau_2~ \mathrm{max}| \mathbf{d}_{\mathrm{exact}} |)$ to datum $i$, for chosen pairs $(\tau_1, \tau_2)$, where $\mathbf{d}_{\mathrm{exact}}$ is the exact data set, yielding $\mathbf{d}_{\mathrm{obs}}$ with a known distribution of noise for the error. This standard deviation is used to generate the matrix $W_{\bfd}$ in the data fit term. The values of
$(\tau_1,\tau_2)$ are selected such that the signal to noise ratios, as given by
\begin{align}\label{snr}
SNR=20\, \mathrm{log}_{10} \frac{\| \mathbf{d}_{\mathrm{exact}} \|_2}{\| \mathbf{d}_{\mathrm{obs}}-\mathbf{d}_{\mathrm{exact}} \|_2},
\end{align}
are close for both gravity and magnetic noise-contaminated data. The values for $(\tau_1, \tau_2)$ and the resulting SNRs are specified in the captions of figures associated with the results for each data set. Then to test convergence of the update $\mathbf{m}^{(k)}$ at iteration $k$ we calculate the $\chi^2$ estimate,
\begin{align}\label{chi2}
(\chi^2)^{(k)}=\| W_{\bfd}(\mathbf{d}_{\mathrm{obs}}-\mathbf{d}_{\mathrm{pre}}^{(k)})\|_2^2,
\end{align}
where $\mathbf{d}_{\mathrm{pre}}^{(k)} = G\mathbf{m}^{(k)}$, and which assesses the predictive capability of the current solution. When $(\chi^2)^{(k)} \leq m+\sqrt{2m}$ the iteration terminates. Otherwise, the iteration is allowed to proceed in all cases to a maximum number of iterations $K_{\mathrm{max}}=50$. The regularization parameter $\alpha$ is adjusted with iteration $k$ and for $k>1$ is obtained in all cases using the UPRE function \eqref{upresvd} with the appropriate approximate SVD terms. Based on the suggestion of Farquharson \& Oldenburg \shortcite{Far:2004}, a large $\alpha$ is always used at the first iteration,
\begin{align*}
\alpha^{(1)} = \left(\frac{n}{m}\right)^{3.5} \frac{\sigma_1}{\mathrm{mean}(\sigma_i)},
\end{align*}
as used in Vatankhah et al. \shortcite[eq.19]{VRA:2017}. In all simulations we use a fixed oversampling parameter, $p=10$, assume $\mathbf{m}_{\mathrm{apr}}=\mathbf{0}$, impose physically reasonable constraints on the model parameters, and for $W_{\mathrm{depth}}$ take $\beta=0.8$ and $1.4$, for gravity and magnetic inversions, respectively, with values close to those suggested by Li $\&$ Oldenburg \shortcite{LiOl:98}. In the results we examine the dependence of the solution on the choice of $q$ for the rank $q$ approximation and record (i) the number of iterations $K$ required, (ii) the relative error progression with increasing $k$ as given by
\begin{align}\label{RE}
RE^{(k)}=\frac{\|\mathbf{m}_{\mathrm{exact}}-\mathbf{m} ^{(k)} \|_2}{\|\mathbf{m}_{\mathrm{exact}} \|_2}
\end{align}
(iii) the relative error in the rank $q$ approximation to $\tilde{\tilde{G}}$,
\begin{align}\label{RG}
RG^{(k)}=\frac{\|\tilde{\tilde{G}}-\tilde{\tilde{G}}_q^{(k)} \|_2}{\|\tilde{\tilde{G}} \|_2}
\end{align}
(iv) $\alpha^{(k)}$ with increasing $k$, and (v) all values at the final iteration $K$ as well as the time for the iterations.
Moreover, the computational times recorded to convergence never include the calculation of the original model matrix $G$.
Computations are performed on a desktop computer with Intel(R) Xeon(R) W-2133 CPU 3.6~GHz processor and 32 GB RAM.
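To make the noise model, the signal to noise ratio \eqref{snr}, and the $\chi^2$ stopping test \eqref{chi2} concrete, the following minimal NumPy sketch may be useful; the variable names, the random stand-in for the exact data, and the pair $(\tau_1,\tau_2)=(0.02,0.02)$ are illustrative assumptions, and the snippet is not the implementation used to produce the reported results.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def add_noise(d_exact, tau1, tau2, rng):
    # Gaussian noise with datum-dependent standard deviation
    sigma = tau1 * np.abs(d_exact) + tau2 * np.max(np.abs(d_exact))
    return d_exact + sigma * rng.standard_normal(d_exact.shape), sigma

def snr(d_exact, d_obs):
    return 20.0 * np.log10(np.linalg.norm(d_exact)
                           / np.linalg.norm(d_obs - d_exact))

def chi2_test(Wd, d_obs, d_pre, m):
    # terminate when chi^2 <= m + sqrt(2m), i.e. the noise level is reached
    chi2 = np.linalg.norm(Wd @ (d_obs - d_pre))**2
    return chi2, chi2 <= m + np.sqrt(2.0 * m)

m = 900                                # number of data on the 30 x 30 grid
d_exact = rng.standard_normal(m)       # stand-in for the exact anomaly
d_obs, sigma = add_noise(d_exact, 0.02, 0.02, rng)
Wd = np.diag(1.0 / sigma)              # data weighting matrix
print(snr(d_exact, d_obs), chi2_test(Wd, d_obs, d_exact, m))
\end{verbatim}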
\subsection{Small-scale model consisting of two dipping dikes}\label{twodikes}
The small-scale but complicated structure of two dipping dikes, illustrated in Fig.~\ref{fig1}, makes it computationally feasible to compare the solutions for the gravity and magnetic inverse problems using Algorithm~\ref{RSVDAlgorithm} without power iteration, with the solutions obtained using the \texttt{Hybrid-FSVD}.
The data for the problem, the vertical component of the gravity field and the total magnetic field, are generated on the surface on a $30 \times 30 = 900$ point grid with a grid spacing of $50$~m. The noisy gravity and magnetic data are illustrated in Figs.~\ref{fig2a} and \ref{fig2b}, respectively.
\begin{figure*}
\subfigure{\label{fig1a}\includegraphics[width=.45\textwidth]{figure1a}}
\subfigure{\label{fig1b}\includegraphics[width=.45\textwidth]{figure1b}}
\caption {Cross-section of the synthetic model consisting of two dipping dikes. (a) Density distribution; (b) Susceptibility distribution.} \label{fig1}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig2a}\includegraphics[width=.40\textwidth]{figure2a}}
\subfigure{\label{fig2b}\includegraphics[width=.43\textwidth]{figure2b}}
\caption{Anomaly produced by the model shown in Fig.~\ref{fig1} and contaminated by Gaussian noise. (a) Vertical component of the gravity field; the noise is generated using the parameter pair $(\tau_1=0.02, \tau_2=0.02)$. (b) Total magnetic field; here, $(\tau_1=0.02, \tau_2=0.015)$. The $\mathrm{SNR}$ values for the gravity and magnetic data are $21.9188$ and $21.7765$, respectively.} \label{fig2}
\end{figure*}
The subsurface volume is discretized into $9000$ cubes of size $50$~m in each dimension, corresponding to $10$ slices in depth, each with a $30 \times 30$ cross section of cells. The resulting matrix $\tilde{\tilde{G}}$ is of size $900 \times 9000$, which makes it feasible to compute solutions using the FSVD for comparison with the RSVD in the inversion algorithm. The results obtained using the \texttt{Hybrid-FSVD} and the \texttt{Hybrid-RSVD}, with the choices $q=100$, $150$, $200$, $300$, $500$, $700$, and $900$, are detailed in Tables~\ref{gravitytab} and \ref{magnetictab} for the gravity and magnetic data, respectively. The results for the inversions using the \texttt{Hybrid-FSVD} for the gravity and magnetic data are given in Figs.~\ref{fig3} and \ref{fig5}, respectively. For comparison, the results using $q=200$ are illustrated in Figs.~\ref{fig4} and \ref{fig6} for the gravity and magnetic data inversions, respectively.
The results presented for the inversion of the gravity data using the \texttt{Hybrid-FSVD} show that the iteration terminates after just $12$ iterations, and as indicated in Fig.~\ref{fig3a} the reconstructed model is in good agreement with the original model. A sharp and focused image of the subsurface is obtained, and while the depths to the top of the structures are consistent with those of the original model, the extensions of the dikes are overestimated for the left dike and underestimated for the right dike. Figs.~\ref{fig3b} and \ref{fig3c} illustrate the progression of the relative error and regularization parameter at each iteration, respectively, and Fig.~\ref{fig3d} shows the UPRE function at the final iteration. These figures are presented for comparison with the results obtained using the \texttt{Hybrid-RSVD} algorithm.
For the same gravity problem the results using the \texttt{Hybrid-RSVD} algorithm for very small values of $q$, $q<100$, are not acceptable. With increasing $q$ the solution improves, until at $q=m$ the solution matches the \texttt{Hybrid-FSVD} solution. For the reported choices of $q$ all the solutions terminate prior to $K_{\mathrm{max}}$, with $K=12$ for $q\ge 150$, demonstrating that the $\chi^2$ estimate \eqref{chi2} is satisfied. We see that for a suitable value of $q$ the RSVD leads to a solution that is close to that achieved using the \texttt{Hybrid-FSVD}. These conclusions confirm the results in Vatankhah et al. \shortcite{VRA:2018b}: the \texttt{Hybrid-RSVD} algorithm can be used with $q \gtrsim (m/6)$ for the inversion of gravity data. We illustrate the results of the inversion using $q=200$ in Fig.~\ref{fig4}.
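For orientation, a minimal sketch of the randomized factorization at the core of Algorithm~\ref{RSVDAlgorithm} (without power iteration) is as follows; the NumPy interface and the random stand-in for the weighted system matrix are our illustrative assumptions, not the implementation used for the reported timings.
\begin{verbatim}
import numpy as np

def rsvd(A, q, p=10, seed=None):
    # rank-q SVD approximation of m x n A (m << n), no power iteration;
    # the random projection samples the column space of A^T
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A.T @ rng.standard_normal((m, q + p))    # n x (q+p) sample matrix
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis
    B = A @ Q                                    # small m x (q+p) matrix
    U, s, Wt = np.linalg.svd(B, full_matrices=False)
    return U[:, :q], s[:q], (Q @ Wt.T)[:, :q]    # A ~ U diag(s) V^T

# rank-q surrogate of a 900 x 9000 matrix (random stand-in for the kernel)
G = np.random.default_rng(1).standard_normal((900, 9000))
Uq, sq, Vq = rsvd(G, q=200, seed=2)
RG = np.linalg.norm(G - (Uq * sq) @ Vq.T, 2) / np.linalg.norm(G, 2)
print(RG)    # the relative rank-q error, cf. RG in the tables
\end{verbatim}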
\begin{table}
\caption{Results of the inversion algorithms applied to the gravity data of Fig.~\ref{fig2a}.}\label{gravitytab}
\begin{tabular}{c c c c c c c c c}
\hline
Method & $q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $RE^{(K)}$ & $RG^{(K)}$& $K$ & $\chi^2$ & Time (s) \\ \hline
\texttt{Hybrid-FSVD} & $-$ & $61712$ & $32.2$ & $0.7232$ & $-$ &$12$ & $811.8$ & $31.5$ \\ \hline
\texttt{Hybrid-RSVD} & $100$ & $24074$ & $44.5$ & $0.7550$ & $0.0521$ & $14$ & $935.8$ & $37.2$ \\
& $150$ & $29524$ & $37.4$ & $0.7441$ & $0.0497$ & $12$ & $928.7$ & $32.7$ \\
& $200$ & $33726$ & $34.5$ & $0.7221$ & $0.0448$ & $12$ & $905.6$ & $32.3$ \\
& $300$ & $40360$ & $33.4$ & $0.7153$ & $0.0257$ & $12$ & $901.4$ & $33.0$ \\
& $500$ & $49504$ & $29.5$ & $0.7077$ & $0.0143$ & $12$ & $801.9$ & $35.0$ \\
& $700$ & $56105$ & $31.1$ & $0.7061$ & $0.0082$ & $12$ & $908.1$ & $35.9$ \\
& $900$ & $61712$ & $32.2$ & $0.7232$ & $2.8510\times 10^{-14}$ & $12$ & $811.8$ & $37.9$ \\ \hline
\end{tabular}
\end{table}
\begin{figure*}
\subfigure{\label{fig3a}\includegraphics[width=.45\textwidth]{figure3a}}
\subfigure{\label{fig3b}\includegraphics[width=.45\textwidth]{figure3b}}
\subfigure{\label{fig3c}\includegraphics[width=.45\textwidth]{figure3c}}
\subfigure{\label{fig3d}\includegraphics[width=.45\textwidth]{figure3d}}
\caption {\texttt{Hybrid-FSVD} results for the inversion of gravity data given in Fig.~\ref{fig2a}. (a) Cross-section of reconstructed model; (b) The progression of relative error $RE^{(k)}$ with iteration $k$; (c) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (d) The UPRE function at the final iteration.} \label{fig3}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig4a}\includegraphics[width=.45\textwidth]{figure4a}}
\subfigure{\label{fig4b}\includegraphics[width=.45\textwidth]{figure4b}}
\subfigure{\label{fig4c}\includegraphics[width=.45\textwidth]{figure4c}}
\subfigure{\label{fig4d}\includegraphics[width=.45\textwidth]{figure4d}}
\caption {\texttt{Hybrid-RSVD} results using target rank $q=200$ for the inversion of gravity data given in Fig.~\ref{fig2a}. (a) Cross-section of reconstructed model; (b) The progression of relative error $RE^{(k)}$ with iteration $k$; (c) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (d) The UPRE function at the final iteration.} \label{fig4}
\end{figure*}
The results presented for the inversion of the magnetic data using the \texttt{Hybrid-FSVD} show that the iteration terminates after $8$ iterations, and as indicated in Fig.~\ref{fig5a} the reconstructed model is in reasonable agreement with the original model. Again a focused image of the subsurface is obtained, but the depth of the left dike is overestimated more strongly than in the gravity results of Fig.~\ref{fig3}. Correspondingly, the relative error is larger than that achieved in the inversion of the gravity data.
For the same magnetic problem, the results using the \texttt{Hybrid-RSVD} algorithm are not acceptable for reasonable choices of $q$; indeed, the algorithm does not terminate prior to $K=K_{\mathrm{max}}$ for $q<500$, and the cost therefore increases significantly. As compared to the inversion of gravity data, larger values of $q$ are required to yield acceptable results. Observe, for example, that $q=200$ is not a suitable choice because the relative error of the reconstructed model is unacceptable and the predicted data do not fit the observed data at the given noise level; \eqref{chi2} is not satisfied for $k\le 50$. Using $q=500$, the relative error of the reconstructed model is reduced and the inversion terminates at $9$ iterations with an acceptable $\chi^2$ value. We deduce that for the inversion of the magnetic data it is necessary to take $q$ larger than for the inversion of the gravity data, as is indicated by the relatively larger estimates for the rank-$q$ error, $RG^{(K)}$, in Tables~\ref{gravitytab} and \ref{magnetictab}, respectively. To demonstrate the impact of the choice of $q$ we show the results of the inversion for
$q=200$ and $q=500$, in Figs.~\ref{fig6} and \ref{fig7}, respectively.
\begin{table}
\caption{Results of the inversion algorithms applied to the magnetic data of Fig.~\ref{fig2b}.}\label{magnetictab}
\begin{tabular}{c c c c c c c c c}
\hline
Method & $q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $RE^{(K)}$ & $RG^{(K)}$& $K$ & $\chi^2$ & Time (s) \\ \hline
\texttt{Hybrid-FSVD} & $-$ & $21086$ & $4106.4$ & $0.8454$ & $-$ & $8$ & $898.6$ & $20.7$ \\ \hline
\texttt{Hybrid-RSVD} & $100$ & $9866$ & $4372.3$ & $1.0651$ & $0.3735$ & $50$ & $2880.6$ & $125.0$ \\
& $150$ & $11247$ & $3089.6$ & $0.9904$ & $0.4143$ & $50$ & $2592.5$ & $127.9$ \\
& $200$ & $12415$ & $3018.0$ & $0.9050$ & $0.3701$ & $50$ & $1673.7$ & $134.9$ \\
& $300$ & $14371$ & $3030.9$ & $0.8606$ & $0.2948$ & $50$ & $1120.5$ & $135.0$ \\
& $500$ & $17011$ & $3861.2$ & $0.8426$ & $0.0453$ & $9$ & $917.6$ & $26.9$ \\
& $700$ & $19071$ & $3854.1$ & $0.8470$ & $0.0268$ & $8$ & $918.2$ & $24.8$ \\
& $900$ & $21086$ & $4106.4$ & $0.8454$ & $2.8165\times 10^{-14}$ & $8$ & $898.6$ & $26.1$ \\ \hline
\end{tabular}
\end{table}
\begin{figure*}
\subfigure{\label{fig5a}\includegraphics[width=.45\textwidth]{figure5a}}
\subfigure{\label{fig5b}\includegraphics[width=.45\textwidth]{figure5b}}
\subfigure{\label{fig5c}\includegraphics[width=.45\textwidth]{figure5c}}
\subfigure{\label{fig5d}\includegraphics[width=.45\textwidth]{figure5d}}
\caption {\texttt{Hybrid-FSVD} results for the inversion of magnetic data given in Fig.~\ref{fig2b}. (a) Cross-section of reconstructed model; (b) The progression of relative error $RE^{(k)}$ with iteration $k$; (c) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (d) The UPRE function at the final iteration.} \label{fig5}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig6a}\includegraphics[width=.45\textwidth]{figure6a}}
\subfigure{\label{fig6b}\includegraphics[width=.45\textwidth]{figure6b}}
\subfigure{\label{fig6c}\includegraphics[width=.45\textwidth]{figure6c}}
\subfigure{\label{fig6d}\includegraphics[width=.45\textwidth]{figure6d}}
\caption {\texttt{Hybrid-RSVD} results using target rank $q=200$ for the inversion of magnetic data given in Fig.~\ref{fig2b}. (a) Cross-section of reconstructed model; (b) The progression of relative error $RE^{(k)}$ with iteration $k$; (c) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (d) The UPRE function at the final iteration.} \label{fig6}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig7a}\includegraphics[width=.45\textwidth]{figure7a}}
\subfigure{\label{fig7b}\includegraphics[width=.45\textwidth]{figure7b}}
\subfigure{\label{fig7c}\includegraphics[width=.45\textwidth]{figure7c}}
\subfigure{\label{fig7d}\includegraphics[width=.45\textwidth]{figure7d}}
\caption{\texttt{Hybrid-RSVD} results using target rank $q=500$ for the inversion of magnetic data given in Fig.~\ref{fig2b}. (a) Cross-section of reconstructed model; (b) The progression of relative error $RE^{(k)}$ with iteration $k$; (c) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (d) The UPRE function at the final iteration.} \label{fig7}
\end{figure*}
\subsubsection{The Spectral Properties}\label{sec:spectrum}
We now compare the singular values $\sigma_i^{(k)}$ and $(\sigma_i^{(k)})_q$ for the matrices $\tilde{\tilde{G}}^{(k)}$ and $\tilde{\tilde{G}}^{(k)}_q$, respectively, for the gravity and the magnetic problems in Figs.~\ref{fig8} and \ref{fig9}, respectively.
In both figures, we show the values for $q=200$ and $q=500$ at iteration $k=8$. It is immediate from these plots that the \texttt{Hybrid-RSVD} algorithm does not capture the dominant spectrum of the original matrix for the magnetic problem as closely as for the gravity problem. This explains why the magnetic inversion requires larger values of $q$ for convergence. It is also evident that the singular values of both the gravity and magnetic problems decay rather slowly after an initial fast decay. Generally, the RSVD algorithm is more efficient when applied to matrices with rapidly decaying singular values, as can be seen from the error estimate \eqref{expectederror}.
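This behaviour is easy to reproduce in a small self-contained experiment of the following kind, in which a test matrix with a prescribed, slowly decaying spectrum (our assumption, standing in for the weighted kernels) is factorized without power iteration and the randomized singular values are compared with the true ones.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
m, n, q, p = 300, 3000, 100, 10

# test matrix with slowly decaying singular values s_i = i^(-1/2)
U0, _ = np.linalg.qr(rng.standard_normal((m, m)))
V0, _ = np.linalg.qr(rng.standard_normal((n, m)))
s_true = np.arange(1, m + 1, dtype=float)**-0.5
A = (U0 * s_true) @ V0.T

# randomized rank-q singular values, no power iteration
Q, _ = np.linalg.qr(A.T @ rng.standard_normal((m, q + p)))
s_rand = np.linalg.svd(A @ Q, compute_uv=False)[:q]

# worst relative underestimate over the leading q singular values
print(np.max((s_true[:q] - s_rand) / s_true[:q]))
\end{verbatim}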
\begin{figure*}
\subfigure{\label{fig8a}\includegraphics[width=.45\textwidth]{figure8a}}
\subfigure{\label{fig8b}\includegraphics[width=.45\textwidth]{figure8b}}
\caption{The singular values $\sigma_i^{(k)}$ and $(\sigma_i^{(k)})_q$ for the matrices $\tilde{\tilde{G}}^{(k)}$ (blue circles) and $\tilde{\tilde{G}}^{(k)}_q$ (red crosses), respectively, for the gravity problem. (a) For $q=200$ at iteration $k=8$; (b) For $q=500$ at iteration $k=8$.} \label{fig8}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig9a}\includegraphics[width=.45\textwidth]{figure9a}}
\subfigure{\label{fig9b}\includegraphics[width=.45\textwidth]{figure9b}}
\caption{The singular values $\sigma_i^{(k)}$ and $(\sigma_i^{(k)})_q$ for the matrices $\tilde{\tilde{G}}^{(k)}$ (blue circles) and $\tilde{\tilde{G}}^{(k)}_q$ (red crosses), respectively, for the magnetic problem. (a) For $q=200$ at iteration $k=8$; (b) For $q=500$ at iteration $k=8$.} \label{fig9}
\end{figure*}
\subsubsection{Power Iterations}\label{poweriteration}
As discussed in Section~\ref{RSVD}, the error estimate \eqref{expectederror} decreases with increasing $s$. Having seen that the lack of power iterations leads to a lack of convergence for the inversion of the magnetic data unless $q$ is taken relatively large, $q \gtrsim m/2$, as compared to just $q\gtrsim m/6$ for the gravity problem, we investigate the power iteration step given by Algorithm~\ref{SubPowerAlgorithm} to improve the column space approximation of $\tilde{\tilde{G}}^{T}$. We therefore repeat the simulations for the data of the two-dike problem illustrated in Figs.~\ref{fig2a}-\ref{fig2b}, but employing a power iteration with $s=1$. The results are presented in Tables~\ref{gravitytabpower} and \ref{magnetictabpower} for the gravity and magnetic data, respectively, and indicate improvements as compared to the results obtained without power iterations. For both problems the number of iterations $K$ is generally reduced once $q$ is large enough, and the error is generally decreased when the same number of iterations $K$ is used with and without power iterations. Moreover, where the algorithm converged both with and without power iterations, the improved results are achieved without a large increase in computational cost. The major impact, however, is that a much smaller $q$ now yields convergence of the inversion of the magnetic problem at reasonable computational cost. For both problems it is now sufficient to take $q=200$ to obtain converged solutions for a relatively small $K$, $11$ and $12$ iterations, respectively.
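A compact sketch of this refinement, with re-orthonormalization after each application of the matrix for numerical stability, is given below; the interface and the random stand-in matrix are our assumptions, and the sketch is not Algorithm~\ref{SubPowerAlgorithm} verbatim.
\begin{verbatim}
import numpy as np

def rsvd_power(A, q, p=10, s=1, seed=None):
    # rank-q SVD of m x n A with s power iterations on the sampled basis
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q, _ = np.linalg.qr(A.T @ rng.standard_normal((m, q + p)))
    for _ in range(s):
        W, _ = np.linalg.qr(A @ Q)     # re-orthonormalize each pass
        Q, _ = np.linalg.qr(A.T @ W)   # basis reweighted by sigma_i^2
    B = A @ Q
    U, sig, Wt = np.linalg.svd(B, full_matrices=False)
    return U[:, :q], sig[:q], (Q @ Wt.T)[:, :q]

G = np.random.default_rng(4).standard_normal((900, 9000))
U, sg, V = rsvd_power(G, q=200, s=1, seed=5)
\end{verbatim}
Each power pass reweights the sampled basis by a further factor of $\sigma_i^2$, sharpening the separation between the dominant and trailing singular subspaces, which is consistent with the improved spectral estimates seen in Fig.~\ref{fig11}.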
\begin{table}
\caption{Results of the \texttt{Hybrid-RSVD} algorithm with power iterations applied to the gravity data of Fig.~\ref{fig2a}.}\label{gravitytabpower}
\begin{tabular}{c c c c c c c c}
\hline
$q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $RE^{(K)}$ & $RG^{(K)}$& $K$ & $\chi^2$ & Time (s) \\ \hline
$100$ & $20629$ & $54.3$ & $0.7018$ & $0.0177$ & $11$ & $913.4$ & $30.4$ \\
$150$ & $25862$ & $45.5$ & $0.7288$ & $0.0152$ & $11$ & $909.6$ & $29.5$ \\
$200$ & $30218$ & $40.8$ & $0.7185$ & $0.0129$ & $11$ & $904.7$ & $30.9$ \\
$300$ & $37279$ & $34.8$ & $0.7188$ & $0.0101$ & $11$ & $898.1$ & $31.8$ \\
$500$ & $47447$ & $29.6$ & $0.7099$ & $0.0063$ & $12$ & $851.0$ & $38.0$ \\
$700$ & $54892$ & $29.0$ & $0.7093$ & $0.0047$ & $12$ & $893.6$ & $39.6$ \\
$900$ & $61712$ & $32.2$ & $0.7232$ & $1.5534\times 10^{-15}$ & $12$ & $811.8$ & $41.3$ \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{Results of the \texttt{Hybrid-RSVD} algorithm with power iterations applied to the magnetic data of Fig.~\ref{fig2b}.}\label{magnetictabpower}
\begin{tabular}{c c c c c c c c}
\hline
$q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $RE^{(K)}$ & $RG^{(K)}$& $K$ & $\chi^2$ & Time (s) \\ \hline
$100$ & $8375$ & $3127.0$ & $0.9187$ & $0.2880$ & $50$ & $1310.6$ & $128.3$ \\
$150$ & $9920$ & $3206.5$ & $0.8464$ & $0.2294$ & $50$ & $955.5$ & $128.7$ \\
$200$ & $11198$ & $5098.5$ & $0.8235$ & $0.0542$ & $12$ & $915.4$ & $33.2$ \\
$300$ & $13235$ & $4447.1$ & $0.8310$ & $0.0323$ & $10$ & $848.9$ & $28.7$ \\
$500$ & $16152$ & $3719.6$ & $0.8421$ & $0.0269$ & $9$ & $814.5$ & $29.0$ \\
$700$ & $18528$ & $4036.8$ & $0.8405$ & $0.0165$ & $8$ & $887.5$ & $26.7$ \\
$900$ & $21086$ & $4106.4$ & $0.8454$ & $2.0710\times 10^{-15}$ & $8$ & $898.6$ & $28.7$ \\ \hline
\end{tabular}
\end{table}
The results for magnetic data inversion with $q=200$ and $s=1$ are illustrated in Fig.~\ref{fig10} for comparison with Fig.~\ref{fig6} obtained without the power iterations. A noticeable improvement in the reconstructed model is obtained. To further show the impact of the power iterations we also show the singular values for the power iterations with $s=1$ and $q=200$ for both gravity and magnetic problems in Figs.~\ref{fig11a}-\ref{fig11b}, comparing with Fig.~\ref{fig8a} and Fig.~\ref{fig9a}, respectively. These plots demonstrate that the power iterations have indeed improved the accuracy of the estimated singular values.
\begin{figure*}
\subfigure{\label{fig10a}\includegraphics[width=.45\textwidth]{figure10a}}
\subfigure{\label{fig10b}\includegraphics[width=.45\textwidth]{figure10b}}
\subfigure{\label{fig10c}\includegraphics[width=.45\textwidth]{figure10c}}
\subfigure{\label{fig10d}\includegraphics[width=.45\textwidth]{figure10d}}
\caption {\texttt{Hybrid-RSVD} results using power iterations with $s=1$ and target rank $q=200$ for the inversion of magnetic data given in Fig.~\ref{fig2b}. (a) Cross-section of reconstructed model; (b) The progression of relative error $RE^{(k)}$ with iteration $k$; (c) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (d) The UPRE function at the final iteration.} \label{fig10}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig11a}\includegraphics[width=.45\textwidth]{figure11a}}
\subfigure{\label{fig11b}\includegraphics[width=.45\textwidth]{figure11b}}
\caption {The singular values $\sigma_i^{(k)}$ and $(\sigma_i^{(k)})_q$ for the matrices $\tilde{\tilde{G}}^{(k)}$ (blue circles) and $\tilde{\tilde{G}}^{(k)}_q$ (red crosses), respectively, for $q=200$ and $s=1$. (a) Gravity kernel at iteration $k=8$; (b) Magnetic kernel at iteration $k=8$.}\label{fig11}
\end{figure*}
Naturally these results raise the question as to whether it is better to apply the \texttt{Hybrid-RSVD} algorithm without power iterations and a large choice for $q$, or to use power iterations and take a smaller $q$. But the main purpose of using Algorithm~\ref{RSVDAlgorithm} is to make it feasible, in terms of both memory and computational cost, to find accurate solutions of large-scale problems. Indeed, the aim is to solve problems which are either too expensive to solve using the \texttt{Hybrid-FSVD} or cannot be solved at all using the \texttt{Hybrid-FSVD}. We discuss this further for a larger problem in Section~\ref{multiplebodies}.
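A back-of-the-envelope storage comparison for the larger model of Section~\ref{multiplebodies} illustrates the point; the arithmetic below assumes $8$-byte double precision entries and compares the reduced SVD factors with the rank-$q$ factors.
\begin{verbatim}
m, n, q = 8000, 80000, 2500
fsvd_GB = (m*m + m + n*m) * 8 / 1e9  # reduced SVD: U (m x m), s, V (n x m)
rsvd_GB = (m*q + q + n*q) * 8 / 1e9  # rank-q factors: U_q, s_q, V_q
print(f"FSVD factors: {fsvd_GB:.1f} GB, rank-{q} factors: {rsvd_GB:.1f} GB")
# FSVD factors: 5.6 GB, rank-2500 factors: 1.8 GB
\end{verbatim}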
\subsection{Model of multiple bodies}\label{multiplebodies}
We now study the application of the \texttt{Hybrid-RSVD} algorithm for the solution of a larger and more complex model consisting of six bodies with different shapes, dimensions, and depths, as shown in the perspective view in Fig.~\ref{fig12} and the six plane-sections in Fig.~\ref{fig13}. The data for the problem, the vertical component of the gravity field and the total magnetic field, are generated on the surface on a $100 \times 80$ grid with $100$~m spacing. The noisy gravity and magnetic data are illustrated in Figs.~\ref{fig14a} and \ref{fig14b}, respectively.
\begin{figure*}
\includegraphics[width=.8\textwidth]{figure12}
\caption {Model consisting of six bodies with different shapes, depths and dimensions.} \label{fig12}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig13a}\includegraphics[width=.45\textwidth]{figure13a}}
\subfigure{\label{fig13b}\includegraphics[width=.45\textwidth]{figure13b}}
\subfigure{\label{fig13c}\includegraphics[width=.45\textwidth]{figure13c}}
\subfigure{\label{fig13d}\includegraphics[width=.45\textwidth]{figure13d}}
\subfigure{\label{fig13e}\includegraphics[width=.45\textwidth]{figure13e}}
\subfigure{\label{fig13f}\includegraphics[width=.45\textwidth]{figure13f}}
\caption{The susceptibility distribution of the model in Fig.~\ref{fig12} is displayed in six plane-sections. The depths of the sections are: (a) $100$~m; (b) $200$~m; (c) $300$~m; (d) $400$~m; (e) $500$~m; and (f) $600$~m.} \label{fig13}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig14a}\includegraphics[width=.43\textwidth]{figure14a}}
\subfigure{\label{fig14b}\includegraphics[width=.45\textwidth]{figure14b}}
\caption{Anomaly produced by the model shown in Fig.~\ref{fig12} and contaminated by Gaussian noise. (a) Vertical component of the gravity field; the noise parameters are $(\tau_1=0.02, \tau_2=0.02)$. (b) Total magnetic field; here, $(\tau_1=0.02, \tau_2=0.018)$. The $\mathrm{SNR}$ values for the gravity and magnetic data are $22.0348$ and $21.8817$, respectively.} \label{fig14}
\end{figure*}
To perform the inversion, the subsurface volume is discretized into $100 \times 80 \times 10$ cubes of size $100$~m in each dimension. The resulting matrix $\tilde{\tilde{G}}$ is of size $8000 \times 80000$ and is too large for efficient use of the \texttt{Hybrid-FSVD}. The results obtained using the \texttt{Hybrid-RSVD} algorithm with the choices $q=1000$, $1500$, $2500$, and $4200$ are detailed in Tables~\ref{gravitytablarge} and \ref{magnetictablarge}, respectively, both with and without power iterations.
\begin{table}
\caption{Results of the inversion algorithms applied to the gravity data of Fig.~\ref{fig14a}.}\label{gravitytablarge}
\begin{tabular}{c c c c c c c c}
\hline
Method & $q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $RE^{(K)}$ & $K$ & $\chi^2$ & Time (s) \\ \hline
\texttt{Hybrid-RSVD} & $1000$ & $40276$ & $38.4$ & $0.7389$ & $21$ & $8108.5$ & $376.8$ \\
& $1500$ & $48847$ & $44.6$ & $0.7168$ & $20$ & $7706.2$ & $524.3$ \\
& $2500$ & $61439$ & $39.1$ & $0.7111$ & $19$ & $7893.3$ & $946.6$ \\
& $4200$ & $76008$ & $37.9$ & $0.7013$ & $20$ & $6943.7$ & $2114.0$ \\ \hline
\texttt{Hybrid-RSVD} & $1000$ & $34324$ & $38.1$ & $0.6927$ & $21$ & $7915.9$ & $617.0$ \\
with power iterations & $1500$ & $43000$ & $36.7$ & $0.6976$ & $20$ & $7985.6$ & $886.3$ \\
& $2500$ & $56392$ & $40.0$ & $0.6942$ & $20$ & $7561.5$ & $1775.4$ \\
& $4200$ & $72451$ & $38.2$ & $0.6986$ & $20$ & $7989.2$ & $4178.8$ \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{Results of the inversion algorithms applied to the magnetic data of Fig.~\ref{fig14b}.}\label{magnetictablarge}
\begin{tabular}{c c c c c c c c}
\hline
Method & $q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $RE^{(K)}$ & $K$ & $\chi^2$ & Time (s) \\ \hline
\texttt{Hybrid-RSVD} & $1000$ & $12600$ & $10774.9$ & $1.2110$ & $50$ & $26292.4$ & $887.9$ \\
& $1500$ & $14564$ & $8947.8$ & $1.1720$ & $50$ & $20789.4$ & $1312.4$ \\
& $2500$ & $17376$ & $9253.8$ & $1.1063$ & $50$ & $11891.4$ & $2520.0$ \\
& $4200$ & $20695$ & $8778.1$ & $1.0975$ & $13$ & $7750.3$ & $1435.9$ \\ \hline
\texttt{Hybrid-RSVD} & $1000$ & $10705$ & $11128.1$ & $1.1315$ & $50$ & $13480.7$ & $1459.1$ \\
with power iterations & $1500$ & $12744$ & $8992.1$ & $1.0910$ & $50$ & $9570.6$ & $2263.9$ \\
& $2500$ & $15837$ & $8425.2$ & $1.0947$ & $14$ & $7863.0$ & $1260.4$ \\
& $4200$ & $19518$ & $8665.2$ & $1.1080$ & $12$ & $7299.8$ & $2685.4$ \\ \hline
\end{tabular}
\end{table}
As with the inversion of the two-dike problem, it is immediate that the inversions for the gravity problem are acceptable without power iterations for far smaller $q$ than for the magnetic case. Applying the power iterations reduces the size of $q$ required to obtain convergence, and excellent results are obtained with just $q=1000$, in a time only slightly greater than that required for $q=1500$ without power iterations. This suggests that we may use $q\gtrsim m/8$ with $s=1$ power iterations. As for the two-dike problem, the magnetic inversion does not converge to the required $\chi^2$ level within $50$ iterations, except when we take $q=4200>m/2$; further, the relative errors are large and the computational cost is high. Including the power iterations yields convergence at $K=14$ when $q=2500$, with a smaller relative error at an acceptable computational cost. These results suggest that it may be acceptable to take $q\gtrsim m/4$ in the inversion of magnetic data with the \texttt{Hybrid-RSVD} algorithm combined with $s=1$ power iterations. This choice yields run times for the magnetic inversion that are comparable to those for the gravity inversion. The empirical lower bounds for $q$ that emerge from these experiments are collected in the sketch below.
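The helper below merely encodes these empirical rules of thumb; the function name and interface are illustrative, and the fractions are rough lower estimates rather than sharp constants.
\begin{verbatim}
import math

def suggest_q(m, kernel, power_iterations=True):
    # empirical lower estimates for the target rank q, given m data
    if kernel == "gravity":
        frac = 1/8 if power_iterations else 1/6
    elif kernel == "magnetic":
        frac = 1/4 if power_iterations else 1/2
    else:
        raise ValueError("kernel must be 'gravity' or 'magnetic'")
    return math.ceil(m * frac)

print(suggest_q(8000, "magnetic"))   # 2000, cf. q = 2500 used here
print(suggest_q(8000, "gravity"))    # 1000, as used above
\end{verbatim}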
The cross-sections for the inversions using the power iterations and $q=2500$ for the gravity and magnetic data are given in Figs.~\ref{fig15} and \ref{fig16}, respectively. The perspective views of these solutions are given in Figs.~\ref{fig17} and \ref{fig18}, respectively. We observe that the horizontal borders of the reconstructed models, in both inversions, are in good agreement
with those of the original model, but that additional structures appear at depth. Here, the reconstructed susceptibility model exhibits more artifacts at depth and has a higher relative error than the gravity reconstruction. On the other hand, the magnetic structure better illustrates the dip of both dikes, which is significant for accurate geophysical interpretation of the structures. Moreover, these results indicate that joint interpretation of the individual magnetic and gravity inversions may improve the quality of the final subsurface model.
\begin{figure*}
\subfigure{\label{fig15a}\includegraphics[width=.45\textwidth]{figure15a}}
\subfigure{\label{fig15b}\includegraphics[width=.45\textwidth]{figure15b}}
\subfigure{\label{fig15c}\includegraphics[width=.45\textwidth]{figure15c}}
\subfigure{\label{fig15d}\includegraphics[width=.45\textwidth]{figure15d}}
\subfigure{\label{fig15e}\includegraphics[width=.45\textwidth]{figure15e}}
\subfigure{\label{fig15f}\includegraphics[width=.45\textwidth]{figure15f}}
\caption{\texttt{Hybrid-RSVD} density results using power iterations with $s=1$ and target rank $q=2500$ for the inversion of the gravity data given in Fig.~\ref{fig14a}. The depths of the sections are: (a) $100$~m; (b) $200$~m; (c) $300$~m; (d) $400$~m; (e) $500$~m; and (f) $600$~m.} \label{fig15}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig16a}\includegraphics[width=.45\textwidth]{figure16a}}
\subfigure{\label{fig16b}\includegraphics[width=.45\textwidth]{figure16b}}
\subfigure{\label{fig16c}\includegraphics[width=.45\textwidth]{figure16c}}
\subfigure{\label{fig16d}\includegraphics[width=.45\textwidth]{figure16d}}
\subfigure{\label{fig16e}\includegraphics[width=.45\textwidth]{figure16e}}
\subfigure{\label{fig16f}\includegraphics[width=.45\textwidth]{figure16f}}
\caption{\texttt{Hybrid-RSVD} susceptibility results using power iterations with $s=1$ and target rank $q=2500$ for the inversion of the magnetic data given in Fig.~\ref{fig14b}. The depths of the sections are: (a) $100$~m; (b) $200$~m; (c) $300$~m; (d) $400$~m; (e) $500$~m; and (f) $600$~m.} \label{fig16}
\end{figure*}
\begin{figure*}
\includegraphics[width=.8\textwidth]{figure17}
\caption {$3$-D view of the reconstructed density model shown in Fig.~\ref{fig15}, illustrating cells with $\rho>0.5$~g~cm$^{-3}$.} \label{fig17}
\end{figure*}
\begin{figure*}
\includegraphics[width=.8\textwidth]{figure18}
\caption {$3$-D view of the reconstructed susceptibility model shown in Fig.~\ref{fig16}, illustrating cells with $\kappa>0.05$ (SI unit).} \label{fig18}
\end{figure*}
\section{Real data}\label{real}
The \texttt{Hybrid-RSVD} algorithm, with and without power iterations, is now applied for the inversion of total magnetic field data that have been obtained over a portion of the Wuskwatim
Lake region in Manitoba, Canada, Fig.~\ref{fig19a}\footnote{Results showing the application of the methodology to real gravity data were already presented in Vatankhah et al. \shortcite{VRA:2018b}.}. The given area lies within a poorly exposed meta-sedimentary gneiss belt consisting of paragneiss, amphibolite, and migmatite derived from Proterozoic volcanic and sedimentary rocks \cite{Pi:09}. The total field anomaly shows magnetic targets elongated in the NE-SW direction. A data-space inversion algorithm with a Cauchy norm sparsity constraint on the model parameters was applied to this data set by Pilkington \shortcite{Pi:09}, where the results of the inversion algorithm of Li \& Oldenburg \shortcite{LiOl:98} are also presented. The results presented here can therefore be compared with those in Pilkington \shortcite{Pi:09}; for consistency we use a grid of $64 \times 64$ data points with $100$~m spacing and a uniform subsurface discretization of $64 \times 64 \times 20 = 81920$ blocks. The intensity of the geomagnetic field, the inclination, and the declination are $60000$~nT, $78.5^{\circ}$, and $5.3^{\circ}$, respectively. As for the simulations, we set $K_{\mathrm{max}}=50$ and $\mathbf{m}_{\mathrm{apr}}=\mathbf{0}$, and impose bound constraints on the model parameters, in this case $0=\kappa_{\mathrm{min}} \le \kappa \le \kappa_{\mathrm{max}}=0.2$ (SI unit) \cite{Pi:09}. The values of the parameter $q$ are selected based on the rules presented in Section~\ref{synthetic}: we select $q=1100>m/4$ with power iterations and $q=2100>m/2$ without.
The results of the inversions, given in Table~\ref{realdatatab}, demonstrate that the methodology generates converged solutions within a limited number of iterations and at a computational cost of only a few minutes. Overall these results demonstrate the feasibility of using the \texttt{Hybrid-RSVD} algorithm for the inversion of large-scale geophysical data sets. Three plane-sections of the reconstructed model obtained using $s=1$ for the power iterations are illustrated in Fig.~\ref{fig20}. The anomaly produced by this model is shown in Fig.~\ref{fig19b}. The progression of the regularization parameter at each iteration and the UPRE function at the final iteration are presented in Fig.~\ref{fig21}. Furthermore, Fig.~\ref{fig22} illustrates a $3$-D view of the model for cells with $\kappa>0.05$. Our results indicate that, generally, there are three main subsurface targets. The target in the South-East of the area extends from $300$~m to $400$~m in depth; it is not as deep as the other two targets. The target in the central part of the area is elongated in the SW-NE direction and extends from about $300$~m to about $800$~m in depth. In its northeastern part, this target divides into two sub-parallel targets. The third main target, located in the north-central part of the area, is the deepest; it starts at about $400$~m and extends to about $900$~m. The results in the shallow and intermediate layers are in agreement with those of Pilkington \shortcite{Pi:09}, but at depth the results presented here are more focused.
\begin{figure*}
\subfigure{\label{fig19a}\includegraphics[width=.45\textwidth]{figure19a}}
\subfigure{\label{fig19b}\includegraphics[width=.45\textwidth]{figure19b}}
\caption { (a) Total magnetic field over a portion of the Wuskwatim
Lake region in Manitoba, Canada; (b) The anomaly produced by the reconstructed model shown in Fig.~\ref{fig20}.} \label{fig19}
\end{figure*}
\begin{table}
\caption{Results of the inversion algorithms applied to the magnetic data of Fig.~\ref{fig19a}.}\label{realdatatab}
\begin{tabular}{c c c c c c c}
\hline
Method & $q$ &$\alpha^{(1)}$& $\alpha^{(K)}$& $K$ & $\chi^2$ & Time (s) \\ \hline
\texttt{Hybrid-RSVD} & $2100$ & $262575$ & $3630.2$ & $15$ & $4007.1$ & $559.7$ \\ \hline
\texttt{Hybrid-RSVD} with power iterations & $1100$ & $215644$ & $2906.7$ & $16$ & $4089.7$ & $464.0$ \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\subfigure{\label{fig20a}\includegraphics[width=.32\textwidth]{figure20a}}
\subfigure{\label{fig20b}\includegraphics[width=.32\textwidth]{figure20b}}
\subfigure{\label{fig20c}\includegraphics[width=.32\textwidth]{figure20c}}
\caption {Reconstructed susceptibility model using \texttt{Hybrid-RSVD} via power iterations with $s=1$ and target rank $q=1100$ for the inversion of magnetic data given in Fig.~\ref{fig19a}. The plane-sections illustrate depths: (a) $300-400$~m; (b) $500-600$~m; (c) $700-800$~m. } \label{fig20}
\end{figure*}
\begin{figure*}
\subfigure{\label{fig21a}\includegraphics[width=.45\textwidth]{figure21a}}
\subfigure{\label{fig21b}\includegraphics[width=.45\textwidth]{figure21b}}
\caption { (a) The progression of regularization parameter $\alpha^{(k)}$ with iteration $k$; (b) The UPRE function at the final iteration. } \label{fig21}
\end{figure*}
\begin{figure*}
\includegraphics[width=.8\textwidth]{figure22}
\caption {$3$-D view of the reconstructed susceptibility model shown in Fig.~\ref{fig20}, illustrating cells with $\kappa>0.05$ (SI unit).} \label{fig22}
\end{figure*}
\section{Conclusions}\label{conclusion}
We have presented an algorithm for the fast implementation of large-scale focusing inversion of both magnetic and gravity data. The algorithm is based on a combination of the $L_1$-norm regularization strategy with the RSVD. For large-scale problems, the powerful concept of the RSVD provides an attractive and indeed fast alternative to methods such as the LSQR algorithm. Here we have presented a comprehensive comparison of the \texttt{Hybrid-RSVD} methodology, with and without power iterations, for the inversion of gravity and magnetic data. Generally, we have shown that there is an important difference between gravity and magnetic inverse problems when approximating the original matrix by a matrix of rank $q$. For the inversion of magnetic data it is necessary to take larger values of $q$, as compared with the inversion of gravity data, in order to obtain a suitable approximation of the system matrix. Furthermore, including power iterations within the algorithm improves the approximation quality; indeed, the RSVD obtained using the power iteration step yields a good approximation of the dominant singular subspace even for small choices of $q$. Thus, the RSVD with power iterations yields an efficient strategy when the singular values of the input matrix decay slowly. The presented methodology can be used for other geophysical data sets, with the choice of the rank-$q$ approximation depending on the spectral properties of the relevant kernel matrices. If the RSVD without power iterations does not approximate the dominant singular values well, then power iterations should be included to improve the quality of the estimated singular values. Our results also demonstrate that it is possible to obtain a reasonable lower estimate for $q$, for both gravity and magnetic data inversions, based on the number of data measurements, $m$. In conclusion, we have demonstrated that it is feasible to use an efficient RSVD methodology for problems that are too large to be handled using the full SVD. Application of the RSVD to the joint inversion of gravity and magnetic data is a topic for future work.
\begin{acknowledgments}
The authors would like to thank Dr. Mark Pilkington for providing the data from the Wuskwatim Lake area. This study is supported by the National Key R\&D Program (No. 2016YFC0600109) and the NSF of China (No. 41604087 and 41874122).
\end{acknowledgments}